13105 13106 13107 13108 13109 13110 13111 13112 13113 13114 13115 13116 13117 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 13132 13133 13134 13135 13136 13137 13138 13139 13140 13141 13142 13143 13144 13145 13146 13147 13148 13149 13150 13151 13152 13153 13154 13155 13156 13157 13158 13159 13160 13161 13162 13163 13164 13165 13166 13167 13168 13169 13170 13171 13172 13173 13174 13175 13176 13177 13178 13179 13180 13181 13182 13183 13184 13185 13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 13197 13198 13199 13200 13201 13202 13203 13204 13205 13206 13207 13208 13209 13210 13211 13212 13213 13214 13215 13216 13217 13218 13219 13220 13221 13222 13223 13224 13225 13226 13227 13228 13229 13230 13231 13232 13233 13234 13235 13236 13237 13238 13239 13240 13241 13242 13243 13244 13245 13246 13247 13248 13249 13250 13251 13252 13253 13254 13255 13256 13257 13258 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 13270 13271 13272 13273 13274 13275 13276 13277 13278 13279 13280 13281 13282 13283 13284 13285 13286 13287 13288 13289 13290 13291 13292 13293 13294 13295 13296 13297 13298 13299 13300 13301 13302 13303 13304 13305 13306 13307 13308 13309 13310 13311 13312 13313 13314 13315 13316 13317 13318 13319 13320 13321 13322 13323 13324 13325 13326 13327 13328 13329 13330 13331 13332 13333 13334 13335 13336 13337 13338 13339 13340 13341 13342 13343 13344 13345 13346 13347 13348 13349 13350 13351 13352 13353 13354 13355 13356 13357 13358 13359 13360 13361 13362 13363 13364 13365 13366 13367 13368 13369 13370 13371 13372 13373 13374 13375 13376 13377 13378 13379 13380 13381 13382 13383 13384 13385 13386 13387 13388 13389 13390 13391 13392 13393 13394 13395 13396 13397 13398 13399 13400 13401 13402 13403 13404 13405 13406 13407 13408 13409 13410 13411 13412 13413 13414 13415 13416 13417 13418 13419 13420 13421 13422 13423 13424 13425 13426 13427 13428 13429 13430 13431 13432 13433 13434 13435 13436 13437 13438 13439 13440 13441 13442 13443 13444 13445 13446 13447 13448 13449 13450 13451 13452 13453 13454 13455 13456 13457 13458 13459 13460 13461 13462 13463 13464 13465 13466 13467 13468 13469 13470 13471 13472 13473 13474 13475 13476 13477 13478 13479 13480 13481 13482 13483 13484 13485 13486 13487 13488 13489 13490 13491 13492 13493 13494 13495 13496 13497 13498 13499 13500 13501 13502 13503 13504 13505 13506 13507 13508 13509 13510 13511 13512 13513 13514 13515 13516 13517 13518 13519 13520 13521 13522 13523 13524 13525 13526 13527 13528 13529 13530 13531 13532 13533 13534 13535 13536 13537 13538 13539 13540 13541 13542 13543 13544 13545 13546 13547 13548 13549 13550 13551 13552 13553 13554 13555 13556 13557 13558 13559 13560 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 13581 13582 13583 13584 13585 13586 13587 13588 13589 13590 13591 13592 13593 13594 13595 13596 13597 13598 13599 13600 13601 13602 13603 13604 13605 13606 13607 13608 13609 13610 13611 13612 13613 13614 13615 13616 13617 13618 13619 13620 13621 13622 13623 13624 13625 13626 13627 13628 13629 13630 13631 13632 13633 13634 13635 13636 13637 13638 13639 13640 13641 13642 13643 13644 13645 13646 13647 13648 13649 13650 13651 13652 13653 13654 13655 13656 13657 13658 13659 13660 13661 13662 13663 13664 13665 13666 13667 13668 13669 13670 13671 13672 13673 13674 13675 13676 13677 13678 13679 13680 13681 13682 13683 13684 13685 13686 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 
13697 13698 13699 13700 13701 13702 13703 13704 13705 13706 13707 13708 13709 13710 13711 13712 13713 13714 13715 13716 13717 13718 13719 13720 13721 13722 13723 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 13740 13741 13742 13743 13744 13745 13746 13747 13748 13749 13750 13751 13752 13753 13754 13755 13756 13757 13758 13759 13760 13761 13762 13763 13764 13765 13766 13767 13768 13769 13770 13771 13772 13773 13774 13775 13776 13777 13778 13779 13780 13781 13782 13783 13784 13785 13786 13787 13788 13789 13790 13791 13792 13793 13794 13795 13796 13797 13798 13799 13800 13801 13802 13803 13804 13805 13806 13807 13808 13809 13810 13811 13812 13813 13814 13815 13816 13817 13818 13819 13820 13821 13822 13823 13824 13825 13826 13827 13828 13829 13830 13831 13832 13833 13834 13835 13836 13837 13838 13839 13840 13841 13842 13843 13844 13845 13846 13847 13848 13849 13850 13851 13852 13853 13854 13855 13856 13857 13858 13859 13860 13861 13862 13863 13864 13865 13866 13867 13868 13869 13870 13871 13872 13873 13874 13875 13876 13877 13878 13879 13880 13881 13882 13883 13884 13885 13886 13887 13888 13889 13890 13891 13892 13893 13894 13895 13896 13897 13898 13899 13900 13901 13902 13903 13904 13905 13906 13907 13908 13909 13910 13911 13912 13913 13914 13915 13916 13917 13918 13919 13920 13921 13922 13923 13924 13925 13926 13927 13928 13929 13930 13931 13932 13933 13934 13935 13936 13937 13938 13939 13940 13941 13942 13943 13944 13945 13946 13947 13948 13949 13950 13951 13952 13953 13954 13955 13956 13957 13958 13959 13960 13961 13962 13963 13964 13965 13966 13967 13968 13969 13970 13971 13972 13973 13974 13975 13976 13977 13978 13979 13980 13981 13982 13983 13984 13985 13986 13987 13988 13989 13990 13991 13992 13993 13994 13995 13996 13997 13998 13999 14000 14001 14002 14003 14004 14005 14006 14007 14008 14009 14010 14011 14012 14013 14014 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 14025 14026 14027 14028 14029 14030 14031 14032 14033 14034 14035 14036 14037 14038 14039 14040 14041 14042 14043 14044 14045 14046 14047 14048 14049 14050 14051 14052 14053 14054 14055 14056 14057 14058 14059 14060 14061 14062 14063 14064 14065 14066 14067 14068 14069 14070 14071 14072 14073 14074 14075 14076 14077 14078 14079 14080 14081 14082 14083 14084 14085 14086 14087 14088 14089 14090 14091 14092 14093 14094 14095 14096 14097 14098 14099 14100 14101 14102 14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 14117 14118 14119 14120 14121 14122 14123 14124 14125 14126 14127 14128 14129 14130 14131 14132 14133 14134 14135 14136 14137 14138 14139 14140 14141 14142 14143 14144 14145 14146 14147 14148 14149 14150 14151 14152 14153 14154 14155 14156 14157 14158 14159 14160 14161 14162 14163 14164 14165 14166 14167 14168 14169 14170 14171 14172 14173 14174 14175 14176 14177 14178 14179 14180 14181 14182 14183 14184 14185 14186 14187 14188 14189 14190 14191 14192 14193 14194 14195 14196 14197 14198 14199 14200 14201 14202 14203 14204 14205 14206 14207 14208 14209 14210 14211 14212 14213 14214 14215 14216 14217 14218 14219 14220 14221 14222 14223 14224 14225 14226 14227 14228 14229 14230 14231 14232 14233 14234 14235 14236 14237 14238 14239 14240 14241 14242 14243 14244 14245 14246 14247 14248 14249 14250 14251 14252 14253 14254 14255 14256 14257 14258 14259 14260 14261 14262 14263 14264 14265 14266 14267 14268 14269 14270 14271 14272 14273 14274 14275 14276 14277 14278 14279 14280 14281 14282 14283 14284 14285 14286 14287 14288 
14289 14290 14291 14292 14293 14294 14295 14296 14297 14298 14299 14300 14301 14302 14303 14304 14305 14306 14307 14308 14309 14310 14311 14312 14313 14314 14315 14316 14317 14318 14319 14320 14321 14322 14323 14324 14325 14326 14327 14328 14329 14330 14331 14332 14333 14334 14335 14336 14337 14338 14339 14340 14341 14342 14343 14344 14345 14346 14347 14348 14349 14350 14351 14352 14353 14354 14355 14356 14357 14358 14359 14360 14361 14362 14363 14364 14365 14366 14367 14368 14369 14370 14371 14372 14373 14374 14375 14376 14377 14378 14379 14380 14381 14382 14383 14384 14385 14386 14387 14388 14389 14390 14391 14392 14393 14394 14395 14396 14397 14398 14399 14400 14401 14402 14403 14404 14405 14406 14407 14408 14409 14410 14411 14412 14413 14414 14415 14416 14417 14418 14419 14420 14421 14422 14423 14424 14425 14426 14427 14428 14429 14430 14431 14432 14433 14434 14435 14436 14437 14438 14439 14440 14441 14442 14443 14444 14445 14446 14447 14448 14449 14450 14451 14452 14453 14454 14455 14456 14457 14458 14459 14460 14461 14462 14463 14464 14465 14466 14467 14468 14469 14470 14471 14472 14473 14474 14475 14476 14477 14478 14479 14480 14481 14482 14483 14484 14485 14486 14487 14488 14489 14490 14491 14492 14493 14494 14495 14496 14497 14498 14499 14500 14501 14502 14503 14504 14505 14506 14507 14508 14509 14510 14511 14512 14513 14514 14515 14516 14517 14518 14519 14520 14521 14522 14523 14524 14525 14526 14527 14528 14529 14530 14531 14532 14533 14534 14535 14536 14537 14538 14539 14540 14541 14542 14543 14544 14545 14546 14547 14548 14549 14550 14551 14552 14553 14554 14555 14556 14557 14558 14559 14560 14561 14562 14563 14564 14565 14566 14567 14568 14569 14570 14571 14572 14573 14574 14575 14576 14577 14578 14579 14580 14581 14582 14583 14584 14585 14586 14587 14588 14589 14590 14591 14592 14593 14594 14595 14596 14597 14598 14599 14600 14601 14602 14603 14604 14605 14606 14607 14608 14609 14610 14611 14612 14613 14614 14615 14616 14617 14618 14619 14620 14621 14622 14623 14624 14625 14626 14627 14628 14629 14630 14631 14632 14633 14634 14635 14636 14637 14638 14639 14640 14641 14642 14643 14644 14645 14646 14647 14648 14649 14650 14651 14652 14653 14654 14655 14656 14657 14658 14659 14660 14661 14662 14663 14664 14665 14666 14667 14668 14669 14670 14671 14672 14673 14674 14675 14676 14677 14678 14679 14680 14681 14682 14683 14684 14685 14686 14687 14688 14689 14690 14691 14692 14693 14694 14695 14696 14697 14698 14699 14700 14701 14702 14703 14704 14705 14706 14707 14708 14709 14710 14711 14712 14713 14714 14715 14716 14717 14718 14719 14720 14721 14722 14723 14724 14725 14726 14727 14728 14729 14730 14731 14732 14733 14734 14735 14736 14737 14738 14739 14740 14741 14742 14743 14744 14745 14746 14747 14748 14749 14750 14751 14752 14753 14754 14755 14756 14757 14758 14759 14760 14761 14762 14763 14764 14765 14766 14767 14768 14769 14770 14771 14772 14773 14774 14775 14776 14777 14778 14779 14780 14781 14782 14783 14784 14785 14786 14787 14788 14789 14790 14791 14792 14793 14794 14795 14796 14797 14798 14799 14800 14801 14802 14803 14804 14805 14806 14807 14808 14809 14810 14811 14812 14813 14814 14815 14816 14817 14818 14819 14820 14821 14822 14823 14824 14825 14826 14827 14828 14829 14830 14831 14832 14833 14834 14835 14836 14837 14838 14839 14840 14841 14842 14843 14844 14845 14846 14847 14848 14849 14850 14851 14852 14853 14854 14855 14856 14857 14858 14859 14860 14861 14862 14863 14864 14865 14866 14867 14868 14869 14870 14871 14872 14873 14874 14875 14876 14877 14878 14879 14880 
14881 14882 14883 14884 14885 14886 14887 14888 14889 14890 14891 14892 14893 14894 14895 14896 14897 14898 14899 14900 14901 14902 14903 14904 14905 14906 14907 14908 14909 14910 14911 14912 14913 14914 14915 14916 14917 14918 14919 14920 14921 14922 14923 14924 14925 14926 14927 14928 14929 14930 14931 14932 14933 14934 14935 14936 14937 14938 14939 14940 14941 14942 14943 14944 14945 14946 14947 14948 14949 14950 14951 14952 14953 14954 14955 14956 14957 14958 14959 14960 14961 14962 14963 14964 14965 14966 14967 14968 14969 14970 14971 14972 14973 14974 14975 14976 14977 14978 14979 14980 14981 14982 14983 14984 14985 14986 14987 14988 14989 14990 14991 14992 14993 14994 14995 14996 14997 14998 14999 15000 15001 15002 15003 15004 15005 15006 15007 15008 15009 15010 15011 15012 15013 15014 15015 15016 15017 15018 15019 15020 15021 15022 15023 15024 15025 15026 15027 15028 15029 15030 15031 15032 15033 15034 15035 15036 15037 15038 15039 15040 15041 15042 15043 15044 15045 15046 15047 15048 15049 15050 15051 15052 15053 15054 15055 15056 15057 15058 15059 15060 15061 15062 15063 15064 15065 15066 15067 15068 15069 15070 15071 15072 15073 15074 15075 15076 15077 15078 15079 15080 15081 15082 15083 15084 15085 15086 15087 15088 15089 15090 15091 15092 15093 15094 15095 15096 15097 15098 15099 15100 15101 15102 15103 15104 15105 15106 15107 15108 15109 15110 15111 15112 15113 15114 15115 15116 15117 15118 15119 15120 15121 15122 15123 15124 15125 15126 15127 15128 15129 15130 15131 15132 15133 15134 15135 15136 15137 15138 15139 15140 15141 15142 15143 15144 15145 15146 15147 15148 15149 15150 15151 15152 15153 15154 15155 15156 15157 15158 15159 15160 15161 15162 15163 15164 15165 15166 15167 15168 15169 15170 15171 15172 15173 15174 15175 15176 15177 15178 15179 15180 15181 15182 15183 15184 15185 15186 15187 15188 15189 15190 15191 15192 15193 15194 15195 15196 15197 15198 15199 15200 15201 15202 15203 15204 15205 15206 15207 15208 15209 15210 15211 15212 15213 15214 15215 15216 15217 15218 15219 15220 15221 15222 15223 15224 15225 15226 15227 15228 15229 15230 15231 15232 15233 15234 15235 15236 15237 15238 15239 15240 15241 15242 15243 15244 15245 15246 15247 15248 15249 15250 15251 15252 15253 15254 15255 15256 15257 15258 15259 15260 15261 15262 15263 15264 15265 15266 15267 15268 15269 15270 15271 15272 15273 15274 15275 15276 15277 15278 15279 15280 15281 15282 15283 15284 15285 15286 15287 15288 15289 15290 15291 15292 15293 15294 15295 15296 15297 15298 15299 15300 15301 15302 15303 15304 15305 15306 15307 15308 15309 15310 15311 15312 15313 15314 15315 15316 15317 15318 15319 15320 15321 15322 15323 15324 15325 15326 15327 15328 15329 15330 15331 15332 15333 15334 15335 15336 15337 15338 15339 15340 15341 15342 15343 15344 15345 15346 15347 15348 15349 15350 15351 15352 15353 15354 15355 15356 15357 15358 15359 15360 15361 15362 15363 15364 15365 15366 15367 15368 15369 15370 15371 15372 15373 15374 15375 15376 15377 15378 15379 15380 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397 15398 15399 15400 15401 15402 15403 15404 15405 15406 15407 15408 15409 15410 15411 15412 15413 15414 15415 15416 15417 15418 15419 15420 15421 15422 15423 15424 15425 15426 15427 15428 15429 15430 15431 15432 15433 15434 15435 15436 15437 15438 15439 15440 15441 15442 15443 15444 15445 15446 15447 15448 15449 15450 15451 15452 15453 15454 15455 15456 15457 15458 15459 15460 15461 15462 15463 15464 15465 15466 15467 15468 15469 15470 15471 15472 
15473 15474 15475 15476 15477 15478 15479 15480 15481 15482 15483 15484 15485 15486 15487 15488 15489 15490 15491 15492 15493 15494 15495 15496 15497 15498 15499 15500 15501 15502 15503 15504 15505 15506 15507 15508 15509 15510 15511 15512 15513 15514 15515 15516 15517 15518 15519 15520 15521 15522 15523 15524 15525 15526 15527 15528 15529 15530 15531 15532 15533 15534 15535 15536 15537 15538 15539 15540 15541 15542 15543 15544 15545 15546 15547 15548 15549 15550 15551 15552 15553 15554 15555 15556 15557 15558 15559 15560 15561 15562 15563 15564 15565 15566 15567 15568 15569 15570 15571 15572 15573 15574 15575 15576 15577 15578 15579 15580 15581 15582 15583 15584 15585 15586 15587 15588 15589 15590 15591 15592 15593 15594 15595 15596 15597 15598 15599 15600 15601 15602 15603 15604 15605 15606 15607 15608 15609 15610 15611 15612 15613 15614 15615 15616 15617 15618 15619 15620 15621 15622 15623 15624 15625 15626 15627 15628 15629 15630 15631 15632 15633 15634 15635 15636 15637 15638 15639 15640 15641 15642 15643 15644 15645 15646 15647 15648 15649 15650 15651 15652 15653 15654 15655 15656 15657 15658 15659 15660 15661 15662 15663 15664 15665 15666 15667 15668 15669 15670 15671 15672 15673 15674 15675 15676 15677 15678 15679 15680 15681 15682 15683 15684 15685 15686 15687 15688 15689 15690 15691 15692 15693 15694 15695 15696 15697 15698 15699 15700 15701 15702 15703 15704 15705 15706 15707 15708 15709 15710 15711 15712 15713 15714 15715 15716 15717 15718 15719 15720 15721 15722 15723 15724 15725 15726 15727 15728 15729 15730 15731 15732 15733 15734 15735 15736 15737 15738 15739 15740 15741 15742 15743 15744 15745 15746 15747 15748 15749 15750 15751 15752 15753 15754 15755 15756 15757 15758 15759 15760 15761 15762 15763 15764 15765 15766 15767 15768 15769 15770 15771 15772 15773 15774 15775 15776 15777 15778 15779 15780 15781 15782 15783 15784 15785 15786 15787 15788 15789 15790 15791 15792 15793 15794 15795 15796 15797 15798 15799 15800 15801 15802 15803 15804 15805 15806 15807 15808 15809 15810 15811 15812 15813 15814 15815 15816 15817 15818 15819 15820 15821 15822 15823 15824 15825 15826 15827 15828 15829 15830 15831 15832 15833 15834 15835 15836 15837 15838 15839 15840 15841 15842 15843 15844 15845 15846 15847 15848 15849 15850 15851 15852 15853 15854 15855 15856 15857 15858 15859 15860 15861 15862 15863 15864 15865 15866 15867 15868 15869 15870 15871 15872 15873 15874 15875 15876 15877 15878 15879 15880 15881 15882 15883 15884 15885 15886 15887 15888 15889 15890 15891 15892 15893 15894 15895 15896 15897 15898 15899 15900 15901 15902 15903 15904 15905 15906 15907 15908 15909 15910 15911 15912 15913 15914 15915 15916 15917 15918 15919 15920 15921 15922 15923 15924 15925 15926 15927 15928 15929 15930 15931 15932 15933 15934 15935 15936 15937 15938 15939 15940 15941 15942 15943 15944 15945 15946 15947 15948 15949 15950 15951 15952 15953 15954 15955 15956 15957 15958 15959 15960 15961 15962 15963 15964 15965 15966 15967 15968 15969 15970 15971 15972 15973 15974 15975 15976 15977 15978 15979 15980 15981 15982 15983 15984 15985 15986 15987 15988 15989 15990 15991 15992 15993 15994 15995 15996 15997 15998 15999 16000 16001 16002 16003 16004 16005 16006 16007 16008 16009 16010 16011 16012 16013 16014 16015 16016 16017 16018 16019 16020 16021 16022 16023 16024 16025 16026 16027 16028 16029 16030 16031 16032 16033 16034 16035 16036 16037 16038 16039 16040 16041 16042 16043 16044 16045 16046 16047 16048 16049 16050 16051 16052 16053 16054 16055 16056 16057 16058 16059 16060 16061 16062 16063 16064 
16065 16066 16067 16068 16069 16070 16071 16072 16073 16074 16075 16076 16077 16078 16079 16080 16081 16082 16083 16084 16085 16086 16087 16088 16089 16090 16091 16092 16093 16094 16095 16096 16097 16098 16099 16100 16101 16102 16103 16104 16105 16106 16107 16108 16109 16110 16111 16112 16113 16114 16115 16116 16117 16118 16119 16120 16121 16122 16123 16124 16125 16126 16127 16128 16129 16130 16131 16132 16133 16134 16135 16136 16137 16138 16139 16140 16141 16142 16143 16144 16145 16146 16147 16148 16149 16150 16151 16152 16153 16154 16155 16156 16157 16158 16159 16160 16161 16162 16163 16164 16165 16166 16167 16168 16169 16170 16171 16172 16173 16174 16175 16176 16177 16178 16179 16180 16181 16182 16183 16184 16185 16186 16187 16188 16189 16190 16191 16192 16193 16194 16195 16196 16197 16198 16199 16200 16201 16202 16203 16204 16205 16206 16207 16208 16209 16210 16211 16212 16213 16214 16215 16216 16217 16218 16219 16220 16221 16222 16223 16224 16225 16226 16227 16228 16229 16230 16231 16232 16233 16234 16235 16236 16237 16238 16239 16240 16241 16242 16243 16244 16245 16246 16247 16248 16249 16250 16251 16252 16253 16254 16255 16256 16257 16258 16259 16260 16261 16262 16263 16264 16265 16266 16267 16268 16269 16270 16271 16272 16273 16274 16275 16276 16277 16278 16279 16280 16281 16282 16283 16284 16285 16286 16287 16288 16289 16290 16291 16292 16293 16294 16295 16296 16297 16298 16299 16300 16301 16302 16303 16304 16305 16306 16307 16308 16309 16310 16311 16312 16313 16314 16315 16316 16317 16318 16319 16320 16321 16322 16323 16324 16325 16326 16327 16328 16329 16330 16331 16332 16333 16334 16335 16336 16337 16338 16339 16340 16341 16342 16343 16344 16345 16346 16347 16348 16349 16350 16351 16352 16353 16354 16355 16356 16357 16358 16359 16360 16361 16362 16363 16364 16365 16366 16367 16368 16369 16370 16371 16372 16373 16374 16375 16376 16377 16378 16379 16380 16381 16382 16383 16384 16385 16386 16387 16388 16389 16390 16391 16392 16393 16394 16395 16396 16397 16398 16399 16400 16401 16402 16403 16404 16405 16406 16407 16408 16409 16410 16411 16412 16413 16414 16415 16416 16417 16418 16419 16420 16421 16422 16423 16424 16425 16426 16427 16428 16429 16430 16431 16432 16433 16434 16435 16436 16437 16438 16439 16440 16441 16442 16443 16444 16445 16446 16447 16448 16449 16450 16451 16452 16453 16454 16455 16456 16457 16458 16459 16460 16461 16462 16463 16464 16465 16466 16467 16468 16469 16470 16471 16472 16473 16474 16475 16476 16477 16478 16479 16480 16481 16482 16483 16484 16485 16486 16487 16488 16489 16490 16491 16492 16493 16494 16495 16496 16497 16498 16499 16500 16501 16502 16503 16504 16505 16506 16507 16508 16509 16510 16511 16512 16513 16514 16515 16516 16517 16518 16519 16520 16521 16522 16523 16524 16525 16526 16527 16528 16529 16530 16531 16532 16533 16534 16535 16536 16537 16538 16539 16540 16541 16542 16543 16544 16545 16546 16547 16548 16549 16550 16551 16552 16553 16554 16555 16556 16557 16558 16559 16560 16561 16562 16563 16564 16565 16566 16567 16568 16569 16570 16571 16572 16573 16574 16575 16576 16577 16578 16579 16580 16581 16582 16583 16584 16585 16586 16587 16588 16589 16590 16591 16592 16593 16594 16595 16596 16597 16598 16599 16600 16601 16602 16603 16604 16605 16606 16607 16608 16609 16610 16611 16612 16613 16614 16615 16616 16617 16618 16619 16620 16621 16622 16623 16624 16625 16626 16627 16628 16629 16630 16631 16632 16633 16634 16635 16636 16637 16638 16639 16640 16641 16642 16643 16644 16645 16646 16647 16648 16649 16650 16651 16652 16653 16654 16655 16656 
16657 16658 16659 16660 16661 16662 16663 16664 16665 16666 16667 16668 16669 16670 16671 16672 16673 16674 16675 16676 16677 16678 16679 16680 16681 16682 16683 16684 16685 16686 16687 16688 16689 16690 16691 16692 16693 16694 16695 16696 16697 16698 16699 16700 16701 16702 16703 16704 16705 16706 16707 16708 16709 16710 16711 16712 16713 16714 16715 16716 16717 16718 16719 16720 16721 16722 16723 16724 16725 16726 16727 16728 16729 16730 16731 16732 16733 16734 16735 16736 16737 16738 16739 16740 16741 16742 16743 16744 16745 16746 16747 16748 16749 16750 16751 16752 16753 16754 16755 16756 16757 16758 16759 16760 16761 16762 16763 16764 16765 16766 16767 16768 16769 16770 16771 16772 16773 16774 16775 16776 16777 16778 16779 16780 16781 16782 16783 16784 16785 16786 16787 16788 16789 16790 16791 16792 16793 16794 16795 16796 16797 16798 16799 16800 16801 16802 16803 16804 16805 16806 16807 16808 16809 16810 16811 16812 16813 16814 16815 16816 16817 16818 16819 16820 16821 16822 16823 16824 16825 16826 16827 16828 16829 16830 16831 16832 16833 16834 16835 16836 16837 16838 16839 16840 16841 16842 16843 16844 16845 16846 16847 16848 16849 16850 16851 16852 16853 16854 16855 16856 16857 16858 16859 16860 16861 16862 16863 16864 16865 16866 16867 16868 16869 16870 16871 16872 16873 16874 16875 16876 16877 16878 16879 16880 16881 16882 16883 16884 16885 16886 16887 16888 16889 16890 16891 16892 16893 16894 16895 16896 16897 16898 16899 16900 16901 16902 16903 16904 16905 16906 16907 16908 16909 16910 16911 16912 16913 16914 16915 16916 16917 16918 16919 16920 16921 16922 16923 16924 16925 16926 16927 16928 16929 16930 16931 16932 16933 16934 16935 16936 16937 16938 16939 16940 16941 16942 16943 16944 16945 16946 16947 16948 16949 16950 16951 16952 16953 16954 16955 16956 16957 16958 16959 16960 16961 16962 16963 16964 16965 16966 16967 16968 16969 16970 16971 16972 16973 16974 16975 16976 16977 16978 16979 16980 16981 16982 16983 16984 16985 16986 16987 16988 16989 16990 16991 16992 16993 16994 16995 16996 16997 16998 16999 17000 17001 17002 17003 17004 17005 17006 17007 17008 17009 17010 17011 17012 17013 17014 17015 17016 17017 17018 17019 17020 17021 17022 17023 17024 17025 17026 17027 17028 17029 17030 17031 17032 17033 17034 17035 17036 17037 17038 17039 17040 17041 17042 17043 17044 17045 17046 17047 17048 17049 17050 17051 17052 17053 17054 17055 17056 17057 17058 17059 17060 17061 17062 17063 17064 17065 17066 17067 17068 17069 17070 17071 17072 17073 17074 17075 17076 17077 17078 17079 17080 17081 17082 17083 17084 17085 17086 17087 17088 17089 17090 17091 17092 17093 17094 17095 17096 17097 17098 17099 17100 17101 17102 17103 17104 17105 17106 17107 17108 17109 17110 17111 17112 17113 17114 17115 17116 17117 17118 17119 17120 17121 17122 17123 17124 17125 17126 17127 17128 17129 17130 17131 17132 17133 17134 17135 17136 17137 17138 17139 17140 17141 17142 17143 17144 17145 17146 17147 17148 17149 17150 17151 17152 17153 17154 17155 17156 17157 17158 17159 17160 17161 17162 17163 17164 17165 17166 17167 17168 17169 17170 17171 17172 17173 17174 17175 17176 17177 17178 17179 17180 17181 17182 17183 17184 17185 17186 17187 17188 17189 17190 17191 17192 17193 17194 17195 17196 17197 17198 17199 17200 17201 17202 17203 17204 17205 17206 17207 17208 17209 17210 17211 17212 17213 17214 17215 17216 17217 17218 17219 17220 17221 17222 17223 17224 17225 17226 17227 17228 17229 17230 17231 17232 17233 17234 17235 17236 17237 17238 17239 17240 17241 17242 17243 17244 17245 17246 17247 17248 
17249 17250 17251 17252 17253 17254 17255 17256 17257 17258 17259 17260 17261 17262 17263 17264 17265 17266 17267 17268 17269 17270 17271 17272 17273 17274 17275 17276 17277 17278 17279 17280 17281 17282 17283 17284 17285 17286 17287 17288 17289 17290 17291 17292 17293 17294 17295 17296 17297 17298 17299 17300 17301 17302 17303 17304 17305 17306 17307 17308 17309 17310 17311 17312 17313 17314 17315 17316 17317 17318 17319 17320 17321 17322 17323 17324 17325 17326 17327 17328 17329 17330 17331 17332 17333 17334 17335 17336 17337 17338 17339 17340 17341 17342 17343 17344 17345 17346 17347 17348 17349 17350 17351 17352 17353 17354 17355 17356 17357 17358 17359 17360 17361 17362 17363 17364 17365 17366 17367 17368 17369 17370 17371 17372 17373 17374 17375 17376 17377 17378 17379 17380 17381 17382 17383 17384 17385 17386 17387 17388 17389 17390 17391 17392 17393 17394 17395 17396 17397 17398 17399 17400 17401 17402 17403 17404 17405 17406 17407 17408 17409 17410 17411 17412 17413 17414 17415 17416 17417 17418 17419 17420 17421 17422 17423 17424 17425 17426 17427 17428 17429 17430 17431 17432 17433 17434 17435 17436 17437 17438 17439 17440 17441 17442 17443 17444 17445 17446 17447 17448 17449 17450 17451 17452 17453 17454 17455 17456 17457 17458 17459 17460 17461 17462 17463 17464 17465 17466 17467 17468 17469 17470 17471 17472 17473 17474 17475 17476 17477 17478 17479 17480 17481 17482 17483 17484 17485 17486 17487 17488 17489 17490 17491 17492 17493 17494 17495 17496 17497 17498 17499 17500 17501 17502 17503 17504 17505 17506 17507 17508 17509 17510 17511 17512 17513 17514 17515 17516 17517 17518 17519 17520 17521 17522 17523 17524 17525 17526 17527 17528 17529 17530 17531 17532 17533 17534 17535 17536 17537 17538 17539 17540 17541 17542 17543 17544 17545 17546 17547 17548 17549 17550 17551 17552 17553 17554 17555 17556 17557 17558 17559 17560 17561 17562 17563 17564 17565 17566 17567 17568 17569 17570 17571 17572 17573 17574 17575 17576 17577 17578 17579 17580 17581 17582 17583 17584 17585 17586 17587 17588 17589 17590 17591 17592 17593 17594 17595 17596 17597 17598 17599 17600 17601 17602 17603 17604 17605 17606 17607 17608 17609 17610 17611 17612 17613 17614 17615 17616 17617 17618 17619 17620 17621 17622 17623 17624 17625 17626 17627 17628 17629 17630 17631 17632 17633 17634 17635 17636 17637 17638 17639 17640 17641 17642 17643 17644 17645 17646 17647 17648 17649 17650 17651 17652 17653 17654 17655 17656 17657 17658 17659 17660 17661 17662 17663 17664 17665 17666 17667 17668 17669 17670 17671 17672 17673 17674 17675 17676 17677 17678 17679 17680 17681 17682 17683 17684 17685 17686 17687 17688 17689 17690 17691 17692 17693 17694 17695 17696 17697 17698 17699 17700 17701 17702 17703 17704 17705 17706 17707 17708 17709 17710 17711 17712 17713 17714 17715 17716 17717 17718 17719 17720 17721 17722 17723 17724 17725 17726 17727 17728 17729 17730 17731 17732 17733 17734 17735 17736 17737 17738 17739 17740 17741 17742 17743 17744 17745 17746 17747 17748 17749 17750 17751 17752 17753 17754 17755 17756 17757 17758 17759 17760 17761 17762 17763 17764 17765 17766 17767 17768 17769 17770 17771 17772 17773 17774 17775 17776 17777 17778 17779 17780 17781 17782 17783 17784 17785 17786 17787 17788 17789 17790 17791 17792 17793 17794 17795 17796 17797 17798 17799 17800 17801 17802 17803 17804 17805 17806 17807 17808 17809 17810 17811 17812 17813 17814 17815 17816 17817 17818 17819 17820 17821 17822 17823 17824 17825 17826 17827 17828 17829 17830 17831 17832 17833 17834 17835 17836 17837 17838 17839 17840 
17841 17842 17843 17844 17845 17846 17847 17848 17849 17850 17851 17852 17853 17854 17855 17856 17857 17858 17859 17860 17861 17862 17863 17864 17865 17866 17867 17868 17869 17870 17871 17872 17873 17874 17875 17876 17877 17878 17879 17880 17881 17882 17883 17884 17885 17886 17887 17888 17889 17890 17891 17892 17893 17894 17895 17896 17897 17898 17899 17900 17901 17902 17903 17904 17905 17906 17907 17908 17909 17910 17911 17912 17913 17914 17915 17916 17917 17918 17919 17920 17921 17922 17923 17924 17925 17926 17927 17928 17929 17930 17931 17932 17933 17934 17935 17936 17937 17938 17939 17940 17941 17942 17943 17944 17945 17946 17947 17948 17949 17950 17951 17952 17953 17954 17955 17956 17957 17958 17959 17960 17961 17962 17963 17964 17965 17966 17967 17968 17969 17970 17971 17972 17973 17974 17975 17976 17977 17978 17979 17980 17981 17982 17983 17984 17985 17986 17987 17988 17989 17990 17991 17992 17993 17994 17995 17996 17997 17998 17999 18000 18001 18002 18003 18004 18005 18006 18007 18008 18009 18010 18011 18012 18013 18014 18015 18016 18017 18018 18019 18020 18021 18022 18023 18024 18025 18026 18027 18028 18029 18030 18031 18032 18033 18034 18035 18036 18037 18038 18039 18040 18041 18042 18043 18044 18045 18046 18047 18048 18049 18050 18051 18052 18053 18054 18055 18056 18057 18058 18059 18060 18061 18062 18063 18064 18065 18066 18067 18068 18069 18070 18071 18072 18073 18074 18075 18076 18077 18078 18079 18080 18081 18082 18083 18084 18085 18086 18087 18088 18089 18090 18091 18092 18093 18094 18095 18096 18097 18098 18099 18100 18101 18102 18103 18104 18105 18106 18107 18108 18109 18110 18111 18112 18113 18114 18115 18116 18117 18118 18119 18120 18121 18122 18123 18124 18125 18126 18127 18128 18129 18130 18131 18132 18133 18134 18135 18136 18137 18138 18139 18140 18141 18142 18143 18144 18145 18146 18147 18148 18149 18150 18151 18152 18153 18154 18155 18156 18157 18158 18159 18160 18161 18162 18163 18164 18165 18166 18167 18168 18169 18170 18171 18172 18173 18174 18175 18176 18177 18178 18179 18180 18181 18182 18183 18184 18185 18186 18187 18188 18189 18190 18191 18192 18193 18194 18195 18196 18197 18198 18199 18200 18201 18202 18203 18204 18205 18206 18207 18208 18209 18210 18211 18212 18213 18214 18215 18216 18217 18218 18219 18220 18221 18222 18223 18224 18225 18226 18227 18228 18229 18230 18231 18232 18233 18234 18235 18236 18237 18238 18239 18240 18241 18242 18243 18244 18245 18246 18247 18248 18249 18250 18251 18252 18253 18254 18255 18256 18257 18258 18259 18260 18261 18262 18263 18264 18265 18266 18267 18268 18269 18270 18271 18272 18273 18274 18275 18276 18277 18278 18279 18280 18281 18282 18283 18284 18285 18286 18287 18288 18289 18290 18291 18292 18293 18294 18295 18296 18297 18298 18299 18300 18301 18302 18303 18304 18305 18306 18307 18308 18309 18310 18311 18312 18313 18314 18315 18316 18317 18318 18319 18320 18321 18322 18323 18324 18325 18326 18327 18328 18329 18330 18331 18332 18333 18334 18335 18336 18337 18338 18339 18340 18341 18342 18343 18344 18345 18346 18347 18348 18349 18350 18351 18352 18353 18354 18355 18356 18357 18358 18359 18360 18361 18362 18363 18364 18365 18366 18367 18368 18369 18370 18371 18372 18373 18374 18375 18376 18377 18378 18379 18380 18381 18382 18383 18384 18385 18386 18387 18388 18389 18390 18391 18392 18393 18394 18395 18396 18397 18398 18399 18400 18401 18402 18403 18404 18405 18406 18407 18408 18409 18410 18411 18412 18413 18414 18415 18416 18417 18418 18419 18420 18421 18422 18423 18424 18425 18426 18427 18428 18429 18430 18431 18432 
18433 18434 18435 18436 18437 18438 18439 18440 18441 18442 18443 18444 18445 18446 18447 18448 18449 18450 18451 18452 18453 18454 18455 18456 18457 18458 18459 18460 18461 18462 18463 18464 18465 18466 18467 18468 18469 18470 18471 18472 18473 18474 18475 18476 18477 18478 18479 18480 18481 18482 18483 18484 18485 18486 18487 18488 18489 18490 18491 18492 18493 18494 18495 18496 18497 18498 18499 18500 18501 18502 18503 18504 18505 18506 18507 18508 18509 18510 18511 18512 18513 18514 18515 18516 18517 18518 18519 18520 18521 18522 18523 18524 18525 18526 18527 18528 18529 18530 18531 18532 18533 18534 18535 18536 18537 18538 18539 18540 18541 18542 18543 18544 18545 18546 18547 18548 18549 18550 18551 18552 18553 18554 18555 18556 18557 18558 18559 18560 18561 18562 18563 18564 18565 18566 18567 18568 18569 18570 18571 18572 18573 18574 18575 18576 18577 18578 18579 18580 18581 18582 18583 18584 18585 18586 18587 18588 18589 18590 18591 18592 18593 18594 18595 18596 18597 18598 18599 18600 18601 18602 18603 18604 18605 18606 18607 18608 18609 18610 18611 18612 18613 18614 18615 18616 18617 18618 18619 18620 18621 18622 18623 18624 18625 18626 18627 18628 18629 18630 18631 18632 18633 18634 18635 18636 18637 18638 18639 18640 18641 18642 18643 18644 18645 18646 18647 18648 18649 18650 18651 18652 18653 18654 18655 18656 18657 18658 18659 18660 18661 18662 18663 18664 18665 18666 18667 18668 18669 18670 18671 18672 18673 18674 18675 18676 18677 18678 18679 18680 18681 18682 18683 18684 18685 18686 18687 18688 18689 18690 18691 18692 18693 18694 18695 18696 18697 18698 18699 18700 18701 18702 18703 18704 18705 18706 18707 18708 18709 18710 18711 18712 18713 18714 18715 18716 18717 18718 18719 18720 18721 18722 18723 18724 18725 18726 18727 18728 18729 18730 18731 18732 18733 18734 18735 18736 18737 18738 18739 18740 18741 18742 18743 18744 18745 18746 18747 18748 18749 18750 18751 18752 18753 18754 18755 18756 18757 18758 18759 18760 18761 18762 18763 18764 18765 18766 18767 18768 18769 18770 18771 18772 18773 18774 18775 18776 18777 18778 18779 18780 18781 18782 18783 18784 18785 18786 18787 18788 18789 18790 18791 18792 18793 18794 18795 18796 18797 18798 18799 18800 18801 18802 18803 18804 18805 18806 18807 18808 18809 18810 18811 18812 18813 18814 18815 18816 18817 18818 18819 18820 18821 18822 18823 18824 18825 18826 18827 18828 18829 18830 18831 18832 18833 18834 18835 18836 18837 18838 18839 18840 18841 18842 18843 18844 18845 18846 18847 18848 18849 18850 18851 18852 18853 18854 18855 18856 18857 18858 18859 18860 18861 18862 18863 18864 18865 18866 18867 18868 18869 18870 18871 18872 18873 18874 18875 18876 18877 18878 18879 18880 18881 18882 18883 18884 18885 18886 18887 18888 18889 18890 18891 18892 18893 18894 18895 18896 18897 18898 18899 18900 18901 18902 18903 18904 18905 18906 18907 18908 18909 18910 18911 18912 18913 18914 18915 18916 18917 18918 18919 18920 18921 18922 18923 18924 18925 18926 18927 18928 18929 18930 18931 18932 18933 18934 18935 18936 18937 18938 18939 18940 18941 18942 18943 18944 18945 18946 18947 18948 18949 18950 18951 18952 18953 18954 18955 18956 18957 18958 18959 18960 18961 18962 18963 18964 18965 18966 18967 18968 18969 18970 18971 18972 18973 18974 18975 18976 18977 18978 18979 18980 18981 18982 18983 18984 18985 18986 18987 18988 18989 18990 18991 18992 18993 18994 18995 18996 18997 18998 18999 19000 19001 19002 19003 19004 19005 19006 19007 19008 19009 19010 19011 19012 19013 19014 19015 19016 19017 19018 19019 19020 19021 19022 19023 19024 
19025 19026 19027 19028 19029 19030 19031 19032 19033 19034 19035 19036 19037 19038 19039 19040 19041 19042 19043 19044 19045 19046 19047 19048 19049 19050 19051 19052 19053 19054 19055 19056 19057 19058 19059 19060 19061 19062 19063 19064 19065 19066 19067 19068 19069 19070 19071 19072 19073 19074 19075 19076 19077 19078 19079 19080 19081 19082 19083 19084 19085 19086 19087 19088 19089 19090 19091 19092 19093 19094 19095 19096 19097 19098 19099 19100 19101 19102 19103 19104 19105 19106 19107 19108 19109 19110 19111 19112 19113 19114 19115 19116 19117 19118 19119 19120 19121 19122 19123 19124 19125 19126 19127 19128 19129 19130 19131 19132 19133 19134 19135 19136 19137 19138 19139 19140 19141 19142 19143 19144 19145 19146 19147 19148 19149 19150 19151 19152 19153 19154 19155 19156 19157 19158 19159 19160 19161 19162 19163 19164 19165 19166 19167 19168 19169 19170 19171 19172 19173 19174 19175 19176 19177 19178 19179 19180 19181 19182 19183 19184 19185 19186 19187 19188 19189 19190 19191 19192 19193 19194 19195 19196 19197 19198 19199 19200 19201 19202 19203 19204 19205 19206 19207 19208 19209 19210 19211 19212 19213 19214 19215 19216 19217 19218 19219 19220 19221 19222 19223 19224 19225 19226 19227 19228 19229 19230 19231 19232 19233 19234 19235 19236 19237 19238 19239 19240 19241 19242 19243 19244 19245 19246 19247 19248 19249 19250 19251 19252 19253 19254 19255 19256 19257 19258 19259 19260 19261 19262 19263 19264 19265 19266 19267 19268 19269 19270 19271 19272 19273 19274 19275 19276 19277 19278 19279 19280 19281 19282 19283 19284 19285 19286 19287 19288 19289 19290 19291 19292 19293 19294 19295 19296 19297 19298 19299 19300 19301 19302 19303 19304 19305 19306 19307 19308 19309 19310 19311 19312 19313 19314 19315 19316 19317 19318 19319 19320 19321 19322 19323 19324 19325 19326 19327 19328 19329 19330 19331 19332 19333 19334 19335 19336 19337 19338 19339 19340 19341 19342 19343 19344 19345 19346 19347 19348 19349 19350 19351 19352 19353 19354 19355 19356 19357 19358 19359 19360 19361 19362 19363 19364 19365 19366 19367 19368 19369 19370 19371 19372 19373 19374 19375 19376 19377 19378 19379 19380 19381 19382 19383 19384 19385 19386 19387 19388 19389 19390 19391 19392 19393 19394 19395 19396 19397 19398 19399 19400 19401 19402 19403 19404 19405 19406 19407 19408 19409 19410 19411 19412 19413 19414 19415 19416 19417 19418 19419 19420 19421 19422 19423 19424 19425 19426 19427 19428 19429 19430 19431 19432 19433 19434 19435 19436 19437 19438 19439 19440 19441 19442 19443 19444 19445 19446 19447 19448 19449 19450 19451 19452 19453 19454 19455 19456 19457 19458 19459 19460 19461 19462 19463 19464 19465 19466 19467 19468 19469 19470 19471 19472 19473 19474 19475 19476 19477 19478 19479 19480 19481 19482 19483 19484 19485 19486 19487 19488 19489 19490 19491 19492 19493 19494 19495 19496 19497 19498 19499 19500 19501 19502 19503 19504 19505 19506 19507 19508 19509 19510 19511 19512 19513 19514 19515 19516 19517 19518 19519 19520 19521 19522 19523 19524 19525 19526 19527 19528 19529 19530 19531 19532 19533 19534 19535 19536 19537 19538 19539 19540 19541 19542 19543 19544 19545 19546 19547 19548 19549 19550 19551 19552 19553 19554 19555 19556 19557 19558 19559 19560 19561 19562 19563 19564 19565 19566 19567 19568 19569 19570 19571 19572 19573 19574 19575 19576 19577 19578 19579 19580 19581 19582 19583 19584 19585 19586 19587 19588 19589 19590 19591 19592 19593 19594 19595 19596 19597 19598 19599 19600 19601 19602 19603 19604 19605 19606 19607 19608 19609 19610 19611 19612 19613 19614 19615 19616 
19617 19618 19619 19620 19621 19622 19623 19624 19625 19626 19627 19628 19629 19630 19631 19632 19633 19634 19635 19636 19637 19638 19639 19640 19641 19642 19643 19644 19645 19646 19647 19648 19649 19650 19651 19652 19653 19654 19655 19656 19657 19658 19659 19660 19661 19662 19663 19664 19665 19666 19667 19668 19669 19670 19671 19672 19673 19674 19675 19676 19677 19678 19679 19680 19681 19682 19683 19684 19685 19686 19687 19688 19689 19690 19691 19692 19693 19694 19695 19696 19697 19698 19699 19700 19701 19702 19703 19704 19705 19706 19707 19708 19709 19710 19711 19712 19713 19714 19715 19716 19717 19718 19719 19720 19721 19722 19723 19724 19725 19726 19727 19728 19729 19730 19731 19732 19733 19734 19735 19736 19737 19738 19739 19740 19741 19742 19743 19744 19745 19746 19747 19748 19749 19750 19751 19752 19753 19754 19755 19756 19757 19758 19759 19760 19761 19762 19763 19764 19765 19766 19767 19768 19769 19770 19771 19772 19773 19774 19775 19776 19777 19778 19779 19780 19781 19782 19783 19784 19785 19786 19787 19788 19789 19790 19791 19792 19793 19794 19795 19796 19797 19798 19799 19800 19801 19802 19803 19804 19805 19806 19807 19808 19809 19810 19811 19812 19813 19814 19815 19816 19817 19818 19819 19820 19821 19822 19823 19824 19825 19826 19827 19828 19829 19830 19831 19832 19833 19834 19835 19836 19837 19838 19839 19840 19841 19842 19843 19844 19845 19846 19847 19848 19849 19850 19851 19852 19853 19854 19855 19856 19857 19858 19859 19860 19861 19862 19863 19864 19865 19866 19867 19868 19869 19870 19871 19872 19873 19874 19875 19876 19877 19878 19879 19880 19881 19882 19883 19884 19885 19886 19887 19888 19889 19890 19891 19892 19893 19894 19895 19896 19897 19898 19899 19900 19901 19902 19903 19904 19905 19906 19907 19908 19909 19910 19911 19912 19913 19914 19915 19916 19917 19918 19919 19920 19921 19922 19923 19924 19925 19926 19927 19928 19929 19930 19931 19932 19933 19934 19935 19936 19937 19938 19939 19940 19941 19942 19943 19944 19945 19946 19947 19948 19949 19950 19951 19952 19953 19954 19955 19956 19957 19958 19959 19960 19961 19962 19963 19964 19965 19966 19967 19968 19969 19970 19971 19972 19973 19974 19975 19976 19977 19978 19979 19980 19981 19982 19983 19984 19985 19986 19987 19988 19989 19990 19991 19992 19993 19994 19995 19996 19997 19998 19999 20000 20001 20002 20003 20004 20005 20006 20007 20008 20009 20010 20011 20012 20013 20014 20015 20016 20017 20018 20019 20020 20021 20022 20023 20024 20025 20026 20027 20028 20029 20030 20031 20032 20033 20034 20035 20036 20037 20038 20039 20040 20041 20042 20043 20044 20045 20046 20047 20048 20049 20050 20051 20052 20053 20054 20055 20056 20057 20058 20059 20060 20061 20062 20063 20064 20065 20066 20067 20068 20069 20070 20071 20072 20073 20074 20075 20076 20077 20078 20079 20080 20081 20082 20083 20084 20085 20086 20087 20088 20089 20090 20091 20092 20093 20094 20095 20096 20097 20098 20099 20100 20101 20102 20103 20104 20105 20106 20107 20108 20109 20110 20111 20112 20113 20114 20115 20116 20117 20118 20119 20120 20121 20122 20123 20124 20125 20126 20127 20128 20129 20130 20131 20132 20133 20134 20135 20136 20137 20138 20139 20140 20141 20142 20143 20144 20145 20146 20147 20148 20149 20150 20151 20152 20153 20154 20155 20156 20157 20158 20159 20160 20161 20162 20163 20164 20165 20166 20167 20168 20169 20170 20171 20172 20173 20174 20175 20176 20177 20178 20179 20180 20181 20182 20183 20184 20185 20186 20187 20188 20189 20190 20191 20192 20193 20194 20195 20196 20197 20198 20199 20200 20201 20202 20203 20204 20205 20206 20207 20208 
20209 20210 20211 20212 20213 20214 20215 20216 20217 20218 20219 20220 20221 20222 20223 20224 20225 20226 20227 20228 20229 20230 20231 20232 20233 20234 20235 20236 20237 20238 20239 20240 20241 20242 20243 20244 20245 20246 20247 20248 20249 20250 20251 20252 20253 20254 20255 20256 20257 20258 20259 20260 20261 20262 20263 20264 20265 20266 20267 20268 20269 20270 20271 20272 20273 20274 20275 20276 20277 20278 20279 20280 20281 20282 20283 20284 20285 20286 20287 20288 20289 20290 20291 20292 20293 20294 20295 20296 20297 20298 20299 20300 20301 20302 20303 20304 20305 20306 20307 20308 20309 20310 20311 20312 20313 20314 20315 20316 20317 20318 20319 20320 20321 20322 20323 20324 20325 20326 20327 20328 20329 20330 20331 20332 20333 20334 20335 20336 20337 20338 20339 20340 20341 20342 20343 20344 20345 20346 20347 20348 20349 20350 20351 20352 20353 20354 20355 20356 20357 20358 20359 20360 20361 20362 20363 20364 20365 20366 20367 20368 20369 20370 20371 20372 20373 20374 20375 20376 20377 20378 20379 20380 20381 20382 20383 20384 20385 20386 20387 20388 20389 20390 20391 20392 20393 20394 20395 20396 20397 20398 20399 20400 20401 20402 20403 20404 20405 20406 20407 20408 20409 20410 20411 20412 20413 20414 20415 20416 20417 20418 20419 20420 20421 20422 20423 20424 20425 20426 20427 20428 20429 20430 20431 20432 20433 20434 20435 20436 20437 20438 20439 20440 20441 20442 20443 20444 20445 20446 20447 20448 20449 20450 20451 20452 20453 20454 20455 20456 20457 20458 20459 20460 20461 20462 20463 20464 20465 20466 20467 20468 20469 20470 20471 20472 20473 20474 20475 20476 20477 20478 20479 20480 20481 20482 20483 20484 20485 20486 20487 20488 20489 20490 20491 20492 20493 20494 20495 20496 20497 20498 20499 20500 20501 20502 20503 20504 20505 20506 20507 20508 20509 20510 20511 20512 20513 20514 20515 20516 20517 20518 20519 20520 20521 20522 20523 20524 20525 20526 20527 20528 20529 20530 20531 20532 20533 20534 20535 20536 20537 20538 20539 20540 20541 20542 20543 20544 20545 20546 20547 20548 20549 20550 20551 20552 20553 20554 20555 20556 20557 20558 20559 20560 20561 20562 20563 20564 20565 20566 20567 20568 20569 20570 20571 20572 20573 20574 20575 20576 20577 20578 20579 20580 20581 20582 20583 20584 20585 20586 20587 20588 20589 20590 20591 20592 20593 20594 20595 20596 20597 20598 20599 20600 20601 20602 20603 20604 20605 20606 20607 20608 20609 20610 20611 20612 20613 20614 20615 20616 20617 20618 20619 20620 20621 20622 20623 20624 20625 20626 20627 20628 20629 20630 20631 20632 20633 20634 20635 20636 20637 20638 20639 20640 20641 20642 20643 20644 20645 20646 20647 20648 20649 20650 20651 20652 20653 20654 20655 20656 20657 20658 20659 20660 20661 20662 20663 20664 20665 20666 20667 20668 20669 20670 20671 20672 20673 20674 20675 20676 20677 20678 20679 20680 20681 20682 20683 20684 20685 20686 20687 20688 20689 20690 20691 20692 20693 20694 20695 20696 20697 20698 20699 20700 20701 20702 20703 20704 20705 20706 20707 20708 20709 20710 20711 20712 20713 20714 20715 20716 20717 20718 20719 20720 20721 20722 20723 20724 20725 20726 20727 20728 20729 20730 20731 20732 20733 20734 20735 20736 20737 20738 20739 20740 20741 20742 20743 20744 20745 20746 20747 20748 20749 20750 20751 20752 20753 20754 20755 20756 20757 20758 20759 20760 20761 20762 20763 20764 20765 20766 20767 20768 20769 20770 20771 20772 20773 20774 20775 20776 20777 20778 20779 20780 20781 20782 20783 20784 20785 20786 20787 20788 20789 20790 20791 20792 20793 20794 20795 20796 20797 20798 20799 20800 
33233 33234 33235 33236 33237 33238 33239 33240 33241 33242 33243 33244 33245 33246 33247 33248 33249 33250 33251 33252 33253 33254 33255 33256 33257 33258 33259 33260 33261 33262 33263 33264 33265 33266 33267 33268 33269 33270 33271 33272 33273 33274 33275 33276 33277 33278 33279 33280 33281 33282 33283 33284 33285 33286 33287 33288 33289 33290 33291 33292 33293 33294 33295 33296 33297 33298 33299 33300 33301 33302 33303 33304 33305 33306 33307 33308 33309 33310 33311 33312 33313 33314 33315 33316 33317 33318 33319 33320 33321 33322 33323 33324 33325 33326 33327 33328 33329 33330 33331 33332 33333 33334 33335 33336 33337 33338 33339 33340 33341 33342 33343 33344 33345 33346 33347 33348 33349 33350 33351 33352 33353 33354 33355 33356 33357 33358 33359 33360 33361 33362 33363 33364 33365 33366 33367 33368 33369 33370 33371 33372 33373 33374 33375 33376 33377 33378 33379 33380 33381 33382 33383 33384 33385 33386 33387 33388 33389 33390 33391 33392 33393 33394 33395 33396 33397 33398 33399 33400 33401 33402 33403 33404 33405 33406 33407 33408 33409 33410 33411 33412 33413 33414 33415 33416 33417 33418 33419 33420 33421 33422 33423 33424 33425 33426 33427 33428 33429 33430 33431 33432 33433 33434 33435 33436 33437 33438 33439 33440 33441 33442 33443 33444 33445 33446 33447 33448 33449 33450 33451 33452 33453 33454 33455 33456 33457 33458 33459 33460 33461 33462 33463 33464 33465 33466 33467 33468 33469 33470 33471 33472 33473 33474 33475 33476 33477 33478 33479 33480 33481 33482 33483 33484 33485 33486 33487 33488 33489 33490 33491 33492 33493 33494 33495 33496 33497 33498 33499 33500 33501 33502 33503 33504 33505 33506 33507 33508 33509 33510 33511 33512 33513 33514 33515 33516 33517 33518 33519 33520 33521 33522 33523 33524 33525 33526 33527 33528 33529 33530 33531 33532 33533 33534 33535 33536 33537 33538 33539 33540 33541 33542 33543 33544 33545 33546 33547 33548 33549 33550 33551 33552 33553 33554 33555 33556 33557 33558 33559 33560 33561 33562 33563 33564 33565 33566 33567 33568 33569 33570 33571 33572 33573 33574 33575 33576 33577 33578 33579 33580 33581 33582 33583 33584 33585 33586 33587 33588 33589 33590 33591 33592 33593 33594 33595 33596 33597 33598 33599 33600 33601 33602 33603 33604 33605 33606 33607 33608 33609 33610 33611 33612 33613 33614 33615 33616 33617 33618 33619 33620 33621 33622 33623 33624 33625 33626 33627 33628 33629 33630 33631 33632 33633 33634 33635 33636 33637 33638 33639 33640 33641 33642 33643 33644 33645 33646 33647 33648 33649 33650 33651 33652 33653 33654 33655 33656 33657 33658 33659 33660 33661 33662 33663 33664 33665 33666 33667 33668 33669 33670 33671 33672 33673 33674 33675 33676 33677 33678 33679 33680 33681 33682 33683 33684 33685 33686 33687 33688 33689 33690 33691 33692 33693 33694 33695 33696 33697 33698 33699 33700 33701 33702 33703 33704 33705 33706 33707 33708 33709 33710 33711 33712 33713 33714 33715 33716 33717 33718 33719 33720 33721 33722 33723 33724 33725 33726 33727 33728 33729 33730 33731 33732 33733 33734 33735 33736 33737 33738 33739 33740 33741 33742 33743 33744 33745 33746 33747 33748 33749 33750 33751 33752 33753 33754 33755 33756 33757 33758 33759 33760 33761 33762 33763 33764 33765 33766 33767 33768 33769 33770 33771 33772 33773 33774 33775 33776 33777 33778 33779 33780 33781 33782 33783 33784 33785 33786 33787 33788 33789 33790 33791 33792 33793 33794 33795 33796 33797 33798 33799 33800 33801 33802 33803 33804 33805 33806 33807 33808 33809 33810 33811 33812 33813 33814 33815 33816 33817 33818 33819 33820 33821 33822 33823 33824 
33825 33826 33827 33828 33829 33830 33831 33832 33833 33834 33835 33836 33837 33838 33839 33840 33841 33842 33843 33844 33845 33846 33847 33848 33849 33850 33851 33852 33853 33854 33855 33856 33857 33858 33859 33860 33861 33862 33863 33864 33865 33866 33867 33868 33869 33870 33871 33872 33873 33874 33875 33876 33877 33878 33879 33880 33881 33882 33883 33884 33885 33886 33887 33888 33889 33890 33891 33892 33893 33894 33895 33896 33897 33898 33899 33900 33901 33902 33903 33904 33905 33906 33907 33908 33909 33910 33911 33912 33913 33914 33915 33916 33917 33918 33919 33920 33921 33922 33923 33924 33925 33926 33927 33928 33929 33930 33931 33932 33933 33934 33935 33936 33937 33938 33939 33940 33941 33942 33943 33944 33945 33946 33947 33948 33949 33950 33951 33952 33953 33954 33955 33956 33957 33958 33959 33960 33961 33962 33963 33964 33965 33966 33967 33968 33969 33970 33971 33972 33973 33974 33975 33976 33977 33978 33979 33980 33981 33982 33983 33984 33985 33986 33987 33988 33989 33990 33991 33992 33993 33994 33995 33996 33997 33998 33999 34000 34001 34002 34003 34004 34005 34006 34007 34008 34009 34010 34011 34012 34013 34014 34015 34016 34017 34018 34019 34020 34021 34022 34023 34024 34025 34026 34027 34028 34029 34030 34031 34032 34033 34034 34035 34036 34037 34038 34039 34040 34041 34042 34043 34044 34045 34046 34047 34048 34049 34050 34051 34052 34053 34054 34055 34056 34057 34058 34059 34060 34061 34062 34063 34064 34065 34066 34067 34068 34069 34070 34071 34072 34073 34074 34075 34076 34077 34078 34079 34080 34081 34082 34083 34084 34085 34086 34087 34088 34089 34090 34091 34092 34093 34094 34095 34096 34097 34098 34099 34100 34101 34102 34103 34104 34105 34106 34107 34108 34109 34110 34111 34112 34113 34114 34115 34116 34117 34118 34119 34120 34121 34122 34123 34124 34125 34126 34127 34128 34129 34130 34131 34132 34133 34134 34135 34136 34137 34138 34139 34140 34141 34142 34143 34144 34145 34146 34147 34148 34149 34150 34151 34152 34153 34154 34155 34156 34157 34158 34159 34160 34161 34162 34163 34164 34165 34166 34167 34168 34169 34170 34171 34172 34173 34174 34175 34176 34177 34178 34179 34180 34181 34182 34183 34184 34185 34186 34187 34188 34189 34190 34191 34192 34193 34194 34195 34196 34197 34198 34199 34200 34201 34202 34203 34204 34205 34206 34207 34208 34209 34210 34211 34212 34213 34214 34215 34216 34217 34218 34219 34220 34221 34222 34223 34224 34225 34226 34227 34228 34229 34230 34231 34232 34233 34234 34235 34236 34237 34238 34239 34240 34241 34242 34243 34244 34245 34246 34247 34248 34249 34250 34251 34252 34253 34254 34255 34256 34257 34258 34259 34260 34261 34262 34263 34264 34265 34266 34267 34268 34269 34270 34271 34272 34273 34274 34275 34276 34277 34278 34279 34280 34281 34282 34283 34284 34285 34286 34287 34288 34289 34290 34291 34292 34293 34294 34295 34296 34297 34298 34299 34300 34301 34302 34303 34304 34305 34306 34307 34308 34309 34310 34311 34312 34313 34314 34315 34316 34317 34318 34319 34320 34321 34322 34323 34324 34325 34326 34327 34328 34329 34330 34331 34332 34333 34334 34335 34336 34337 34338 34339 34340 34341 34342 34343 34344 34345 34346 34347 34348 34349 34350 34351 34352 34353 34354 34355 34356 34357 34358 34359 34360 34361 34362 34363 34364 34365 34366 34367 34368 34369 34370 34371 34372 34373 34374 34375 34376 34377 34378 34379 34380 34381 34382 34383 34384 34385 34386 34387 34388 34389 34390 34391 34392 34393 34394 34395 34396 34397 34398 34399 34400 34401 34402 34403 34404 34405 34406 34407 34408 34409 34410 34411 34412 34413 34414 34415 34416 
34417 34418 34419 34420 34421 34422 34423 34424 34425 34426 34427 34428 34429 34430 34431 34432 34433 34434 34435 34436 34437 34438 34439 34440 34441 34442 34443 34444 34445 34446 34447 34448 34449 34450 34451 34452 34453 34454 34455 34456 34457 34458 34459 34460 34461 34462 34463 34464 34465 34466 34467 34468 34469 34470 34471 34472 34473 34474 34475 34476 34477 34478 34479 34480 34481 34482 34483 34484 34485 34486 34487 34488 34489 34490 34491 34492 34493 34494 34495 34496 34497 34498 34499 34500 34501 34502 34503 34504 34505 34506 34507 34508 34509 34510 34511 34512 34513 34514 34515 34516 34517 34518 34519 34520 34521 34522 34523 34524 34525 34526 34527 34528 34529 34530 34531 34532 34533 34534 34535 34536 34537 34538 34539 34540 34541 34542 34543 34544 34545 34546 34547 34548 34549 34550 34551 34552 34553 34554 34555 34556 34557 34558 34559 34560 34561 34562 34563 34564 34565 34566 34567 34568 34569 34570 34571 34572 34573 34574 34575 34576 34577 34578 34579 34580 34581 34582 34583 34584 34585 34586 34587 34588 34589 34590 34591 34592 34593 34594 34595 34596 34597 34598 34599 34600 34601 34602 34603 34604 34605 34606 34607 34608 34609 34610 34611 34612 34613 34614 34615 34616 34617 34618 34619 34620 34621 34622 34623 34624 34625 34626 34627 34628 34629 34630 34631 34632 34633 34634 34635 34636 34637 34638 34639 34640 34641 34642 34643 34644 34645 34646 34647 34648 34649 34650 34651 34652 34653 34654 34655 34656 34657 34658 34659 34660 34661 34662 34663 34664 34665 34666 34667 34668 34669 34670 34671 34672 34673 34674 34675 34676 34677 34678 34679 34680 34681 34682 34683 34684 34685 34686 34687 34688 34689 34690 34691 34692 34693 34694 34695 34696 34697 34698 34699 34700 34701 34702 34703 34704 34705 34706 34707 34708 34709 34710 34711 34712 34713 34714 34715 34716 34717 34718 34719 34720 34721 34722 34723 34724 34725 34726 34727 34728 34729 34730 34731 34732 34733 34734 34735 34736 34737 34738 34739 34740 34741 34742 34743 34744 34745 34746 34747 34748 34749 34750 34751 34752 34753 34754 34755 34756 34757 34758 34759 34760 34761 34762 34763 34764 34765 34766 34767 34768 34769 34770 34771 34772 34773 34774 34775 34776 34777 34778 34779 34780 34781 34782 34783 34784 34785 34786 34787 34788 34789 34790 34791 34792 34793 34794 34795 34796 34797 34798 34799 34800 34801 34802 34803 34804 34805 34806 34807 34808 34809 34810 34811 34812 34813 34814 34815 34816 34817 34818 34819 34820 34821 34822 34823 34824 34825 34826 34827 34828 34829 34830 34831 34832 34833 34834 34835 34836 34837 34838 34839 34840 34841 34842 34843 34844 34845 34846 34847 34848 34849 34850 34851 34852 34853 34854 34855 34856 34857 34858 34859 34860 34861 34862 34863 34864 34865 34866 34867 34868 34869 34870 34871 34872 34873 34874 34875 34876 34877 34878 34879 34880 34881 34882 34883 34884 34885 34886 34887 34888 34889 34890 34891 34892 34893 34894 34895 34896 34897 34898 34899 34900 34901 34902 34903 34904 34905 34906 34907 34908 34909 34910 34911 34912 34913 34914 34915 34916 34917 34918 34919 34920 34921 34922 34923 34924 34925 34926 34927 34928 34929 34930 34931 34932 34933 34934 34935 34936 34937 34938 34939 34940 34941 34942 34943 34944 34945 34946 34947 34948 34949 34950 34951 34952 34953 34954 34955 34956 34957 34958 34959 34960 34961 34962 34963 34964 34965 34966 34967 34968 34969 34970 34971 34972 34973 34974 34975 34976 34977 34978 34979 34980 34981 34982 34983 34984 34985 34986 34987 34988 34989 34990 34991 34992 34993 34994 34995 34996 34997 34998 34999 35000 35001 35002 35003 35004 35005 35006 35007 35008 
35009 35010 35011 35012 35013 35014 35015 35016 35017 35018 35019 35020 35021 35022 35023 35024 35025 35026 35027 35028 35029 35030 35031 35032 35033 35034 35035 35036 35037 35038 35039 35040 35041 35042 35043 35044 35045 35046 35047 35048 35049 35050 35051 35052 35053 35054 35055 35056 35057 35058 35059 35060 35061 35062 35063 35064 35065 35066 35067 35068 35069 35070 35071 35072 35073 35074 35075 35076 35077 35078 35079 35080 35081 35082 35083 35084 35085 35086 35087 35088 35089 35090 35091 35092 35093 35094 35095 35096 35097 35098 35099 35100 35101 35102 35103 35104 35105 35106 35107 35108 35109 35110 35111 35112 35113 35114 35115 35116 35117 35118 35119 35120 35121 35122 35123 35124 35125 35126 35127 35128 35129 35130 35131 35132 35133 35134 35135 35136 35137 35138 35139 35140 35141 35142 35143 35144 35145 35146 35147 35148 35149 35150 35151 35152 35153 35154 35155 35156 35157 35158 35159 35160 35161 35162 35163 35164 35165 35166 35167 35168 35169 35170 35171 35172 35173 35174 35175 35176 35177 35178 35179 35180 35181 35182 35183 35184 35185 35186 35187 35188 35189 35190 35191 35192 35193 35194 35195 35196 35197 35198 35199 35200 35201 35202 35203 35204 35205 35206 35207 35208 35209 35210 35211 35212 35213 35214 35215 35216 35217 35218 35219 35220 35221 35222 35223 35224 35225 35226 35227 35228 35229 35230 35231 35232 35233 35234 35235 35236 35237 35238 35239 35240 35241 35242 35243 35244 35245 35246 35247 35248 35249 35250 35251 35252 35253 35254 35255 35256 35257 35258 35259 35260 35261 35262 35263 35264 35265 35266 35267 35268 35269 35270 35271 35272 35273 35274 35275 35276 35277 35278 35279 35280 35281 35282 35283 35284 35285 35286 35287 35288 35289 35290 35291 35292 35293 35294 35295 35296 35297 35298 35299 35300 35301 35302 35303 35304 35305 35306 35307 35308 35309 35310 35311 35312 35313 35314 35315 35316 35317 35318 35319 35320 35321 35322 35323 35324 35325 35326 35327 35328 35329 35330 35331 35332 35333 35334 35335 35336 35337 35338 35339 35340 35341 35342 35343 35344 35345 35346 35347 35348 35349 35350 35351 35352 35353 35354 35355 35356 35357 35358 35359 35360 35361 35362 35363 35364 35365 35366 35367 35368 35369 35370 35371 35372 35373 35374 35375 35376 35377 35378 35379 35380 35381 35382 35383 35384 35385 35386 35387 35388 35389 35390 35391 35392 35393 35394 35395 35396 35397 35398 35399 35400 35401 35402 35403 35404 35405 35406 35407 35408 35409 35410 35411 35412 35413 35414 35415 35416 35417 35418 35419 35420 35421 35422 35423 35424 35425 35426 35427 35428 35429 35430 35431 35432 35433 35434 35435 35436 35437 35438 35439 35440 35441 35442 35443 35444 35445 35446 35447 35448 35449 35450 35451 35452 35453 35454 35455 35456 35457 35458 35459 35460 35461 35462 35463 35464 35465 35466 35467 35468 35469 35470 35471 35472 35473 35474 35475 35476 35477 35478 35479 35480 35481 35482 35483 35484 35485 35486 35487 35488 35489 35490 35491 35492 35493 35494 35495 35496 35497 35498 35499 35500 35501 35502 35503 35504 35505 35506 35507 35508 35509 35510 35511 35512 35513 35514 35515 35516 35517 35518 35519 35520 35521 35522 35523 35524 35525 35526 35527 35528 35529 35530 35531 35532 35533 35534 35535 35536 35537 35538 35539 35540 35541 35542 35543 35544 35545 35546 35547 35548 35549 35550 35551 35552 35553 35554 35555 35556 35557 35558 35559 35560 35561 35562 35563 35564 35565 35566 35567 35568 35569 35570 35571 35572 35573 35574 35575 35576 35577 35578 35579 35580 35581 35582 35583 35584 35585 35586 35587 35588 35589 35590 35591 35592 35593 35594 35595 35596 35597 35598 35599 35600 
35601 35602 35603 35604 35605 35606 35607 35608 35609 35610 35611 35612 35613 35614 35615 35616 35617 35618 35619 35620 35621 35622 35623 35624 35625 35626 35627 35628 35629 35630 35631 35632 35633 35634 35635 35636 35637 35638 35639 35640 35641 35642 35643 35644 35645 35646 35647 35648 35649 35650 35651 35652 35653 35654 35655 35656 35657 35658 35659 35660 35661 35662 35663 35664 35665 35666 35667 35668 35669 35670 35671 35672 35673 35674 35675 35676 35677 35678 35679 35680 35681 35682 35683 35684 35685 35686 35687 35688 35689 35690 35691 35692 35693 35694 35695 35696 35697 35698 35699 35700 35701 35702 35703 35704 35705 35706 35707 35708 35709 35710 35711 35712 35713 35714 35715 35716 35717 35718 35719 35720 35721 35722 35723 35724 35725 35726 35727 35728 35729 35730 35731 35732 35733 35734 35735 35736 35737 35738 35739 35740 35741 35742 35743 35744 35745 35746 35747 35748 35749 35750 35751 35752 35753 35754 35755 35756 35757 35758 35759 35760 35761 35762 35763 35764 35765 35766 35767 35768 35769 35770 35771 35772 35773 35774 35775 35776 35777 35778 35779 35780 35781 35782 35783 35784 35785 35786 35787 35788 35789 35790 35791 35792 35793 35794 35795 35796 35797 35798 35799 35800 35801 35802 35803 35804 35805 35806 35807 35808 35809 35810 35811 35812 35813 35814 35815 35816 35817 35818 35819 35820 35821 35822 35823 35824 35825 35826 35827 35828 35829 35830 35831 35832 35833 35834 35835 35836 35837 35838 35839 35840 35841 35842 35843 35844 35845 35846 35847 35848 35849 35850 35851 35852 35853 35854 35855 35856 35857 35858 35859 35860 35861 35862 35863 35864 35865 35866 35867 35868 35869 35870 35871 35872 35873 35874 35875 35876 35877 35878 35879 35880 35881 35882 35883 35884 35885 35886 35887 35888 35889 35890 35891 35892 35893 35894 35895 35896 35897 35898 35899 35900 35901 35902 35903 35904 35905 35906 35907 35908 35909 35910 35911 35912 35913 35914 35915 35916 35917 35918 35919 35920 35921 35922 35923 35924 35925 35926 35927 35928 35929 35930 35931 35932 35933 35934 35935 35936 35937 35938 35939 35940 35941 35942 35943 35944 35945 35946 35947 35948 35949 35950 35951 35952 35953 35954 35955 35956 35957 35958 35959 35960 35961 35962 35963 35964 35965 35966 35967 35968 35969 35970 35971 35972 35973 35974 35975 35976 35977 35978 35979 35980 35981 35982 35983 35984 35985 35986 35987 35988 35989 35990 35991 35992 35993 35994 35995 35996 35997 35998 35999 36000 36001 36002 36003 36004 36005 36006 36007 36008 36009 36010 36011 36012 36013 36014 36015 36016 36017 36018 36019 36020 36021 36022 36023 36024 36025 36026 36027 36028 36029 36030 36031 36032 36033 36034 36035 36036 36037 36038 36039 36040 36041 36042 36043 36044 36045 36046 36047 36048 36049 36050 36051 36052 36053 36054 36055 36056 36057 36058 36059 36060 36061 36062 36063 36064 36065 36066 36067 36068 36069 36070 36071 36072 36073 36074 36075 36076 36077 36078 36079 36080 36081 36082 36083 36084 36085 36086 36087 36088 36089 36090 36091 36092 36093 36094 36095 36096 36097 36098 36099 36100 36101 36102 36103 36104 36105 36106 36107 36108 36109 36110 36111 36112 36113 36114 36115 36116 36117 36118 36119 36120 36121 36122 36123 36124 36125 36126 36127 36128 36129 36130 36131 36132 36133 36134 36135 36136 36137 36138 36139 36140 36141 36142 36143 36144 36145 36146 36147 36148 36149 36150 36151 36152 36153 36154 36155 36156 36157 36158 36159 36160 36161 36162 36163 36164 36165 36166 36167 36168 36169 36170 36171 36172 36173 36174 36175 36176 36177 36178 36179 36180 36181 36182 36183 36184 36185 36186 36187 36188 36189 36190 36191 36192 
36193 36194 36195 36196 36197 36198 36199 36200 36201 36202 36203 36204 36205 36206 36207 36208 36209 36210 36211 36212 36213 36214 36215 36216 36217 36218 36219 36220 36221 36222 36223 36224 36225 36226 36227 36228 36229 36230 36231 36232 36233 36234 36235 36236 36237 36238 36239 36240 36241 36242 36243 36244 36245 36246 36247 36248 36249 36250 36251 36252 36253 36254 36255 36256 36257 36258 36259 36260 36261 36262 36263 36264 36265 36266 36267 36268 36269 36270 36271 36272 36273 36274 36275 36276 36277 36278 36279 36280 36281 36282 36283 36284 36285 36286 36287 36288 36289 36290 36291 36292 36293 36294 36295 36296 36297 36298 36299 36300 36301 36302 36303 36304 36305 36306 36307 36308 36309 36310 36311 36312 36313 36314 36315 36316 36317 36318 36319 36320 36321 36322 36323 36324 36325 36326 36327 36328 36329 36330 36331 36332 36333 36334 36335 36336 36337 36338 36339 36340 36341 36342 36343 36344 36345 36346 36347 36348 36349 36350 36351 36352 36353 36354 36355 36356 36357 36358 36359 36360 36361 36362 36363 36364 36365 36366 36367 36368 36369 36370 36371 36372 36373 36374 36375 36376 36377 36378 36379 36380 36381 36382 36383 36384 36385 36386 36387 36388 36389 36390 36391 36392 36393 36394 36395 36396 36397 36398 36399 36400 36401 36402 36403 36404 36405 36406 36407 36408 36409 36410 36411 36412 36413 36414 36415 36416 36417 36418 36419 36420 36421 36422 36423 36424 36425 36426 36427 36428 36429 36430 36431 36432 36433 36434 36435 36436 36437 36438 36439 36440 36441 36442 36443 36444 36445 36446 36447 36448 36449 36450 36451 36452 36453 36454 36455 36456 36457 36458 36459 36460 36461 36462 36463 36464 36465 36466 36467 36468 36469 36470 36471 36472 36473 36474 36475 36476 36477 36478 36479 36480 36481 36482 36483 36484 36485 36486 36487 36488 36489 36490 36491 36492 36493 36494 36495 36496 36497 36498 36499 36500 36501 36502 36503 36504 36505 36506 36507 36508 36509 36510 36511 36512 36513 36514 36515 36516 36517 36518 36519 36520 36521 36522 36523 36524 36525 36526 36527 36528 36529 36530 36531 36532 36533 36534 36535 36536 36537 36538 36539 36540 36541 36542 36543 36544 36545 36546 36547 36548 36549 36550 36551 36552 36553 36554 36555 36556 36557 36558 36559 36560 36561 36562 36563 36564 36565 36566 36567 36568 36569 36570 36571 36572 36573 36574 36575 36576 36577 36578 36579 36580 36581 36582 36583 36584 36585 36586 36587 36588 36589 36590 36591 36592 36593 36594 36595 36596 36597 36598 36599 36600 36601 36602 36603 36604 36605 36606 36607 36608 36609 36610 36611 36612 36613 36614 36615 36616 36617 36618 36619 36620 36621 36622 36623 36624 36625 36626 36627 36628 36629 36630 36631 36632 36633 36634 36635 36636 36637 36638 36639 36640 36641 36642 36643 36644 36645 36646 36647 36648 36649 36650 36651 36652 36653 36654 36655 36656 36657 36658 36659 36660 36661 36662 36663 36664 36665 36666 36667 36668 36669 36670 36671 36672 36673 36674 36675 36676 36677 36678 36679 36680 36681 36682 36683 36684 36685 36686 36687 36688 36689 36690 36691 36692 36693 36694 36695 36696 36697 36698 36699 36700 36701 36702 36703 36704 36705 36706 36707 36708 36709 36710 36711 36712 36713 36714 36715 36716 36717 36718 36719 36720 36721 36722 36723 36724 36725 36726 36727 36728 36729 36730 36731 36732 36733 36734 36735 36736 36737 36738 36739 36740 36741 36742 36743 36744 36745 36746 36747 36748 36749 36750 36751 36752 36753 36754 36755 36756 36757 36758 36759 36760 36761 36762 36763 36764 36765 36766 36767 36768 36769 36770 36771 36772 36773 36774 36775 36776 36777 36778 36779 36780 36781 36782 36783 36784 
36785 36786 36787 36788 36789 36790 36791 36792 36793 36794 36795 36796 36797 36798 36799 36800 36801 36802 36803 36804 36805 36806 36807 36808 36809 36810 36811 36812 36813 36814 36815 36816 36817 36818 36819 36820 36821 36822 36823 36824 36825 36826 36827 36828 36829 36830 36831 36832 36833 36834 36835 36836 36837 36838 36839 36840 36841 36842 36843 36844 36845 36846 36847 36848 36849 36850 36851 36852 36853 36854 36855 36856 36857 36858 36859 36860 36861 36862 36863 36864 36865 36866 36867 36868 36869 36870 36871 36872 36873 36874 36875 36876 36877 36878 36879 36880 36881 36882 36883 36884 36885 36886 36887 36888 36889 36890 36891 36892 36893 36894 36895 36896 36897 36898 36899 36900 36901 36902 36903 36904 36905 36906 36907 36908 36909 36910 36911 36912 36913 36914 36915 36916 36917 36918 36919 36920 36921 36922 36923 36924 36925 36926 36927 36928 36929 36930 36931 36932 36933 36934 36935 36936 36937 36938 36939 36940 36941 36942 36943 36944 36945 36946 36947 36948 36949 36950 36951 36952 36953 36954 36955 36956 36957 36958 36959 36960 36961 36962 36963 36964 36965 36966 36967 36968 36969 36970 36971 36972 36973 36974 36975 36976 36977 36978 36979 36980 36981 36982 36983 36984 36985 36986 36987 36988 36989 36990 36991 36992 36993 36994 36995 36996 36997 36998 36999 37000 37001 37002 37003 37004 37005 37006 37007 37008 37009 37010 37011 37012 37013 37014 37015 37016 37017 37018 37019 37020 37021 37022 37023 37024 37025 37026 37027 37028 37029 37030 37031 37032 37033 37034 37035 37036 37037 37038 37039 37040 37041 37042 37043 37044 37045 37046 37047 37048 37049 37050 37051 37052 37053 37054 37055 37056 37057 37058 37059 37060 37061 37062 37063 37064 37065 37066 37067 37068 37069 37070 37071 37072 37073 37074 37075 37076 37077 37078 37079 37080 37081 37082 37083 37084 37085 37086 37087 37088 37089 37090 37091 37092 37093 37094 37095 37096 37097 37098 37099 37100 37101 37102 37103 37104 37105 37106 37107 37108 37109 37110 37111 37112 37113 37114 37115 37116 37117 37118 37119 37120 37121 37122 37123 37124 37125 37126 37127 37128 37129 37130 37131 37132 37133 37134 37135 37136 37137 37138 37139 37140 37141 37142 37143 37144 37145 37146 37147 37148 37149 37150 37151 37152 37153 37154 37155 37156 37157 37158 37159 37160 37161 37162 37163 37164 37165 37166 37167 37168 37169 37170 37171 37172 37173 37174 37175 37176 37177 37178 37179 37180 37181 37182 37183 37184 37185 37186 37187 37188 37189 37190 37191 37192 37193 37194 37195 37196 37197 37198 37199 37200 37201 37202 37203 37204 37205 37206 37207 37208 37209 37210 37211 37212 37213 37214 37215 37216 37217 37218 37219 37220 37221 37222 37223 37224 37225 37226 37227 37228 37229 37230 37231 37232 37233 37234 37235 37236 37237 37238 37239 37240 37241 37242 37243 37244 37245 37246 37247 37248 37249 37250 37251 37252 37253 37254 37255 37256 37257 37258 37259 37260 37261 37262 37263 37264 37265 37266 37267 37268 37269 37270 37271 37272 37273 37274 37275 37276 37277 37278 37279 37280 37281 37282 37283 37284 37285 37286 37287 37288 37289 37290 37291 37292 37293 37294 37295 37296 37297 37298 37299 37300 37301 37302 37303 37304 37305 37306 37307 37308 37309 37310 37311 37312 37313 37314 37315 37316 37317 37318 37319 37320 37321 37322 37323 37324 37325 37326 37327 37328 37329 37330 37331 37332 37333 37334 37335 37336 37337 37338 37339 37340 37341 37342 37343 37344 37345 37346 37347 37348 37349 37350 37351 37352 37353 37354 37355 37356 37357 37358 37359 37360 37361 37362 37363 37364 37365 37366 37367 37368 37369 37370 37371 37372 37373 37374 37375 37376 
37377 37378 37379 37380 37381 37382 37383 37384 37385 37386 37387 37388 37389 37390 37391 37392 37393 37394 37395 37396 37397 37398 37399 37400 37401 37402 37403 37404 37405 37406 37407 37408 37409 37410 37411 37412 37413 37414 37415 37416 37417 37418 37419 37420 37421 37422 37423 37424 37425 37426 37427 37428 37429 37430 37431 37432 37433 37434 37435 37436 37437 37438 37439 37440 37441 37442 37443 37444 37445 37446 37447 37448 37449 37450 37451 37452 37453 37454 37455 37456 37457 37458 37459 37460 37461 37462 37463 37464 37465 37466 37467 37468 37469 37470 37471 37472 37473 37474 37475 37476 37477 37478 37479 37480 37481 37482 37483 37484 37485 37486 37487 37488 37489 37490 37491 37492 37493 37494 37495 37496 37497 37498 37499 37500 37501 37502 37503 37504 37505 37506 37507 37508 37509 37510 37511 37512 37513 37514 37515 37516 37517 37518 37519 37520 37521 37522 37523 37524 37525 37526 37527 37528 37529 37530 37531 37532 37533 37534 37535 37536 37537 37538 37539 37540 37541 37542 37543 37544 37545 37546 37547 37548 37549 37550 37551 37552 37553 37554 37555 37556 37557 37558 37559 37560 37561 37562 37563 37564 37565 37566 37567 37568 37569 37570 37571 37572 37573 37574 37575 37576 37577 37578 37579 37580 37581 37582 37583 37584 37585 37586 37587 37588 37589 37590 37591 37592 37593 37594 37595 37596 37597 37598 37599 37600 37601 37602 37603 37604 37605 37606 37607 37608 37609 37610 37611 37612 37613 37614 37615 37616 37617 37618 37619 37620 37621 37622 37623 37624 37625 37626 37627 37628 37629 37630 37631 37632 37633 37634 37635 37636 37637 37638 37639 37640 37641 37642 37643 37644 37645 37646 37647 37648 37649 37650 37651 37652 37653 37654 37655 37656 37657 37658 37659 37660 37661 37662 37663 37664 37665 37666 37667 37668 37669 37670 37671 37672 37673 37674 37675 37676 37677 37678 37679 37680 37681 37682 37683 37684 37685 37686 37687 37688 37689 37690 37691 37692 37693 37694 37695 37696 37697 37698 37699 37700 37701 37702 37703 37704 37705 37706 37707 37708 37709 37710 37711 37712 37713 37714 37715 37716 37717 37718 37719 37720 37721 37722 37723 37724 37725 37726 37727 37728 37729 37730 37731 37732 37733 37734 37735 37736 37737 37738 37739 37740 37741 37742 37743 37744 37745 37746 37747 37748 37749 37750 37751 37752 37753 37754 37755 37756 37757 37758 37759 37760 37761 37762 37763 37764 37765 37766 37767 37768 37769 37770 37771 37772 37773 37774 37775 37776 37777 37778 37779 37780 37781 37782 37783 37784 37785 37786 37787 37788 37789 37790 37791 37792 37793 37794 37795 37796 37797 37798 37799 37800 37801 37802 37803 37804 37805 37806 37807 37808 37809 37810 37811 37812 37813 37814 37815 37816 37817 37818 37819 37820 37821 37822 37823 37824 37825 37826 37827 37828 37829 37830 37831 37832 37833 37834 37835 37836 37837 37838 37839 37840 37841 37842 37843 37844 37845 37846 37847 37848 37849 37850 37851 37852 37853 37854 37855 37856 37857 37858 37859 37860 37861 37862 37863 37864 37865 37866 37867 37868 37869 37870 37871 37872 37873 37874 37875 37876 37877 37878 37879 37880 37881 37882 37883 37884 37885 37886 37887 37888 37889 37890 37891 37892 37893 37894 37895 37896 37897 37898 37899 37900 37901 37902 37903 37904 37905 37906 37907 37908 37909 37910 37911 37912 37913 37914 37915 37916 37917 37918 37919 37920 37921 37922 37923 37924 37925 37926 37927 37928 37929 37930 37931 37932 37933 37934 37935 37936 37937 37938 37939 37940 37941 37942 37943 37944 37945 37946 37947 37948 37949 37950 37951 37952 37953 37954 37955 37956 37957 37958 37959 37960 37961 37962 37963 37964 37965 37966 37967 37968 
37969 37970 37971 37972 37973 37974 37975 37976 37977 37978 37979 37980 37981 37982 37983 37984 37985 37986 37987 37988 37989 37990 37991 37992 37993 37994 37995 37996 37997 37998 37999 38000 38001 38002 38003 38004 38005 38006 38007 38008 38009 38010 38011 38012 38013 38014 38015 38016 38017 38018 38019 38020 38021 38022 38023 38024 38025 38026 38027 38028 38029 38030 38031 38032 38033 38034 38035 38036 38037 38038 38039 38040 38041 38042 38043 38044 38045 38046 38047 38048 38049 38050 38051 38052 38053 38054 38055 38056 38057 38058 38059 38060 38061 38062 38063 38064 38065 38066 38067 38068 38069 38070 38071 38072 38073 38074 38075 38076 38077 38078 38079 38080 38081 38082 38083 38084 38085 38086 38087 38088 38089 38090 38091 38092 38093 38094 38095 38096 38097 38098 38099 38100 38101 38102 38103 38104 38105 38106 38107 38108 38109 38110 38111 38112 38113 38114 38115 38116 38117 38118 38119 38120 38121 38122 38123 38124 38125 38126 38127 38128 38129 38130 38131 38132 38133 38134 38135 38136 38137 38138 38139 38140 38141 38142 38143 38144 38145 38146 38147 38148 38149 38150 38151 38152 38153 38154 38155 38156 38157 38158 38159 38160 38161 38162 38163 38164 38165 38166 38167 38168 38169 38170 38171 38172 38173 38174 38175 38176 38177 38178 38179 38180 38181 38182 38183 38184 38185 38186 38187 38188 38189 38190 38191 38192 38193 38194 38195 38196 38197 38198 38199 38200 38201 38202 38203 38204 38205 38206 38207 38208 38209 38210 38211 38212 38213 38214 38215 38216 38217 38218 38219 38220 38221 38222 38223 38224 38225 38226 38227 38228 38229 38230 38231 38232 38233 38234 38235 38236 38237 38238 38239 38240 38241 38242 38243 38244 38245 38246 38247 38248 38249 38250 38251 38252 38253 38254 38255 38256 38257 38258 38259 38260 38261 38262 38263 38264 38265 38266 38267 38268 38269 38270 38271 38272 38273 38274 38275 38276 38277 38278 38279 38280 38281 38282 38283 38284 38285 38286 38287 38288 38289 38290 38291 38292 38293 38294 38295 38296 38297 38298 38299 38300 38301 38302 38303 38304 38305 38306 38307 38308 38309 38310 38311 38312 38313 38314 38315 38316 38317 38318 38319 38320 38321 38322 38323 38324 38325 38326 38327 38328 38329 38330 38331 38332 38333 38334 38335 38336 38337 38338 38339 38340 38341 38342 38343 38344 38345 38346 38347 38348 38349 38350 38351 38352 38353 38354 38355 38356 38357 38358 38359 38360 38361 38362 38363 38364 38365 38366 38367 38368 38369 38370 38371 38372 38373 38374 38375 38376 38377 38378 38379 38380 38381 38382 38383 38384 38385 38386 38387 38388 38389 38390 38391 38392 38393 38394 38395 38396 38397 38398 38399 38400 38401 38402 38403 38404 38405 38406 38407 38408 38409 38410 38411 38412 38413 38414 38415 38416 38417 38418 38419 38420 38421 38422 38423 38424 38425 38426 38427 38428 38429 38430 38431 38432 38433 38434 38435 38436 38437 38438 38439 38440 38441 38442 38443 38444 38445 38446 38447 38448 38449 38450 38451 38452 38453 38454 38455 38456 38457 38458 38459 38460 38461 38462 38463 38464 38465 38466 38467 38468 38469 38470 38471 38472 38473 38474 38475 38476 38477 38478 38479 38480 38481 38482 38483 38484 38485 38486 38487 38488 38489 38490 38491 38492 38493 38494 38495 38496 38497 38498 38499 38500 38501 38502 38503 38504 38505 38506 38507 38508 38509 38510 38511 38512 38513 38514 38515 38516 38517 38518 38519 38520 38521 38522 38523 38524 38525 38526 38527 38528 38529 38530 38531 38532 38533 38534 38535 38536 38537 38538 38539 38540 38541 38542 38543 38544 38545 38546 38547 38548 38549 38550 38551 38552 38553 38554 38555 38556 38557 38558 38559 38560 
38561 38562 38563 38564 38565 38566 38567 38568 38569 38570 38571 38572 38573 38574 38575 38576 38577 38578 38579 38580 38581 38582 38583 38584 38585 38586 38587 38588 38589 38590 38591 38592 38593 38594 38595 38596 38597 38598 38599 38600 38601 38602 38603 38604 38605 38606 38607 38608 38609 38610 38611 38612 38613 38614 38615 38616 38617 38618 38619 38620 38621 38622 38623 38624 38625 38626 38627 38628 38629 38630 38631 38632 38633 38634 38635 38636 38637 38638 38639 38640 38641 38642 38643 38644 38645 38646 38647 38648 38649 38650 38651 38652 38653 38654 38655 38656 38657 38658 38659 38660 38661 38662 38663 38664 38665 38666 38667 38668 38669 38670 38671 38672 38673 38674 38675 38676 38677 38678 38679 38680 38681 38682 38683 38684 38685 38686 38687 38688 38689 38690 38691 38692 38693 38694 38695 38696 38697 38698 38699 38700 38701 38702 38703 38704 38705 38706 38707 38708 38709 38710 38711 38712 38713 38714 38715 38716 38717 38718 38719 38720 38721 38722 38723 38724 38725 38726 38727 38728 38729 38730 38731 38732 38733 38734 38735 38736 38737 38738 38739 38740 38741 38742 38743 38744 38745 38746 38747 38748 38749 38750 38751 38752 38753 38754 38755 38756 38757 38758 38759 38760 38761 38762 38763 38764 38765 38766 38767 38768 38769 38770 38771 38772 38773 38774 38775 38776 38777 38778 38779 38780 38781 38782 38783 38784 38785 38786 38787 38788 38789 38790 38791 38792 38793 38794 38795 38796 38797 38798 38799 38800 38801 38802 38803 38804 38805 38806 38807 38808 38809 38810 38811 38812 38813 38814 38815 38816 38817 38818 38819 38820 38821 38822 38823 38824 38825 38826 38827 38828 38829 38830 38831 38832 38833 38834 38835 38836 38837 38838 38839 38840 38841 38842 38843 38844 38845 38846 38847 38848 38849 38850 38851 38852 38853 38854 38855 38856 38857 38858 38859 38860 38861 38862 38863 38864 38865 38866 38867 38868 38869 38870 38871 38872 38873 38874 38875 38876 38877 38878 38879 38880 38881 38882 38883 38884 38885 38886 38887 38888 38889 38890 38891 38892 38893 38894 38895 38896 38897 38898 38899 38900 38901 38902 38903 38904 38905 38906 38907 38908 38909 38910 38911 38912 38913 38914 38915 38916 38917 38918 38919 38920 38921 38922 38923 38924 38925 38926 38927 38928 38929 38930 38931 38932 38933 38934 38935 38936 38937 38938 38939 38940 38941 38942 38943 38944 38945 38946 38947 38948 38949 38950 38951 38952 38953 38954 38955 38956 38957 38958 38959 38960 38961 38962 38963 38964 38965 38966 38967 38968 38969 38970 38971 38972 38973 38974 38975 38976 38977 38978 38979 38980 38981 38982 38983 38984 38985 38986 38987 38988 38989 38990 38991 38992 38993 38994 38995 38996 38997 38998 38999 39000 39001 39002 39003 39004 39005 39006 39007 39008 39009 39010 39011 39012 39013 39014 39015 39016 39017 39018 39019 39020 39021 39022 39023 39024 39025 39026 39027 39028 39029 39030 39031 39032 39033 39034 39035 39036 39037 39038 39039 39040 39041 39042 39043 39044 39045 39046 39047 39048 39049 39050 39051 39052 39053 39054 39055 39056 39057 39058 39059 39060 39061 39062 39063 39064 39065 39066 39067 39068 39069 39070 39071 39072 39073 39074 39075 39076 39077 39078 39079 39080 39081 39082 39083 39084 39085 39086 39087 39088 39089 39090 39091 39092 39093 39094 39095 39096 39097 39098 39099 39100 39101 39102 39103 39104 39105 39106 39107 39108 39109 39110 39111 39112 39113 39114 39115 39116 39117 39118 39119 39120 39121 39122 39123 39124 39125 39126 39127 39128 39129 39130 39131 39132 39133 39134 39135 39136 39137 39138 39139 39140 39141 39142 39143 39144 39145 39146 39147 39148 39149 39150 39151 39152 
39153 39154 39155 39156 39157 39158 39159 39160 39161 39162 39163 39164 39165 39166 39167 39168 39169 39170 39171 39172 39173 39174 39175 39176 39177 39178 39179 39180 39181 39182 39183 39184 39185 39186 39187 39188 39189 39190 39191 39192 39193 39194 39195 39196 39197 39198 39199 39200 39201 39202 39203 39204 39205 39206 39207 39208 39209 39210 39211 39212 39213 39214 39215 39216 39217 39218 39219 39220 39221 39222 39223 39224 39225 39226 39227 39228 39229 39230 39231 39232 39233 39234 39235 39236 39237 39238 39239 39240 39241 39242 39243 39244 39245 39246 39247 39248 39249 39250 39251 39252 39253 39254 39255 39256 39257 39258 39259 39260 39261 39262 39263 39264 39265 39266 39267 39268 39269 39270 39271 39272 39273 39274 39275 39276 39277 39278 39279 39280 39281 39282 39283 39284 39285 39286 39287 39288 39289 39290 39291 39292 39293 39294 39295 39296 39297 39298 39299 39300 39301 39302 39303 39304 39305 39306 39307 39308 39309 39310 39311 39312 39313 39314 39315 39316 39317 39318 39319 39320 39321 39322 39323 39324 39325 39326 39327 39328 39329 39330 39331 39332 39333 39334 39335 39336 39337 39338 39339 39340 39341 39342 39343 39344 39345 39346 39347 39348 39349 39350 39351 39352 39353 39354 39355 39356 39357 39358 39359 39360 39361 39362 39363 39364 39365 39366 39367 39368 39369 39370 39371 39372 39373 39374 39375 39376 39377 39378 39379 39380 39381 39382 39383 39384 39385 39386 39387 39388 39389 39390 39391 39392 39393 39394 39395 39396 39397 39398 39399 39400 39401 39402 39403 39404 39405 39406 39407 39408 39409 39410 39411 39412 39413 39414 39415 39416 39417 39418 39419 39420 39421 39422 39423 39424 39425 39426 39427 39428 39429 39430 39431 39432 39433 39434 39435 39436 39437 39438 39439 39440 39441 39442 39443 39444 39445 39446 39447 39448 39449 39450 39451 39452 39453 39454 39455 39456 39457 39458 39459 39460 39461 39462 39463 39464 39465 39466 39467 39468 39469 39470 39471 39472 39473 39474 39475 39476 39477 39478 39479 39480 39481 39482 39483 39484 39485 39486 39487 39488 39489 39490 39491 39492 39493 39494 39495 39496 39497 39498 39499 39500 39501 39502 39503 39504 39505 39506 39507 39508 39509 39510 39511 39512 39513 39514 39515 39516 39517 39518 39519 39520 39521 39522 39523 39524 39525 39526 39527 39528 39529 39530 39531 39532 39533 39534 39535 39536 39537 39538 39539 39540 39541 39542 39543 39544 39545 39546 39547 39548 39549 39550 39551 39552 39553 39554 39555 39556 39557 39558 39559 39560 39561 39562 39563 39564 39565 39566 39567 39568 39569 39570 39571 39572 39573 39574 39575 39576 39577 39578 39579 39580 39581 39582 39583 39584 39585 39586 39587 39588 39589 39590 39591 39592 39593 39594 39595 39596 39597 39598 39599 39600 39601 39602 39603 39604 39605 39606 39607 39608 39609 39610 39611 39612 39613 39614 39615 39616 39617 39618 39619 39620 39621 39622 39623 39624 39625 39626 39627 39628 39629 39630 39631 39632 39633 39634 39635 39636 39637 39638 39639 39640 39641 39642 39643 39644 39645 39646 39647 39648 39649 39650 39651 39652 39653 39654 39655 39656 39657 39658 39659 39660 39661 39662 39663 39664 39665 39666 39667 39668 39669 39670 39671 39672 39673 39674 39675 39676 39677 39678 39679 39680 39681 39682 39683 39684 39685 39686 39687 39688 39689 39690 39691 39692 39693 39694 39695 39696 39697 39698 39699 39700 39701 39702 39703 39704 39705 39706 39707 39708 39709 39710 39711 39712 39713 39714 39715 39716 39717 39718 39719 39720 39721 39722 39723 39724 39725 39726 39727 39728 39729 39730 39731 39732 39733 39734 39735 39736 39737 39738 39739 39740 39741 39742 39743 39744 
39745 39746 39747 39748 39749 39750 39751 39752 39753 39754 39755 39756 39757 39758 39759 39760 39761 39762 39763 39764 39765 39766 39767 39768 39769 39770 39771 39772 39773 39774 39775 39776 39777 39778 39779 39780 39781 39782 39783 39784 39785 39786 39787 39788 39789 39790 39791 39792 39793 39794 39795 39796 39797 39798 39799 39800 39801 39802 39803 39804 39805 39806 39807 39808 39809 39810 39811 39812 39813 39814 39815 39816 39817 39818 39819 39820 39821 39822 39823 39824 39825 39826 39827 39828 39829 39830 39831 39832 39833 39834 39835 39836 39837 39838 39839 39840 39841 39842 39843 39844 39845 39846 39847 39848 39849 39850 39851 39852 39853 39854 39855 39856 39857 39858 39859 39860 39861 39862 39863 39864 39865 39866 39867 39868 39869 39870 39871 39872 39873 39874 39875 39876 39877 39878 39879 39880 39881 39882 39883 39884 39885 39886 39887 39888 39889 39890 39891 39892 39893 39894 39895 39896 39897 39898 39899 39900 39901 39902 39903 39904 39905 39906 39907 39908 39909 39910 39911 39912 39913 39914 39915 39916 39917 39918 39919 39920 39921 39922 39923 39924 39925 39926 39927 39928 39929 39930 39931 39932 39933 39934 39935 39936 39937 39938 39939 39940 39941 39942 39943 39944 39945 39946 39947 39948 39949 39950 39951 39952 39953 39954 39955 39956 39957 39958 39959 39960 39961 39962 39963 39964 39965 39966 39967 39968 39969 39970 39971 39972 39973 39974 39975 39976 39977 39978 39979 39980 39981 39982 39983 39984 39985 39986 39987 39988 39989 39990 39991 39992 39993 39994 39995 39996 39997 39998 39999 40000 40001 40002 40003 40004 40005 40006 40007 40008 40009 40010 40011 40012 40013 40014 40015 40016 40017 40018 40019 40020 40021 40022 40023 40024 40025 40026 40027 40028 40029 40030 40031 40032 40033 40034 40035 40036 40037 40038 40039 40040 40041 40042 40043 40044 40045 40046 40047 40048 40049 40050 40051 40052 40053 40054 40055 40056 40057 40058 40059 40060 40061 40062 40063 40064 40065 40066 40067 40068 40069 40070 40071 40072 40073 40074 40075 40076 40077 40078 40079 40080 40081 40082 40083 40084 40085 40086 40087 40088 40089 40090 40091 40092 40093 40094 40095 40096 40097 40098 40099 40100 40101 40102 40103 40104 40105 40106 40107 40108 40109 40110 40111 40112 40113 40114 40115 40116 40117 40118 40119 40120 40121 40122 40123 40124 40125 40126 40127 40128 40129 40130 40131 40132 40133 40134 40135 40136 40137 40138 40139 40140 40141 40142 40143 40144 40145 40146 40147 40148 40149 40150 40151 40152 40153 40154 40155 40156 40157 40158 40159 40160 40161 40162 40163 40164 40165 40166 40167 40168 40169 40170 40171 40172 40173 40174 40175 40176 40177 40178 40179 40180 40181 40182 40183 40184 40185 40186 40187 40188 40189 40190 40191 40192 40193 40194 40195 40196 40197 40198 40199 40200 40201 40202 40203 40204 40205 40206 40207 40208 40209 40210 40211 40212 40213 40214 40215 40216 40217 40218 40219 40220 40221 40222 40223 40224 40225 40226 40227 40228 40229 40230 40231 40232 40233 40234 40235 40236 40237 40238 40239 40240 40241 40242 40243 40244 40245 40246 40247 40248 40249 40250 40251 40252 40253 40254 40255 40256 40257 40258 40259 40260 40261 40262 40263 40264 40265 40266 40267 40268 40269 40270 40271 40272 40273 40274 40275 40276 40277 40278 40279 40280 40281 40282 40283 40284 40285 40286 40287 40288 40289 40290 40291 40292 40293 40294 40295 40296 40297 40298 40299 40300 40301 40302 40303 40304 40305 40306 40307 40308 40309 40310 40311 40312 40313 40314 40315 40316 40317 40318 40319 40320 40321 40322 40323 40324 40325 40326 40327 40328 40329 40330 40331 40332 40333 40334 40335 40336 
40337 40338 40339 40340 40341 40342 40343 40344 40345 40346 40347 40348 40349 40350 40351 40352 40353 40354 40355 40356 40357 40358 40359 40360 40361 40362 40363 40364 40365 40366 40367 40368 40369 40370 40371 40372 40373 40374 40375 40376 40377 40378 40379 40380 40381 40382 40383 40384 40385 40386 40387 40388 40389 40390 40391 40392 40393 40394 40395 40396 40397 40398 40399 40400 40401 40402 40403 40404 40405 40406 40407 40408 40409 40410 40411 40412 40413 40414 40415 40416 40417 40418 40419 40420 40421 40422 40423 40424 40425 40426 40427 40428 40429 40430 40431 40432 40433 40434 40435 40436 40437 40438 40439 40440 40441 40442 40443 40444 40445 40446 40447 40448 40449 40450 40451 40452 40453 40454 40455 40456 40457 40458 40459 40460 40461 40462 40463 40464 40465 40466 40467 40468 40469 40470 40471 40472 40473 40474 40475 40476 40477 40478 40479 40480 40481 40482 40483 40484 40485 40486 40487 40488 40489 40490 40491 40492 40493 40494 40495 40496 40497 40498 40499 40500 40501 40502 40503 40504 40505 40506 40507 40508 40509 40510 40511 40512 40513 40514 40515 40516 40517 40518 40519 40520 40521 40522 40523 40524 40525 40526 40527 40528 40529 40530 40531 40532 40533 40534 40535 40536 40537 40538 40539 40540 40541 40542 40543 40544 40545 40546 40547 40548 40549 40550 40551 40552 40553 40554 40555 40556 40557 40558 40559 40560 40561 40562 40563 40564 40565 40566 40567 40568 40569 40570 40571 40572 40573 40574 40575 40576 40577 40578 40579 40580 40581 40582 40583 40584 40585 40586 40587 40588 40589 40590 40591 40592 40593 40594 40595 40596 40597 40598 40599 40600 40601 40602 40603 40604 40605 40606 40607 40608 40609 40610 40611 40612 40613 40614 40615 40616 40617 40618 40619 40620 40621 40622 40623 40624 40625 40626 40627 40628 40629 40630 40631 40632 40633 40634 40635 40636 40637 40638 40639 40640 40641 40642 40643 40644 40645 40646 40647 40648 40649 40650 40651 40652 40653 40654 40655 40656 40657 40658 40659 40660 40661 40662 40663 40664 40665 40666 40667 40668 40669 40670 40671 40672 40673 40674 40675 40676 40677 40678 40679 40680 40681 40682 40683 40684 40685 40686 40687 40688 40689 40690 40691 40692 40693 40694 40695 40696 40697 40698 40699 40700 40701 40702 40703 40704 40705 40706 40707 40708 40709 40710 40711 40712 40713 40714 40715 40716 40717 40718 40719 40720 40721 40722 40723 40724 40725 40726 40727 40728 40729 40730 40731 40732 40733 40734 40735 40736 40737 40738 40739 40740 40741 40742 40743 40744 40745 40746 40747 40748 40749 40750 40751 40752 40753 40754 40755 40756 40757 40758 40759 40760 40761 40762 40763 40764 40765 40766 40767 40768 40769 40770 40771 40772 40773 40774 40775 40776 40777 40778 40779 40780 40781 40782 40783 40784 40785 40786 40787 40788 40789 40790 40791 40792 40793 40794 40795 40796 40797 40798 40799 40800 40801 40802 40803 40804 40805 40806 40807 40808 40809 40810 40811 40812 40813 40814 40815 40816 40817 40818 40819 40820 40821 40822 40823 40824 40825 40826 40827 40828 40829 40830 40831 40832 40833 40834 40835 40836 40837 40838 40839 40840 40841 40842 40843 40844 40845 40846 40847 40848 40849 40850 40851 40852 40853 40854 40855 40856 40857 40858 40859 40860 40861 40862 40863 40864 40865 40866 40867 40868 40869 40870 40871 40872 40873 40874 40875 40876 40877 40878 40879 40880 40881 40882 40883 40884 40885 40886 40887 40888 40889 40890 40891 40892 40893 40894 40895 40896 40897 40898 40899 40900 40901 40902 40903 40904 40905 40906 40907 40908 40909 40910 40911 40912 40913 40914 40915 40916 40917 40918 40919 40920 40921 40922 40923 40924 40925 40926 40927 40928 
/**
* \file zstd.c
* Single-file Zstandard library.
*
* Generate using:
* \code
* python combine.py -r ../../lib -x legacy/zstd_legacy.h -o zstd.c zstd-in.c
* \endcode
*/
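/*
* Usage sketch (added note, assumptions flagged): once generated, this single
* file is typically compiled directly into a project, for example:
*
*     cc -O2 -pthread -c zstd.c
*
* with the matching zstd.h on the include path for API consumers. The -pthread
* flag only matters on POSIX platforms, where ZSTD_MULTITHREAD (defined below)
* relies on the system threading library; exact flags depend on your toolchain.
*/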
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*
* Settings to bake for the single library file.
*
* Note: It's important that none of these affects 'zstd.h' (only the
* implementation files we're amalgamating).
*
* Note: MEM_MODULE stops xxhash redefining BYTE, U16, etc., which are also
* defined in mem.h (breaking C99 compatibility).
*
* Note: the undefs for xxHash allow Zstd's implementation to coincide with
* standalone xxHash usage (with global defines).
*
* Note: if you enable ZSTD_LEGACY_SUPPORT, the combine.py script will need
* re-running without the "-x legacy/zstd_legacy.h" option (it excludes the
* legacy support at the source level).
*
* Note: multithreading is enabled for all platforms apart from Emscripten.
*/
#define DEBUGLEVEL 0
#define MEM_MODULE
#undef XXH_NAMESPACE
#define XXH_NAMESPACE ZSTD_
#undef XXH_PRIVATE_API
#define XXH_PRIVATE_API
#undef XXH_INLINE_ALL
#define XXH_INLINE_ALL
#define ZSTD_LEGACY_SUPPORT 0
#ifndef __EMSCRIPTEN__
#define ZSTD_MULTITHREAD
#endif
#define ZSTD_TRACE 0
/* TODO: Can't amalgamate ASM function */
#define ZSTD_DISABLE_ASM 1
/* Include zstd_deps.h first with all the options we need enabled. */
#define ZSTD_DEPS_NEED_MALLOC
#define ZSTD_DEPS_NEED_MATH64
/**** start inlining common/zstd_deps.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* This file provides common libc dependencies that zstd requires.
* The purpose is to allow replacing this file with a custom implementation
* to compile zstd without libc support.
*/
/* Need:
* NULL
* INT_MAX
* UINT_MAX
* ZSTD_memcpy()
* ZSTD_memset()
* ZSTD_memmove()
*/
#ifndef ZSTD_DEPS_COMMON
#define ZSTD_DEPS_COMMON
/* Even though we use qsort_r only for the dictionary builder, the macro
* _GNU_SOURCE has to be declared *before* the inclusion of any standard
* header, and the script 'combine.sh' combines the whole zstd source code
* into a single file.
*/
#if defined(__linux) || defined(__linux__) || defined(linux) || defined(__gnu_linux__) || \
defined(__CYGWIN__) || defined(__MSYS__)
#if !defined(_GNU_SOURCE) && !defined(__ANDROID__) /* NDK doesn't ship qsort_r(). */
#define _GNU_SOURCE
#endif
#endif
#include <limits.h>
#include <stddef.h>
#include <string.h>
#if defined(__GNUC__) && __GNUC__ >= 4
# define ZSTD_memcpy(d,s,l) __builtin_memcpy((d),(s),(l))
# define ZSTD_memmove(d,s,l) __builtin_memmove((d),(s),(l))
# define ZSTD_memset(p,v,l) __builtin_memset((p),(v),(l))
#else
# define ZSTD_memcpy(d,s,l) memcpy((d),(s),(l))
# define ZSTD_memmove(d,s,l) memmove((d),(s),(l))
# define ZSTD_memset(p,v,l) memset((p),(v),(l))
#endif
#endif /* ZSTD_DEPS_COMMON */
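/*
* Illustrative sketch (added, not part of the original source): a custom
* replacement for this dependency block in a freestanding build would provide
* the same macros on top of whatever primitives the environment offers, e.g.
*
*     #define ZSTD_memcpy(d,s,l)  my_memcpy((d),(s),(l))
*     #define ZSTD_memmove(d,s,l) my_memmove((d),(s),(l))
*     #define ZSTD_memset(p,v,l)  my_memset((p),(v),(l))
*
* where my_memcpy() etc. are hypothetical environment-specific routines.
*/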
/* Need:
* ZSTD_malloc()
* ZSTD_free()
* ZSTD_calloc()
*/
#ifdef ZSTD_DEPS_NEED_MALLOC
#ifndef ZSTD_DEPS_MALLOC
#define ZSTD_DEPS_MALLOC
#include <stdlib.h>
#define ZSTD_malloc(s) malloc(s)
#define ZSTD_calloc(n,s) calloc((n), (s))
#define ZSTD_free(p) free((p))
#endif /* ZSTD_DEPS_MALLOC */
#endif /* ZSTD_DEPS_NEED_MALLOC */
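/*
* Related note (added, hedged): applications that only need to redirect
* allocations, rather than drop libc entirely, can instead use the advanced
* API (ZSTD_STATIC_LINKING_ONLY), which accepts a ZSTD_customMem descriptor:
*
*     ZSTD_customMem mem = { myAlloc, myFree, myState };
*     ZSTD_CCtx* cctx = ZSTD_createCCtx_advanced(mem);
*
* myAlloc/myFree/myState are hypothetical application-provided hooks matching
* the customAlloc/customFree/opaque fields of ZSTD_customMem.
*/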
/*
* Provides 64-bit math support.
* Need:
* U64 ZSTD_div64(U64 dividend, U32 divisor)
*/
#ifdef ZSTD_DEPS_NEED_MATH64
#ifndef ZSTD_DEPS_MATH64
#define ZSTD_DEPS_MATH64
#define ZSTD_div64(dividend, divisor) ((dividend) / (divisor))
#endif /* ZSTD_DEPS_MATH64 */
#endif /* ZSTD_DEPS_NEED_MATH64 */
/* Need:
* assert()
*/
#ifdef ZSTD_DEPS_NEED_ASSERT
#ifndef ZSTD_DEPS_ASSERT
#define ZSTD_DEPS_ASSERT
#include <assert.h>
#endif /* ZSTD_DEPS_ASSERT */
#endif /* ZSTD_DEPS_NEED_ASSERT */
/* Need:
* ZSTD_DEBUG_PRINT()
*/
#ifdef ZSTD_DEPS_NEED_IO
#ifndef ZSTD_DEPS_IO
#define ZSTD_DEPS_IO
#include <stdio.h>
#define ZSTD_DEBUG_PRINT(...) fprintf(stderr, __VA_ARGS__)
#endif /* ZSTD_DEPS_IO */
#endif /* ZSTD_DEPS_NEED_IO */
/* Only requested when <stdint.h> is known to be present.
* Need:
* intptr_t
*/
#ifdef ZSTD_DEPS_NEED_STDINT
#ifndef ZSTD_DEPS_STDINT
#define ZSTD_DEPS_STDINT
#include <stdint.h>
#endif /* ZSTD_DEPS_STDINT */
#endif /* ZSTD_DEPS_NEED_STDINT */
/**** ended inlining common/zstd_deps.h ****/
/**** start inlining common/debug.c ****/
/* ******************************************************************
* debug
* Part of FSE library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/*
* This module only hosts one global variable
* which can be used to dynamically influence the verbosity of traces,
* such as DEBUGLOG and RAWLOG
*/
/**** start inlining debug.h ****/
/* ******************************************************************
* debug
* Part of FSE library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/*
* The purpose of this header is to enable debug functions.
* They regroup assert(), DEBUGLOG() and RAWLOG() for run-time,
* and DEBUG_STATIC_ASSERT() for compile-time.
*
* By default, DEBUGLEVEL==0, which means run-time debug is disabled.
*
* Level 1 enables assert() only.
* Starting level 2, traces can be generated and pushed to stderr.
* The higher the level, the more verbose the traces.
*
* It's possible to dynamically adjust the level using the variable g_debuglevel,
* which is only declared if DEBUGLEVEL>=2,
* and is a plain global variable, not multi-thread protected (use with care).
*/
#ifndef DEBUG_H_12987983217
#define DEBUG_H_12987983217
/* static assert is triggered at compile time, leaving no runtime artefact.
* static assert only works with compile-time constants.
* Also, this variant can only be used inside a function. */
#define DEBUG_STATIC_ASSERT(c) (void)sizeof(char[(c) ? 1 : -1])
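/* Example (added, illustrative): inside any function body,
*     DEBUG_STATIC_ASSERT(sizeof(size_t) >= 4);
* fails compilation when the condition is false, and leaves no runtime code. */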
/* DEBUGLEVEL is expected to be defined externally,
* typically through the compiler command line.
* Value must be a number. */
#ifndef DEBUGLEVEL
# define DEBUGLEVEL 0
#endif
/* recommended values for DEBUGLEVEL :
* 0 : release mode, no debug, all run-time checks disabled
* 1 : enables assert() only, no display
* 2 : reserved, for currently active debug path
* 3 : events once per object lifetime (CCtx, CDict, etc.)
* 4 : events once per frame
* 5 : events once per block
* 6 : events once per sequence (verbose)
* 7+: events at every position (*very* verbose)
*
* It's generally inconvenient to output traces > 5.
* In that case, it's possible to selectively trigger high verbosity levels
* by modifying g_debuglevel.
*/
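/* Usage sketch (added, illustrative): with the library built at DEBUGLEVEL>=2,
* application code can raise verbosity selectively at run time, e.g.
*
*     extern int g_debuglevel;
*     g_debuglevel = 6;   (enables per-sequence traces from this point on)
*
* keeping in mind the variable is process-global and not thread-safe. */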
#if (DEBUGLEVEL>=1)
# define ZSTD_DEPS_NEED_ASSERT
/**** skipping file: zstd_deps.h ****/
#else
# ifndef assert /* assert may be already defined, due to prior #include <assert.h> */
# define assert(condition) ((void)0) /* disable assert (default) */
# endif
#endif
#if (DEBUGLEVEL>=2)
# define ZSTD_DEPS_NEED_IO
/**** skipping file: zstd_deps.h ****/
extern int g_debuglevel; /* the variable is only declared,
it actually lives in debug.c,
and is shared by the whole process.
It's not thread-safe.
It's useful when enabling very verbose levels
on selective conditions (such as position in src) */
# define RAWLOG(l, ...) \
do { \
if (l<=g_debuglevel) { \
ZSTD_DEBUG_PRINT(__VA_ARGS__); \
} \
} while (0)
#define STRINGIFY(x) #x
#define TOSTRING(x) STRINGIFY(x)
#define LINE_AS_STRING TOSTRING(__LINE__)
# define DEBUGLOG(l, ...) \
do { \
if (l<=g_debuglevel) { \
ZSTD_DEBUG_PRINT(__FILE__ ":" LINE_AS_STRING ": " __VA_ARGS__); \
ZSTD_DEBUG_PRINT(" \n"); \
} \
} while (0)
#else
# define RAWLOG(l, ...) do { } while (0) /* disabled */
# define DEBUGLOG(l, ...) do { } while (0) /* disabled */
#endif
#endif /* DEBUG_H_12987983217 */
/**** ended inlining debug.h ****/
#if !defined(ZSTD_LINUX_KERNEL) || (DEBUGLEVEL>=2)
/* We only use this when DEBUGLEVEL>=2, but we get -Werror=pedantic errors if a
* translation unit is empty. So remove this from Linux kernel builds, but
* otherwise just leave it in.
*/
int g_debuglevel = DEBUGLEVEL;
#endif
/**** ended inlining common/debug.c ****/
/**** start inlining common/entropy_common.c ****/
/* ******************************************************************
* Common functions of New Generation Entropy library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE+HUF source repository : https://github.com/Cyan4973/FiniteStateEntropy
* - Public forum : https://groups.google.com/forum/#!forum/lz4c
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* *************************************
* Dependencies
***************************************/
/**** start inlining mem.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef MEM_H_MODULE
#define MEM_H_MODULE
/*-****************************************
* Dependencies
******************************************/
#include <stddef.h> /* size_t, ptrdiff_t */
/**** start inlining compiler.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_COMPILER_H
#define ZSTD_COMPILER_H
#include <stddef.h>
/**** start inlining portability_macros.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_PORTABILITY_MACROS_H
#define ZSTD_PORTABILITY_MACROS_H
/**
* This header file contains macro definitions to support portability.
* This header is shared between C and ASM code, so it MUST only
* contain macro definitions. It MUST not contain any C code.
*
* This header ONLY defines macros to detect platforms/feature support.
*
*/
/* compat. with non-clang compilers */
#ifndef __has_attribute
#define __has_attribute(x) 0
#endif
/* compat. with non-clang compilers */
#ifndef __has_builtin
# define __has_builtin(x) 0
#endif
/* compat. with non-clang compilers */
#ifndef __has_feature
# define __has_feature(x) 0
#endif
/* detects whether we are being compiled under msan */
#ifndef ZSTD_MEMORY_SANITIZER
# if __has_feature(memory_sanitizer)
# define ZSTD_MEMORY_SANITIZER 1
# else
# define ZSTD_MEMORY_SANITIZER 0
# endif
#endif
/* detects whether we are being compiled under asan */
#ifndef ZSTD_ADDRESS_SANITIZER
# if __has_feature(address_sanitizer)
# define ZSTD_ADDRESS_SANITIZER 1
# elif defined(__SANITIZE_ADDRESS__)
# define ZSTD_ADDRESS_SANITIZER 1
# else
# define ZSTD_ADDRESS_SANITIZER 0
# endif
#endif
/* detects whether we are being compiled under dfsan */
#ifndef ZSTD_DATAFLOW_SANITIZER
# if __has_feature(dataflow_sanitizer)
# define ZSTD_DATAFLOW_SANITIZER 1
# else
# define ZSTD_DATAFLOW_SANITIZER 0
# endif
#endif
/* Mark the internal assembly functions as hidden */
#ifdef __ELF__
# define ZSTD_HIDE_ASM_FUNCTION(func) .hidden func
#elif defined(__APPLE__)
# define ZSTD_HIDE_ASM_FUNCTION(func) .private_extern func
#else
# define ZSTD_HIDE_ASM_FUNCTION(func)
#endif
/* Compile time determination of BMI2 support */
#ifndef STATIC_BMI2
# if defined(__BMI2__)
# define STATIC_BMI2 1
# elif defined(_MSC_VER) && defined(__AVX2__)
# define STATIC_BMI2 1 /* MSVC does not have a BMI2 specific flag, but every CPU that supports AVX2 also supports BMI2 */
# endif
#endif
#ifndef STATIC_BMI2
# define STATIC_BMI2 0
#endif
/* Enable runtime BMI2 dispatch based on the CPU.
* Enabled for clang & gcc >=4.8 on x86 when BMI2 isn't enabled by default.
*/
#ifndef DYNAMIC_BMI2
# if ((defined(__clang__) && __has_attribute(__target__)) \
|| (defined(__GNUC__) \
&& (__GNUC__ >= 5 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8)))) \
&& (defined(__i386__) || defined(__x86_64__) || defined(_M_IX86) || defined(_M_X64)) \
&& !defined(__BMI2__)
# define DYNAMIC_BMI2 1
# else
# define DYNAMIC_BMI2 0
# endif
#endif
/**
* Only enable assembly for GNU C compatible compilers,
* because other platforms may not support GAS assembly syntax.
*
* Only enable assembly for Linux / MacOS / Win32; other platforms may
* work, but they haven't been tested. This could likely be
* extended to BSD systems.
*
* Disable assembly when MSAN is enabled, because MSAN requires
* 100% of code to be instrumented to work.
*/
#if defined(__GNUC__)
# if defined(__linux__) || defined(__linux) || defined(__APPLE__) || defined(_WIN32)
# if ZSTD_MEMORY_SANITIZER
# define ZSTD_ASM_SUPPORTED 0
# elif ZSTD_DATAFLOW_SANITIZER
# define ZSTD_ASM_SUPPORTED 0
# else
# define ZSTD_ASM_SUPPORTED 1
# endif
# else
# define ZSTD_ASM_SUPPORTED 0
# endif
#else
# define ZSTD_ASM_SUPPORTED 0
#endif
/**
* Determines whether we should enable assembly for x86-64
* with BMI2.
*
* Enable if all of the following conditions hold:
* - ASM hasn't been explicitly disabled by defining ZSTD_DISABLE_ASM
* - Assembly is supported
* - We are compiling for x86-64 and either:
* - DYNAMIC_BMI2 is enabled
* - BMI2 is supported at compile time
*/
#if !defined(ZSTD_DISABLE_ASM) && \
ZSTD_ASM_SUPPORTED && \
defined(__x86_64__) && \
(DYNAMIC_BMI2 || defined(__BMI2__))
# define ZSTD_ENABLE_ASM_X86_64_BMI2 1
#else
# define ZSTD_ENABLE_ASM_X86_64_BMI2 0
#endif
/*
* For x86 ELF targets, add .note.gnu.property section for Intel CET in
* assembly sources when CET is enabled.
*
* Additionally, any function that may be called indirectly must begin
* with ZSTD_CET_ENDBRANCH.
*/
#if defined(__ELF__) && (defined(__x86_64__) || defined(__i386__)) \
&& defined(__has_include)
# if __has_include(<cet.h>)
# include <cet.h>
# define ZSTD_CET_ENDBRANCH _CET_ENDBR
# endif
#endif
#ifndef ZSTD_CET_ENDBRANCH
# define ZSTD_CET_ENDBRANCH
#endif
#endif /* ZSTD_PORTABILITY_MACROS_H */
/**** ended inlining portability_macros.h ****/
/*-*******************************************************
* Compiler specifics
*********************************************************/
/* force inlining */
#if !defined(ZSTD_NO_INLINE)
#if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || defined(__cplusplus) || defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */
# define INLINE_KEYWORD inline
#else
# define INLINE_KEYWORD
#endif
#if defined(__GNUC__) || defined(__IAR_SYSTEMS_ICC__)
# define FORCE_INLINE_ATTR __attribute__((always_inline))
#elif defined(_MSC_VER)
# define FORCE_INLINE_ATTR __forceinline
#else
# define FORCE_INLINE_ATTR
#endif
#else
#define INLINE_KEYWORD
#define FORCE_INLINE_ATTR
#endif
/**
On MSVC, qsort requires that functions passed into it use the __cdecl calling convention (CC).
This explicitly marks such functions as __cdecl so that the code will still compile
if a CC other than __cdecl has been made the default.
*/
#if defined(_MSC_VER)
# define WIN_CDECL __cdecl
#else
# define WIN_CDECL
#endif
/* UNUSED_ATTR tells the compiler it is okay if the function is unused. */
#if defined(__GNUC__) || defined(__IAR_SYSTEMS_ICC__)
# define UNUSED_ATTR __attribute__((unused))
#else
# define UNUSED_ATTR
#endif
/**
* FORCE_INLINE_TEMPLATE is used to define C "templates", which take constant
* parameters. They must be inlined for the compiler to eliminate the constant
* branches.
*/
#define FORCE_INLINE_TEMPLATE static INLINE_KEYWORD FORCE_INLINE_ATTR UNUSED_ATTR
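/* Illustrative sketch (never compiled, not part of the library) of the "C template" idiom
 * described above : a generator function takes a compile-time-constant parameter, and forced
 * inlining lets the compiler fold the constant branch away in each thin wrapper.
 * All `example_*` names below are hypothetical. */
#if 0
FORCE_INLINE_TEMPLATE size_t example_copy_internal(void* dst, const void* src, size_t n, int use32bitPath)
{
    if (use32bitPath) {
        /* variant specialized for 32-bit targets */
        ZSTD_memcpy(dst, src, n);
    } else {
        /* variant specialized for 64-bit targets */
        ZSTD_memcpy(dst, src, n);
    }
    return n;
}

static size_t example_copy_32(void* dst, const void* src, size_t n)
{
    return example_copy_internal(dst, src, n, 1);   /* constant 1 : branch folded at compile time */
}

static size_t example_copy_64(void* dst, const void* src, size_t n)
{
    return example_copy_internal(dst, src, n, 0);   /* constant 0 : branch folded at compile time */
}
#endif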
/**
* HINT_INLINE is used to help the compiler generate better code. It is *not*
* used for "templates", so it can be tweaked based on the compilers
* performance.
*
* gcc-4.8 and gcc-4.9 have been shown to benefit from leaving off the
* always_inline attribute.
*
* clang up to 5.0.0 (trunk) benefits tremendously from the always_inline
* attribute.
*/
#if !defined(__clang__) && defined(__GNUC__) && __GNUC__ >= 4 && __GNUC_MINOR__ >= 8 && __GNUC__ < 5
# define HINT_INLINE static INLINE_KEYWORD
#else
# define HINT_INLINE FORCE_INLINE_TEMPLATE
#endif
/* "soft" inline :
* The compiler is free to select if it's a good idea to inline or not.
* The main objective is to silence compiler warnings
* when a defined function is included but not used.
*
* Note : this macro is prefixed `MEM_` because it used to be provided by the `mem.h` unit.
* Updating the prefix is probably preferable, but requires a fairly large codemod,
* since this name is used everywhere.
*/
#ifndef MEM_STATIC /* already defined in Linux Kernel mem.h */
#if defined(__GNUC__)
# define MEM_STATIC static __inline UNUSED_ATTR
#elif defined(__IAR_SYSTEMS_ICC__)
# define MEM_STATIC static inline UNUSED_ATTR
#elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# define MEM_STATIC static inline
#elif defined(_MSC_VER)
# define MEM_STATIC static __inline
#else
# define MEM_STATIC static /* this version may generate warnings for unused static functions; disable the relevant warning */
#endif
#endif
/* force no inlining */
#ifdef _MSC_VER
# define FORCE_NOINLINE static __declspec(noinline)
#else
# if defined(__GNUC__) || defined(__IAR_SYSTEMS_ICC__)
# define FORCE_NOINLINE static __attribute__((__noinline__))
# else
# define FORCE_NOINLINE static
# endif
#endif
/* target attribute */
#if defined(__GNUC__) || defined(__IAR_SYSTEMS_ICC__)
# define TARGET_ATTRIBUTE(target) __attribute__((__target__(target)))
#else
# define TARGET_ATTRIBUTE(target)
#endif
/* Target attribute for BMI2 dynamic dispatch.
* Enable lzcnt, bmi, and bmi2.
* We test for bmi1 & bmi2. lzcnt is included in bmi1.
*/
#define BMI2_TARGET_ATTRIBUTE TARGET_ATTRIBUTE("lzcnt,bmi,bmi2")
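/* Illustrative sketch (never compiled, not part of the library) of how BMI2_TARGET_ATTRIBUTE
 * pairs with DYNAMIC_BMI2 : the same routine is compiled once portably and once with BMI2
 * instructions enabled, and a runtime CPU flag selects between them. The `cpuHasBmi2` flag
 * and all `example_*` names are hypothetical; the library's real CPU detection lives elsewhere. */
#if 0
static size_t example_work_default(const void* src, size_t srcSize)
{
    /* portable implementation */
    (void)src; return srcSize;
}

#if DYNAMIC_BMI2
BMI2_TARGET_ATTRIBUTE
static size_t example_work_bmi2(const void* src, size_t srcSize)
{
    /* same algorithm, compiled with lzcnt/bmi/bmi2 enabled */
    (void)src; return srcSize;
}
#endif

static size_t example_work(const void* src, size_t srcSize, int cpuHasBmi2)
{
#if DYNAMIC_BMI2
    if (cpuHasBmi2) return example_work_bmi2(src, srcSize);
#else
    (void)cpuHasBmi2;
#endif
    return example_work_default(src, srcSize);
}
#endif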
/* prefetch
* can be disabled, by declaring NO_PREFETCH build macro */
#if defined(NO_PREFETCH)
# define PREFETCH_L1(ptr) do { (void)(ptr); } while (0) /* disabled */
# define PREFETCH_L2(ptr) do { (void)(ptr); } while (0) /* disabled */
#else
# if defined(_MSC_VER) && (defined(_M_X64) || defined(_M_I86)) && !defined(_M_ARM64EC) /* _mm_prefetch() is not defined outside of x86/x64 */
# include <mmintrin.h> /* https://msdn.microsoft.com/fr-fr/library/84szxsww(v=vs.90).aspx */
# define PREFETCH_L1(ptr) _mm_prefetch((const char*)(ptr), _MM_HINT_T0)
# define PREFETCH_L2(ptr) _mm_prefetch((const char*)(ptr), _MM_HINT_T1)
# elif defined(__GNUC__) && ( (__GNUC__ >= 4) || ( (__GNUC__ == 3) && (__GNUC_MINOR__ >= 1) ) )
# define PREFETCH_L1(ptr) __builtin_prefetch((ptr), 0 /* rw==read */, 3 /* locality */)
# define PREFETCH_L2(ptr) __builtin_prefetch((ptr), 0 /* rw==read */, 2 /* locality */)
# elif defined(__aarch64__)
# define PREFETCH_L1(ptr) do { __asm__ __volatile__("prfm pldl1keep, %0" ::"Q"(*(ptr))); } while (0)
# define PREFETCH_L2(ptr) do { __asm__ __volatile__("prfm pldl2keep, %0" ::"Q"(*(ptr))); } while (0)
# else
# define PREFETCH_L1(ptr) do { (void)(ptr); } while (0) /* disabled */
# define PREFETCH_L2(ptr) do { (void)(ptr); } while (0) /* disabled */
# endif
#endif /* NO_PREFETCH */
#define CACHELINE_SIZE 64
#define PREFETCH_AREA(p, s) \
do { \
const char* const _ptr = (const char*)(p); \
size_t const _size = (size_t)(s); \
size_t _pos; \
for (_pos=0; _pos<_size; _pos+=CACHELINE_SIZE) { \
PREFETCH_L2(_ptr + _pos); \
} \
} while (0)
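/* Illustrative sketch (never compiled, not part of the library) of the prefetch macros above :
 * warm a table into L2 before scanning it, and keep L1 a few cache lines ahead of the loop.
 * The `example_*` name and the lookahead distance are hypothetical. */
#if 0
static unsigned example_sumTable(const unsigned* table, size_t tableSize)
{
    size_t n; unsigned total = 0;
    PREFETCH_AREA(table, tableSize * sizeof(unsigned));  /* prefetch the whole table into L2 */
    for (n = 0; n < tableSize; n++) {
        PREFETCH_L1(table + n + 16);                     /* stay a few cache lines ahead */
        total += table[n];
    }
    return total;
}
#endif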
/* vectorization
* older GCC (pre gcc-4.3 picked as the cutoff) uses a different syntax,
* and some compilers, like Intel ICC and MCST LCC, do not support it at all. */
#if !defined(__INTEL_COMPILER) && !defined(__clang__) && defined(__GNUC__) && !defined(__LCC__)
# if (__GNUC__ == 4 && __GNUC_MINOR__ > 3) || (__GNUC__ >= 5)
# define DONT_VECTORIZE __attribute__((optimize("no-tree-vectorize")))
# else
# define DONT_VECTORIZE _Pragma("GCC optimize(\"no-tree-vectorize\")")
# endif
#else
# define DONT_VECTORIZE
#endif
/* Tell the compiler that a branch is likely or unlikely.
* Only use these macros if it causes the compiler to generate better code.
* If you can remove a LIKELY/UNLIKELY annotation without speed changes in gcc
* and clang, please do.
*/
#if defined(__GNUC__)
#define LIKELY(x) (__builtin_expect((x), 1))
#define UNLIKELY(x) (__builtin_expect((x), 0))
#else
#define LIKELY(x) (x)
#define UNLIKELY(x) (x)
#endif
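/* Illustrative sketch (never compiled, not part of the library) of LIKELY/UNLIKELY usage :
 * annotate the rare error path and the expected common case. The `example_*` name is hypothetical. */
#if 0
static int example_validate(size_t length, size_t capacity)
{
    if (UNLIKELY(length > capacity))   /* rare error path, kept out of the hot path */
        return -1;
    if (LIKELY(length > 0))            /* expected common case */
        return 1;
    return 0;
}
#endif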
#if __has_builtin(__builtin_unreachable) || (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5)))
# define ZSTD_UNREACHABLE do { assert(0), __builtin_unreachable(); } while (0)
#else
# define ZSTD_UNREACHABLE do { assert(0); } while (0)
#endif
/* disable warnings */
#ifdef _MSC_VER /* Visual Studio */
# include <intrin.h> /* For Visual 2005 */
# pragma warning(disable : 4100) /* disable: C4100: unreferenced formal parameter */
# pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */
# pragma warning(disable : 4204) /* disable: C4204: non-constant aggregate initializer */
# pragma warning(disable : 4214) /* disable: C4214: non-int bitfields */
# pragma warning(disable : 4324) /* disable: C4324: padded structure */
#endif
/* compile time determination of SIMD support */
#if !defined(ZSTD_NO_INTRINSICS)
# if defined(__AVX2__)
# define ZSTD_ARCH_X86_AVX2
# endif
# if defined(__SSE2__) || defined(_M_X64) || (defined (_M_IX86) && defined(_M_IX86_FP) && (_M_IX86_FP >= 2))
# define ZSTD_ARCH_X86_SSE2
# endif
# if defined(__ARM_NEON) || defined(_M_ARM64)
# define ZSTD_ARCH_ARM_NEON
# endif
#
# if defined(ZSTD_ARCH_X86_AVX2)
# include <immintrin.h>
# endif
# if defined(ZSTD_ARCH_X86_SSE2)
# include <emmintrin.h>
# elif defined(ZSTD_ARCH_ARM_NEON)
# include <arm_neon.h>
# endif
#endif
/* C-language Attributes are added in C23. */
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ > 201710L) && defined(__has_c_attribute)
# define ZSTD_HAS_C_ATTRIBUTE(x) __has_c_attribute(x)
#else
# define ZSTD_HAS_C_ATTRIBUTE(x) 0
#endif
/* Only use C++ attributes in C++. Some compilers report support for C++
* attributes when compiling with C.
*/
#if defined(__cplusplus) && defined(__has_cpp_attribute)
# define ZSTD_HAS_CPP_ATTRIBUTE(x) __has_cpp_attribute(x)
#else
# define ZSTD_HAS_CPP_ATTRIBUTE(x) 0
#endif
/* Define ZSTD_FALLTHROUGH macro for annotating switch case with the 'fallthrough' attribute.
* - C23: https://en.cppreference.com/w/c/language/attributes/fallthrough
* - CPP17: https://en.cppreference.com/w/cpp/language/attributes/fallthrough
* - Else: __attribute__((__fallthrough__))
*/
#ifndef ZSTD_FALLTHROUGH
# if ZSTD_HAS_C_ATTRIBUTE(fallthrough)
# define ZSTD_FALLTHROUGH [[fallthrough]]
# elif ZSTD_HAS_CPP_ATTRIBUTE(fallthrough)
# define ZSTD_FALLTHROUGH [[fallthrough]]
# elif __has_attribute(__fallthrough__)
/* Leading semicolon is to satisfy gcc-11 with -pedantic. Without the semicolon
* gcc complains about: a label can only be part of a statement and a declaration is not a statement.
*/
# define ZSTD_FALLTHROUGH ; __attribute__((__fallthrough__))
# else
# define ZSTD_FALLTHROUGH
# endif
#endif
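/* Illustrative sketch (never compiled, not part of the library) of ZSTD_FALLTHROUGH :
 * it marks deliberate fall-through between switch cases so -Wimplicit-fallthrough style
 * warnings stay quiet. The `example_*` name is hypothetical. */
#if 0
static unsigned example_tailBytes(const unsigned char* p, size_t n)
{
    unsigned acc = 0;
    switch (n & 3) {
    case 3: acc += p[2];
            ZSTD_FALLTHROUGH;   /* intentional fall-through */
    case 2: acc += p[1];
            ZSTD_FALLTHROUGH;   /* intentional fall-through */
    case 1: acc += p[0];
            break;
    default: break;
    }
    return acc;
}
#endif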
/*-**************************************************************
* Alignment
*****************************************************************/
/* @return 1 if @u is a 2^n value, 0 otherwise
* useful to check a value is valid for alignment restrictions */
MEM_STATIC int ZSTD_isPower2(size_t u) {
return (u & (u-1)) == 0;
}
/* this test was initially positioned in mem.h,
* but that file is removed (or replaced) for the linux kernel,
* so it's now hosted in compiler.h,
* which remains valid for both user & kernel spaces.
*/
#ifndef ZSTD_ALIGNOF
# if defined(__GNUC__) || defined(_MSC_VER)
/* covers gcc, clang & MSVC */
/* note : this section must come first, before C11,
* due to a limitation in the kernel source generator */
# define ZSTD_ALIGNOF(T) __alignof(T)
# elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)
/* C11 support */
# include <stdalign.h>
# define ZSTD_ALIGNOF(T) alignof(T)
# else
/* No known support for alignof() - imperfect backup */
# define ZSTD_ALIGNOF(T) (sizeof(void*) < sizeof(T) ? sizeof(void*) : sizeof(T))
# endif
#endif /* ZSTD_ALIGNOF */
#ifndef ZSTD_ALIGNED
/* C90-compatible alignment macro (GCC/Clang). Adjust for other compilers if needed. */
# if defined(__GNUC__) || defined(__clang__)
# define ZSTD_ALIGNED(a) __attribute__((aligned(a)))
# elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) /* C11 */
# define ZSTD_ALIGNED(a) _Alignas(a)
#elif defined(_MSC_VER)
# define ZSTD_ALIGNED(n) __declspec(align(n))
# else
/* this compiler will require its own alignment instruction */
# define ZSTD_ALIGNED(...)
# endif
#endif /* ZSTD_ALIGNED */
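/* Illustrative sketch (never compiled, not part of the library) of ZSTD_ALIGNED / ZSTD_ALIGNOF :
 * request cache-line alignment for a table and query a type's alignment portably.
 * The `example_*` names are hypothetical. */
#if 0
typedef struct { ZSTD_ALIGNED(64) unsigned counters[256]; } example_table_t;  /* cache-line aligned member */

static size_t example_alignment_check(void)
{
    ZSTD_ALIGNED(16) unsigned char scratch[64];   /* 16-byte aligned local buffer */
    (void)scratch;
    return ZSTD_ALIGNOF(example_table_t);         /* portable alignment query */
}
#endif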
/*-**************************************************************
* Sanitizer
*****************************************************************/
/**
* Zstd relies on pointer overflow in its decompressor.
* We add this attribute to functions that rely on pointer overflow.
*/
#ifndef ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
# if __has_attribute(no_sanitize)
# if !defined(__clang__) && defined(__GNUC__) && __GNUC__ < 8
/* gcc < 8 only has signed-integer-overflow which triggers on pointer overflow */
# define ZSTD_ALLOW_POINTER_OVERFLOW_ATTR __attribute__((no_sanitize("signed-integer-overflow")))
# else
/* older versions of clang [3.7, 5.0) will warn that pointer-overflow is ignored. */
# define ZSTD_ALLOW_POINTER_OVERFLOW_ATTR __attribute__((no_sanitize("pointer-overflow")))
# endif
# else
# define ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
# endif
#endif
/**
* Helper function to perform a wrapped pointer difference without triggering
* UBSAN.
*
* @returns lhs - rhs with wrapping
*/
MEM_STATIC
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
ptrdiff_t ZSTD_wrappedPtrDiff(unsigned char const* lhs, unsigned char const* rhs)
{
return lhs - rhs;
}
/**
* Helper function to perform a wrapped pointer add without triggering UBSAN.
*
* @return ptr + add with wrapping
*/
MEM_STATIC
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
unsigned char const* ZSTD_wrappedPtrAdd(unsigned char const* ptr, ptrdiff_t add)
{
return ptr + add;
}
/**
* Helper function to perform a wrapped pointer subtraction without triggering
* UBSAN.
*
* @return ptr - sub with wrapping
*/
MEM_STATIC
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
unsigned char const* ZSTD_wrappedPtrSub(unsigned char const* ptr, ptrdiff_t sub)
{
return ptr - sub;
}
/**
* Helper function to add to a pointer that works around C's undefined behavior
* of adding 0 to NULL.
*
* @returns `ptr + add` except it defines `NULL + 0 == NULL`.
*/
MEM_STATIC
unsigned char* ZSTD_maybeNullPtrAdd(unsigned char* ptr, ptrdiff_t add)
{
return add > 0 ? ptr + add : ptr;
}
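/* Illustrative sketch (never compiled, not part of the library) of the wrapped-pointer helpers above :
 * distances and offsets between pointers that may conceptually wrap (e.g. sentinel "low limit"
 * pointers) go through the helpers so UBSAN does not flag the arithmetic.
 * The `example_*` names are hypothetical. */
#if 0
static size_t example_distance(const unsigned char* current, const unsigned char* base)
{
    /* base may point far "before" current; subtract through the helper */
    ptrdiff_t const diff = ZSTD_wrappedPtrDiff(current, base);
    return (diff > 0) ? (size_t)diff : 0;
}

static const unsigned char* example_rewind(const unsigned char* ptr, size_t offset)
{
    return ZSTD_wrappedPtrSub(ptr, (ptrdiff_t)offset);  /* subtraction that may conceptually wrap */
}
#endif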
/* Issue #3240 reports an ASAN failure on an llvm-mingw build. Out of an
* abundance of caution, disable our custom poisoning on mingw. */
#ifdef __MINGW32__
#ifndef ZSTD_ASAN_DONT_POISON_WORKSPACE
#define ZSTD_ASAN_DONT_POISON_WORKSPACE 1
#endif
#ifndef ZSTD_MSAN_DONT_POISON_WORKSPACE
#define ZSTD_MSAN_DONT_POISON_WORKSPACE 1
#endif
#endif
#if ZSTD_MEMORY_SANITIZER && !defined(ZSTD_MSAN_DONT_POISON_WORKSPACE)
/* Not all platforms that support msan provide sanitizers/msan_interface.h.
* We therefore declare the functions we need ourselves, rather than trying to
* include the header file... */
#include <stddef.h> /* size_t */
#define ZSTD_DEPS_NEED_STDINT
/**** skipping file: zstd_deps.h ****/
/* Make memory region fully initialized (without changing its contents). */
void __msan_unpoison(const volatile void *a, size_t size);
/* Make memory region fully uninitialized (without changing its contents).
This is a legacy interface that does not update origin information. Use
__msan_allocated_memory() instead. */
void __msan_poison(const volatile void *a, size_t size);
/* Returns the offset of the first (at least partially) poisoned byte in the
memory range, or -1 if the whole range is good. */
intptr_t __msan_test_shadow(const volatile void *x, size_t size);
/* Print shadow and origin for the memory range to stderr in a human-readable
format. */
void __msan_print_shadow(const volatile void *x, size_t size);
#endif
#if ZSTD_ADDRESS_SANITIZER && !defined(ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* Not all platforms that support asan provide sanitizers/asan_interface.h.
* We therefore declare the functions we need ourselves, rather than trying to
* include the header file... */
#include <stddef.h> /* size_t */
/**
* Marks a memory region (<c>[addr, addr+size)</c>) as unaddressable.
*
* This memory must be previously allocated by your program. Instrumented
* code is forbidden from accessing addresses in this region until it is
* unpoisoned. This function is not guaranteed to poison the entire region -
* it could poison only a subregion of <c>[addr, addr+size)</c> due to ASan
* alignment restrictions.
*
* \note This function is not thread-safe because no two threads can poison or
* unpoison memory in the same memory region simultaneously.
*
* \param addr Start of memory region.
* \param size Size of memory region. */
void __asan_poison_memory_region(void const volatile *addr, size_t size);
/**
* Marks a memory region (<c>[addr, addr+size)</c>) as addressable.
*
* This memory must be previously allocated by your program. Accessing
* addresses in this region is allowed until this region is poisoned again.
* This function could unpoison a super-region of <c>[addr, addr+size)</c> due
* to ASan alignment restrictions.
*
* \note This function is not thread-safe because no two threads can
* poison or unpoison memory in the same memory region simultaneously.
*
* \param addr Start of memory region.
* \param size Size of memory region. */
void __asan_unpoison_memory_region(void const volatile *addr, size_t size);
#endif
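/* Illustrative sketch (never compiled, not part of the library) of how the ASAN interfaces above
 * are typically used around a reusable workspace : poison the not-yet-handed-out tail so stray
 * accesses are caught, and unpoison a region before giving it back to instrumented code.
 * The `example_*` names are hypothetical. */
#if 0
static void example_protectTail(void* workspace, size_t used, size_t capacity)
{
#if ZSTD_ADDRESS_SANITIZER && !defined(ZSTD_ASAN_DONT_POISON_WORKSPACE)
    /* make the unused tail unaddressable until it is handed out */
    __asan_poison_memory_region((char*)workspace + used, capacity - used);
#else
    (void)workspace; (void)used; (void)capacity;
#endif
}

static void example_handOut(void* ptr, size_t size)
{
#if ZSTD_ADDRESS_SANITIZER && !defined(ZSTD_ASAN_DONT_POISON_WORKSPACE)
    __asan_unpoison_memory_region(ptr, size);   /* make the region addressable again before use */
#else
    (void)ptr; (void)size;
#endif
}
#endif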
#endif /* ZSTD_COMPILER_H */
/**** ended inlining compiler.h ****/
/**** skipping file: debug.h ****/
/**** skipping file: zstd_deps.h ****/
/*-****************************************
* Compiler specifics
******************************************/
#if defined(_MSC_VER) /* Visual Studio */
# include <stdlib.h> /* _byteswap_ulong */
# include <intrin.h> /* _byteswap_* */
#elif defined(__ICCARM__)
# include <intrinsics.h>
#endif
/*-**************************************************************
* Basic Types
*****************************************************************/
#if !defined (__VMS) && (defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# if defined(_AIX)
# include <inttypes.h>
# else
# include <stdint.h> /* intptr_t */
# endif
typedef uint8_t BYTE;
typedef uint8_t U8;
typedef int8_t S8;
typedef uint16_t U16;
typedef int16_t S16;
typedef uint32_t U32;
typedef int32_t S32;
typedef uint64_t U64;
typedef int64_t S64;
#else
# include <limits.h>
#if CHAR_BIT != 8
# error "this implementation requires char to be exactly 8-bit type"
#endif
typedef unsigned char BYTE;
typedef unsigned char U8;
typedef signed char S8;
#if USHRT_MAX != 65535
# error "this implementation requires short to be exactly 16-bit type"
#endif
typedef unsigned short U16;
typedef signed short S16;
#if UINT_MAX != 4294967295
# error "this implementation requires int to be exactly 32-bit type"
#endif
typedef unsigned int U32;
typedef signed int S32;
/* note : there are no limits defined for long long type in C90.
* limits exist in C99, however, in such case, <stdint.h> is preferred */
typedef unsigned long long U64;
typedef signed long long S64;
#endif
/*-**************************************************************
* Memory I/O API
*****************************************************************/
/*=== Static platform detection ===*/
MEM_STATIC unsigned MEM_32bits(void);
MEM_STATIC unsigned MEM_64bits(void);
MEM_STATIC unsigned MEM_isLittleEndian(void);
/*=== Native unaligned read/write ===*/
MEM_STATIC U16 MEM_read16(const void* memPtr);
MEM_STATIC U32 MEM_read32(const void* memPtr);
MEM_STATIC U64 MEM_read64(const void* memPtr);
MEM_STATIC size_t MEM_readST(const void* memPtr);
MEM_STATIC void MEM_write16(void* memPtr, U16 value);
MEM_STATIC void MEM_write32(void* memPtr, U32 value);
MEM_STATIC void MEM_write64(void* memPtr, U64 value);
/*=== Little endian unaligned read/write ===*/
MEM_STATIC U16 MEM_readLE16(const void* memPtr);
MEM_STATIC U32 MEM_readLE24(const void* memPtr);
MEM_STATIC U32 MEM_readLE32(const void* memPtr);
MEM_STATIC U64 MEM_readLE64(const void* memPtr);
MEM_STATIC size_t MEM_readLEST(const void* memPtr);
MEM_STATIC void MEM_writeLE16(void* memPtr, U16 val);
MEM_STATIC void MEM_writeLE24(void* memPtr, U32 val);
MEM_STATIC void MEM_writeLE32(void* memPtr, U32 val32);
MEM_STATIC void MEM_writeLE64(void* memPtr, U64 val64);
MEM_STATIC void MEM_writeLEST(void* memPtr, size_t val);
/*=== Big endian unaligned read/write ===*/
MEM_STATIC U32 MEM_readBE32(const void* memPtr);
MEM_STATIC U64 MEM_readBE64(const void* memPtr);
MEM_STATIC size_t MEM_readBEST(const void* memPtr);
MEM_STATIC void MEM_writeBE32(void* memPtr, U32 val32);
MEM_STATIC void MEM_writeBE64(void* memPtr, U64 val64);
MEM_STATIC void MEM_writeBEST(void* memPtr, size_t val);
/*=== Byteswap ===*/
MEM_STATIC U32 MEM_swap32(U32 in);
MEM_STATIC U64 MEM_swap64(U64 in);
MEM_STATIC size_t MEM_swapST(size_t in);
/*-**************************************************************
* Memory I/O Implementation
*****************************************************************/
/* MEM_FORCE_MEMORY_ACCESS : For accessing unaligned memory:
* Method 0 : always use `memcpy()`. Safe and portable.
* Method 1 : Use compiler extension to set unaligned access.
* Method 2 : direct access. This method violates the C standard,
* and can generate buggy code on targets that require aligned accesses.
* Default : method 1 if supported, else method 0
*/
#ifndef MEM_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */
# ifdef __GNUC__
# define MEM_FORCE_MEMORY_ACCESS 1
# endif
#endif
MEM_STATIC unsigned MEM_32bits(void) { return sizeof(size_t)==4; }
MEM_STATIC unsigned MEM_64bits(void) { return sizeof(size_t)==8; }
MEM_STATIC unsigned MEM_isLittleEndian(void)
{
#if defined(__BYTE_ORDER__) && defined(__ORDER_LITTLE_ENDIAN__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
return 1;
#elif defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
return 0;
#elif defined(__clang__) && __LITTLE_ENDIAN__
return 1;
#elif defined(__clang__) && __BIG_ENDIAN__
return 0;
#elif defined(_MSC_VER) && (_M_X64 || _M_IX86)
return 1;
#elif defined(__DMC__) && defined(_M_IX86)
return 1;
#elif defined(__IAR_SYSTEMS_ICC__) && __LITTLE_ENDIAN__
return 1;
#else
const union { U32 u; BYTE c[4]; } one = { 1 }; /* don't use static : performance detrimental */
return one.c[0];
#endif
}
#if defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==2)
/* violates the C standard by lying about structure alignment.
Only use when there is no other way to achieve best performance on the target platform */
MEM_STATIC U16 MEM_read16(const void* memPtr) { return *(const U16*) memPtr; }
MEM_STATIC U32 MEM_read32(const void* memPtr) { return *(const U32*) memPtr; }
MEM_STATIC U64 MEM_read64(const void* memPtr) { return *(const U64*) memPtr; }
MEM_STATIC size_t MEM_readST(const void* memPtr) { return *(const size_t*) memPtr; }
MEM_STATIC void MEM_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; }
MEM_STATIC void MEM_write32(void* memPtr, U32 value) { *(U32*)memPtr = value; }
MEM_STATIC void MEM_write64(void* memPtr, U64 value) { *(U64*)memPtr = value; }
#elif defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==1)
typedef __attribute__((aligned(1))) U16 unalign16;
typedef __attribute__((aligned(1))) U32 unalign32;
typedef __attribute__((aligned(1))) U64 unalign64;
typedef __attribute__((aligned(1))) size_t unalignArch;
MEM_STATIC U16 MEM_read16(const void* ptr) { return *(const unalign16*)ptr; }
MEM_STATIC U32 MEM_read32(const void* ptr) { return *(const unalign32*)ptr; }
MEM_STATIC U64 MEM_read64(const void* ptr) { return *(const unalign64*)ptr; }
MEM_STATIC size_t MEM_readST(const void* ptr) { return *(const unalignArch*)ptr; }
MEM_STATIC void MEM_write16(void* memPtr, U16 value) { *(unalign16*)memPtr = value; }
MEM_STATIC void MEM_write32(void* memPtr, U32 value) { *(unalign32*)memPtr = value; }
MEM_STATIC void MEM_write64(void* memPtr, U64 value) { *(unalign64*)memPtr = value; }
#else
/* default method, safe and standard.
can sometimes prove slower */
MEM_STATIC U16 MEM_read16(const void* memPtr)
{
U16 val; ZSTD_memcpy(&val, memPtr, sizeof(val)); return val;
}
MEM_STATIC U32 MEM_read32(const void* memPtr)
{
U32 val; ZSTD_memcpy(&val, memPtr, sizeof(val)); return val;
}
MEM_STATIC U64 MEM_read64(const void* memPtr)
{
U64 val; ZSTD_memcpy(&val, memPtr, sizeof(val)); return val;
}
MEM_STATIC size_t MEM_readST(const void* memPtr)
{
size_t val; ZSTD_memcpy(&val, memPtr, sizeof(val)); return val;
}
MEM_STATIC void MEM_write16(void* memPtr, U16 value)
{
ZSTD_memcpy(memPtr, &value, sizeof(value));
}
MEM_STATIC void MEM_write32(void* memPtr, U32 value)
{
ZSTD_memcpy(memPtr, &value, sizeof(value));
}
MEM_STATIC void MEM_write64(void* memPtr, U64 value)
{
ZSTD_memcpy(memPtr, &value, sizeof(value));
}
#endif /* MEM_FORCE_MEMORY_ACCESS */
MEM_STATIC U32 MEM_swap32_fallback(U32 in)
{
return ((in << 24) & 0xff000000 ) |
((in << 8) & 0x00ff0000 ) |
((in >> 8) & 0x0000ff00 ) |
((in >> 24) & 0x000000ff );
}
MEM_STATIC U32 MEM_swap32(U32 in)
{
#if defined(_MSC_VER) /* Visual Studio */
return _byteswap_ulong(in);
#elif (defined (__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 403)) \
|| (defined(__clang__) && __has_builtin(__builtin_bswap32))
return __builtin_bswap32(in);
#elif defined(__ICCARM__)
return __REV(in);
#else
return MEM_swap32_fallback(in);
#endif
}
MEM_STATIC U64 MEM_swap64_fallback(U64 in)
{
return ((in << 56) & 0xff00000000000000ULL) |
((in << 40) & 0x00ff000000000000ULL) |
((in << 24) & 0x0000ff0000000000ULL) |
((in << 8) & 0x000000ff00000000ULL) |
((in >> 8) & 0x00000000ff000000ULL) |
((in >> 24) & 0x0000000000ff0000ULL) |
((in >> 40) & 0x000000000000ff00ULL) |
((in >> 56) & 0x00000000000000ffULL);
}
MEM_STATIC U64 MEM_swap64(U64 in)
{
#if defined(_MSC_VER) /* Visual Studio */
return _byteswap_uint64(in);
#elif (defined (__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 403)) \
|| (defined(__clang__) && __has_builtin(__builtin_bswap64))
return __builtin_bswap64(in);
#else
return MEM_swap64_fallback(in);
#endif
}
MEM_STATIC size_t MEM_swapST(size_t in)
{
if (MEM_32bits())
return (size_t)MEM_swap32((U32)in);
else
return (size_t)MEM_swap64((U64)in);
}
/*=== Little endian r/w ===*/
MEM_STATIC U16 MEM_readLE16(const void* memPtr)
{
if (MEM_isLittleEndian())
return MEM_read16(memPtr);
else {
const BYTE* p = (const BYTE*)memPtr;
return (U16)(p[0] + (p[1]<<8));
}
}
MEM_STATIC void MEM_writeLE16(void* memPtr, U16 val)
{
if (MEM_isLittleEndian()) {
MEM_write16(memPtr, val);
} else {
BYTE* p = (BYTE*)memPtr;
p[0] = (BYTE)val;
p[1] = (BYTE)(val>>8);
}
}
MEM_STATIC U32 MEM_readLE24(const void* memPtr)
{
return (U32)MEM_readLE16(memPtr) + ((U32)(((const BYTE*)memPtr)[2]) << 16);
}
MEM_STATIC void MEM_writeLE24(void* memPtr, U32 val)
{
MEM_writeLE16(memPtr, (U16)val);
((BYTE*)memPtr)[2] = (BYTE)(val>>16);
}
MEM_STATIC U32 MEM_readLE32(const void* memPtr)
{
if (MEM_isLittleEndian())
return MEM_read32(memPtr);
else
return MEM_swap32(MEM_read32(memPtr));
}
MEM_STATIC void MEM_writeLE32(void* memPtr, U32 val32)
{
if (MEM_isLittleEndian())
MEM_write32(memPtr, val32);
else
MEM_write32(memPtr, MEM_swap32(val32));
}
MEM_STATIC U64 MEM_readLE64(const void* memPtr)
{
if (MEM_isLittleEndian())
return MEM_read64(memPtr);
else
return MEM_swap64(MEM_read64(memPtr));
}
MEM_STATIC void MEM_writeLE64(void* memPtr, U64 val64)
{
if (MEM_isLittleEndian())
MEM_write64(memPtr, val64);
else
MEM_write64(memPtr, MEM_swap64(val64));
}
MEM_STATIC size_t MEM_readLEST(const void* memPtr)
{
if (MEM_32bits())
return (size_t)MEM_readLE32(memPtr);
else
return (size_t)MEM_readLE64(memPtr);
}
MEM_STATIC void MEM_writeLEST(void* memPtr, size_t val)
{
if (MEM_32bits())
MEM_writeLE32(memPtr, (U32)val);
else
MEM_writeLE64(memPtr, (U64)val);
}
/*=== Big endian r/w ===*/
MEM_STATIC U32 MEM_readBE32(const void* memPtr)
{
if (MEM_isLittleEndian())
return MEM_swap32(MEM_read32(memPtr));
else
return MEM_read32(memPtr);
}
MEM_STATIC void MEM_writeBE32(void* memPtr, U32 val32)
{
if (MEM_isLittleEndian())
MEM_write32(memPtr, MEM_swap32(val32));
else
MEM_write32(memPtr, val32);
}
MEM_STATIC U64 MEM_readBE64(const void* memPtr)
{
if (MEM_isLittleEndian())
return MEM_swap64(MEM_read64(memPtr));
else
return MEM_read64(memPtr);
}
MEM_STATIC void MEM_writeBE64(void* memPtr, U64 val64)
{
if (MEM_isLittleEndian())
MEM_write64(memPtr, MEM_swap64(val64));
else
MEM_write64(memPtr, val64);
}
MEM_STATIC size_t MEM_readBEST(const void* memPtr)
{
if (MEM_32bits())
return (size_t)MEM_readBE32(memPtr);
else
return (size_t)MEM_readBE64(memPtr);
}
MEM_STATIC void MEM_writeBEST(void* memPtr, size_t val)
{
if (MEM_32bits())
MEM_writeBE32(memPtr, (U32)val);
else
MEM_writeBE64(memPtr, (U64)val);
}
/* code only tested on 32- and 64-bit systems */
MEM_STATIC void MEM_check(void) { DEBUG_STATIC_ASSERT((sizeof(size_t)==4) || (sizeof(size_t)==8)); }
#endif /* MEM_H_MODULE */
/**** ended inlining mem.h ****/
/**** start inlining error_private.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* Note : this module is expected to remain private, do not expose it */
#ifndef ERROR_H_MODULE
#define ERROR_H_MODULE
/* ****************************************
* Dependencies
******************************************/
/**** start inlining ../zstd_errors.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_ERRORS_H_398273423
#define ZSTD_ERRORS_H_398273423
#if defined (__cplusplus)
extern "C" {
#endif
/* ===== ZSTDERRORLIB_API : control library symbols visibility ===== */
#ifndef ZSTDERRORLIB_VISIBLE
/* Backwards compatibility with old macro name */
# ifdef ZSTDERRORLIB_VISIBILITY
# define ZSTDERRORLIB_VISIBLE ZSTDERRORLIB_VISIBILITY
# elif defined(__GNUC__) && (__GNUC__ >= 4) && !defined(__MINGW32__)
# define ZSTDERRORLIB_VISIBLE __attribute__ ((visibility ("default")))
# else
# define ZSTDERRORLIB_VISIBLE
# endif
#endif
#ifndef ZSTDERRORLIB_HIDDEN
# if defined(__GNUC__) && (__GNUC__ >= 4) && !defined(__MINGW32__)
# define ZSTDERRORLIB_HIDDEN __attribute__ ((visibility ("hidden")))
# else
# define ZSTDERRORLIB_HIDDEN
# endif
#endif
#if defined(ZSTD_DLL_EXPORT) && (ZSTD_DLL_EXPORT==1)
# define ZSTDERRORLIB_API __declspec(dllexport) ZSTDERRORLIB_VISIBLE
#elif defined(ZSTD_DLL_IMPORT) && (ZSTD_DLL_IMPORT==1)
# define ZSTDERRORLIB_API __declspec(dllimport) ZSTDERRORLIB_VISIBLE /* It isn't required but allows generating better code, saving a function pointer load from the IAT and an indirect jump.*/
#else
# define ZSTDERRORLIB_API ZSTDERRORLIB_VISIBLE
#endif
/*-*********************************************
* Error codes list
*-*********************************************
* Error codes _values_ are pinned down since v1.3.1 only.
* Therefore, don't rely on values if you may link to any version < v1.3.1.
*
* Only values < 100 are considered stable.
*
* note 1 : this API shall be used with static linking only.
* dynamic linking is not yet officially supported.
* note 2 : Prefer relying on the enum than on its value whenever possible
* This is the only supported way to use the error list < v1.3.1
* note 3 : ZSTD_isError() is always correct, whatever the library version.
**********************************************/
typedef enum {
ZSTD_error_no_error = 0,
ZSTD_error_GENERIC = 1,
ZSTD_error_prefix_unknown = 10,
ZSTD_error_version_unsupported = 12,
ZSTD_error_frameParameter_unsupported = 14,
ZSTD_error_frameParameter_windowTooLarge = 16,
ZSTD_error_corruption_detected = 20,
ZSTD_error_checksum_wrong = 22,
ZSTD_error_literals_headerWrong = 24,
ZSTD_error_dictionary_corrupted = 30,
ZSTD_error_dictionary_wrong = 32,
ZSTD_error_dictionaryCreation_failed = 34,
ZSTD_error_parameter_unsupported = 40,
ZSTD_error_parameter_combination_unsupported = 41,
ZSTD_error_parameter_outOfBound = 42,
ZSTD_error_tableLog_tooLarge = 44,
ZSTD_error_maxSymbolValue_tooLarge = 46,
ZSTD_error_maxSymbolValue_tooSmall = 48,
ZSTD_error_cannotProduce_uncompressedBlock = 49,
ZSTD_error_stabilityCondition_notRespected = 50,
ZSTD_error_stage_wrong = 60,
ZSTD_error_init_missing = 62,
ZSTD_error_memory_allocation = 64,
ZSTD_error_workSpace_tooSmall= 66,
ZSTD_error_dstSize_tooSmall = 70,
ZSTD_error_srcSize_wrong = 72,
ZSTD_error_dstBuffer_null = 74,
ZSTD_error_noForwardProgress_destFull = 80,
ZSTD_error_noForwardProgress_inputEmpty = 82,
/* following error codes are __NOT STABLE__, they can be removed or changed in future versions */
ZSTD_error_frameIndex_tooLarge = 100,
ZSTD_error_seekableIO = 102,
ZSTD_error_dstBuffer_wrong = 104,
ZSTD_error_srcBuffer_wrong = 105,
ZSTD_error_sequenceProducer_failed = 106,
ZSTD_error_externalSequences_invalid = 107,
ZSTD_error_maxCode = 120 /* never EVER use this value directly, it can change in future versions! Use ZSTD_isError() instead */
} ZSTD_ErrorCode;
ZSTDERRORLIB_API const char* ZSTD_getErrorString(ZSTD_ErrorCode code); /**< Same as ZSTD_getErrorName, but using a `ZSTD_ErrorCode` enum argument */
#if defined (__cplusplus)
}
#endif
#endif /* ZSTD_ERRORS_H_398273423 */
/**** ended inlining ../zstd_errors.h ****/
/**** skipping file: compiler.h ****/
/**** skipping file: debug.h ****/
/**** skipping file: zstd_deps.h ****/
/* ****************************************
* Compiler-specific
******************************************/
#if defined(__GNUC__)
# define ERR_STATIC static __attribute__((unused))
#elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# define ERR_STATIC static inline
#elif defined(_MSC_VER)
# define ERR_STATIC static __inline
#else
# define ERR_STATIC static /* this version may generate warnings for unused static functions; disable the relevant warning */
#endif
/*-****************************************
* Customization (error_public.h)
******************************************/
typedef ZSTD_ErrorCode ERR_enum;
#define PREFIX(name) ZSTD_error_##name
/*-****************************************
* Error codes handling
******************************************/
#undef ERROR /* already defined on Visual Studio */
#define ERROR(name) ZSTD_ERROR(name)
#define ZSTD_ERROR(name) ((size_t)-PREFIX(name))
ERR_STATIC unsigned ERR_isError(size_t code) { return (code > ERROR(maxCode)); }
ERR_STATIC ERR_enum ERR_getErrorCode(size_t code) { if (!ERR_isError(code)) return (ERR_enum)0; return (ERR_enum) (0-code); }
/* check and forward error code */
#define CHECK_V_F(e, f) \
size_t const e = f; \
do { \
if (ERR_isError(e)) \
return e; \
} while (0)
#define CHECK_F(f) do { CHECK_V_F(_var_err__, f); } while (0)
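/* Illustrative sketch (never compiled, not part of the library) of CHECK_F / CHECK_V_F :
 * each macro evaluates a size_t-returning expression, returns immediately if it is an error
 * code, and CHECK_V_F additionally keeps the checked result in a named local.
 * The `example_step*` helpers are hypothetical. */
#if 0
static size_t example_step1(void* dst, size_t dstCapacity);                                   /* hypothetical */
static size_t example_step2(void* dst, size_t dstCapacity, const void* src, size_t srcSize);  /* hypothetical */

static size_t example_twoSteps(void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
    /* forward the error code of the first step if it failed */
    CHECK_F(example_step1(dst, dstCapacity));
    /* keep the (checked) result of the second step in a local */
    {   CHECK_V_F(written, example_step2(dst, dstCapacity, src, srcSize));
        return written;
    }
}
#endif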
/*-****************************************
* Error Strings
******************************************/
const char* ERR_getErrorString(ERR_enum code); /* error_private.c */
ERR_STATIC const char* ERR_getErrorName(size_t code)
{
return ERR_getErrorString(ERR_getErrorCode(code));
}
/**
* Ignore: this is an internal helper.
*
* This is a helper function to help force C99-correctness during compilation.
* Under strict compilation modes, variadic macro arguments can't be empty.
* However, variadic function arguments can be. Using a function therefore lets
* us statically check that at least one (string) argument was passed,
* independent of the compilation flags.
*/
static INLINE_KEYWORD UNUSED_ATTR
void _force_has_format_string(const char *format, ...) {
(void)format;
}
/**
* Ignore: this is an internal helper.
*
* We want to force this function invocation to be syntactically correct, but
* we don't want to force runtime evaluation of its arguments.
*/
#define _FORCE_HAS_FORMAT_STRING(...) \
do { \
if (0) { \
_force_has_format_string(__VA_ARGS__); \
} \
} while (0)
#define ERR_QUOTE(str) #str
/**
* Return the specified error if the condition evaluates to true.
*
* In debug modes, prints additional information.
* In order to do that (particularly, printing the conditional that failed),
* this can't just wrap RETURN_ERROR().
*/
#define RETURN_ERROR_IF(cond, err, ...) \
do { \
if (cond) { \
RAWLOG(3, "%s:%d: ERROR!: check %s failed, returning %s", \
__FILE__, __LINE__, ERR_QUOTE(cond), ERR_QUOTE(ERROR(err))); \
_FORCE_HAS_FORMAT_STRING(__VA_ARGS__); \
RAWLOG(3, ": " __VA_ARGS__); \
RAWLOG(3, "\n"); \
return ERROR(err); \
} \
} while (0)
/**
* Unconditionally return the specified error.
*
* In debug modes, prints additional information.
*/
#define RETURN_ERROR(err, ...) \
do { \
RAWLOG(3, "%s:%d: ERROR!: unconditional check failed, returning %s", \
__FILE__, __LINE__, ERR_QUOTE(ERROR(err))); \
_FORCE_HAS_FORMAT_STRING(__VA_ARGS__); \
RAWLOG(3, ": " __VA_ARGS__); \
RAWLOG(3, "\n"); \
return ERROR(err); \
} while(0)
/**
* If the provided expression evaluates to an error code, returns that error code.
*
* In debug modes, prints additional information.
*/
#define FORWARD_IF_ERROR(err, ...) \
do { \
size_t const err_code = (err); \
if (ERR_isError(err_code)) { \
RAWLOG(3, "%s:%d: ERROR!: forwarding error in %s: %s", \
__FILE__, __LINE__, ERR_QUOTE(err), ERR_getErrorName(err_code)); \
_FORCE_HAS_FORMAT_STRING(__VA_ARGS__); \
RAWLOG(3, ": " __VA_ARGS__); \
RAWLOG(3, "\n"); \
return err_code; \
} \
} while(0)
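/* Illustrative sketch (never compiled, not part of the library) of the error macros above :
 * RETURN_ERROR_IF() guards preconditions, and FORWARD_IF_ERROR() propagates a nested error code.
 * The error names used below come from the ZSTD_ErrorCode enum; `example_decodeBody` is hypothetical. */
#if 0
static size_t example_decodeBody(void* dst, size_t dstCapacity, const void* src, size_t srcSize);  /* hypothetical */

static size_t example_decodeHeader(void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
    RETURN_ERROR_IF(srcSize < 4, srcSize_wrong, "need at least 4 bytes of header");
    RETURN_ERROR_IF(dstCapacity == 0, dstSize_tooSmall, "no room in destination");
    /* forward any error from the nested call, otherwise keep going */
    FORWARD_IF_ERROR(example_decodeBody(dst, dstCapacity, src, srcSize), "body decoding failed");
    return 0;
}
#endif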
#endif /* ERROR_H_MODULE */
/**** ended inlining error_private.h ****/
#define FSE_STATIC_LINKING_ONLY /* FSE_MIN_TABLELOG */
/**** start inlining fse.h ****/
/* ******************************************************************
* FSE : Finite State Entropy codec
* Public Prototypes declaration
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
#ifndef FSE_H
#define FSE_H
/*-*****************************************
* Dependencies
******************************************/
/**** skipping file: zstd_deps.h ****/
/*-*****************************************
* FSE_PUBLIC_API : control library symbols visibility
******************************************/
#if defined(FSE_DLL_EXPORT) && (FSE_DLL_EXPORT==1) && defined(__GNUC__) && (__GNUC__ >= 4)
# define FSE_PUBLIC_API __attribute__ ((visibility ("default")))
#elif defined(FSE_DLL_EXPORT) && (FSE_DLL_EXPORT==1) /* Visual expected */
# define FSE_PUBLIC_API __declspec(dllexport)
#elif defined(FSE_DLL_IMPORT) && (FSE_DLL_IMPORT==1)
# define FSE_PUBLIC_API __declspec(dllimport) /* It isn't required but allows generating better code, saving a function pointer load from the IAT and an indirect jump.*/
#else
# define FSE_PUBLIC_API
#endif
/*------ Version ------*/
#define FSE_VERSION_MAJOR 0
#define FSE_VERSION_MINOR 9
#define FSE_VERSION_RELEASE 0
#define FSE_LIB_VERSION FSE_VERSION_MAJOR.FSE_VERSION_MINOR.FSE_VERSION_RELEASE
#define FSE_QUOTE(str) #str
#define FSE_EXPAND_AND_QUOTE(str) FSE_QUOTE(str)
#define FSE_VERSION_STRING FSE_EXPAND_AND_QUOTE(FSE_LIB_VERSION)
#define FSE_VERSION_NUMBER (FSE_VERSION_MAJOR *100*100 + FSE_VERSION_MINOR *100 + FSE_VERSION_RELEASE)
FSE_PUBLIC_API unsigned FSE_versionNumber(void); /**< library version number; to be used when checking dll version */
/*-*****************************************
* Tool functions
******************************************/
FSE_PUBLIC_API size_t FSE_compressBound(size_t size); /* maximum compressed size */
/* Error Management */
FSE_PUBLIC_API unsigned FSE_isError(size_t code); /* tells if a return value is an error code */
FSE_PUBLIC_API const char* FSE_getErrorName(size_t code); /* provides error code string (useful for debugging) */
/*-*****************************************
* FSE detailed API
******************************************/
/*!
FSE_compress() does the following:
1. count symbol occurrences from source[] into table count[] (see hist.h)
2. normalize counters so that sum(count[]) == Power_of_2 (2^tableLog)
3. save normalized counters to memory buffer using writeNCount()
4. build encoding table 'CTable' from normalized counters
5. encode the data stream using encoding table 'CTable'
FSE_decompress() does the following:
1. read normalized counters with readNCount()
2. build decoding table 'DTable' from normalized counters
3. decode the data stream using decoding table 'DTable'
The following API allows targeting specific sub-functions for advanced tasks.
For example, it's possible to compress several blocks using the same 'CTable',
or to save and provide normalized distribution using external method.
*/
/* *** COMPRESSION *** */
/*! FSE_optimalTableLog():
dynamically downsize 'tableLog' when conditions are met.
It saves CPU time, by using smaller tables, while preserving or even improving compression ratio.
@return : recommended tableLog (necessarily <= 'maxTableLog') */
FSE_PUBLIC_API unsigned FSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue);
/*! FSE_normalizeCount():
normalize counts so that sum(count[]) == Power_of_2 (2^tableLog)
'normalizedCounter' is a table of short, of minimum size (maxSymbolValue+1).
useLowProbCount is a boolean parameter which trades off compressed size for
faster header decoding. When it is set to 1, the compressed data will be slightly
smaller. And when it is set to 0, FSE_readNCount() and FSE_buildDTable() will be
faster. If you are compressing a small amount of data (< 2 KB) then useLowProbCount=0
is a good default, since header deserialization makes a big speed difference.
Otherwise, useLowProbCount=1 is a good default, since the speed difference is small.
@return : tableLog,
or an errorCode, which can be tested using FSE_isError() */
FSE_PUBLIC_API size_t FSE_normalizeCount(short* normalizedCounter, unsigned tableLog,
const unsigned* count, size_t srcSize, unsigned maxSymbolValue, unsigned useLowProbCount);
/*! FSE_NCountWriteBound():
Provides the maximum possible size of an FSE normalized table, given 'maxSymbolValue' and 'tableLog'.
Typically useful for allocation purpose. */
FSE_PUBLIC_API size_t FSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog);
/*! FSE_writeNCount():
Compactly save 'normalizedCounter' into 'buffer'.
@return : size of the compressed table,
or an errorCode, which can be tested using FSE_isError(). */
FSE_PUBLIC_API size_t FSE_writeNCount (void* buffer, size_t bufferSize,
const short* normalizedCounter,
unsigned maxSymbolValue, unsigned tableLog);
/*! Constructor and Destructor of FSE_CTable.
Note that FSE_CTable size depends on 'tableLog' and 'maxSymbolValue' */
typedef unsigned FSE_CTable; /* don't allocate that. It's only meant to be more restrictive than void* */
/*! FSE_buildCTable():
Builds `ct`, which must be already allocated, using FSE_createCTable().
@return : 0, or an errorCode, which can be tested using FSE_isError() */
FSE_PUBLIC_API size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
/*! FSE_compress_usingCTable():
Compress `src` using `ct` into `dst` which must be already allocated.
@return : size of compressed data (<= `dstCapacity`),
or 0 if compressed data could not fit into `dst`,
or an errorCode, which can be tested using FSE_isError() */
FSE_PUBLIC_API size_t FSE_compress_usingCTable (void* dst, size_t dstCapacity, const void* src, size_t srcSize, const FSE_CTable* ct);
/*!
Tutorial :
----------
The first step is to count all symbols. FSE_count() does this job very fast.
Result will be saved into 'count', a table of unsigned int, which must be already allocated, and have 'maxSymbolValuePtr[0]+1' cells.
'src' is a table of bytes of size 'srcSize'. All values within 'src' MUST be <= maxSymbolValuePtr[0]
maxSymbolValuePtr[0] will be updated, with its real value (necessarily <= original value)
FSE_count() will return the number of occurrences of the most frequent symbol.
This can be used to know if there is a single symbol within 'src', and to quickly evaluate its compressibility.
If there is an error, the function will return an ErrorCode (which can be tested using FSE_isError()).
The next step is to normalize the frequencies.
FSE_normalizeCount() will ensure that sum of frequencies is == 2 ^'tableLog'.
It also guarantees a minimum of 1 to any Symbol with frequency >= 1.
You can use 'tableLog'==0 to mean "use default tableLog value".
If you are unsure of which tableLog value to use, you can ask FSE_optimalTableLog(),
which will provide the optimal valid tableLog given sourceSize, maxSymbolValue, and a user-defined maximum (0 means "default").
The result of FSE_normalizeCount() will be saved into a table,
called 'normalizedCounter', which is a table of signed short.
'normalizedCounter' must be already allocated, and have at least 'maxSymbolValue+1' cells.
The return value is tableLog if everything proceeded as expected.
It is 0 if there is a single symbol within distribution.
If there is an error (ex: invalid tableLog value), the function will return an ErrorCode (which can be tested using FSE_isError()).
'normalizedCounter' can be saved in a compact manner to a memory area using FSE_writeNCount().
'buffer' must be already allocated.
For guaranteed success, buffer size must be at least FSE_headerBound().
The result of the function is the number of bytes written into 'buffer'.
If there is an error, the function will return an ErrorCode (which can be tested using FSE_isError(); ex : buffer size too small).
'normalizedCounter' can then be used to create the compression table 'CTable'.
The space required by 'CTable' must be already allocated, using FSE_createCTable().
You can then use FSE_buildCTable() to fill 'CTable'.
If there is an error, both functions will return an ErrorCode (which can be tested using FSE_isError()).
'CTable' can then be used to compress 'src', with FSE_compress_usingCTable().
Similar to FSE_count(), the convention is that 'src' is assumed to be a table of char of size 'srcSize'
The function returns the size of compressed data (without header), necessarily <= `dstCapacity`.
If it returns '0', compressed data could not fit into 'dst'.
If there is an error, the function will return an ErrorCode (which can be tested using FSE_isError()).
*/
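/* Illustrative sketch (never compiled, not part of the library) of the compression tutorial above,
 * using only the prototypes declared in this header. The symbol counting step (FSE_count(), hist.h)
 * and the allocation of the CTable (FSE_createCTable()) are assumed to have happened already, so
 * they only appear as comments. The `example_*` name is hypothetical, and the sketch assumes
 * maxSymbolValue <= 255. */
#if 0
static size_t example_fse_compress(void* dst, size_t dstCapacity,
                                   const void* src, size_t srcSize,
                                   const unsigned* count, unsigned maxSymbolValue,
                                   FSE_CTable* ct /* pre-allocated, see FSE_createCTable() */)
{
    short norm[256];                      /* holds maxSymbolValue+1 cells, assuming maxSymbolValue <= 255 */
    BYTE* const ostart = (BYTE*)dst;
    BYTE* op = ostart;
    /* 1. `count` already holds the symbol histogram of `src` (see FSE_count() / hist.h) */
    /* 2. normalize counters so that sum(norm) == 2^tableLog */
    unsigned const tableLog = FSE_optimalTableLog(0 /* default */, srcSize, maxSymbolValue);
    CHECK_F( FSE_normalizeCount(norm, tableLog, count, srcSize, maxSymbolValue,
                                /* useLowProbCount */ srcSize >= 2048) );
    /* 3. save the normalized counters as the block header */
    {   CHECK_V_F(hSize, FSE_writeNCount(op, dstCapacity, norm, maxSymbolValue, tableLog));
        op += hSize;
    }
    /* 4. build the encoding table, then 5. encode the stream */
    CHECK_F( FSE_buildCTable(ct, norm, maxSymbolValue, tableLog) );
    {   CHECK_V_F(cSize, FSE_compress_usingCTable(op, dstCapacity - (size_t)(op - ostart), src, srcSize, ct));
        if (cSize == 0) return 0;   /* compressed data did not fit into dst */
        op += cSize;
    }
    return (size_t)(op - ostart);
}
#endif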
/* *** DECOMPRESSION *** */
/*! FSE_readNCount():
Read compactly saved 'normalizedCounter' from 'rBuffer'.
@return : size read from 'rBuffer',
or an errorCode, which can be tested using FSE_isError().
maxSymbolValuePtr[0] and tableLogPtr[0] will also be updated with their respective values */
FSE_PUBLIC_API size_t FSE_readNCount (short* normalizedCounter,
unsigned* maxSymbolValuePtr, unsigned* tableLogPtr,
const void* rBuffer, size_t rBuffSize);
/*! FSE_readNCount_bmi2():
* Same as FSE_readNCount() but pass bmi2=1 when your CPU supports BMI2 and 0 otherwise.
*/
FSE_PUBLIC_API size_t FSE_readNCount_bmi2(short* normalizedCounter,
unsigned* maxSymbolValuePtr, unsigned* tableLogPtr,
const void* rBuffer, size_t rBuffSize, int bmi2);
typedef unsigned FSE_DTable; /* don't allocate that. It's just a way to be more restrictive than void* */
/*!
Tutorial :
----------
(Note : these functions only decompress FSE-compressed blocks.
If block is uncompressed, use memcpy() instead
If block is a single repeated byte, use memset() instead )
The first step is to obtain the normalized frequencies of symbols.
This can be performed by FSE_readNCount() if it was saved using FSE_writeNCount().
'normalizedCounter' must be already allocated, and have at least 'maxSymbolValuePtr[0]+1' cells of signed short.
In practice, that means it's necessary to know 'maxSymbolValue' beforehand,
or size the table to handle worst case situations (typically 256).
FSE_readNCount() will provide 'tableLog' and 'maxSymbolValue'.
The result of FSE_readNCount() is the number of bytes read from 'rBuffer'.
Note that 'rBuffSize' must be at least 4 bytes, even if useful information is less than that.
If there is an error, the function will return an error code, which can be tested using FSE_isError().
The next step is to build the decompression tables 'FSE_DTable' from 'normalizedCounter'.
This is performed by the function FSE_buildDTable().
The space required by 'FSE_DTable' must be already allocated using FSE_createDTable().
If there is an error, the function will return an error code, which can be tested using FSE_isError().
`FSE_DTable` can then be used to decompress `cSrc`, with FSE_decompress_usingDTable().
`cSrcSize` must be strictly correct, otherwise decompression will fail.
FSE_decompress_usingDTable() result will tell how many bytes were regenerated (<=`dstCapacity`).
If there is an error, the function will return an error code, which can be tested using FSE_isError(). (ex: dst buffer too small)
*/
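/* Illustrative sketch (never compiled, not part of the library) of the first step of the
 * decompression tutorial above : read back the normalized counters saved by FSE_writeNCount().
 * The DTable construction and the actual decoding are performed by the FSE_buildDTable() and
 * FSE_decompress_usingDTable() functions described in the tutorial, and are only mentioned in
 * comments here. The `example_*` name is hypothetical. */
#if 0
static size_t example_fse_readHeader(short* norm, unsigned* maxSymbolValuePtr, unsigned* tableLogPtr,
                                     const void* cSrc, size_t cSrcSize)
{
    /* 1. read the normalized counters; updates *maxSymbolValuePtr and *tableLogPtr */
    CHECK_V_F(hSize, FSE_readNCount(norm, maxSymbolValuePtr, tableLogPtr, cSrc, cSrcSize));
    /* 2. build the DTable from `norm` (FSE_buildDTable(), see tutorial above)
     * 3. decode the remaining cSrcSize - hSize bytes (FSE_decompress_usingDTable(), see tutorial above) */
    return hSize;   /* number of header bytes consumed */
}
#endif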
#endif /* FSE_H */
#if defined(FSE_STATIC_LINKING_ONLY) && !defined(FSE_H_FSE_STATIC_LINKING_ONLY)
#define FSE_H_FSE_STATIC_LINKING_ONLY
/**** start inlining bitstream.h ****/
/* ******************************************************************
* bitstream
* Part of FSE library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
#ifndef BITSTREAM_H_MODULE
#define BITSTREAM_H_MODULE
/*
* This API consists of small unitary functions, which must be inlined for best performance.
* Since link-time-optimization is not available for all compilers,
* these functions are defined into a .h to be included.
*/
/*-****************************************
* Dependencies
******************************************/
/**** skipping file: mem.h ****/
/**** skipping file: compiler.h ****/
/**** skipping file: debug.h ****/
/**** skipping file: error_private.h ****/
/**** start inlining bits.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_BITS_H
#define ZSTD_BITS_H
/**** skipping file: mem.h ****/
MEM_STATIC unsigned ZSTD_countTrailingZeros32_fallback(U32 val)
{
assert(val != 0);
{
static const U32 DeBruijnBytePos[32] = {0, 1, 28, 2, 29, 14, 24, 3,
30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7,
26, 12, 18, 6, 11, 5, 10, 9};
return DeBruijnBytePos[((U32) ((val & -(S32) val) * 0x077CB531U)) >> 27];
}
}
MEM_STATIC unsigned ZSTD_countTrailingZeros32(U32 val)
{
assert(val != 0);
#if defined(_MSC_VER)
# if STATIC_BMI2
return (unsigned)_tzcnt_u32(val);
# else
if (val != 0) {
unsigned long r;
_BitScanForward(&r, val);
return (unsigned)r;
} else {
__assume(0); /* Should not reach this code path */
}
# endif
#elif defined(__GNUC__) && (__GNUC__ >= 4)
return (unsigned)__builtin_ctz(val);
#elif defined(__ICCARM__)
return (unsigned)__builtin_ctz(val);
#else
return ZSTD_countTrailingZeros32_fallback(val);
#endif
}
MEM_STATIC unsigned ZSTD_countLeadingZeros32_fallback(U32 val)
{
assert(val != 0);
{
static const U32 DeBruijnClz[32] = {0, 9, 1, 10, 13, 21, 2, 29,
11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7,
19, 27, 23, 6, 26, 5, 4, 31};
val |= val >> 1;
val |= val >> 2;
val |= val >> 4;
val |= val >> 8;
val |= val >> 16;
return 31 - DeBruijnClz[(val * 0x07C4ACDDU) >> 27];
}
}
MEM_STATIC unsigned ZSTD_countLeadingZeros32(U32 val)
{
assert(val != 0);
#if defined(_MSC_VER)
# if STATIC_BMI2
return (unsigned)_lzcnt_u32(val);
# else
if (val != 0) {
unsigned long r;
_BitScanReverse(&r, val);
return (unsigned)(31 - r);
} else {
__assume(0); /* Should not reach this code path */
}
# endif
#elif defined(__GNUC__) && (__GNUC__ >= 4)
return (unsigned)__builtin_clz(val);
#elif defined(__ICCARM__)
return (unsigned)__builtin_clz(val);
#else
return ZSTD_countLeadingZeros32_fallback(val);
#endif
}
MEM_STATIC unsigned ZSTD_countTrailingZeros64(U64 val)
{
assert(val != 0);
#if defined(_MSC_VER) && defined(_WIN64)
# if STATIC_BMI2
return (unsigned)_tzcnt_u64(val);
# else
if (val != 0) {
unsigned long r;
_BitScanForward64(&r, val);
return (unsigned)r;
} else {
__assume(0); /* Should not reach this code path */
}
# endif
#elif defined(__GNUC__) && (__GNUC__ >= 4) && defined(__LP64__)
return (unsigned)__builtin_ctzll(val);
#elif defined(__ICCARM__)
return (unsigned)__builtin_ctzll(val);
#else
{
U32 mostSignificantWord = (U32)(val >> 32);
U32 leastSignificantWord = (U32)val;
if (leastSignificantWord == 0) {
return 32 + ZSTD_countTrailingZeros32(mostSignificantWord);
} else {
return ZSTD_countTrailingZeros32(leastSignificantWord);
}
}
#endif
}
MEM_STATIC unsigned ZSTD_countLeadingZeros64(U64 val)
{
assert(val != 0);
#if defined(_MSC_VER) && defined(_WIN64)
# if STATIC_BMI2
return (unsigned)_lzcnt_u64(val);
# else
if (val != 0) {
unsigned long r;
_BitScanReverse64(&r, val);
return (unsigned)(63 - r);
} else {
__assume(0); /* Should not reach this code path */
}
# endif
#elif defined(__GNUC__) && (__GNUC__ >= 4)
return (unsigned)(__builtin_clzll(val));
#elif defined(__ICCARM__)
return (unsigned)(__builtin_clzll(val));
#else
{
U32 mostSignificantWord = (U32)(val >> 32);
U32 leastSignificantWord = (U32)val;
if (mostSignificantWord == 0) {
return 32 + ZSTD_countLeadingZeros32(leastSignificantWord);
} else {
return ZSTD_countLeadingZeros32(mostSignificantWord);
}
}
#endif
}
MEM_STATIC unsigned ZSTD_NbCommonBytes(size_t val)
{
if (MEM_isLittleEndian()) {
if (MEM_64bits()) {
return ZSTD_countTrailingZeros64((U64)val) >> 3;
} else {
return ZSTD_countTrailingZeros32((U32)val) >> 3;
}
} else { /* Big Endian CPU */
if (MEM_64bits()) {
return ZSTD_countLeadingZeros64((U64)val) >> 3;
} else {
return ZSTD_countLeadingZeros32((U32)val) >> 3;
}
}
}
MEM_STATIC unsigned ZSTD_highbit32(U32 val) /* compress, dictBuilder, decodeCorpus */
{
assert(val != 0);
return 31 - ZSTD_countLeadingZeros32(val);
}
/* ZSTD_rotateRight_*():
* Rotates a bitfield to the right by "count" bits.
* https://en.wikipedia.org/w/index.php?title=Circular_shift&oldid=991635599#Implementing_circular_shifts
*/
MEM_STATIC
U64 ZSTD_rotateRight_U64(U64 const value, U32 count) {
assert(count < 64);
count &= 0x3F; /* for fickle pattern recognition */
return (value >> count) | (U64)(value << ((0U - count) & 0x3F));
}
MEM_STATIC
U32 ZSTD_rotateRight_U32(U32 const value, U32 count) {
assert(count < 32);
count &= 0x1F; /* for fickle pattern recognition */
return (value >> count) | (U32)(value << ((0U - count) & 0x1F));
}
MEM_STATIC
U16 ZSTD_rotateRight_U16(U16 const value, U32 count) {
assert(count < 16);
count &= 0x0F; /* for fickle pattern recognition */
return (value >> count) | (U16)(value << ((0U - count) & 0x0F));
}
#endif /* ZSTD_BITS_H */
/**** ended inlining bits.h ****/
/*=========================================
* Target specific
=========================================*/
#ifndef ZSTD_NO_INTRINSICS
# if (defined(__BMI__) || defined(__BMI2__)) && defined(__GNUC__)
# include <immintrin.h> /* support for bextr (experimental)/bzhi */
# elif defined(__ICCARM__)
# include <intrinsics.h>
# endif
#endif
#define STREAM_ACCUMULATOR_MIN_32 25
#define STREAM_ACCUMULATOR_MIN_64 57
#define STREAM_ACCUMULATOR_MIN ((U32)(MEM_32bits() ? STREAM_ACCUMULATOR_MIN_32 : STREAM_ACCUMULATOR_MIN_64))
/*-******************************************
* bitStream encoding API (write forward)
********************************************/
typedef size_t BitContainerType;
/* bitStream can mix input from multiple sources.
* A critical property of these streams is that they encode and decode in **reverse** direction.
* So the first bit sequence you add will be the last to be read, like a LIFO stack.
*/
typedef struct {
BitContainerType bitContainer;
unsigned bitPos;
char* startPtr;
char* ptr;
char* endPtr;
} BIT_CStream_t;
MEM_STATIC size_t BIT_initCStream(BIT_CStream_t* bitC, void* dstBuffer, size_t dstCapacity);
MEM_STATIC void BIT_addBits(BIT_CStream_t* bitC, BitContainerType value, unsigned nbBits);
MEM_STATIC void BIT_flushBits(BIT_CStream_t* bitC);
MEM_STATIC size_t BIT_closeCStream(BIT_CStream_t* bitC);
/* Start with initCStream, providing the size of buffer to write into.
* bitStream will never write outside of this buffer.
* `dstCapacity` must be >= sizeof(bitD->bitContainer), otherwise @return will be an error code.
*
* bits are first added to a local register.
* Local register is BitContainerType, 64-bits on 64-bits systems, or 32-bits on 32-bits systems.
* Writing data into memory is an explicit operation, performed by the flushBits function.
* Hence, keep track of how many bits are potentially stored in the local register, to avoid register overflow.
* After a flushBits, a maximum of 7 bits might still be stored into local register.
*
* Avoid storing elements of more than 24 bits if you want compatibility with 32-bits bitstream readers.
*
* Last operation is to close the bitStream.
* The function returns the final size of CStream in bytes.
* If data couldn't fit into `dstBuffer`, it will return a 0 ( == not storable)
*/
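/* Illustrative sketch (never compiled, not part of the library) of the encoding API described above :
 * init, add a few bit fields, flush the local register, then close. Remember that fields will be
 * read back in reverse order. The `example_*` name is hypothetical. */
#if 0
static size_t example_writeBits(void* dst, size_t dstCapacity)
{
    BIT_CStream_t bitC;
    CHECK_F( BIT_initCStream(&bitC, dst, dstCapacity) );
    BIT_addBits(&bitC, 5, 3);           /* push a 3-bit field (value 5) */
    BIT_addBits(&bitC, 170, 8);         /* push an 8-bit field */
    BIT_flushBits(&bitC);               /* commit the local register to memory */
    return BIT_closeCStream(&bitC);     /* final size in bytes; 0 means dst was too small */
}
#endif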
/*-********************************************
* bitStream decoding API (read backward)
**********************************************/
typedef struct {
BitContainerType bitContainer;
unsigned bitsConsumed;
const char* ptr;
const char* start;
const char* limitPtr;
} BIT_DStream_t;
typedef enum { BIT_DStream_unfinished = 0, /* fully refilled */
BIT_DStream_endOfBuffer = 1, /* still some bits left in bitstream */
BIT_DStream_completed = 2, /* bitstream entirely consumed, bit-exact */
BIT_DStream_overflow = 3 /* user requested more bits than present in bitstream */
} BIT_DStream_status; /* result of BIT_reloadDStream() */
MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize);
MEM_STATIC BitContainerType BIT_readBits(BIT_DStream_t* bitD, unsigned nbBits);
MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD);
MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* bitD);
/* Start by invoking BIT_initDStream().
* A chunk of the bitStream is then stored into a local register.
* Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (BitContainerType).
* You can then retrieve bitFields stored into the local register, **in reverse order**.
* Local register is explicitly reloaded from memory by the BIT_reloadDStream() method.
* A reload guarantees a minimum of ((8*sizeof(bitD->bitContainer))-7) bits when its result is BIT_DStream_unfinished.
* Otherwise, it can be less than that, so proceed accordingly.
* Checking if DStream has reached its end can be performed with BIT_endOfDStream().
*/
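/* Illustrative sketch (not part of the original sources):
 * reading back the two fields of the encoding sketch above,
 * necessarily in reverse order of writing.
 *
 *   BIT_DStream_t bitD;
 *   size_t const initErr = BIT_initDStream(&bitD, srcBuffer, srcSize);
 *   if (ERR_isError(initErr)) return initErr;
 *   {   size_t const fieldB = BIT_readBits(&bitD, 5);           // last value added is read first
 *       size_t const fieldA = BIT_readBits(&bitD, 3);
 *       BIT_DStream_status const status = BIT_reloadDStream(&bitD);
 *       assert(status == BIT_DStream_completed);                 // all payload bits consumed
 *       assert(BIT_endOfDStream(&bitD));
 *       (void)fieldA; (void)fieldB; (void)status;
 *   }
 */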
/*-****************************************
* unsafe API
******************************************/
MEM_STATIC void BIT_addBitsFast(BIT_CStream_t* bitC, BitContainerType value, unsigned nbBits);
/* faster, but works only if value is "clean", meaning all high bits above nbBits are 0 */
MEM_STATIC void BIT_flushBitsFast(BIT_CStream_t* bitC);
/* unsafe version; does not check buffer overflow */
MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, unsigned nbBits);
/* faster, but works only if nbBits >= 1 */
/*===== Local Constants =====*/
static const unsigned BIT_mask[] = {
0, 1, 3, 7, 0xF, 0x1F,
0x3F, 0x7F, 0xFF, 0x1FF, 0x3FF, 0x7FF,
0xFFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF, 0x1FFFF,
0x3FFFF, 0x7FFFF, 0xFFFFF, 0x1FFFFF, 0x3FFFFF, 0x7FFFFF,
0xFFFFFF, 0x1FFFFFF, 0x3FFFFFF, 0x7FFFFFF, 0xFFFFFFF, 0x1FFFFFFF,
0x3FFFFFFF, 0x7FFFFFFF}; /* up to 31 bits */
#define BIT_MASK_SIZE (sizeof(BIT_mask) / sizeof(BIT_mask[0]))
/*-**************************************************************
* bitStream encoding
****************************************************************/
/*! BIT_initCStream() :
* `dstCapacity` must be > sizeof(size_t)
* @return : 0 if success,
* otherwise an error code (can be tested using ERR_isError()) */
MEM_STATIC size_t BIT_initCStream(BIT_CStream_t* bitC,
void* startPtr, size_t dstCapacity)
{
bitC->bitContainer = 0;
bitC->bitPos = 0;
bitC->startPtr = (char*)startPtr;
bitC->ptr = bitC->startPtr;
bitC->endPtr = bitC->startPtr + dstCapacity - sizeof(bitC->bitContainer);
if (dstCapacity <= sizeof(bitC->bitContainer)) return ERROR(dstSize_tooSmall);
return 0;
}
FORCE_INLINE_TEMPLATE BitContainerType BIT_getLowerBits(BitContainerType bitContainer, U32 const nbBits)
{
#if STATIC_BMI2 && !defined(ZSTD_NO_INTRINSICS)
# if (defined(__x86_64__) || defined(_M_X64)) && !defined(__ILP32__)
return _bzhi_u64(bitContainer, nbBits);
# else
DEBUG_STATIC_ASSERT(sizeof(bitContainer) == sizeof(U32));
return _bzhi_u32(bitContainer, nbBits);
# endif
#else
assert(nbBits < BIT_MASK_SIZE);
return bitContainer & BIT_mask[nbBits];
#endif
}
/*! BIT_addBits() :
* can add up to 31 bits into `bitC`.
* Note : does not check for register overflow ! */
MEM_STATIC void BIT_addBits(BIT_CStream_t* bitC,
BitContainerType value, unsigned nbBits)
{
DEBUG_STATIC_ASSERT(BIT_MASK_SIZE == 32);
assert(nbBits < BIT_MASK_SIZE);
assert(nbBits + bitC->bitPos < sizeof(bitC->bitContainer) * 8);
bitC->bitContainer |= BIT_getLowerBits(value, nbBits) << bitC->bitPos;
bitC->bitPos += nbBits;
}
/*! BIT_addBitsFast() :
* works only if `value` is _clean_,
* meaning all high bits above nbBits are 0 */
MEM_STATIC void BIT_addBitsFast(BIT_CStream_t* bitC,
BitContainerType value, unsigned nbBits)
{
assert((value>>nbBits) == 0);
assert(nbBits + bitC->bitPos < sizeof(bitC->bitContainer) * 8);
bitC->bitContainer |= value << bitC->bitPos;
bitC->bitPos += nbBits;
}
/*! BIT_flushBitsFast() :
* assumption : bitContainer has not overflowed
* unsafe version; does not check buffer overflow */
MEM_STATIC void BIT_flushBitsFast(BIT_CStream_t* bitC)
{
size_t const nbBytes = bitC->bitPos >> 3;
assert(bitC->bitPos < sizeof(bitC->bitContainer) * 8);
assert(bitC->ptr <= bitC->endPtr);
MEM_writeLEST(bitC->ptr, bitC->bitContainer);
bitC->ptr += nbBytes;
bitC->bitPos &= 7;
bitC->bitContainer >>= nbBytes*8;
}
/*! BIT_flushBits() :
* assumption : bitContainer has not overflowed
* safe version; checks for buffer overflow, and prevents it.
* note : does not signal buffer overflow.
* overflow will be revealed later on using BIT_closeCStream() */
MEM_STATIC void BIT_flushBits(BIT_CStream_t* bitC)
{
size_t const nbBytes = bitC->bitPos >> 3;
assert(bitC->bitPos < sizeof(bitC->bitContainer) * 8);
assert(bitC->ptr <= bitC->endPtr);
MEM_writeLEST(bitC->ptr, bitC->bitContainer);
bitC->ptr += nbBytes;
if (bitC->ptr > bitC->endPtr) bitC->ptr = bitC->endPtr;
bitC->bitPos &= 7;
bitC->bitContainer >>= nbBytes*8;
}
/*! BIT_closeCStream() :
* @return : size of CStream, in bytes,
* or 0 if it could not fit into dstBuffer */
MEM_STATIC size_t BIT_closeCStream(BIT_CStream_t* bitC)
{
BIT_addBitsFast(bitC, 1, 1); /* endMark */
BIT_flushBits(bitC);
if (bitC->ptr >= bitC->endPtr) return 0; /* overflow detected */
return (size_t)(bitC->ptr - bitC->startPtr) + (bitC->bitPos > 0);
}
/*-********************************************************
* bitStream decoding
**********************************************************/
/*! BIT_initDStream() :
* Initialize a BIT_DStream_t.
* `bitD` : a pointer to an already allocated BIT_DStream_t structure.
* `srcSize` must be the *exact* size of the bitStream, in bytes.
* @return : size of stream (== srcSize), or an errorCode if a problem is detected
*/
MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize)
{
if (srcSize < 1) { ZSTD_memset(bitD, 0, sizeof(*bitD)); return ERROR(srcSize_wrong); }
bitD->start = (const char*)srcBuffer;
bitD->limitPtr = bitD->start + sizeof(bitD->bitContainer);
if (srcSize >= sizeof(bitD->bitContainer)) { /* normal case */
bitD->ptr = (const char*)srcBuffer + srcSize - sizeof(bitD->bitContainer);
bitD->bitContainer = MEM_readLEST(bitD->ptr);
{ BYTE const lastByte = ((const BYTE*)srcBuffer)[srcSize-1];
bitD->bitsConsumed = lastByte ? 8 - ZSTD_highbit32(lastByte) : 0; /* ensures bitsConsumed is always set */
if (lastByte == 0) return ERROR(GENERIC); /* endMark not present */ }
} else {
bitD->ptr = bitD->start;
bitD->bitContainer = *(const BYTE*)(bitD->start);
switch(srcSize)
{
case 7: bitD->bitContainer += (BitContainerType)(((const BYTE*)(srcBuffer))[6]) << (sizeof(bitD->bitContainer)*8 - 16);
ZSTD_FALLTHROUGH;
case 6: bitD->bitContainer += (BitContainerType)(((const BYTE*)(srcBuffer))[5]) << (sizeof(bitD->bitContainer)*8 - 24);
ZSTD_FALLTHROUGH;
case 5: bitD->bitContainer += (BitContainerType)(((const BYTE*)(srcBuffer))[4]) << (sizeof(bitD->bitContainer)*8 - 32);
ZSTD_FALLTHROUGH;
case 4: bitD->bitContainer += (BitContainerType)(((const BYTE*)(srcBuffer))[3]) << 24;
ZSTD_FALLTHROUGH;
case 3: bitD->bitContainer += (BitContainerType)(((const BYTE*)(srcBuffer))[2]) << 16;
ZSTD_FALLTHROUGH;
case 2: bitD->bitContainer += (BitContainerType)(((const BYTE*)(srcBuffer))[1]) << 8;
ZSTD_FALLTHROUGH;
default: break;
}
{ BYTE const lastByte = ((const BYTE*)srcBuffer)[srcSize-1];
bitD->bitsConsumed = lastByte ? 8 - ZSTD_highbit32(lastByte) : 0;
if (lastByte == 0) return ERROR(corruption_detected); /* endMark not present */
}
bitD->bitsConsumed += (U32)(sizeof(bitD->bitContainer) - srcSize)*8;
}
return srcSize;
}
FORCE_INLINE_TEMPLATE BitContainerType BIT_getUpperBits(BitContainerType bitContainer, U32 const start)
{
return bitContainer >> start;
}
FORCE_INLINE_TEMPLATE BitContainerType BIT_getMiddleBits(BitContainerType bitContainer, U32 const start, U32 const nbBits)
{
U32 const regMask = sizeof(bitContainer)*8 - 1;
/* if start > regMask, bitstream is corrupted, and result is undefined */
assert(nbBits < BIT_MASK_SIZE);
/* On x86, the compiler transforms `& ((1 << nbBits) - 1)` into a bzhi
 * instruction, which is better than accessing memory. When the bmi2
 * instruction is not present, we consider such CPUs old (pre-Haswell, 2013)
 * and their performance is not of primary importance.
 */
#if defined(__x86_64__) || defined(_M_X64)
return (bitContainer >> (start & regMask)) & ((((U64)1) << nbBits) - 1);
#else
return (bitContainer >> (start & regMask)) & BIT_mask[nbBits];
#endif
}
/*! BIT_lookBits() :
* Provides next n bits from local register.
* local register is not modified.
* On 32-bits, maxNbBits==24.
* On 64-bits, maxNbBits==56.
* @return : value extracted */
FORCE_INLINE_TEMPLATE BitContainerType BIT_lookBits(const BIT_DStream_t* bitD, U32 nbBits)
{
/* arbitrate between double-shift and shift+mask */
#if 1
/* if bitD->bitsConsumed + nbBits > sizeof(bitD->bitContainer)*8,
* bitstream is likely corrupted, and result is undefined */
return BIT_getMiddleBits(bitD->bitContainer, (sizeof(bitD->bitContainer)*8) - bitD->bitsConsumed - nbBits, nbBits);
#else
/* this code path is slower on my os-x laptop */
U32 const regMask = sizeof(bitD->bitContainer)*8 - 1;
return ((bitD->bitContainer << (bitD->bitsConsumed & regMask)) >> 1) >> ((regMask-nbBits) & regMask);
#endif
}
/*! BIT_lookBitsFast() :
* unsafe version; only works if nbBits >= 1 */
MEM_STATIC BitContainerType BIT_lookBitsFast(const BIT_DStream_t* bitD, U32 nbBits)
{
U32 const regMask = sizeof(bitD->bitContainer)*8 - 1;
assert(nbBits >= 1);
return (bitD->bitContainer << (bitD->bitsConsumed & regMask)) >> (((regMask+1)-nbBits) & regMask);
}
FORCE_INLINE_TEMPLATE void BIT_skipBits(BIT_DStream_t* bitD, U32 nbBits)
{
bitD->bitsConsumed += nbBits;
}
/*! BIT_readBits() :
* Read (consume) next n bits from local register and update.
* Pay attention not to read more bits than are contained in the local register.
* @return : extracted value. */
FORCE_INLINE_TEMPLATE BitContainerType BIT_readBits(BIT_DStream_t* bitD, unsigned nbBits)
{
BitContainerType const value = BIT_lookBits(bitD, nbBits);
BIT_skipBits(bitD, nbBits);
return value;
}
/*! BIT_readBitsFast() :
* unsafe version; only works if nbBits >= 1 */
MEM_STATIC BitContainerType BIT_readBitsFast(BIT_DStream_t* bitD, unsigned nbBits)
{
BitContainerType const value = BIT_lookBitsFast(bitD, nbBits);
assert(nbBits >= 1);
BIT_skipBits(bitD, nbBits);
return value;
}
/*! BIT_reloadDStream_internal() :
* Simple variant of BIT_reloadDStream(), with two conditions:
* 1. bitstream is valid : bitsConsumed <= sizeof(bitD->bitContainer)*8
* 2. look window is valid after being shifted down : bitD->ptr >= bitD->start
*/
MEM_STATIC BIT_DStream_status BIT_reloadDStream_internal(BIT_DStream_t* bitD)
{
assert(bitD->bitsConsumed <= sizeof(bitD->bitContainer)*8);
bitD->ptr -= bitD->bitsConsumed >> 3;
assert(bitD->ptr >= bitD->start);
bitD->bitsConsumed &= 7;
bitD->bitContainer = MEM_readLEST(bitD->ptr);
return BIT_DStream_unfinished;
}
/*! BIT_reloadDStreamFast() :
* Similar to BIT_reloadDStream(), but with two differences:
* 1. bitsConsumed <= sizeof(bitD->bitContainer)*8 must hold!
* 2. Returns BIT_DStream_overflow when bitD->ptr < bitD->limitPtr, at this
* point you must use BIT_reloadDStream() to reload.
*/
MEM_STATIC BIT_DStream_status BIT_reloadDStreamFast(BIT_DStream_t* bitD)
{
if (UNLIKELY(bitD->ptr < bitD->limitPtr))
return BIT_DStream_overflow;
return BIT_reloadDStream_internal(bitD);
}
/*! BIT_reloadDStream() :
* Refill `bitD` from buffer previously set in BIT_initDStream() .
* This function is safe : it guarantees it will never read beyond the src buffer.
* @return : status of `BIT_DStream_t` internal register.
* when status == BIT_DStream_unfinished, internal register is filled with at least 25 or 57 bits */
FORCE_INLINE_TEMPLATE BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD)
{
/* note : once in overflow mode, a bitstream remains in this mode until it's reset */
if (UNLIKELY(bitD->bitsConsumed > (sizeof(bitD->bitContainer)*8))) {
static const BitContainerType zeroFilled = 0;
bitD->ptr = (const char*)&zeroFilled; /* aliasing is allowed for char */
/* overflow detected, erroneous scenario or end of stream: no update */
return BIT_DStream_overflow;
}
assert(bitD->ptr >= bitD->start);
if (bitD->ptr >= bitD->limitPtr) {
return BIT_reloadDStream_internal(bitD);
}
if (bitD->ptr == bitD->start) {
/* reached end of bitStream => no update */
if (bitD->bitsConsumed < sizeof(bitD->bitContainer)*8) return BIT_DStream_endOfBuffer;
return BIT_DStream_completed;
}
/* start < ptr < limitPtr => cautious update */
{ U32 nbBytes = bitD->bitsConsumed >> 3;
BIT_DStream_status result = BIT_DStream_unfinished;
if (bitD->ptr - nbBytes < bitD->start) {
nbBytes = (U32)(bitD->ptr - bitD->start); /* ptr > start */
result = BIT_DStream_endOfBuffer;
}
bitD->ptr -= nbBytes;
bitD->bitsConsumed -= nbBytes*8;
bitD->bitContainer = MEM_readLEST(bitD->ptr); /* reminder : srcSize > sizeof(bitD->bitContainer), otherwise bitD->ptr == bitD->start */
return result;
}
}
/*! BIT_endOfDStream() :
* @return : 1 if DStream has _exactly_ reached its end (all bits consumed).
*/
MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* DStream)
{
return ((DStream->ptr == DStream->start) && (DStream->bitsConsumed == sizeof(DStream->bitContainer)*8));
}
#endif /* BITSTREAM_H_MODULE */
/**** ended inlining bitstream.h ****/
/* *****************************************
* Static allocation
*******************************************/
/* FSE buffer bounds */
#define FSE_NCOUNTBOUND 512
#define FSE_BLOCKBOUND(size) ((size) + ((size)>>7) + 4 /* fse states */ + sizeof(size_t) /* bitContainer */)
#define FSE_COMPRESSBOUND(size) (FSE_NCOUNTBOUND + FSE_BLOCKBOUND(size)) /* Macro version, useful for static allocation */
/* It is possible to statically allocate FSE CTable/DTable as a table of FSE_CTable/FSE_DTable using below macros */
#define FSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue) (1 + (1<<((maxTableLog)-1)) + (((maxSymbolValue)+1)*2))
#define FSE_DTABLE_SIZE_U32(maxTableLog) (1 + (1<<(maxTableLog)))
/* or use the size to malloc() space directly. Pay attention to alignment restrictions though */
#define FSE_CTABLE_SIZE(maxTableLog, maxSymbolValue) (FSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue) * sizeof(FSE_CTable))
#define FSE_DTABLE_SIZE(maxTableLog) (FSE_DTABLE_SIZE_U32(maxTableLog) * sizeof(FSE_DTable))
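/* Illustrative sketch (not part of the original sources):
 * static allocation of FSE tables using the macros above;
 * the tableLog (10) and maxSymbolValue (255) below are hypothetical.
 *
 *   FSE_CTable myCTable[FSE_CTABLE_SIZE_U32(10, 255)];
 *   FSE_DTable myDTable[FSE_DTABLE_SIZE_U32(10)];
 */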
/* *****************************************
* FSE advanced API
***************************************** */
unsigned FSE_optimalTableLog_internal(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, unsigned minus);
/**< same as FSE_optimalTableLog(), which uses `minus==2` */
size_t FSE_buildCTable_rle (FSE_CTable* ct, unsigned char symbolValue);
/**< build a fake FSE_CTable, designed to always compress the same symbolValue */
/* FSE_buildCTable_wksp() :
* Same as FSE_buildCTable(), but using an externally allocated scratch buffer (`workSpace`).
* `wkspSize` must be >= `FSE_BUILD_CTABLE_WORKSPACE_SIZE(maxSymbolValue, tableLog)` bytes, i.e. `workSpace` must hold at least `FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(maxSymbolValue, tableLog)` values of type `unsigned`.
* See the implementation of FSE_buildCTable_wksp() for a breakdown of workspace usage.
*/
#define FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(maxSymbolValue, tableLog) (((maxSymbolValue + 2) + (1ull << (tableLog)))/2 + sizeof(U64)/sizeof(U32) /* additional 8 bytes for potential table overwrite */)
#define FSE_BUILD_CTABLE_WORKSPACE_SIZE(maxSymbolValue, tableLog) (sizeof(unsigned) * FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(maxSymbolValue, tableLog))
size_t FSE_buildCTable_wksp(FSE_CTable* ct, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog, void* workSpace, size_t wkspSize);
#define FSE_BUILD_DTABLE_WKSP_SIZE(maxTableLog, maxSymbolValue) (sizeof(short) * (maxSymbolValue + 1) + (1ULL << maxTableLog) + 8)
#define FSE_BUILD_DTABLE_WKSP_SIZE_U32(maxTableLog, maxSymbolValue) ((FSE_BUILD_DTABLE_WKSP_SIZE(maxTableLog, maxSymbolValue) + sizeof(unsigned) - 1) / sizeof(unsigned))
FSE_PUBLIC_API size_t FSE_buildDTable_wksp(FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog, void* workSpace, size_t wkspSize);
/**< Same as FSE_buildDTable(), using an externally allocated `workspace` sized with `FSE_BUILD_DTABLE_WKSP_SIZE_U32(maxTableLog, maxSymbolValue)` */
#define FSE_DECOMPRESS_WKSP_SIZE_U32(maxTableLog, maxSymbolValue) (FSE_DTABLE_SIZE_U32(maxTableLog) + 1 + FSE_BUILD_DTABLE_WKSP_SIZE_U32(maxTableLog, maxSymbolValue) + (FSE_MAX_SYMBOL_VALUE + 1) / 2 + 1)
#define FSE_DECOMPRESS_WKSP_SIZE(maxTableLog, maxSymbolValue) (FSE_DECOMPRESS_WKSP_SIZE_U32(maxTableLog, maxSymbolValue) * sizeof(unsigned))
size_t FSE_decompress_wksp_bmi2(void* dst, size_t dstCapacity, const void* cSrc, size_t cSrcSize, unsigned maxLog, void* workSpace, size_t wkspSize, int bmi2);
/**< same as FSE_decompress(), using an externally allocated `workSpace` produced with `FSE_DECOMPRESS_WKSP_SIZE_U32(maxLog, maxSymbolValue)`.
* Set bmi2 to 1 if your CPU supports BMI2 or 0 if it doesn't */
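/* Illustrative sketch (not part of the original sources):
 * sizing the external workspace with the macros above.
 *
 *   U32 wksp[FSE_DECOMPRESS_WKSP_SIZE_U32(FSE_MAX_TABLELOG, FSE_MAX_SYMBOL_VALUE)];
 *   size_t const dSize = FSE_decompress_wksp_bmi2(dst, dstCapacity, cSrc, cSrcSize,
 *                                                 FSE_MAX_TABLELOG, wksp, sizeof(wksp), 0);  // last arg: bmi2
 *   if (FSE_isError(dSize)) return dSize;
 */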
typedef enum {
FSE_repeat_none, /**< Cannot use the previous table */
FSE_repeat_check, /**< Can use the previous table but it must be checked */
FSE_repeat_valid /**< Can use the previous table and it is assumed to be valid */
} FSE_repeat;
/* *****************************************
* FSE symbol compression API
*******************************************/
/*!
This API consists of small unitary functions, which highly benefit from being inlined.
Hence their bodies are included in the next section.
*/
typedef struct {
ptrdiff_t value;
const void* stateTable;
const void* symbolTT;
unsigned stateLog;
} FSE_CState_t;
static void FSE_initCState(FSE_CState_t* CStatePtr, const FSE_CTable* ct);
static void FSE_encodeSymbol(BIT_CStream_t* bitC, FSE_CState_t* CStatePtr, unsigned symbol);
static void FSE_flushCState(BIT_CStream_t* bitC, const FSE_CState_t* CStatePtr);
/**<
These functions are inner components of FSE_compress_usingCTable().
They allow the creation of custom streams, mixing multiple tables and bit sources.
A key property to keep in mind is that encoding and decoding are done **in reverse direction**.
So the first symbol you will encode is the last you will decode, like a LIFO stack.
You will need a few variables to track your CStream. They are :
FSE_CTable ct; // Provided by FSE_buildCTable()
BIT_CStream_t bitStream; // bitStream tracking structure
FSE_CState_t state; // State tracking structure (can have several)
The first thing to do is to init bitStream and state.
size_t errorCode = BIT_initCStream(&bitStream, dstBuffer, maxDstSize);
FSE_initCState(&state, ct);
Note that BIT_initCStream() can produce an error code, so its result should be tested, using FSE_isError();
You can then encode your input data, byte after byte.
FSE_encodeSymbol() outputs a maximum of 'tableLog' bits at a time.
Remember decoding will be done in reverse direction.
FSE_encodeSymbol(&bitStream, &state, symbol);
At any time, you can also add any bit sequence.
Note : maximum allowed nbBits is 25, for compatibility with 32-bits decoders
BIT_addBits(&bitStream, bitField, nbBits);
The above methods don't commit data to memory, they just store it into local register, for speed.
Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (size_t).
Writing data to memory is a manual operation, performed by the flushBits function.
BIT_flushBits(&bitStream);
Your last FSE encoding operation shall be to flush your last state value(s).
FSE_flushCState(&bitStream, &state);
Finally, you must close the bitStream.
The function returns the size of CStream in bytes.
If data couldn't fit into dstBuffer, it will return a 0 ( == not compressible)
If there is an error, it returns an errorCode (which can be tested using FSE_isError()).
size_t size = BIT_closeCStream(&bitStream);
*/
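/* Illustrative sketch (not part of the original sources):
 * the steps above combined into a minimal backward-encoding loop.
 * Since decoding proceeds in reverse (LIFO), the encoder starts from the last
 * symbol of the input, so that decoding regenerates it in forward order.
 * `ct`, `src`, `srcSize` and destination buffer names are hypothetical.
 *
 *   BIT_CStream_t bitStream;
 *   FSE_CState_t state;
 *   const BYTE* ip = (const BYTE*)src + srcSize;
 *   size_t const initErr = BIT_initCStream(&bitStream, dstBuffer, dstCapacity);
 *   if (FSE_isError(initErr)) return initErr;
 *   FSE_initCState2(&state, ct, *--ip);              // first symbol included, last to be decoded
 *   while (ip > (const BYTE*)src) {
 *       FSE_encodeSymbol(&bitStream, &state, *--ip);
 *       BIT_flushBits(&bitStream);                   // commit the local register regularly
 *   }
 *   FSE_flushCState(&bitStream, &state);             // flush the final state value
 *   return BIT_closeCStream(&bitStream);             // 0 means dstBuffer was too small
 */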
/* *****************************************
* FSE symbol decompression API
*******************************************/
typedef struct {
size_t state;
const void* table; /* precise table may vary, depending on U16 */
} FSE_DState_t;
static void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt);
static unsigned char FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
static unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr);
/**<
Let's now decompose FSE_decompress_usingDTable() into its unitary components.
You will decode FSE-encoded symbols from the bitStream,
and also any other bitFields you put in, **in reverse order**.
You will need a few variables to track your bitStream. They are :
BIT_DStream_t DStream; // Stream context
FSE_DState_t DState; // State context. Multiple ones are possible
FSE_DTable* DTablePtr; // Decoding table, provided by FSE_buildDTable()
The first thing to do is to init the bitStream.
errorCode = BIT_initDStream(&DStream, srcBuffer, srcSize);
You should then retrieve your initial state(s)
(in reverse flushing order if you have several ones) :
FSE_initDState(&DState, &DStream, DTablePtr);
You can then decode your data, symbol after symbol.
For information, the maximum number of bits read by FSE_decodeSymbol() is 'tableLog'.
Keep in mind that symbols are decoded in reverse order, like a LIFO stack (last in, first out).
unsigned char symbol = FSE_decodeSymbol(&DState, &DStream);
You can retrieve any bitfield you may have stored into the bitStream (in reverse order)
Note : maximum allowed nbBits is 25, for 32-bits compatibility
size_t bitField = BIT_readBits(&DStream, nbBits);
All above operations only read from the local register (whose size depends on size_t).
Refilling the register from memory is performed manually by the reload method.
endSignal = BIT_reloadDStream(&DStream);
BIT_reloadDStream() result tells if there is still some more data to read from DStream.
BIT_DStream_unfinished : there is still some data left into the DStream.
BIT_DStream_endOfBuffer : Dstream reached end of buffer. Its container may no longer be completely filled.
BIT_DStream_completed : Dstream reached its exact end, corresponding in general to decompression completed.
BIT_DStream_overflow : Dstream went too far. Decompression result is corrupted.
When reaching end of buffer (BIT_DStream_endOfBuffer), progress slowly, notably if you decode multiple symbols per loop,
to properly detect the exact end of stream.
After each decoded symbol, check if DStream is fully consumed using this simple test :
BIT_reloadDStream(&DStream) >= BIT_DStream_completed
When it's done, verify decompression is fully completed, by checking both DStream and the relevant states.
Checking if DStream has reached its end is performed by :
BIT_endOfDStream(&DStream);
Check also the states. There might be some symbols left there, if some high probability ones (>50%) are possible.
FSE_endOfDState(&DState);
*/
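/* Illustrative sketch (not part of the original sources):
 * the steps above combined into a minimal decoding loop, producing symbols
 * in forward order. `dt`, `srcBuffer` and destination buffer names are hypothetical.
 *
 *   BIT_DStream_t DStream;
 *   FSE_DState_t DState;
 *   BYTE* op = dst;
 *   BYTE* const oend = dst + dstCapacity;
 *   size_t const initErr = BIT_initDStream(&DStream, srcBuffer, srcSize);
 *   if (FSE_isError(initErr)) return initErr;
 *   FSE_initDState(&DState, &DStream, dt);
 *   while (op < oend) {
 *       *op++ = FSE_decodeSymbol(&DState, &DStream);
 *       if (BIT_reloadDStream(&DStream) >= BIT_DStream_completed) break;
 *   }
 *   if (!BIT_endOfDStream(&DStream) || !FSE_endOfDState(&DState))
 *       return ERROR(corruption_detected);           // incomplete or corrupted stream
 *   return (size_t)(op - dst);
 */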
/* *****************************************
* FSE unsafe API
*******************************************/
static unsigned char FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
/* faster, but works only if nbBits is always >= 1 (otherwise, result will be corrupted) */
/* *****************************************
* Implementation of inlined functions
*******************************************/
typedef struct {
int deltaFindState;
U32 deltaNbBits;
} FSE_symbolCompressionTransform; /* total 8 bytes */
MEM_STATIC void FSE_initCState(FSE_CState_t* statePtr, const FSE_CTable* ct)
{
const void* ptr = ct;
const U16* u16ptr = (const U16*) ptr;
const U32 tableLog = MEM_read16(ptr);
statePtr->value = (ptrdiff_t)1<<tableLog;
statePtr->stateTable = u16ptr+2;
statePtr->symbolTT = ct + 1 + (tableLog ? (1<<(tableLog-1)) : 1);
statePtr->stateLog = tableLog;
}
/*! FSE_initCState2() :
* Same as FSE_initCState(), but the first symbol to include (which will be the last to be read)
* uses the smallest state value possible, saving the cost of this symbol */
MEM_STATIC void FSE_initCState2(FSE_CState_t* statePtr, const FSE_CTable* ct, U32 symbol)
{
FSE_initCState(statePtr, ct);
{ const FSE_symbolCompressionTransform symbolTT = ((const FSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
const U16* stateTable = (const U16*)(statePtr->stateTable);
U32 nbBitsOut = (U32)((symbolTT.deltaNbBits + (1<<15)) >> 16);
statePtr->value = (nbBitsOut << 16) - symbolTT.deltaNbBits;
statePtr->value = stateTable[(statePtr->value >> nbBitsOut) + symbolTT.deltaFindState];
}
}
MEM_STATIC void FSE_encodeSymbol(BIT_CStream_t* bitC, FSE_CState_t* statePtr, unsigned symbol)
{
FSE_symbolCompressionTransform const symbolTT = ((const FSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
const U16* const stateTable = (const U16*)(statePtr->stateTable);
U32 const nbBitsOut = (U32)((statePtr->value + symbolTT.deltaNbBits) >> 16);
BIT_addBits(bitC, (BitContainerType)statePtr->value, nbBitsOut);
statePtr->value = stateTable[ (statePtr->value >> nbBitsOut) + symbolTT.deltaFindState];
}
MEM_STATIC void FSE_flushCState(BIT_CStream_t* bitC, const FSE_CState_t* statePtr)
{
BIT_addBits(bitC, (BitContainerType)statePtr->value, statePtr->stateLog);
BIT_flushBits(bitC);
}
/* FSE_getMaxNbBits() :
* Approximate maximum cost of a symbol, in bits.
* Fractional values are rounded up (i.e. a symbol with a normalized frequency of 3 gives the same result as a frequency of 2)
* note 1 : assume symbolValue is valid (<= maxSymbolValue)
* note 2 : if freq[symbolValue]==0, @return a fake cost of tableLog+1 bits */
MEM_STATIC U32 FSE_getMaxNbBits(const void* symbolTTPtr, U32 symbolValue)
{
const FSE_symbolCompressionTransform* symbolTT = (const FSE_symbolCompressionTransform*) symbolTTPtr;
return (symbolTT[symbolValue].deltaNbBits + ((1<<16)-1)) >> 16;
}
/* FSE_bitCost() :
* Approximate symbol cost, as fractional value, using fixed-point format (accuracyLog fractional bits)
* note 1 : assume symbolValue is valid (<= maxSymbolValue)
* note 2 : if freq[symbolValue]==0, @return a fake cost of tableLog+1 bits */
MEM_STATIC U32 FSE_bitCost(const void* symbolTTPtr, U32 tableLog, U32 symbolValue, U32 accuracyLog)
{
const FSE_symbolCompressionTransform* symbolTT = (const FSE_symbolCompressionTransform*) symbolTTPtr;
U32 const minNbBits = symbolTT[symbolValue].deltaNbBits >> 16;
U32 const threshold = (minNbBits+1) << 16;
assert(tableLog < 16);
assert(accuracyLog < 31-tableLog); /* ensure enough room for renormalization double shift */
{ U32 const tableSize = 1 << tableLog;
U32 const deltaFromThreshold = threshold - (symbolTT[symbolValue].deltaNbBits + tableSize);
U32 const normalizedDeltaFromThreshold = (deltaFromThreshold << accuracyLog) >> tableLog; /* linear interpolation (very approximate) */
U32 const bitMultiplier = 1 << accuracyLog;
assert(symbolTT[symbolValue].deltaNbBits + tableSize <= threshold);
assert(normalizedDeltaFromThreshold <= bitMultiplier);
return (minNbBits+1)*bitMultiplier - normalizedDeltaFromThreshold;
}
}
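/* Example (illustrative, not part of the original sources) :
 * interpreting the fixed-point result of FSE_bitCost().
 * With accuracyLog==8, a returned value of 0x180 (== 1.5 * (1<<8))
 * means the symbol costs approximately 1.5 bits. */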
/* ====== Decompression ====== */
typedef struct {
U16 tableLog;
U16 fastMode;
} FSE_DTableHeader; /* sizeof U32 */
typedef struct
{
unsigned short newState;
unsigned char symbol;
unsigned char nbBits;
} FSE_decode_t; /* size == U32 */
MEM_STATIC void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt)
{
const void* ptr = dt;
const FSE_DTableHeader* const DTableH = (const FSE_DTableHeader*)ptr;
DStatePtr->state = BIT_readBits(bitD, DTableH->tableLog);
BIT_reloadDStream(bitD);
DStatePtr->table = dt + 1;
}
MEM_STATIC BYTE FSE_peekSymbol(const FSE_DState_t* DStatePtr)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
return DInfo.symbol;
}
MEM_STATIC void FSE_updateState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
U32 const nbBits = DInfo.nbBits;
size_t const lowBits = BIT_readBits(bitD, nbBits);
DStatePtr->state = DInfo.newState + lowBits;
}
MEM_STATIC BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
U32 const nbBits = DInfo.nbBits;
BYTE const symbol = DInfo.symbol;
size_t const lowBits = BIT_readBits(bitD, nbBits);
DStatePtr->state = DInfo.newState + lowBits;
return symbol;
}
/*! FSE_decodeSymbolFast() :
unsafe, only works if no symbol has a probability > 50% */
MEM_STATIC BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
U32 const nbBits = DInfo.nbBits;
BYTE const symbol = DInfo.symbol;
size_t const lowBits = BIT_readBitsFast(bitD, nbBits);
DStatePtr->state = DInfo.newState + lowBits;
return symbol;
}
MEM_STATIC unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr)
{
return DStatePtr->state == 0;
}
#ifndef FSE_COMMONDEFS_ONLY
/* **************************************************************
* Tuning parameters
****************************************************************/
/*!MEMORY_USAGE :
* Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
* Increasing memory usage improves compression ratio
* Reduced memory usage can improve speed, due to cache effect
* Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */
#ifndef FSE_MAX_MEMORY_USAGE
# define FSE_MAX_MEMORY_USAGE 14
#endif
#ifndef FSE_DEFAULT_MEMORY_USAGE
# define FSE_DEFAULT_MEMORY_USAGE 13
#endif
#if (FSE_DEFAULT_MEMORY_USAGE > FSE_MAX_MEMORY_USAGE)
# error "FSE_DEFAULT_MEMORY_USAGE must be <= FSE_MAX_MEMORY_USAGE"
#endif
/*!FSE_MAX_SYMBOL_VALUE :
* Maximum symbol value authorized.
* Required for proper stack allocation */
#ifndef FSE_MAX_SYMBOL_VALUE
# define FSE_MAX_SYMBOL_VALUE 255
#endif
/* **************************************************************
* template functions type & suffix
****************************************************************/
#define FSE_FUNCTION_TYPE BYTE
#define FSE_FUNCTION_EXTENSION
#define FSE_DECODE_TYPE FSE_decode_t
#endif /* !FSE_COMMONDEFS_ONLY */
/* ***************************************************************
* Constants
*****************************************************************/
#define FSE_MAX_TABLELOG (FSE_MAX_MEMORY_USAGE-2)
#define FSE_MAX_TABLESIZE (1U<<FSE_MAX_TABLELOG)
#define FSE_MAXTABLESIZE_MASK (FSE_MAX_TABLESIZE-1)
#define FSE_DEFAULT_TABLELOG (FSE_DEFAULT_MEMORY_USAGE-2)
#define FSE_MIN_TABLELOG 5
#define FSE_TABLELOG_ABSOLUTE_MAX 15
#if FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX
# error "FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX is not supported"
#endif
#define FSE_TABLESTEP(tableSize) (((tableSize)>>1) + ((tableSize)>>3) + 3)
#endif /* FSE_STATIC_LINKING_ONLY */
/**** ended inlining fse.h ****/
/**** start inlining huf.h ****/
/* ******************************************************************
* huff0 huffman codec,
* part of Finite State Entropy library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
#ifndef HUF_H_298734234
#define HUF_H_298734234
/* *** Dependencies *** */
/**** skipping file: zstd_deps.h ****/
/**** skipping file: mem.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: fse.h ****/
/* *** Tool functions *** */
#define HUF_BLOCKSIZE_MAX (128 * 1024) /**< maximum input size for a single block compressed with HUF_compress */
size_t HUF_compressBound(size_t size); /**< maximum compressed size (worst case) */
/* Error Management */
unsigned HUF_isError(size_t code); /**< tells if a return value is an error code */
const char* HUF_getErrorName(size_t code); /**< provides error code string (useful for debugging) */
#define HUF_WORKSPACE_SIZE ((8 << 10) + 512 /* sorting scratch space */)
#define HUF_WORKSPACE_SIZE_U64 (HUF_WORKSPACE_SIZE / sizeof(U64))
/* *** Constants *** */
#define HUF_TABLELOG_MAX 12 /* max runtime value of tableLog (due to static allocation); can be modified up to HUF_TABLELOG_ABSOLUTEMAX */
#define HUF_TABLELOG_DEFAULT 11 /* default tableLog value when none specified */
#define HUF_SYMBOLVALUE_MAX 255
#define HUF_TABLELOG_ABSOLUTEMAX 12 /* absolute limit of HUF_TABLELOG_MAX. Beyond that value, code does not work */
#if (HUF_TABLELOG_MAX > HUF_TABLELOG_ABSOLUTEMAX)
# error "HUF_TABLELOG_MAX is too large !"
#endif
/* ****************************************
* Static allocation
******************************************/
/* HUF buffer bounds */
#define HUF_CTABLEBOUND 129
#define HUF_BLOCKBOUND(size) (size + (size>>8) + 8) /* only true when incompressible is pre-filtered with fast heuristic */
#define HUF_COMPRESSBOUND(size) (HUF_CTABLEBOUND + HUF_BLOCKBOUND(size)) /* Macro version, useful for static allocation */
/* static allocation of HUF's Compression Table */
/* this is a private definition, just exposed for allocation and strict aliasing purpose. never EVER access its members directly */
typedef size_t HUF_CElt; /* consider it an incomplete type */
#define HUF_CTABLE_SIZE_ST(maxSymbolValue) ((maxSymbolValue)+2) /* Use tables of size_t, for proper alignment */
#define HUF_CTABLE_SIZE(maxSymbolValue) (HUF_CTABLE_SIZE_ST(maxSymbolValue) * sizeof(size_t))
#define HUF_CREATE_STATIC_CTABLE(name, maxSymbolValue) \
HUF_CElt name[HUF_CTABLE_SIZE_ST(maxSymbolValue)] /* no final ; */
/* static allocation of HUF's DTable */
typedef U32 HUF_DTable;
#define HUF_DTABLE_SIZE(maxTableLog) (1 + (1<<(maxTableLog)))
#define HUF_CREATE_STATIC_DTABLEX1(DTable, maxTableLog) \
HUF_DTable DTable[HUF_DTABLE_SIZE((maxTableLog)-1)] = { ((U32)((maxTableLog)-1) * 0x01000001) }
#define HUF_CREATE_STATIC_DTABLEX2(DTable, maxTableLog) \
HUF_DTable DTable[HUF_DTABLE_SIZE(maxTableLog)] = { ((U32)(maxTableLog) * 0x01000001) }
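/* Illustrative sketch (not part of the original sources):
 * static table declarations using the macros above.
 *
 *   HUF_CREATE_STATIC_CTABLE(myCTable, HUF_SYMBOLVALUE_MAX);
 *   HUF_CREATE_STATIC_DTABLEX2(myDTable, HUF_TABLELOG_MAX);
 */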
/* ****************************************
* Advanced decompression functions
******************************************/
/**
* Huffman flags bitset.
* For all flags, 0 is the default value.
*/
typedef enum {
/**
* If compiled with DYNAMIC_BMI2: Set flag only if the CPU supports BMI2 at runtime.
* Otherwise: Ignored.
*/
HUF_flags_bmi2 = (1 << 0),
/**
* If set: Test possible table depths to find the one that produces the smallest header + encoded size.
* If unset: Use heuristic to find the table depth.
*/
HUF_flags_optimalDepth = (1 << 1),
/**
* If set: If the previous table can encode the input, always reuse the previous table.
* If unset: If the previous table can encode the input, reuse the previous table if it results in a smaller output.
*/
HUF_flags_preferRepeat = (1 << 2),
/**
* If set: Sample the input and check if the sample is incompressible; if it is, don't attempt to compress.
* If unset: Always histogram the entire input.
*/
HUF_flags_suspectUncompressible = (1 << 3),
/**
* If set: Don't use assembly implementations
* If unset: Allow using assembly implementations
*/
HUF_flags_disableAsm = (1 << 4),
/**
* If set: Don't use the fast decoding loop, always use the fallback decoding loop.
* If unset: Use the fast decoding loop when possible.
*/
HUF_flags_disableFast = (1 << 5)
} HUF_flags_e;
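/* Example (illustrative, not part of the original sources):
 * flags are combined with bitwise OR; 0 keeps all defaults.
 *
 *   int const flags = HUF_flags_bmi2 | HUF_flags_optimalDepth;
 */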
/* ****************************************
* HUF detailed API
* ****************************************/
#define HUF_OPTIMAL_DEPTH_THRESHOLD ZSTD_btultra
/*! HUF_compress() does the following:
* 1. count symbol occurrence from source[] into table count[] using FSE_count() (exposed within "fse.h")
* 2. (optional) refine tableLog using HUF_optimalTableLog()
* 3. build Huffman table from count using HUF_buildCTable()
* 4. save Huffman table to memory buffer using HUF_writeCTable()
* 5. encode the data stream using HUF_compress4X_usingCTable()
*
* The following API allows targeting specific sub-functions for advanced tasks.
* For example, it's possible to compress several blocks using the same 'CTable',
* or to save and regenerate 'CTable' using external methods.
*/
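/* Illustrative sketch (not part of the original sources):
 * building one CTable and reusing it to compress a block with the sub-functions
 * declared below. The `count` histogram is assumed to have been filled by the
 * caller; `src`, `dst` and related sizes are hypothetical.
 *
 *   HUF_CREATE_STATIC_CTABLE(ct, HUF_SYMBOLVALUE_MAX);
 *   U64 wksp[HUF_WORKSPACE_SIZE_U64];
 *   unsigned const tableLog = HUF_optimalTableLog(HUF_TABLELOG_DEFAULT, srcSize, maxSymbolValue,
 *                                                 wksp, sizeof(wksp), ct, count, 0);
 *   size_t const maxBits = HUF_buildCTable_wksp(ct, count, maxSymbolValue, tableLog,
 *                                               wksp, sizeof(wksp));
 *   size_t const hSize = HUF_writeCTable_wksp(dst, dstCapacity, ct, maxSymbolValue,
 *                                             (unsigned)maxBits, wksp, sizeof(wksp));
 *   size_t const cSize = HUF_compress4X_usingCTable(dst+hSize, dstCapacity-hSize,
 *                                                   src, srcSize, ct, 0);
 *   // check HUF_isError() on maxBits, hSize and cSize before using hSize + cSize
 */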
unsigned HUF_minTableLog(unsigned symbolCardinality);
unsigned HUF_cardinality(const unsigned* count, unsigned maxSymbolValue);
unsigned HUF_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, void* workSpace,
size_t wkspSize, HUF_CElt* table, const unsigned* count, int flags); /* table is used as scratch space for building and testing tables, not a return value */
size_t HUF_writeCTable_wksp(void* dst, size_t maxDstSize, const HUF_CElt* CTable, unsigned maxSymbolValue, unsigned huffLog, void* workspace, size_t workspaceSize);
size_t HUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable, int flags);
size_t HUF_estimateCompressedSize(const HUF_CElt* CTable, const unsigned* count, unsigned maxSymbolValue);
int HUF_validateCTable(const HUF_CElt* CTable, const unsigned* count, unsigned maxSymbolValue);
typedef enum {
HUF_repeat_none, /**< Cannot use the previous table */
HUF_repeat_check, /**< Can use the previous table but it must be checked. Note : The previous table must have been constructed by HUF_compress{1, 4}X_repeat */
HUF_repeat_valid /**< Can use the previous table and it is assumed to be valid */
} HUF_repeat;
/** HUF_compress4X_repeat() :
* Same as HUF_compress4X_wksp(), but considers using hufTable if *repeat != HUF_repeat_none.
* If it uses hufTable it does not modify hufTable or repeat.
* If it doesn't, it sets *repeat = HUF_repeat_none, and it sets hufTable to the table used.
* If preferRepeat then the old table will always be used if valid.
* If suspectUncompressible then some sampling checks will be run to potentially skip huffman coding */
size_t HUF_compress4X_repeat(void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned tableLog,
void* workSpace, size_t wkspSize, /**< `workSpace` must be aligned on 4-bytes boundaries, `wkspSize` must be >= HUF_WORKSPACE_SIZE */
HUF_CElt* hufTable, HUF_repeat* repeat, int flags);
/** HUF_buildCTable_wksp() :
* Same as HUF_buildCTable(), but using externally allocated scratch buffer.
* `workSpace` must be aligned on 4-bytes boundaries, and its size must be >= HUF_CTABLE_WORKSPACE_SIZE.
*/
#define HUF_CTABLE_WORKSPACE_SIZE_U32 ((4 * (HUF_SYMBOLVALUE_MAX + 1)) + 192)
#define HUF_CTABLE_WORKSPACE_SIZE (HUF_CTABLE_WORKSPACE_SIZE_U32 * sizeof(unsigned))
size_t HUF_buildCTable_wksp (HUF_CElt* tree,
const unsigned* count, U32 maxSymbolValue, U32 maxNbBits,
void* workSpace, size_t wkspSize);
/*! HUF_readStats() :
* Read compact Huffman tree, saved by HUF_writeCTable().
* `huffWeight` is destination buffer.
* @return : size read from `src` , or an error Code .
* Note : Needed by HUF_readCTable() and HUF_readDTableXn() . */
size_t HUF_readStats(BYTE* huffWeight, size_t hwSize,
U32* rankStats, U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize);
/*! HUF_readStats_wksp() :
* Same as HUF_readStats() but takes an external workspace which must be
* 4-byte aligned and its size must be >= HUF_READ_STATS_WORKSPACE_SIZE.
* If the CPU has BMI2 support, pass bmi2=1, otherwise pass bmi2=0.
*/
#define HUF_READ_STATS_WORKSPACE_SIZE_U32 FSE_DECOMPRESS_WKSP_SIZE_U32(6, HUF_TABLELOG_MAX-1)
#define HUF_READ_STATS_WORKSPACE_SIZE (HUF_READ_STATS_WORKSPACE_SIZE_U32 * sizeof(unsigned))
size_t HUF_readStats_wksp(BYTE* huffWeight, size_t hwSize,
U32* rankStats, U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize,
void* workspace, size_t wkspSize,
int flags);
/** HUF_readCTable() :
* Loading a CTable saved with HUF_writeCTable() */
size_t HUF_readCTable (HUF_CElt* CTable, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize, unsigned *hasZeroWeights);
/** HUF_getNbBitsFromCTable() :
* Read nbBits from CTable symbolTable, for symbol `symbolValue` presumed <= HUF_SYMBOLVALUE_MAX
* Note 1 : If symbolValue > HUF_readCTableHeader(symbolTable).maxSymbolValue, returns 0
* Note 2 : is not inlined, as HUF_CElt definition is private
*/
U32 HUF_getNbBitsFromCTable(const HUF_CElt* symbolTable, U32 symbolValue);
typedef struct {
BYTE tableLog;
BYTE maxSymbolValue;
BYTE unused[sizeof(size_t) - 2];
} HUF_CTableHeader;
/** HUF_readCTableHeader() :
* @returns The header from the CTable specifying the tableLog and the maxSymbolValue.
*/
HUF_CTableHeader HUF_readCTableHeader(HUF_CElt const* ctable);
/*
* HUF_decompress() does the following:
* 1. select the decompression algorithm (X1, X2) based on pre-computed heuristics
* 2. build Huffman table from save, using HUF_readDTableX?()
* 3. decode 1 or 4 segments in parallel using HUF_decompress?X?_usingDTable()
*/
/** HUF_selectDecoder() :
* Tells which decoder is likely to decode faster,
* based on a set of pre-computed metrics.
* @return : 0==HUF_decompress4X1, 1==HUF_decompress4X2 .
* Assumption : 0 < dstSize <= 128 KB */
U32 HUF_selectDecoder (size_t dstSize, size_t cSrcSize);
/**
* The minimum workspace size for the `workSpace` used in
* HUF_readDTableX1_wksp() and HUF_readDTableX2_wksp().
*
* The space used depends on HUF_TABLELOG_MAX, ranging from ~1500 bytes when
* HUF_TABLELOG_MAX=12 to ~1850 bytes when HUF_TABLELOG_MAX=15.
* Buffer overflow errors may potentially occur if code modifications result in
* a required workspace size greater than that specified in the following
* macro.
*/
#define HUF_DECOMPRESS_WORKSPACE_SIZE ((2 << 10) + (1 << 9))
#define HUF_DECOMPRESS_WORKSPACE_SIZE_U32 (HUF_DECOMPRESS_WORKSPACE_SIZE / sizeof(U32))
/* ====================== */
/* single stream variants */
/* ====================== */
size_t HUF_compress1X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable, int flags);
/** HUF_compress1X_repeat() :
* Same as HUF_compress1X_wksp(), but considers using hufTable if *repeat != HUF_repeat_none.
* If it uses hufTable it does not modify hufTable or repeat.
* If it doesn't, it sets *repeat = HUF_repeat_none, and it sets hufTable to the table used.
* If preferRepeat then the old table will always be used if valid.
* If suspectUncompressible then some sampling checks will be run to potentially skip huffman coding */
size_t HUF_compress1X_repeat(void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned tableLog,
void* workSpace, size_t wkspSize, /**< `workSpace` must be aligned on 4-bytes boundaries, `wkspSize` must be >= HUF_WORKSPACE_SIZE */
HUF_CElt* hufTable, HUF_repeat* repeat, int flags);
size_t HUF_decompress1X_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int flags);
#ifndef HUF_FORCE_DECOMPRESS_X1
size_t HUF_decompress1X2_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int flags); /**< double-symbols decoder */
#endif
/* BMI2 variants.
* If the CPU has BMI2 support, pass bmi2=1, otherwise pass bmi2=0.
*/
size_t HUF_decompress1X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int flags);
#ifndef HUF_FORCE_DECOMPRESS_X2
size_t HUF_decompress1X1_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int flags);
#endif
size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int flags);
size_t HUF_decompress4X_hufOnly_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int flags);
#ifndef HUF_FORCE_DECOMPRESS_X2
size_t HUF_readDTableX1_wksp(HUF_DTable* DTable, const void* src, size_t srcSize, void* workSpace, size_t wkspSize, int flags);
#endif
#ifndef HUF_FORCE_DECOMPRESS_X1
size_t HUF_readDTableX2_wksp(HUF_DTable* DTable, const void* src, size_t srcSize, void* workSpace, size_t wkspSize, int flags);
#endif
#endif /* HUF_H_298734234 */
/**** ended inlining huf.h ****/
/**** skipping file: bits.h ****/
/*=== Version ===*/
unsigned FSE_versionNumber(void) { return FSE_VERSION_NUMBER; }
/*=== Error Management ===*/
unsigned FSE_isError(size_t code) { return ERR_isError(code); }
const char* FSE_getErrorName(size_t code) { return ERR_getErrorName(code); }
unsigned HUF_isError(size_t code) { return ERR_isError(code); }
const char* HUF_getErrorName(size_t code) { return ERR_getErrorName(code); }
/*-**************************************************************
* FSE NCount encoding-decoding
****************************************************************/
FORCE_INLINE_TEMPLATE
size_t FSE_readNCount_body(short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
const void* headerBuffer, size_t hbSize)
{
const BYTE* const istart = (const BYTE*) headerBuffer;
const BYTE* const iend = istart + hbSize;
const BYTE* ip = istart;
int nbBits;
int remaining;
int threshold;
U32 bitStream;
int bitCount;
unsigned charnum = 0;
unsigned const maxSV1 = *maxSVPtr + 1;
int previous0 = 0;
if (hbSize < 8) {
/* This function only works when hbSize >= 8 */
char buffer[8] = {0};
ZSTD_memcpy(buffer, headerBuffer, hbSize);
{ size_t const countSize = FSE_readNCount(normalizedCounter, maxSVPtr, tableLogPtr,
buffer, sizeof(buffer));
if (FSE_isError(countSize)) return countSize;
if (countSize > hbSize) return ERROR(corruption_detected);
return countSize;
} }
assert(hbSize >= 8);
/* init */
ZSTD_memset(normalizedCounter, 0, (*maxSVPtr+1) * sizeof(normalizedCounter[0])); /* all symbols not present in NCount have a frequency of 0 */
bitStream = MEM_readLE32(ip);
nbBits = (bitStream & 0xF) + FSE_MIN_TABLELOG; /* extract tableLog */
if (nbBits > FSE_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge);
bitStream >>= 4;
bitCount = 4;
*tableLogPtr = nbBits;
remaining = (1<<nbBits)+1;
threshold = 1<<nbBits;
nbBits++;
for (;;) {
if (previous0) {
/* Count the number of repeats. Each time the
* 2-bit repeat code is 0b11 there is another
* repeat.
* Avoid UB by setting the high bit to 1.
*/
int repeats = ZSTD_countTrailingZeros32(~bitStream | 0x80000000) >> 1;
while (repeats >= 12) {
charnum += 3 * 12;
if (LIKELY(ip <= iend-7)) {
ip += 3;
} else {
bitCount -= (int)(8 * (iend - 7 - ip));
bitCount &= 31;
ip = iend - 4;
}
bitStream = MEM_readLE32(ip) >> bitCount;
repeats = ZSTD_countTrailingZeros32(~bitStream | 0x80000000) >> 1;
}
charnum += 3 * repeats;
bitStream >>= 2 * repeats;
bitCount += 2 * repeats;
/* Add the final repeat which isn't 0b11. */
assert((bitStream & 3) < 3);
charnum += bitStream & 3;
bitCount += 2;
/* This is an error, but break and return an error
* at the end, because returning out of a loop makes
* it harder for the compiler to optimize.
*/
if (charnum >= maxSV1) break;
/* We don't need to set the normalized count to 0
* because we already memset the whole buffer to 0.
*/
if (LIKELY(ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) {
assert((bitCount >> 3) <= 3); /* For first condition to work */
ip += bitCount>>3;
bitCount &= 7;
} else {
bitCount -= (int)(8 * (iend - 4 - ip));
bitCount &= 31;
ip = iend - 4;
}
bitStream = MEM_readLE32(ip) >> bitCount;
}
{
int const max = (2*threshold-1) - remaining;
int count;
if ((bitStream & (threshold-1)) < (U32)max) {
count = bitStream & (threshold-1);
bitCount += nbBits-1;
} else {
count = bitStream & (2*threshold-1);
if (count >= threshold) count -= max;
bitCount += nbBits;
}
count--; /* extra accuracy */
/* When it matters (small blocks), this is a
* predictable branch, because we don't use -1.
*/
if (count >= 0) {
remaining -= count;
} else {
assert(count == -1);
remaining += count;
}
normalizedCounter[charnum++] = (short)count;
previous0 = !count;
assert(threshold > 1);
if (remaining < threshold) {
/* This branch can be folded into the
* threshold update condition because we
* know that threshold > 1.
*/
if (remaining <= 1) break;
nbBits = ZSTD_highbit32(remaining) + 1;
threshold = 1 << (nbBits - 1);
}
if (charnum >= maxSV1) break;
if (LIKELY(ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) {
ip += bitCount>>3;
bitCount &= 7;
} else {
bitCount -= (int)(8 * (iend - 4 - ip));
bitCount &= 31;
ip = iend - 4;
}
bitStream = MEM_readLE32(ip) >> bitCount;
} }
if (remaining != 1) return ERROR(corruption_detected);
/* Only possible when there are too many zeros. */
if (charnum > maxSV1) return ERROR(maxSymbolValue_tooSmall);
if (bitCount > 32) return ERROR(corruption_detected);
*maxSVPtr = charnum-1;
ip += (bitCount+7)>>3;
return ip-istart;
}
/* Avoids the FORCE_INLINE of the _body() function. */
static size_t FSE_readNCount_body_default(
short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
const void* headerBuffer, size_t hbSize)
{
return FSE_readNCount_body(normalizedCounter, maxSVPtr, tableLogPtr, headerBuffer, hbSize);
}
#if DYNAMIC_BMI2
BMI2_TARGET_ATTRIBUTE static size_t FSE_readNCount_body_bmi2(
short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
const void* headerBuffer, size_t hbSize)
{
return FSE_readNCount_body(normalizedCounter, maxSVPtr, tableLogPtr, headerBuffer, hbSize);
}
#endif
size_t FSE_readNCount_bmi2(
short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
const void* headerBuffer, size_t hbSize, int bmi2)
{
#if DYNAMIC_BMI2
if (bmi2) {
return FSE_readNCount_body_bmi2(normalizedCounter, maxSVPtr, tableLogPtr, headerBuffer, hbSize);
}
#endif
(void)bmi2;
return FSE_readNCount_body_default(normalizedCounter, maxSVPtr, tableLogPtr, headerBuffer, hbSize);
}
size_t FSE_readNCount(
short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
const void* headerBuffer, size_t hbSize)
{
return FSE_readNCount_bmi2(normalizedCounter, maxSVPtr, tableLogPtr, headerBuffer, hbSize, /* bmi2 */ 0);
}
/*! HUF_readStats() :
Read compact Huffman tree, saved by HUF_writeCTable().
`huffWeight` is destination buffer.
`rankStats` is assumed to be a table of at least HUF_TABLELOG_MAX U32.
@return : size read from `src` , or an error Code .
Note : Needed by HUF_readCTable() and HUF_readDTableX?() .
*/
size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize)
{
U32 wksp[HUF_READ_STATS_WORKSPACE_SIZE_U32];
return HUF_readStats_wksp(huffWeight, hwSize, rankStats, nbSymbolsPtr, tableLogPtr, src, srcSize, wksp, sizeof(wksp), /* flags */ 0);
}
FORCE_INLINE_TEMPLATE size_t
HUF_readStats_body(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize,
void* workSpace, size_t wkspSize,
int bmi2)
{
U32 weightTotal;
const BYTE* ip = (const BYTE*) src;
size_t iSize;
size_t oSize;
if (!srcSize) return ERROR(srcSize_wrong);
iSize = ip[0];
/* ZSTD_memset(huffWeight, 0, hwSize); */ /* not necessary, even though some analyzers complain ... */
if (iSize >= 128) { /* special header */
oSize = iSize - 127;
iSize = ((oSize+1)/2);
if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
if (oSize >= hwSize) return ERROR(corruption_detected);
ip += 1;
{ U32 n;
for (n=0; n<oSize; n+=2) {
huffWeight[n] = ip[n/2] >> 4;
huffWeight[n+1] = ip[n/2] & 15;
} } }
else { /* header compressed with FSE (normal case) */
if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
/* max (hwSize-1) values decoded, as last one is implied */
oSize = FSE_decompress_wksp_bmi2(huffWeight, hwSize-1, ip+1, iSize, 6, workSpace, wkspSize, bmi2);
if (FSE_isError(oSize)) return oSize;
}
/* collect weight stats */
ZSTD_memset(rankStats, 0, (HUF_TABLELOG_MAX + 1) * sizeof(U32));
weightTotal = 0;
{ U32 n; for (n=0; n<oSize; n++) {
if (huffWeight[n] > HUF_TABLELOG_MAX) return ERROR(corruption_detected);
rankStats[huffWeight[n]]++;
weightTotal += (1 << huffWeight[n]) >> 1;
} }
if (weightTotal == 0) return ERROR(corruption_detected);
/* get last non-null symbol weight (implied, total must be 2^n) */
{ U32 const tableLog = ZSTD_highbit32(weightTotal) + 1;
if (tableLog > HUF_TABLELOG_MAX) return ERROR(corruption_detected);
*tableLogPtr = tableLog;
/* determine last weight */
{ U32 const total = 1 << tableLog;
U32 const rest = total - weightTotal;
U32 const verif = 1 << ZSTD_highbit32(rest);
U32 const lastWeight = ZSTD_highbit32(rest) + 1;
if (verif != rest) return ERROR(corruption_detected); /* last value must be a clean power of 2 */
huffWeight[oSize] = (BYTE)lastWeight;
rankStats[lastWeight]++;
} }
/* check tree construction validity */
if ((rankStats[1] < 2) || (rankStats[1] & 1)) return ERROR(corruption_detected); /* by construction : at least 2 elts of rank 1, must be even */
/* results */
*nbSymbolsPtr = (U32)(oSize+1);
return iSize+1;
}
/* Avoids the FORCE_INLINE of the _body() function. */
static size_t HUF_readStats_body_default(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize,
void* workSpace, size_t wkspSize)
{
return HUF_readStats_body(huffWeight, hwSize, rankStats, nbSymbolsPtr, tableLogPtr, src, srcSize, workSpace, wkspSize, 0);
}
#if DYNAMIC_BMI2
static BMI2_TARGET_ATTRIBUTE size_t HUF_readStats_body_bmi2(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize,
void* workSpace, size_t wkspSize)
{
return HUF_readStats_body(huffWeight, hwSize, rankStats, nbSymbolsPtr, tableLogPtr, src, srcSize, workSpace, wkspSize, 1);
}
#endif
size_t HUF_readStats_wksp(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize,
void* workSpace, size_t wkspSize,
int flags)
{
#if DYNAMIC_BMI2
if (flags & HUF_flags_bmi2) {
return HUF_readStats_body_bmi2(huffWeight, hwSize, rankStats, nbSymbolsPtr, tableLogPtr, src, srcSize, workSpace, wkspSize);
}
#endif
(void)flags;
return HUF_readStats_body_default(huffWeight, hwSize, rankStats, nbSymbolsPtr, tableLogPtr, src, srcSize, workSpace, wkspSize);
}
/**** ended inlining common/entropy_common.c ****/
/**** start inlining common/error_private.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* The purpose of this file is to have a single list of error strings embedded in binary */
/**** skipping file: error_private.h ****/
const char* ERR_getErrorString(ERR_enum code)
{
#ifdef ZSTD_STRIP_ERROR_STRINGS
(void)code;
return "Error strings stripped";
#else
static const char* const notErrorCode = "Unspecified error code";
switch( code )
{
case PREFIX(no_error): return "No error detected";
case PREFIX(GENERIC): return "Error (generic)";
case PREFIX(prefix_unknown): return "Unknown frame descriptor";
case PREFIX(version_unsupported): return "Version not supported";
case PREFIX(frameParameter_unsupported): return "Unsupported frame parameter";
case PREFIX(frameParameter_windowTooLarge): return "Frame requires too much memory for decoding";
case PREFIX(corruption_detected): return "Data corruption detected";
case PREFIX(checksum_wrong): return "Restored data doesn't match checksum";
case PREFIX(literals_headerWrong): return "Header of Literals' block doesn't respect format specification";
case PREFIX(parameter_unsupported): return "Unsupported parameter";
case PREFIX(parameter_combination_unsupported): return "Unsupported combination of parameters";
case PREFIX(parameter_outOfBound): return "Parameter is out of bound";
case PREFIX(init_missing): return "Context should be init first";
case PREFIX(memory_allocation): return "Allocation error : not enough memory";
case PREFIX(workSpace_tooSmall): return "workSpace buffer is not large enough";
case PREFIX(stage_wrong): return "Operation not authorized at current processing stage";
case PREFIX(tableLog_tooLarge): return "tableLog requires too much memory : unsupported";
case PREFIX(maxSymbolValue_tooLarge): return "Unsupported max Symbol Value : too large";
case PREFIX(maxSymbolValue_tooSmall): return "Specified maxSymbolValue is too small";
case PREFIX(cannotProduce_uncompressedBlock): return "This mode cannot generate an uncompressed block";
case PREFIX(stabilityCondition_notRespected): return "pledged buffer stability condition is not respected";
case PREFIX(dictionary_corrupted): return "Dictionary is corrupted";
case PREFIX(dictionary_wrong): return "Dictionary mismatch";
case PREFIX(dictionaryCreation_failed): return "Cannot create Dictionary from provided samples";
case PREFIX(dstSize_tooSmall): return "Destination buffer is too small";
case PREFIX(srcSize_wrong): return "Src size is incorrect";
case PREFIX(dstBuffer_null): return "Operation on NULL destination buffer";
case PREFIX(noForwardProgress_destFull): return "Operation made no progress over multiple calls, due to output buffer being full";
case PREFIX(noForwardProgress_inputEmpty): return "Operation made no progress over multiple calls, due to input being empty";
/* following error codes are not stable and may be removed or changed in a future version */
case PREFIX(frameIndex_tooLarge): return "Frame index is too large";
case PREFIX(seekableIO): return "An I/O error occurred when reading/seeking";
case PREFIX(dstBuffer_wrong): return "Destination buffer is wrong";
case PREFIX(srcBuffer_wrong): return "Source buffer is wrong";
case PREFIX(sequenceProducer_failed): return "Block-level external sequence producer returned an error code";
case PREFIX(externalSequences_invalid): return "External sequences are not valid";
case PREFIX(maxCode):
default: return notErrorCode;
}
#endif
}
/**** ended inlining common/error_private.c ****/
/**** start inlining common/fse_decompress.c ****/
/* ******************************************************************
* FSE : Finite State Entropy decoder
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
* - Public forum : https://groups.google.com/forum/#!forum/lz4c
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* **************************************************************
* Includes
****************************************************************/
/**** skipping file: debug.h ****/
/**** skipping file: bitstream.h ****/
/**** skipping file: compiler.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: fse.h ****/
/**** skipping file: error_private.h ****/
/**** skipping file: zstd_deps.h ****/
/**** skipping file: bits.h ****/
/* **************************************************************
* Error Management
****************************************************************/
#define FSE_isError ERR_isError
#define FSE_STATIC_ASSERT(c) DEBUG_STATIC_ASSERT(c) /* use only *after* variable declarations */
/* **************************************************************
* Templates
****************************************************************/
/*
This section is designed to be included multiple times,
once per type-specific function (template emulation in C).
The objective is to write these functions only once, for improved maintenance.
*/
/* safety checks */
#ifndef FSE_FUNCTION_EXTENSION
# error "FSE_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSE_FUNCTION_TYPE
# error "FSE_FUNCTION_TYPE must be defined"
#endif
/* Function names */
#define FSE_CAT(X,Y) X##Y
#define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y)
#define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y)
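/* Illustrative example of the token pasting above :
 * FSE_FUNCTION_NAME(FSE_buildDTable, _wksp) expands, via FSE_CAT, to the
 * single identifier FSE_buildDTable_wksp. */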
static size_t FSE_buildDTable_internal(FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog, void* workSpace, size_t wkspSize)
{
void* const tdPtr = dt+1; /* because *dt is unsigned, 32-bits aligned on 32-bits */
FSE_DECODE_TYPE* const tableDecode = (FSE_DECODE_TYPE*) (tdPtr);
U16* symbolNext = (U16*)workSpace;
BYTE* spread = (BYTE*)(symbolNext + maxSymbolValue + 1);
U32 const maxSV1 = maxSymbolValue + 1;
U32 const tableSize = 1 << tableLog;
U32 highThreshold = tableSize-1;
/* Sanity Checks */
if (FSE_BUILD_DTABLE_WKSP_SIZE(tableLog, maxSymbolValue) > wkspSize) return ERROR(maxSymbolValue_tooLarge);
if (maxSymbolValue > FSE_MAX_SYMBOL_VALUE) return ERROR(maxSymbolValue_tooLarge);
if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge);
/* Init, lay down lowprob symbols */
{ FSE_DTableHeader DTableH;
DTableH.tableLog = (U16)tableLog;
DTableH.fastMode = 1;
{ S16 const largeLimit= (S16)(1 << (tableLog-1));
U32 s;
for (s=0; s<maxSV1; s++) {
if (normalizedCounter[s]==-1) {
tableDecode[highThreshold--].symbol = (FSE_FUNCTION_TYPE)s;
symbolNext[s] = 1;
} else {
if (normalizedCounter[s] >= largeLimit) DTableH.fastMode=0;
symbolNext[s] = (U16)normalizedCounter[s];
} } }
ZSTD_memcpy(dt, &DTableH, sizeof(DTableH));
}
/* Spread symbols */
if (highThreshold == tableSize - 1) {
size_t const tableMask = tableSize-1;
size_t const step = FSE_TABLESTEP(tableSize);
/* First lay down the symbols in order.
* We use a uint64_t to lay down 8 bytes at a time. This reduces branch
* misses since small blocks generally have small table logs, so nearly
* all symbols have counts <= 8. We ensure we have 8 bytes at the end of
* our buffer to handle the over-write.
*/
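/* Illustrative example : with normalizedCounter = {A:3, B:5, C:8},
 * the spread buffer becomes AAABBBBBCCCCCCCC.
 * Each MEM_write64() stores 8 copies of the current symbol
 * (sv == symbol * 0x0101010101010101), and pos only advances by the
 * actual count n, so the next symbol overwrites the surplus bytes. */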
{ U64 const add = 0x0101010101010101ull;
size_t pos = 0;
U64 sv = 0;
U32 s;
for (s=0; s<maxSV1; ++s, sv += add) {
int i;
int const n = normalizedCounter[s];
MEM_write64(spread + pos, sv);
for (i = 8; i < n; i += 8) {
MEM_write64(spread + pos + i, sv);
}
pos += (size_t)n;
} }
/* Now we spread those positions across the table.
* The benefit of doing it in two stages is that we avoid the
* variable size inner loop, which caused lots of branch misses.
* Now we can run through all the positions without any branch misses.
* We unroll the loop twice, since that is what empirically worked best.
*/
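/* Sketch of the walk : position advances by the fixed step FSE_TABLESTEP(tableSize),
 * modulo tableSize. The step is odd while the table size is a power of 2,
 * so they are coprime and the walk visits every cell exactly once before
 * returning to 0, which is what the final assert(position == 0) checks. */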
{
size_t position = 0;
size_t s;
size_t const unroll = 2;
assert(tableSize % unroll == 0); /* FSE_MIN_TABLELOG is 5 */
for (s = 0; s < (size_t)tableSize; s += unroll) {
size_t u;
for (u = 0; u < unroll; ++u) {
size_t const uPosition = (position + (u * step)) & tableMask;
tableDecode[uPosition].symbol = spread[s + u];
}
position = (position + (unroll * step)) & tableMask;
}
assert(position == 0);
}
} else {
U32 const tableMask = tableSize-1;
U32 const step = FSE_TABLESTEP(tableSize);
U32 s, position = 0;
for (s=0; s<maxSV1; s++) {
int i;
for (i=0; i<normalizedCounter[s]; i++) {
tableDecode[position].symbol = (FSE_FUNCTION_TYPE)s;
position = (position + step) & tableMask;
while (position > highThreshold) position = (position + step) & tableMask; /* lowprob area */
} }
if (position!=0) return ERROR(GENERIC); /* position must reach all cells once, otherwise normalizedCounter is incorrect */
}
/* Build Decoding table */
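/* For each cell, symbolNext[symbol] counts how many states have already been
 * assigned to this symbol. nbBits is the number of bits to read to refill the
 * state, and newState is the rebased next-state value : decoding reads nbBits
 * bits and adds them to newState to obtain the next table index. */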
{ U32 u;
for (u=0; u<tableSize; u++) {
FSE_FUNCTION_TYPE const symbol = (FSE_FUNCTION_TYPE)(tableDecode[u].symbol);
U32 const nextState = symbolNext[symbol]++;
tableDecode[u].nbBits = (BYTE) (tableLog - ZSTD_highbit32(nextState) );
tableDecode[u].newState = (U16) ( (nextState << tableDecode[u].nbBits) - tableSize);
} }
return 0;
}
size_t FSE_buildDTable_wksp(FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog, void* workSpace, size_t wkspSize)
{
return FSE_buildDTable_internal(dt, normalizedCounter, maxSymbolValue, tableLog, workSpace, wkspSize);
}
#ifndef FSE_COMMONDEFS_ONLY
/*-*******************************************************
* Decompression (Byte symbols)
*********************************************************/
FORCE_INLINE_TEMPLATE size_t FSE_decompress_usingDTable_generic(
void* dst, size_t maxDstSize,
const void* cSrc, size_t cSrcSize,
const FSE_DTable* dt, const unsigned fast)
{
BYTE* const ostart = (BYTE*) dst;
BYTE* op = ostart;
BYTE* const omax = op + maxDstSize;
BYTE* const olimit = omax-3;
BIT_DStream_t bitD;
FSE_DState_t state1;
FSE_DState_t state2;
/* Init */
CHECK_F(BIT_initDStream(&bitD, cSrc, cSrcSize));
FSE_initDState(&state1, &bitD, dt);
FSE_initDState(&state2, &bitD, dt);
RETURN_ERROR_IF(BIT_reloadDStream(&bitD)==BIT_DStream_overflow, corruption_detected, "");
#define FSE_GETSYMBOL(statePtr) fast ? FSE_decodeSymbolFast(statePtr, &bitD) : FSE_decodeSymbol(statePtr, &bitD)
/* 4 symbols per loop */
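/* Two states (state1/state2) are decoded alternately from the same bitstream,
 * which hides the serial dependency between consecutive symbols.
 * The `FSE_MAX_TABLELOG*N+7 > sizeof(bitD.bitContainer)*8` tests compare
 * compile-time constants, so the compiler drops the reload entirely when the
 * 64-bit bit container already holds enough bits for the next decodes. */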
for ( ; (BIT_reloadDStream(&bitD)==BIT_DStream_unfinished) & (op<olimit) ; op+=4) {
op[0] = FSE_GETSYMBOL(&state1);
if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
BIT_reloadDStream(&bitD);
op[1] = FSE_GETSYMBOL(&state2);
if (FSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
{ if (BIT_reloadDStream(&bitD) > BIT_DStream_unfinished) { op+=2; break; } }
op[2] = FSE_GETSYMBOL(&state1);
if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
BIT_reloadDStream(&bitD);
op[3] = FSE_GETSYMBOL(&state2);
}
/* tail */
/* note : BIT_reloadDStream(&bitD) >= BIT_DStream_endOfBuffer; Ends at exactly BIT_DStream_completed */
while (1) {
if (op>(omax-2)) return ERROR(dstSize_tooSmall);
*op++ = FSE_GETSYMBOL(&state1);
if (BIT_reloadDStream(&bitD)==BIT_DStream_overflow) {
*op++ = FSE_GETSYMBOL(&state2);
break;
}
if (op>(omax-2)) return ERROR(dstSize_tooSmall);
*op++ = FSE_GETSYMBOL(&state2);
if (BIT_reloadDStream(&bitD)==BIT_DStream_overflow) {
*op++ = FSE_GETSYMBOL(&state1);
break;
} }
assert(op >= ostart);
return (size_t)(op-ostart);
}
typedef struct {
short ncount[FSE_MAX_SYMBOL_VALUE + 1];
} FSE_DecompressWksp;
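/* Workspace layout used by FSE_decompress_wksp_body() :
 * [ FSE_DecompressWksp (ncount[]) ][ FSE_DTable (header + cells) ][ DTable build workspace ]
 * dtablePos converts the size of the first region into an FSE_DTable index,
 * which is valid because sizeof(FSE_DecompressWksp) is a multiple of
 * sizeof(FSE_DTable) (enforced by the static assert below). */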
FORCE_INLINE_TEMPLATE size_t FSE_decompress_wksp_body(
void* dst, size_t dstCapacity,
const void* cSrc, size_t cSrcSize,
unsigned maxLog, void* workSpace, size_t wkspSize,
int bmi2)
{
const BYTE* const istart = (const BYTE*)cSrc;
const BYTE* ip = istart;
unsigned tableLog;
unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE;
FSE_DecompressWksp* const wksp = (FSE_DecompressWksp*)workSpace;
size_t const dtablePos = sizeof(FSE_DecompressWksp) / sizeof(FSE_DTable);
FSE_DTable* const dtable = (FSE_DTable*)workSpace + dtablePos;
FSE_STATIC_ASSERT((FSE_MAX_SYMBOL_VALUE + 1) % 2 == 0);
if (wkspSize < sizeof(*wksp)) return ERROR(GENERIC);
/* correct offset to dtable depends on this property */
FSE_STATIC_ASSERT(sizeof(FSE_DecompressWksp) % sizeof(FSE_DTable) == 0);
/* normal FSE decoding mode */
{ size_t const NCountLength =
FSE_readNCount_bmi2(wksp->ncount, &maxSymbolValue, &tableLog, istart, cSrcSize, bmi2);
if (FSE_isError(NCountLength)) return NCountLength;
if (tableLog > maxLog) return ERROR(tableLog_tooLarge);
assert(NCountLength <= cSrcSize);
ip += NCountLength;
cSrcSize -= NCountLength;
}
if (FSE_DECOMPRESS_WKSP_SIZE(tableLog, maxSymbolValue) > wkspSize) return ERROR(tableLog_tooLarge);
assert(sizeof(*wksp) + FSE_DTABLE_SIZE(tableLog) <= wkspSize);
workSpace = (BYTE*)workSpace + sizeof(*wksp) + FSE_DTABLE_SIZE(tableLog);
wkspSize -= sizeof(*wksp) + FSE_DTABLE_SIZE(tableLog);
CHECK_F( FSE_buildDTable_internal(dtable, wksp->ncount, maxSymbolValue, tableLog, workSpace, wkspSize) );
{
const void* ptr = dtable;
const FSE_DTableHeader* DTableH = (const FSE_DTableHeader*)ptr;
const U32 fastMode = DTableH->fastMode;
/* select fast mode (static) */
if (fastMode) return FSE_decompress_usingDTable_generic(dst, dstCapacity, ip, cSrcSize, dtable, 1);
return FSE_decompress_usingDTable_generic(dst, dstCapacity, ip, cSrcSize, dtable, 0);
}
}
/* Avoids the FORCE_INLINE of the _body() function. */
static size_t FSE_decompress_wksp_body_default(void* dst, size_t dstCapacity, const void* cSrc, size_t cSrcSize, unsigned maxLog, void* workSpace, size_t wkspSize)
{
return FSE_decompress_wksp_body(dst, dstCapacity, cSrc, cSrcSize, maxLog, workSpace, wkspSize, 0);
}
#if DYNAMIC_BMI2
BMI2_TARGET_ATTRIBUTE static size_t FSE_decompress_wksp_body_bmi2(void* dst, size_t dstCapacity, const void* cSrc, size_t cSrcSize, unsigned maxLog, void* workSpace, size_t wkspSize)
{
return FSE_decompress_wksp_body(dst, dstCapacity, cSrc, cSrcSize, maxLog, workSpace, wkspSize, 1);
}
#endif
size_t FSE_decompress_wksp_bmi2(void* dst, size_t dstCapacity, const void* cSrc, size_t cSrcSize, unsigned maxLog, void* workSpace, size_t wkspSize, int bmi2)
{
#if DYNAMIC_BMI2
if (bmi2) {
return FSE_decompress_wksp_body_bmi2(dst, dstCapacity, cSrc, cSrcSize, maxLog, workSpace, wkspSize);
}
#endif
(void)bmi2;
return FSE_decompress_wksp_body_default(dst, dstCapacity, cSrc, cSrcSize, maxLog, workSpace, wkspSize);
}
#endif /* FSE_COMMONDEFS_ONLY */
/**** ended inlining common/fse_decompress.c ****/
/**** start inlining common/threading.c ****/
/**
* Copyright (c) 2016 Tino Reichardt
* All rights reserved.
*
* You can contact the author at:
* - zstdmt source repository: https://github.com/mcmilk/zstdmt
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**
 * This file holds wrappers for systems which do not support pthreads
 */
/**** start inlining threading.h ****/
/**
* Copyright (c) 2016 Tino Reichardt
* All rights reserved.
*
* You can contact the author at:
* - zstdmt source repository: https://github.com/mcmilk/zstdmt
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef THREADING_H_938743
#define THREADING_H_938743
/**** skipping file: debug.h ****/
#if defined(ZSTD_MULTITHREAD) && defined(_WIN32)
/**
* Windows minimalist Pthread Wrapper
*/
#ifdef WINVER
# undef WINVER
#endif
#define WINVER 0x0600
#ifdef _WIN32_WINNT
# undef _WIN32_WINNT
#endif
#define _WIN32_WINNT 0x0600
#ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
#endif
#undef ERROR /* reported already defined on VS 2015 (Rich Geldreich) */
#include <windows.h>
#undef ERROR
#define ERROR(name) ZSTD_ERROR(name)
/* mutex */
#define ZSTD_pthread_mutex_t CRITICAL_SECTION
#define ZSTD_pthread_mutex_init(a, b) ((void)(b), InitializeCriticalSection((a)), 0)
#define ZSTD_pthread_mutex_destroy(a) DeleteCriticalSection((a))
#define ZSTD_pthread_mutex_lock(a) EnterCriticalSection((a))
#define ZSTD_pthread_mutex_unlock(a) LeaveCriticalSection((a))
/* condition variable */
#define ZSTD_pthread_cond_t CONDITION_VARIABLE
#define ZSTD_pthread_cond_init(a, b) ((void)(b), InitializeConditionVariable((a)), 0)
#define ZSTD_pthread_cond_destroy(a) ((void)(a))
#define ZSTD_pthread_cond_wait(a, b) SleepConditionVariableCS((a), (b), INFINITE)
#define ZSTD_pthread_cond_signal(a) WakeConditionVariable((a))
#define ZSTD_pthread_cond_broadcast(a) WakeAllConditionVariable((a))
/* ZSTD_pthread_create() and ZSTD_pthread_join() */
typedef HANDLE ZSTD_pthread_t;
int ZSTD_pthread_create(ZSTD_pthread_t* thread, const void* unused,
void* (*start_routine) (void*), void* arg);
int ZSTD_pthread_join(ZSTD_pthread_t thread);
/**
* add here more wrappers as required
*/
#elif defined(ZSTD_MULTITHREAD) /* posix assumed ; need a better detection method */
/* === POSIX Systems === */
# include <pthread.h>
#if DEBUGLEVEL < 1
#define ZSTD_pthread_mutex_t pthread_mutex_t
#define ZSTD_pthread_mutex_init(a, b) pthread_mutex_init((a), (b))
#define ZSTD_pthread_mutex_destroy(a) pthread_mutex_destroy((a))
#define ZSTD_pthread_mutex_lock(a) pthread_mutex_lock((a))
#define ZSTD_pthread_mutex_unlock(a) pthread_mutex_unlock((a))
#define ZSTD_pthread_cond_t pthread_cond_t
#define ZSTD_pthread_cond_init(a, b) pthread_cond_init((a), (b))
#define ZSTD_pthread_cond_destroy(a) pthread_cond_destroy((a))
#define ZSTD_pthread_cond_wait(a, b) pthread_cond_wait((a), (b))
#define ZSTD_pthread_cond_signal(a) pthread_cond_signal((a))
#define ZSTD_pthread_cond_broadcast(a) pthread_cond_broadcast((a))
#define ZSTD_pthread_t pthread_t
#define ZSTD_pthread_create(a, b, c, d) pthread_create((a), (b), (c), (d))
#define ZSTD_pthread_join(a) pthread_join((a),NULL)
#else /* DEBUGLEVEL >= 1 */
/* Debug implementation of threading.
* In this implementation we use pointers for mutexes and condition variables.
* This way, if we forget to init/destroy them the program will crash or ASAN
* will report leaks.
*/
#define ZSTD_pthread_mutex_t pthread_mutex_t*
int ZSTD_pthread_mutex_init(ZSTD_pthread_mutex_t* mutex, pthread_mutexattr_t const* attr);
int ZSTD_pthread_mutex_destroy(ZSTD_pthread_mutex_t* mutex);
#define ZSTD_pthread_mutex_lock(a) pthread_mutex_lock(*(a))
#define ZSTD_pthread_mutex_unlock(a) pthread_mutex_unlock(*(a))
#define ZSTD_pthread_cond_t pthread_cond_t*
int ZSTD_pthread_cond_init(ZSTD_pthread_cond_t* cond, pthread_condattr_t const* attr);
int ZSTD_pthread_cond_destroy(ZSTD_pthread_cond_t* cond);
#define ZSTD_pthread_cond_wait(a, b) pthread_cond_wait(*(a), *(b))
#define ZSTD_pthread_cond_signal(a) pthread_cond_signal(*(a))
#define ZSTD_pthread_cond_broadcast(a) pthread_cond_broadcast(*(a))
#define ZSTD_pthread_t pthread_t
#define ZSTD_pthread_create(a, b, c, d) pthread_create((a), (b), (c), (d))
#define ZSTD_pthread_join(a) pthread_join((a),NULL)
#endif
#else /* ZSTD_MULTITHREAD not defined */
/* No multithreading support */
typedef int ZSTD_pthread_mutex_t;
#define ZSTD_pthread_mutex_init(a, b) ((void)(a), (void)(b), 0)
#define ZSTD_pthread_mutex_destroy(a) ((void)(a))
#define ZSTD_pthread_mutex_lock(a) ((void)(a))
#define ZSTD_pthread_mutex_unlock(a) ((void)(a))
typedef int ZSTD_pthread_cond_t;
#define ZSTD_pthread_cond_init(a, b) ((void)(a), (void)(b), 0)
#define ZSTD_pthread_cond_destroy(a) ((void)(a))
#define ZSTD_pthread_cond_wait(a, b) ((void)(a), (void)(b))
#define ZSTD_pthread_cond_signal(a) ((void)(a))
#define ZSTD_pthread_cond_broadcast(a) ((void)(a))
/* do not use ZSTD_pthread_t */
#endif /* ZSTD_MULTITHREAD */
#endif /* THREADING_H_938743 */
/**** ended inlining threading.h ****/
/* create fake symbol to avoid empty translation unit warning */
int g_ZSTD_threading_useless_symbol;
#if defined(ZSTD_MULTITHREAD) && defined(_WIN32)
/**
* Windows minimalist Pthread Wrapper
*/
/* === Dependencies === */
#include <process.h>
#include <errno.h>
/* === Implementation === */
typedef struct {
void* (*start_routine)(void*);
void* arg;
int initialized;
ZSTD_pthread_cond_t initialized_cond;
ZSTD_pthread_mutex_t initialized_mutex;
} ZSTD_thread_params_t;
static unsigned __stdcall worker(void *arg)
{
void* (*start_routine)(void*);
void* thread_arg;
/* Initialize thread_arg and start_routine, then signal the main thread
 * that it no longer needs to wait for us.
 */
{
ZSTD_thread_params_t* thread_param = (ZSTD_thread_params_t*)arg;
thread_arg = thread_param->arg;
start_routine = thread_param->start_routine;
/* Signal main thread that we are running and do not depend on its memory anymore */
ZSTD_pthread_mutex_lock(&thread_param->initialized_mutex);
thread_param->initialized = 1;
ZSTD_pthread_cond_signal(&thread_param->initialized_cond);
ZSTD_pthread_mutex_unlock(&thread_param->initialized_mutex);
}
start_routine(thread_arg);
return 0;
}
int ZSTD_pthread_create(ZSTD_pthread_t* thread, const void* unused,
void* (*start_routine) (void*), void* arg)
{
ZSTD_thread_params_t thread_param;
(void)unused;
if (thread==NULL) return -1;
*thread = NULL;
thread_param.start_routine = start_routine;
thread_param.arg = arg;
thread_param.initialized = 0;
/* Setup thread initialization synchronization */
if(ZSTD_pthread_cond_init(&thread_param.initialized_cond, NULL)) {
/* Should never happen on Windows */
return -1;
}
if(ZSTD_pthread_mutex_init(&thread_param.initialized_mutex, NULL)) {
/* Should never happen on Windows */
ZSTD_pthread_cond_destroy(&thread_param.initialized_cond);
return -1;
}
/* Spawn thread */
*thread = (HANDLE)_beginthreadex(NULL, 0, worker, &thread_param, 0, NULL);
if (*thread==NULL) {
ZSTD_pthread_mutex_destroy(&thread_param.initialized_mutex);
ZSTD_pthread_cond_destroy(&thread_param.initialized_cond);
return errno;
}
/* Wait for thread to be initialized */
ZSTD_pthread_mutex_lock(&thread_param.initialized_mutex);
while(!thread_param.initialized) {
ZSTD_pthread_cond_wait(&thread_param.initialized_cond, &thread_param.initialized_mutex);
}
ZSTD_pthread_mutex_unlock(&thread_param.initialized_mutex);
ZSTD_pthread_mutex_destroy(&thread_param.initialized_mutex);
ZSTD_pthread_cond_destroy(&thread_param.initialized_cond);
return 0;
}
int ZSTD_pthread_join(ZSTD_pthread_t thread)
{
DWORD result;
if (!thread) return 0;
result = WaitForSingleObject(thread, INFINITE);
CloseHandle(thread);
switch (result) {
case WAIT_OBJECT_0:
return 0;
case WAIT_ABANDONED:
return EINVAL;
default:
return GetLastError();
}
}
#endif /* ZSTD_MULTITHREAD */
#if defined(ZSTD_MULTITHREAD) && DEBUGLEVEL >= 1 && !defined(_WIN32)
#define ZSTD_DEPS_NEED_MALLOC
/**** skipping file: zstd_deps.h ****/
int ZSTD_pthread_mutex_init(ZSTD_pthread_mutex_t* mutex, pthread_mutexattr_t const* attr)
{
assert(mutex != NULL);
*mutex = (pthread_mutex_t*)ZSTD_malloc(sizeof(pthread_mutex_t));
if (!*mutex)
return 1;
return pthread_mutex_init(*mutex, attr);
}
int ZSTD_pthread_mutex_destroy(ZSTD_pthread_mutex_t* mutex)
{
assert(mutex != NULL);
if (!*mutex)
return 0;
{
int const ret = pthread_mutex_destroy(*mutex);
ZSTD_free(*mutex);
return ret;
}
}
int ZSTD_pthread_cond_init(ZSTD_pthread_cond_t* cond, pthread_condattr_t const* attr)
{
assert(cond != NULL);
*cond = (pthread_cond_t*)ZSTD_malloc(sizeof(pthread_cond_t));
if (!*cond)
return 1;
return pthread_cond_init(*cond, attr);
}
int ZSTD_pthread_cond_destroy(ZSTD_pthread_cond_t* cond)
{
assert(cond != NULL);
if (!*cond)
return 0;
{
int const ret = pthread_cond_destroy(*cond);
ZSTD_free(*cond);
return ret;
}
}
#endif
/**** ended inlining common/threading.c ****/
/**** start inlining common/pool.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* ====== Dependencies ======= */
/**** start inlining ../common/allocations.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* This file provides custom allocation primitives
*/
#define ZSTD_DEPS_NEED_MALLOC
/**** skipping file: zstd_deps.h ****/
/**** skipping file: compiler.h ****/
#define ZSTD_STATIC_LINKING_ONLY
/**** *NOT* inlining ../zstd.h ****/
#include "zstd.h" /* ZSTD_customMem */
#ifndef ZSTD_ALLOCATIONS_H
#define ZSTD_ALLOCATIONS_H
/* custom memory allocation functions */
MEM_STATIC void* ZSTD_customMalloc(size_t size, ZSTD_customMem customMem)
{
if (customMem.customAlloc)
return customMem.customAlloc(customMem.opaque, size);
return ZSTD_malloc(size);
}
MEM_STATIC void* ZSTD_customCalloc(size_t size, ZSTD_customMem customMem)
{
if (customMem.customAlloc) {
/* calloc implemented as malloc+memset;
* not as efficient as calloc, but next best guess for custom malloc */
void* const ptr = customMem.customAlloc(customMem.opaque, size);
ZSTD_memset(ptr, 0, size);
return ptr;
}
return ZSTD_calloc(1, size);
}
MEM_STATIC void ZSTD_customFree(void* ptr, ZSTD_customMem customMem)
{
if (ptr!=NULL) {
if (customMem.customFree)
customMem.customFree(customMem.opaque, ptr);
else
ZSTD_free(ptr);
}
}
#endif /* ZSTD_ALLOCATIONS_H */
/**** ended inlining ../common/allocations.h ****/
/**** skipping file: zstd_deps.h ****/
/**** skipping file: debug.h ****/
/**** start inlining pool.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef POOL_H
#define POOL_H
/**** skipping file: zstd_deps.h ****/
#define ZSTD_STATIC_LINKING_ONLY /* ZSTD_customMem */
/**** skipping file: ../zstd.h ****/
typedef struct POOL_ctx_s POOL_ctx;
/*! POOL_create() :
* Create a thread pool with at most `numThreads` threads.
* `numThreads` must be at least 1.
* The maximum number of queued jobs before blocking is `queueSize`.
* @return : POOL_ctx pointer on success, else NULL.
*/
POOL_ctx* POOL_create(size_t numThreads, size_t queueSize);
POOL_ctx* POOL_create_advanced(size_t numThreads, size_t queueSize,
ZSTD_customMem customMem);
/*! POOL_free() :
* Free a thread pool returned by POOL_create().
*/
void POOL_free(POOL_ctx* ctx);
/*! POOL_joinJobs() :
* Waits for all queued jobs to finish executing.
*/
void POOL_joinJobs(POOL_ctx* ctx);
/*! POOL_resize() :
* Expands or shrinks pool's number of threads.
* This is more efficient than releasing + creating a new context,
* since it tries to preserve and reuse existing threads.
* `numThreads` must be at least 1.
* @return : 0 when resize was successful,
* !0 (typically 1) if there is an error.
* note : only numThreads can be resized, queueSize remains unchanged.
*/
int POOL_resize(POOL_ctx* ctx, size_t numThreads);
/*! POOL_sizeof() :
* @return threadpool memory usage
* note : compatible with NULL (returns 0 in this case)
*/
size_t POOL_sizeof(const POOL_ctx* ctx);
/*! POOL_function :
* The function type that can be added to a thread pool.
*/
typedef void (*POOL_function)(void*);
/*! POOL_add() :
* Add the job `function(opaque)` to the thread pool. `ctx` must be valid.
* Possibly blocks until there is room in the queue.
* Note : The function may be executed asynchronously,
* therefore `opaque` must remain valid until the function has completed.
*/
void POOL_add(POOL_ctx* ctx, POOL_function function, void* opaque);
/*! POOL_tryAdd() :
* Add the job `function(opaque)` to thread pool _if_ a queue slot is available.
* Returns immediately even if not (does not block).
* @return : 1 if successful, 0 if not.
*/
int POOL_tryAdd(POOL_ctx* ctx, POOL_function function, void* opaque);
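/* Minimal usage sketch (illustrative only; myJob is a hypothetical job function) :
 *
 *   static void myJob(void* opaque) { (void)opaque; }
 *
 *   POOL_ctx* const pool = POOL_create(4, 8);   // 4 threads, queue of 8 jobs
 *   if (pool != NULL) {
 *       POOL_add(pool, myJob, NULL);    // may block until a queue slot is free
 *       POOL_joinJobs(pool);            // wait for all queued jobs to finish
 *       POOL_free(pool);
 *   }
 */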
#endif
/**** ended inlining pool.h ****/
/* ====== Compiler specifics ====== */
#if defined(_MSC_VER)
# pragma warning(disable : 4204) /* disable: C4204: non-constant aggregate initializer */
#endif
#ifdef ZSTD_MULTITHREAD
/**** skipping file: threading.h ****/
/* A job is a function and an opaque argument */
typedef struct POOL_job_s {
POOL_function function;
void *opaque;
} POOL_job;
struct POOL_ctx_s {
ZSTD_customMem customMem;
/* Keep track of the threads */
ZSTD_pthread_t* threads;
size_t threadCapacity;
size_t threadLimit;
/* The queue is a circular buffer */
POOL_job *queue;
size_t queueHead;
size_t queueTail;
size_t queueSize;
/* The number of threads working on jobs */
size_t numThreadsBusy;
/* Indicates if the queue is empty */
int queueEmpty;
/* The mutex protects the queue */
ZSTD_pthread_mutex_t queueMutex;
/* Condition variable for pushers to wait on when the queue is full */
ZSTD_pthread_cond_t queuePushCond;
/* Condition variables for poppers to wait on when the queue is empty */
ZSTD_pthread_cond_t queuePopCond;
/* Indicates if the queue is shutting down */
int shutdown;
};
/* POOL_thread() :
* Work thread for the thread pool.
* Waits for jobs and executes them.
* @return : NULL if the ctx passed through opaque is NULL, else the opaque pointer (non-NULL).
*/
static void* POOL_thread(void* opaque) {
POOL_ctx* const ctx = (POOL_ctx*)opaque;
if (!ctx) { return NULL; }
for (;;) {
/* Lock the mutex and wait for a non-empty queue or until shutdown */
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
while ( ctx->queueEmpty
|| (ctx->numThreadsBusy >= ctx->threadLimit) ) {
if (ctx->shutdown) {
/* even if !queueEmpty (possible if numThreadsBusy >= threadLimit),
* a few threads may be shut down while !queueEmpty,
* but enough threads will remain active to finish the queue */
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
return opaque;
}
ZSTD_pthread_cond_wait(&ctx->queuePopCond, &ctx->queueMutex);
}
/* Pop a job off the queue */
{ POOL_job const job = ctx->queue[ctx->queueHead];
ctx->queueHead = (ctx->queueHead + 1) % ctx->queueSize;
ctx->numThreadsBusy++;
ctx->queueEmpty = (ctx->queueHead == ctx->queueTail);
/* Unlock the mutex, signal a pusher, and run the job */
ZSTD_pthread_cond_signal(&ctx->queuePushCond);
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
job.function(job.opaque);
/* If the intended queue size was 0, signal after finishing job */
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
ctx->numThreadsBusy--;
ZSTD_pthread_cond_signal(&ctx->queuePushCond);
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
}
} /* for (;;) */
assert(0); /* Unreachable */
}
/* ZSTD_createThreadPool() : public access point */
POOL_ctx* ZSTD_createThreadPool(size_t numThreads) {
return POOL_create (numThreads, 0);
}
POOL_ctx* POOL_create(size_t numThreads, size_t queueSize) {
return POOL_create_advanced(numThreads, queueSize, ZSTD_defaultCMem);
}
POOL_ctx* POOL_create_advanced(size_t numThreads, size_t queueSize,
ZSTD_customMem customMem)
{
POOL_ctx* ctx;
/* Check parameters */
if (!numThreads) { return NULL; }
/* Allocate the context and zero initialize */
ctx = (POOL_ctx*)ZSTD_customCalloc(sizeof(POOL_ctx), customMem);
if (!ctx) { return NULL; }
/* Initialize the job queue.
* It needs one extra space since one space is wasted to differentiate
* empty and full queues.
*/
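/* e.g. a requested queueSize of 2 allocates 3 slots : at most 2 jobs can be
 * queued at once, and the full condition tested by isQueueFull() is
 * queueHead == (queueTail + 1) % queueSize. */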
ctx->queueSize = queueSize + 1;
ctx->queue = (POOL_job*)ZSTD_customCalloc(ctx->queueSize * sizeof(POOL_job), customMem);
ctx->queueHead = 0;
ctx->queueTail = 0;
ctx->numThreadsBusy = 0;
ctx->queueEmpty = 1;
{
int error = 0;
error |= ZSTD_pthread_mutex_init(&ctx->queueMutex, NULL);
error |= ZSTD_pthread_cond_init(&ctx->queuePushCond, NULL);
error |= ZSTD_pthread_cond_init(&ctx->queuePopCond, NULL);
if (error) { POOL_free(ctx); return NULL; }
}
ctx->shutdown = 0;
/* Allocate space for the thread handles */
ctx->threads = (ZSTD_pthread_t*)ZSTD_customCalloc(numThreads * sizeof(ZSTD_pthread_t), customMem);
ctx->threadCapacity = 0;
ctx->customMem = customMem;
/* Check for errors */
if (!ctx->threads || !ctx->queue) { POOL_free(ctx); return NULL; }
/* Initialize the threads */
{ size_t i;
for (i = 0; i < numThreads; ++i) {
if (ZSTD_pthread_create(&ctx->threads[i], NULL, &POOL_thread, ctx)) {
ctx->threadCapacity = i;
POOL_free(ctx);
return NULL;
} }
ctx->threadCapacity = numThreads;
ctx->threadLimit = numThreads;
}
return ctx;
}
/*! POOL_join() :
Shutdown the queue, wake any sleeping threads, and join all of the threads.
*/
static void POOL_join(POOL_ctx* ctx) {
/* Shut down the queue */
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
ctx->shutdown = 1;
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
/* Wake up sleeping threads */
ZSTD_pthread_cond_broadcast(&ctx->queuePushCond);
ZSTD_pthread_cond_broadcast(&ctx->queuePopCond);
/* Join all of the threads */
{ size_t i;
for (i = 0; i < ctx->threadCapacity; ++i) {
ZSTD_pthread_join(ctx->threads[i]); /* note : could fail */
} }
}
void POOL_free(POOL_ctx *ctx) {
if (!ctx) { return; }
POOL_join(ctx);
ZSTD_pthread_mutex_destroy(&ctx->queueMutex);
ZSTD_pthread_cond_destroy(&ctx->queuePushCond);
ZSTD_pthread_cond_destroy(&ctx->queuePopCond);
ZSTD_customFree(ctx->queue, ctx->customMem);
ZSTD_customFree(ctx->threads, ctx->customMem);
ZSTD_customFree(ctx, ctx->customMem);
}
/*! POOL_joinJobs() :
* Waits for all queued jobs to finish executing.
*/
void POOL_joinJobs(POOL_ctx* ctx) {
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
while(!ctx->queueEmpty || ctx->numThreadsBusy > 0) {
ZSTD_pthread_cond_wait(&ctx->queuePushCond, &ctx->queueMutex);
}
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
}
void ZSTD_freeThreadPool (ZSTD_threadPool* pool) {
POOL_free (pool);
}
size_t POOL_sizeof(const POOL_ctx* ctx) {
if (ctx==NULL) return 0; /* supports sizeof NULL */
return sizeof(*ctx)
+ ctx->queueSize * sizeof(POOL_job)
+ ctx->threadCapacity * sizeof(ZSTD_pthread_t);
}
/* @return : 0 on success, 1 on error */
static int POOL_resize_internal(POOL_ctx* ctx, size_t numThreads)
{
if (numThreads <= ctx->threadCapacity) {
if (!numThreads) return 1;
ctx->threadLimit = numThreads;
return 0;
}
/* numThreads > threadCapacity */
{ ZSTD_pthread_t* const threadPool = (ZSTD_pthread_t*)ZSTD_customCalloc(numThreads * sizeof(ZSTD_pthread_t), ctx->customMem);
if (!threadPool) return 1;
/* replace existing thread pool */
ZSTD_memcpy(threadPool, ctx->threads, ctx->threadCapacity * sizeof(ZSTD_pthread_t));
ZSTD_customFree(ctx->threads, ctx->customMem);
ctx->threads = threadPool;
/* Initialize additional threads */
{ size_t threadId;
for (threadId = ctx->threadCapacity; threadId < numThreads; ++threadId) {
if (ZSTD_pthread_create(&threadPool[threadId], NULL, &POOL_thread, ctx)) {
ctx->threadCapacity = threadId;
return 1;
} }
} }
/* successfully expanded */
ctx->threadCapacity = numThreads;
ctx->threadLimit = numThreads;
return 0;
}
/* @return : 0 on success, 1 on error */
int POOL_resize(POOL_ctx* ctx, size_t numThreads)
{
int result;
if (ctx==NULL) return 1;
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
result = POOL_resize_internal(ctx, numThreads);
ZSTD_pthread_cond_broadcast(&ctx->queuePopCond);
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
return result;
}
/**
* Returns 1 if the queue is full and 0 otherwise.
*
* When queueSize is 1 (the pool was created with an intended queueSize of 0),
* the queue is considered full unless a thread is free _and_ no job is waiting.
*/
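/* e.g. with an intended queueSize of 0 (ctx->queueSize == 1), POOL_add()
 * effectively hands each job directly to a worker : it blocks whenever all
 * threads are busy or a job is already pending. */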
static int isQueueFull(POOL_ctx const* ctx) {
if (ctx->queueSize > 1) {
return ctx->queueHead == ((ctx->queueTail + 1) % ctx->queueSize);
} else {
return (ctx->numThreadsBusy == ctx->threadLimit) ||
!ctx->queueEmpty;
}
}
static void
POOL_add_internal(POOL_ctx* ctx, POOL_function function, void *opaque)
{
POOL_job job;
job.function = function;
job.opaque = opaque;
assert(ctx != NULL);
if (ctx->shutdown) return;
ctx->queueEmpty = 0;
ctx->queue[ctx->queueTail] = job;
ctx->queueTail = (ctx->queueTail + 1) % ctx->queueSize;
ZSTD_pthread_cond_signal(&ctx->queuePopCond);
}
void POOL_add(POOL_ctx* ctx, POOL_function function, void* opaque)
{
assert(ctx != NULL);
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
/* Wait until there is space in the queue for the new job */
while (isQueueFull(ctx) && (!ctx->shutdown)) {
ZSTD_pthread_cond_wait(&ctx->queuePushCond, &ctx->queueMutex);
}
POOL_add_internal(ctx, function, opaque);
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
}
int POOL_tryAdd(POOL_ctx* ctx, POOL_function function, void* opaque)
{
assert(ctx != NULL);
ZSTD_pthread_mutex_lock(&ctx->queueMutex);
if (isQueueFull(ctx)) {
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
return 0;
}
POOL_add_internal(ctx, function, opaque);
ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
return 1;
}
#else /* ZSTD_MULTITHREAD not defined */
/* ========================== */
/* No multi-threading support */
/* ========================== */
/* We don't need any data, but if it is empty, malloc() might return NULL. */
struct POOL_ctx_s {
int dummy;
};
static POOL_ctx g_poolCtx;
POOL_ctx* POOL_create(size_t numThreads, size_t queueSize) {
return POOL_create_advanced(numThreads, queueSize, ZSTD_defaultCMem);
}
POOL_ctx*
POOL_create_advanced(size_t numThreads, size_t queueSize, ZSTD_customMem customMem)
{
(void)numThreads;
(void)queueSize;
(void)customMem;
return &g_poolCtx;
}
void POOL_free(POOL_ctx* ctx) {
assert(!ctx || ctx == &g_poolCtx);
(void)ctx;
}
void POOL_joinJobs(POOL_ctx* ctx){
assert(!ctx || ctx == &g_poolCtx);
(void)ctx;
}
int POOL_resize(POOL_ctx* ctx, size_t numThreads) {
(void)ctx; (void)numThreads;
return 0;
}
void POOL_add(POOL_ctx* ctx, POOL_function function, void* opaque) {
(void)ctx;
function(opaque);
}
int POOL_tryAdd(POOL_ctx* ctx, POOL_function function, void* opaque) {
(void)ctx;
function(opaque);
return 1;
}
size_t POOL_sizeof(const POOL_ctx* ctx) {
if (ctx==NULL) return 0; /* supports sizeof NULL */
assert(ctx == &g_poolCtx);
return sizeof(*ctx);
}
#endif /* ZSTD_MULTITHREAD */
/**** ended inlining common/pool.c ****/
/**** start inlining common/zstd_common.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-*************************************
* Dependencies
***************************************/
#define ZSTD_DEPS_NEED_MALLOC
/**** skipping file: error_private.h ****/
/**** start inlining zstd_internal.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_CCOMMON_H_MODULE
#define ZSTD_CCOMMON_H_MODULE
/* this module contains definitions which must be identical
* across compression, decompression and dictBuilder.
* It also contains a few functions useful to at least 2 of them
* and which benefit from being inlined */
/*-*************************************
* Dependencies
***************************************/
/**** skipping file: compiler.h ****/
/**** start inlining cpu.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_COMMON_CPU_H
#define ZSTD_COMMON_CPU_H
/**
* Implementation taken from folly/CpuId.h
* https://github.com/facebook/folly/blob/master/folly/CpuId.h
*/
/**** skipping file: mem.h ****/
#ifdef _MSC_VER
#include <intrin.h>
#endif
typedef struct {
U32 f1c;
U32 f1d;
U32 f7b;
U32 f7c;
} ZSTD_cpuid_t;
MEM_STATIC ZSTD_cpuid_t ZSTD_cpuid(void) {
U32 f1c = 0;
U32 f1d = 0;
U32 f7b = 0;
U32 f7c = 0;
#if defined(_MSC_VER) && (defined(_M_X64) || defined(_M_IX86))
#if !defined(_M_X64) || !defined(__clang__) || __clang_major__ >= 16
int reg[4];
__cpuid((int*)reg, 0);
{
int const n = reg[0];
if (n >= 1) {
__cpuid((int*)reg, 1);
f1c = (U32)reg[2];
f1d = (U32)reg[3];
}
if (n >= 7) {
__cpuidex((int*)reg, 7, 0);
f7b = (U32)reg[1];
f7c = (U32)reg[2];
}
}
#else
/* Clang compiler has a bug (fixed in https://reviews.llvm.org/D101338) in
* which the `__cpuid` intrinsic does not save and restore `rbx` as it needs
* to due to being a reserved register. So in that case, do the `cpuid`
* ourselves. Clang supports inline assembly anyway.
*/
U32 n;
__asm__(
"pushq %%rbx\n\t"
"cpuid\n\t"
"popq %%rbx\n\t"
: "=a"(n)
: "a"(0)
: "rcx", "rdx");
if (n >= 1) {
U32 f1a;
__asm__(
"pushq %%rbx\n\t"
"cpuid\n\t"
"popq %%rbx\n\t"
: "=a"(f1a), "=c"(f1c), "=d"(f1d)
: "a"(1)
:);
}
if (n >= 7) {
__asm__(
"pushq %%rbx\n\t"
"cpuid\n\t"
"movq %%rbx, %%rax\n\t"
"popq %%rbx"
: "=a"(f7b), "=c"(f7c)
: "a"(7), "c"(0)
: "rdx");
}
#endif
#elif defined(__i386__) && defined(__PIC__) && !defined(__clang__) && defined(__GNUC__)
/* The following block like the normal cpuid branch below, but gcc
* reserves ebx for use of its pic register so we must specially
* handle the save and restore to avoid clobbering the register
*/
U32 n;
__asm__(
"pushl %%ebx\n\t"
"cpuid\n\t"
"popl %%ebx\n\t"
: "=a"(n)
: "a"(0)
: "ecx", "edx");
if (n >= 1) {
U32 f1a;
__asm__(
"pushl %%ebx\n\t"
"cpuid\n\t"
"popl %%ebx\n\t"
: "=a"(f1a), "=c"(f1c), "=d"(f1d)
: "a"(1));
}
if (n >= 7) {
__asm__(
"pushl %%ebx\n\t"
"cpuid\n\t"
"movl %%ebx, %%eax\n\t"
"popl %%ebx"
: "=a"(f7b), "=c"(f7c)
: "a"(7), "c"(0)
: "edx");
}
#elif defined(__x86_64__) || defined(_M_X64) || defined(__i386__)
U32 n;
__asm__("cpuid" : "=a"(n) : "a"(0) : "ebx", "ecx", "edx");
if (n >= 1) {
U32 f1a;
__asm__("cpuid" : "=a"(f1a), "=c"(f1c), "=d"(f1d) : "a"(1) : "ebx");
}
if (n >= 7) {
U32 f7a;
__asm__("cpuid"
: "=a"(f7a), "=b"(f7b), "=c"(f7c)
: "a"(7), "c"(0)
: "edx");
}
#endif
{
ZSTD_cpuid_t cpuid;
cpuid.f1c = f1c;
cpuid.f1d = f1d;
cpuid.f7b = f7b;
cpuid.f7c = f7c;
return cpuid;
}
}
#define X(name, r, bit) \
MEM_STATIC int ZSTD_cpuid_##name(ZSTD_cpuid_t const cpuid) { \
return ((cpuid.r) & (1U << bit)) != 0; \
}
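/* Each X(name, reg, bit) invocation below generates a small predicate.
 * e.g. B(bmi2, 8) expands to
 *   MEM_STATIC int ZSTD_cpuid_bmi2(ZSTD_cpuid_t const cpuid) { return ((cpuid.f7b) & (1U << 8)) != 0; }
 * i.e. it tests bit 8 of the EBX output of cpuid(7). */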
/* cpuid(1): Processor Info and Feature Bits. */
#define C(name, bit) X(name, f1c, bit)
C(sse3, 0)
C(pclmuldq, 1)
C(dtes64, 2)
C(monitor, 3)
C(dscpl, 4)
C(vmx, 5)
C(smx, 6)
C(eist, 7)
C(tm2, 8)
C(ssse3, 9)
C(cnxtid, 10)
C(fma, 12)
C(cx16, 13)
C(xtpr, 14)
C(pdcm, 15)
C(pcid, 17)
C(dca, 18)
C(sse41, 19)
C(sse42, 20)
C(x2apic, 21)
C(movbe, 22)
C(popcnt, 23)
C(tscdeadline, 24)
C(aes, 25)
C(xsave, 26)
C(osxsave, 27)
C(avx, 28)
C(f16c, 29)
C(rdrand, 30)
#undef C
#define D(name, bit) X(name, f1d, bit)
D(fpu, 0)
D(vme, 1)
D(de, 2)
D(pse, 3)
D(tsc, 4)
D(msr, 5)
D(pae, 6)
D(mce, 7)
D(cx8, 8)
D(apic, 9)
D(sep, 11)
D(mtrr, 12)
D(pge, 13)
D(mca, 14)
D(cmov, 15)
D(pat, 16)
D(pse36, 17)
D(psn, 18)
D(clfsh, 19)
D(ds, 21)
D(acpi, 22)
D(mmx, 23)
D(fxsr, 24)
D(sse, 25)
D(sse2, 26)
D(ss, 27)
D(htt, 28)
D(tm, 29)
D(pbe, 31)
#undef D
/* cpuid(7): Extended Features. */
#define B(name, bit) X(name, f7b, bit)
B(bmi1, 3)
B(hle, 4)
B(avx2, 5)
B(smep, 7)
B(bmi2, 8)
B(erms, 9)
B(invpcid, 10)
B(rtm, 11)
B(mpx, 14)
B(avx512f, 16)
B(avx512dq, 17)
B(rdseed, 18)
B(adx, 19)
B(smap, 20)
B(avx512ifma, 21)
B(pcommit, 22)
B(clflushopt, 23)
B(clwb, 24)
B(avx512pf, 26)
B(avx512er, 27)
B(avx512cd, 28)
B(sha, 29)
B(avx512bw, 30)
B(avx512vl, 31)
#undef B
#define C(name, bit) X(name, f7c, bit)
C(prefetchwt1, 0)
C(avx512vbmi, 1)
#undef C
#undef X
#endif /* ZSTD_COMMON_CPU_H */
/**** ended inlining cpu.h ****/
/**** skipping file: mem.h ****/
/**** skipping file: debug.h ****/
/**** skipping file: error_private.h ****/
#define ZSTD_STATIC_LINKING_ONLY
/**** skipping file: ../zstd.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: fse.h ****/
/**** skipping file: huf.h ****/
#ifndef XXH_STATIC_LINKING_ONLY
# define XXH_STATIC_LINKING_ONLY /* XXH64_state_t */
#endif
/**** start inlining xxhash.h ****/
/*
* xxHash - Extremely Fast Hash algorithm
* Header File
* Copyright (c) Yann Collet - Meta Platforms, Inc
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* Local adaptations for Zstandard */
#ifndef XXH_NO_XXH3
# define XXH_NO_XXH3
#endif
#ifndef XXH_NAMESPACE
# define XXH_NAMESPACE ZSTD_
#endif
/*!
* @mainpage xxHash
*
* xxHash is an extremely fast non-cryptographic hash algorithm, working at RAM speed
* limits.
*
* It is proposed in four flavors, in three families:
* 1. @ref XXH32_family
* - Classic 32-bit hash function. Simple, compact, and runs on almost all
* 32-bit and 64-bit systems.
* 2. @ref XXH64_family
* - Classic 64-bit adaptation of XXH32. Just as simple, and runs well on most
* 64-bit systems (but _not_ 32-bit systems).
* 3. @ref XXH3_family
* - Modern 64-bit and 128-bit hash function family which features improved
* strength and performance across the board, especially on smaller data.
* It benefits greatly from SIMD and 64-bit without requiring it.
*
* Benchmarks
* ---
* The reference system uses an Intel i7-9700K CPU, and runs Ubuntu x64 20.04.
* The open source benchmark program is compiled with clang v10.0 using -O3 flag.
*
* | Hash Name | ISA ext | Width | Large Data Speed | Small Data Velocity |
* | -------------------- | ------- | ----: | ---------------: | ------------------: |
* | XXH3_64bits() | @b AVX2 | 64 | 59.4 GB/s | 133.1 |
* | MeowHash | AES-NI | 128 | 58.2 GB/s | 52.5 |
* | XXH3_128bits() | @b AVX2 | 128 | 57.9 GB/s | 118.1 |
* | CLHash | PCLMUL | 64 | 37.1 GB/s | 58.1 |
* | XXH3_64bits() | @b SSE2 | 64 | 31.5 GB/s | 133.1 |
* | XXH3_128bits() | @b SSE2 | 128 | 29.6 GB/s | 118.1 |
* | RAM sequential read | | N/A | 28.0 GB/s | N/A |
* | ahash | AES-NI | 64 | 22.5 GB/s | 107.2 |
* | City64 | | 64 | 22.0 GB/s | 76.6 |
* | T1ha2 | | 64 | 22.0 GB/s | 99.0 |
* | City128 | | 128 | 21.7 GB/s | 57.7 |
* | FarmHash | AES-NI | 64 | 21.3 GB/s | 71.9 |
* | XXH64() | | 64 | 19.4 GB/s | 71.0 |
* | SpookyHash | | 64 | 19.3 GB/s | 53.2 |
* | Mum | | 64 | 18.0 GB/s | 67.0 |
* | CRC32C | SSE4.2 | 32 | 13.0 GB/s | 57.9 |
* | XXH32() | | 32 | 9.7 GB/s | 71.9 |
* | City32 | | 32 | 9.1 GB/s | 66.0 |
* | Blake3* | @b AVX2 | 256 | 4.4 GB/s | 8.1 |
* | Murmur3 | | 32 | 3.9 GB/s | 56.1 |
* | SipHash* | | 64 | 3.0 GB/s | 43.2 |
* | Blake3* | @b SSE2 | 256 | 2.4 GB/s | 8.1 |
* | HighwayHash | | 64 | 1.4 GB/s | 6.0 |
* | FNV64 | | 64 | 1.2 GB/s | 62.7 |
* | Blake2* | | 256 | 1.1 GB/s | 5.1 |
* | SHA1* | | 160 | 0.8 GB/s | 5.6 |
* | MD5* | | 128 | 0.6 GB/s | 7.8 |
* @note
* - Hashes which require a specific ISA extension are noted. SSE2 is also noted,
* even though it is mandatory on x64.
* - Hashes with an asterisk are cryptographic. Note that MD5 is non-cryptographic
* by modern standards.
* - Small data velocity is a rough average of the algorithm's efficiency on small
* data. For more accurate information, see the wiki.
* - More benchmarks and strength tests are found on the wiki:
* https://github.com/Cyan4973/xxHash/wiki
*
* Usage
* ------
* All xxHash variants use a similar API. Changing the algorithm is a trivial
* substitution.
*
* @pre
* For functions which take an input and length parameter, the following
* requirements are assumed:
* - The range from [`input`, `input + length`) is valid, readable memory.
* - The only exception is if the `length` is `0`, `input` may be `NULL`.
* - For C++, the objects must have the *TriviallyCopyable* property, as the
* functions access bytes directly as if it was an array of `unsigned char`.
*
* @anchor single_shot_example
* **Single Shot**
*
* These functions are stateless functions which hash a contiguous block of memory,
* immediately returning the result. They are the easiest and usually the fastest
* option.
*
* XXH32(), XXH64(), XXH3_64bits(), XXH3_128bits()
*
* @code{.c}
* #include <string.h>
* #include "xxhash.h"
*
* // Example for a function which hashes a null terminated string with XXH32().
* XXH32_hash_t hash_string(const char* string, XXH32_hash_t seed)
* {
* // NULL pointers are only valid if the length is zero
* size_t length = (string == NULL) ? 0 : strlen(string);
* return XXH32(string, length, seed);
* }
* @endcode
*
*
* @anchor streaming_example
* **Streaming**
*
* These groups of functions allow incremental hashing of unknown size, even
* more than what would fit in a size_t.
*
* XXH32_reset(), XXH64_reset(), XXH3_64bits_reset(), XXH3_128bits_reset()
*
* @code{.c}
* #include <stdio.h>
* #include <assert.h>
* #include "xxhash.h"
* // Example for a function which hashes a FILE incrementally with XXH3_64bits().
* XXH64_hash_t hashFile(FILE* f)
* {
* // Allocate a state struct. Do not just use malloc() or new.
* XXH3_state_t* state = XXH3_createState();
* assert(state != NULL && "Out of memory!");
* // Reset the state to start a new hashing session.
* XXH3_64bits_reset(state);
* char buffer[4096];
* size_t count;
* // Read the file in chunks
* while ((count = fread(buffer, 1, sizeof(buffer), f)) != 0) {
* // Run update() as many times as necessary to process the data
* XXH3_64bits_update(state, buffer, count);
* }
* // Retrieve the finalized hash. This will not change the state.
* XXH64_hash_t result = XXH3_64bits_digest(state);
* // Free the state. Do not use free().
* XXH3_freeState(state);
* return result;
* }
* @endcode
*
* Streaming functions generate the xxHash value from an incremental input.
* This method is slower than single-call functions, due to state management.
* For small inputs, prefer `XXH32()` and `XXH64()`, which are better optimized.
*
* An XXH state must first be allocated using `XXH*_createState()`.
*
* Start a new hash by initializing the state with a seed using `XXH*_reset()`.
*
* Then, feed the hash state by calling `XXH*_update()` as many times as necessary.
*
* The function returns an error code, with 0 meaning OK, and any other value
* meaning there is an error.
*
* Finally, a hash value can be produced anytime, by using `XXH*_digest()`.
* This function returns the nn-bits hash as an int or long long.
*
* It's still possible to continue inserting input into the hash state after a
* digest, and generate new hash values later on by invoking `XXH*_digest()`.
*
* When done, release the state using `XXH*_freeState()`.
*
*
* @anchor canonical_representation_example
* **Canonical Representation**
*
* The default return values from XXH functions are unsigned 32, 64 and 128 bit
* integers.
* This is the simplest and fastest format for further post-processing.
*
* However, this leaves open the question of what is the order on the byte level,
* since little and big endian conventions will store the same number differently.
*
* The canonical representation settles this issue by mandating big-endian
* convention, the same convention as human-readable numbers (large digits first).
*
* When writing hash values to storage, sending them over a network, or printing
* them, it's highly recommended to use the canonical representation to ensure
* portability across a wider range of systems, present and future.
*
* The following functions allow transformation of hash values to and from
* canonical format.
*
* XXH32_canonicalFromHash(), XXH32_hashFromCanonical(),
* XXH64_canonicalFromHash(), XXH64_hashFromCanonical(),
* XXH128_canonicalFromHash(), XXH128_hashFromCanonical(),
*
* @code{.c}
* #include <stdio.h>
* #include "xxhash.h"
*
* // Example for a function which prints XXH32_hash_t in human readable format
* void printXxh32(XXH32_hash_t hash)
* {
* XXH32_canonical_t cano;
* XXH32_canonicalFromHash(&cano, hash);
* size_t i;
* for(i = 0; i < sizeof(cano.digest); ++i) {
* printf("%02x", cano.digest[i]);
* }
* printf("\n");
* }
*
* // Example for a function which converts XXH32_canonical_t to XXH32_hash_t
* XXH32_hash_t convertCanonicalToXxh32(XXH32_canonical_t cano)
* {
* XXH32_hash_t hash = XXH32_hashFromCanonical(&cano);
* return hash;
* }
* @endcode
*
*
* @file xxhash.h
* xxHash prototypes and implementation
*/
/* ****************************
* INLINE mode
******************************/
/*!
* @defgroup public Public API
* Contains details on the public xxHash functions.
* @{
*/
#ifdef XXH_DOXYGEN
/*!
* @brief Gives access to internal state declaration, required for static allocation.
*
* Incompatible with dynamic linking, due to risks of ABI changes.
*
* Usage:
* @code{.c}
* #define XXH_STATIC_LINKING_ONLY
* #include "xxhash.h"
* @endcode
*/
# define XXH_STATIC_LINKING_ONLY
/* Do not undef XXH_STATIC_LINKING_ONLY for Doxygen */
/*!
* @brief Gives access to internal definitions.
*
* Usage:
* @code{.c}
* #define XXH_STATIC_LINKING_ONLY
* #define XXH_IMPLEMENTATION
* #include "xxhash.h"
* @endcode
*/
# define XXH_IMPLEMENTATION
/* Do not undef XXH_IMPLEMENTATION for Doxygen */
/*!
* @brief Exposes the implementation and marks all functions as `inline`.
*
* Use these build macros to inline xxhash into the target unit.
* Inlining improves performance on small inputs, especially when the length is
* expressed as a compile-time constant:
*
* https://fastcompression.blogspot.com/2018/03/xxhash-for-small-keys-impressive-power.html
*
* It also keeps xxHash symbols private to the unit, so they are not exported.
*
* Usage:
* @code{.c}
* #define XXH_INLINE_ALL
* #include "xxhash.h"
* @endcode
* Do not compile and link xxhash.o as a separate object, as it is not useful.
*/
# define XXH_INLINE_ALL
# undef XXH_INLINE_ALL
/*!
* @brief Exposes the implementation without marking functions as inline.
*/
# define XXH_PRIVATE_API
# undef XXH_PRIVATE_API
/*!
* @brief Emulate a namespace by transparently prefixing all symbols.
*
* If you want to include _and expose_ xxHash functions from within your own
* library, but also want to avoid symbol collisions with other libraries which
* may also include xxHash, you can use @ref XXH_NAMESPACE to automatically prefix
* any public symbol from xxhash library with the value of @ref XXH_NAMESPACE
* (therefore, avoid empty or numeric values).
*
* Note that no change is required within the calling program as long as it
* includes `xxhash.h`: Regular symbol names will be automatically translated
* by this header.
*/
# define XXH_NAMESPACE /* YOUR NAME HERE */
# undef XXH_NAMESPACE
#endif
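/* With the local adaptation above (XXH_NAMESPACE defined to ZSTD_),
 * every public xxHash symbol in this file is prefixed :
 * e.g. a call to XXH64() actually resolves to ZSTD_XXH64(). */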
#if (defined(XXH_INLINE_ALL) || defined(XXH_PRIVATE_API)) \
&& !defined(XXH_INLINE_ALL_31684351384)
/* this section should be traversed only once */
# define XXH_INLINE_ALL_31684351384
/* give access to the advanced API, required to compile implementations */
# undef XXH_STATIC_LINKING_ONLY /* avoid macro redef */
# define XXH_STATIC_LINKING_ONLY
/* make all functions private */
# undef XXH_PUBLIC_API
# if defined(__GNUC__)
# define XXH_PUBLIC_API static __inline __attribute__((unused))
# elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# define XXH_PUBLIC_API static inline
# elif defined(_MSC_VER)
# define XXH_PUBLIC_API static __inline
# else
/* note: this version may generate warnings for unused static functions */
# define XXH_PUBLIC_API static
# endif
/*
* This part deals with the special case where a unit wants to inline xxHash,
* but "xxhash.h" has previously been included without XXH_INLINE_ALL,
* such as part of some previously included *.h header file.
* Without further action, the new include would just be ignored,
* and functions would effectively _not_ be inlined (silent failure).
* The following macros solve this situation by prefixing all inlined names,
* avoiding naming collision with previous inclusions.
*/
/* Before that, we unconditionally #undef all symbols,
* in case they were already defined with XXH_NAMESPACE.
* They will then be redefined for XXH_INLINE_ALL
*/
# undef XXH_versionNumber
/* XXH32 */
# undef XXH32
# undef XXH32_createState
# undef XXH32_freeState
# undef XXH32_reset
# undef XXH32_update
# undef XXH32_digest
# undef XXH32_copyState
# undef XXH32_canonicalFromHash
# undef XXH32_hashFromCanonical
/* XXH64 */
# undef XXH64
# undef XXH64_createState
# undef XXH64_freeState
# undef XXH64_reset
# undef XXH64_update
# undef XXH64_digest
# undef XXH64_copyState
# undef XXH64_canonicalFromHash
# undef XXH64_hashFromCanonical
/* XXH3_64bits */
# undef XXH3_64bits
# undef XXH3_64bits_withSecret
# undef XXH3_64bits_withSeed
# undef XXH3_64bits_withSecretandSeed
# undef XXH3_createState
# undef XXH3_freeState
# undef XXH3_copyState
# undef XXH3_64bits_reset
# undef XXH3_64bits_reset_withSeed
# undef XXH3_64bits_reset_withSecret
# undef XXH3_64bits_update
# undef XXH3_64bits_digest
# undef XXH3_generateSecret
/* XXH3_128bits */
# undef XXH128
# undef XXH3_128bits
# undef XXH3_128bits_withSeed
# undef XXH3_128bits_withSecret
# undef XXH3_128bits_reset
# undef XXH3_128bits_reset_withSeed
# undef XXH3_128bits_reset_withSecret
# undef XXH3_128bits_reset_withSecretandSeed
# undef XXH3_128bits_update
# undef XXH3_128bits_digest
# undef XXH128_isEqual
# undef XXH128_cmp
# undef XXH128_canonicalFromHash
# undef XXH128_hashFromCanonical
/* Finally, free the namespace itself */
# undef XXH_NAMESPACE
/* employ the namespace for XXH_INLINE_ALL */
# define XXH_NAMESPACE XXH_INLINE_
/*
* Some identifiers (enums, type names) are not symbols,
* but they must nonetheless be renamed to avoid redeclaration.
* Alternative solution: do not redeclare them.
* However, this requires some #ifdefs, and has a more dispersed impact.
* Meanwhile, renaming can be achieved in a single place.
*/
# define XXH_IPREF(Id) XXH_NAMESPACE ## Id
# define XXH_OK XXH_IPREF(XXH_OK)
# define XXH_ERROR XXH_IPREF(XXH_ERROR)
# define XXH_errorcode XXH_IPREF(XXH_errorcode)
# define XXH32_canonical_t XXH_IPREF(XXH32_canonical_t)
# define XXH64_canonical_t XXH_IPREF(XXH64_canonical_t)
# define XXH128_canonical_t XXH_IPREF(XXH128_canonical_t)
# define XXH32_state_s XXH_IPREF(XXH32_state_s)
# define XXH32_state_t XXH_IPREF(XXH32_state_t)
# define XXH64_state_s XXH_IPREF(XXH64_state_s)
# define XXH64_state_t XXH_IPREF(XXH64_state_t)
# define XXH3_state_s XXH_IPREF(XXH3_state_s)
# define XXH3_state_t XXH_IPREF(XXH3_state_t)
# define XXH128_hash_t XXH_IPREF(XXH128_hash_t)
/* Ensure the header is parsed again, even if it was previously included */
# undef XXHASH_H_5627135585666179
# undef XXHASH_H_STATIC_13879238742
#endif /* XXH_INLINE_ALL || XXH_PRIVATE_API */
/* ****************************************************************
* Stable API
*****************************************************************/
#ifndef XXHASH_H_5627135585666179
#define XXHASH_H_5627135585666179 1
/*! @brief Marks a global symbol. */
#if !defined(XXH_INLINE_ALL) && !defined(XXH_PRIVATE_API)
# if defined(WIN32) && defined(_MSC_VER) && (defined(XXH_IMPORT) || defined(XXH_EXPORT))
# ifdef XXH_EXPORT
# define XXH_PUBLIC_API __declspec(dllexport)
# elif XXH_IMPORT
# define XXH_PUBLIC_API __declspec(dllimport)
# endif
# else
# define XXH_PUBLIC_API /* do nothing */
# endif
#endif
#ifdef XXH_NAMESPACE
# define XXH_CAT(A,B) A##B
# define XXH_NAME2(A,B) XXH_CAT(A,B)
# define XXH_versionNumber XXH_NAME2(XXH_NAMESPACE, XXH_versionNumber)
/* XXH32 */
# define XXH32 XXH_NAME2(XXH_NAMESPACE, XXH32)
# define XXH32_createState XXH_NAME2(XXH_NAMESPACE, XXH32_createState)
# define XXH32_freeState XXH_NAME2(XXH_NAMESPACE, XXH32_freeState)
# define XXH32_reset XXH_NAME2(XXH_NAMESPACE, XXH32_reset)
# define XXH32_update XXH_NAME2(XXH_NAMESPACE, XXH32_update)
# define XXH32_digest XXH_NAME2(XXH_NAMESPACE, XXH32_digest)
# define XXH32_copyState XXH_NAME2(XXH_NAMESPACE, XXH32_copyState)
# define XXH32_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH32_canonicalFromHash)
# define XXH32_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH32_hashFromCanonical)
/* XXH64 */
# define XXH64 XXH_NAME2(XXH_NAMESPACE, XXH64)
# define XXH64_createState XXH_NAME2(XXH_NAMESPACE, XXH64_createState)
# define XXH64_freeState XXH_NAME2(XXH_NAMESPACE, XXH64_freeState)
# define XXH64_reset XXH_NAME2(XXH_NAMESPACE, XXH64_reset)
# define XXH64_update XXH_NAME2(XXH_NAMESPACE, XXH64_update)
# define XXH64_digest XXH_NAME2(XXH_NAMESPACE, XXH64_digest)
# define XXH64_copyState XXH_NAME2(XXH_NAMESPACE, XXH64_copyState)
# define XXH64_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH64_canonicalFromHash)
# define XXH64_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH64_hashFromCanonical)
/* XXH3_64bits */
# define XXH3_64bits XXH_NAME2(XXH_NAMESPACE, XXH3_64bits)
# define XXH3_64bits_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_withSecret)
# define XXH3_64bits_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_withSeed)
# define XXH3_64bits_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_withSecretandSeed)
# define XXH3_createState XXH_NAME2(XXH_NAMESPACE, XXH3_createState)
# define XXH3_freeState XXH_NAME2(XXH_NAMESPACE, XXH3_freeState)
# define XXH3_copyState XXH_NAME2(XXH_NAMESPACE, XXH3_copyState)
# define XXH3_64bits_reset XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset)
# define XXH3_64bits_reset_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset_withSeed)
# define XXH3_64bits_reset_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset_withSecret)
# define XXH3_64bits_reset_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset_withSecretandSeed)
# define XXH3_64bits_update XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_update)
# define XXH3_64bits_digest XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_digest)
# define XXH3_generateSecret XXH_NAME2(XXH_NAMESPACE, XXH3_generateSecret)
# define XXH3_generateSecret_fromSeed XXH_NAME2(XXH_NAMESPACE, XXH3_generateSecret_fromSeed)
/* XXH3_128bits */
# define XXH128 XXH_NAME2(XXH_NAMESPACE, XXH128)
# define XXH3_128bits XXH_NAME2(XXH_NAMESPACE, XXH3_128bits)
# define XXH3_128bits_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_withSeed)
# define XXH3_128bits_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_withSecret)
# define XXH3_128bits_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_withSecretandSeed)
# define XXH3_128bits_reset XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset)
# define XXH3_128bits_reset_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset_withSeed)
# define XXH3_128bits_reset_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset_withSecret)
# define XXH3_128bits_reset_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset_withSecretandSeed)
# define XXH3_128bits_update XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_update)
# define XXH3_128bits_digest XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_digest)
# define XXH128_isEqual XXH_NAME2(XXH_NAMESPACE, XXH128_isEqual)
# define XXH128_cmp XXH_NAME2(XXH_NAMESPACE, XXH128_cmp)
# define XXH128_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH128_canonicalFromHash)
# define XXH128_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH128_hashFromCanonical)
#endif
/* *************************************
* Compiler specifics
***************************************/
#if defined (__GNUC__)
# define XXH_CONSTF __attribute__((const))
# define XXH_PUREF __attribute__((pure))
# define XXH_MALLOCF __attribute__((malloc))
#else
# define XXH_CONSTF /* disable */
# define XXH_PUREF
# define XXH_MALLOCF
#endif
/* *************************************
* Version
***************************************/
#define XXH_VERSION_MAJOR 0
#define XXH_VERSION_MINOR 8
#define XXH_VERSION_RELEASE 2
/*! @brief Version number, encoded with two decimal digits per component (e.g. v0.8.2 encodes to 802) */
#define XXH_VERSION_NUMBER (XXH_VERSION_MAJOR *100*100 + XXH_VERSION_MINOR *100 + XXH_VERSION_RELEASE)
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* @brief Obtains the xxHash version.
*
* This is mostly useful when xxHash is compiled as a shared library,
* since the returned value comes from the library, as opposed to header file.
*
* @return @ref XXH_VERSION_NUMBER of the invoked library.
*/
XXH_PUBLIC_API XXH_CONSTF unsigned XXH_versionNumber (void);
#if defined (__cplusplus)
}
#endif
/* ****************************
* Common basic types
******************************/
#include <stddef.h> /* size_t */
/*!
* @brief Exit code for the streaming API.
*/
typedef enum {
XXH_OK = 0, /*!< OK */
XXH_ERROR /*!< Error */
} XXH_errorcode;
/*-**********************************************************************
* 32-bit hash
************************************************************************/
#if defined(XXH_DOXYGEN) /* Don't show <stdint.h> include */
/*!
* @brief An unsigned 32-bit integer.
*
* Not necessarily defined to `uint32_t` but functionally equivalent.
*/
typedef uint32_t XXH32_hash_t;
#elif !defined (__VMS) \
&& (defined (__cplusplus) \
|| (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# ifdef _AIX
# include <inttypes.h>
# else
# include <stdint.h>
# endif
typedef uint32_t XXH32_hash_t;
#else
# include <limits.h>
# if UINT_MAX == 0xFFFFFFFFUL
typedef unsigned int XXH32_hash_t;
# elif ULONG_MAX == 0xFFFFFFFFUL
typedef unsigned long XXH32_hash_t;
# else
# error "unsupported platform: need a 32-bit type"
# endif
#endif
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* @}
*
* @defgroup XXH32_family XXH32 family
* @ingroup public
* Contains functions used in the classic 32-bit xxHash algorithm.
*
* @note
* XXH32 is useful for older platforms, with no or poor 64-bit performance.
* Note that the @ref XXH3_family provides competitive speed for both 32-bit
* and 64-bit systems, and offers true 64/128 bit hash results.
*
* @see @ref XXH64_family, @ref XXH3_family : Other xxHash families
* @see @ref XXH32_impl for implementation details
* @{
*/
/*!
* @brief Calculates the 32-bit hash of @p input using xxHash32.
*
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
* @param seed The 32-bit seed to alter the hash's output predictably.
*
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return The calculated 32-bit xxHash32 value.
*
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH32_hash_t XXH32 (const void* input, size_t length, XXH32_hash_t seed);
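/*
 * A minimal one-shot usage sketch for XXH32(). The message contents and the
 * seed value below are arbitrary illustration choices, not requirements.
 *
 * @code{.c}
 * #include <stdio.h>
 * #include <string.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     const char msg[] = "xxHash one-shot example";
 *     XXH32_hash_t const hash = XXH32(msg, strlen(msg), 0);   // seed = 0
 *     printf("XXH32: %08x\n", (unsigned)hash);
 *     return 0;
 * }
 * @endcode
 */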
#ifndef XXH_NO_STREAM
/*!
* @typedef struct XXH32_state_s XXH32_state_t
* @brief The opaque state struct for the XXH32 streaming API.
*
* @see XXH32_state_s for details.
*/
typedef struct XXH32_state_s XXH32_state_t;
/*!
* @brief Allocates an @ref XXH32_state_t.
*
* @return An allocated pointer of @ref XXH32_state_t on success.
* @return `NULL` on failure.
*
* @note Must be freed with XXH32_freeState().
*/
XXH_PUBLIC_API XXH_MALLOCF XXH32_state_t* XXH32_createState(void);
/*!
* @brief Frees an @ref XXH32_state_t.
*
* @param statePtr A pointer to an @ref XXH32_state_t allocated with @ref XXH32_createState().
*
* @return @ref XXH_OK.
*
* @note @p statePtr must be allocated with XXH32_createState().
*
*/
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr);
/*!
* @brief Copies one @ref XXH32_state_t to another.
*
* @param dst_state The state to copy to.
* @param src_state The state to copy from.
* @pre
* @p dst_state and @p src_state must not be `NULL` and must not overlap.
*/
XXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* dst_state, const XXH32_state_t* src_state);
/*!
* @brief Resets an @ref XXH32_state_t to begin a new hash.
*
* @param statePtr The state struct to reset.
* @param seed The 32-bit seed to alter the hash result predictably.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note This function resets and seeds a state. Call it before @ref XXH32_update().
*/
XXH_PUBLIC_API XXH_errorcode XXH32_reset (XXH32_state_t* statePtr, XXH32_hash_t seed);
/*!
* @brief Consumes a block of @p input to an @ref XXH32_state_t.
*
* @param statePtr The state struct to update.
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
*
* @pre
* @p statePtr must not be `NULL`.
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note Call this to incrementally consume blocks of data.
*/
XXH_PUBLIC_API XXH_errorcode XXH32_update (XXH32_state_t* statePtr, const void* input, size_t length);
/*!
* @brief Returns the calculated hash value from an @ref XXH32_state_t.
*
* @param statePtr The state struct to calculate the hash from.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return The calculated 32-bit xxHash32 value from that state.
*
* @note
* Calling XXH32_digest() will not affect @p statePtr, so you can update,
* digest, and update again.
*/
XXH_PUBLIC_API XXH_PUREF XXH32_hash_t XXH32_digest (const XXH32_state_t* statePtr);
#endif /* !XXH_NO_STREAM */
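/*
 * A minimal streaming sketch for the XXH32 state API declared above:
 * create, reset with a seed, feed data in chunks, digest, then free.
 * The chunk contents and the seed are arbitrary illustration choices.
 *
 * @code{.c}
 * #include <stdio.h>
 * #include <string.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     XXH32_state_t* const state = XXH32_createState();
 *     if (state == NULL) return 1;
 *     if (XXH32_reset(state, 0) == XXH_ERROR) { XXH32_freeState(state); return 1; }
 *
 *     const char part1[] = "hello ";
 *     const char part2[] = "world";
 *     XXH32_update(state, part1, strlen(part1));
 *     XXH32_update(state, part2, strlen(part2));
 *
 *     XXH32_hash_t const hash = XXH32_digest(state);   // state remains usable afterwards
 *     printf("XXH32 (streamed): %08x\n", (unsigned)hash);
 *     XXH32_freeState(state);
 *     return 0;
 * }
 * @endcode
 */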
/******* Canonical representation *******/
/*!
* @brief Canonical (big endian) representation of @ref XXH32_hash_t.
*/
typedef struct {
unsigned char digest[4]; /*!< Hash bytes, big endian */
} XXH32_canonical_t;
/*!
* @brief Converts an @ref XXH32_hash_t to a big endian @ref XXH32_canonical_t.
*
* @param dst The @ref XXH32_canonical_t pointer to be stored to.
* @param hash The @ref XXH32_hash_t to be converted.
*
* @pre
* @p dst must not be `NULL`.
*
* @see @ref canonical_representation_example "Canonical Representation Example"
*/
XXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash);
/*!
* @brief Converts an @ref XXH32_canonical_t to a native @ref XXH32_hash_t.
*
* @param src The @ref XXH32_canonical_t to convert.
*
* @pre
* @p src must not be `NULL`.
*
* @return The converted hash.
*
* @see @ref canonical_representation_example "Canonical Representation Example"
*/
XXH_PUBLIC_API XXH_PUREF XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src);
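/*
 * A short round-trip sketch for the canonical representation helpers above.
 * The canonical form has a fixed big-endian byte layout, suitable for storing
 * a hash in a file or sending it over the network. The input string is arbitrary.
 *
 * @code{.c}
 * #include <stdio.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     XXH32_hash_t const hash = XXH32("abc", 3, 0);
 *
 *     XXH32_canonical_t canonical;
 *     XXH32_canonicalFromHash(&canonical, hash);        // native -> big endian bytes
 *
 *     XXH32_hash_t const restored = XXH32_hashFromCanonical(&canonical);  // and back
 *     printf("round-trip ok: %d\n", hash == restored);  // prints 1
 *     return 0;
 * }
 * @endcode
 */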
/*! @cond Doxygen ignores this part */
#ifdef __has_attribute
# define XXH_HAS_ATTRIBUTE(x) __has_attribute(x)
#else
# define XXH_HAS_ATTRIBUTE(x) 0
#endif
/*! @endcond */
/*! @cond Doxygen ignores this part */
/*
* C23 __STDC_VERSION__ number hasn't been specified yet. For now
* leave as `201711L` (C17 + 1).
* TODO: Update to the correct value once it has been specified.
*/
#define XXH_C23_VN 201711L
/*! @endcond */
/*! @cond Doxygen ignores this part */
/* C-language Attributes are added in C23. */
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= XXH_C23_VN) && defined(__has_c_attribute)
# define XXH_HAS_C_ATTRIBUTE(x) __has_c_attribute(x)
#else
# define XXH_HAS_C_ATTRIBUTE(x) 0
#endif
/*! @endcond */
/*! @cond Doxygen ignores this part */
#if defined(__cplusplus) && defined(__has_cpp_attribute)
# define XXH_HAS_CPP_ATTRIBUTE(x) __has_cpp_attribute(x)
#else
# define XXH_HAS_CPP_ATTRIBUTE(x) 0
#endif
/*! @endcond */
/*! @cond Doxygen ignores this part */
/*
* Define XXH_FALLTHROUGH macro for annotating switch case with the 'fallthrough' attribute
* introduced in CPP17 and C23.
* CPP17 : https://en.cppreference.com/w/cpp/language/attributes/fallthrough
* C23 : https://en.cppreference.com/w/c/language/attributes/fallthrough
*/
#if XXH_HAS_C_ATTRIBUTE(fallthrough) || XXH_HAS_CPP_ATTRIBUTE(fallthrough)
# define XXH_FALLTHROUGH [[fallthrough]]
#elif XXH_HAS_ATTRIBUTE(__fallthrough__)
# define XXH_FALLTHROUGH __attribute__ ((__fallthrough__))
#else
# define XXH_FALLTHROUGH /* fallthrough */
#endif
/*! @endcond */
/*! @cond Doxygen ignores this part */
/*
* Define XXH_NOESCAPE for annotated pointers in public API.
* https://clang.llvm.org/docs/AttributeReference.html#noescape
* As of writing this, only supported by clang.
*/
#if XXH_HAS_ATTRIBUTE(noescape)
# define XXH_NOESCAPE __attribute__((noescape))
#else
# define XXH_NOESCAPE
#endif
/*! @endcond */
#if defined (__cplusplus)
} /* end of extern "C" */
#endif
/*!
* @}
* @ingroup public
* @{
*/
#ifndef XXH_NO_LONG_LONG
/*-**********************************************************************
* 64-bit hash
************************************************************************/
#if defined(XXH_DOXYGEN) /* don't include <stdint.h> */
/*!
* @brief An unsigned 64-bit integer.
*
* Not necessarily defined to `uint64_t` but functionally equivalent.
*/
typedef uint64_t XXH64_hash_t;
#elif !defined (__VMS) \
&& (defined (__cplusplus) \
|| (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# ifdef _AIX
# include <inttypes.h>
# else
# include <stdint.h>
# endif
typedef uint64_t XXH64_hash_t;
#else
# include <limits.h>
# if defined(__LP64__) && ULONG_MAX == 0xFFFFFFFFFFFFFFFFULL
/* LP64 ABI says uint64_t is unsigned long */
typedef unsigned long XXH64_hash_t;
# else
/* the following type must have a width of 64-bit */
typedef unsigned long long XXH64_hash_t;
# endif
#endif
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* @}
*
* @defgroup XXH64_family XXH64 family
* @ingroup public
* @{
* Contains functions used in the classic 64-bit xxHash algorithm.
*
* @note
* XXH3 provides competitive speed for both 32-bit and 64-bit systems,
* and offers true 64/128 bit hash results.
* It provides better speed for systems with vector processing capabilities.
*/
/*!
* @brief Calculates the 64-bit hash of @p input using xxHash64.
*
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
* @param seed The 64-bit seed to alter the hash's output predictably.
*
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return The calculated 64-bit xxHash64 value.
*
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH64(XXH_NOESCAPE const void* input, size_t length, XXH64_hash_t seed);
/******* Streaming *******/
#ifndef XXH_NO_STREAM
/*!
* @brief The opaque state struct for the XXH64 streaming API.
*
* @see XXH64_state_s for details.
*/
typedef struct XXH64_state_s XXH64_state_t; /* incomplete type */
/*!
* @brief Allocates an @ref XXH64_state_t.
*
* @return An allocated pointer of @ref XXH64_state_t on success.
* @return `NULL` on failure.
*
* @note Must be freed with XXH64_freeState().
*/
XXH_PUBLIC_API XXH_MALLOCF XXH64_state_t* XXH64_createState(void);
/*!
* @brief Frees an @ref XXH64_state_t.
*
* @param statePtr A pointer to an @ref XXH64_state_t allocated with @ref XXH64_createState().
*
* @return @ref XXH_OK.
*
* @note @p statePtr must be allocated with XXH64_createState().
*/
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr);
/*!
* @brief Copies one @ref XXH64_state_t to another.
*
* @param dst_state The state to copy to.
* @param src_state The state to copy from.
* @pre
* @p dst_state and @p src_state must not be `NULL` and must not overlap.
*/
XXH_PUBLIC_API void XXH64_copyState(XXH_NOESCAPE XXH64_state_t* dst_state, const XXH64_state_t* src_state);
/*!
* @brief Resets an @ref XXH64_state_t to begin a new hash.
*
* @param statePtr The state struct to reset.
* @param seed The 64-bit seed to alter the hash result predictably.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note This function resets and seeds a state. Call it before @ref XXH64_update().
*/
XXH_PUBLIC_API XXH_errorcode XXH64_reset (XXH_NOESCAPE XXH64_state_t* statePtr, XXH64_hash_t seed);
/*!
* @brief Consumes a block of @p input to an @ref XXH64_state_t.
*
* @param statePtr The state struct to update.
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
*
* @pre
* @p statePtr must not be `NULL`.
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note Call this to incrementally consume blocks of data.
*/
XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH_NOESCAPE XXH64_state_t* statePtr, XXH_NOESCAPE const void* input, size_t length);
/*!
* @brief Returns the calculated hash value from an @ref XXH64_state_t.
*
* @param statePtr The state struct to calculate the hash from.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return The calculated 64-bit xxHash64 value from that state.
*
* @note
* Calling XXH64_digest() will not affect @p statePtr, so you can update,
* digest, and update again.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH64_digest (XXH_NOESCAPE const XXH64_state_t* statePtr);
#endif /* !XXH_NO_STREAM */
/******* Canonical representation *******/
/*!
* @brief Canonical (big endian) representation of @ref XXH64_hash_t.
*/
typedef struct { unsigned char digest[sizeof(XXH64_hash_t)]; } XXH64_canonical_t;
/*!
* @brief Converts an @ref XXH64_hash_t to a big endian @ref XXH64_canonical_t.
*
* @param dst The @ref XXH64_canonical_t pointer to be stored to.
* @param hash The @ref XXH64_hash_t to be converted.
*
* @pre
* @p dst must not be `NULL`.
*
* @see @ref canonical_representation_example "Canonical Representation Example"
*/
XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH_NOESCAPE XXH64_canonical_t* dst, XXH64_hash_t hash);
/*!
* @brief Converts an @ref XXH64_canonical_t to a native @ref XXH64_hash_t.
*
* @param src The @ref XXH64_canonical_t to convert.
*
* @pre
* @p src must not be `NULL`.
*
* @return The converted hash.
*
* @see @ref canonical_representation_example "Canonical Representation Example"
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH64_hashFromCanonical(XXH_NOESCAPE const XXH64_canonical_t* src);
#ifndef XXH_NO_XXH3
/*!
* @}
* ************************************************************************
* @defgroup XXH3_family XXH3 family
* @ingroup public
* @{
*
* XXH3 is a more recent hash algorithm featuring:
* - Improved speed for both small and large inputs
* - True 64-bit and 128-bit outputs
* - SIMD acceleration
* - Improved 32-bit viability
*
* Speed analysis methodology is explained here:
*
* https://fastcompression.blogspot.com/2019/03/presenting-xxh3.html
*
* Compared to XXH64, expect XXH3 to run approximately 2x faster on large inputs
* and more than 3x faster on small ones; exact differences vary by platform.
*
* XXH3's speed benefits greatly from SIMD and 64-bit arithmetic,
* but does not require it.
* Most 32-bit and 64-bit targets that can run XXH32 smoothly can run XXH3
* at competitive speeds, even without vector support. Further details are
* explained in the implementation.
*
* XXH3 has a fast scalar implementation, but it also includes accelerated SIMD
* implementations for many common platforms:
* - AVX512
* - AVX2
* - SSE2
* - ARM NEON
* - WebAssembly SIMD128
* - POWER8 VSX
* - s390x ZVector
* This can be controlled via the @ref XXH_VECTOR macro, but it automatically
* selects the best version according to predefined macros. For the x86 family, an
* automatic runtime dispatcher is included separately in @ref xxh_x86dispatch.c.
*
* The XXH3 implementation is portable:
* it has a generic C90 formulation that can be compiled on any platform,
* and all implementations generate exactly the same hash value on all platforms.
* Starting from v0.8.0, it's also labelled "stable", meaning that
* any future version will also generate the same hash value.
*
* XXH3 offers 2 variants, _64bits and _128bits.
*
* When only 64 bits are needed, prefer invoking the _64bits variant, as it
* reduces the amount of mixing, resulting in faster speed on small inputs.
* It's also generally simpler to manipulate a scalar return type than a struct.
*
* The API supports one-shot hashing, streaming mode, and custom secrets.
*/
/*-**********************************************************************
* XXH3 64-bit variant
************************************************************************/
/*!
* @brief Calculates 64-bit unseeded variant of XXH3 hash of @p input.
*
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
*
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return The calculated 64-bit XXH3 hash value.
*
* @note
* This is equivalent to @ref XXH3_64bits_withSeed() with a seed of `0`, however
* it may have slightly better performance due to constant propagation of the
* defaults.
*
* @see
* XXH3_64bits_withSeed(), XXH3_64bits_withSecret(): other seeding variants
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits(XXH_NOESCAPE const void* input, size_t length);
/*!
* @brief Calculates 64-bit seeded variant of XXH3 hash of @p input.
*
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
* @param seed The 64-bit seed to alter the hash result predictably.
*
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return The calculated 64-bit XXH3 hash value.
*
* @note
* seed == 0 produces the same results as @ref XXH3_64bits().
*
* This variant generates a custom secret on the fly based on default secret
* altered using the @p seed value.
*
* While this operation is decently fast, note that it's not completely free.
*
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits_withSeed(XXH_NOESCAPE const void* input, size_t length, XXH64_hash_t seed);
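/*
 * A minimal sketch of the seeded 64-bit one-shot variant declared above.
 * The seed value is an arbitrary example; a seed of 0 would match XXH3_64bits().
 *
 * @code{.c}
 * #include <stdio.h>
 * #include <string.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     const char msg[] = "XXH3 seeded example";
 *     XXH64_hash_t const seed = 0x9E3779B185EBCA87ULL;  // arbitrary example seed
 *     XXH64_hash_t const hash = XXH3_64bits_withSeed(msg, strlen(msg), seed);
 *     printf("XXH3-64: %016llx\n", (unsigned long long)hash);
 *     return 0;
 * }
 * @endcode
 */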
/*!
* The bare minimum size for a custom secret.
*
* @see
* XXH3_64bits_withSecret(), XXH3_64bits_reset_withSecret(),
* XXH3_128bits_withSecret(), XXH3_128bits_reset_withSecret().
*/
#define XXH3_SECRET_SIZE_MIN 136
/*!
* @brief Calculates 64-bit variant of XXH3 with a custom "secret".
*
* @param data The block of data to be hashed, at least @p len bytes in size.
* @param len The length of @p data, in bytes.
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
*
* @return The calculated 64-bit XXH3 hash value.
*
* @pre
* The memory between @p data and @p data + @p len must be valid,
* readable, contiguous memory. However, if @p len is `0`, @p data may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* It's possible to provide any blob of bytes as a "secret" to generate the hash.
* This makes it more difficult for an external actor to prepare an intentional collision.
* The main condition is that @p secretSize *must* be large enough (>= @ref XXH3_SECRET_SIZE_MIN).
* However, the quality of the secret impacts the dispersion of the hash algorithm.
* Therefore, the secret _must_ look like a bunch of random bytes.
* Avoid "trivial" or structured data such as repeated sequences or a text document.
* Whenever in doubt about the "randomness" of the blob of bytes,
* consider employing @ref XXH3_generateSecret() instead (see below).
* It will generate a proper high entropy secret derived from the blob of bytes.
* Another advantage of using XXH3_generateSecret() is that
* it guarantees that all bits within the initial blob of bytes
* will impact every bit of the output.
* This is not necessarily the case when using the blob of bytes directly
* because, when hashing _small_ inputs, only a portion of the secret is employed.
*
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits_withSecret(XXH_NOESCAPE const void* data, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize);
/******* Streaming *******/
#ifndef XXH_NO_STREAM
/*
* Streaming requires state maintenance.
* This operation costs memory and CPU.
* As a consequence, streaming is slower than one-shot hashing.
* For better performance, prefer one-shot functions whenever applicable.
*/
/*!
* @brief The opaque state struct for the XXH3 streaming API.
*
* @see XXH3_state_s for details.
*/
typedef struct XXH3_state_s XXH3_state_t;
XXH_PUBLIC_API XXH_MALLOCF XXH3_state_t* XXH3_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH3_freeState(XXH3_state_t* statePtr);
/*!
* @brief Copies one @ref XXH3_state_t to another.
*
* @param dst_state The state to copy to.
* @param src_state The state to copy from.
* @pre
* @p dst_state and @p src_state must not be `NULL` and must not overlap.
*/
XXH_PUBLIC_API void XXH3_copyState(XXH_NOESCAPE XXH3_state_t* dst_state, XXH_NOESCAPE const XXH3_state_t* src_state);
/*!
* @brief Resets an @ref XXH3_state_t to begin a new hash.
*
* @param statePtr The state struct to reset.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note
* - This function resets `statePtr` and generates a secret with default parameters.
* - Call this function before @ref XXH3_64bits_update().
* - Digest will be equivalent to `XXH3_64bits()`.
*
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr);
/*!
* @brief Resets an @ref XXH3_state_t with 64-bit seed to begin a new hash.
*
* @param statePtr The state struct to reset.
* @param seed The 64-bit seed to alter the hash result predictably.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note
* - This function resets `statePtr` and generates a secret from `seed`.
* - Call this function before @ref XXH3_64bits_update().
* - Digest will be equivalent to `XXH3_64bits_withSeed()`.
*
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed);
/*!
* @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.
*
* @param statePtr The state struct to reset.
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note
* `secret` is referenced, not copied; it _must outlive_ the hash streaming session.
*
* Similar to one-shot API, `secretSize` must be >= @ref XXH3_SECRET_SIZE_MIN,
* and the quality of produced hash values depends on secret's entropy
* (secret's content should look like a bunch of random bytes).
* When in doubt about the randomness of a candidate `secret`,
* consider employing `XXH3_generateSecret()` instead (see below).
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize);
/*!
* @brief Consumes a block of @p input to an @ref XXH3_state_t.
*
* @param statePtr The state struct to update.
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
*
* @pre
* @p statePtr must not be `NULL`.
* @pre
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note Call this to incrementally consume blocks of data.
*/
XXH_PUBLIC_API XXH_errorcode XXH3_64bits_update (XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* input, size_t length);
/*!
* @brief Returns the calculated XXH3 64-bit hash value from an @ref XXH3_state_t.
*
* @param statePtr The state struct to calculate the hash from.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return The calculated XXH3 64-bit hash value from that state.
*
* @note
* Calling XXH3_64bits_digest() will not affect @p statePtr, so you can update,
* digest, and update again.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits_digest (XXH_NOESCAPE const XXH3_state_t* statePtr);
#endif /* !XXH_NO_STREAM */
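/*
 * A minimal streaming sketch for the XXH3 64-bit state API declared above.
 * The chunk contents are arbitrary; any chunking of the input produces the
 * same digest as hashing it in one shot.
 *
 * @code{.c}
 * #include <stdio.h>
 * #include <string.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     XXH3_state_t* const state = XXH3_createState();
 *     if (state == NULL) return 1;
 *     if (XXH3_64bits_reset(state) == XXH_ERROR) { XXH3_freeState(state); return 1; }
 *
 *     const char chunk[] = "data arriving in several pieces";
 *     XXH3_64bits_update(state, chunk, strlen(chunk));
 *     XXH3_64bits_update(state, chunk, strlen(chunk));   // a second piece, for instance
 *
 *     XXH64_hash_t const hash = XXH3_64bits_digest(state);
 *     printf("XXH3-64 (streamed): %016llx\n", (unsigned long long)hash);
 *     XXH3_freeState(state);
 *     return 0;
 * }
 * @endcode
 */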
/* note : canonical representation of XXH3 is the same as XXH64
* since they both produce XXH64_hash_t values */
/*-**********************************************************************
* XXH3 128-bit variant
************************************************************************/
/*!
* @brief The return value from 128-bit hashes.
*
* Stored in little endian order, although the fields themselves are in native
* endianness.
*/
typedef struct {
XXH64_hash_t low64; /*!< `value & 0xFFFFFFFFFFFFFFFF` */
XXH64_hash_t high64; /*!< `value >> 64` */
} XXH128_hash_t;
/*!
* @brief Calculates 128-bit unseeded variant of XXH3 of @p data.
*
* @param data The block of data to be hashed, at least @p len bytes in size.
* @param len The length of @p data, in bytes.
*
* @return The calculated 128-bit variant of XXH3 value.
*
* The 128-bit variant of XXH3 has more strength, but it has a bit of overhead
* for shorter inputs.
*
* This is equivalent to @ref XXH3_128bits_withSeed() with a seed of `0`, however
* it may have slightly better performance due to constant propagation of the
* defaults.
*
* @see XXH3_128bits_withSeed(), XXH3_128bits_withSecret(): other seeding variants
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits(XXH_NOESCAPE const void* data, size_t len);
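/*
 * A minimal sketch showing how the XXH128_hash_t struct returned above is used;
 * the two 64-bit halves are printed high part first. The input string is arbitrary.
 *
 * @code{.c}
 * #include <stdio.h>
 * #include <string.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     const char msg[] = "128-bit example";
 *     XXH128_hash_t const hash = XXH3_128bits(msg, strlen(msg));
 *     printf("XXH3-128: %016llx%016llx\n",
 *            (unsigned long long)hash.high64,
 *            (unsigned long long)hash.low64);
 *     return 0;
 * }
 * @endcode
 */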
/*! @brief Calculates 128-bit seeded variant of XXH3 hash of @p data.
*
* @param data The block of data to be hashed, at least @p len bytes in size.
* @param len The length of @p data, in bytes.
* @param seed The 64-bit seed to alter the hash result predictably.
*
* @return The calculated 128-bit variant of XXH3 value.
*
* @note
* seed == 0 produces the same results as @ref XXH3_128bits().
*
* This variant generates a custom secret on the fly based on default secret
* altered using the @p seed value.
*
* While this operation is decently fast, note that it's not completely free.
*
* @see XXH3_128bits(), XXH3_128bits_withSecret(): other seeding variants
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits_withSeed(XXH_NOESCAPE const void* data, size_t len, XXH64_hash_t seed);
/*!
* @brief Calculates 128-bit variant of XXH3 with a custom "secret".
*
* @param data The block of data to be hashed, at least @p len bytes in size.
* @param len The length of @p data, in bytes.
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
*
* @return The calculated 128-bit variant of XXH3 value.
*
* It's possible to provide any blob of bytes as a "secret" to generate the hash.
* This makes it more difficult for an external actor to prepare an intentional collision.
* The main condition is that @p secretSize *must* be large enough (>= @ref XXH3_SECRET_SIZE_MIN).
* However, the quality of the secret impacts the dispersion of the hash algorithm.
* Therefore, the secret _must_ look like a bunch of random bytes.
* Avoid "trivial" or structured data such as repeated sequences or a text document.
* Whenever in doubt about the "randomness" of the blob of bytes,
* consider employing @ref XXH3_generateSecret() instead (see below).
* It will generate a proper high entropy secret derived from the blob of bytes.
* Another advantage of using XXH3_generateSecret() is that
* it guarantees that all bits within the initial blob of bytes
* will impact every bit of the output.
* This is not necessarily the case when using the blob of bytes directly
* because, when hashing _small_ inputs, only a portion of the secret is employed.
*
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits_withSecret(XXH_NOESCAPE const void* data, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize);
/******* Streaming *******/
#ifndef XXH_NO_STREAM
/*
* Streaming requires state maintenance.
* This operation costs memory and CPU.
* As a consequence, streaming is slower than one-shot hashing.
* For better performance, prefer one-shot functions whenever applicable.
*
* XXH3_128bits uses the same XXH3_state_t as XXH3_64bits().
* Use already declared XXH3_createState() and XXH3_freeState().
*
* All reset and streaming functions have same meaning as their 64-bit counterpart.
*/
/*!
* @brief Resets an @ref XXH3_state_t to begin a new hash.
*
* @param statePtr The state struct to reset.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note
* - This function resets `statePtr` and generates a secret with default parameters.
* - Call it before @ref XXH3_128bits_update().
* - Digest will be equivalent to `XXH3_128bits()`.
*/
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr);
/*!
* @brief Resets an @ref XXH3_state_t with 64-bit seed to begin a new hash.
*
* @param statePtr The state struct to reset.
* @param seed The 64-bit seed to alter the hash result predictably.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note
* - This function resets `statePtr` and generates a secret from `seed`.
* - Call it before @ref XXH3_128bits_update().
* - Digest will be equivalent to `XXH3_128bits_withSeed()`.
*/
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed);
/*!
* @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.
*
* @param statePtr The state struct to reset.
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* `secret` is referenced, not copied; it _must outlive_ the hash streaming session.
* Similar to one-shot API, `secretSize` must be >= @ref XXH3_SECRET_SIZE_MIN,
* and the quality of produced hash values depends on secret's entropy
* (secret's content should look like a bunch of random bytes).
* When in doubt about the randomness of a candidate `secret`,
* consider employing `XXH3_generateSecret()` instead (see below).
*/
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize);
/*!
* @brief Consumes a block of @p input to an @ref XXH3_state_t.
*
* Call this to incrementally consume blocks of data.
*
* @param statePtr The state struct to update.
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @note
* The memory between @p input and @p input + @p length must be valid,
* readable, contiguous memory. However, if @p length is `0`, @p input may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
*/
XXH_PUBLIC_API XXH_errorcode XXH3_128bits_update (XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* input, size_t length);
/*!
* @brief Returns the calculated XXH3 128-bit hash value from an @ref XXH3_state_t.
*
* @param statePtr The state struct to calculate the hash from.
*
* @pre
* @p statePtr must not be `NULL`.
*
* @return The calculated XXH3 128-bit hash value from that state.
*
* @note
* Calling XXH3_128bits_digest() will not affect @p statePtr, so you can update,
* digest, and update again.
*
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits_digest (XXH_NOESCAPE const XXH3_state_t* statePtr);
#endif /* !XXH_NO_STREAM */
/* The following helper functions make it possible to compare XXH128_hash_t values.
* Since XXH128_hash_t is a structure, this capability is not offered by the language.
* Note: For better performance, these functions can be inlined using XXH_INLINE_ALL */
/*!
* @brief Check equality of two XXH128_hash_t values
*
* @param h1 The 128-bit hash value.
* @param h2 Another 128-bit hash value.
*
* @return `1` if `h1` and `h2` are equal.
* @return `0` if they are not.
*/
XXH_PUBLIC_API XXH_PUREF int XXH128_isEqual(XXH128_hash_t h1, XXH128_hash_t h2);
/*!
* @brief Compares two @ref XXH128_hash_t
*
* This comparator is compatible with stdlib's `qsort()`/`bsearch()`.
*
* @param h128_1 Left-hand side value
* @param h128_2 Right-hand side value
*
* @return >0 if @p h128_1 > @p h128_2
* @return =0 if @p h128_1 == @p h128_2
* @return <0 if @p h128_1 < @p h128_2
*/
XXH_PUBLIC_API XXH_PUREF int XXH128_cmp(XXH_NOESCAPE const void* h128_1, XXH_NOESCAPE const void* h128_2);
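/*
 * A small sketch showing XXH128_cmp() passed directly to qsort(), since its
 * signature already matches the comparator type. The key strings are arbitrary.
 *
 * @code{.c}
 * #include <stdlib.h>
 * #include <string.h>
 * #include "xxhash.h"
 *
 * int main(void)
 * {
 *     const char* const keys[] = { "alpha", "beta", "gamma" };
 *     XXH128_hash_t table[3];
 *     size_t i;
 *     for (i = 0; i < 3; i++)
 *         table[i] = XXH3_128bits(keys[i], strlen(keys[i]));
 *
 *     qsort(table, 3, sizeof(table[0]), XXH128_cmp);   // sorted, ready for bsearch()
 *     return 0;
 * }
 * @endcode
 */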
/******* Canonical representation *******/
typedef struct { unsigned char digest[sizeof(XXH128_hash_t)]; } XXH128_canonical_t;
/*!
* @brief Converts an @ref XXH128_hash_t to a big endian @ref XXH128_canonical_t.
*
* @param dst The @ref XXH128_canonical_t pointer to be stored to.
* @param hash The @ref XXH128_hash_t to be converted.
*
* @pre
* @p dst must not be `NULL`.
* @see @ref canonical_representation_example "Canonical Representation Example"
*/
XXH_PUBLIC_API void XXH128_canonicalFromHash(XXH_NOESCAPE XXH128_canonical_t* dst, XXH128_hash_t hash);
/*!
* @brief Converts an @ref XXH128_canonical_t to a native @ref XXH128_hash_t.
*
* @param src The @ref XXH128_canonical_t to convert.
*
* @pre
* @p src must not be `NULL`.
*
* @return The converted hash.
* @see @ref canonical_representation_example "Canonical Representation Example"
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH128_hashFromCanonical(XXH_NOESCAPE const XXH128_canonical_t* src);
#endif /* !XXH_NO_XXH3 */
#if defined (__cplusplus)
} /* extern "C" */
#endif
#endif /* XXH_NO_LONG_LONG */
/*!
* @}
*/
#endif /* XXHASH_H_5627135585666179 */
#if defined(XXH_STATIC_LINKING_ONLY) && !defined(XXHASH_H_STATIC_13879238742)
#define XXHASH_H_STATIC_13879238742
/* ****************************************************************************
* This section contains declarations which are not guaranteed to remain stable.
* They may change in future versions, becoming incompatible with a different
* version of the library.
* These declarations should only be used with static linking.
* Never use them in association with dynamic linking!
***************************************************************************** */
/*
* These definitions are only present to allow static allocation
* of XXH states, on stack or in a struct, for example.
* Never **ever** access their members directly.
*/
/*!
* @internal
* @brief Structure for XXH32 streaming API.
*
* @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,
* @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined. Otherwise it is
* an opaque type. This allows fields to safely be changed.
*
* Typedef'd to @ref XXH32_state_t.
* Do not access the members of this struct directly.
* @see XXH64_state_s, XXH3_state_s
*/
struct XXH32_state_s {
XXH32_hash_t total_len_32; /*!< Total length hashed, modulo 2^32 */
XXH32_hash_t large_len; /*!< Whether the input length is >= 16 (handles @ref total_len_32 overflow) */
XXH32_hash_t v[4]; /*!< Accumulator lanes */
XXH32_hash_t mem32[4]; /*!< Internal buffer for partial reads. Treated as unsigned char[16]. */
XXH32_hash_t memsize; /*!< Amount of data in @ref mem32 */
XXH32_hash_t reserved; /*!< Reserved field. Do not read nor write to it. */
}; /* typedef'd to XXH32_state_t */
#ifndef XXH_NO_LONG_LONG /* defined when there is no 64-bit support */
/*!
* @internal
* @brief Structure for XXH64 streaming API.
*
* @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,
* @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined. Otherwise it is
* an opaque type. This allows fields to safely be changed.
*
* Typedef'd to @ref XXH64_state_t.
* Do not access the members of this struct directly.
* @see XXH32_state_s, XXH3_state_s
*/
struct XXH64_state_s {
XXH64_hash_t total_len; /*!< Total length hashed. This is always 64-bit. */
XXH64_hash_t v[4]; /*!< Accumulator lanes */
XXH64_hash_t mem64[4]; /*!< Internal buffer for partial reads. Treated as unsigned char[32]. */
XXH32_hash_t memsize; /*!< Amount of data in @ref mem64 */
XXH32_hash_t reserved32; /*!< Reserved field, needed for padding anyway */
XXH64_hash_t reserved64; /*!< Reserved field. Do not read or write to it. */
}; /* typedef'd to XXH64_state_t */
#ifndef XXH_NO_XXH3
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) /* >= C11 */
# include <stdalign.h>
# define XXH_ALIGN(n) alignas(n)
#elif defined(__cplusplus) && (__cplusplus >= 201103L) /* >= C++11 */
/* In C++ alignas() is a keyword */
# define XXH_ALIGN(n) alignas(n)
#elif defined(__GNUC__)
# define XXH_ALIGN(n) __attribute__ ((aligned(n)))
#elif defined(_MSC_VER)
# define XXH_ALIGN(n) __declspec(align(n))
#else
# define XXH_ALIGN(n) /* disabled */
#endif
/* Old GCC versions only accept the attribute after the type in structures. */
#if !(defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)) /* C11+ */ \
&& ! (defined(__cplusplus) && (__cplusplus >= 201103L)) /* >= C++11 */ \
&& defined(__GNUC__)
# define XXH_ALIGN_MEMBER(align, type) type XXH_ALIGN(align)
#else
# define XXH_ALIGN_MEMBER(align, type) XXH_ALIGN(align) type
#endif
/*!
* @brief The size of the internal XXH3 buffer.
*
* This is the optimal update size for incremental hashing.
*
* @see XXH3_64bits_update(), XXH3_128bits_update().
*/
#define XXH3_INTERNALBUFFER_SIZE 256
/*!
* @internal
* @brief Default size of the secret buffer (and @ref XXH3_kSecret).
*
* This is the size used in @ref XXH3_kSecret and the seeded functions.
*
* Not to be confused with @ref XXH3_SECRET_SIZE_MIN.
*/
#define XXH3_SECRET_DEFAULT_SIZE 192
/*!
* @internal
* @brief Structure for XXH3 streaming API.
*
* @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,
* @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined.
* Otherwise it is an opaque type.
* Never use this definition in combination with a dynamic library.
* This allows fields to safely be changed in the future.
*
* @note ** This structure has a strict alignment requirement of 64 bytes!! **
* Do not allocate this with `malloc()` or `new`,
* it will not be sufficiently aligned.
* Use @ref XXH3_createState() and @ref XXH3_freeState(), or stack allocation.
*
* Typedef'd to @ref XXH3_state_t.
* Never access the members of this struct directly.
*
* @see XXH3_INITSTATE() for stack initialization.
* @see XXH3_createState(), XXH3_freeState().
* @see XXH32_state_s, XXH64_state_s
*/
struct XXH3_state_s {
XXH_ALIGN_MEMBER(64, XXH64_hash_t acc[8]);
/*!< The 8 accumulators. See @ref XXH32_state_s::v and @ref XXH64_state_s::v */
XXH_ALIGN_MEMBER(64, unsigned char customSecret[XXH3_SECRET_DEFAULT_SIZE]);
/*!< Used to store a custom secret generated from a seed. */
XXH_ALIGN_MEMBER(64, unsigned char buffer[XXH3_INTERNALBUFFER_SIZE]);
/*!< The internal buffer. @see XXH32_state_s::mem32 */
XXH32_hash_t bufferedSize;
/*!< The amount of memory in @ref buffer, @see XXH32_state_s::memsize */
XXH32_hash_t useSeed;
/*!< Reserved field. Needed for padding on 64-bit. */
size_t nbStripesSoFar;
/*!< Number of stripes processed. */
XXH64_hash_t totalLen;
/*!< Total length hashed. 64-bit even on 32-bit targets. */
size_t nbStripesPerBlock;
/*!< Number of stripes per block. */
size_t secretLimit;
/*!< Size of @ref customSecret or @ref extSecret */
XXH64_hash_t seed;
/*!< Seed for _withSeed variants. Must be zero otherwise, @see XXH3_INITSTATE() */
XXH64_hash_t reserved64;
/*!< Reserved field. */
const unsigned char* extSecret;
/*!< Reference to an external secret for the _withSecret variants, NULL
* for other variants. */
/* note: there may be some padding at the end due to alignment on 64 bytes */
}; /* typedef'd to XXH3_state_t */
#undef XXH_ALIGN_MEMBER
/*!
* @brief Initializes a stack-allocated `XXH3_state_s`.
*
* When the @ref XXH3_state_t structure is merely emplaced on the stack,
* it should be initialized with XXH3_INITSTATE() or a memset()
* in case its first reset uses XXH3_NNbits_reset_withSeed().
* This init can be omitted if the first reset uses default or _withSecret mode.
* This operation isn't necessary when the state is created with XXH3_createState().
* Note that this doesn't prepare the state for a streaming operation;
* it's still necessary to use XXH3_NNbits_reset*() afterwards
* (see the usage sketch after this macro).
*/
#define XXH3_INITSTATE(XXH3_state_ptr) \
do { \
XXH3_state_t* tmp_xxh3_state_ptr = (XXH3_state_ptr); \
tmp_xxh3_state_ptr->seed = 0; \
tmp_xxh3_state_ptr->extSecret = NULL; \
} while(0)
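/*
 * A minimal sketch of stack allocation with XXH3_INITSTATE(), under the
 * assumption that XXH_STATIC_LINKING_ONLY is defined so the state definition
 * is visible. The helper function name is a hypothetical example.
 *
 * @code{.c}
 * #define XXH_STATIC_LINKING_ONLY   // exposes struct XXH3_state_s
 * #include "xxhash.h"
 *
 * static XXH64_hash_t hash_with_stack_state(const void* data, size_t len, XXH64_hash_t seed)
 * {
 *     XXH3_state_t state;        // stack allocation; alignment comes from the type definition
 *     XXH3_INITSTATE(&state);    // required before a _withSeed reset of a raw stack object
 *     XXH3_64bits_reset_withSeed(&state, seed);
 *     XXH3_64bits_update(&state, data, len);
 *     return XXH3_64bits_digest(&state);
 * }
 * @endcode
 */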
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* @brief Calculates the 128-bit hash of @p data using XXH3.
*
* @param data The block of data to be hashed, at least @p len bytes in size.
* @param len The length of @p data, in bytes.
* @param seed The 64-bit seed to alter the hash's output predictably.
*
* @pre
* The memory between @p data and @p data + @p len must be valid,
* readable, contiguous memory. However, if @p len is `0`, @p data may be
* `NULL`. In C++, this also must be *TriviallyCopyable*.
*
* @return The calculated 128-bit XXH3 value.
*
* @see @ref single_shot_example "Single Shot Example" for an example.
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH128(XXH_NOESCAPE const void* data, size_t len, XXH64_hash_t seed);
/* === Experimental API === */
/* Symbols defined below must be considered tied to a specific library version. */
/*!
* @brief Derive a high-entropy secret from any user-defined content, named customSeed.
*
* @param secretBuffer A writable buffer for derived high-entropy secret data.
* @param secretSize Size of secretBuffer, in bytes. Must be >= XXH3_SECRET_SIZE_MIN.
* @param customSeed A user-defined content.
* @param customSeedSize Size of customSeed, in bytes.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* The generated secret can be used in combination with `*_withSecret()` functions.
* The `_withSecret()` variants are useful to provide a higher level of protection
* than a 64-bit seed, as it becomes much more difficult for an external actor to
* guess how to impact the calculation logic.
*
* The function accepts as input a custom seed of any length and any content,
* and derives from it a high-entropy secret of length @p secretSize into an
* already allocated buffer @p secretBuffer.
*
* The generated secret can then be used with any `*_withSecret()` variant.
* The functions @ref XXH3_128bits_withSecret(), @ref XXH3_64bits_withSecret(),
* @ref XXH3_128bits_reset_withSecret() and @ref XXH3_64bits_reset_withSecret()
* are part of this list. They all accept a `secret` parameter
* which must be large enough for implementation reasons (>= @ref XXH3_SECRET_SIZE_MIN)
* _and_ feature very high entropy (consist of random-looking bytes).
* These conditions can be a high bar to meet, so @ref XXH3_generateSecret() can
* be employed to ensure proper quality.
*
* @p customSeed can be anything. It can have any size, even small ones,
* and its content can be anything, even "poor entropy" sources such as a bunch
* of zeroes. The resulting `secret` will nonetheless provide all required qualities.
*
* @pre
* - @p secretSize must be >= @ref XXH3_SECRET_SIZE_MIN
* - When @p customSeedSize > 0, supplying NULL as customSeed is undefined behavior.
*
* Example code:
* @code{.c}
* #include <stdio.h>
* #include <stdlib.h>
* #include <string.h>
* #define XXH_STATIC_LINKING_ONLY // expose unstable API
* #include "xxhash.h"
* // Hashes argv[2] using the entropy from argv[1].
* int main(int argc, char* argv[])
* {
* char secret[XXH3_SECRET_SIZE_MIN];
* if (argc != 3) { return 1; }
* XXH3_generateSecret(secret, sizeof(secret), argv[1], strlen(argv[1]));
* XXH64_hash_t h = XXH3_64bits_withSecret(
* argv[2], strlen(argv[2]),
* secret, sizeof(secret)
* );
* printf("%016llx\n", (unsigned long long) h);
* }
* @endcode
*/
XXH_PUBLIC_API XXH_errorcode XXH3_generateSecret(XXH_NOESCAPE void* secretBuffer, size_t secretSize, XXH_NOESCAPE const void* customSeed, size_t customSeedSize);
/*!
* @brief Generate the same secret as the _withSeed() variants.
*
* @param secretBuffer A writable buffer of @ref XXH3_SECRET_DEFAULT_SIZE bytes
* @param seed The 64-bit seed to alter the hash result predictably.
*
* The generated secret can be used in combination with
*`*_withSecret()` and `_withSecretandSeed()` variants.
*
* Example C++ `std::string` hash class:
* @code{.cpp}
* #include <string>
* #define XXH_STATIC_LINKING_ONLY // expose unstable API
* #include "xxhash.h"
* // Slow, seeds each time
* class HashSlow {
* XXH64_hash_t seed;
* public:
* HashSlow(XXH64_hash_t s) : seed{s} {}
* size_t operator()(const std::string& x) const {
* return size_t{XXH3_64bits_withSeed(x.c_str(), x.length(), seed)};
* }
* };
* // Fast, caches the seeded secret for future uses.
* class HashFast {
* unsigned char secret[XXH3_SECRET_DEFAULT_SIZE];
* public:
* HashFast(XXH64_hash_t s) {
* XXH3_generateSecret_fromSeed(secret, s);
* }
* size_t operator()(const std::string& x) const {
* return size_t{
* XXH3_64bits_withSecret(x.c_str(), x.length(), secret, sizeof(secret))
* };
* }
* };
* @endcode
*/
XXH_PUBLIC_API void XXH3_generateSecret_fromSeed(XXH_NOESCAPE void* secretBuffer, XXH64_hash_t seed);
/*!
* @brief Calculates 64/128-bit seeded variant of XXH3 hash of @p data.
*
* @param data The block of data to be hashed, at least @p len bytes in size.
* @param len The length of @p data, in bytes.
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
* @param seed The 64-bit seed to alter the hash result predictably.
*
* These variants generate hash values using either
* @p seed for "short" keys (< @ref XXH3_MIDSIZE_MAX = 240 bytes)
* or @p secret for "large" keys (>= @ref XXH3_MIDSIZE_MAX).
*
* This generally benefits speed, compared to `_withSeed()` or `_withSecret()`.
* `_withSeed()` has to generate the secret on the fly for "large" keys.
* It's fast, but the cost can be noticeable for "not so large" keys (< 1 KB).
* `_withSecret()` has to generate the masks on the fly for "small" keys,
* which requires more instructions than the _withSeed() variants.
* Therefore, the _withSecretandSeed() variant combines the best of both worlds.
*
* When @p secret has been generated by XXH3_generateSecret_fromSeed(),
* this variant produces *exactly* the same results as `_withSeed()` variant,
* hence offering only a pure speed benefit on "large" input,
* by skipping the need to regenerate the secret for every large input.
*
* Another usage scenario is to hash the secret to a 64-bit hash value,
* for example with XXH3_64bits(), which then becomes the seed,
* and then employ both the seed and the secret in _withSecretandSeed().
* On top of speed, an added benefit is that each bit in the secret
* has a 50% chance to flip each bit of the output, via its impact on the seed.
*
* This is not guaranteed when using the secret directly in "small data" scenarios,
* because only portions of the secret are employed for small data.
*/
XXH_PUBLIC_API XXH_PUREF XXH64_hash_t
XXH3_64bits_withSecretandSeed(XXH_NOESCAPE const void* data, size_t len,
XXH_NOESCAPE const void* secret, size_t secretSize,
XXH64_hash_t seed);
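/*
 * A minimal sketch combining XXH3_generateSecret_fromSeed() with the
 * _withSecretandSeed() variant above: the seeded secret is derived once and
 * reused, matching XXH3_64bits_withSeed() results while skipping secret
 * regeneration for large inputs. The type and function names are hypothetical.
 *
 * @code{.c}
 * #define XXH_STATIC_LINKING_ONLY   // expose unstable API
 * #include "xxhash.h"
 *
 * typedef struct {
 *     unsigned char secret[XXH3_SECRET_DEFAULT_SIZE];
 *     XXH64_hash_t seed;
 * } seeded_hasher;
 *
 * static void seeded_hasher_init(seeded_hasher* h, XXH64_hash_t seed)
 * {
 *     h->seed = seed;
 *     XXH3_generateSecret_fromSeed(h->secret, seed);   // derive the secret once
 * }
 *
 * static XXH64_hash_t seeded_hasher_hash(const seeded_hasher* h, const void* data, size_t len)
 * {
 *     // same value as XXH3_64bits_withSeed(data, len, h->seed)
 *     return XXH3_64bits_withSecretandSeed(data, len, h->secret, sizeof(h->secret), h->seed);
 * }
 * @endcode
 */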
/*!
* @brief Calculates 128-bit seeded variant of XXH3 hash of @p input.
*
* @param input The block of data to be hashed, at least @p length bytes in size.
* @param length The length of @p input, in bytes.
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
* @param seed64 The 64-bit seed to alter the hash result predictably.
*
* @return The calculated 128-bit XXH3 hash value.
*
* @see XXH3_64bits_withSecretandSeed()
*/
XXH_PUBLIC_API XXH_PUREF XXH128_hash_t
XXH3_128bits_withSecretandSeed(XXH_NOESCAPE const void* input, size_t length,
XXH_NOESCAPE const void* secret, size_t secretSize,
XXH64_hash_t seed64);
#ifndef XXH_NO_STREAM
/*!
* @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.
*
* @param statePtr A pointer to an @ref XXH3_state_t allocated with @ref XXH3_createState().
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
* @param seed64 The 64-bit seed to alter the hash result predictably.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @see XXH3_64bits_withSecretandSeed()
*/
XXH_PUBLIC_API XXH_errorcode
XXH3_64bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr,
XXH_NOESCAPE const void* secret, size_t secretSize,
XXH64_hash_t seed64);
/*!
* @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.
*
* @param statePtr A pointer to an @ref XXH3_state_t allocated with @ref XXH3_createState().
* @param secret The secret data.
* @param secretSize The length of @p secret, in bytes.
* @param seed64 The 64-bit seed to alter the hash result predictably.
*
* @return @ref XXH_OK on success.
* @return @ref XXH_ERROR on failure.
*
* @see XXH3_64bits_withSecretandSeed()
*/
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr,
XXH_NOESCAPE const void* secret, size_t secretSize,
XXH64_hash_t seed64);
#endif /* !XXH_NO_STREAM */
#if defined (__cplusplus)
} /* extern "C" */
#endif
#endif /* !XXH_NO_XXH3 */
#endif /* XXH_NO_LONG_LONG */
#if defined(XXH_INLINE_ALL) || defined(XXH_PRIVATE_API)
# define XXH_IMPLEMENTATION
#endif
#endif /* defined(XXH_STATIC_LINKING_ONLY) && !defined(XXHASH_H_STATIC_13879238742) */
/* ======================================================================== */
/* ======================================================================== */
/* ======================================================================== */
/*-**********************************************************************
* xxHash implementation
*-**********************************************************************
* xxHash's implementation used to be hosted inside xxhash.c.
*
* However, inlining requires implementation to be visible to the compiler,
* hence be included alongside the header.
* Previously, implementation was hosted inside xxhash.c,
* which was then #included when inlining was activated.
* This construction created issues with a few build and install systems,
* as it required xxhash.c to be stored in /include directory.
*
* xxHash implementation is now directly integrated within xxhash.h.
* As a consequence, xxhash.c is no longer needed in /include.
*
* xxhash.c is still available and is still useful.
* In a "normal" setup, when xxhash is not inlined,
* xxhash.h only exposes the prototypes and public symbols,
* while xxhash.c can be built into an object file xxhash.o
* which can then be linked into the final binary.
************************************************************************/
#if ( defined(XXH_INLINE_ALL) || defined(XXH_PRIVATE_API) \
|| defined(XXH_IMPLEMENTATION) ) && !defined(XXH_IMPLEM_13a8737387)
# define XXH_IMPLEM_13a8737387
/* *************************************
* Tuning parameters
***************************************/
/*!
* @defgroup tuning Tuning parameters
* @{
*
* Various macros to control xxHash's behavior.
*/
#ifdef XXH_DOXYGEN
/*!
* @brief Define this to disable 64-bit code.
*
* Useful if only using the @ref XXH32_family and you have a strict C90 compiler.
*/
# define XXH_NO_LONG_LONG
# undef XXH_NO_LONG_LONG /* don't actually */
/*!
* @brief Controls how unaligned memory is accessed.
*
* By default, access to unaligned memory is controlled by `memcpy()`, which is
* safe and portable.
*
* Unfortunately, on some target/compiler combinations, the generated assembly
* is sub-optimal.
*
* The switch below allows selection of a different access method
* in the search for improved performance.
*
* @par Possible options:
*
* - `XXH_FORCE_MEMORY_ACCESS=0` (default): `memcpy`
* @par
* Use `memcpy()`. Safe and portable. Note that most modern compilers will
* eliminate the function call and treat it as an unaligned access.
*
* - `XXH_FORCE_MEMORY_ACCESS=1`: `__attribute__((aligned(1)))`
* @par
* Depends on compiler extensions and is therefore not portable.
* This method is safe _if_ your compiler supports it,
* and *generally* as fast or faster than `memcpy`.
*
* - `XXH_FORCE_MEMORY_ACCESS=2`: Direct cast
* @par
* Casts directly and dereferences. This method doesn't depend on the
* compiler, but it violates the C standard as it directly dereferences an
* unaligned pointer. It can generate buggy code on targets which do not
* support unaligned memory accesses, but in some circumstances, it's the
* only known way to get the most performance.
*
* - `XXH_FORCE_MEMORY_ACCESS=3`: Byteshift
* @par
* Also portable. This can generate the best code on old compilers which don't
* inline small `memcpy()` calls, and it might also be faster on big-endian
* systems which lack a native byteswap instruction. However, some compilers
* will emit literal byteshifts even if the target supports unaligned access.
*
*
* @warning
* Methods 1 and 2 rely on implementation-defined behavior. Use these with
* care, as what works on one compiler/platform/optimization level may cause
* another to read garbage data or even crash.
*
* See https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html for details.
*
* Prefer these methods in priority order (0 > 3 > 1 > 2)
*/
# define XXH_FORCE_MEMORY_ACCESS 0
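/*
* Illustrative example (a typical, assumed build setup, not a requirement): a
* consumer translation unit selecting the byteshift method before expanding the
* inlined implementation.
*
* ```
* #define XXH_FORCE_MEMORY_ACCESS 3
* #define XXH_INLINE_ALL
* #include "xxhash.h"
* ```
*/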
/*!
* @def XXH_SIZE_OPT
* @brief Controls how much xxHash optimizes for size.
*
* xxHash, when compiled, tends to result in a rather large binary size. This
* is mostly due to heavy usage of forced inlining and constant folding of the
* @ref XXH3_family to increase performance.
*
* However, some developers prefer size over speed. This option can
* significantly reduce the size of the generated code. When using the `-Os`
* or `-Oz` options on GCC or Clang, this is defined to 1 by default,
* otherwise it is defined to 0.
*
* Most of these size optimizations can be controlled manually.
*
* This is a number from 0-2.
* - `XXH_SIZE_OPT` == 0: Default. xxHash makes no size optimizations. Speed
* comes first.
* - `XXH_SIZE_OPT` == 1: Default for `-Os` and `-Oz`. xxHash is more
* conservative and disables hacks that increase code size. It implies the
* options @ref XXH_NO_INLINE_HINTS == 1, @ref XXH_FORCE_ALIGN_CHECK == 0,
* and @ref XXH3_NEON_LANES == 8 if they are not already defined.
* - `XXH_SIZE_OPT` == 2: xxHash tries to make itself as small as possible.
* Performance may cry. For example, the single shot functions just use the
* streaming API.
*/
# define XXH_SIZE_OPT 0
/*!
* @def XXH_FORCE_ALIGN_CHECK
* @brief If defined to non-zero, adds a special path for aligned inputs (XXH32()
* and XXH64() only).
*
* This is an important performance trick for architectures without decent
* unaligned memory access performance.
*
* It checks for input alignment, and when conditions are met, uses a "fast
* path" employing direct 32-bit/64-bit reads, resulting in _dramatically
* faster_ read speed.
*
* The check costs one initial branch per hash, which is generally negligible,
* but not zero.
*
* Moreover, it's not useful to generate an additional code path if memory
* access uses the same instruction for both aligned and unaligned
* addresses (e.g. x86 and aarch64).
*
* In these cases, the alignment check can be removed by setting this macro to 0.
* Then the code will always use unaligned memory access.
* The alignment check is automatically disabled on x86, x64, ARM64, and some ARM chips,
* which are platforms known to offer good unaligned memory access performance.
*
* It is also disabled by default when @ref XXH_SIZE_OPT >= 1.
*
* This option does not affect XXH3 (only XXH32 and XXH64).
*/
# define XXH_FORCE_ALIGN_CHECK 0
/*!
* @def XXH_NO_INLINE_HINTS
* @brief When non-zero, sets all functions to `static`.
*
* By default, xxHash tries to force the compiler to inline almost all internal
* functions.
*
* This can usually improve performance due to reduced jumping and improved
* constant folding, but significantly increases the size of the binary which
* might not be favorable.
*
* Additionally, sometimes the forced inlining can be detrimental to performance,
* depending on the architecture.
*
* XXH_NO_INLINE_HINTS marks all internal functions as static, giving the
* compiler full control over whether to inline them or not.
*
* When not optimizing (-O0), using `-fno-inline` with GCC or Clang, or if
* @ref XXH_SIZE_OPT >= 1, this will automatically be defined.
*/
# define XXH_NO_INLINE_HINTS 0
/*!
* @def XXH3_INLINE_SECRET
* @brief Determines whether to inline the XXH3 withSecret code.
*
* When the secret size is known, the compiler can improve the performance
* of XXH3_64bits_withSecret() and XXH3_128bits_withSecret().
*
* However, if the secret size is not known, it doesn't have any benefit. This
* happens when xxHash is compiled into a global symbol. Therefore, if
* @ref XXH_INLINE_ALL is *not* defined, this will be defined to 0.
*
* Additionally, this defaults to 0 on GCC 12+, which has an issue with function pointers
* that are *sometimes* force-inlined on -Og, and it is impossible to automatically
* detect this optimization level.
*/
# define XXH3_INLINE_SECRET 0
/*!
* @def XXH32_ENDJMP
* @brief Whether to use a jump for `XXH32_finalize`.
*
* For performance, `XXH32_finalize` uses multiple branches in the finalizer.
* This is generally preferable for performance,
* but depending on exact architecture, a jmp may be preferable.
*
* This setting can only possibly make a difference for very small inputs.
*/
# define XXH32_ENDJMP 0
/*!
* @internal
* @brief Redefines old internal names.
*
* For compatibility with code that uses xxHash's internals before the names
* were changed to improve namespacing. There is no other reason to use this.
*/
# define XXH_OLD_NAMES
# undef XXH_OLD_NAMES /* don't actually use, it is ugly. */
/*!
* @def XXH_NO_STREAM
* @brief Disables the streaming API.
*
* When xxHash is not inlined and the streaming functions are not used, disabling
* the streaming functions can improve code size significantly, especially with
* the @ref XXH3_family which tends to make constant folded copies of itself.
*/
# define XXH_NO_STREAM
# undef XXH_NO_STREAM /* don't actually */
#endif /* XXH_DOXYGEN */
/*!
* @}
*/
#ifndef XXH_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */
/* prefer __packed__ structures (method 1) for GCC;
* on ARM < v7 with unaligned access enabled (e.g. Raspbian armhf), the packed-struct
* access still compiles to byte shifting, so we fall back to memcpy,
* which, for some reason, does generate unaligned loads there. */
# if defined(__GNUC__) && !(defined(__ARM_ARCH) && __ARM_ARCH < 7 && defined(__ARM_FEATURE_UNALIGNED))
# define XXH_FORCE_MEMORY_ACCESS 1
# endif
#endif
#ifndef XXH_SIZE_OPT
/* default to 1 for -Os or -Oz */
# if (defined(__GNUC__) || defined(__clang__)) && defined(__OPTIMIZE_SIZE__)
# define XXH_SIZE_OPT 1
# else
# define XXH_SIZE_OPT 0
# endif
#endif
#ifndef XXH_FORCE_ALIGN_CHECK /* can be defined externally */
/* don't check on sizeopt, x86, aarch64, or arm when unaligned access is available */
# if XXH_SIZE_OPT >= 1 || \
defined(__i386) || defined(__x86_64__) || defined(__aarch64__) || defined(__ARM_FEATURE_UNALIGNED) \
|| defined(_M_IX86) || defined(_M_X64) || defined(_M_ARM64) || defined(_M_ARM) /* visual */
# define XXH_FORCE_ALIGN_CHECK 0
# else
# define XXH_FORCE_ALIGN_CHECK 1
# endif
#endif
#ifndef XXH_NO_INLINE_HINTS
# if XXH_SIZE_OPT >= 1 || defined(__NO_INLINE__) /* -O0, -fno-inline */
# define XXH_NO_INLINE_HINTS 1
# else
# define XXH_NO_INLINE_HINTS 0
# endif
#endif
#ifndef XXH3_INLINE_SECRET
# if (defined(__GNUC__) && !defined(__clang__) && __GNUC__ >= 12) \
|| !defined(XXH_INLINE_ALL)
# define XXH3_INLINE_SECRET 0
# else
# define XXH3_INLINE_SECRET 1
# endif
#endif
#ifndef XXH32_ENDJMP
/* generally preferable for performance */
# define XXH32_ENDJMP 0
#endif
/*!
* @defgroup impl Implementation
* @{
*/
/* *************************************
* Includes & Memory related functions
***************************************/
#include <string.h> /* memcmp, memcpy */
#include <limits.h> /* ULLONG_MAX */
#if defined(XXH_NO_STREAM)
/* nothing */
#elif defined(XXH_NO_STDLIB)
/* When requesting to disable any mention of stdlib,
* the library loses the ability to invoke malloc / free.
* In practice, it means that functions like `XXH*_createState()`
* will always fail, and return NULL.
* This flag is useful in situations where
* xxhash.h is integrated into some kernel, embedded or limited environment
* without access to dynamic allocation.
*/
#if defined (__cplusplus)
extern "C" {
#endif
static XXH_CONSTF void* XXH_malloc(size_t s) { (void)s; return NULL; }
static void XXH_free(void* p) { (void)p; }
#if defined (__cplusplus)
} /* extern "C" */
#endif
#else
/*
* Modify the local functions below should you wish to use
* different memory routines for malloc() and free()
*/
#include <stdlib.h>
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* @internal
* @brief Modify this function to use a different routine than malloc().
*/
static XXH_MALLOCF void* XXH_malloc(size_t s) { return malloc(s); }
/*!
* @internal
* @brief Modify this function to use a different routine than free().
*/
static void XXH_free(void* p) { free(p); }
#if defined (__cplusplus)
} /* extern "C" */
#endif
#endif /* XXH_NO_STDLIB */
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* @internal
* @brief Modify this function to use a different routine than memcpy().
*/
static void* XXH_memcpy(void* dest, const void* src, size_t size)
{
return memcpy(dest,src,size);
}
#if defined (__cplusplus)
} /* extern "C" */
#endif
/* *************************************
* Compiler Specific Options
***************************************/
#ifdef _MSC_VER /* Visual Studio warning fix */
# pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */
#endif
#if XXH_NO_INLINE_HINTS /* disable inlining hints */
# if defined(__GNUC__) || defined(__clang__)
# define XXH_FORCE_INLINE static __attribute__((unused))
# else
# define XXH_FORCE_INLINE static
# endif
# define XXH_NO_INLINE static
/* enable inlining hints */
#elif defined(__GNUC__) || defined(__clang__)
# define XXH_FORCE_INLINE static __inline__ __attribute__((always_inline, unused))
# define XXH_NO_INLINE static __attribute__((noinline))
#elif defined(_MSC_VER) /* Visual Studio */
# define XXH_FORCE_INLINE static __forceinline
# define XXH_NO_INLINE static __declspec(noinline)
#elif defined (__cplusplus) \
|| (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)) /* C99 */
# define XXH_FORCE_INLINE static inline
# define XXH_NO_INLINE static
#else
# define XXH_FORCE_INLINE static
# define XXH_NO_INLINE static
#endif
#if XXH3_INLINE_SECRET
# define XXH3_WITH_SECRET_INLINE XXH_FORCE_INLINE
#else
# define XXH3_WITH_SECRET_INLINE XXH_NO_INLINE
#endif
/* *************************************
* Debug
***************************************/
/*!
* @ingroup tuning
* @def XXH_DEBUGLEVEL
* @brief Sets the debugging level.
*
* XXH_DEBUGLEVEL is expected to be defined externally, typically via the
* compiler's command line options. The value must be a number.
*/
#ifndef XXH_DEBUGLEVEL
# ifdef DEBUGLEVEL /* backwards compat */
# define XXH_DEBUGLEVEL DEBUGLEVEL
# else
# define XXH_DEBUGLEVEL 0
# endif
#endif
#if (XXH_DEBUGLEVEL>=1)
# include <assert.h> /* note: can still be disabled with NDEBUG */
# define XXH_ASSERT(c) assert(c)
#else
# if defined(__INTEL_COMPILER)
# define XXH_ASSERT(c) XXH_ASSUME((unsigned char) (c))
# else
# define XXH_ASSERT(c) XXH_ASSUME(c)
# endif
#endif
/* note: use after variable declarations */
#ifndef XXH_STATIC_ASSERT
# if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) /* C11 */
# define XXH_STATIC_ASSERT_WITH_MESSAGE(c,m) do { _Static_assert((c),m); } while(0)
# elif defined(__cplusplus) && (__cplusplus >= 201103L) /* C++11 */
# define XXH_STATIC_ASSERT_WITH_MESSAGE(c,m) do { static_assert((c),m); } while(0)
# else
# define XXH_STATIC_ASSERT_WITH_MESSAGE(c,m) do { struct xxh_sa { char x[(c) ? 1 : -1]; }; } while(0)
# endif
# define XXH_STATIC_ASSERT(c) XXH_STATIC_ASSERT_WITH_MESSAGE((c),#c)
#endif
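/*
* For example (illustrative), the macro is typically used inside a function body,
* after variable declarations, the same way the implementation below uses it:
*
* ```
* XXH_STATIC_ASSERT(sizeof(XXH32_hash_t) == 4);
* ```
*/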
/*!
* @internal
* @def XXH_COMPILER_GUARD(var)
* @brief Used to prevent unwanted optimizations for @p var.
*
* It uses an empty GCC inline assembly statement with a register constraint
* which forces @p var into a general purpose register (eg eax, ebx, ecx
* on x86) and marks it as modified.
*
* This is used in a few places to avoid unwanted autovectorization (e.g.
* XXH32_round()). All vectorization we want is explicit via intrinsics,
* and _usually_ isn't wanted elsewhere.
*
* We also use it to prevent unwanted constant folding for AArch64 in
* XXH3_initCustomSecret_scalar().
*/
#if defined(__GNUC__) || defined(__clang__)
# define XXH_COMPILER_GUARD(var) __asm__("" : "+r" (var))
#else
# define XXH_COMPILER_GUARD(var) ((void)0)
#endif
/* Specifically for NEON vectors which use the "w" constraint, on
* Clang. */
#if defined(__clang__) && defined(__ARM_ARCH) && !defined(__wasm__)
# define XXH_COMPILER_GUARD_CLANG_NEON(var) __asm__("" : "+w" (var))
#else
# define XXH_COMPILER_GUARD_CLANG_NEON(var) ((void)0)
#endif
/* *************************************
* Basic Types
***************************************/
#if !defined (__VMS) \
&& (defined (__cplusplus) \
|| (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# ifdef _AIX
# include <inttypes.h>
# else
# include <stdint.h>
# endif
typedef uint8_t xxh_u8;
#else
typedef unsigned char xxh_u8;
#endif
typedef XXH32_hash_t xxh_u32;
#ifdef XXH_OLD_NAMES
# warning "XXH_OLD_NAMES is planned to be removed starting v0.9. If the program depends on it, consider moving away from it by employing newer type names directly"
# define BYTE xxh_u8
# define U8 xxh_u8
# define U32 xxh_u32
#endif
#if defined (__cplusplus)
extern "C" {
#endif
/* *** Memory access *** */
/*!
* @internal
* @fn xxh_u32 XXH_read32(const void* ptr)
* @brief Reads an unaligned 32-bit integer from @p ptr in native endianness.
*
* Affected by @ref XXH_FORCE_MEMORY_ACCESS.
*
* @param ptr The pointer to read from.
* @return The 32-bit native endian integer from the bytes at @p ptr.
*/
/*!
* @internal
* @fn xxh_u32 XXH_readLE32(const void* ptr)
* @brief Reads an unaligned 32-bit little endian integer from @p ptr.
*
* Affected by @ref XXH_FORCE_MEMORY_ACCESS.
*
* @param ptr The pointer to read from.
* @return The 32-bit little endian integer from the bytes at @p ptr.
*/
/*!
* @internal
* @fn xxh_u32 XXH_readBE32(const void* ptr)
* @brief Reads an unaligned 32-bit big endian integer from @p ptr.
*
* Affected by @ref XXH_FORCE_MEMORY_ACCESS.
*
* @param ptr The pointer to read from.
* @return The 32-bit big endian integer from the bytes at @p ptr.
*/
/*!
* @internal
* @fn xxh_u32 XXH_readLE32_align(const void* ptr, XXH_alignment align)
* @brief Like @ref XXH_readLE32(), but has an option for aligned reads.
*
* Affected by @ref XXH_FORCE_MEMORY_ACCESS.
* Note that when @ref XXH_FORCE_ALIGN_CHECK == 0, the @p align parameter is
* always @ref XXH_alignment::XXH_unaligned.
*
* @param ptr The pointer to read from.
* @param align Whether @p ptr is aligned.
* @pre
* If @p align == @ref XXH_alignment::XXH_aligned, @p ptr must be 4 byte
* aligned.
* @return The 32-bit little endian integer from the bytes at @p ptr.
*/
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))
/*
* Manual byteshift. Best for old compilers which don't inline memcpy.
* We actually directly use XXH_readLE32 and XXH_readBE32.
*/
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))
/*
* Force direct memory access. Only works on CPU which support unaligned memory
* access in hardware.
*/
static xxh_u32 XXH_read32(const void* memPtr) { return *(const xxh_u32*) memPtr; }
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))
/*
* __attribute__((aligned(1))) is supported by gcc and clang. Originally the
* documentation claimed that it only increased the alignment, but actually it
* can decrease it on gcc, clang, and icc:
* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69502,
* https://gcc.godbolt.org/z/xYez1j67Y.
*/
#ifdef XXH_OLD_NAMES
typedef union { xxh_u32 u32; } __attribute__((packed)) unalign;
#endif
static xxh_u32 XXH_read32(const void* ptr)
{
typedef __attribute__((aligned(1))) xxh_u32 xxh_unalign32;
return *((const xxh_unalign32*)ptr);
}
#else
/*
* Portable and safe solution. Generally efficient.
* see: https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html
*/
static xxh_u32 XXH_read32(const void* memPtr)
{
xxh_u32 val;
XXH_memcpy(&val, memPtr, sizeof(val));
return val;
}
#endif /* XXH_FORCE_MEMORY_ACCESS */
/* *** Endianness *** */
/*!
* @ingroup tuning
* @def XXH_CPU_LITTLE_ENDIAN
* @brief Whether the target is little endian.
*
* Defined to 1 if the target is little endian, or 0 if it is big endian.
* It can be defined externally, for example on the compiler command line.
*
* If it is not defined,
* a runtime check (which is usually constant folded) is used instead.
*
* @note
* This is not necessarily defined to an integer constant.
*
* @see XXH_isLittleEndian() for the runtime check.
*/
#ifndef XXH_CPU_LITTLE_ENDIAN
/*
* Try to detect endianness automatically, to avoid the nonstandard behavior
* in `XXH_isLittleEndian()`
*/
# if defined(_WIN32) /* Windows is always little endian */ \
|| defined(__LITTLE_ENDIAN__) \
|| (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
# define XXH_CPU_LITTLE_ENDIAN 1
# elif defined(__BIG_ENDIAN__) \
|| (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
# define XXH_CPU_LITTLE_ENDIAN 0
# else
/*!
* @internal
* @brief Runtime check for @ref XXH_CPU_LITTLE_ENDIAN.
*
* Most compilers will constant fold this.
*/
static int XXH_isLittleEndian(void)
{
/*
* Portable and well-defined behavior.
* Don't use static: it is detrimental to performance.
*/
const union { xxh_u32 u; xxh_u8 c[4]; } one = { 1 };
return one.c[0];
}
# define XXH_CPU_LITTLE_ENDIAN XXH_isLittleEndian()
# endif
#endif
/* ****************************************
* Compiler-specific Functions and Macros
******************************************/
#define XXH_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
#ifdef __has_builtin
# define XXH_HAS_BUILTIN(x) __has_builtin(x)
#else
# define XXH_HAS_BUILTIN(x) 0
#endif
/*
* C23 and future versions have standard "unreachable()".
* Once it has been implemented reliably we can add it as an
* additional case:
*
* ```
* #if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= XXH_C23_VN)
* # include <stddef.h>
* # ifdef unreachable
* # define XXH_UNREACHABLE() unreachable()
* # endif
* #endif
* ```
*
* Note C++23 also has std::unreachable() which can be detected
* as follows:
* ```
* #if defined(__cpp_lib_unreachable) && (__cpp_lib_unreachable >= 202202L)
* # include <utility>
* # define XXH_UNREACHABLE() std::unreachable()
* #endif
* ```
* NB: `__cpp_lib_unreachable` is defined in the `<version>` header.
* We don't use that as including `<utility>` in `extern "C"` blocks
* doesn't work on GCC12
*/
#if XXH_HAS_BUILTIN(__builtin_unreachable)
# define XXH_UNREACHABLE() __builtin_unreachable()
#elif defined(_MSC_VER)
# define XXH_UNREACHABLE() __assume(0)
#else
# define XXH_UNREACHABLE()
#endif
#if XXH_HAS_BUILTIN(__builtin_assume)
# define XXH_ASSUME(c) __builtin_assume(c)
#else
# define XXH_ASSUME(c) if (!(c)) { XXH_UNREACHABLE(); }
#endif
/*!
* @internal
* @def XXH_rotl32(x,r)
* @brief 32-bit rotate left.
*
* @param x The 32-bit integer to be rotated.
* @param r The number of bits to rotate.
* @pre
* @p r > 0 && @p r < 32
* @note
* @p x and @p r may be evaluated multiple times.
* @return The rotated result.
*/
#if !defined(NO_CLANG_BUILTIN) && XXH_HAS_BUILTIN(__builtin_rotateleft32) \
&& XXH_HAS_BUILTIN(__builtin_rotateleft64)
# define XXH_rotl32 __builtin_rotateleft32
# define XXH_rotl64 __builtin_rotateleft64
/* Note: although _rotl exists for MinGW (GCC under Windows), performance seems poor */
#elif defined(_MSC_VER)
# define XXH_rotl32(x,r) _rotl(x,r)
# define XXH_rotl64(x,r) _rotl64(x,r)
#else
# define XXH_rotl32(x,r) (((x) << (r)) | ((x) >> (32 - (r))))
# define XXH_rotl64(x,r) (((x) << (r)) | ((x) >> (64 - (r))))
#endif
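/*
* Quick worked example (illustrative): bits rotated out on the left re-enter on
* the right, e.g.
*
* ```
* XXH_rotl32(0xF000000Fu, 4)           == 0x000000FFu
* XXH_rotl64(0x8000000000000001ULL, 1) == 0x0000000000000003ULL
* ```
*/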
/*!
* @internal
* @fn xxh_u32 XXH_swap32(xxh_u32 x)
* @brief A 32-bit byteswap.
*
* @param x The 32-bit integer to byteswap.
* @return @p x, byteswapped.
*/
#if defined(_MSC_VER) /* Visual Studio */
# define XXH_swap32 _byteswap_ulong
#elif XXH_GCC_VERSION >= 403
# define XXH_swap32 __builtin_bswap32
#else
static xxh_u32 XXH_swap32 (xxh_u32 x)
{
return ((x << 24) & 0xff000000 ) |
((x << 8) & 0x00ff0000 ) |
((x >> 8) & 0x0000ff00 ) |
((x >> 24) & 0x000000ff );
}
#endif
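/*
* Worked example (illustrative): XXH_swap32 reverses the byte order, so
*
* ```
* XXH_swap32(0x12345678u) == 0x78563412u
* ```
*/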
/* ***************************
* Memory reads
*****************************/
/*!
* @internal
* @brief Enum to indicate whether a pointer is aligned.
*/
typedef enum {
XXH_aligned, /*!< Aligned */
XXH_unaligned /*!< Possibly unaligned */
} XXH_alignment;
/*
* XXH_FORCE_MEMORY_ACCESS==3 is an endian-independent byteshift load.
*
* This is ideal for older compilers which don't inline memcpy.
*/
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))
XXH_FORCE_INLINE xxh_u32 XXH_readLE32(const void* memPtr)
{
const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;
return bytePtr[0]
| ((xxh_u32)bytePtr[1] << 8)
| ((xxh_u32)bytePtr[2] << 16)
| ((xxh_u32)bytePtr[3] << 24);
}
XXH_FORCE_INLINE xxh_u32 XXH_readBE32(const void* memPtr)
{
const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;
return bytePtr[3]
| ((xxh_u32)bytePtr[2] << 8)
| ((xxh_u32)bytePtr[1] << 16)
| ((xxh_u32)bytePtr[0] << 24);
}
#else
XXH_FORCE_INLINE xxh_u32 XXH_readLE32(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_read32(ptr) : XXH_swap32(XXH_read32(ptr));
}
static xxh_u32 XXH_readBE32(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_swap32(XXH_read32(ptr)) : XXH_read32(ptr);
}
#endif
XXH_FORCE_INLINE xxh_u32
XXH_readLE32_align(const void* ptr, XXH_alignment align)
{
if (align==XXH_unaligned) {
return XXH_readLE32(ptr);
} else {
return XXH_CPU_LITTLE_ENDIAN ? *(const xxh_u32*)ptr : XXH_swap32(*(const xxh_u32*)ptr);
}
}
/* *************************************
* Misc
***************************************/
/*! @ingroup public */
XXH_PUBLIC_API unsigned XXH_versionNumber (void) { return XXH_VERSION_NUMBER; }
/* *******************************************************************
* 32-bit hash functions
*********************************************************************/
/*!
* @}
* @defgroup XXH32_impl XXH32 implementation
* @ingroup impl
*
* Details on the XXH32 implementation.
* @{
*/
/* #define instead of static const, to be used as initializers */
#define XXH_PRIME32_1 0x9E3779B1U /*!< 0b10011110001101110111100110110001 */
#define XXH_PRIME32_2 0x85EBCA77U /*!< 0b10000101111010111100101001110111 */
#define XXH_PRIME32_3 0xC2B2AE3DU /*!< 0b11000010101100101010111000111101 */
#define XXH_PRIME32_4 0x27D4EB2FU /*!< 0b00100111110101001110101100101111 */
#define XXH_PRIME32_5 0x165667B1U /*!< 0b00010110010101100110011110110001 */
#ifdef XXH_OLD_NAMES
# define PRIME32_1 XXH_PRIME32_1
# define PRIME32_2 XXH_PRIME32_2
# define PRIME32_3 XXH_PRIME32_3
# define PRIME32_4 XXH_PRIME32_4
# define PRIME32_5 XXH_PRIME32_5
#endif
/*!
* @internal
* @brief Normal stripe processing routine.
*
* This shuffles the bits so that any bit from @p input impacts several bits in
* @p acc.
*
* @param acc The accumulator lane.
* @param input The stripe of input to mix.
* @return The mixed accumulator lane.
*/
static xxh_u32 XXH32_round(xxh_u32 acc, xxh_u32 input)
{
acc += input * XXH_PRIME32_2;
acc = XXH_rotl32(acc, 13);
acc *= XXH_PRIME32_1;
#if (defined(__SSE4_1__) || defined(__aarch64__) || defined(__wasm_simd128__)) && !defined(XXH_ENABLE_AUTOVECTORIZE)
/*
* UGLY HACK:
* A compiler fence is the only thing that prevents GCC and Clang from
* autovectorizing the XXH32 loop (pragmas and attributes don't work for some
* reason) without globally disabling SSE4.1.
*
* The reason we want to avoid vectorization is because despite working on
* 4 integers at a time, there are multiple factors slowing XXH32 down on
* SSE4:
* - There's a ridiculous amount of lag from pmulld (10 cycles of latency on
* newer chips!) making it slightly slower to multiply four integers at
* once compared to four integers independently. Even when pmulld was
* fastest, Sandy/Ivy Bridge, it is still not worth it to go into SSE
* just to multiply unless doing a long operation.
*
* - Four instructions are required to rotate,
* movdqa tmp, v // not required with VEX encoding
* pslld tmp, 13 // tmp <<= 13
* psrld v, 19 // x >>= 19
* por v, tmp // x |= tmp
* compared to one for scalar:
* roll v, 13 // reliably fast across the board
* shldl v, v, 13 // Sandy Bridge and later prefer this for some reason
*
* - Instruction level parallelism is actually more beneficial here because
* the SIMD actually serializes this operation: While v1 is rotating, v2
* can load data, while v3 can multiply. SSE forces them to operate
* together.
*
* This is also enabled on AArch64, as Clang is *very aggressive* in vectorizing
* the loop. NEON is only faster on the A53, and with the newer cores, it is less
* than half the speed.
*
* Additionally, this is used on WASM SIMD128 because it JITs to the same
* SIMD instructions and has the same issue.
*/
XXH_COMPILER_GUARD(acc);
#endif
return acc;
}
/*!
* @internal
* @brief Mixes all bits to finalize the hash.
*
* The final mix ensures that all input bits have a chance to impact any bit in
* the output digest, resulting in an unbiased distribution.
*
* @param hash The hash to avalanche.
* @return The avalanched hash.
*/
static xxh_u32 XXH32_avalanche(xxh_u32 hash)
{
hash ^= hash >> 15;
hash *= XXH_PRIME32_2;
hash ^= hash >> 13;
hash *= XXH_PRIME32_3;
hash ^= hash >> 16;
return hash;
}
#define XXH_get32bits(p) XXH_readLE32_align(p, align)
/*!
* @internal
* @brief Processes the last 0-15 bytes of @p ptr.
*
* There may be up to 15 bytes remaining to consume from the input.
* This final stage will digest them to ensure that all input bytes are present
* in the final mix.
*
* @param hash The hash to finalize.
* @param ptr The pointer to the remaining input.
* @param len The remaining length, modulo 16.
* @param align Whether @p ptr is aligned.
* @return The finalized hash.
* @see XXH64_finalize().
*/
static XXH_PUREF xxh_u32
XXH32_finalize(xxh_u32 hash, const xxh_u8* ptr, size_t len, XXH_alignment align)
{
#define XXH_PROCESS1 do { \
hash += (*ptr++) * XXH_PRIME32_5; \
hash = XXH_rotl32(hash, 11) * XXH_PRIME32_1; \
} while (0)
#define XXH_PROCESS4 do { \
hash += XXH_get32bits(ptr) * XXH_PRIME32_3; \
ptr += 4; \
hash = XXH_rotl32(hash, 17) * XXH_PRIME32_4; \
} while (0)
if (ptr==NULL) XXH_ASSERT(len == 0);
/* Compact rerolled version; generally faster */
if (!XXH32_ENDJMP) {
len &= 15;
while (len >= 4) {
XXH_PROCESS4;
len -= 4;
}
while (len > 0) {
XXH_PROCESS1;
--len;
}
return XXH32_avalanche(hash);
} else {
switch(len&15) /* or switch(bEnd - p) */ {
case 12: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 8: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 4: XXH_PROCESS4;
return XXH32_avalanche(hash);
case 13: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 9: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 5: XXH_PROCESS4;
XXH_PROCESS1;
return XXH32_avalanche(hash);
case 14: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 10: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 6: XXH_PROCESS4;
XXH_PROCESS1;
XXH_PROCESS1;
return XXH32_avalanche(hash);
case 15: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 11: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 7: XXH_PROCESS4;
XXH_FALLTHROUGH; /* fallthrough */
case 3: XXH_PROCESS1;
XXH_FALLTHROUGH; /* fallthrough */
case 2: XXH_PROCESS1;
XXH_FALLTHROUGH; /* fallthrough */
case 1: XXH_PROCESS1;
XXH_FALLTHROUGH; /* fallthrough */
case 0: return XXH32_avalanche(hash);
}
XXH_ASSERT(0);
return hash; /* reaching this point is deemed impossible */
}
}
#ifdef XXH_OLD_NAMES
# define PROCESS1 XXH_PROCESS1
# define PROCESS4 XXH_PROCESS4
#else
# undef XXH_PROCESS1
# undef XXH_PROCESS4
#endif
/*!
* @internal
* @brief The implementation for @ref XXH32().
*
* @param input , len , seed Directly passed from @ref XXH32().
* @param align Whether @p input is aligned.
* @return The calculated hash.
*/
XXH_FORCE_INLINE XXH_PUREF xxh_u32
XXH32_endian_align(const xxh_u8* input, size_t len, xxh_u32 seed, XXH_alignment align)
{
xxh_u32 h32;
if (input==NULL) XXH_ASSERT(len == 0);
if (len>=16) {
const xxh_u8* const bEnd = input + len;
const xxh_u8* const limit = bEnd - 15;
xxh_u32 v1 = seed + XXH_PRIME32_1 + XXH_PRIME32_2;
xxh_u32 v2 = seed + XXH_PRIME32_2;
xxh_u32 v3 = seed + 0;
xxh_u32 v4 = seed - XXH_PRIME32_1;
do {
v1 = XXH32_round(v1, XXH_get32bits(input)); input += 4;
v2 = XXH32_round(v2, XXH_get32bits(input)); input += 4;
v3 = XXH32_round(v3, XXH_get32bits(input)); input += 4;
v4 = XXH32_round(v4, XXH_get32bits(input)); input += 4;
} while (input < limit);
h32 = XXH_rotl32(v1, 1) + XXH_rotl32(v2, 7)
+ XXH_rotl32(v3, 12) + XXH_rotl32(v4, 18);
} else {
h32 = seed + XXH_PRIME32_5;
}
h32 += (xxh_u32)len;
return XXH32_finalize(h32, input, len&15, align);
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH32_hash_t XXH32 (const void* input, size_t len, XXH32_hash_t seed)
{
#if !defined(XXH_NO_STREAM) && XXH_SIZE_OPT >= 2
/* Simple version, good for code maintenance, but unfortunately slow for small inputs */
XXH32_state_t state;
XXH32_reset(&state, seed);
XXH32_update(&state, (const xxh_u8*)input, len);
return XXH32_digest(&state);
#else
if (XXH_FORCE_ALIGN_CHECK) {
if ((((size_t)input) & 3) == 0) { /* Input is 4-bytes aligned, leverage the speed benefit */
return XXH32_endian_align((const xxh_u8*)input, len, seed, XXH_aligned);
} }
return XXH32_endian_align((const xxh_u8*)input, len, seed, XXH_unaligned);
#endif
}
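/*
* Usage sketch (illustrative, not part of the library): one-shot hashing of a
* buffer. `buffer` and `size` are hypothetical caller-provided variables.
*
* ```
* XXH32_hash_t const h = XXH32(buffer, size, 0);   // seed = 0
* ```
*/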
/******* Hash streaming *******/
#ifndef XXH_NO_STREAM
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH32_state_t* XXH32_createState(void)
{
return (XXH32_state_t*)XXH_malloc(sizeof(XXH32_state_t));
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr)
{
XXH_free(statePtr);
return XXH_OK;
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* dstState, const XXH32_state_t* srcState)
{
XXH_memcpy(dstState, srcState, sizeof(*dstState));
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH_errorcode XXH32_reset(XXH32_state_t* statePtr, XXH32_hash_t seed)
{
XXH_ASSERT(statePtr != NULL);
memset(statePtr, 0, sizeof(*statePtr));
statePtr->v[0] = seed + XXH_PRIME32_1 + XXH_PRIME32_2;
statePtr->v[1] = seed + XXH_PRIME32_2;
statePtr->v[2] = seed + 0;
statePtr->v[3] = seed - XXH_PRIME32_1;
return XXH_OK;
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH_errorcode
XXH32_update(XXH32_state_t* state, const void* input, size_t len)
{
if (input==NULL) {
XXH_ASSERT(len == 0);
return XXH_OK;
}
{ const xxh_u8* p = (const xxh_u8*)input;
const xxh_u8* const bEnd = p + len;
state->total_len_32 += (XXH32_hash_t)len;
state->large_len |= (XXH32_hash_t)((len>=16) | (state->total_len_32>=16));
if (state->memsize + len < 16) { /* fill in tmp buffer */
XXH_memcpy((xxh_u8*)(state->mem32) + state->memsize, input, len);
state->memsize += (XXH32_hash_t)len;
return XXH_OK;
}
if (state->memsize) { /* some data left from previous update */
XXH_memcpy((xxh_u8*)(state->mem32) + state->memsize, input, 16-state->memsize);
{ const xxh_u32* p32 = state->mem32;
state->v[0] = XXH32_round(state->v[0], XXH_readLE32(p32)); p32++;
state->v[1] = XXH32_round(state->v[1], XXH_readLE32(p32)); p32++;
state->v[2] = XXH32_round(state->v[2], XXH_readLE32(p32)); p32++;
state->v[3] = XXH32_round(state->v[3], XXH_readLE32(p32));
}
p += 16-state->memsize;
state->memsize = 0;
}
if (p <= bEnd-16) {
const xxh_u8* const limit = bEnd - 16;
do {
state->v[0] = XXH32_round(state->v[0], XXH_readLE32(p)); p+=4;
state->v[1] = XXH32_round(state->v[1], XXH_readLE32(p)); p+=4;
state->v[2] = XXH32_round(state->v[2], XXH_readLE32(p)); p+=4;
state->v[3] = XXH32_round(state->v[3], XXH_readLE32(p)); p+=4;
} while (p<=limit);
}
if (p < bEnd) {
XXH_memcpy(state->mem32, p, (size_t)(bEnd-p));
state->memsize = (unsigned)(bEnd-p);
}
}
return XXH_OK;
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH32_hash_t XXH32_digest(const XXH32_state_t* state)
{
xxh_u32 h32;
if (state->large_len) {
h32 = XXH_rotl32(state->v[0], 1)
+ XXH_rotl32(state->v[1], 7)
+ XXH_rotl32(state->v[2], 12)
+ XXH_rotl32(state->v[3], 18);
} else {
h32 = state->v[2] /* == seed */ + XXH_PRIME32_5;
}
h32 += state->total_len_32;
return XXH32_finalize(h32, (const xxh_u8*)state->mem32, state->memsize, XXH_aligned);
}
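/*
* Streaming usage sketch (illustrative, not part of the library): `f` is assumed
* to be an already-open FILE* (requires <stdio.h>); error handling is omitted.
*
* ```
* XXH32_state_t* const state = XXH32_createState();
* char buf[4096];
* size_t n;
* (void)XXH32_reset(state, 0);                       // seed = 0
* while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
*     (void)XXH32_update(state, buf, n);
* }
* {   XXH32_hash_t const h = XXH32_digest(state);    // hash of everything read so far
*     (void)h;
* }
* XXH32_freeState(state);
* ```
*/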
#endif /* !XXH_NO_STREAM */
/******* Canonical representation *******/
/*! @ingroup XXH32_family */
XXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH32_canonical_t) == sizeof(XXH32_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap32(hash);
XXH_memcpy(dst, &hash, sizeof(*dst));
}
/*! @ingroup XXH32_family */
XXH_PUBLIC_API XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src)
{
return XXH_readBE32(src);
}
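/*
* Round-trip sketch (illustrative): the canonical form is a fixed big-endian byte
* layout, so a hash written with it can be read back identically on any platform.
*
* ```
* XXH32_canonical_t c;
* XXH32_hash_t const h = XXH32("abc", 3, 0);
* XXH32_canonicalFromHash(&c, h);
* // XXH32_hashFromCanonical(&c) == h, regardless of endianness
* ```
*/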
#ifndef XXH_NO_LONG_LONG
/* *******************************************************************
* 64-bit hash functions
*********************************************************************/
/*!
* @}
* @ingroup impl
* @{
*/
/******* Memory access *******/
typedef XXH64_hash_t xxh_u64;
#ifdef XXH_OLD_NAMES
# define U64 xxh_u64
#endif
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))
/*
* Manual byteshift. Best for old compilers which don't inline memcpy.
* We actually directly use XXH_readLE64 and XXH_readBE64.
*/
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))
/* Force direct memory access. Only works on CPU which support unaligned memory access in hardware */
static xxh_u64 XXH_read64(const void* memPtr)
{
return *(const xxh_u64*) memPtr;
}
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))
/*
* __attribute__((aligned(1))) is supported by gcc and clang. Originally the
* documentation claimed that it only increased the alignment, but actually it
* can decrease it on gcc, clang, and icc:
* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69502,
* https://gcc.godbolt.org/z/xYez1j67Y.
*/
#ifdef XXH_OLD_NAMES
typedef union { xxh_u32 u32; xxh_u64 u64; } __attribute__((packed)) unalign64;
#endif
static xxh_u64 XXH_read64(const void* ptr)
{
typedef __attribute__((aligned(1))) xxh_u64 xxh_unalign64;
return *((const xxh_unalign64*)ptr);
}
#else
/*
* Portable and safe solution. Generally efficient.
* see: https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html
*/
static xxh_u64 XXH_read64(const void* memPtr)
{
xxh_u64 val;
XXH_memcpy(&val, memPtr, sizeof(val));
return val;
}
#endif /* XXH_FORCE_MEMORY_ACCESS */
#if defined(_MSC_VER) /* Visual Studio */
# define XXH_swap64 _byteswap_uint64
#elif XXH_GCC_VERSION >= 403
# define XXH_swap64 __builtin_bswap64
#else
static xxh_u64 XXH_swap64(xxh_u64 x)
{
return ((x << 56) & 0xff00000000000000ULL) |
((x << 40) & 0x00ff000000000000ULL) |
((x << 24) & 0x0000ff0000000000ULL) |
((x << 8) & 0x000000ff00000000ULL) |
((x >> 8) & 0x00000000ff000000ULL) |
((x >> 24) & 0x0000000000ff0000ULL) |
((x >> 40) & 0x000000000000ff00ULL) |
((x >> 56) & 0x00000000000000ffULL);
}
#endif
/* XXH_FORCE_MEMORY_ACCESS==3 is an endian-independent byteshift load. */
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))
XXH_FORCE_INLINE xxh_u64 XXH_readLE64(const void* memPtr)
{
const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;
return bytePtr[0]
| ((xxh_u64)bytePtr[1] << 8)
| ((xxh_u64)bytePtr[2] << 16)
| ((xxh_u64)bytePtr[3] << 24)
| ((xxh_u64)bytePtr[4] << 32)
| ((xxh_u64)bytePtr[5] << 40)
| ((xxh_u64)bytePtr[6] << 48)
| ((xxh_u64)bytePtr[7] << 56);
}
XXH_FORCE_INLINE xxh_u64 XXH_readBE64(const void* memPtr)
{
const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;
return bytePtr[7]
| ((xxh_u64)bytePtr[6] << 8)
| ((xxh_u64)bytePtr[5] << 16)
| ((xxh_u64)bytePtr[4] << 24)
| ((xxh_u64)bytePtr[3] << 32)
| ((xxh_u64)bytePtr[2] << 40)
| ((xxh_u64)bytePtr[1] << 48)
| ((xxh_u64)bytePtr[0] << 56);
}
#else
XXH_FORCE_INLINE xxh_u64 XXH_readLE64(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_read64(ptr) : XXH_swap64(XXH_read64(ptr));
}
static xxh_u64 XXH_readBE64(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_swap64(XXH_read64(ptr)) : XXH_read64(ptr);
}
#endif
XXH_FORCE_INLINE xxh_u64
XXH_readLE64_align(const void* ptr, XXH_alignment align)
{
if (align==XXH_unaligned)
return XXH_readLE64(ptr);
else
return XXH_CPU_LITTLE_ENDIAN ? *(const xxh_u64*)ptr : XXH_swap64(*(const xxh_u64*)ptr);
}
/******* xxh64 *******/
/*!
* @}
* @defgroup XXH64_impl XXH64 implementation
* @ingroup impl
*
* Details on the XXH64 implementation.
* @{
*/
/* #define rather that static const, to be used as initializers */
#define XXH_PRIME64_1 0x9E3779B185EBCA87ULL /*!< 0b1001111000110111011110011011000110000101111010111100101010000111 */
#define XXH_PRIME64_2 0xC2B2AE3D27D4EB4FULL /*!< 0b1100001010110010101011100011110100100111110101001110101101001111 */
#define XXH_PRIME64_3 0x165667B19E3779F9ULL /*!< 0b0001011001010110011001111011000110011110001101110111100111111001 */
#define XXH_PRIME64_4 0x85EBCA77C2B2AE63ULL /*!< 0b1000010111101011110010100111011111000010101100101010111001100011 */
#define XXH_PRIME64_5 0x27D4EB2F165667C5ULL /*!< 0b0010011111010100111010110010111100010110010101100110011111000101 */
#ifdef XXH_OLD_NAMES
# define PRIME64_1 XXH_PRIME64_1
# define PRIME64_2 XXH_PRIME64_2
# define PRIME64_3 XXH_PRIME64_3
# define PRIME64_4 XXH_PRIME64_4
# define PRIME64_5 XXH_PRIME64_5
#endif
/*! @copydoc XXH32_round */
static xxh_u64 XXH64_round(xxh_u64 acc, xxh_u64 input)
{
acc += input * XXH_PRIME64_2;
acc = XXH_rotl64(acc, 31);
acc *= XXH_PRIME64_1;
#if (defined(__AVX512F__)) && !defined(XXH_ENABLE_AUTOVECTORIZE)
/*
* DISABLE AUTOVECTORIZATION:
* A compiler fence is used to prevent GCC and Clang from
* autovectorizing the XXH64 loop (pragmas and attributes don't work for some
* reason) without globally disabling AVX512.
*
* Autovectorization of XXH64 tends to be detrimental,
* though the exact outcome may change depending on the exact CPU and compiler version.
* For reference, it has been reported as detrimental for Skylake-X,
* but possibly beneficial for Zen4.
*
* The default is to disable auto-vectorization,
* but it can be re-enabled by defining the `XXH_ENABLE_AUTOVECTORIZE` build macro.
*/
XXH_COMPILER_GUARD(acc);
#endif
return acc;
}
static xxh_u64 XXH64_mergeRound(xxh_u64 acc, xxh_u64 val)
{
val = XXH64_round(0, val);
acc ^= val;
acc = acc * XXH_PRIME64_1 + XXH_PRIME64_4;
return acc;
}
/*! @copydoc XXH32_avalanche */
static xxh_u64 XXH64_avalanche(xxh_u64 hash)
{
hash ^= hash >> 33;
hash *= XXH_PRIME64_2;
hash ^= hash >> 29;
hash *= XXH_PRIME64_3;
hash ^= hash >> 32;
return hash;
}
#define XXH_get64bits(p) XXH_readLE64_align(p, align)
/*!
* @internal
* @brief Processes the last 0-31 bytes of @p ptr.
*
* There may be up to 31 bytes remaining to consume from the input.
* This final stage will digest them to ensure that all input bytes are present
* in the final mix.
*
* @param hash The hash to finalize.
* @param ptr The pointer to the remaining input.
* @param len The remaining length, modulo 32.
* @param align Whether @p ptr is aligned.
* @return The finalized hash
* @see XXH32_finalize().
*/
static XXH_PUREF xxh_u64
XXH64_finalize(xxh_u64 hash, const xxh_u8* ptr, size_t len, XXH_alignment align)
{
if (ptr==NULL) XXH_ASSERT(len == 0);
len &= 31;
while (len >= 8) {
xxh_u64 const k1 = XXH64_round(0, XXH_get64bits(ptr));
ptr += 8;
hash ^= k1;
hash = XXH_rotl64(hash,27) * XXH_PRIME64_1 + XXH_PRIME64_4;
len -= 8;
}
if (len >= 4) {
hash ^= (xxh_u64)(XXH_get32bits(ptr)) * XXH_PRIME64_1;
ptr += 4;
hash = XXH_rotl64(hash, 23) * XXH_PRIME64_2 + XXH_PRIME64_3;
len -= 4;
}
while (len > 0) {
hash ^= (*ptr++) * XXH_PRIME64_5;
hash = XXH_rotl64(hash, 11) * XXH_PRIME64_1;
--len;
}
return XXH64_avalanche(hash);
}
#ifdef XXH_OLD_NAMES
# define PROCESS1_64 XXH_PROCESS1_64
# define PROCESS4_64 XXH_PROCESS4_64
# define PROCESS8_64 XXH_PROCESS8_64
#else
# undef XXH_PROCESS1_64
# undef XXH_PROCESS4_64
# undef XXH_PROCESS8_64
#endif
/*!
* @internal
* @brief The implementation for @ref XXH64().
*
* @param input , len , seed Directly passed from @ref XXH64().
* @param align Whether @p input is aligned.
* @return The calculated hash.
*/
XXH_FORCE_INLINE XXH_PUREF xxh_u64
XXH64_endian_align(const xxh_u8* input, size_t len, xxh_u64 seed, XXH_alignment align)
{
xxh_u64 h64;
if (input==NULL) XXH_ASSERT(len == 0);
if (len>=32) {
const xxh_u8* const bEnd = input + len;
const xxh_u8* const limit = bEnd - 31;
xxh_u64 v1 = seed + XXH_PRIME64_1 + XXH_PRIME64_2;
xxh_u64 v2 = seed + XXH_PRIME64_2;
xxh_u64 v3 = seed + 0;
xxh_u64 v4 = seed - XXH_PRIME64_1;
do {
v1 = XXH64_round(v1, XXH_get64bits(input)); input+=8;
v2 = XXH64_round(v2, XXH_get64bits(input)); input+=8;
v3 = XXH64_round(v3, XXH_get64bits(input)); input+=8;
v4 = XXH64_round(v4, XXH_get64bits(input)); input+=8;
} while (input<limit);
h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
h64 = XXH64_mergeRound(h64, v1);
h64 = XXH64_mergeRound(h64, v2);
h64 = XXH64_mergeRound(h64, v3);
h64 = XXH64_mergeRound(h64, v4);
} else {
h64 = seed + XXH_PRIME64_5;
}
h64 += (xxh_u64) len;
return XXH64_finalize(h64, input, len, align);
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API XXH64_hash_t XXH64 (XXH_NOESCAPE const void* input, size_t len, XXH64_hash_t seed)
{
#if !defined(XXH_NO_STREAM) && XXH_SIZE_OPT >= 2
/* Simple version, good for code maintenance, but unfortunately slow for small inputs */
XXH64_state_t state;
XXH64_reset(&state, seed);
XXH64_update(&state, (const xxh_u8*)input, len);
return XXH64_digest(&state);
#else
if (XXH_FORCE_ALIGN_CHECK) {
if ((((size_t)input) & 7)==0) { /* Input is aligned, let's leverage the speed advantage */
return XXH64_endian_align((const xxh_u8*)input, len, seed, XXH_aligned);
} }
return XXH64_endian_align((const xxh_u8*)input, len, seed, XXH_unaligned);
#endif
}
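/*
* Usage sketch (illustrative): the 64-bit one-shot variant mirrors XXH32().
* `buffer` and `size` are hypothetical caller-provided variables.
*
* ```
* XXH64_hash_t const h64 = XXH64(buffer, size, 0);   // seed = 0
* ```
*/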
/******* Hash Streaming *******/
#ifndef XXH_NO_STREAM
/*! @ingroup XXH64_family*/
XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void)
{
return (XXH64_state_t*)XXH_malloc(sizeof(XXH64_state_t));
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr)
{
XXH_free(statePtr);
return XXH_OK;
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API void XXH64_copyState(XXH_NOESCAPE XXH64_state_t* dstState, const XXH64_state_t* srcState)
{
XXH_memcpy(dstState, srcState, sizeof(*dstState));
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API XXH_errorcode XXH64_reset(XXH_NOESCAPE XXH64_state_t* statePtr, XXH64_hash_t seed)
{
XXH_ASSERT(statePtr != NULL);
memset(statePtr, 0, sizeof(*statePtr));
statePtr->v[0] = seed + XXH_PRIME64_1 + XXH_PRIME64_2;
statePtr->v[1] = seed + XXH_PRIME64_2;
statePtr->v[2] = seed + 0;
statePtr->v[3] = seed - XXH_PRIME64_1;
return XXH_OK;
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API XXH_errorcode
XXH64_update (XXH_NOESCAPE XXH64_state_t* state, XXH_NOESCAPE const void* input, size_t len)
{
if (input==NULL) {
XXH_ASSERT(len == 0);
return XXH_OK;
}
{ const xxh_u8* p = (const xxh_u8*)input;
const xxh_u8* const bEnd = p + len;
state->total_len += len;
if (state->memsize + len < 32) { /* fill in tmp buffer */
XXH_memcpy(((xxh_u8*)state->mem64) + state->memsize, input, len);
state->memsize += (xxh_u32)len;
return XXH_OK;
}
if (state->memsize) { /* tmp buffer is full */
XXH_memcpy(((xxh_u8*)state->mem64) + state->memsize, input, 32-state->memsize);
state->v[0] = XXH64_round(state->v[0], XXH_readLE64(state->mem64+0));
state->v[1] = XXH64_round(state->v[1], XXH_readLE64(state->mem64+1));
state->v[2] = XXH64_round(state->v[2], XXH_readLE64(state->mem64+2));
state->v[3] = XXH64_round(state->v[3], XXH_readLE64(state->mem64+3));
p += 32 - state->memsize;
state->memsize = 0;
}
if (p+32 <= bEnd) {
const xxh_u8* const limit = bEnd - 32;
do {
state->v[0] = XXH64_round(state->v[0], XXH_readLE64(p)); p+=8;
state->v[1] = XXH64_round(state->v[1], XXH_readLE64(p)); p+=8;
state->v[2] = XXH64_round(state->v[2], XXH_readLE64(p)); p+=8;
state->v[3] = XXH64_round(state->v[3], XXH_readLE64(p)); p+=8;
} while (p<=limit);
}
if (p < bEnd) {
XXH_memcpy(state->mem64, p, (size_t)(bEnd-p));
state->memsize = (unsigned)(bEnd-p);
}
}
return XXH_OK;
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API XXH64_hash_t XXH64_digest(XXH_NOESCAPE const XXH64_state_t* state)
{
xxh_u64 h64;
if (state->total_len >= 32) {
h64 = XXH_rotl64(state->v[0], 1) + XXH_rotl64(state->v[1], 7) + XXH_rotl64(state->v[2], 12) + XXH_rotl64(state->v[3], 18);
h64 = XXH64_mergeRound(h64, state->v[0]);
h64 = XXH64_mergeRound(h64, state->v[1]);
h64 = XXH64_mergeRound(h64, state->v[2]);
h64 = XXH64_mergeRound(h64, state->v[3]);
} else {
h64 = state->v[2] /*seed*/ + XXH_PRIME64_5;
}
h64 += (xxh_u64) state->total_len;
return XXH64_finalize(h64, (const xxh_u8*)state->mem64, (size_t)state->total_len, XXH_aligned);
}
#endif /* !XXH_NO_STREAM */
/******* Canonical representation *******/
/*! @ingroup XXH64_family */
XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH_NOESCAPE XXH64_canonical_t* dst, XXH64_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH64_canonical_t) == sizeof(XXH64_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap64(hash);
XXH_memcpy(dst, &hash, sizeof(*dst));
}
/*! @ingroup XXH64_family */
XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(XXH_NOESCAPE const XXH64_canonical_t* src)
{
return XXH_readBE64(src);
}
#if defined (__cplusplus)
}
#endif
#ifndef XXH_NO_XXH3
/* *********************************************************************
* XXH3
* New generation hash designed for speed on small keys and vectorization
************************************************************************ */
/*!
* @}
* @defgroup XXH3_impl XXH3 implementation
* @ingroup impl
* @{
*/
/* === Compiler specifics === */
#if ((defined(sun) || defined(__sun)) && __cplusplus) /* Solaris includes __STDC_VERSION__ with C++. Tested with GCC 5.5 */
# define XXH_RESTRICT /* disable */
#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* >= C99 */
# define XXH_RESTRICT restrict
#elif (defined (__GNUC__) && ((__GNUC__ > 3) || (__GNUC__ == 3 && __GNUC_MINOR__ >= 1))) \
|| (defined (__clang__)) \
|| (defined (_MSC_VER) && (_MSC_VER >= 1400)) \
|| (defined (__INTEL_COMPILER) && (__INTEL_COMPILER >= 1300))
/*
* There are a LOT more compilers that recognize __restrict but this
* covers the major ones.
*/
# define XXH_RESTRICT __restrict
#else
# define XXH_RESTRICT /* disable */
#endif
#if (defined(__GNUC__) && (__GNUC__ >= 3)) \
|| (defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 800)) \
|| defined(__clang__)
# define XXH_likely(x) __builtin_expect(x, 1)
# define XXH_unlikely(x) __builtin_expect(x, 0)
#else
# define XXH_likely(x) (x)
# define XXH_unlikely(x) (x)
#endif
#ifndef XXH_HAS_INCLUDE
# ifdef __has_include
/*
* Not defined as XXH_HAS_INCLUDE(x) (function-like) because
* this causes segfaults in Apple Clang 4.2 (on Mac OS X 10.7 Lion)
*/
# define XXH_HAS_INCLUDE __has_include
# else
# define XXH_HAS_INCLUDE(x) 0
# endif
#endif
#if defined(__GNUC__) || defined(__clang__)
# if defined(__ARM_FEATURE_SVE)
# include <arm_sve.h>
# endif
# if defined(__ARM_NEON__) || defined(__ARM_NEON) \
|| (defined(_M_ARM) && _M_ARM >= 7) \
|| defined(_M_ARM64) || defined(_M_ARM64EC) \
|| (defined(__wasm_simd128__) && XXH_HAS_INCLUDE(<arm_neon.h>)) /* WASM SIMD128 via SIMDe */
# define inline __inline__ /* circumvent a clang bug */
# include <arm_neon.h>
# undef inline
# elif defined(__AVX2__)
# include <immintrin.h>
# elif defined(__SSE2__)
# include <emmintrin.h>
# endif
#endif
#if defined(_MSC_VER)
# include <intrin.h>
#endif
/*
* One goal of XXH3 is to make it fast on both 32-bit and 64-bit, while
* remaining a true 64-bit/128-bit hash function.
*
* This is done by prioritizing a subset of 64-bit operations that can be
* emulated without too many steps on the average 32-bit machine.
*
* For example, these two lines seem similar, and run equally fast on 64-bit:
*
* xxh_u64 x;
* x ^= (x >> 47); // good
* x ^= (x >> 13); // bad
*
* However, to a 32-bit machine, there is a major difference.
*
* x ^= (x >> 47) looks like this:
*
* x.lo ^= (x.hi >> (47 - 32));
*
* while x ^= (x >> 13) looks like this:
*
* // note: funnel shifts are not usually cheap.
* x.lo ^= (x.lo >> 13) | (x.hi << (32 - 13));
* x.hi ^= (x.hi >> 13);
*
* The first one is significantly faster than the second, simply because the
* shift is larger than 32. This means:
* - All the bits we need are in the upper 32 bits, so we can ignore the lower
* 32 bits in the shift.
* - The shift result will always fit in the lower 32 bits, and therefore,
* we can ignore the upper 32 bits in the xor.
*
* Thanks to this optimization, XXH3 only requires these features to be efficient:
*
* - Usable unaligned access
* - A 32-bit or 64-bit ALU
* - If 32-bit, a decent ADC instruction
* - A 32 or 64-bit multiply with a 64-bit result
* - For the 128-bit variant, a decent byteswap helps short inputs.
*
* The first two are already required by XXH32, and almost all 32-bit and 64-bit
* platforms which can run XXH32 can run XXH3 efficiently.
*
* Thumb-1, the classic 16-bit only subset of ARM's instruction set, is one
* notable exception.
*
* First of all, Thumb-1 lacks support for the UMULL instruction which
* performs the important long multiply. This means numerous __aeabi_lmul
* calls.
*
* Second of all, the 8 functional registers are just not enough.
* Setup for __aeabi_lmul, byteshift loads, pointers, and all arithmetic need
* Lo registers, and this shuffling results in thousands more MOVs than A32.
*
* A32 and T32 don't have this limitation. They can access all 14 registers,
* do a 32->64 multiply with UMULL, and the flexible operand allowing free
* shifts is helpful, too.
*
* Therefore, we do a quick sanity check.
*
* If compiling Thumb-1 for a target which supports ARM instructions, we will
* emit a warning, as it is not a "sane" platform to compile for.
*
* Usually, if this happens, it is because of an accident and you probably need
* to specify -march, as you likely meant to compile for a newer architecture.
*
* Credit: large sections of the vectorial and asm source code paths
* have been contributed by @easyaspi314
*/
#if defined(__thumb__) && !defined(__thumb2__) && defined(__ARM_ARCH_ISA_ARM)
# warning "XXH3 is highly inefficient without ARM or Thumb-2."
#endif
/* ==========================================
* Vectorization detection
* ========================================== */
#ifdef XXH_DOXYGEN
/*!
* @ingroup tuning
* @brief Overrides the vectorization implementation chosen for XXH3.
*
* Can be defined to 0 to disable SIMD or any of the values mentioned in
* @ref XXH_VECTOR_TYPE.
*
* If this is not defined, it uses predefined macros to determine the best
* implementation.
*/
# define XXH_VECTOR XXH_SCALAR
/*!
* @ingroup tuning
* @brief Possible values for @ref XXH_VECTOR.
*
* Note that these are actually implemented as macros.
*
* If this is not defined, it is detected automatically.
* internal macro XXH_X86DISPATCH overrides this.
*/
enum XXH_VECTOR_TYPE /* fake enum */ {
XXH_SCALAR = 0, /*!< Portable scalar version */
XXH_SSE2 = 1, /*!<
* SSE2 for Pentium 4, Opteron, all x86_64.
*
* @note SSE2 is also guaranteed on Windows 10, macOS, and
* Android x86.
*/
XXH_AVX2 = 2, /*!< AVX2 for Haswell and Bulldozer */
XXH_AVX512 = 3, /*!< AVX512 for Skylake and Icelake */
XXH_NEON = 4, /*!<
* NEON for most ARMv7-A, all AArch64, and WASM SIMD128
* via the SIMDeverywhere polyfill provided with the
* Emscripten SDK.
*/
XXH_VSX = 5, /*!< VSX and ZVector for POWER8/z13 (64-bit) */
XXH_SVE = 6, /*!< SVE for some ARMv8-A and ARMv9-A */
};
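/*
* Illustrative example (an assumed build setup): forcing the portable scalar path
* from a consumer translation unit, before the implementation is expanded.
*
* ```
* #define XXH_VECTOR 0        // i.e. XXH_SCALAR
* #define XXH_INLINE_ALL
* #include "xxhash.h"
* ```
*/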
/*!
* @ingroup tuning
* @brief Selects the minimum alignment for XXH3's accumulators.
*
* When using SIMD, this should match the alignment required for said vector
* type, so, for example, 32 for AVX2.
*
* Default: Auto detected.
*/
# define XXH_ACC_ALIGN 8
#endif
/* Actual definition */
#ifndef XXH_DOXYGEN
# define XXH_SCALAR 0
# define XXH_SSE2 1
# define XXH_AVX2 2
# define XXH_AVX512 3
# define XXH_NEON 4
# define XXH_VSX 5
# define XXH_SVE 6
#endif
#ifndef XXH_VECTOR /* can be defined on command line */
# if defined(__ARM_FEATURE_SVE)
# define XXH_VECTOR XXH_SVE
# elif ( \
defined(__ARM_NEON__) || defined(__ARM_NEON) /* gcc */ \
|| defined(_M_ARM) || defined(_M_ARM64) || defined(_M_ARM64EC) /* msvc */ \
|| (defined(__wasm_simd128__) && XXH_HAS_INCLUDE(<arm_neon.h>)) /* wasm simd128 via SIMDe */ \
) && ( \
defined(_WIN32) || defined(__LITTLE_ENDIAN__) /* little endian only */ \
|| (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) \
)
# define XXH_VECTOR XXH_NEON
# elif defined(__AVX512F__)
# define XXH_VECTOR XXH_AVX512
# elif defined(__AVX2__)
# define XXH_VECTOR XXH_AVX2
# elif defined(__SSE2__) || defined(_M_X64) || (defined(_M_IX86_FP) && (_M_IX86_FP == 2))
# define XXH_VECTOR XXH_SSE2
# elif (defined(__PPC64__) && defined(__POWER8_VECTOR__)) \
|| (defined(__s390x__) && defined(__VEC__)) \
&& defined(__GNUC__) /* TODO: IBM XL */
# define XXH_VECTOR XXH_VSX
# else
# define XXH_VECTOR XXH_SCALAR
# endif
#endif
/* __ARM_FEATURE_SVE is only supported by GCC & Clang. */
#if (XXH_VECTOR == XXH_SVE) && !defined(__ARM_FEATURE_SVE)
# ifdef _MSC_VER
# pragma warning(once : 4606)
# else
# warning "__ARM_FEATURE_SVE isn't supported. Use SCALAR instead."
# endif
# undef XXH_VECTOR
# define XXH_VECTOR XXH_SCALAR
#endif
/*
* Controls the alignment of the accumulator,
* for compatibility with aligned vector loads, which are usually faster.
*/
#ifndef XXH_ACC_ALIGN
# if defined(XXH_X86DISPATCH)
# define XXH_ACC_ALIGN 64 /* for compatibility with avx512 */
# elif XXH_VECTOR == XXH_SCALAR /* scalar */
# define XXH_ACC_ALIGN 8
# elif XXH_VECTOR == XXH_SSE2 /* sse2 */
# define XXH_ACC_ALIGN 16
# elif XXH_VECTOR == XXH_AVX2 /* avx2 */
# define XXH_ACC_ALIGN 32
# elif XXH_VECTOR == XXH_NEON /* neon */
# define XXH_ACC_ALIGN 16
# elif XXH_VECTOR == XXH_VSX /* vsx */
# define XXH_ACC_ALIGN 16
# elif XXH_VECTOR == XXH_AVX512 /* avx512 */
# define XXH_ACC_ALIGN 64
# elif XXH_VECTOR == XXH_SVE /* sve */
# define XXH_ACC_ALIGN 64
# endif
#endif
#if defined(XXH_X86DISPATCH) || XXH_VECTOR == XXH_SSE2 \
|| XXH_VECTOR == XXH_AVX2 || XXH_VECTOR == XXH_AVX512
# define XXH_SEC_ALIGN XXH_ACC_ALIGN
#elif XXH_VECTOR == XXH_SVE
# define XXH_SEC_ALIGN XXH_ACC_ALIGN
#else
# define XXH_SEC_ALIGN 8
#endif
#if defined(__GNUC__) || defined(__clang__)
# define XXH_ALIASING __attribute__((may_alias))
#else
# define XXH_ALIASING /* nothing */
#endif
/*
* UGLY HACK:
* GCC usually generates the best code with -O3 for xxHash.
*
* However, when targeting AVX2, it is overzealous in its unrolling resulting
* in code roughly 3/4 the speed of Clang.
*
* There are other issues, such as GCC splitting _mm256_loadu_si256 into
* _mm_loadu_si128 + _mm256_inserti128_si256. This is an optimization which
* only applies to Sandy and Ivy Bridge... which don't even support AVX2.
*
* That is why when compiling the AVX2 version, it is recommended to use either
* -O2 -mavx2 -march=haswell
* or
* -O2 -mavx2 -mno-avx256-split-unaligned-load
* for decent performance, or to use Clang instead.
*
* Fortunately, we can control the first one with a pragma that forces GCC into
* -O2, but the other one we can't control without "failed to inline always
* inline function due to target mismatch" warnings.
*/
#if XXH_VECTOR == XXH_AVX2 /* AVX2 */ \
&& defined(__GNUC__) && !defined(__clang__) /* GCC, not Clang */ \
&& defined(__OPTIMIZE__) && XXH_SIZE_OPT <= 0 /* respect -O0 and -Os */
# pragma GCC push_options
# pragma GCC optimize("-O2")
#endif
#if defined (__cplusplus)
extern "C" {
#endif
#if XXH_VECTOR == XXH_NEON
/*
* UGLY HACK: While AArch64 GCC on Linux does not seem to care, on macOS, GCC -O3
* optimizes out the entire hashLong loop because of the aliasing violation.
*
* However, GCC is also inefficient at load-store optimization with vld1q/vst1q,
* so the only option is to mark it as aliasing.
*/
typedef uint64x2_t xxh_aliasing_uint64x2_t XXH_ALIASING;
/*!
* @internal
* @brief `vld1q_u64` but faster and alignment-safe.
*
* On AArch64, unaligned access is always safe, but on ARMv7-a, it is only
* *conditionally* safe (`vld1` has an alignment bit like `movdq[ua]` in x86).
*
* GCC for AArch64 sees `vld1q_u8` as an intrinsic instead of a load, so it
* prohibits load-store optimizations. Therefore, a direct dereference is used.
*
* Otherwise, `vld1q_u8` is used with `vreinterpretq_u8_u64` to do a safe
* unaligned load.
*/
#if defined(__aarch64__) && defined(__GNUC__) && !defined(__clang__)
XXH_FORCE_INLINE uint64x2_t XXH_vld1q_u64(void const* ptr) /* silence -Wcast-align */
{
return *(xxh_aliasing_uint64x2_t const *)ptr;
}
#else
XXH_FORCE_INLINE uint64x2_t XXH_vld1q_u64(void const* ptr)
{
return vreinterpretq_u64_u8(vld1q_u8((uint8_t const*)ptr));
}
#endif
/*!
* @internal
* @brief `vmlal_u32` on low and high halves of a vector.
*
* This is a workaround for AArch64 GCC < 11, which implemented arm_neon.h with
* inline assembly and was therefore incapable of merging the `vget_{low, high}_u32`
* with `vmlal_u32`.
*/
#if defined(__aarch64__) && defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 11
XXH_FORCE_INLINE uint64x2_t
XXH_vmlal_low_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)
{
/* Inline assembly is the only way */
__asm__("umlal %0.2d, %1.2s, %2.2s" : "+w" (acc) : "w" (lhs), "w" (rhs));
return acc;
}
XXH_FORCE_INLINE uint64x2_t
XXH_vmlal_high_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)
{
/* This intrinsic works as expected */
return vmlal_high_u32(acc, lhs, rhs);
}
#else
/* Portable intrinsic versions */
XXH_FORCE_INLINE uint64x2_t
XXH_vmlal_low_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)
{
return vmlal_u32(acc, vget_low_u32(lhs), vget_low_u32(rhs));
}
/*! @copydoc XXH_vmlal_low_u32
* Assume the compiler converts this to vmlal_high_u32 on aarch64 */
XXH_FORCE_INLINE uint64x2_t
XXH_vmlal_high_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)
{
return vmlal_u32(acc, vget_high_u32(lhs), vget_high_u32(rhs));
}
#endif
/*!
* @ingroup tuning
* @brief Controls the NEON to scalar ratio for XXH3
*
* This can be set to 2, 4, 6, or 8.
*
* ARM Cortex CPUs are _very_ sensitive to how their pipelines are used.
*
* For example, the Cortex-A73 can dispatch 3 micro-ops per cycle, but only 2 of those
* can be NEON. If you are only using NEON instructions, you are only using 2/3 of the CPU
* bandwidth.
*
* This is even more noticeable on the more advanced cores like the Cortex-A76 which
* can dispatch 8 micro-ops per cycle, but still only 2 NEON micro-ops at once.
*
* Therefore, to make the most out of the pipeline, it is beneficial to run 6 NEON lanes
* and 2 scalar lanes, which is chosen by default.
*
* This does not apply to Apple processors or 32-bit processors, which run better with
* full NEON. These will default to 8. Additionally, size-optimized builds run 8 lanes.
*
* This change benefits CPUs with large micro-op buffers without negatively affecting
* most other CPUs:
*
* | Chipset | Dispatch type | NEON only | 6:2 hybrid | Diff. |
* |:----------------------|:--------------------|----------:|-----------:|------:|
* | Snapdragon 730 (A76) | 2 NEON/8 micro-ops | 8.8 GB/s | 10.1 GB/s | ~16% |
* | Snapdragon 835 (A73) | 2 NEON/3 micro-ops | 5.1 GB/s | 5.3 GB/s | ~5% |
* | Marvell PXA1928 (A53) | In-order dual-issue | 1.9 GB/s | 1.9 GB/s | 0% |
* | Apple M1 | 4 NEON/8 micro-ops | 37.3 GB/s | 36.1 GB/s | ~-3% |
*
* It also seems to fix some bad codegen on GCC, making it almost as fast as clang.
*
* When using WASM SIMD128, if this is 2 or 6, SIMDe will scalarize 2 of the lanes,
* meaning it effectively becomes a slower 4.
*
* @see XXH3_accumulate_512_neon()
*/
# ifndef XXH3_NEON_LANES
# if (defined(__aarch64__) || defined(__arm64__) || defined(_M_ARM64) || defined(_M_ARM64EC)) \
&& !defined(__APPLE__) && XXH_SIZE_OPT <= 0
# define XXH3_NEON_LANES 6
# else
# define XXH3_NEON_LANES XXH_ACC_NB
# endif
# endif
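/*
* Example (illustrative): the ratio can be tuned at build time, e.g. with
* `-DXXH3_NEON_LANES=8` to use full NEON, or `-DXXH3_NEON_LANES=4` on cores
* with fewer NEON pipelines. Only the values 2, 4, 6, and 8 are meaningful.
*/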
#endif /* XXH_VECTOR == XXH_NEON */
#if defined (__cplusplus)
} /* extern "C" */
#endif
/*
* VSX and Z Vector helpers.
*
* This is very messy, and any pull requests to clean this up are welcome.
*
* There are a lot of problems with supporting VSX and s390x, due to
* inconsistent intrinsics, spotty coverage, and multiple endiannesses.
*/
#if XXH_VECTOR == XXH_VSX
/* Annoyingly, these headers _may_ define three macros: `bool`, `vector`,
* and `pixel`. This is a problem for obvious reasons.
*
* These keywords are unnecessary; the spec literally says they are
* equivalent to `__bool`, `__vector`, and `__pixel` and may be undef'd
* after including the header.
*
* We use pragma push_macro/pop_macro to keep the namespace clean. */
# pragma push_macro("bool")
# pragma push_macro("vector")
# pragma push_macro("pixel")
/* silence potential macro redefined warnings */
# undef bool
# undef vector
# undef pixel
# if defined(__s390x__)
# include <s390intrin.h>
# else
# include <altivec.h>
# endif
/* Restore the original macro values, if applicable. */
# pragma pop_macro("pixel")
# pragma pop_macro("vector")
# pragma pop_macro("bool")
typedef __vector unsigned long long xxh_u64x2;
typedef __vector unsigned char xxh_u8x16;
typedef __vector unsigned xxh_u32x4;
/*
* UGLY HACK: Similar to aarch64 macOS GCC, s390x GCC has the same aliasing issue.
*/
typedef xxh_u64x2 xxh_aliasing_u64x2 XXH_ALIASING;
# ifndef XXH_VSX_BE
# if defined(__BIG_ENDIAN__) \
|| (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
# define XXH_VSX_BE 1
# elif defined(__VEC_ELEMENT_REG_ORDER__) && __VEC_ELEMENT_REG_ORDER__ == __ORDER_BIG_ENDIAN__
# warning "-maltivec=be is not recommended. Please use native endianness."
# define XXH_VSX_BE 1
# else
# define XXH_VSX_BE 0
# endif
# endif /* !defined(XXH_VSX_BE) */
# if XXH_VSX_BE
# if defined(__POWER9_VECTOR__) || (defined(__clang__) && defined(__s390x__))
# define XXH_vec_revb vec_revb
# else
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* A polyfill for POWER9's vec_revb().
*/
XXH_FORCE_INLINE xxh_u64x2 XXH_vec_revb(xxh_u64x2 val)
{
xxh_u8x16 const vByteSwap = { 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01, 0x00,
0x0F, 0x0E, 0x0D, 0x0C, 0x0B, 0x0A, 0x09, 0x08 };
return vec_perm(val, val, vByteSwap);
}
#if defined (__cplusplus)
} /* extern "C" */
#endif
# endif
# endif /* XXH_VSX_BE */
#if defined (__cplusplus)
extern "C" {
#endif
/*!
* Performs an unaligned vector load and byte swaps it on big endian.
*/
XXH_FORCE_INLINE xxh_u64x2 XXH_vec_loadu(const void *ptr)
{
xxh_u64x2 ret;
XXH_memcpy(&ret, ptr, sizeof(xxh_u64x2));
# if XXH_VSX_BE
ret = XXH_vec_revb(ret);
# endif
return ret;
}
/*
* vec_mulo and vec_mule are very problematic intrinsics on PowerPC
*
* These intrinsics weren't added until GCC 8, despite existing for a while,
* and they are endian dependent. Also, their meanings swap depending on the version.
*/
# if defined(__s390x__)
/* s390x is always big endian, no issue on this platform */
# define XXH_vec_mulo vec_mulo
# define XXH_vec_mule vec_mule
# elif defined(__clang__) && XXH_HAS_BUILTIN(__builtin_altivec_vmuleuw) && !defined(__ibmxl__)
/* Clang has a better way to control this: we can just use the builtin, which doesn't swap. */
/* The IBM XL Compiler (which defines __clang__) only implements the vec_* operations. */
# define XXH_vec_mulo __builtin_altivec_vmulouw
# define XXH_vec_mule __builtin_altivec_vmuleuw
# else
/* gcc needs inline assembly */
/* Adapted from https://github.com/google/highwayhash/blob/master/highwayhash/hh_vsx.h. */
XXH_FORCE_INLINE xxh_u64x2 XXH_vec_mulo(xxh_u32x4 a, xxh_u32x4 b)
{
xxh_u64x2 result;
__asm__("vmulouw %0, %1, %2" : "=v" (result) : "v" (a), "v" (b));
return result;
}
XXH_FORCE_INLINE xxh_u64x2 XXH_vec_mule(xxh_u32x4 a, xxh_u32x4 b)
{
xxh_u64x2 result;
__asm__("vmuleuw %0, %1, %2" : "=v" (result) : "v" (a), "v" (b));
return result;
}
# endif /* XXH_vec_mulo, XXH_vec_mule */
#if defined (__cplusplus)
} /* extern "C" */
#endif
#endif /* XXH_VECTOR == XXH_VSX */
#if XXH_VECTOR == XXH_SVE
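/*
* ACCRND performs one accumulation step on a predicated SVE vector of 64-bit lanes,
* mirroring the scalar round: with data_key = input ^ secret, it computes
* acc += swap(input) + (data_key & 0xFFFFFFFF) * (data_key >> 32).
* `mask`, `xinput`, `xsecret` and `kSwap` are expected to be in scope at the
* expansion site.
*/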
#define ACCRND(acc, offset) \
do { \
svuint64_t input_vec = svld1_u64(mask, xinput + offset); \
svuint64_t secret_vec = svld1_u64(mask, xsecret + offset); \
svuint64_t mixed = sveor_u64_x(mask, secret_vec, input_vec); \
svuint64_t swapped = svtbl_u64(input_vec, kSwap); \
svuint64_t mixed_lo = svextw_u64_x(mask, mixed); \
svuint64_t mixed_hi = svlsr_n_u64_x(mask, mixed, 32); \
svuint64_t mul = svmad_u64_x(mask, mixed_lo, mixed_hi, swapped); \
acc = svadd_u64_x(mask, acc, mul); \
} while (0)
#endif /* XXH_VECTOR == XXH_SVE */
/* prefetch
* can be disabled by defining the XXH_NO_PREFETCH build macro */
#if defined(XXH_NO_PREFETCH)
# define XXH_PREFETCH(ptr) (void)(ptr) /* disabled */
#else
# if XXH_SIZE_OPT >= 1
# define XXH_PREFETCH(ptr) (void)(ptr)
# elif defined(_MSC_VER) && (defined(_M_X64) || defined(_M_IX86)) /* _mm_prefetch() not defined outside of x86/x64 */
# include <mmintrin.h> /* https://msdn.microsoft.com/fr-fr/library/84szxsww(v=vs.90).aspx */
# define XXH_PREFETCH(ptr) _mm_prefetch((const char*)(ptr), _MM_HINT_T0)
# elif defined(__GNUC__) && ( (__GNUC__ >= 4) || ( (__GNUC__ == 3) && (__GNUC_MINOR__ >= 1) ) )
# define XXH_PREFETCH(ptr) __builtin_prefetch((ptr), 0 /* rw==read */, 3 /* locality */)
# else
# define XXH_PREFETCH(ptr) (void)(ptr) /* disabled */
# endif
#endif /* XXH_NO_PREFETCH */
#if defined (__cplusplus)
extern "C" {
#endif
/* ==========================================
* XXH3 default settings
* ========================================== */
#define XXH_SECRET_DEFAULT_SIZE 192 /* must be >= XXH3_SECRET_SIZE_MIN */
#if (XXH_SECRET_DEFAULT_SIZE < XXH3_SECRET_SIZE_MIN)
# error "default keyset is not large enough"
#endif
/*! Pseudorandom secret taken directly from FARSH. */
XXH_ALIGN(64) static const xxh_u8 XXH3_kSecret[XXH_SECRET_DEFAULT_SIZE] = {
0xb8, 0xfe, 0x6c, 0x39, 0x23, 0xa4, 0x4b, 0xbe, 0x7c, 0x01, 0x81, 0x2c, 0xf7, 0x21, 0xad, 0x1c,
0xde, 0xd4, 0x6d, 0xe9, 0x83, 0x90, 0x97, 0xdb, 0x72, 0x40, 0xa4, 0xa4, 0xb7, 0xb3, 0x67, 0x1f,
0xcb, 0x79, 0xe6, 0x4e, 0xcc, 0xc0, 0xe5, 0x78, 0x82, 0x5a, 0xd0, 0x7d, 0xcc, 0xff, 0x72, 0x21,
0xb8, 0x08, 0x46, 0x74, 0xf7, 0x43, 0x24, 0x8e, 0xe0, 0x35, 0x90, 0xe6, 0x81, 0x3a, 0x26, 0x4c,
0x3c, 0x28, 0x52, 0xbb, 0x91, 0xc3, 0x00, 0xcb, 0x88, 0xd0, 0x65, 0x8b, 0x1b, 0x53, 0x2e, 0xa3,
0x71, 0x64, 0x48, 0x97, 0xa2, 0x0d, 0xf9, 0x4e, 0x38, 0x19, 0xef, 0x46, 0xa9, 0xde, 0xac, 0xd8,
0xa8, 0xfa, 0x76, 0x3f, 0xe3, 0x9c, 0x34, 0x3f, 0xf9, 0xdc, 0xbb, 0xc7, 0xc7, 0x0b, 0x4f, 0x1d,
0x8a, 0x51, 0xe0, 0x4b, 0xcd, 0xb4, 0x59, 0x31, 0xc8, 0x9f, 0x7e, 0xc9, 0xd9, 0x78, 0x73, 0x64,
0xea, 0xc5, 0xac, 0x83, 0x34, 0xd3, 0xeb, 0xc3, 0xc5, 0x81, 0xa0, 0xff, 0xfa, 0x13, 0x63, 0xeb,
0x17, 0x0d, 0xdd, 0x51, 0xb7, 0xf0, 0xda, 0x49, 0xd3, 0x16, 0x55, 0x26, 0x29, 0xd4, 0x68, 0x9e,
0x2b, 0x16, 0xbe, 0x58, 0x7d, 0x47, 0xa1, 0xfc, 0x8f, 0xf8, 0xb8, 0xd1, 0x7a, 0xd0, 0x31, 0xce,
0x45, 0xcb, 0x3a, 0x8f, 0x95, 0x16, 0x04, 0x28, 0xaf, 0xd7, 0xfb, 0xca, 0xbb, 0x4b, 0x40, 0x7e,
};
static const xxh_u64 PRIME_MX1 = 0x165667919E3779F9ULL; /*!< 0b0001011001010110011001111001000110011110001101110111100111111001 */
static const xxh_u64 PRIME_MX2 = 0x9FB21C651E98DF25ULL; /*!< 0b1001111110110010000111000110010100011110100110001101111100100101 */
#ifdef XXH_OLD_NAMES
# define kSecret XXH3_kSecret
#endif
#ifdef XXH_DOXYGEN
/*!
* @brief Calculates a 32-bit to 64-bit long multiply.
*
* Implemented as a macro.
*
* Wraps `__emulu` on MSVC x86 because MSVC tends to call `__allmul` when it doesn't
* need to (but it shouldn't need to anyway; it is about 7 instructions to do
* a 64x64 multiply...). Since we know that this will _always_ emit `MULL`, we
* use that instead of the normal method.
*
* If you are compiling for platforms like Thumb-1 and don't have a better option,
* you may also want to write your own long multiply routine here.
*
* @param x, y Numbers to be multiplied
* @return 64-bit product of the low 32 bits of @p x and @p y.
*/
XXH_FORCE_INLINE xxh_u64
XXH_mult32to64(xxh_u64 x, xxh_u64 y)
{
return (x & 0xFFFFFFFF) * (y & 0xFFFFFFFF);
}
#elif defined(_MSC_VER) && defined(_M_IX86)
# define XXH_mult32to64(x, y) __emulu((unsigned)(x), (unsigned)(y))
#else
/*
* Downcast + upcast is usually better than masking on older compilers like
* GCC 4.2 (especially 32-bit ones), all without affecting newer compilers.
*
* The other method, (x & 0xFFFFFFFF) * (y & 0xFFFFFFFF), will AND both operands
* and perform a full 64x64 multiply -- entirely redundant on 32-bit.
*/
# define XXH_mult32to64(x, y) ((xxh_u64)(xxh_u32)(x) * (xxh_u64)(xxh_u32)(y))
#endif
/*!
* @brief Calculates a 64->128-bit long multiply.
*
* Uses `__uint128_t` and `_umul128` if available, otherwise uses a scalar
* version.
*
* @param lhs , rhs The 64-bit integers to be multiplied
* @return The 128-bit result represented in an @ref XXH128_hash_t.
*/
static XXH128_hash_t
XXH_mult64to128(xxh_u64 lhs, xxh_u64 rhs)
{
/*
* GCC/Clang __uint128_t method.
*
* On most 64-bit targets, GCC and Clang define a __uint128_t type.
* This is usually the best way as it usually uses a native long 64-bit
* multiply, such as MULQ on x86_64 or MUL + UMULH on aarch64.
*
* Usually.
*
* On 32-bit wasm, Clang (and Emscripten) still define this type
* despite not having native arithmetic for it. This results in a slow
* compiler builtin call which calculates a full 128-bit multiply.
* In that case it is best to use the portable one.
* https://github.com/Cyan4973/xxHash/issues/211#issuecomment-515575677
*/
#if (defined(__GNUC__) || defined(__clang__)) && !defined(__wasm__) \
&& defined(__SIZEOF_INT128__) \
|| (defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
__uint128_t const product = (__uint128_t)lhs * (__uint128_t)rhs;
XXH128_hash_t r128;
r128.low64 = (xxh_u64)(product);
r128.high64 = (xxh_u64)(product >> 64);
return r128;
/*
* MSVC for x64's _umul128 method.
*
* xxh_u64 _umul128(xxh_u64 Multiplier, xxh_u64 Multiplicand, xxh_u64 *HighProduct);
*
* This compiles to single operand MUL on x64.
*/
#elif (defined(_M_X64) || defined(_M_IA64)) && !defined(_M_ARM64EC)
#ifndef _MSC_VER
# pragma intrinsic(_umul128)
#endif
xxh_u64 product_high;
xxh_u64 const product_low = _umul128(lhs, rhs, &product_high);
XXH128_hash_t r128;
r128.low64 = product_low;
r128.high64 = product_high;
return r128;
/*
* MSVC for ARM64's __umulh method.
*
* This compiles to the same MUL + UMULH as GCC/Clang's __uint128_t method.
*/
#elif defined(_M_ARM64) || defined(_M_ARM64EC)
#ifndef _MSC_VER
# pragma intrinsic(__umulh)
#endif
XXH128_hash_t r128;
r128.low64 = lhs * rhs;
r128.high64 = __umulh(lhs, rhs);
return r128;
#else
/*
* Portable scalar method. Optimized for 32-bit and 64-bit ALUs.
*
* This is a fast and simple grade school multiply, which is shown below
* with base 10 arithmetic instead of base 0x100000000.
*
* 9 3 // D2 lhs = 93
* x 7 5 // D2 rhs = 75
* ----------
* 1 5 // D2 lo_lo = (93 % 10) * (75 % 10) = 15
* 4 5 | // D2 hi_lo = (93 / 10) * (75 % 10) = 45
* 2 1 | // D2 lo_hi = (93 % 10) * (75 / 10) = 21
* + 6 3 | | // D2 hi_hi = (93 / 10) * (75 / 10) = 63
* ---------
* 2 7 | // D2 cross = (15 / 10) + (45 % 10) + 21 = 27
* + 6 7 | | // D2 upper = (27 / 10) + (45 / 10) + 63 = 67
* ---------
* 6 9 7 5 // D4 res = (27 * 10) + (15 % 10) + (67 * 100) = 6975
*
* The reasons for adding the products like this are:
* 1. It avoids manual carry tracking. Just like how
* (9 * 9) + 9 + 9 = 99, the same applies with this for UINT64_MAX.
* This avoids a lot of complexity.
*
* 2. It hints for, and on Clang, compiles to, the powerful UMAAL
* instruction available in ARM's Digital Signal Processing extension
* in 32-bit ARMv6 and later, which is shown below:
*
* void UMAAL(xxh_u32 *RdLo, xxh_u32 *RdHi, xxh_u32 Rn, xxh_u32 Rm)
* {
* xxh_u64 product = (xxh_u64)*RdLo * (xxh_u64)*RdHi + Rn + Rm;
* *RdLo = (xxh_u32)(product & 0xFFFFFFFF);
* *RdHi = (xxh_u32)(product >> 32);
* }
*
* This instruction was designed for efficient long multiplication, and
* allows this to be calculated in only 4 instructions at speeds
* comparable to some 64-bit ALUs.
*
* 3. It isn't terrible on other platforms. Usually this will be a couple
* of 32-bit ADD/ADCs.
*/
/* First calculate all of the cross products. */
xxh_u64 const lo_lo = XXH_mult32to64(lhs & 0xFFFFFFFF, rhs & 0xFFFFFFFF);
xxh_u64 const hi_lo = XXH_mult32to64(lhs >> 32, rhs & 0xFFFFFFFF);
xxh_u64 const lo_hi = XXH_mult32to64(lhs & 0xFFFFFFFF, rhs >> 32);
xxh_u64 const hi_hi = XXH_mult32to64(lhs >> 32, rhs >> 32);
/* Now add the products together. These will never overflow. */
xxh_u64 const cross = (lo_lo >> 32) + (hi_lo & 0xFFFFFFFF) + lo_hi;
xxh_u64 const upper = (hi_lo >> 32) + (cross >> 32) + hi_hi;
xxh_u64 const lower = (cross << 32) | (lo_lo & 0xFFFFFFFF);
XXH128_hash_t r128;
r128.low64 = lower;
r128.high64 = upper;
return r128;
#endif
}
/*!
* @brief Calculates a 64-bit to 128-bit multiply, then XOR folds it.
*
* The reason for the separate function is to prevent passing too many structs
* around by value. This will hopefully inline the multiply, but we don't force it.
*
* @param lhs , rhs The 64-bit integers to multiply
* @return The low 64 bits of the product XOR'd by the high 64 bits.
* @see XXH_mult64to128()
*/
static xxh_u64
XXH3_mul128_fold64(xxh_u64 lhs, xxh_u64 rhs)
{
XXH128_hash_t product = XXH_mult64to128(lhs, rhs);
return product.low64 ^ product.high64;
}
/*! Seems to produce slightly better code on GCC for some reason. */
XXH_FORCE_INLINE XXH_CONSTF xxh_u64 XXH_xorshift64(xxh_u64 v64, int shift)
{
XXH_ASSERT(0 <= shift && shift < 64);
return v64 ^ (v64 >> shift);
}
/*
* This is a fast avalanche stage,
* suitable when input bits are already partially mixed
*/
static XXH64_hash_t XXH3_avalanche(xxh_u64 h64)
{
h64 = XXH_xorshift64(h64, 37);
h64 *= PRIME_MX1;
h64 = XXH_xorshift64(h64, 32);
return h64;
}
/*
* This is a stronger avalanche,
* inspired by Pelle Evensen's rrmxmx
* preferable when input has not been previously mixed
*/
static XXH64_hash_t XXH3_rrmxmx(xxh_u64 h64, xxh_u64 len)
{
/* this mix is inspired by Pelle Evensen's rrmxmx */
h64 ^= XXH_rotl64(h64, 49) ^ XXH_rotl64(h64, 24);
h64 *= PRIME_MX2;
h64 ^= (h64 >> 35) + len ;
h64 *= PRIME_MX2;
return XXH_xorshift64(h64, 28);
}
/* ==========================================
* Short keys
* ==========================================
* One of the shortcomings of XXH32 and XXH64 was that their performance was
* sub-optimal on short lengths. They used an iterative algorithm which strongly
* favored lengths that were a multiple of 4 or 8.
*
* Instead of iterating over individual inputs, we use a set of single shot
* functions which piece together a range of lengths and operate in constant time.
*
* Additionally, the number of multiplies has been significantly reduced. This
* reduces latency, especially when emulating 64-bit multiplies on 32-bit.
*
* Depending on the platform, this may or may not be faster than XXH32, but it
* is almost guaranteed to be faster than XXH64.
*/
/*
* At very short lengths, there isn't enough input to fully hide secrets, or use
* the entire secret.
*
* There is also only a limited amount of mixing we can do before significantly
* impacting performance.
*
* Therefore, we use different sections of the secret and always mix two secret
* samples with an XOR. This should have no effect on performance on the
* seedless or withSeed variants because everything _should_ be constant folded
* by modern compilers.
*
* The XOR mixing hides individual parts of the secret and increases entropy.
*
* This adds an extra layer of strength for custom secrets.
*/
XXH_FORCE_INLINE XXH_PUREF XXH64_hash_t
XXH3_len_1to3_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(input != NULL);
XXH_ASSERT(1 <= len && len <= 3);
XXH_ASSERT(secret != NULL);
/*
* len = 1: combined = { input[0], 0x01, input[0], input[0] }
* len = 2: combined = { input[1], 0x02, input[0], input[1] }
* len = 3: combined = { input[2], 0x03, input[0], input[1] }
*/
{ xxh_u8 const c1 = input[0];
xxh_u8 const c2 = input[len >> 1];
xxh_u8 const c3 = input[len - 1];
xxh_u32 const combined = ((xxh_u32)c1 << 16) | ((xxh_u32)c2 << 24)
| ((xxh_u32)c3 << 0) | ((xxh_u32)len << 8);
xxh_u64 const bitflip = (XXH_readLE32(secret) ^ XXH_readLE32(secret+4)) + seed;
xxh_u64 const keyed = (xxh_u64)combined ^ bitflip;
return XXH64_avalanche(keyed);
}
}
XXH_FORCE_INLINE XXH_PUREF XXH64_hash_t
XXH3_len_4to8_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(input != NULL);
XXH_ASSERT(secret != NULL);
XXH_ASSERT(4 <= len && len <= 8);
seed ^= (xxh_u64)XXH_swap32((xxh_u32)seed) << 32;
{ xxh_u32 const input1 = XXH_readLE32(input);
xxh_u32 const input2 = XXH_readLE32(input + len - 4);
xxh_u64 const bitflip = (XXH_readLE64(secret+8) ^ XXH_readLE64(secret+16)) - seed;
xxh_u64 const input64 = input2 + (((xxh_u64)input1) << 32);
xxh_u64 const keyed = input64 ^ bitflip;
return XXH3_rrmxmx(keyed, len);
}
}
XXH_FORCE_INLINE XXH_PUREF XXH64_hash_t
XXH3_len_9to16_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(input != NULL);
XXH_ASSERT(secret != NULL);
XXH_ASSERT(9 <= len && len <= 16);
{ xxh_u64 const bitflip1 = (XXH_readLE64(secret+24) ^ XXH_readLE64(secret+32)) + seed;
xxh_u64 const bitflip2 = (XXH_readLE64(secret+40) ^ XXH_readLE64(secret+48)) - seed;
xxh_u64 const input_lo = XXH_readLE64(input) ^ bitflip1;
xxh_u64 const input_hi = XXH_readLE64(input + len - 8) ^ bitflip2;
xxh_u64 const acc = len
+ XXH_swap64(input_lo) + input_hi
+ XXH3_mul128_fold64(input_lo, input_hi);
return XXH3_avalanche(acc);
}
}
XXH_FORCE_INLINE XXH_PUREF XXH64_hash_t
XXH3_len_0to16_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(len <= 16);
{ if (XXH_likely(len > 8)) return XXH3_len_9to16_64b(input, len, secret, seed);
if (XXH_likely(len >= 4)) return XXH3_len_4to8_64b(input, len, secret, seed);
if (len) return XXH3_len_1to3_64b(input, len, secret, seed);
return XXH64_avalanche(seed ^ (XXH_readLE64(secret+56) ^ XXH_readLE64(secret+64)));
}
}
/*
* DISCLAIMER: There are known *seed-dependent* multicollisions here due to
* multiplication by zero, affecting hashes of lengths 17 to 240.
*
* However, they are very unlikely.
*
* Keep this in mind when using the unseeded XXH3_64bits() variant: As with all
* unseeded non-cryptographic hashes, it does not attempt to defend itself
* against specially crafted inputs, only random inputs.
*
* Compared to classic UMAC where a 1 in 2^31 chance of 4 consecutive bytes
* cancelling out the secret is taken an arbitrary number of times (addressed
* in XXH3_accumulate_512), this collision is very unlikely with random inputs
* and/or proper seeding:
*
* This only has a 1 in 2^63 chance of 8 consecutive bytes cancelling out, in a
* function that is only called up to 16 times per hash with up to 240 bytes of
* input.
*
* This is not too bad for a non-cryptographic hash function, especially with
* only 64 bit outputs.
*
* The 128-bit variant (which trades some speed for strength) is NOT affected
* by this, although it is always a good idea to use a proper seed if you care
* about strength.
*/
XXH_FORCE_INLINE xxh_u64 XXH3_mix16B(const xxh_u8* XXH_RESTRICT input,
const xxh_u8* XXH_RESTRICT secret, xxh_u64 seed64)
{
#if defined(__GNUC__) && !defined(__clang__) /* GCC, not Clang */ \
&& defined(__i386__) && defined(__SSE2__) /* x86 + SSE2 */ \
&& !defined(XXH_ENABLE_AUTOVECTORIZE) /* Define to disable like XXH32 hack */
/*
* UGLY HACK:
* GCC for x86 tends to autovectorize the 128-bit multiply, resulting in
* slower code.
*
* By forcing seed64 into a register, we disrupt the cost model and
* cause it to scalarize. See `XXH32_round()`
*
* FIXME: Clang's output is still _much_ faster -- On an AMD Ryzen 3600,
* XXH3_64bits @ len=240 runs at 4.6 GB/s with Clang 9, but 3.3 GB/s on
* GCC 9.2, despite both emitting scalar code.
*
* GCC generates much better scalar code than Clang for the rest of XXH3,
* which is why finding a more optimal codepath is of interest.
*/
XXH_COMPILER_GUARD(seed64);
#endif
{ xxh_u64 const input_lo = XXH_readLE64(input);
xxh_u64 const input_hi = XXH_readLE64(input+8);
return XXH3_mul128_fold64(
input_lo ^ (XXH_readLE64(secret) + seed64),
input_hi ^ (XXH_readLE64(secret+8) - seed64)
);
}
}
/* For mid range keys, XXH3 uses a Mum-hash variant. */
XXH_FORCE_INLINE XXH_PUREF XXH64_hash_t
XXH3_len_17to128_64b(const xxh_u8* XXH_RESTRICT input, size_t len,
const xxh_u8* XXH_RESTRICT secret, size_t secretSize,
XXH64_hash_t seed)
{
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;
XXH_ASSERT(16 < len && len <= 128);
{ xxh_u64 acc = len * XXH_PRIME64_1;
#if XXH_SIZE_OPT >= 1
/* Smaller and cleaner, but slightly slower. */
unsigned int i = (unsigned int)(len - 1) / 32;
do {
acc += XXH3_mix16B(input+16 * i, secret+32*i, seed);
acc += XXH3_mix16B(input+len-16*(i+1), secret+32*i+16, seed);
} while (i-- != 0);
#else
if (len > 32) {
if (len > 64) {
if (len > 96) {
acc += XXH3_mix16B(input+48, secret+96, seed);
acc += XXH3_mix16B(input+len-64, secret+112, seed);
}
acc += XXH3_mix16B(input+32, secret+64, seed);
acc += XXH3_mix16B(input+len-48, secret+80, seed);
}
acc += XXH3_mix16B(input+16, secret+32, seed);
acc += XXH3_mix16B(input+len-32, secret+48, seed);
}
acc += XXH3_mix16B(input+0, secret+0, seed);
acc += XXH3_mix16B(input+len-16, secret+16, seed);
#endif
return XXH3_avalanche(acc);
}
}
/*!
* @brief Maximum size of "short" key in bytes.
*/
#define XXH3_MIDSIZE_MAX 240
XXH_NO_INLINE XXH_PUREF XXH64_hash_t
XXH3_len_129to240_64b(const xxh_u8* XXH_RESTRICT input, size_t len,
const xxh_u8* XXH_RESTRICT secret, size_t secretSize,
XXH64_hash_t seed)
{
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;
XXH_ASSERT(128 < len && len <= XXH3_MIDSIZE_MAX);
#define XXH3_MIDSIZE_STARTOFFSET 3
#define XXH3_MIDSIZE_LASTOFFSET 17
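/* XXH3_MIDSIZE_STARTOFFSET: extra secret offset applied to the rounds past the first
* 128 bytes of input; XXH3_MIDSIZE_LASTOFFSET: distance from the end of the first
* XXH3_SECRET_SIZE_MIN bytes of the secret, used for the final 16-byte mix. */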
{ xxh_u64 acc = len * XXH_PRIME64_1;
xxh_u64 acc_end;
unsigned int const nbRounds = (unsigned int)len / 16;
unsigned int i;
XXH_ASSERT(128 < len && len <= XXH3_MIDSIZE_MAX);
for (i=0; i<8; i++) {
acc += XXH3_mix16B(input+(16*i), secret+(16*i), seed);
}
/* last bytes */
acc_end = XXH3_mix16B(input + len - 16, secret + XXH3_SECRET_SIZE_MIN - XXH3_MIDSIZE_LASTOFFSET, seed);
XXH_ASSERT(nbRounds >= 8);
acc = XXH3_avalanche(acc);
#if defined(__clang__) /* Clang */ \
&& (defined(__ARM_NEON) || defined(__ARM_NEON__)) /* NEON */ \
&& !defined(XXH_ENABLE_AUTOVECTORIZE) /* Define to disable */
/*
* UGLY HACK:
* Clang for ARMv7-A tries to vectorize this loop, similar to GCC x86.
* Everywhere else, it uses scalar code.
*
* For 64->128-bit multiplies, even if the NEON was 100% optimal, it
* would still be slower than UMAAL (see XXH_mult64to128).
*
* Unfortunately, Clang doesn't handle the long multiplies properly and
* converts them to the nonexistent "vmulq_u64" intrinsic, which is then
* scalarized into an ugly mess of VMOV.32 instructions.
*
* This mess is difficult to avoid without turning autovectorization
* off completely, but such cases are usually relatively minor and/or not
* worth fixing.
*
* This loop is the easiest to fix, as unlike XXH32, this pragma
* _actually works_ because it is a loop vectorization instead of an
* SLP vectorization.
*/
#pragma clang loop vectorize(disable)
#endif
for (i=8 ; i < nbRounds; i++) {
/*
* Prevents clang from unrolling the acc loop and interleaving it with this one.
*/
XXH_COMPILER_GUARD(acc);
acc_end += XXH3_mix16B(input+(16*i), secret+(16*(i-8)) + XXH3_MIDSIZE_STARTOFFSET, seed);
}
return XXH3_avalanche(acc + acc_end);
}
}
/* ======= Long Keys ======= */
#define XXH_STRIPE_LEN 64
#define XXH_SECRET_CONSUME_RATE 8 /* nb of secret bytes consumed at each accumulation */
#define XXH_ACC_NB (XXH_STRIPE_LEN / sizeof(xxh_u64))
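/* With a 64-byte stripe, XXH_ACC_NB is 8: one 64-bit accumulator per 8 input bytes of the stripe. */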
#ifdef XXH_OLD_NAMES
# define STRIPE_LEN XXH_STRIPE_LEN
# define ACC_NB XXH_ACC_NB
#endif
#ifndef XXH_PREFETCH_DIST
# ifdef __clang__
# define XXH_PREFETCH_DIST 320
# else
# if (XXH_VECTOR == XXH_AVX512)
# define XXH_PREFETCH_DIST 512
# else
# define XXH_PREFETCH_DIST 384
# endif
# endif /* __clang__ */
#endif /* XXH_PREFETCH_DIST */
/*
* These macros are to generate an XXH3_accumulate() function.
* The two arguments select the name suffix and target attribute.
*
* The name of this symbol is XXH3_accumulate_<name>() and it calls
* XXH3_accumulate_512_<name>().
*
* It may be useful to hand implement this function if the compiler fails to
* optimize the inline function.
*/
#define XXH3_ACCUMULATE_TEMPLATE(name) \
void \
XXH3_accumulate_##name(xxh_u64* XXH_RESTRICT acc, \
const xxh_u8* XXH_RESTRICT input, \
const xxh_u8* XXH_RESTRICT secret, \
size_t nbStripes) \
{ \
size_t n; \
for (n = 0; n < nbStripes; n++ ) { \
const xxh_u8* const in = input + n*XXH_STRIPE_LEN; \
XXH_PREFETCH(in + XXH_PREFETCH_DIST); \
XXH3_accumulate_512_##name( \
acc, \
in, \
secret + n*XXH_SECRET_CONSUME_RATE); \
} \
}
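/*
* For instance, XXH3_ACCUMULATE_TEMPLATE(scalar), together with the attributes
* placed before it, defines XXH3_accumulate_scalar(), which prefetches ahead
* and calls XXH3_accumulate_512_scalar() once per 64-byte stripe.
*/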
XXH_FORCE_INLINE void XXH_writeLE64(void* dst, xxh_u64 v64)
{
if (!XXH_CPU_LITTLE_ENDIAN) v64 = XXH_swap64(v64);
XXH_memcpy(dst, &v64, sizeof(v64));
}
/* Several intrinsic functions below are supposed to accept __int64 as an argument,
* as documented in https://software.intel.com/sites/landingpage/IntrinsicsGuide/ .
* However, several environments do not define the __int64 type,
* requiring a workaround.
*/
#if !defined (__VMS) \
&& (defined (__cplusplus) \
|| (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
typedef int64_t xxh_i64;
#else
/* the following type must have a width of 64-bit */
typedef long long xxh_i64;
#endif
/*
* XXH3_accumulate_512 is the tightest loop for long inputs, and it is the most optimized.
*
* It is a hardened version of UMAC, based off of FARSH's implementation.
*
* This was chosen because it adapts quite well to 32-bit, 64-bit, and SIMD
* implementations, and it is ridiculously fast.
*
* We harden it by mixing the original input into the accumulators as well as the product.
*
* This means that in the (relatively likely) case of a multiply by zero, the
* original input is preserved.
*
* On 128-bit inputs, we swap 64-bit pairs when we add the input to improve
* cross-pollination, as otherwise the upper and lower halves would be
* essentially independent.
*
* This doesn't matter on 64-bit hashes since they all get merged together in
* the end, so we skip the extra step.
*
* Both XXH3_64bits and XXH3_128bits use this subroutine.
*/
#if (XXH_VECTOR == XXH_AVX512) \
|| (defined(XXH_DISPATCH_AVX512) && XXH_DISPATCH_AVX512 != 0)
#ifndef XXH_TARGET_AVX512
# define XXH_TARGET_AVX512 /* disable attribute target */
#endif
XXH_FORCE_INLINE XXH_TARGET_AVX512 void
XXH3_accumulate_512_avx512(void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
__m512i* const xacc = (__m512i *) acc;
XXH_ASSERT((((size_t)acc) & 63) == 0);
XXH_STATIC_ASSERT(XXH_STRIPE_LEN == sizeof(__m512i));
{
/* data_vec = input[0]; */
__m512i const data_vec = _mm512_loadu_si512 (input);
/* key_vec = secret[0]; */
__m512i const key_vec = _mm512_loadu_si512 (secret);
/* data_key = data_vec ^ key_vec; */
__m512i const data_key = _mm512_xor_si512 (data_vec, key_vec);
/* data_key_lo = data_key >> 32; */
__m512i const data_key_lo = _mm512_srli_epi64 (data_key, 32);
/* product = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */
__m512i const product = _mm512_mul_epu32 (data_key, data_key_lo);
/* xacc[0] += swap(data_vec); */
__m512i const data_swap = _mm512_shuffle_epi32(data_vec, (_MM_PERM_ENUM)_MM_SHUFFLE(1, 0, 3, 2));
__m512i const sum = _mm512_add_epi64(*xacc, data_swap);
/* xacc[0] += product; */
*xacc = _mm512_add_epi64(product, sum);
}
}
XXH_FORCE_INLINE XXH_TARGET_AVX512 XXH3_ACCUMULATE_TEMPLATE(avx512)
/*
* XXH3_scrambleAcc: Scrambles the accumulators to improve mixing.
*
* Multiplication isn't perfect, as explained by Google in HighwayHash:
*
* // Multiplication mixes/scrambles bytes 0-7 of the 64-bit result to
* // varying degrees. In descending order of goodness, bytes
* // 3 4 2 5 1 6 0 7 have quality 228 224 164 160 100 96 36 32.
* // As expected, the upper and lower bytes are much worse.
*
* Source: https://github.com/google/highwayhash/blob/0aaf66b/highwayhash/hh_avx2.h#L291
*
* Since our algorithm uses a pseudorandom secret to add some variance into the
* mix, we don't need to (or want to) mix as often or as much as HighwayHash does.
*
* This isn't as tight as XXH3_accumulate, but still written in SIMD to avoid
* extraction.
*
* Both XXH3_64bits and XXH3_128bits use this subroutine.
*/
XXH_FORCE_INLINE XXH_TARGET_AVX512 void
XXH3_scrambleAcc_avx512(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 63) == 0);
XXH_STATIC_ASSERT(XXH_STRIPE_LEN == sizeof(__m512i));
{ __m512i* const xacc = (__m512i*) acc;
const __m512i prime32 = _mm512_set1_epi32((int)XXH_PRIME32_1);
/* xacc[0] ^= (xacc[0] >> 47) */
__m512i const acc_vec = *xacc;
__m512i const shifted = _mm512_srli_epi64 (acc_vec, 47);
/* xacc[0] ^= secret; */
__m512i const key_vec = _mm512_loadu_si512 (secret);
__m512i const data_key = _mm512_ternarylogic_epi32(key_vec, acc_vec, shifted, 0x96 /* key_vec ^ acc_vec ^ shifted */);
/* xacc[0] *= XXH_PRIME32_1; */
__m512i const data_key_hi = _mm512_srli_epi64 (data_key, 32);
__m512i const prod_lo = _mm512_mul_epu32 (data_key, prime32);
__m512i const prod_hi = _mm512_mul_epu32 (data_key_hi, prime32);
*xacc = _mm512_add_epi64(prod_lo, _mm512_slli_epi64(prod_hi, 32));
}
}
XXH_FORCE_INLINE XXH_TARGET_AVX512 void
XXH3_initCustomSecret_avx512(void* XXH_RESTRICT customSecret, xxh_u64 seed64)
{
XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 63) == 0);
XXH_STATIC_ASSERT(XXH_SEC_ALIGN == 64);
XXH_ASSERT(((size_t)customSecret & 63) == 0);
(void)(&XXH_writeLE64);
{ int const nbRounds = XXH_SECRET_DEFAULT_SIZE / sizeof(__m512i);
__m512i const seed_pos = _mm512_set1_epi64((xxh_i64)seed64);
__m512i const seed = _mm512_mask_sub_epi64(seed_pos, 0xAA, _mm512_set1_epi8(0), seed_pos);
const __m512i* const src = (const __m512i*) ((const void*) XXH3_kSecret);
__m512i* const dest = ( __m512i*) customSecret;
int i;
XXH_ASSERT(((size_t)src & 63) == 0); /* control alignment */
XXH_ASSERT(((size_t)dest & 63) == 0);
for (i=0; i < nbRounds; ++i) {
dest[i] = _mm512_add_epi64(_mm512_load_si512(src + i), seed);
} }
}
#endif
#if (XXH_VECTOR == XXH_AVX2) \
|| (defined(XXH_DISPATCH_AVX2) && XXH_DISPATCH_AVX2 != 0)
#ifndef XXH_TARGET_AVX2
# define XXH_TARGET_AVX2 /* disable attribute target */
#endif
XXH_FORCE_INLINE XXH_TARGET_AVX2 void
XXH3_accumulate_512_avx2( void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 31) == 0);
{ __m256i* const xacc = (__m256i *) acc;
/* Unaligned. This is mainly for pointer arithmetic, and because
* _mm256_loadu_si256 requires a const __m256i * pointer for some reason. */
const __m256i* const xinput = (const __m256i *) input;
/* Unaligned. This is mainly for pointer arithmetic, and because
* _mm256_loadu_si256 requires a const __m256i * pointer for some reason. */
const __m256i* const xsecret = (const __m256i *) secret;
size_t i;
for (i=0; i < XXH_STRIPE_LEN/sizeof(__m256i); i++) {
/* data_vec = xinput[i]; */
__m256i const data_vec = _mm256_loadu_si256 (xinput+i);
/* key_vec = xsecret[i]; */
__m256i const key_vec = _mm256_loadu_si256 (xsecret+i);
/* data_key = data_vec ^ key_vec; */
__m256i const data_key = _mm256_xor_si256 (data_vec, key_vec);
/* data_key_lo = data_key >> 32; */
__m256i const data_key_lo = _mm256_srli_epi64 (data_key, 32);
/* product = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */
__m256i const product = _mm256_mul_epu32 (data_key, data_key_lo);
/* xacc[i] += swap(data_vec); */
__m256i const data_swap = _mm256_shuffle_epi32(data_vec, _MM_SHUFFLE(1, 0, 3, 2));
__m256i const sum = _mm256_add_epi64(xacc[i], data_swap);
/* xacc[i] += product; */
xacc[i] = _mm256_add_epi64(product, sum);
} }
}
XXH_FORCE_INLINE XXH_TARGET_AVX2 XXH3_ACCUMULATE_TEMPLATE(avx2)
XXH_FORCE_INLINE XXH_TARGET_AVX2 void
XXH3_scrambleAcc_avx2(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 31) == 0);
{ __m256i* const xacc = (__m256i*) acc;
/* Unaligned. This is mainly for pointer arithmetic, and because
* _mm256_loadu_si256 requires a const __m256i * pointer for some reason. */
const __m256i* const xsecret = (const __m256i *) secret;
const __m256i prime32 = _mm256_set1_epi32((int)XXH_PRIME32_1);
size_t i;
for (i=0; i < XXH_STRIPE_LEN/sizeof(__m256i); i++) {
/* xacc[i] ^= (xacc[i] >> 47) */
__m256i const acc_vec = xacc[i];
__m256i const shifted = _mm256_srli_epi64 (acc_vec, 47);
__m256i const data_vec = _mm256_xor_si256 (acc_vec, shifted);
/* xacc[i] ^= xsecret; */
__m256i const key_vec = _mm256_loadu_si256 (xsecret+i);
__m256i const data_key = _mm256_xor_si256 (data_vec, key_vec);
/* xacc[i] *= XXH_PRIME32_1; */
__m256i const data_key_hi = _mm256_srli_epi64 (data_key, 32);
__m256i const prod_lo = _mm256_mul_epu32 (data_key, prime32);
__m256i const prod_hi = _mm256_mul_epu32 (data_key_hi, prime32);
xacc[i] = _mm256_add_epi64(prod_lo, _mm256_slli_epi64(prod_hi, 32));
}
}
}
XXH_FORCE_INLINE XXH_TARGET_AVX2 void XXH3_initCustomSecret_avx2(void* XXH_RESTRICT customSecret, xxh_u64 seed64)
{
XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 31) == 0);
XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE / sizeof(__m256i)) == 6);
XXH_STATIC_ASSERT(XXH_SEC_ALIGN <= 64);
(void)(&XXH_writeLE64);
XXH_PREFETCH(customSecret);
{ __m256i const seed = _mm256_set_epi64x((xxh_i64)(0U - seed64), (xxh_i64)seed64, (xxh_i64)(0U - seed64), (xxh_i64)seed64);
const __m256i* const src = (const __m256i*) ((const void*) XXH3_kSecret);
__m256i* dest = ( __m256i*) customSecret;
# if defined(__GNUC__) || defined(__clang__)
/*
* On GCC & Clang, marking 'dest' as modified will cause the compiler to:
* - not extract the secret from SSE registers in the internal loop
* - use fewer common registers, and avoid pushing these registers onto the stack
*/
XXH_COMPILER_GUARD(dest);
# endif
XXH_ASSERT(((size_t)src & 31) == 0); /* control alignment */
XXH_ASSERT(((size_t)dest & 31) == 0);
/* GCC -O2 needs the loop unrolled manually */
dest[0] = _mm256_add_epi64(_mm256_load_si256(src+0), seed);
dest[1] = _mm256_add_epi64(_mm256_load_si256(src+1), seed);
dest[2] = _mm256_add_epi64(_mm256_load_si256(src+2), seed);
dest[3] = _mm256_add_epi64(_mm256_load_si256(src+3), seed);
dest[4] = _mm256_add_epi64(_mm256_load_si256(src+4), seed);
dest[5] = _mm256_add_epi64(_mm256_load_si256(src+5), seed);
}
}
#endif
/* x86dispatch always generates SSE2 */
#if (XXH_VECTOR == XXH_SSE2) || defined(XXH_X86DISPATCH)
#ifndef XXH_TARGET_SSE2
# define XXH_TARGET_SSE2 /* disable attribute target */
#endif
XXH_FORCE_INLINE XXH_TARGET_SSE2 void
XXH3_accumulate_512_sse2( void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
/* SSE2 is just a half-scale version of the AVX2 version. */
XXH_ASSERT((((size_t)acc) & 15) == 0);
{ __m128i* const xacc = (__m128i *) acc;
/* Unaligned. This is mainly for pointer arithmetic, and because
* _mm_loadu_si128 requires a const __m128i * pointer for some reason. */
const __m128i* const xinput = (const __m128i *) input;
/* Unaligned. This is mainly for pointer arithmetic, and because
* _mm_loadu_si128 requires a const __m128i * pointer for some reason. */
const __m128i* const xsecret = (const __m128i *) secret;
size_t i;
for (i=0; i < XXH_STRIPE_LEN/sizeof(__m128i); i++) {
/* data_vec = xinput[i]; */
__m128i const data_vec = _mm_loadu_si128 (xinput+i);
/* key_vec = xsecret[i]; */
__m128i const key_vec = _mm_loadu_si128 (xsecret+i);
/* data_key = data_vec ^ key_vec; */
__m128i const data_key = _mm_xor_si128 (data_vec, key_vec);
/* data_key_lo = data_key >> 32; */
__m128i const data_key_lo = _mm_shuffle_epi32 (data_key, _MM_SHUFFLE(0, 3, 0, 1));
/* product = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */
__m128i const product = _mm_mul_epu32 (data_key, data_key_lo);
/* xacc[i] += swap(data_vec); */
__m128i const data_swap = _mm_shuffle_epi32(data_vec, _MM_SHUFFLE(1,0,3,2));
__m128i const sum = _mm_add_epi64(xacc[i], data_swap);
/* xacc[i] += product; */
xacc[i] = _mm_add_epi64(product, sum);
} }
}
XXH_FORCE_INLINE XXH_TARGET_SSE2 XXH3_ACCUMULATE_TEMPLATE(sse2)
XXH_FORCE_INLINE XXH_TARGET_SSE2 void
XXH3_scrambleAcc_sse2(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 15) == 0);
{ __m128i* const xacc = (__m128i*) acc;
/* Unaligned. This is mainly for pointer arithmetic, and because
* _mm_loadu_si128 requires a const __m128i * pointer for some reason. */
const __m128i* const xsecret = (const __m128i *) secret;
const __m128i prime32 = _mm_set1_epi32((int)XXH_PRIME32_1);
size_t i;
for (i=0; i < XXH_STRIPE_LEN/sizeof(__m128i); i++) {
/* xacc[i] ^= (xacc[i] >> 47) */
__m128i const acc_vec = xacc[i];
__m128i const shifted = _mm_srli_epi64 (acc_vec, 47);
__m128i const data_vec = _mm_xor_si128 (acc_vec, shifted);
/* xacc[i] ^= xsecret[i]; */
__m128i const key_vec = _mm_loadu_si128 (xsecret+i);
__m128i const data_key = _mm_xor_si128 (data_vec, key_vec);
/* xacc[i] *= XXH_PRIME32_1; */
__m128i const data_key_hi = _mm_shuffle_epi32 (data_key, _MM_SHUFFLE(0, 3, 0, 1));
__m128i const prod_lo = _mm_mul_epu32 (data_key, prime32);
__m128i const prod_hi = _mm_mul_epu32 (data_key_hi, prime32);
xacc[i] = _mm_add_epi64(prod_lo, _mm_slli_epi64(prod_hi, 32));
}
}
}
XXH_FORCE_INLINE XXH_TARGET_SSE2 void XXH3_initCustomSecret_sse2(void* XXH_RESTRICT customSecret, xxh_u64 seed64)
{
XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 15) == 0);
(void)(&XXH_writeLE64);
{ int const nbRounds = XXH_SECRET_DEFAULT_SIZE / sizeof(__m128i);
# if defined(_MSC_VER) && defined(_M_IX86) && _MSC_VER < 1900
/* MSVC 32bit mode does not support _mm_set_epi64x before 2015 */
XXH_ALIGN(16) const xxh_i64 seed64x2[2] = { (xxh_i64)seed64, (xxh_i64)(0U - seed64) };
__m128i const seed = _mm_load_si128((__m128i const*)seed64x2);
# else
__m128i const seed = _mm_set_epi64x((xxh_i64)(0U - seed64), (xxh_i64)seed64);
# endif
int i;
const void* const src16 = XXH3_kSecret;
__m128i* dst16 = (__m128i*) customSecret;
# if defined(__GNUC__) || defined(__clang__)
/*
* On GCC & Clang, marking 'dst16' as modified will cause the compiler to:
* - not extract the secret from SSE registers in the internal loop
* - use fewer common registers, and avoid pushing these registers onto the stack
*/
XXH_COMPILER_GUARD(dst16);
# endif
XXH_ASSERT(((size_t)src16 & 15) == 0); /* control alignment */
XXH_ASSERT(((size_t)dst16 & 15) == 0);
for (i=0; i < nbRounds; ++i) {
dst16[i] = _mm_add_epi64(_mm_load_si128((const __m128i *)src16+i), seed);
} }
}
#endif
#if (XXH_VECTOR == XXH_NEON)
/* forward declarations for the scalar routines */
XXH_FORCE_INLINE void
XXH3_scalarRound(void* XXH_RESTRICT acc, void const* XXH_RESTRICT input,
void const* XXH_RESTRICT secret, size_t lane);
XXH_FORCE_INLINE void
XXH3_scalarScrambleRound(void* XXH_RESTRICT acc,
void const* XXH_RESTRICT secret, size_t lane);
/*!
* @internal
* @brief The bulk processing loop for NEON and WASM SIMD128.
*
* The NEON code path is actually partially scalar when running on AArch64. This
* is to optimize the pipelining and can have up to 15% speedup depending on the
* CPU, and it also mitigates some GCC codegen issues.
*
* @see XXH3_NEON_LANES for configuring this and details about this optimization.
*
* NEON's 32-bit to 64-bit long multiply takes a half vector of 32-bit
* integers, unlike the other platforms, which mask full 64-bit vectors,
* so the setup is more complicated than just shifting right.
*
* Additionally, there is an optimization for 4 lanes at once noted below.
*
* Since, as stated, the optimal number of lanes for Cortexes is 6,
* there needs to be *three* versions of the accumulate operation used
* for the remaining 2 lanes.
*
* WASM's SIMD128 uses SIMDe's arm_neon.h polyfill because the intrinsics overlap
* nearly perfectly.
*/
XXH_FORCE_INLINE void
XXH3_accumulate_512_neon( void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 15) == 0);
XXH_STATIC_ASSERT(XXH3_NEON_LANES > 0 && XXH3_NEON_LANES <= XXH_ACC_NB && XXH3_NEON_LANES % 2 == 0);
{ /* GCC for darwin arm64 does not like aliasing here */
xxh_aliasing_uint64x2_t* const xacc = (xxh_aliasing_uint64x2_t*) acc;
/* We don't use a uint32x4_t pointer because it causes bus errors on ARMv7. */
uint8_t const* xinput = (const uint8_t *) input;
uint8_t const* xsecret = (const uint8_t *) secret;
size_t i;
#ifdef __wasm_simd128__
/*
* On WASM SIMD128, Clang emits direct address loads when XXH3_kSecret
* is constant propagated, which results in it being converted to this
* inside the loop:
*
* a = v128.load(XXH3_kSecret + 0 + $secret_offset, offset = 0)
* b = v128.load(XXH3_kSecret + 16 + $secret_offset, offset = 0)
* ...
*
* This requires a full 32-bit address immediate (and therefore a 6 byte
* instruction) as well as an add for each offset.
*
* Putting an asm guard prevents it from folding (at the cost of losing
* the alignment hint), and uses the free offset in `v128.load` instead
* of adding secret_offset each time which overall reduces code size by
* about a kilobyte and improves performance.
*/
XXH_COMPILER_GUARD(xsecret);
#endif
/* Scalar lanes use the normal scalarRound routine */
for (i = XXH3_NEON_LANES; i < XXH_ACC_NB; i++) {
XXH3_scalarRound(acc, input, secret, i);
}
i = 0;
/* 4 NEON lanes at a time. */
for (; i+1 < XXH3_NEON_LANES / 2; i+=2) {
/* data_vec = xinput[i]; */
uint64x2_t data_vec_1 = XXH_vld1q_u64(xinput + (i * 16));
uint64x2_t data_vec_2 = XXH_vld1q_u64(xinput + ((i+1) * 16));
/* key_vec = xsecret[i]; */
uint64x2_t key_vec_1 = XXH_vld1q_u64(xsecret + (i * 16));
uint64x2_t key_vec_2 = XXH_vld1q_u64(xsecret + ((i+1) * 16));
/* data_swap = swap(data_vec) */
uint64x2_t data_swap_1 = vextq_u64(data_vec_1, data_vec_1, 1);
uint64x2_t data_swap_2 = vextq_u64(data_vec_2, data_vec_2, 1);
/* data_key = data_vec ^ key_vec; */
uint64x2_t data_key_1 = veorq_u64(data_vec_1, key_vec_1);
uint64x2_t data_key_2 = veorq_u64(data_vec_2, key_vec_2);
/*
* If we reinterpret the 64x2 vectors as 32x4 vectors, we can use a
* de-interleave operation for 4 lanes in 1 step with `vuzpq_u32` to
* get one vector with the low 32 bits of each lane, and one vector
* with the high 32 bits of each lane.
*
* The intrinsic returns a double vector because the original ARMv7-a
* instruction modified both arguments in place. AArch64 and SIMD128 emit
* two instructions from this intrinsic.
*
* [ dk11L | dk11H | dk12L | dk12H ] -> [ dk11L | dk12L | dk21L | dk22L ]
* [ dk21L | dk21H | dk22L | dk22H ] -> [ dk11H | dk12H | dk21H | dk22H ]
*/
uint32x4x2_t unzipped = vuzpq_u32(
vreinterpretq_u32_u64(data_key_1),
vreinterpretq_u32_u64(data_key_2)
);
/* data_key_lo = data_key & 0xFFFFFFFF */
uint32x4_t data_key_lo = unzipped.val[0];
/* data_key_hi = data_key >> 32 */
uint32x4_t data_key_hi = unzipped.val[1];
/*
* Then, we can split the vectors horizontally and multiply; as with most
* widening intrinsics, there is a variant that works on the high half vectors
* for free on AArch64. A similar instruction is available on SIMD128.
*
* sum = data_swap + (u64x2) data_key_lo * (u64x2) data_key_hi
*/
uint64x2_t sum_1 = XXH_vmlal_low_u32(data_swap_1, data_key_lo, data_key_hi);
uint64x2_t sum_2 = XXH_vmlal_high_u32(data_swap_2, data_key_lo, data_key_hi);
/*
* Clang reorders
* a += b * c; // umlal swap.2d, dkl.2s, dkh.2s
* c += a; // add acc.2d, acc.2d, swap.2d
* to
* c += a; // add acc.2d, acc.2d, swap.2d
* c += b * c; // umlal acc.2d, dkl.2s, dkh.2s
*
* While it would make sense in theory since the addition is faster,
* for reasons likely related to umlal being limited to certain NEON
* pipelines, this is worse. A compiler guard fixes this.
*/
XXH_COMPILER_GUARD_CLANG_NEON(sum_1);
XXH_COMPILER_GUARD_CLANG_NEON(sum_2);
/* xacc[i] = acc_vec + sum; */
xacc[i] = vaddq_u64(xacc[i], sum_1);
xacc[i+1] = vaddq_u64(xacc[i+1], sum_2);
}
/* Operate on the remaining NEON lanes 2 at a time. */
for (; i < XXH3_NEON_LANES / 2; i++) {
/* data_vec = xinput[i]; */
uint64x2_t data_vec = XXH_vld1q_u64(xinput + (i * 16));
/* key_vec = xsecret[i]; */
uint64x2_t key_vec = XXH_vld1q_u64(xsecret + (i * 16));
/* acc_vec_2 = swap(data_vec) */
uint64x2_t data_swap = vextq_u64(data_vec, data_vec, 1);
/* data_key = data_vec ^ key_vec; */
uint64x2_t data_key = veorq_u64(data_vec, key_vec);
/* For two lanes, just use VMOVN and VSHRN. */
/* data_key_lo = data_key & 0xFFFFFFFF; */
uint32x2_t data_key_lo = vmovn_u64(data_key);
/* data_key_hi = data_key >> 32; */
uint32x2_t data_key_hi = vshrn_n_u64(data_key, 32);
/* sum = data_swap + (u64x2) data_key_lo * (u64x2) data_key_hi; */
uint64x2_t sum = vmlal_u32(data_swap, data_key_lo, data_key_hi);
/* Same Clang workaround as before */
XXH_COMPILER_GUARD_CLANG_NEON(sum);
/* xacc[i] = acc_vec + sum; */
xacc[i] = vaddq_u64 (xacc[i], sum);
}
}
}
XXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(neon)
XXH_FORCE_INLINE void
XXH3_scrambleAcc_neon(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 15) == 0);
{ xxh_aliasing_uint64x2_t* xacc = (xxh_aliasing_uint64x2_t*) acc;
uint8_t const* xsecret = (uint8_t const*) secret;
size_t i;
/* WASM uses operator overloads and doesn't need these. */
#ifndef __wasm_simd128__
/* { prime32_1, prime32_1 } */
uint32x2_t const kPrimeLo = vdup_n_u32(XXH_PRIME32_1);
/* { 0, prime32_1, 0, prime32_1 } */
uint32x4_t const kPrimeHi = vreinterpretq_u32_u64(vdupq_n_u64((xxh_u64)XXH_PRIME32_1 << 32));
#endif
/* AArch64 uses both scalar and neon at the same time */
for (i = XXH3_NEON_LANES; i < XXH_ACC_NB; i++) {
XXH3_scalarScrambleRound(acc, secret, i);
}
for (i=0; i < XXH3_NEON_LANES / 2; i++) {
/* xacc[i] ^= (xacc[i] >> 47); */
uint64x2_t acc_vec = xacc[i];
uint64x2_t shifted = vshrq_n_u64(acc_vec, 47);
uint64x2_t data_vec = veorq_u64(acc_vec, shifted);
/* xacc[i] ^= xsecret[i]; */
uint64x2_t key_vec = XXH_vld1q_u64(xsecret + (i * 16));
uint64x2_t data_key = veorq_u64(data_vec, key_vec);
/* xacc[i] *= XXH_PRIME32_1 */
#ifdef __wasm_simd128__
/* SIMD128 has multiply by u64x2, use it instead of expanding and scalarizing */
xacc[i] = data_key * XXH_PRIME32_1;
#else
/*
* Expanded version with portable NEON intrinsics
*
* lo(x) * lo(y) + (hi(x) * lo(y) << 32)
*
* prod_hi = hi(data_key) * lo(prime) << 32
*
* Since we only need 32 bits of this multiply, a trick can be used: reinterpret the vector
* as a uint32x4_t and multiply by { 0, prime, 0, prime } to cancel out the unwanted bits
* and avoid the shift.
*/
uint32x4_t prod_hi = vmulq_u32 (vreinterpretq_u32_u64(data_key), kPrimeHi);
/* Extract low bits for vmlal_u32 */
uint32x2_t data_key_lo = vmovn_u64(data_key);
/* xacc[i] = prod_hi + lo(data_key) * XXH_PRIME32_1; */
xacc[i] = vmlal_u32(vreinterpretq_u64_u32(prod_hi), data_key_lo, kPrimeLo);
#endif
}
}
}
#endif
#if (XXH_VECTOR == XXH_VSX)
XXH_FORCE_INLINE void
XXH3_accumulate_512_vsx( void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
/* presumed aligned */
xxh_aliasing_u64x2* const xacc = (xxh_aliasing_u64x2*) acc;
xxh_u8 const* const xinput = (xxh_u8 const*) input; /* no alignment restriction */
xxh_u8 const* const xsecret = (xxh_u8 const*) secret; /* no alignment restriction */
xxh_u64x2 const v32 = { 32, 32 };
size_t i;
for (i = 0; i < XXH_STRIPE_LEN / sizeof(xxh_u64x2); i++) {
/* data_vec = xinput[i]; */
xxh_u64x2 const data_vec = XXH_vec_loadu(xinput + 16*i);
/* key_vec = xsecret[i]; */
xxh_u64x2 const key_vec = XXH_vec_loadu(xsecret + 16*i);
xxh_u64x2 const data_key = data_vec ^ key_vec;
/* shuffled = (data_key << 32) | (data_key >> 32); */
xxh_u32x4 const shuffled = (xxh_u32x4)vec_rl(data_key, v32);
/* product = ((xxh_u64x2)data_key & 0xFFFFFFFF) * ((xxh_u64x2)shuffled & 0xFFFFFFFF); */
xxh_u64x2 const product = XXH_vec_mulo((xxh_u32x4)data_key, shuffled);
/* acc_vec = xacc[i]; */
xxh_u64x2 acc_vec = xacc[i];
acc_vec += product;
/* swap high and low halves */
#ifdef __s390x__
acc_vec += vec_permi(data_vec, data_vec, 2);
#else
acc_vec += vec_xxpermdi(data_vec, data_vec, 2);
#endif
xacc[i] = acc_vec;
}
}
XXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(vsx)
XXH_FORCE_INLINE void
XXH3_scrambleAcc_vsx(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)
{
XXH_ASSERT((((size_t)acc) & 15) == 0);
{ xxh_aliasing_u64x2* const xacc = (xxh_aliasing_u64x2*) acc;
const xxh_u8* const xsecret = (const xxh_u8*) secret;
/* constants */
xxh_u64x2 const v32 = { 32, 32 };
xxh_u64x2 const v47 = { 47, 47 };
xxh_u32x4 const prime = { XXH_PRIME32_1, XXH_PRIME32_1, XXH_PRIME32_1, XXH_PRIME32_1 };
size_t i;
for (i = 0; i < XXH_STRIPE_LEN / sizeof(xxh_u64x2); i++) {
/* xacc[i] ^= (xacc[i] >> 47); */
xxh_u64x2 const acc_vec = xacc[i];
xxh_u64x2 const data_vec = acc_vec ^ (acc_vec >> v47);
/* xacc[i] ^= xsecret[i]; */
xxh_u64x2 const key_vec = XXH_vec_loadu(xsecret + 16*i);
xxh_u64x2 const data_key = data_vec ^ key_vec;
/* xacc[i] *= XXH_PRIME32_1 */
/* prod_lo = ((xxh_u64x2)data_key & 0xFFFFFFFF) * ((xxh_u64x2)prime & 0xFFFFFFFF); */
xxh_u64x2 const prod_even = XXH_vec_mule((xxh_u32x4)data_key, prime);
/* prod_hi = ((xxh_u64x2)data_key >> 32) * ((xxh_u64x2)prime >> 32); */
xxh_u64x2 const prod_odd = XXH_vec_mulo((xxh_u32x4)data_key, prime);
xacc[i] = prod_odd + (prod_even << v32);
} }
}
#endif
#if (XXH_VECTOR == XXH_SVE)
XXH_FORCE_INLINE void
XXH3_accumulate_512_sve( void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
uint64_t *xacc = (uint64_t *)acc;
const uint64_t *xinput = (const uint64_t *)(const void *)input;
const uint64_t *xsecret = (const uint64_t *)(const void *)secret;
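/* kSwap = { 1, 0, 3, 2, ... }: svtbl indices that swap adjacent 64-bit lanes. */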
svuint64_t kSwap = sveor_n_u64_z(svptrue_b64(), svindex_u64(0, 1), 1);
uint64_t element_count = svcntd();
if (element_count >= 8) {
svbool_t mask = svptrue_pat_b64(SV_VL8);
svuint64_t vacc = svld1_u64(mask, xacc);
ACCRND(vacc, 0);
svst1_u64(mask, xacc, vacc);
} else if (element_count == 2) { /* sve128 */
svbool_t mask = svptrue_pat_b64(SV_VL2);
svuint64_t acc0 = svld1_u64(mask, xacc + 0);
svuint64_t acc1 = svld1_u64(mask, xacc + 2);
svuint64_t acc2 = svld1_u64(mask, xacc + 4);
svuint64_t acc3 = svld1_u64(mask, xacc + 6);
ACCRND(acc0, 0);
ACCRND(acc1, 2);
ACCRND(acc2, 4);
ACCRND(acc3, 6);
svst1_u64(mask, xacc + 0, acc0);
svst1_u64(mask, xacc + 2, acc1);
svst1_u64(mask, xacc + 4, acc2);
svst1_u64(mask, xacc + 6, acc3);
} else {
svbool_t mask = svptrue_pat_b64(SV_VL4);
svuint64_t acc0 = svld1_u64(mask, xacc + 0);
svuint64_t acc1 = svld1_u64(mask, xacc + 4);
ACCRND(acc0, 0);
ACCRND(acc1, 4);
svst1_u64(mask, xacc + 0, acc0);
svst1_u64(mask, xacc + 4, acc1);
}
}
XXH_FORCE_INLINE void
XXH3_accumulate_sve(xxh_u64* XXH_RESTRICT acc,
const xxh_u8* XXH_RESTRICT input,
const xxh_u8* XXH_RESTRICT secret,
size_t nbStripes)
{
if (nbStripes != 0) {
uint64_t *xacc = (uint64_t *)acc;
const uint64_t *xinput = (const uint64_t *)(const void *)input;
const uint64_t *xsecret = (const uint64_t *)(const void *)secret;
svuint64_t kSwap = sveor_n_u64_z(svptrue_b64(), svindex_u64(0, 1), 1);
uint64_t element_count = svcntd();
if (element_count >= 8) {
svbool_t mask = svptrue_pat_b64(SV_VL8);
svuint64_t vacc = svld1_u64(mask, xacc + 0);
do {
/* svprfd(svbool_t, void *, enum svfprop); */
svprfd(mask, xinput + 128, SV_PLDL1STRM);
ACCRND(vacc, 0);
xinput += 8;
xsecret += 1;
nbStripes--;
} while (nbStripes != 0);
svst1_u64(mask, xacc + 0, vacc);
} else if (element_count == 2) { /* sve128 */
svbool_t mask = svptrue_pat_b64(SV_VL2);
svuint64_t acc0 = svld1_u64(mask, xacc + 0);
svuint64_t acc1 = svld1_u64(mask, xacc + 2);
svuint64_t acc2 = svld1_u64(mask, xacc + 4);
svuint64_t acc3 = svld1_u64(mask, xacc + 6);
do {
svprfd(mask, xinput + 128, SV_PLDL1STRM);
ACCRND(acc0, 0);
ACCRND(acc1, 2);
ACCRND(acc2, 4);
ACCRND(acc3, 6);
xinput += 8;
xsecret += 1;
nbStripes--;
} while (nbStripes != 0);
svst1_u64(mask, xacc + 0, acc0);
svst1_u64(mask, xacc + 2, acc1);
svst1_u64(mask, xacc + 4, acc2);
svst1_u64(mask, xacc + 6, acc3);
} else {
svbool_t mask = svptrue_pat_b64(SV_VL4);
svuint64_t acc0 = svld1_u64(mask, xacc + 0);
svuint64_t acc1 = svld1_u64(mask, xacc + 4);
do {
svprfd(mask, xinput + 128, SV_PLDL1STRM);
ACCRND(acc0, 0);
ACCRND(acc1, 4);
xinput += 8;
xsecret += 1;
nbStripes--;
} while (nbStripes != 0);
svst1_u64(mask, xacc + 0, acc0);
svst1_u64(mask, xacc + 4, acc1);
}
}
}
#endif
/* scalar variants - universal */
#if defined(__aarch64__) && (defined(__GNUC__) || defined(__clang__))
/*
* In XXH3_scalarRound(), GCC and Clang have a similar codegen issue, where they
* emit an excess mask and a full 64-bit multiply-add (MADD X-form).
*
* While this might not seem like much, as AArch64 is a 64-bit architecture, only
* big Cortex designs have a full 64-bit multiplier.
*
* On the little cores, the smaller 32-bit multiplier is used, and full 64-bit
* multiplies expand to 2-3 multiplies in microcode. This has a major penalty
* of up to 4 latency cycles and 2 stall cycles in the multiply pipeline.
*
* Thankfully, AArch64 still provides the 32-bit long multiply-add (UMADDL) which does
* not have this penalty and does the mask automatically.
*/
XXH_FORCE_INLINE xxh_u64
XXH_mult32to64_add64(xxh_u64 lhs, xxh_u64 rhs, xxh_u64 acc)
{
xxh_u64 ret;
/* note: %x = 64-bit register, %w = 32-bit register */
__asm__("umaddl %x0, %w1, %w2, %x3" : "=r" (ret) : "r" (lhs), "r" (rhs), "r" (acc));
return ret;
}
#else
XXH_FORCE_INLINE xxh_u64
XXH_mult32to64_add64(xxh_u64 lhs, xxh_u64 rhs, xxh_u64 acc)
{
return XXH_mult32to64((xxh_u32)lhs, (xxh_u32)rhs) + acc;
}
#endif
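/*
 * Either path computes the same thing: a 32x32->64 multiply of the low halves
 * of lhs and rhs, plus a full 64-bit accumulator. A worked example with
 * illustrative values:
 *
 *   XXH_mult32to64_add64(0x100000003ULL, 5, 10) == 3*5 + 10 == 25
 *
 * because only the low 32 bits of lhs and rhs participate in the multiply,
 * while the addition is performed in 64 bits.
 */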
/*!
* @internal
* @brief Scalar round for @ref XXH3_accumulate_512_scalar().
*
* This is extracted to its own function because the NEON path uses a combination
* of NEON and scalar.
*/
XXH_FORCE_INLINE void
XXH3_scalarRound(void* XXH_RESTRICT acc,
void const* XXH_RESTRICT input,
void const* XXH_RESTRICT secret,
size_t lane)
{
xxh_u64* xacc = (xxh_u64*) acc;
xxh_u8 const* xinput = (xxh_u8 const*) input;
xxh_u8 const* xsecret = (xxh_u8 const*) secret;
XXH_ASSERT(lane < XXH_ACC_NB);
XXH_ASSERT(((size_t)acc & (XXH_ACC_ALIGN-1)) == 0);
{
xxh_u64 const data_val = XXH_readLE64(xinput + lane * 8);
xxh_u64 const data_key = data_val ^ XXH_readLE64(xsecret + lane * 8);
xacc[lane ^ 1] += data_val; /* swap adjacent lanes */
xacc[lane] = XXH_mult32to64_add64(data_key /* & 0xFFFFFFFF */, data_key >> 32, xacc[lane]);
}
}
/*!
* @internal
* @brief Processes a 64 byte block of data using the scalar path.
*/
XXH_FORCE_INLINE void
XXH3_accumulate_512_scalar(void* XXH_RESTRICT acc,
const void* XXH_RESTRICT input,
const void* XXH_RESTRICT secret)
{
size_t i;
/* ARM GCC refuses to unroll this loop, resulting in a 24% slowdown on ARMv6. */
#if defined(__GNUC__) && !defined(__clang__) \
&& (defined(__arm__) || defined(__thumb2__)) \
&& defined(__ARM_FEATURE_UNALIGNED) /* no unaligned access just wastes bytes */ \
&& XXH_SIZE_OPT <= 0
# pragma GCC unroll 8
#endif
for (i=0; i < XXH_ACC_NB; i++) {
XXH3_scalarRound(acc, input, secret, i);
}
}
XXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(scalar)
/*!
* @internal
* @brief Scalar scramble step for @ref XXH3_scrambleAcc_scalar().
*
* This is extracted to its own function because the NEON path uses a combination
* of NEON and scalar.
*/
XXH_FORCE_INLINE void
XXH3_scalarScrambleRound(void* XXH_RESTRICT acc,
void const* XXH_RESTRICT secret,
size_t lane)
{
xxh_u64* const xacc = (xxh_u64*) acc; /* presumed aligned */
const xxh_u8* const xsecret = (const xxh_u8*) secret; /* no alignment restriction */
XXH_ASSERT((((size_t)acc) & (XXH_ACC_ALIGN-1)) == 0);
XXH_ASSERT(lane < XXH_ACC_NB);
{
xxh_u64 const key64 = XXH_readLE64(xsecret + lane * 8);
xxh_u64 acc64 = xacc[lane];
acc64 = XXH_xorshift64(acc64, 47);
acc64 ^= key64;
acc64 *= XXH_PRIME32_1;
xacc[lane] = acc64;
}
}
/*!
* @internal
* @brief Scrambles the accumulators after a large chunk has been read
*/
XXH_FORCE_INLINE void
XXH3_scrambleAcc_scalar(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)
{
size_t i;
for (i=0; i < XXH_ACC_NB; i++) {
XXH3_scalarScrambleRound(acc, secret, i);
}
}
XXH_FORCE_INLINE void
XXH3_initCustomSecret_scalar(void* XXH_RESTRICT customSecret, xxh_u64 seed64)
{
/*
* We need a separate pointer for the hack below,
* which requires a non-const pointer.
* Any decent compiler will optimize this out otherwise.
*/
const xxh_u8* kSecretPtr = XXH3_kSecret;
XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 15) == 0);
#if defined(__GNUC__) && defined(__aarch64__)
/*
* UGLY HACK:
* GCC and Clang generate a bunch of MOV/MOVK pairs for aarch64, and they are
* placed sequentially, in order, at the top of the unrolled loop.
*
* While MOVK is great for generating constants (2 cycles for a 64-bit
* constant compared to 4 cycles for LDR), it fights for bandwidth with
* the arithmetic instructions.
*
* I L S
* MOVK
* MOVK
* MOVK
* MOVK
* ADD
* SUB STR
* STR
* By forcing loads from memory (as the asm line causes the compiler to assume
* that kSecretPtr has been changed), the pipelines are used more
* efficiently:
* I L S
* LDR
* ADD LDR
* SUB STR
* STR
*
* See XXH3_NEON_LANES for details on the pipeline.
*
* XXH3_64bits_withSeed, len == 256, Snapdragon 835
* without hack: 2654.4 MB/s
* with hack: 3202.9 MB/s
*/
XXH_COMPILER_GUARD(kSecretPtr);
#endif
{ int const nbRounds = XXH_SECRET_DEFAULT_SIZE / 16;
int i;
for (i=0; i < nbRounds; i++) {
/*
* The asm hack causes the compiler to assume that kSecretPtr aliases with
* customSecret, and on aarch64, this prevented LDP from merging two
* loads together for free. Putting the loads together before the stores
* properly generates LDP.
*/
xxh_u64 lo = XXH_readLE64(kSecretPtr + 16*i) + seed64;
xxh_u64 hi = XXH_readLE64(kSecretPtr + 16*i + 8) - seed64;
XXH_writeLE64((xxh_u8*)customSecret + 16*i, lo);
XXH_writeLE64((xxh_u8*)customSecret + 16*i + 8, hi);
} }
}
typedef void (*XXH3_f_accumulate)(xxh_u64* XXH_RESTRICT, const xxh_u8* XXH_RESTRICT, const xxh_u8* XXH_RESTRICT, size_t);
typedef void (*XXH3_f_scrambleAcc)(void* XXH_RESTRICT, const void*);
typedef void (*XXH3_f_initCustomSecret)(void* XXH_RESTRICT, xxh_u64);
#if (XXH_VECTOR == XXH_AVX512)
#define XXH3_accumulate_512 XXH3_accumulate_512_avx512
#define XXH3_accumulate XXH3_accumulate_avx512
#define XXH3_scrambleAcc XXH3_scrambleAcc_avx512
#define XXH3_initCustomSecret XXH3_initCustomSecret_avx512
#elif (XXH_VECTOR == XXH_AVX2)
#define XXH3_accumulate_512 XXH3_accumulate_512_avx2
#define XXH3_accumulate XXH3_accumulate_avx2
#define XXH3_scrambleAcc XXH3_scrambleAcc_avx2
#define XXH3_initCustomSecret XXH3_initCustomSecret_avx2
#elif (XXH_VECTOR == XXH_SSE2)
#define XXH3_accumulate_512 XXH3_accumulate_512_sse2
#define XXH3_accumulate XXH3_accumulate_sse2
#define XXH3_scrambleAcc XXH3_scrambleAcc_sse2
#define XXH3_initCustomSecret XXH3_initCustomSecret_sse2
#elif (XXH_VECTOR == XXH_NEON)
#define XXH3_accumulate_512 XXH3_accumulate_512_neon
#define XXH3_accumulate XXH3_accumulate_neon
#define XXH3_scrambleAcc XXH3_scrambleAcc_neon
#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar
#elif (XXH_VECTOR == XXH_VSX)
#define XXH3_accumulate_512 XXH3_accumulate_512_vsx
#define XXH3_accumulate XXH3_accumulate_vsx
#define XXH3_scrambleAcc XXH3_scrambleAcc_vsx
#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar
#elif (XXH_VECTOR == XXH_SVE)
#define XXH3_accumulate_512 XXH3_accumulate_512_sve
#define XXH3_accumulate XXH3_accumulate_sve
#define XXH3_scrambleAcc XXH3_scrambleAcc_scalar
#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar
#else /* scalar */
#define XXH3_accumulate_512 XXH3_accumulate_512_scalar
#define XXH3_accumulate XXH3_accumulate_scalar
#define XXH3_scrambleAcc XXH3_scrambleAcc_scalar
#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar
#endif
#if XXH_SIZE_OPT >= 1 /* don't do SIMD for initialization */
# undef XXH3_initCustomSecret
# define XXH3_initCustomSecret XXH3_initCustomSecret_scalar
#endif
XXH_FORCE_INLINE void
XXH3_hashLong_internal_loop(xxh_u64* XXH_RESTRICT acc,
const xxh_u8* XXH_RESTRICT input, size_t len,
const xxh_u8* XXH_RESTRICT secret, size_t secretSize,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble)
{
size_t const nbStripesPerBlock = (secretSize - XXH_STRIPE_LEN) / XXH_SECRET_CONSUME_RATE;
size_t const block_len = XXH_STRIPE_LEN * nbStripesPerBlock;
size_t const nb_blocks = (len - 1) / block_len;
size_t n;
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN);
for (n = 0; n < nb_blocks; n++) {
f_acc(acc, input + n*block_len, secret, nbStripesPerBlock);
f_scramble(acc, secret + secretSize - XXH_STRIPE_LEN);
}
/* last partial block */
XXH_ASSERT(len > XXH_STRIPE_LEN);
{ size_t const nbStripes = ((len - 1) - (block_len * nb_blocks)) / XXH_STRIPE_LEN;
XXH_ASSERT(nbStripes <= (secretSize / XXH_SECRET_CONSUME_RATE));
f_acc(acc, input + nb_blocks*block_len, secret, nbStripes);
/* last stripe */
{ const xxh_u8* const p = input + len - XXH_STRIPE_LEN;
#define XXH_SECRET_LASTACC_START 7 /* not aligned on 8, last secret is different from acc & scrambler */
XXH3_accumulate_512(acc, p, secret + secretSize - XXH_STRIPE_LEN - XXH_SECRET_LASTACC_START);
} }
}
XXH_FORCE_INLINE xxh_u64
XXH3_mix2Accs(const xxh_u64* XXH_RESTRICT acc, const xxh_u8* XXH_RESTRICT secret)
{
return XXH3_mul128_fold64(
acc[0] ^ XXH_readLE64(secret),
acc[1] ^ XXH_readLE64(secret+8) );
}
static XXH64_hash_t
XXH3_mergeAccs(const xxh_u64* XXH_RESTRICT acc, const xxh_u8* XXH_RESTRICT secret, xxh_u64 start)
{
xxh_u64 result64 = start;
size_t i = 0;
for (i = 0; i < 4; i++) {
result64 += XXH3_mix2Accs(acc+2*i, secret + 16*i);
#if defined(__clang__) /* Clang */ \
&& (defined(__arm__) || defined(__thumb__)) /* ARMv7 */ \
&& (defined(__ARM_NEON) || defined(__ARM_NEON__)) /* NEON */ \
&& !defined(XXH_ENABLE_AUTOVECTORIZE) /* Define to disable */
/*
* UGLY HACK:
* Prevent autovectorization on Clang ARMv7-a. Exact same problem as
* the one in XXH3_len_129to240_64b. Speeds up shorter keys > 240b.
* XXH3_64bits, len == 256, Snapdragon 835:
* without hack: 2063.7 MB/s
* with hack: 2560.7 MB/s
*/
XXH_COMPILER_GUARD(result64);
#endif
}
return XXH3_avalanche(result64);
}
#define XXH3_INIT_ACC { XXH_PRIME32_3, XXH_PRIME64_1, XXH_PRIME64_2, XXH_PRIME64_3, \
XXH_PRIME64_4, XXH_PRIME32_2, XXH_PRIME64_5, XXH_PRIME32_1 }
XXH_FORCE_INLINE XXH64_hash_t
XXH3_hashLong_64b_internal(const void* XXH_RESTRICT input, size_t len,
const void* XXH_RESTRICT secret, size_t secretSize,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble)
{
XXH_ALIGN(XXH_ACC_ALIGN) xxh_u64 acc[XXH_ACC_NB] = XXH3_INIT_ACC;
XXH3_hashLong_internal_loop(acc, (const xxh_u8*)input, len, (const xxh_u8*)secret, secretSize, f_acc, f_scramble);
/* converge into final hash */
XXH_STATIC_ASSERT(sizeof(acc) == 64);
/* do not align on 8, so that the secret is different from the accumulator */
#define XXH_SECRET_MERGEACCS_START 11
XXH_ASSERT(secretSize >= sizeof(acc) + XXH_SECRET_MERGEACCS_START);
return XXH3_mergeAccs(acc, (const xxh_u8*)secret + XXH_SECRET_MERGEACCS_START, (xxh_u64)len * XXH_PRIME64_1);
}
/*
* It's important for performance to transmit secret's size (when it's static)
* so that the compiler can properly optimize the vectorized loop.
* This makes a big performance difference for "medium" keys (<1 KB) when using AVX instruction set.
* When the secret size is unknown, or on GCC 12 where the mix of NO_INLINE and FORCE_INLINE
* breaks -Og, this is XXH_NO_INLINE.
*/
XXH3_WITH_SECRET_INLINE XXH64_hash_t
XXH3_hashLong_64b_withSecret(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed64, const xxh_u8* XXH_RESTRICT secret, size_t secretLen)
{
(void)seed64;
return XXH3_hashLong_64b_internal(input, len, secret, secretLen, XXH3_accumulate, XXH3_scrambleAcc);
}
/*
* It's preferable for performance that XXH3_hashLong is not inlined,
* as it results in a smaller function for small data, which is easier on the instruction cache.
* Note that inside this no_inline function, we do inline the internal loop,
* and provide a statically defined secret size to allow optimization of vector loop.
*/
XXH_NO_INLINE XXH_PUREF XXH64_hash_t
XXH3_hashLong_64b_default(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed64, const xxh_u8* XXH_RESTRICT secret, size_t secretLen)
{
(void)seed64; (void)secret; (void)secretLen;
return XXH3_hashLong_64b_internal(input, len, XXH3_kSecret, sizeof(XXH3_kSecret), XXH3_accumulate, XXH3_scrambleAcc);
}
/*
* XXH3_hashLong_64b_withSeed():
* Generate a custom key based on alteration of default XXH3_kSecret with the seed,
* and then use this key for long mode hashing.
*
* This operation is decently fast but nonetheless costs a little bit of time.
* Try to avoid it whenever possible (typically when seed==0).
*
* It's important for performance that XXH3_hashLong is not inlined. Not sure
* why (uop cache maybe?), but the difference is large and easily measurable.
*/
XXH_FORCE_INLINE XXH64_hash_t
XXH3_hashLong_64b_withSeed_internal(const void* input, size_t len,
XXH64_hash_t seed,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble,
XXH3_f_initCustomSecret f_initSec)
{
#if XXH_SIZE_OPT <= 0
if (seed == 0)
return XXH3_hashLong_64b_internal(input, len,
XXH3_kSecret, sizeof(XXH3_kSecret),
f_acc, f_scramble);
#endif
{ XXH_ALIGN(XXH_SEC_ALIGN) xxh_u8 secret[XXH_SECRET_DEFAULT_SIZE];
f_initSec(secret, seed);
return XXH3_hashLong_64b_internal(input, len, secret, sizeof(secret),
f_acc, f_scramble);
}
}
/*
* It's important for performance that XXH3_hashLong is not inlined.
*/
XXH_NO_INLINE XXH64_hash_t
XXH3_hashLong_64b_withSeed(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed, const xxh_u8* XXH_RESTRICT secret, size_t secretLen)
{
(void)secret; (void)secretLen;
return XXH3_hashLong_64b_withSeed_internal(input, len, seed,
XXH3_accumulate, XXH3_scrambleAcc, XXH3_initCustomSecret);
}
typedef XXH64_hash_t (*XXH3_hashLong64_f)(const void* XXH_RESTRICT, size_t,
XXH64_hash_t, const xxh_u8* XXH_RESTRICT, size_t);
XXH_FORCE_INLINE XXH64_hash_t
XXH3_64bits_internal(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed64, const void* XXH_RESTRICT secret, size_t secretLen,
XXH3_hashLong64_f f_hashLong)
{
XXH_ASSERT(secretLen >= XXH3_SECRET_SIZE_MIN);
/*
* If an action is to be taken if `secretLen` condition is not respected,
* it should be done here.
* For now, it's a contract pre-condition.
* Adding a check and a branch here would cost performance at every hash.
* Also, note that function signature doesn't offer room to return an error.
*/
if (len <= 16)
return XXH3_len_0to16_64b((const xxh_u8*)input, len, (const xxh_u8*)secret, seed64);
if (len <= 128)
return XXH3_len_17to128_64b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);
if (len <= XXH3_MIDSIZE_MAX)
return XXH3_len_129to240_64b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);
return f_hashLong(input, len, seed64, (const xxh_u8*)secret, secretLen);
}
/* === Public entry point === */
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH64_hash_t XXH3_64bits(XXH_NOESCAPE const void* input, size_t length)
{
return XXH3_64bits_internal(input, length, 0, XXH3_kSecret, sizeof(XXH3_kSecret), XXH3_hashLong_64b_default);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH64_hash_t
XXH3_64bits_withSecret(XXH_NOESCAPE const void* input, size_t length, XXH_NOESCAPE const void* secret, size_t secretSize)
{
return XXH3_64bits_internal(input, length, 0, secret, secretSize, XXH3_hashLong_64b_withSecret);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH64_hash_t
XXH3_64bits_withSeed(XXH_NOESCAPE const void* input, size_t length, XXH64_hash_t seed)
{
return XXH3_64bits_internal(input, length, seed, XXH3_kSecret, sizeof(XXH3_kSecret), XXH3_hashLong_64b_withSeed);
}
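/*
 * A minimal one-shot usage sketch for the entry points above. The data buffer
 * and its size are hypothetical; seed 0 uses the default secret, while a
 * non-zero seed derives a custom secret internally.
 *
 * @code{.c}
 *   #include "xxhash.h"
 *
 *   XXH64_hash_t hash_default(const void* data, size_t size)
 *   {
 *       return XXH3_64bits(data, size);                 // default secret
 *   }
 *
 *   XXH64_hash_t hash_seeded(const void* data, size_t size, XXH64_hash_t seed)
 *   {
 *       return XXH3_64bits_withSeed(data, size, seed);
 *   }
 * @endcode
 */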
XXH_PUBLIC_API XXH64_hash_t
XXH3_64bits_withSecretandSeed(XXH_NOESCAPE const void* input, size_t length, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed)
{
if (length <= XXH3_MIDSIZE_MAX)
return XXH3_64bits_internal(input, length, seed, XXH3_kSecret, sizeof(XXH3_kSecret), NULL);
return XXH3_hashLong_64b_withSecret(input, length, seed, (const xxh_u8*)secret, secretSize);
}
/* === XXH3 streaming === */
#ifndef XXH_NO_STREAM
/*
* Allocates a pointer that is always aligned to align.
*
* This must be freed with `XXH_alignedFree()`.
*
* malloc typically guarantees 16 byte alignment on 64-bit systems and 8 byte
* alignment on 32-bit. This isn't enough for the 32 byte aligned loads in AVX2,
* nor, on 32-bit targets, for the 16 byte aligned loads in SSE2 and NEON.
*
* This underalignment previously caused a rather obvious crash which went
* completely unnoticed due to XXH3_createState() not actually being tested.
* Credit to RedSpah for noticing this bug.
*
* The alignment is done manually: Functions like posix_memalign or _mm_malloc
* are avoided: To maintain portability, we would have to write a fallback
* like this anyways, and besides, testing for the existence of library
* functions without relying on external build tools is impossible.
*
* The method is simple: Overallocate, manually align, and store the offset
* to the original behind the returned pointer.
*
* Align must be a power of 2 and 8 <= align <= 128.
*/
static XXH_MALLOCF void* XXH_alignedMalloc(size_t s, size_t align)
{
XXH_ASSERT(align <= 128 && align >= 8); /* range check */
XXH_ASSERT((align & (align-1)) == 0); /* power of 2 */
XXH_ASSERT(s != 0 && s < (s + align)); /* empty/overflow */
{ /* Overallocate to make room for manual realignment and an offset byte */
xxh_u8* base = (xxh_u8*)XXH_malloc(s + align);
if (base != NULL) {
/*
* Get the offset needed to align this pointer.
*
* Even if the original pointer is already aligned, there will always be
* at least one byte before the returned pointer to store the offset.
*/
size_t offset = align - ((size_t)base & (align - 1)); /* base % align */
/* Add the offset for the now-aligned pointer */
xxh_u8* ptr = base + offset;
XXH_ASSERT((size_t)ptr % align == 0);
/* Store the offset immediately before the returned pointer. */
ptr[-1] = (xxh_u8)offset;
return ptr;
}
return NULL;
}
}
/*
* Frees an aligned pointer allocated by XXH_alignedMalloc(). Don't pass
* normal malloc'd pointers; XXH_alignedMalloc has a specific data layout.
*/
static void XXH_alignedFree(void* p)
{
if (p != NULL) {
xxh_u8* ptr = (xxh_u8*)p;
/* Get the offset byte we added in XXH_alignedMalloc. */
xxh_u8 offset = ptr[-1];
/* Free the original malloc'd pointer */
xxh_u8* base = ptr - offset;
XXH_free(base);
}
}
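/*
 * A minimal sketch of the pairing contract for the two helpers above
 * (the size and alignment values are illustrative; 64 is the alignment
 * used by XXH3_createState() below):
 *
 * @code{.c}
 *   void* p = XXH_alignedMalloc(1024, 64);
 *   if (p != NULL) {
 *       // (size_t)p % 64 == 0, and ((xxh_u8*)p)[-1] stores the offset back
 *       // to the raw XXH_malloc() block, so only XXH_alignedFree() may free it.
 *       XXH_alignedFree(p);
 *   }
 * @endcode
 */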
/*! @ingroup XXH3_family */
/*!
* @brief Allocate an @ref XXH3_state_t.
*
* @return An allocated pointer of @ref XXH3_state_t on success.
* @return `NULL` on failure.
*
* @note Must be freed with XXH3_freeState().
*/
XXH_PUBLIC_API XXH3_state_t* XXH3_createState(void)
{
XXH3_state_t* const state = (XXH3_state_t*)XXH_alignedMalloc(sizeof(XXH3_state_t), 64);
if (state==NULL) return NULL;
XXH3_INITSTATE(state);
return state;
}
/*! @ingroup XXH3_family */
/*!
* @brief Frees an @ref XXH3_state_t.
*
* @param statePtr A pointer to an @ref XXH3_state_t allocated with @ref XXH3_createState().
*
* @return @ref XXH_OK.
*
* @note Must be allocated with XXH3_createState().
*/
XXH_PUBLIC_API XXH_errorcode XXH3_freeState(XXH3_state_t* statePtr)
{
XXH_alignedFree(statePtr);
return XXH_OK;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API void
XXH3_copyState(XXH_NOESCAPE XXH3_state_t* dst_state, XXH_NOESCAPE const XXH3_state_t* src_state)
{
XXH_memcpy(dst_state, src_state, sizeof(*dst_state));
}
static void
XXH3_reset_internal(XXH3_state_t* statePtr,
XXH64_hash_t seed,
const void* secret, size_t secretSize)
{
size_t const initStart = offsetof(XXH3_state_t, bufferedSize);
size_t const initLength = offsetof(XXH3_state_t, nbStripesPerBlock) - initStart;
XXH_ASSERT(offsetof(XXH3_state_t, nbStripesPerBlock) > initStart);
XXH_ASSERT(statePtr != NULL);
/* set members from bufferedSize to nbStripesPerBlock (excluded) to 0 */
memset((char*)statePtr + initStart, 0, initLength);
statePtr->acc[0] = XXH_PRIME32_3;
statePtr->acc[1] = XXH_PRIME64_1;
statePtr->acc[2] = XXH_PRIME64_2;
statePtr->acc[3] = XXH_PRIME64_3;
statePtr->acc[4] = XXH_PRIME64_4;
statePtr->acc[5] = XXH_PRIME32_2;
statePtr->acc[6] = XXH_PRIME64_5;
statePtr->acc[7] = XXH_PRIME32_1;
statePtr->seed = seed;
statePtr->useSeed = (seed != 0);
statePtr->extSecret = (const unsigned char*)secret;
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN);
statePtr->secretLimit = secretSize - XXH_STRIPE_LEN;
statePtr->nbStripesPerBlock = statePtr->secretLimit / XXH_SECRET_CONSUME_RATE;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_64bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr)
{
if (statePtr == NULL) return XXH_ERROR;
XXH3_reset_internal(statePtr, 0, XXH3_kSecret, XXH_SECRET_DEFAULT_SIZE);
return XXH_OK;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_64bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize)
{
if (statePtr == NULL) return XXH_ERROR;
XXH3_reset_internal(statePtr, 0, secret, secretSize);
if (secret == NULL) return XXH_ERROR;
if (secretSize < XXH3_SECRET_SIZE_MIN) return XXH_ERROR;
return XXH_OK;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_64bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed)
{
if (statePtr == NULL) return XXH_ERROR;
if (seed==0) return XXH3_64bits_reset(statePtr);
if ((seed != statePtr->seed) || (statePtr->extSecret != NULL))
XXH3_initCustomSecret(statePtr->customSecret, seed);
XXH3_reset_internal(statePtr, seed, NULL, XXH_SECRET_DEFAULT_SIZE);
return XXH_OK;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_64bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed64)
{
if (statePtr == NULL) return XXH_ERROR;
if (secret == NULL) return XXH_ERROR;
if (secretSize < XXH3_SECRET_SIZE_MIN) return XXH_ERROR;
XXH3_reset_internal(statePtr, seed64, secret, secretSize);
statePtr->useSeed = 1; /* always, even if seed64==0 */
return XXH_OK;
}
/*!
* @internal
* @brief Processes a large input for XXH3_update() and XXH3_digest_long().
*
* Unlike XXH3_hashLong_internal_loop(), this can process data that overlaps a block.
*
* @param acc Pointer to the 8 accumulator lanes
* @param nbStripesSoFarPtr In/out pointer to the number of stripes already processed in the current block
* @param nbStripesPerBlock Number of stripes in a block
* @param input Input pointer
* @param nbStripes Number of stripes to process
* @param secret Secret pointer
* @param secretLimit Offset of the last block in @p secret
* @param f_acc Pointer to an XXH3_accumulate implementation
* @param f_scramble Pointer to an XXH3_scrambleAcc implementation
* @return Pointer past the end of @p input after processing
*/
XXH_FORCE_INLINE const xxh_u8 *
XXH3_consumeStripes(xxh_u64* XXH_RESTRICT acc,
size_t* XXH_RESTRICT nbStripesSoFarPtr, size_t nbStripesPerBlock,
const xxh_u8* XXH_RESTRICT input, size_t nbStripes,
const xxh_u8* XXH_RESTRICT secret, size_t secretLimit,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble)
{
const xxh_u8* initialSecret = secret + *nbStripesSoFarPtr * XXH_SECRET_CONSUME_RATE;
/* Process full blocks */
if (nbStripes >= (nbStripesPerBlock - *nbStripesSoFarPtr)) {
/* Process the initial partial block... */
size_t nbStripesThisIter = nbStripesPerBlock - *nbStripesSoFarPtr;
do {
/* Accumulate and scramble */
f_acc(acc, input, initialSecret, nbStripesThisIter);
f_scramble(acc, secret + secretLimit);
input += nbStripesThisIter * XXH_STRIPE_LEN;
nbStripes -= nbStripesThisIter;
/* Then continue the loop with the full block size */
nbStripesThisIter = nbStripesPerBlock;
initialSecret = secret;
} while (nbStripes >= nbStripesPerBlock);
*nbStripesSoFarPtr = 0;
}
/* Process a partial block */
if (nbStripes > 0) {
f_acc(acc, input, initialSecret, nbStripes);
input += nbStripes * XXH_STRIPE_LEN;
*nbStripesSoFarPtr += nbStripes;
}
/* Return end pointer */
return input;
}
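/*
 * Worked example (illustrative numbers, default 192-byte secret, so
 * nbStripesPerBlock == 16): with *nbStripesSoFarPtr == 10 and nbStripes == 40,
 * the first loop iteration consumes the 6 stripes that complete the current
 * block and scrambles; two further iterations each consume a full 16-stripe
 * block and scramble; the remaining 2 stripes go through the partial-block
 * step, so the function returns with *nbStripesSoFarPtr == 2.
 */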
#ifndef XXH3_STREAM_USE_STACK
# if XXH_SIZE_OPT <= 0 && !defined(__clang__) /* clang doesn't need additional stack space */
# define XXH3_STREAM_USE_STACK 1
# endif
#endif
/*
* Both XXH3_64bits_update and XXH3_128bits_update use this routine.
*/
XXH_FORCE_INLINE XXH_errorcode
XXH3_update(XXH3_state_t* XXH_RESTRICT const state,
const xxh_u8* XXH_RESTRICT input, size_t len,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble)
{
if (input==NULL) {
XXH_ASSERT(len == 0);
return XXH_OK;
}
XXH_ASSERT(state != NULL);
{ const xxh_u8* const bEnd = input + len;
const unsigned char* const secret = (state->extSecret == NULL) ? state->customSecret : state->extSecret;
#if defined(XXH3_STREAM_USE_STACK) && XXH3_STREAM_USE_STACK >= 1
/* For some reason, gcc and MSVC seem to suffer greatly
* when operating accumulators directly into state.
* Operating into stack space seems to enable proper optimization.
* clang, on the other hand, doesn't seem to need this trick */
XXH_ALIGN(XXH_ACC_ALIGN) xxh_u64 acc[8];
XXH_memcpy(acc, state->acc, sizeof(acc));
#else
xxh_u64* XXH_RESTRICT const acc = state->acc;
#endif
state->totalLen += len;
XXH_ASSERT(state->bufferedSize <= XXH3_INTERNALBUFFER_SIZE);
/* small input : just fill in tmp buffer */
if (len <= XXH3_INTERNALBUFFER_SIZE - state->bufferedSize) {
XXH_memcpy(state->buffer + state->bufferedSize, input, len);
state->bufferedSize += (XXH32_hash_t)len;
return XXH_OK;
}
/* total input is now > XXH3_INTERNALBUFFER_SIZE */
#define XXH3_INTERNALBUFFER_STRIPES (XXH3_INTERNALBUFFER_SIZE / XXH_STRIPE_LEN)
XXH_STATIC_ASSERT(XXH3_INTERNALBUFFER_SIZE % XXH_STRIPE_LEN == 0); /* clean multiple */
/*
* Internal buffer is partially filled (always, except at beginning)
* Complete it, then consume it.
*/
if (state->bufferedSize) {
size_t const loadSize = XXH3_INTERNALBUFFER_SIZE - state->bufferedSize;
XXH_memcpy(state->buffer + state->bufferedSize, input, loadSize);
input += loadSize;
XXH3_consumeStripes(acc,
&state->nbStripesSoFar, state->nbStripesPerBlock,
state->buffer, XXH3_INTERNALBUFFER_STRIPES,
secret, state->secretLimit,
f_acc, f_scramble);
state->bufferedSize = 0;
}
XXH_ASSERT(input < bEnd);
if (bEnd - input > XXH3_INTERNALBUFFER_SIZE) {
size_t nbStripes = (size_t)(bEnd - 1 - input) / XXH_STRIPE_LEN;
input = XXH3_consumeStripes(acc,
&state->nbStripesSoFar, state->nbStripesPerBlock,
input, nbStripes,
secret, state->secretLimit,
f_acc, f_scramble);
XXH_memcpy(state->buffer + sizeof(state->buffer) - XXH_STRIPE_LEN, input - XXH_STRIPE_LEN, XXH_STRIPE_LEN);
}
/* Some remaining input (always) : buffer it */
XXH_ASSERT(input < bEnd);
XXH_ASSERT(bEnd - input <= XXH3_INTERNALBUFFER_SIZE);
XXH_ASSERT(state->bufferedSize == 0);
XXH_memcpy(state->buffer, input, (size_t)(bEnd-input));
state->bufferedSize = (XXH32_hash_t)(bEnd-input);
#if defined(XXH3_STREAM_USE_STACK) && XXH3_STREAM_USE_STACK >= 1
/* save stack accumulators into state */
XXH_memcpy(state->acc, acc, sizeof(acc));
#endif
}
return XXH_OK;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_64bits_update(XXH_NOESCAPE XXH3_state_t* state, XXH_NOESCAPE const void* input, size_t len)
{
return XXH3_update(state, (const xxh_u8*)input, len,
XXH3_accumulate, XXH3_scrambleAcc);
}
XXH_FORCE_INLINE void
XXH3_digest_long (XXH64_hash_t* acc,
const XXH3_state_t* state,
const unsigned char* secret)
{
xxh_u8 lastStripe[XXH_STRIPE_LEN];
const xxh_u8* lastStripePtr;
/*
* Digest on a local copy. This way, the state remains unaltered, and it can
* continue ingesting more input afterwards.
*/
XXH_memcpy(acc, state->acc, sizeof(state->acc));
if (state->bufferedSize >= XXH_STRIPE_LEN) {
/* Consume remaining stripes then point to remaining data in buffer */
size_t const nbStripes = (state->bufferedSize - 1) / XXH_STRIPE_LEN;
size_t nbStripesSoFar = state->nbStripesSoFar;
XXH3_consumeStripes(acc,
&nbStripesSoFar, state->nbStripesPerBlock,
state->buffer, nbStripes,
secret, state->secretLimit,
XXH3_accumulate, XXH3_scrambleAcc);
lastStripePtr = state->buffer + state->bufferedSize - XXH_STRIPE_LEN;
} else { /* bufferedSize < XXH_STRIPE_LEN */
/* Copy to temp buffer */
size_t const catchupSize = XXH_STRIPE_LEN - state->bufferedSize;
XXH_ASSERT(state->bufferedSize > 0); /* there is always some input buffered */
XXH_memcpy(lastStripe, state->buffer + sizeof(state->buffer) - catchupSize, catchupSize);
XXH_memcpy(lastStripe + catchupSize, state->buffer, state->bufferedSize);
lastStripePtr = lastStripe;
}
/* Last stripe */
XXH3_accumulate_512(acc,
lastStripePtr,
secret + state->secretLimit - XXH_SECRET_LASTACC_START);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH64_hash_t XXH3_64bits_digest (XXH_NOESCAPE const XXH3_state_t* state)
{
const unsigned char* const secret = (state->extSecret == NULL) ? state->customSecret : state->extSecret;
if (state->totalLen > XXH3_MIDSIZE_MAX) {
XXH_ALIGN(XXH_ACC_ALIGN) XXH64_hash_t acc[XXH_ACC_NB];
XXH3_digest_long(acc, state, secret);
return XXH3_mergeAccs(acc,
secret + XXH_SECRET_MERGEACCS_START,
(xxh_u64)state->totalLen * XXH_PRIME64_1);
}
/* totalLen <= XXH3_MIDSIZE_MAX: digesting a short input */
if (state->useSeed)
return XXH3_64bits_withSeed(state->buffer, (size_t)state->totalLen, state->seed);
return XXH3_64bits_withSecret(state->buffer, (size_t)(state->totalLen),
secret, state->secretLimit + XXH_STRIPE_LEN);
}
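/*
 * A minimal streaming usage sketch for the state API above. Error handling is
 * abbreviated and read_chunk() is a hypothetical data source.
 *
 * @code{.c}
 *   XXH3_state_t* const st = XXH3_createState();
 *   if (st == NULL) return 1;
 *   if (XXH3_64bits_reset(st) == XXH_ERROR) return 1;
 *   {   char buf[4096];
 *       size_t n;
 *       while ((n = read_chunk(buf, sizeof(buf))) != 0) {
 *           if (XXH3_64bits_update(st, buf, n) == XXH_ERROR) return 1;
 *       }
 *   }
 *   {   XXH64_hash_t const h = XXH3_64bits_digest(st);  // state remains usable
 *       (void)h;
 *   }
 *   XXH3_freeState(st);
 * @endcode
 */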
#endif /* !XXH_NO_STREAM */
/* ==========================================
* XXH3 128 bits (a.k.a XXH128)
* ==========================================
* XXH3's 128-bit variant has better mixing and strength than the 64-bit variant,
* even without counting the significantly larger output size.
*
* For example, extra steps are taken to avoid the seed-dependent collisions
* in 17-240 byte inputs (See XXH3_mix16B and XXH128_mix32B).
*
* This strength naturally comes at the cost of some speed, especially on short
* lengths. Note that longer hashes are about as fast as the 64-bit version
* due to it using only a slight modification of the 64-bit loop.
*
* XXH128 is also more oriented towards 64-bit machines. It is still extremely
* fast for a _128-bit_ hash on 32-bit (it usually clears XXH64).
*/
XXH_FORCE_INLINE XXH_PUREF XXH128_hash_t
XXH3_len_1to3_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
/* A doubled version of 1to3_64b with different constants. */
XXH_ASSERT(input != NULL);
XXH_ASSERT(1 <= len && len <= 3);
XXH_ASSERT(secret != NULL);
/*
* len = 1: combinedl = { input[0], 0x01, input[0], input[0] }
* len = 2: combinedl = { input[1], 0x02, input[0], input[1] }
* len = 3: combinedl = { input[2], 0x03, input[0], input[1] }
*/
{ xxh_u8 const c1 = input[0];
xxh_u8 const c2 = input[len >> 1];
xxh_u8 const c3 = input[len - 1];
xxh_u32 const combinedl = ((xxh_u32)c1 <<16) | ((xxh_u32)c2 << 24)
| ((xxh_u32)c3 << 0) | ((xxh_u32)len << 8);
xxh_u32 const combinedh = XXH_rotl32(XXH_swap32(combinedl), 13);
xxh_u64 const bitflipl = (XXH_readLE32(secret) ^ XXH_readLE32(secret+4)) + seed;
xxh_u64 const bitfliph = (XXH_readLE32(secret+8) ^ XXH_readLE32(secret+12)) - seed;
xxh_u64 const keyed_lo = (xxh_u64)combinedl ^ bitflipl;
xxh_u64 const keyed_hi = (xxh_u64)combinedh ^ bitfliph;
XXH128_hash_t h128;
h128.low64 = XXH64_avalanche(keyed_lo);
h128.high64 = XXH64_avalanche(keyed_hi);
return h128;
}
}
XXH_FORCE_INLINE XXH_PUREF XXH128_hash_t
XXH3_len_4to8_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(input != NULL);
XXH_ASSERT(secret != NULL);
XXH_ASSERT(4 <= len && len <= 8);
seed ^= (xxh_u64)XXH_swap32((xxh_u32)seed) << 32;
{ xxh_u32 const input_lo = XXH_readLE32(input);
xxh_u32 const input_hi = XXH_readLE32(input + len - 4);
xxh_u64 const input_64 = input_lo + ((xxh_u64)input_hi << 32);
xxh_u64 const bitflip = (XXH_readLE64(secret+16) ^ XXH_readLE64(secret+24)) + seed;
xxh_u64 const keyed = input_64 ^ bitflip;
/* Shift len left by 2 so that the value added to XXH_PRIME64_1 is even: keeping the multiplier odd avoids multiplying by an even number. */
XXH128_hash_t m128 = XXH_mult64to128(keyed, XXH_PRIME64_1 + (len << 2));
m128.high64 += (m128.low64 << 1);
m128.low64 ^= (m128.high64 >> 3);
m128.low64 = XXH_xorshift64(m128.low64, 35);
m128.low64 *= PRIME_MX2;
m128.low64 = XXH_xorshift64(m128.low64, 28);
m128.high64 = XXH3_avalanche(m128.high64);
return m128;
}
}
XXH_FORCE_INLINE XXH_PUREF XXH128_hash_t
XXH3_len_9to16_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(input != NULL);
XXH_ASSERT(secret != NULL);
XXH_ASSERT(9 <= len && len <= 16);
{ xxh_u64 const bitflipl = (XXH_readLE64(secret+32) ^ XXH_readLE64(secret+40)) - seed;
xxh_u64 const bitfliph = (XXH_readLE64(secret+48) ^ XXH_readLE64(secret+56)) + seed;
xxh_u64 const input_lo = XXH_readLE64(input);
xxh_u64 input_hi = XXH_readLE64(input + len - 8);
XXH128_hash_t m128 = XXH_mult64to128(input_lo ^ input_hi ^ bitflipl, XXH_PRIME64_1);
/*
* Put len in the middle of m128 to ensure that the length gets mixed to
* both the low and high bits in the 128x64 multiply below.
*/
m128.low64 += (xxh_u64)(len - 1) << 54;
input_hi ^= bitfliph;
/*
* Add the high 32 bits of input_hi to the high 32 bits of m128, then
* add the long product of the low 32 bits of input_hi and XXH_PRIME32_2 to
* the high 64 bits of m128.
*
* The best approach to this operation is different on 32-bit and 64-bit.
*/
if (sizeof(void *) < sizeof(xxh_u64)) { /* 32-bit */
/*
* 32-bit optimized version, which is more readable.
*
* On 32-bit, it removes an ADC and delays a dependency between the two
* halves of m128.high64, but it generates an extra mask on 64-bit.
*/
m128.high64 += (input_hi & 0xFFFFFFFF00000000ULL) + XXH_mult32to64((xxh_u32)input_hi, XXH_PRIME32_2);
} else {
/*
* 64-bit optimized (albeit more confusing) version.
*
* Uses some properties of addition and multiplication to remove the mask:
*
* Let:
* a = input_hi.lo = (input_hi & 0x00000000FFFFFFFF)
* b = input_hi.hi = (input_hi & 0xFFFFFFFF00000000)
* c = XXH_PRIME32_2
*
* a + (b * c)
* Inverse Property: x + y - x == y
* a + (b * (1 + c - 1))
* Distributive Property: x * (y + z) == (x * y) + (x * z)
* a + (b * 1) + (b * (c - 1))
* Identity Property: x * 1 == x
* a + b + (b * (c - 1))
*
* Substitute a, b, and c:
* input_hi.hi + input_hi.lo + ((xxh_u64)input_hi.lo * (XXH_PRIME32_2 - 1))
*
* Since input_hi.hi + input_hi.lo == input_hi, we get this:
* input_hi + ((xxh_u64)input_hi.lo * (XXH_PRIME32_2 - 1))
*/
m128.high64 += input_hi + XXH_mult32to64((xxh_u32)input_hi, XXH_PRIME32_2 - 1);
}
/* m128 ^= XXH_swap64(m128 >> 64); */
m128.low64 ^= XXH_swap64(m128.high64);
{ /* 128x64 multiply: h128 = m128 * XXH_PRIME64_2; */
XXH128_hash_t h128 = XXH_mult64to128(m128.low64, XXH_PRIME64_2);
h128.high64 += m128.high64 * XXH_PRIME64_2;
h128.low64 = XXH3_avalanche(h128.low64);
h128.high64 = XXH3_avalanche(h128.high64);
return h128;
} }
}
/*
* Assumption: `secret` size is >= XXH3_SECRET_SIZE_MIN
*/
XXH_FORCE_INLINE XXH_PUREF XXH128_hash_t
XXH3_len_0to16_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)
{
XXH_ASSERT(len <= 16);
{ if (len > 8) return XXH3_len_9to16_128b(input, len, secret, seed);
if (len >= 4) return XXH3_len_4to8_128b(input, len, secret, seed);
if (len) return XXH3_len_1to3_128b(input, len, secret, seed);
{ XXH128_hash_t h128;
xxh_u64 const bitflipl = XXH_readLE64(secret+64) ^ XXH_readLE64(secret+72);
xxh_u64 const bitfliph = XXH_readLE64(secret+80) ^ XXH_readLE64(secret+88);
h128.low64 = XXH64_avalanche(seed ^ bitflipl);
h128.high64 = XXH64_avalanche( seed ^ bitfliph);
return h128;
} }
}
/*
* A bit slower than XXH3_mix16B, but handles multiply by zero better.
*/
XXH_FORCE_INLINE XXH128_hash_t
XXH128_mix32B(XXH128_hash_t acc, const xxh_u8* input_1, const xxh_u8* input_2,
const xxh_u8* secret, XXH64_hash_t seed)
{
acc.low64 += XXH3_mix16B (input_1, secret+0, seed);
acc.low64 ^= XXH_readLE64(input_2) + XXH_readLE64(input_2 + 8);
acc.high64 += XXH3_mix16B (input_2, secret+16, seed);
acc.high64 ^= XXH_readLE64(input_1) + XXH_readLE64(input_1 + 8);
return acc;
}
XXH_FORCE_INLINE XXH_PUREF XXH128_hash_t
XXH3_len_17to128_128b(const xxh_u8* XXH_RESTRICT input, size_t len,
const xxh_u8* XXH_RESTRICT secret, size_t secretSize,
XXH64_hash_t seed)
{
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;
XXH_ASSERT(16 < len && len <= 128);
{ XXH128_hash_t acc;
acc.low64 = len * XXH_PRIME64_1;
acc.high64 = 0;
#if XXH_SIZE_OPT >= 1
{
/* Smaller, but slightly slower. */
unsigned int i = (unsigned int)(len - 1) / 32;
do {
acc = XXH128_mix32B(acc, input+16*i, input+len-16*(i+1), secret+32*i, seed);
} while (i-- != 0);
}
#else
if (len > 32) {
if (len > 64) {
if (len > 96) {
acc = XXH128_mix32B(acc, input+48, input+len-64, secret+96, seed);
}
acc = XXH128_mix32B(acc, input+32, input+len-48, secret+64, seed);
}
acc = XXH128_mix32B(acc, input+16, input+len-32, secret+32, seed);
}
acc = XXH128_mix32B(acc, input, input+len-16, secret, seed);
#endif
{ XXH128_hash_t h128;
h128.low64 = acc.low64 + acc.high64;
h128.high64 = (acc.low64 * XXH_PRIME64_1)
+ (acc.high64 * XXH_PRIME64_4)
+ ((len - seed) * XXH_PRIME64_2);
h128.low64 = XXH3_avalanche(h128.low64);
h128.high64 = (XXH64_hash_t)0 - XXH3_avalanche(h128.high64);
return h128;
}
}
}
XXH_NO_INLINE XXH_PUREF XXH128_hash_t
XXH3_len_129to240_128b(const xxh_u8* XXH_RESTRICT input, size_t len,
const xxh_u8* XXH_RESTRICT secret, size_t secretSize,
XXH64_hash_t seed)
{
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;
XXH_ASSERT(128 < len && len <= XXH3_MIDSIZE_MAX);
{ XXH128_hash_t acc;
unsigned i;
acc.low64 = len * XXH_PRIME64_1;
acc.high64 = 0;
/*
* We set `i` to offset + 32. We do this so that unchanged
* `len` can be used as the upper bound. This reaches a sweet spot
* where both x86 and aarch64 get simple address generation and good
* codegen for the loop.
*/
for (i = 32; i < 160; i += 32) {
acc = XXH128_mix32B(acc,
input + i - 32,
input + i - 16,
secret + i - 32,
seed);
}
acc.low64 = XXH3_avalanche(acc.low64);
acc.high64 = XXH3_avalanche(acc.high64);
/*
* NB: `i <= len` will duplicate the last 32-bytes if
* len % 32 was zero. This is an unfortunate necessity to keep
* the hash result stable.
*/
for (i=160; i <= len; i += 32) {
acc = XXH128_mix32B(acc,
input + i - 32,
input + i - 16,
secret + XXH3_MIDSIZE_STARTOFFSET + i - 160,
seed);
}
/* last bytes */
acc = XXH128_mix32B(acc,
input + len - 16,
input + len - 32,
secret + XXH3_SECRET_SIZE_MIN - XXH3_MIDSIZE_LASTOFFSET - 16,
(XXH64_hash_t)0 - seed);
{ XXH128_hash_t h128;
h128.low64 = acc.low64 + acc.high64;
h128.high64 = (acc.low64 * XXH_PRIME64_1)
+ (acc.high64 * XXH_PRIME64_4)
+ ((len - seed) * XXH_PRIME64_2);
h128.low64 = XXH3_avalanche(h128.low64);
h128.high64 = (XXH64_hash_t)0 - XXH3_avalanche(h128.high64);
return h128;
}
}
}
XXH_FORCE_INLINE XXH128_hash_t
XXH3_hashLong_128b_internal(const void* XXH_RESTRICT input, size_t len,
const xxh_u8* XXH_RESTRICT secret, size_t secretSize,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble)
{
XXH_ALIGN(XXH_ACC_ALIGN) xxh_u64 acc[XXH_ACC_NB] = XXH3_INIT_ACC;
XXH3_hashLong_internal_loop(acc, (const xxh_u8*)input, len, secret, secretSize, f_acc, f_scramble);
/* converge into final hash */
XXH_STATIC_ASSERT(sizeof(acc) == 64);
XXH_ASSERT(secretSize >= sizeof(acc) + XXH_SECRET_MERGEACCS_START);
{ XXH128_hash_t h128;
h128.low64 = XXH3_mergeAccs(acc,
secret + XXH_SECRET_MERGEACCS_START,
(xxh_u64)len * XXH_PRIME64_1);
h128.high64 = XXH3_mergeAccs(acc,
secret + secretSize
- sizeof(acc) - XXH_SECRET_MERGEACCS_START,
~((xxh_u64)len * XXH_PRIME64_2));
return h128;
}
}
/*
* It's important for performance that XXH3_hashLong() is not inlined.
*/
XXH_NO_INLINE XXH_PUREF XXH128_hash_t
XXH3_hashLong_128b_default(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed64,
const void* XXH_RESTRICT secret, size_t secretLen)
{
(void)seed64; (void)secret; (void)secretLen;
return XXH3_hashLong_128b_internal(input, len, XXH3_kSecret, sizeof(XXH3_kSecret),
XXH3_accumulate, XXH3_scrambleAcc);
}
/*
* It's important for performance to pass @p secretLen (when it's static)
* to the compiler, so that it can properly optimize the vectorized loop.
*
* When the secret size is unknown, or on GCC 12 where the mix of NO_INLINE and FORCE_INLINE
* breaks -Og, this is XXH_NO_INLINE.
*/
XXH3_WITH_SECRET_INLINE XXH128_hash_t
XXH3_hashLong_128b_withSecret(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed64,
const void* XXH_RESTRICT secret, size_t secretLen)
{
(void)seed64;
return XXH3_hashLong_128b_internal(input, len, (const xxh_u8*)secret, secretLen,
XXH3_accumulate, XXH3_scrambleAcc);
}
XXH_FORCE_INLINE XXH128_hash_t
XXH3_hashLong_128b_withSeed_internal(const void* XXH_RESTRICT input, size_t len,
XXH64_hash_t seed64,
XXH3_f_accumulate f_acc,
XXH3_f_scrambleAcc f_scramble,
XXH3_f_initCustomSecret f_initSec)
{
if (seed64 == 0)
return XXH3_hashLong_128b_internal(input, len,
XXH3_kSecret, sizeof(XXH3_kSecret),
f_acc, f_scramble);
{ XXH_ALIGN(XXH_SEC_ALIGN) xxh_u8 secret[XXH_SECRET_DEFAULT_SIZE];
f_initSec(secret, seed64);
return XXH3_hashLong_128b_internal(input, len, (const xxh_u8*)secret, sizeof(secret),
f_acc, f_scramble);
}
}
/*
* It's important for performance that XXH3_hashLong is not inlined.
*/
XXH_NO_INLINE XXH128_hash_t
XXH3_hashLong_128b_withSeed(const void* input, size_t len,
XXH64_hash_t seed64, const void* XXH_RESTRICT secret, size_t secretLen)
{
(void)secret; (void)secretLen;
return XXH3_hashLong_128b_withSeed_internal(input, len, seed64,
XXH3_accumulate, XXH3_scrambleAcc, XXH3_initCustomSecret);
}
typedef XXH128_hash_t (*XXH3_hashLong128_f)(const void* XXH_RESTRICT, size_t,
XXH64_hash_t, const void* XXH_RESTRICT, size_t);
XXH_FORCE_INLINE XXH128_hash_t
XXH3_128bits_internal(const void* input, size_t len,
XXH64_hash_t seed64, const void* XXH_RESTRICT secret, size_t secretLen,
XXH3_hashLong128_f f_hl128)
{
XXH_ASSERT(secretLen >= XXH3_SECRET_SIZE_MIN);
/*
* If an action is to be taken if `secret` conditions are not respected,
* it should be done here.
* For now, it's a contract pre-condition.
* Adding a check and a branch here would cost performance at every hash.
*/
if (len <= 16)
return XXH3_len_0to16_128b((const xxh_u8*)input, len, (const xxh_u8*)secret, seed64);
if (len <= 128)
return XXH3_len_17to128_128b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);
if (len <= XXH3_MIDSIZE_MAX)
return XXH3_len_129to240_128b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);
return f_hl128(input, len, seed64, secret, secretLen);
}
/* === Public XXH128 API === */
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t XXH3_128bits(XXH_NOESCAPE const void* input, size_t len)
{
return XXH3_128bits_internal(input, len, 0,
XXH3_kSecret, sizeof(XXH3_kSecret),
XXH3_hashLong_128b_default);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t
XXH3_128bits_withSecret(XXH_NOESCAPE const void* input, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize)
{
return XXH3_128bits_internal(input, len, 0,
(const xxh_u8*)secret, secretSize,
XXH3_hashLong_128b_withSecret);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t
XXH3_128bits_withSeed(XXH_NOESCAPE const void* input, size_t len, XXH64_hash_t seed)
{
return XXH3_128bits_internal(input, len, seed,
XXH3_kSecret, sizeof(XXH3_kSecret),
XXH3_hashLong_128b_withSeed);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t
XXH3_128bits_withSecretandSeed(XXH_NOESCAPE const void* input, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed)
{
if (len <= XXH3_MIDSIZE_MAX)
return XXH3_128bits_internal(input, len, seed, XXH3_kSecret, sizeof(XXH3_kSecret), NULL);
return XXH3_hashLong_128b_withSecret(input, len, seed, secret, secretSize);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t
XXH128(XXH_NOESCAPE const void* input, size_t len, XXH64_hash_t seed)
{
return XXH3_128bits_withSeed(input, len, seed);
}
/* === XXH3 128-bit streaming === */
#ifndef XXH_NO_STREAM
/*
* All initialization and update functions are identical to 64-bit streaming variant.
* The only difference is the finalization routine.
*/
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr)
{
return XXH3_64bits_reset(statePtr);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize)
{
return XXH3_64bits_reset_withSecret(statePtr, secret, secretSize);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed)
{
return XXH3_64bits_reset_withSeed(statePtr, seed);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed)
{
return XXH3_64bits_reset_withSecretandSeed(statePtr, secret, secretSize, seed);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_128bits_update(XXH_NOESCAPE XXH3_state_t* state, XXH_NOESCAPE const void* input, size_t len)
{
return XXH3_64bits_update(state, input, len);
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t XXH3_128bits_digest (XXH_NOESCAPE const XXH3_state_t* state)
{
const unsigned char* const secret = (state->extSecret == NULL) ? state->customSecret : state->extSecret;
if (state->totalLen > XXH3_MIDSIZE_MAX) {
XXH_ALIGN(XXH_ACC_ALIGN) XXH64_hash_t acc[XXH_ACC_NB];
XXH3_digest_long(acc, state, secret);
XXH_ASSERT(state->secretLimit + XXH_STRIPE_LEN >= sizeof(acc) + XXH_SECRET_MERGEACCS_START);
{ XXH128_hash_t h128;
h128.low64 = XXH3_mergeAccs(acc,
secret + XXH_SECRET_MERGEACCS_START,
(xxh_u64)state->totalLen * XXH_PRIME64_1);
h128.high64 = XXH3_mergeAccs(acc,
secret + state->secretLimit + XXH_STRIPE_LEN
- sizeof(acc) - XXH_SECRET_MERGEACCS_START,
~((xxh_u64)state->totalLen * XXH_PRIME64_2));
return h128;
}
}
/* len <= XXH3_MIDSIZE_MAX : short code */
if (state->seed)
return XXH3_128bits_withSeed(state->buffer, (size_t)state->totalLen, state->seed);
return XXH3_128bits_withSecret(state->buffer, (size_t)(state->totalLen),
secret, state->secretLimit + XXH_STRIPE_LEN);
}
#endif /* !XXH_NO_STREAM */
/* 128-bit utility functions */
/* return : 1 is equal, 0 if different */
/*! @ingroup XXH3_family */
XXH_PUBLIC_API int XXH128_isEqual(XXH128_hash_t h1, XXH128_hash_t h2)
{
/* note : XXH128_hash_t is compact, it has no padding byte */
return !(memcmp(&h1, &h2, sizeof(h1)));
}
/* This prototype is compatible with stdlib's qsort().
* @return : >0 if *h128_1 > *h128_2
* <0 if *h128_1 < *h128_2
* =0 if *h128_1 == *h128_2 */
/*! @ingroup XXH3_family */
XXH_PUBLIC_API int XXH128_cmp(XXH_NOESCAPE const void* h128_1, XXH_NOESCAPE const void* h128_2)
{
XXH128_hash_t const h1 = *(const XXH128_hash_t*)h128_1;
XXH128_hash_t const h2 = *(const XXH128_hash_t*)h128_2;
int const hcmp = (h1.high64 > h2.high64) - (h2.high64 > h1.high64);
/* note : bets that, in most cases, hash values are different */
if (hcmp) return hcmp;
return (h1.low64 > h2.low64) - (h2.low64 > h1.low64);
}
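/*
 * Because XXH128_cmp() matches the qsort() comparator signature, an array of
 * XXH128_hash_t values can be sorted directly. `hashes` and `count` below are
 * hypothetical.
 *
 * @code{.c}
 *   #include <stdlib.h>
 *   qsort(hashes, count, sizeof(XXH128_hash_t), XXH128_cmp);
 * @endcode
 */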
/*====== Canonical representation ======*/
/*! @ingroup XXH3_family */
XXH_PUBLIC_API void
XXH128_canonicalFromHash(XXH_NOESCAPE XXH128_canonical_t* dst, XXH128_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH128_canonical_t) == sizeof(XXH128_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) {
hash.high64 = XXH_swap64(hash.high64);
hash.low64 = XXH_swap64(hash.low64);
}
XXH_memcpy(dst, &hash.high64, sizeof(hash.high64));
XXH_memcpy((char*)dst + sizeof(hash.high64), &hash.low64, sizeof(hash.low64));
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH128_hash_t
XXH128_hashFromCanonical(XXH_NOESCAPE const XXH128_canonical_t* src)
{
XXH128_hash_t h;
h.high64 = XXH_readBE64(src);
h.low64 = XXH_readBE64(src->digest + 8);
return h;
}
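/*
 * Round-trip sketch for the canonical form: the canonical representation is a
 * fixed big-endian byte layout suitable for storage or transmission, and
 * converting back recovers the same value on any platform.
 *
 * @code{.c}
 *   XXH128_hash_t const h = XXH3_128bits("abc", 3);
 *   XXH128_canonical_t canon;
 *   XXH128_canonicalFromHash(&canon, h);                       // serialize
 *   {   XXH128_hash_t const back = XXH128_hashFromCanonical(&canon);
 *       // XXH128_isEqual(h, back) == 1
 *       (void)back;
 *   }
 * @endcode
 */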
/* ==========================================
* Secret generators
* ==========================================
*/
#define XXH_MIN(x, y) (((x) > (y)) ? (y) : (x))
XXH_FORCE_INLINE void XXH3_combine16(void* dst, XXH128_hash_t h128)
{
XXH_writeLE64( dst, XXH_readLE64(dst) ^ h128.low64 );
XXH_writeLE64( (char*)dst+8, XXH_readLE64((char*)dst+8) ^ h128.high64 );
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API XXH_errorcode
XXH3_generateSecret(XXH_NOESCAPE void* secretBuffer, size_t secretSize, XXH_NOESCAPE const void* customSeed, size_t customSeedSize)
{
#if (XXH_DEBUGLEVEL >= 1)
XXH_ASSERT(secretBuffer != NULL);
XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN);
#else
/* production mode, assert() are disabled */
if (secretBuffer == NULL) return XXH_ERROR;
if (secretSize < XXH3_SECRET_SIZE_MIN) return XXH_ERROR;
#endif
if (customSeedSize == 0) {
customSeed = XXH3_kSecret;
customSeedSize = XXH_SECRET_DEFAULT_SIZE;
}
#if (XXH_DEBUGLEVEL >= 1)
XXH_ASSERT(customSeed != NULL);
#else
if (customSeed == NULL) return XXH_ERROR;
#endif
/* Fill secretBuffer with a copy of customSeed - repeat as needed */
{ size_t pos = 0;
while (pos < secretSize) {
size_t const toCopy = XXH_MIN((secretSize - pos), customSeedSize);
memcpy((char*)secretBuffer + pos, customSeed, toCopy);
pos += toCopy;
} }
{ size_t const nbSeg16 = secretSize / 16;
size_t n;
XXH128_canonical_t scrambler;
XXH128_canonicalFromHash(&scrambler, XXH128(customSeed, customSeedSize, 0));
for (n=0; n<nbSeg16; n++) {
XXH128_hash_t const h128 = XXH128(&scrambler, sizeof(scrambler), n);
XXH3_combine16((char*)secretBuffer + n*16, h128);
}
/* last segment */
XXH3_combine16((char*)secretBuffer + secretSize - 16, XXH128_hashFromCanonical(&scrambler));
}
return XXH_OK;
}
/*! @ingroup XXH3_family */
XXH_PUBLIC_API void
XXH3_generateSecret_fromSeed(XXH_NOESCAPE void* secretBuffer, XXH64_hash_t seed)
{
XXH_ALIGN(XXH_SEC_ALIGN) xxh_u8 secret[XXH_SECRET_DEFAULT_SIZE];
XXH3_initCustomSecret(secret, seed);
XXH_ASSERT(secretBuffer != NULL);
memcpy(secretBuffer, secret, XXH_SECRET_DEFAULT_SIZE);
}
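/*
 * A minimal sketch tying the two generators above to the *_withSecret() entry
 * points. `data` and `size` are hypothetical inputs; the buffer sizes follow
 * the constraints checked above (>= XXH3_SECRET_SIZE_MIN for generateSecret,
 * exactly XXH_SECRET_DEFAULT_SIZE bytes for generateSecret_fromSeed).
 *
 * @code{.c}
 *   unsigned char secret[XXH3_SECRET_SIZE_MIN];
 *   if (XXH3_generateSecret(secret, sizeof(secret), "my seed material", 16) == XXH_OK) {
 *       XXH64_hash_t const h = XXH3_64bits_withSecret(data, size, secret, sizeof(secret));
 *       (void)h;
 *   }
 *
 *   {   unsigned char secret2[XXH_SECRET_DEFAULT_SIZE];
 *       XXH3_generateSecret_fromSeed(secret2, 12345);
 *       // secret2 can now be passed to any *_withSecret() / *_withSecretandSeed() call
 *   }
 * @endcode
 */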
/* Pop our optimization override from above */
#if XXH_VECTOR == XXH_AVX2 /* AVX2 */ \
&& defined(__GNUC__) && !defined(__clang__) /* GCC, not Clang */ \
&& defined(__OPTIMIZE__) && XXH_SIZE_OPT <= 0 /* respect -O0 and -Os */
# pragma GCC pop_options
#endif
#if defined (__cplusplus)
} /* extern "C" */
#endif
#endif /* XXH_NO_LONG_LONG */
#endif /* XXH_NO_XXH3 */
/*!
* @}
*/
#endif /* XXH_IMPLEMENTATION */
/**** ended inlining xxhash.h ****/
#ifndef ZSTD_NO_TRACE
/**** start inlining zstd_trace.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_TRACE_H
#define ZSTD_TRACE_H
#include <stddef.h>
/* weak symbol support
* For now, enable conservatively:
* - Only GNUC
* - Only ELF
* - Only x86-64, i386, aarch64 and risc-v.
* Also, explicitly disable on platforms known not to work so they aren't
* forgotten in the future.
*/
#if !defined(ZSTD_HAVE_WEAK_SYMBOLS) && \
defined(__GNUC__) && defined(__ELF__) && \
(defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || \
defined(_M_IX86) || defined(__aarch64__) || defined(__riscv)) && \
!defined(__APPLE__) && !defined(_WIN32) && !defined(__MINGW32__) && \
!defined(__CYGWIN__) && !defined(_AIX)
# define ZSTD_HAVE_WEAK_SYMBOLS 1
#else
# define ZSTD_HAVE_WEAK_SYMBOLS 0
#endif
#if ZSTD_HAVE_WEAK_SYMBOLS
# define ZSTD_WEAK_ATTR __attribute__((__weak__))
#else
# define ZSTD_WEAK_ATTR
#endif
/* Only enable tracing when weak symbols are available. */
#ifndef ZSTD_TRACE
# define ZSTD_TRACE ZSTD_HAVE_WEAK_SYMBOLS
#endif
#if ZSTD_TRACE
struct ZSTD_CCtx_s;
struct ZSTD_DCtx_s;
struct ZSTD_CCtx_params_s;
typedef struct {
/**
* ZSTD_VERSION_NUMBER
*
* This is guaranteed to be the first member of ZSTD_trace.
* Otherwise, this struct is not stable between versions. If
* the version number does not match your expectation, you
* should not interpret the rest of the struct.
*/
unsigned version;
/**
* Non-zero if streaming (de)compression is used.
*/
int streaming;
/**
* The dictionary ID.
*/
unsigned dictionaryID;
/**
* Is the dictionary cold?
* Only set on decompression.
*/
int dictionaryIsCold;
/**
* The dictionary size or zero if no dictionary.
*/
size_t dictionarySize;
/**
* The uncompressed size of the data.
*/
size_t uncompressedSize;
/**
* The compressed size of the data.
*/
size_t compressedSize;
/**
* The fully resolved CCtx parameters (NULL on decompression).
*/
struct ZSTD_CCtx_params_s const* params;
/**
* The ZSTD_CCtx pointer (NULL on decompression).
*/
struct ZSTD_CCtx_s const* cctx;
/**
* The ZSTD_DCtx pointer (NULL on compression).
*/
struct ZSTD_DCtx_s const* dctx;
} ZSTD_Trace;
/**
* A tracing context. It must be 0 when tracing is disabled.
* Otherwise, any non-zero value returned by a tracing begin()
* function is presented to any subsequent calls to end().
*
* Any non-zero value is treated as tracing is enabled and not
* interpreted by the library.
*
* Two possible uses are:
* * A timestamp for when the begin() function was called.
* * A unique key identifying the (de)compression, like the
* address of the [dc]ctx pointer if you need to track
* more information than just a timestamp.
*/
typedef unsigned long long ZSTD_TraceCtx;
/**
* Trace the beginning of a compression call.
* @param cctx The cctx pointer for the compression.
* It can be used as a key to map begin() to end().
* @returns Non-zero if tracing is enabled. The return value is
* passed to ZSTD_trace_compress_end().
*/
ZSTD_WEAK_ATTR ZSTD_TraceCtx ZSTD_trace_compress_begin(
struct ZSTD_CCtx_s const* cctx);
/**
* Trace the end of a compression call.
* @param ctx The return value of ZSTD_trace_compress_begin().
* @param trace The zstd tracing info.
*/
ZSTD_WEAK_ATTR void ZSTD_trace_compress_end(
ZSTD_TraceCtx ctx,
ZSTD_Trace const* trace);
/**
* Trace the beginning of a decompression call.
* @param dctx The dctx pointer for the decompression.
* It can be used as a key to map begin() to end().
* @returns Non-zero if tracing is enabled. The return value is
* passed to ZSTD_trace_decompress_end().
*/
ZSTD_WEAK_ATTR ZSTD_TraceCtx ZSTD_trace_decompress_begin(
struct ZSTD_DCtx_s const* dctx);
/**
* Trace the end of a decompression call.
* @param ctx The return value of ZSTD_trace_decompress_begin().
* @param trace The zstd tracing info.
*/
ZSTD_WEAK_ATTR void ZSTD_trace_decompress_end(
ZSTD_TraceCtx ctx,
ZSTD_Trace const* trace);
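/*
 * Because these hooks are weak symbols, an application can enable tracing by
 * providing strong definitions of them; no rebuild of the library is needed.
 * A minimal sketch (the timestamp token is a hypothetical choice; any
 * non-zero value enables tracing for that call):
 *
 *   #include <stdio.h>
 *   #include <time.h>
 *
 *   ZSTD_TraceCtx ZSTD_trace_compress_begin(struct ZSTD_CCtx_s const* cctx)
 *   {
 *       (void)cctx;
 *       return (ZSTD_TraceCtx)time(NULL) + 1;   // non-zero => tracing enabled
 *   }
 *
 *   void ZSTD_trace_compress_end(ZSTD_TraceCtx ctx, ZSTD_Trace const* trace)
 *   {
 *       (void)ctx;
 *       fprintf(stderr, "compressed %zu -> %zu bytes\n",
 *               trace->uncompressedSize, trace->compressedSize);
 *   }
 */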
#endif /* ZSTD_TRACE */
#endif /* ZSTD_TRACE_H */
/**** ended inlining zstd_trace.h ****/
#else
# define ZSTD_TRACE 0
#endif
/* ---- static assert (debug) --- */
#define ZSTD_STATIC_ASSERT(c) DEBUG_STATIC_ASSERT(c)
#define ZSTD_isError ERR_isError /* for inlining */
#define FSE_isError ERR_isError
#define HUF_isError ERR_isError
/*-*************************************
* shared macros
***************************************/
#undef MIN
#undef MAX
#define MIN(a,b) ((a)<(b) ? (a) : (b))
#define MAX(a,b) ((a)>(b) ? (a) : (b))
#define BOUNDED(min,val,max) (MAX(min,MIN(val,max)))
/*-*************************************
* Common constants
***************************************/
#define ZSTD_OPT_NUM (1<<12)
#define ZSTD_REP_NUM 3 /* number of repcodes */
static UNUSED_ATTR const U32 repStartValue[ZSTD_REP_NUM] = { 1, 4, 8 };
#define KB *(1 <<10)
#define MB *(1 <<20)
#define GB *(1U<<30)
#define BIT7 128
#define BIT6 64
#define BIT5 32
#define BIT4 16
#define BIT1 2
#define BIT0 1
#define ZSTD_WINDOWLOG_ABSOLUTEMIN 10
static UNUSED_ATTR const size_t ZSTD_fcs_fieldSize[4] = { 0, 2, 4, 8 };
static UNUSED_ATTR const size_t ZSTD_did_fieldSize[4] = { 0, 1, 2, 4 };
#define ZSTD_FRAMEIDSIZE 4 /* magic number size */
#define ZSTD_BLOCKHEADERSIZE 3 /* C standard doesn't allow `static const` variable to be init using another `static const` variable */
static UNUSED_ATTR const size_t ZSTD_blockHeaderSize = ZSTD_BLOCKHEADERSIZE;
typedef enum { bt_raw, bt_rle, bt_compressed, bt_reserved } blockType_e;
#define ZSTD_FRAMECHECKSUMSIZE 4
#define MIN_SEQUENCES_SIZE 1 /* nbSeq==0 */
#define MIN_CBLOCK_SIZE (1 /*litCSize*/ + 1 /* RLE or RAW */) /* for a non-null block */
#define MIN_LITERALS_FOR_4_STREAMS 6
typedef enum { set_basic, set_rle, set_compressed, set_repeat } SymbolEncodingType_e;
#define LONGNBSEQ 0x7F00
#define MINMATCH 3
#define Litbits 8
#define LitHufLog 11
#define MaxLit ((1<<Litbits) - 1)
#define MaxML 52
#define MaxLL 35
#define DefaultMaxOff 28
#define MaxOff 31
#define MaxSeq MAX(MaxLL, MaxML) /* Assumption : MaxOff < MaxLL,MaxML */
#define MLFSELog 9
#define LLFSELog 9
#define OffFSELog 8
#define MaxFSELog MAX(MAX(MLFSELog, LLFSELog), OffFSELog)
#define MaxMLBits 16
#define MaxLLBits 16
#define ZSTD_MAX_HUF_HEADER_SIZE 128 /* header + <= 127 byte tree description */
/* Each table cannot take more than #symbols * FSELog bits */
#define ZSTD_MAX_FSE_HEADERS_SIZE (((MaxML + 1) * MLFSELog + (MaxLL + 1) * LLFSELog + (MaxOff + 1) * OffFSELog + 7) / 8)
static UNUSED_ATTR const U8 LL_bits[MaxLL+1] = {
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 2, 2, 3, 3,
4, 6, 7, 8, 9,10,11,12,
13,14,15,16
};
static UNUSED_ATTR const S16 LL_defaultNorm[MaxLL+1] = {
4, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2, 2,
2, 3, 2, 1, 1, 1, 1, 1,
-1,-1,-1,-1
};
#define LL_DEFAULTNORMLOG 6 /* for static allocation */
static UNUSED_ATTR const U32 LL_defaultNormLog = LL_DEFAULTNORMLOG;
static UNUSED_ATTR const U8 ML_bits[MaxML+1] = {
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 2, 2, 3, 3,
4, 4, 5, 7, 8, 9,10,11,
12,13,14,15,16
};
static UNUSED_ATTR const S16 ML_defaultNorm[MaxML+1] = {
1, 4, 3, 2, 2, 2, 2, 2,
2, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1,-1,-1,
-1,-1,-1,-1,-1
};
#define ML_DEFAULTNORMLOG 6 /* for static allocation */
static UNUSED_ATTR const U32 ML_defaultNormLog = ML_DEFAULTNORMLOG;
static UNUSED_ATTR const S16 OF_defaultNorm[DefaultMaxOff+1] = {
1, 1, 1, 1, 1, 1, 2, 2,
2, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
-1,-1,-1,-1,-1
};
#define OF_DEFAULTNORMLOG 5 /* for static allocation */
static UNUSED_ATTR const U32 OF_defaultNormLog = OF_DEFAULTNORMLOG;
/*-*******************************************
* Shared functions to include for inlining
*********************************************/
static void ZSTD_copy8(void* dst, const void* src) {
#if defined(ZSTD_ARCH_ARM_NEON)
vst1_u8((uint8_t*)dst, vld1_u8((const uint8_t*)src));
#else
ZSTD_memcpy(dst, src, 8);
#endif
}
#define COPY8(d,s) do { ZSTD_copy8(d,s); d+=8; s+=8; } while (0)
/* Need to use memmove here since the literal buffer can now be located within
the dst buffer. In circumstances where the op "catches up" to where the
literal buffer is, there can be partial overlaps in this call on the final
copy if the literal is being shifted by less than 16 bytes. */
static void ZSTD_copy16(void* dst, const void* src) {
#if defined(ZSTD_ARCH_ARM_NEON)
vst1q_u8((uint8_t*)dst, vld1q_u8((const uint8_t*)src));
#elif defined(ZSTD_ARCH_X86_SSE2)
_mm_storeu_si128((__m128i*)dst, _mm_loadu_si128((const __m128i*)src));
#elif defined(__clang__)
ZSTD_memmove(dst, src, 16);
#else
/* ZSTD_memmove is not inlined properly by gcc */
BYTE copy16_buf[16];
ZSTD_memcpy(copy16_buf, src, 16);
ZSTD_memcpy(dst, copy16_buf, 16);
#endif
}
#define COPY16(d,s) do { ZSTD_copy16(d,s); d+=16; s+=16; } while (0)
#define WILDCOPY_OVERLENGTH 32
#define WILDCOPY_VECLEN 16
typedef enum {
ZSTD_no_overlap,
ZSTD_overlap_src_before_dst
/* ZSTD_overlap_dst_before_src, */
} ZSTD_overlap_e;
/*! ZSTD_wildcopy() :
* Custom version of ZSTD_memcpy(), can over-read/over-write up to WILDCOPY_OVERLENGTH bytes (even if length==0)
* @param ovtype controls the overlap detection
* - ZSTD_no_overlap: The source and destination are guaranteed to be at least WILDCOPY_VECLEN bytes apart.
* - ZSTD_overlap_src_before_dst: The src and dst may overlap, but they MUST be at least 8 bytes apart.
* The src buffer must be before the dst buffer.
*/
MEM_STATIC FORCE_INLINE_ATTR
void ZSTD_wildcopy(void* dst, const void* src, ptrdiff_t length, ZSTD_overlap_e const ovtype)
{
ptrdiff_t diff = (BYTE*)dst - (const BYTE*)src;
const BYTE* ip = (const BYTE*)src;
BYTE* op = (BYTE*)dst;
BYTE* const oend = op + length;
if (ovtype == ZSTD_overlap_src_before_dst && diff < WILDCOPY_VECLEN) {
/* Handle short offset copies. */
do {
COPY8(op, ip);
} while (op < oend);
} else {
assert(diff >= WILDCOPY_VECLEN || diff <= -WILDCOPY_VECLEN);
/* Separate out the first COPY16() call because the copy length is
* almost certain to be short, so the branches have different
* probabilities. Since it is almost certain to be short, only do
* one COPY16() in the first call. Then, do two calls per loop since
* at that point it is more likely to have a high trip count.
*/
ZSTD_copy16(op, ip);
if (16 >= length) return;
op += 16;
ip += 16;
do {
COPY16(op, ip);
COPY16(op, ip);
}
while (op < oend);
}
}
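/* Illustrative usage sketch (not part of zstd itself; the helper name is
 * hypothetical). It shows how the two overlap modes are intended to be invoked,
 * assuming the caller has already ensured dst can absorb up to
 * WILDCOPY_OVERLENGTH bytes of over-write. */
#if 0
static void wildcopy_usage_sketch(BYTE* dst, const BYTE* src, size_t length)
{
    /* src and dst guaranteed at least WILDCOPY_VECLEN (16) bytes apart : fastest path */
    ZSTD_wildcopy(dst, src, (ptrdiff_t)length, ZSTD_no_overlap);

    /* src strictly before dst, possibly closer than 16 bytes but never closer
     * than 8 bytes : the short-offset path copies 8 bytes at a time */
    ZSTD_wildcopy(dst + 8, dst, (ptrdiff_t)length, ZSTD_overlap_src_before_dst);
}
#endif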
MEM_STATIC size_t ZSTD_limitCopy(void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
size_t const length = MIN(dstCapacity, srcSize);
if (length > 0) {
ZSTD_memcpy(dst, src, length);
}
return length;
}
/* define "workspace is too large" as this number of times larger than needed */
#define ZSTD_WORKSPACETOOLARGE_FACTOR 3
/* when the workspace remains too large
 * for at least this many consecutive invocations,
 * the context's memory usage is considered wasteful,
 * because it is sized for a worst-case scenario that rarely happens.
 * In that case, resize it down to free some memory */
#define ZSTD_WORKSPACETOOLARGE_MAXDURATION 128
/* Controls whether the input/output buffer is buffered or stable. */
typedef enum {
ZSTD_bm_buffered = 0, /* Buffer the input/output */
ZSTD_bm_stable = 1 /* ZSTD_inBuffer/ZSTD_outBuffer is stable */
} ZSTD_bufferMode_e;
/*-*******************************************
* Private declarations
*********************************************/
/**
* Contains the compressed frame size and an upper-bound for the decompressed frame size.
* Note: before using `compressedSize`, check for errors using ZSTD_isError().
* Similarly, before using `decompressedBound`, check for errors using:
* `decompressedBound != ZSTD_CONTENTSIZE_ERROR`
*/
typedef struct {
size_t nbBlocks;
size_t compressedSize;
unsigned long long decompressedBound;
} ZSTD_frameSizeInfo; /* decompress & legacy */
/* ZSTD_invalidateRepCodes() :
* ensures next compression will not use repcodes from previous block.
* Note : only works with regular variant;
* do not use with extDict variant ! */
void ZSTD_invalidateRepCodes(ZSTD_CCtx* cctx); /* zstdmt, adaptive_compression (shouldn't get this definition from here) */
typedef struct {
blockType_e blockType;
U32 lastBlock;
U32 origSize;
} blockProperties_t; /* declared here for decompress and fullbench */
/*! ZSTD_getcBlockSize() :
* Provides the size of compressed block from block header `src` */
/* Used by: decompress, fullbench */
size_t ZSTD_getcBlockSize(const void* src, size_t srcSize,
blockProperties_t* bpPtr);
/*! ZSTD_decodeSeqHeaders() :
* decode sequence header from src */
/* Used by: zstd_decompress_block, fullbench */
size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr,
const void* src, size_t srcSize);
/**
* @returns true iff the CPU supports dynamic BMI2 dispatch.
*/
MEM_STATIC int ZSTD_cpuSupportsBmi2(void)
{
ZSTD_cpuid_t cpuid = ZSTD_cpuid();
return ZSTD_cpuid_bmi1(cpuid) && ZSTD_cpuid_bmi2(cpuid);
}
#endif /* ZSTD_CCOMMON_H_MODULE */
/**** ended inlining zstd_internal.h ****/
/*-****************************************
* Version
******************************************/
unsigned ZSTD_versionNumber(void) { return ZSTD_VERSION_NUMBER; }
const char* ZSTD_versionString(void) { return ZSTD_VERSION_STRING; }
/*-****************************************
* ZSTD Error Management
******************************************/
#undef ZSTD_isError /* defined within zstd_internal.h */
/*! ZSTD_isError() :
* tells if a return value is an error code
* symbol is required for external callers */
unsigned ZSTD_isError(size_t code) { return ERR_isError(code); }
/*! ZSTD_getErrorName() :
* provides error code string from function result (useful for debugging) */
const char* ZSTD_getErrorName(size_t code) { return ERR_getErrorName(code); }
/*! ZSTD_getError() :
* convert a `size_t` function result into a proper ZSTD_errorCode enum */
ZSTD_ErrorCode ZSTD_getErrorCode(size_t code) { return ERR_getErrorCode(code); }
/*! ZSTD_getErrorString() :
* provides error code string from enum */
const char* ZSTD_getErrorString(ZSTD_ErrorCode code) { return ERR_getErrorString(code); }
/**** ended inlining common/zstd_common.c ****/
/**** start inlining compress/fse_compress.c ****/
/* ******************************************************************
* FSE : Finite State Entropy encoder
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
* - Public forum : https://groups.google.com/forum/#!forum/lz4c
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* **************************************************************
* Includes
****************************************************************/
/**** skipping file: ../common/compiler.h ****/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/debug.h ****/
/**** start inlining hist.h ****/
/* ******************************************************************
* hist : Histogram functions
* part of Finite State Entropy project
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
* - Public forum : https://groups.google.com/forum/#!forum/lz4c
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* --- dependencies --- */
/**** skipping file: ../common/zstd_deps.h ****/
/* --- simple histogram functions --- */
/*! HIST_count():
* Provides the precise count of each byte within a table 'count'.
* 'count' is a table of unsigned int, of minimum size (*maxSymbolValuePtr+1).
* Updates *maxSymbolValuePtr with actual largest symbol value detected.
* @return : count of the most frequent symbol (which isn't identified).
* or an error code, which can be tested using HIST_isError().
* note : if return == srcSize, there is only one symbol.
*/
size_t HIST_count(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize);
unsigned HIST_isError(size_t code); /**< tells if a return value is an error code */
/* --- advanced histogram functions --- */
#define HIST_WKSP_SIZE_U32 1024
#define HIST_WKSP_SIZE (HIST_WKSP_SIZE_U32 * sizeof(unsigned))
/** HIST_count_wksp() :
* Same as HIST_count(), but using an externally provided scratch buffer.
* Benefit is this function will use very little stack space.
* `workSpace` is a writable buffer which must be 4-bytes aligned,
* `workSpaceSize` must be >= HIST_WKSP_SIZE
*/
size_t HIST_count_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize,
void* workSpace, size_t workSpaceSize);
/** HIST_countFast() :
* same as HIST_count(), but blindly trusts that all byte values within src are <= *maxSymbolValuePtr.
* This function is unsafe, and will segfault if any value within `src` is `> *maxSymbolValuePtr`
*/
size_t HIST_countFast(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize);
/** HIST_countFast_wksp() :
* Same as HIST_countFast(), but using an externally provided scratch buffer.
* `workSpace` is a writable buffer which must be 4-bytes aligned,
* `workSpaceSize` must be >= HIST_WKSP_SIZE
*/
size_t HIST_countFast_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize,
void* workSpace, size_t workSpaceSize);
/*! HIST_count_simple() :
* Like HIST_countFast(), this function is unsafe,
* and will segfault if any value within `src` is `> *maxSymbolValuePtr`.
* It is also a bit slower for large inputs.
* However, it does not need any additional memory (not even on the stack).
* @return : count of the most frequent symbol.
* Note : this function cannot fail (it never returns an error code).
*/
unsigned HIST_count_simple(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize);
/*! HIST_add() :
* Lowest level : simply adds the number of occurrences of each byte of @src into @count.
* @count is not reset. The @count array is presumed large enough (i.e. 1 KB).
* This function does not need any additional stack memory.
*/
void HIST_add(unsigned* count, const void* src, size_t srcSize);
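/* Usage sketch (illustrative only, not part of the library; the helper name is
 * hypothetical). It counts byte frequencies with the workspace variant,
 * assuming a local scratch table, which is naturally 4-byte aligned. */
#if 0
static size_t hist_usage_sketch(const void* src, size_t srcSize)
{
    unsigned count[256];
    unsigned maxSymbolValue = 255;
    unsigned wksp[HIST_WKSP_SIZE_U32];   /* 4 KB scratch */
    size_t const largest = HIST_count_wksp(count, &maxSymbolValue,
                                           src, srcSize,
                                           wksp, sizeof(wksp));
    if (HIST_isError(largest)) return 0;  /* e.g. workSpace_tooSmall */
    return largest;   /* count of the most frequent byte */
}
#endif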
/**** ended inlining hist.h ****/
/**** skipping file: ../common/bitstream.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/error_private.h ****/
#define ZSTD_DEPS_NEED_MALLOC
#define ZSTD_DEPS_NEED_MATH64
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/bits.h ****/
/* **************************************************************
* Error Management
****************************************************************/
#define FSE_isError ERR_isError
/* **************************************************************
* Templates
****************************************************************/
/*
designed to be included,
to generate type-specific functions (template emulation in C).
The objective is to write these functions only once, for improved maintenance.
*/
/* safety checks */
#ifndef FSE_FUNCTION_EXTENSION
# error "FSE_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSE_FUNCTION_TYPE
# error "FSE_FUNCTION_TYPE must be defined"
#endif
/* Function names */
#define FSE_CAT(X,Y) X##Y
#define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y)
#define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y)
/* Function templates */
/* FSE_buildCTable_wksp() :
* Same as FSE_buildCTable(), but using an externally allocated scratch buffer (`workSpace`).
* `wkspSize` should be sized for the worst case, which is `(1<<max_tableLog) * sizeof(FSE_FUNCTION_TYPE)`.
* `workSpace` must also be properly aligned for FSE_FUNCTION_TYPE.
*/
size_t FSE_buildCTable_wksp(FSE_CTable* ct,
const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog,
void* workSpace, size_t wkspSize)
{
U32 const tableSize = 1 << tableLog;
U32 const tableMask = tableSize - 1;
void* const ptr = ct;
U16* const tableU16 = ( (U16*) ptr) + 2;
void* const FSCT = ((U32*)ptr) + 1 /* header */ + (tableLog ? tableSize>>1 : 1) ;
FSE_symbolCompressionTransform* const symbolTT = (FSE_symbolCompressionTransform*) (FSCT);
U32 const step = FSE_TABLESTEP(tableSize);
U32 const maxSV1 = maxSymbolValue+1;
U16* cumul = (U16*)workSpace; /* size = maxSV1 */
FSE_FUNCTION_TYPE* const tableSymbol = (FSE_FUNCTION_TYPE*)(cumul + (maxSV1+1)); /* size = tableSize */
U32 highThreshold = tableSize-1;
assert(((size_t)workSpace & 1) == 0); /* Must be 2 bytes-aligned */
if (FSE_BUILD_CTABLE_WORKSPACE_SIZE(maxSymbolValue, tableLog) > wkspSize) return ERROR(tableLog_tooLarge);
/* CTable header */
tableU16[-2] = (U16) tableLog;
tableU16[-1] = (U16) maxSymbolValue;
assert(tableLog < 16); /* required for threshold strategy to work */
/* For explanations on how to distribute symbol values over the table :
* https://fastcompression.blogspot.fr/2014/02/fse-distributing-symbol-values.html */
#ifdef __clang_analyzer__
ZSTD_memset(tableSymbol, 0, sizeof(*tableSymbol) * tableSize); /* useless initialization, just to keep scan-build happy */
#endif
/* symbol start positions */
{ U32 u;
cumul[0] = 0;
for (u=1; u <= maxSV1; u++) {
if (normalizedCounter[u-1]==-1) { /* Low proba symbol */
cumul[u] = cumul[u-1] + 1;
tableSymbol[highThreshold--] = (FSE_FUNCTION_TYPE)(u-1);
} else {
assert(normalizedCounter[u-1] >= 0);
cumul[u] = cumul[u-1] + (U16)normalizedCounter[u-1];
assert(cumul[u] >= cumul[u-1]); /* no overflow */
} }
cumul[maxSV1] = (U16)(tableSize+1);
}
/* Spread symbols */
if (highThreshold == tableSize - 1) {
/* Case for no low prob count symbols. Lay down 8 bytes at a time
* to reduce branch misses since we are operating on a small block
*/
BYTE* const spread = tableSymbol + tableSize; /* size = tableSize + 8 (may write beyond tableSize) */
{ U64 const add = 0x0101010101010101ull;
size_t pos = 0;
U64 sv = 0;
U32 s;
for (s=0; s<maxSV1; ++s, sv += add) {
int i;
int const n = normalizedCounter[s];
MEM_write64(spread + pos, sv);
for (i = 8; i < n; i += 8) {
MEM_write64(spread + pos + i, sv);
}
assert(n>=0);
pos += (size_t)n;
}
}
/* Spread symbols across the table. The absence of low-probability symbols means
* we don't need a variable-sized inner loop, so the loop can be unrolled
* to reduce branch misses.
*/
{ size_t position = 0;
size_t s;
size_t const unroll = 2; /* Experimentally determined optimal unroll */
assert(tableSize % unroll == 0); /* FSE_MIN_TABLELOG is 5 */
for (s = 0; s < (size_t)tableSize; s += unroll) {
size_t u;
for (u = 0; u < unroll; ++u) {
size_t const uPosition = (position + (u * step)) & tableMask;
tableSymbol[uPosition] = spread[s + u];
}
position = (position + (unroll * step)) & tableMask;
}
assert(position == 0); /* Must have initialized all positions */
}
} else {
U32 position = 0;
U32 symbol;
for (symbol=0; symbol<maxSV1; symbol++) {
int nbOccurrences;
int const freq = normalizedCounter[symbol];
for (nbOccurrences=0; nbOccurrences<freq; nbOccurrences++) {
tableSymbol[position] = (FSE_FUNCTION_TYPE)symbol;
position = (position + step) & tableMask;
while (position > highThreshold)
position = (position + step) & tableMask; /* Low proba area */
} }
assert(position==0); /* Must have initialized all positions */
}
/* Build table */
{ U32 u; for (u=0; u<tableSize; u++) {
FSE_FUNCTION_TYPE s = tableSymbol[u]; /* note : static analyzer may not understand tableSymbol is properly initialized */
tableU16[cumul[s]++] = (U16) (tableSize+u); /* TableU16 : sorted by symbol order; gives next state value */
} }
/* Build Symbol Transformation Table */
{ unsigned total = 0;
unsigned s;
for (s=0; s<=maxSymbolValue; s++) {
switch (normalizedCounter[s])
{
case 0:
/* filling nonetheless, for compatibility with FSE_getMaxNbBits() */
symbolTT[s].deltaNbBits = ((tableLog+1) << 16) - (1<<tableLog);
break;
case -1:
case 1:
symbolTT[s].deltaNbBits = (tableLog << 16) - (1<<tableLog);
assert(total <= INT_MAX);
symbolTT[s].deltaFindState = (int)(total - 1);
total ++;
break;
default :
assert(normalizedCounter[s] > 1);
{ U32 const maxBitsOut = tableLog - ZSTD_highbit32 ((U32)normalizedCounter[s]-1);
U32 const minStatePlus = (U32)normalizedCounter[s] << maxBitsOut;
symbolTT[s].deltaNbBits = (maxBitsOut << 16) - minStatePlus;
symbolTT[s].deltaFindState = (int)(total - (unsigned)normalizedCounter[s]);
total += (unsigned)normalizedCounter[s];
} } } }
#if 0 /* debug : symbol costs */
DEBUGLOG(5, "\n --- table statistics : ");
{ U32 symbol;
for (symbol=0; symbol<=maxSymbolValue; symbol++) {
DEBUGLOG(5, "%3u: w=%3i, maxBits=%u, fracBits=%.2f",
symbol, normalizedCounter[symbol],
FSE_getMaxNbBits(symbolTT, symbol),
(double)FSE_bitCost(symbolTT, tableLog, symbol, 8) / 256);
} }
#endif
return 0;
}
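/* Sizing sketch (illustrative only; the helper name is hypothetical) :
 * a caller typically dimensions the scratch buffer with
 * FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(), which covers both the cumul[] array
 * and the spread table used above. */
#if 0
static size_t fse_buildCTable_sizing_sketch(FSE_CTable* ct,
                    const short* norm, unsigned maxSymbolValue, unsigned tableLog)
{
    U32 wksp[FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(FSE_MAX_SYMBOL_VALUE, FSE_MAX_TABLELOG)];
    return FSE_buildCTable_wksp(ct, norm, maxSymbolValue, tableLog, wksp, sizeof(wksp));
}
#endif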
#ifndef FSE_COMMONDEFS_ONLY
/*-**************************************************************
* FSE NCount encoding
****************************************************************/
size_t FSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog)
{
size_t const maxHeaderSize = (((maxSymbolValue+1) * tableLog
+ 4 /* bitCount initialized at 4 */
+ 2 /* first two symbols may use one additional bit each */) / 8)
+ 1 /* round up to whole nb bytes */
+ 2 /* additional two bytes for bitstream flush */;
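/* e.g. maxSymbolValue==255, tableLog==9 : ((256*9 + 4 + 2)/8) + 1 + 2 == 291 bytes */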
return maxSymbolValue ? maxHeaderSize : FSE_NCOUNTBOUND; /* maxSymbolValue==0 ? use default */
}
static size_t
FSE_writeNCount_generic (void* header, size_t headerBufferSize,
const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog,
unsigned writeIsSafe)
{
BYTE* const ostart = (BYTE*) header;
BYTE* out = ostart;
BYTE* const oend = ostart + headerBufferSize;
int nbBits;
const int tableSize = 1 << tableLog;
int remaining;
int threshold;
U32 bitStream = 0;
int bitCount = 0;
unsigned symbol = 0;
unsigned const alphabetSize = maxSymbolValue + 1;
int previousIs0 = 0;
/* Table Size */
bitStream += (tableLog-FSE_MIN_TABLELOG) << bitCount;
bitCount += 4;
/* Init */
remaining = tableSize+1; /* +1 for extra accuracy */
threshold = tableSize;
nbBits = (int)tableLog+1;
while ((symbol < alphabetSize) && (remaining>1)) { /* stops at 1 */
if (previousIs0) {
unsigned start = symbol;
while ((symbol < alphabetSize) && !normalizedCounter[symbol]) symbol++;
if (symbol == alphabetSize) break; /* incorrect distribution */
while (symbol >= start+24) {
start+=24;
bitStream += 0xFFFFU << bitCount;
if ((!writeIsSafe) && (out > oend-2))
return ERROR(dstSize_tooSmall); /* Buffer overflow */
out[0] = (BYTE) bitStream;
out[1] = (BYTE)(bitStream>>8);
out+=2;
bitStream>>=16;
}
while (symbol >= start+3) {
start+=3;
bitStream += 3U << bitCount;
bitCount += 2;
}
bitStream += (symbol-start) << bitCount;
bitCount += 2;
if (bitCount>16) {
if ((!writeIsSafe) && (out > oend - 2))
return ERROR(dstSize_tooSmall); /* Buffer overflow */
out[0] = (BYTE)bitStream;
out[1] = (BYTE)(bitStream>>8);
out += 2;
bitStream >>= 16;
bitCount -= 16;
} }
{ int count = normalizedCounter[symbol++];
int const max = (2*threshold-1) - remaining;
remaining -= count < 0 ? -count : count;
count++; /* +1 for extra accuracy */
if (count>=threshold)
count += max; /* [0..max[ [max..threshold[ (...) [threshold+max 2*threshold[ */
bitStream += (U32)count << bitCount;
bitCount += nbBits;
bitCount -= (count<max);
previousIs0 = (count==1);
if (remaining<1) return ERROR(GENERIC);
while (remaining<threshold) { nbBits--; threshold>>=1; }
}
if (bitCount>16) {
if ((!writeIsSafe) && (out > oend - 2))
return ERROR(dstSize_tooSmall); /* Buffer overflow */
out[0] = (BYTE)bitStream;
out[1] = (BYTE)(bitStream>>8);
out += 2;
bitStream >>= 16;
bitCount -= 16;
} }
if (remaining != 1)
return ERROR(GENERIC); /* incorrect normalized distribution */
assert(symbol <= alphabetSize);
/* flush remaining bitStream */
if ((!writeIsSafe) && (out > oend - 2))
return ERROR(dstSize_tooSmall); /* Buffer overflow */
out[0] = (BYTE)bitStream;
out[1] = (BYTE)(bitStream>>8);
out+= (bitCount+7) /8;
assert(out >= ostart);
return (size_t)(out-ostart);
}
size_t FSE_writeNCount (void* buffer, size_t bufferSize,
const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Unsupported */
if (tableLog < FSE_MIN_TABLELOG) return ERROR(GENERIC); /* Unsupported */
if (bufferSize < FSE_NCountWriteBound(maxSymbolValue, tableLog))
return FSE_writeNCount_generic(buffer, bufferSize, normalizedCounter, maxSymbolValue, tableLog, 0);
return FSE_writeNCount_generic(buffer, bufferSize, normalizedCounter, maxSymbolValue, tableLog, 1 /* write in buffer is safe */);
}
/*-**************************************************************
* FSE Compression Code
****************************************************************/
/* provides the minimum logSize to safely represent a distribution */
static unsigned FSE_minTableLog(size_t srcSize, unsigned maxSymbolValue)
{
U32 minBitsSrc = ZSTD_highbit32((U32)(srcSize)) + 1;
U32 minBitsSymbols = ZSTD_highbit32(maxSymbolValue) + 2;
U32 minBits = minBitsSrc < minBitsSymbols ? minBitsSrc : minBitsSymbols;
assert(srcSize > 1); /* Not supported, RLE should be used instead */
return minBits;
}
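/* e.g. FSE_minTableLog(1000, 255) : minBitsSrc = 9+1 = 10, minBitsSymbols = 7+2 = 9, result = 9 */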
unsigned FSE_optimalTableLog_internal(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, unsigned minus)
{
U32 maxBitsSrc = ZSTD_highbit32((U32)(srcSize - 1)) - minus;
U32 tableLog = maxTableLog;
U32 minBits = FSE_minTableLog(srcSize, maxSymbolValue);
assert(srcSize > 1); /* Not supported, RLE should be used instead */
if (tableLog==0) tableLog = FSE_DEFAULT_TABLELOG;
if (maxBitsSrc < tableLog) tableLog = maxBitsSrc; /* Accuracy can be reduced */
if (minBits > tableLog) tableLog = minBits; /* Need a minimum to safely represent all symbol values */
if (tableLog < FSE_MIN_TABLELOG) tableLog = FSE_MIN_TABLELOG;
if (tableLog > FSE_MAX_TABLELOG) tableLog = FSE_MAX_TABLELOG;
return tableLog;
}
unsigned FSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue)
{
return FSE_optimalTableLog_internal(maxTableLog, srcSize, maxSymbolValue, 2);
}
/* Secondary normalization method.
To be used when primary method fails. */
static size_t FSE_normalizeM2(short* norm, U32 tableLog, const unsigned* count, size_t total, U32 maxSymbolValue, short lowProbCount)
{
short const NOT_YET_ASSIGNED = -2;
U32 s;
U32 distributed = 0;
U32 ToDistribute;
/* Init */
U32 const lowThreshold = (U32)(total >> tableLog);
U32 lowOne = (U32)((total * 3) >> (tableLog + 1));
for (s=0; s<=maxSymbolValue; s++) {
if (count[s] == 0) {
norm[s]=0;
continue;
}
if (count[s] <= lowThreshold) {
norm[s] = lowProbCount;
distributed++;
total -= count[s];
continue;
}
if (count[s] <= lowOne) {
norm[s] = 1;
distributed++;
total -= count[s];
continue;
}
norm[s]=NOT_YET_ASSIGNED;
}
ToDistribute = (1 << tableLog) - distributed;
if (ToDistribute == 0)
return 0;
if ((total / ToDistribute) > lowOne) {
/* risk of rounding to zero */
lowOne = (U32)((total * 3) / (ToDistribute * 2));
for (s=0; s<=maxSymbolValue; s++) {
if ((norm[s] == NOT_YET_ASSIGNED) && (count[s] <= lowOne)) {
norm[s] = 1;
distributed++;
total -= count[s];
continue;
} }
ToDistribute = (1 << tableLog) - distributed;
}
if (distributed == maxSymbolValue+1) {
/* all values are pretty poor;
probably incompressible data (should have already been detected);
find max, then give all remaining points to max */
U32 maxV = 0, maxC = 0;
for (s=0; s<=maxSymbolValue; s++)
if (count[s] > maxC) { maxV=s; maxC=count[s]; }
norm[maxV] += (short)ToDistribute;
return 0;
}
if (total == 0) {
/* all of the symbols were low enough for the lowOne or lowThreshold */
for (s=0; ToDistribute > 0; s = (s+1)%(maxSymbolValue+1))
if (norm[s] > 0) { ToDistribute--; norm[s]++; }
return 0;
}
{ U64 const vStepLog = 62 - tableLog;
U64 const mid = (1ULL << (vStepLog-1)) - 1;
U64 const rStep = ZSTD_div64((((U64)1<<vStepLog) * ToDistribute) + mid, (U32)total); /* scale on remaining */
U64 tmpTotal = mid;
for (s=0; s<=maxSymbolValue; s++) {
if (norm[s]==NOT_YET_ASSIGNED) {
U64 const end = tmpTotal + (count[s] * rStep);
U32 const sStart = (U32)(tmpTotal >> vStepLog);
U32 const sEnd = (U32)(end >> vStepLog);
U32 const weight = sEnd - sStart;
if (weight < 1)
return ERROR(GENERIC);
norm[s] = (short)weight;
tmpTotal = end;
} } }
return 0;
}
size_t FSE_normalizeCount (short* normalizedCounter, unsigned tableLog,
const unsigned* count, size_t total,
unsigned maxSymbolValue, unsigned useLowProbCount)
{
/* Sanity checks */
if (tableLog==0) tableLog = FSE_DEFAULT_TABLELOG;
if (tableLog < FSE_MIN_TABLELOG) return ERROR(GENERIC); /* Unsupported size */
if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Unsupported size */
if (tableLog < FSE_minTableLog(total, maxSymbolValue)) return ERROR(GENERIC); /* Too small tableLog, compression potentially impossible */
{ static U32 const rtbTable[] = { 0, 473195, 504333, 520860, 550000, 700000, 750000, 830000 };
short const lowProbCount = useLowProbCount ? -1 : 1;
U64 const scale = 62 - tableLog;
U64 const step = ZSTD_div64((U64)1<<62, (U32)total); /* <== here, one division ! */
U64 const vStep = 1ULL<<(scale-20);
int stillToDistribute = 1<<tableLog;
unsigned s;
unsigned largest=0;
short largestP=0;
U32 lowThreshold = (U32)(total >> tableLog);
for (s=0; s<=maxSymbolValue; s++) {
if (count[s] == total) return 0; /* rle special case */
if (count[s] == 0) { normalizedCounter[s]=0; continue; }
if (count[s] <= lowThreshold) {
normalizedCounter[s] = lowProbCount;
stillToDistribute--;
} else {
short proba = (short)((count[s]*step) >> scale);
if (proba<8) {
U64 restToBeat = vStep * rtbTable[proba];
proba += (count[s]*step) - ((U64)proba<<scale) > restToBeat;
}
if (proba > largestP) { largestP=proba; largest=s; }
normalizedCounter[s] = proba;
stillToDistribute -= proba;
} }
if (-stillToDistribute >= (normalizedCounter[largest] >> 1)) {
/* corner case, need another normalization method */
size_t const errorCode = FSE_normalizeM2(normalizedCounter, tableLog, count, total, maxSymbolValue, lowProbCount);
if (FSE_isError(errorCode)) return errorCode;
}
else normalizedCounter[largest] += (short)stillToDistribute;
}
#if 0
{ /* Print Table (debug) */
U32 s;
U32 nTotal = 0;
for (s=0; s<=maxSymbolValue; s++)
RAWLOG(2, "%3i: %4i \n", s, normalizedCounter[s]);
for (s=0; s<=maxSymbolValue; s++)
nTotal += abs(normalizedCounter[s]);
if (nTotal != (1U<<tableLog))
RAWLOG(2, "Warning !!! Total == %u != %u !!!", nTotal, 1U<<tableLog);
getchar();
}
#endif
return tableLog;
}
/* fake FSE_CTable, for rle input (always same symbol) */
size_t FSE_buildCTable_rle (FSE_CTable* ct, BYTE symbolValue)
{
void* ptr = ct;
U16* tableU16 = ( (U16*) ptr) + 2;
void* FSCTptr = (U32*)ptr + 2;
FSE_symbolCompressionTransform* symbolTT = (FSE_symbolCompressionTransform*) FSCTptr;
/* header */
tableU16[-2] = (U16) 0;
tableU16[-1] = (U16) symbolValue;
/* Build table */
tableU16[0] = 0;
tableU16[1] = 0; /* just in case */
/* Build Symbol Transformation Table */
symbolTT[symbolValue].deltaNbBits = 0;
symbolTT[symbolValue].deltaFindState = 0;
return 0;
}
static size_t FSE_compress_usingCTable_generic (void* dst, size_t dstSize,
const void* src, size_t srcSize,
const FSE_CTable* ct, const unsigned fast)
{
const BYTE* const istart = (const BYTE*) src;
const BYTE* const iend = istart + srcSize;
const BYTE* ip=iend;
BIT_CStream_t bitC;
FSE_CState_t CState1, CState2;
/* init */
if (srcSize <= 2) return 0;
{ size_t const initError = BIT_initCStream(&bitC, dst, dstSize);
if (FSE_isError(initError)) return 0; /* not enough space available to write a bitstream */ }
#define FSE_FLUSHBITS(s) (fast ? BIT_flushBitsFast(s) : BIT_flushBits(s))
if (srcSize & 1) {
FSE_initCState2(&CState1, ct, *--ip);
FSE_initCState2(&CState2, ct, *--ip);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
FSE_FLUSHBITS(&bitC);
} else {
FSE_initCState2(&CState2, ct, *--ip);
FSE_initCState2(&CState1, ct, *--ip);
}
/* join to mod 4 */
srcSize -= 2;
if ((sizeof(bitC.bitContainer)*8 > FSE_MAX_TABLELOG*4+7 ) && (srcSize & 2)) { /* test bit 2 */
FSE_encodeSymbol(&bitC, &CState2, *--ip);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
FSE_FLUSHBITS(&bitC);
}
/* 2 or 4 encoding per loop */
while ( ip>istart ) {
FSE_encodeSymbol(&bitC, &CState2, *--ip);
if (sizeof(bitC.bitContainer)*8 < FSE_MAX_TABLELOG*2+7 ) /* this test must be static */
FSE_FLUSHBITS(&bitC);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
if (sizeof(bitC.bitContainer)*8 > FSE_MAX_TABLELOG*4+7 ) { /* this test must be static */
FSE_encodeSymbol(&bitC, &CState2, *--ip);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
}
FSE_FLUSHBITS(&bitC);
}
FSE_flushCState(&bitC, &CState2);
FSE_flushCState(&bitC, &CState1);
return BIT_closeCStream(&bitC);
}
size_t FSE_compress_usingCTable (void* dst, size_t dstSize,
const void* src, size_t srcSize,
const FSE_CTable* ct)
{
unsigned const fast = (dstSize >= FSE_BLOCKBOUND(srcSize));
if (fast)
return FSE_compress_usingCTable_generic(dst, dstSize, src, srcSize, ct, 1);
else
return FSE_compress_usingCTable_generic(dst, dstSize, src, srcSize, ct, 0);
}
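/* Illustrative end-to-end sketch (not part of the library; the helper name is
 * hypothetical and error handling is simplified) : the typical FSE compression
 * pipeline assembled from the pieces above. Assumes dst is large enough,
 * e.g. sized with FSE_compressBound(srcSize). */
#if 0
static size_t fse_pipeline_sketch(void* dst, size_t dstCapacity,
                                  const void* src, size_t srcSize)
{
    unsigned count[FSE_MAX_SYMBOL_VALUE+1];
    short norm[FSE_MAX_SYMBOL_VALUE+1];
    FSE_CTable ct[FSE_CTABLE_SIZE_U32(FSE_MAX_TABLELOG, FSE_MAX_SYMBOL_VALUE)];
    U32 wksp[FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(FSE_MAX_SYMBOL_VALUE, FSE_MAX_TABLELOG)];
    BYTE* op = (BYTE*)dst;
    BYTE* const oend = op + dstCapacity;
    unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE;
    unsigned tableLog;
    /* 1. histogram */
    (void)HIST_count_simple(count, &maxSymbolValue, src, srcSize);
    /* 2. choose table accuracy, then normalize counts to a power-of-2 total */
    tableLog = FSE_optimalTableLog(FSE_DEFAULT_TABLELOG, srcSize, maxSymbolValue);
    CHECK_F( FSE_normalizeCount(norm, tableLog, count, srcSize, maxSymbolValue, /* useLowProbCount */ 1) );
    /* 3. write the table description (NCount), then the entropy-coded payload */
    { CHECK_V_F(hSize, FSE_writeNCount(op, (size_t)(oend-op), norm, maxSymbolValue, tableLog) );
      op += hSize; }
    CHECK_F( FSE_buildCTable_wksp(ct, norm, maxSymbolValue, tableLog, wksp, sizeof(wksp)) );
    { CHECK_V_F(cSize, FSE_compress_usingCTable(op, (size_t)(oend-op), src, srcSize, ct) );
      op += cSize; }
    return (size_t)(op - (BYTE*)dst);
}
#endif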
size_t FSE_compressBound(size_t size) { return FSE_COMPRESSBOUND(size); }
#endif /* FSE_COMMONDEFS_ONLY */
/**** ended inlining compress/fse_compress.c ****/
/**** start inlining compress/hist.c ****/
/* ******************************************************************
* hist : Histogram functions
* part of Finite State Entropy project
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
* - Public forum : https://groups.google.com/forum/#!forum/lz4c
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* --- dependencies --- */
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/debug.h ****/
/**** skipping file: ../common/error_private.h ****/
/**** skipping file: hist.h ****/
/* --- Error management --- */
unsigned HIST_isError(size_t code) { return ERR_isError(code); }
/*-**************************************************************
* Histogram functions
****************************************************************/
void HIST_add(unsigned* count, const void* src, size_t srcSize)
{
const BYTE* ip = (const BYTE*)src;
const BYTE* const end = ip + srcSize;
while (ip<end) {
count[*ip++]++;
}
}
unsigned HIST_count_simple(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize)
{
const BYTE* ip = (const BYTE*)src;
const BYTE* const end = ip + srcSize;
unsigned maxSymbolValue = *maxSymbolValuePtr;
unsigned largestCount=0;
ZSTD_memset(count, 0, (maxSymbolValue+1) * sizeof(*count));
if (srcSize==0) { *maxSymbolValuePtr = 0; return 0; }
while (ip<end) {
assert(*ip <= maxSymbolValue);
count[*ip++]++;
}
while (!count[maxSymbolValue]) maxSymbolValue--;
*maxSymbolValuePtr = maxSymbolValue;
{ U32 s;
for (s=0; s<=maxSymbolValue; s++)
if (count[s] > largestCount) largestCount = count[s];
}
return largestCount;
}
typedef enum { trustInput, checkMaxSymbolValue } HIST_checkInput_e;
/* HIST_count_parallel_wksp() :
* stores the histogram into 4 intermediate tables, recombined at the end.
* This design makes better use of out-of-order CPUs,
* and is noticeably faster when some values are heavily repeated.
* But it needs some additional workspace for the intermediate tables.
* `workSpace` must be a U32 table of size >= HIST_WKSP_SIZE_U32.
* @return : largest histogram frequency,
* or an error code (notably when the histogram's alphabet is larger than *maxSymbolValuePtr) */
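/* For example, a 32-bit word c == 0x11223344 updates Counting1[0x44],
 * Counting2[0x33], Counting3[0x22] and Counting4[0x11] : the four increments
 * hit independent tables, so they can be retired in parallel. */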
static size_t HIST_count_parallel_wksp(
unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize,
HIST_checkInput_e check,
U32* const workSpace)
{
const BYTE* ip = (const BYTE*)source;
const BYTE* const iend = ip+sourceSize;
size_t const countSize = (*maxSymbolValuePtr + 1) * sizeof(*count);
unsigned max=0;
U32* const Counting1 = workSpace;
U32* const Counting2 = Counting1 + 256;
U32* const Counting3 = Counting2 + 256;
U32* const Counting4 = Counting3 + 256;
/* safety checks */
assert(*maxSymbolValuePtr <= 255);
if (!sourceSize) {
ZSTD_memset(count, 0, countSize);
*maxSymbolValuePtr = 0;
return 0;
}
ZSTD_memset(workSpace, 0, 4*256*sizeof(unsigned));
/* by stripes of 16 bytes */
{ U32 cached = MEM_read32(ip); ip += 4;
while (ip < iend-15) {
U32 c = cached; cached = MEM_read32(ip); ip += 4;
Counting1[(BYTE) c ]++;
Counting2[(BYTE)(c>>8) ]++;
Counting3[(BYTE)(c>>16)]++;
Counting4[ c>>24 ]++;
c = cached; cached = MEM_read32(ip); ip += 4;
Counting1[(BYTE) c ]++;
Counting2[(BYTE)(c>>8) ]++;
Counting3[(BYTE)(c>>16)]++;
Counting4[ c>>24 ]++;
c = cached; cached = MEM_read32(ip); ip += 4;
Counting1[(BYTE) c ]++;
Counting2[(BYTE)(c>>8) ]++;
Counting3[(BYTE)(c>>16)]++;
Counting4[ c>>24 ]++;
c = cached; cached = MEM_read32(ip); ip += 4;
Counting1[(BYTE) c ]++;
Counting2[(BYTE)(c>>8) ]++;
Counting3[(BYTE)(c>>16)]++;
Counting4[ c>>24 ]++;
}
ip-=4;
}
/* finish last symbols */
while (ip<iend) Counting1[*ip++]++;
{ U32 s;
for (s=0; s<256; s++) {
Counting1[s] += Counting2[s] + Counting3[s] + Counting4[s];
if (Counting1[s] > max) max = Counting1[s];
} }
{ unsigned maxSymbolValue = 255;
while (!Counting1[maxSymbolValue]) maxSymbolValue--;
if (check && maxSymbolValue > *maxSymbolValuePtr) return ERROR(maxSymbolValue_tooSmall);
*maxSymbolValuePtr = maxSymbolValue;
ZSTD_memmove(count, Counting1, countSize); /* in case count & Counting1 are overlapping */
}
return (size_t)max;
}
/* HIST_countFast_wksp() :
* Same as HIST_countFast(), but using an externally provided scratch buffer.
* `workSpace` is a writable buffer which must be 4-bytes aligned,
* `workSpaceSize` must be >= HIST_WKSP_SIZE
*/
size_t HIST_countFast_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize,
void* workSpace, size_t workSpaceSize)
{
if (sourceSize < 1500) /* heuristic threshold */
return HIST_count_simple(count, maxSymbolValuePtr, source, sourceSize);
if ((size_t)workSpace & 3) return ERROR(GENERIC); /* must be aligned on 4-bytes boundaries */
if (workSpaceSize < HIST_WKSP_SIZE) return ERROR(workSpace_tooSmall);
return HIST_count_parallel_wksp(count, maxSymbolValuePtr, source, sourceSize, trustInput, (U32*)workSpace);
}
/* HIST_count_wksp() :
* Same as HIST_count(), but using an externally provided scratch buffer.
* `workSpace` must be a table of at least HIST_WKSP_SIZE_U32 unsigned */
size_t HIST_count_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize,
void* workSpace, size_t workSpaceSize)
{
if ((size_t)workSpace & 3) return ERROR(GENERIC); /* must be aligned on 4-bytes boundaries */
if (workSpaceSize < HIST_WKSP_SIZE) return ERROR(workSpace_tooSmall);
if (*maxSymbolValuePtr < 255)
return HIST_count_parallel_wksp(count, maxSymbolValuePtr, source, sourceSize, checkMaxSymbolValue, (U32*)workSpace);
*maxSymbolValuePtr = 255;
return HIST_countFast_wksp(count, maxSymbolValuePtr, source, sourceSize, workSpace, workSpaceSize);
}
#ifndef ZSTD_NO_UNUSED_FUNCTIONS
/* fast variant (unsafe : won't check if src contains values beyond count[] limit) */
size_t HIST_countFast(unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize)
{
unsigned tmpCounters[HIST_WKSP_SIZE_U32];
return HIST_countFast_wksp(count, maxSymbolValuePtr, source, sourceSize, tmpCounters, sizeof(tmpCounters));
}
size_t HIST_count(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize)
{
unsigned tmpCounters[HIST_WKSP_SIZE_U32];
return HIST_count_wksp(count, maxSymbolValuePtr, src, srcSize, tmpCounters, sizeof(tmpCounters));
}
#endif
/**** ended inlining compress/hist.c ****/
/**** start inlining compress/huf_compress.c ****/
/* ******************************************************************
* Huffman encoder, part of New Generation Entropy library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE+HUF source repository : https://github.com/Cyan4973/FiniteStateEntropy
* - Public forum : https://groups.google.com/forum/#!forum/lz4c
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* **************************************************************
* Compiler specifics
****************************************************************/
#ifdef _MSC_VER /* Visual Studio */
# pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */
#endif
/* **************************************************************
* Includes
****************************************************************/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/compiler.h ****/
/**** skipping file: ../common/bitstream.h ****/
/**** skipping file: hist.h ****/
#define FSE_STATIC_LINKING_ONLY /* FSE_optimalTableLog_internal */
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** skipping file: ../common/error_private.h ****/
/**** skipping file: ../common/bits.h ****/
/* **************************************************************
* Error Management
****************************************************************/
#define HUF_isError ERR_isError
#define HUF_STATIC_ASSERT(c) DEBUG_STATIC_ASSERT(c) /* use only *after* variable declarations */
/* **************************************************************
* Required declarations
****************************************************************/
typedef struct nodeElt_s {
U32 count;
U16 parent;
BYTE byte;
BYTE nbBits;
} nodeElt;
/* **************************************************************
* Debug Traces
****************************************************************/
#if DEBUGLEVEL >= 2
static size_t showU32(const U32* arr, size_t size)
{
size_t u;
for (u=0; u<size; u++) {
RAWLOG(6, " %u", arr[u]); (void)arr;
}
RAWLOG(6, " \n");
return size;
}
static size_t HUF_getNbBits(HUF_CElt elt);
static size_t showCTableBits(const HUF_CElt* ctable, size_t size)
{
size_t u;
for (u=0; u<size; u++) {
RAWLOG(6, " %zu", HUF_getNbBits(ctable[u])); (void)ctable;
}
RAWLOG(6, " \n");
return size;
}
static size_t showHNodeSymbols(const nodeElt* hnode, size_t size)
{
size_t u;
for (u=0; u<size; u++) {
RAWLOG(6, " %u", hnode[u].byte); (void)hnode;
}
RAWLOG(6, " \n");
return size;
}
static size_t showHNodeBits(const nodeElt* hnode, size_t size)
{
size_t u;
for (u=0; u<size; u++) {
RAWLOG(6, " %u", hnode[u].nbBits); (void)hnode;
}
RAWLOG(6, " \n");
return size;
}
#endif
/* *******************************************************
* HUF : Huffman block compression
*********************************************************/
#define HUF_WORKSPACE_MAX_ALIGNMENT 8
static void* HUF_alignUpWorkspace(void* workspace, size_t* workspaceSizePtr, size_t align)
{
size_t const mask = align - 1;
size_t const rem = (size_t)workspace & mask;
size_t const add = (align - rem) & mask;
BYTE* const aligned = (BYTE*)workspace + add;
assert((align & (align - 1)) == 0); /* pow 2 */
assert(align <= HUF_WORKSPACE_MAX_ALIGNMENT);
if (*workspaceSizePtr >= add) {
assert(add < align);
assert(((size_t)aligned & mask) == 0);
*workspaceSizePtr -= add;
return aligned;
} else {
*workspaceSizePtr = 0;
return NULL;
}
}
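/* e.g. a workspace ending in address ...0x06 with align==4 : rem==2, add==2,
 * so 2 bytes are consumed from the workspace and the aligned pointer starts at ...0x08 */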
/* HUF_compressWeights() :
* Same as FSE_compress(), but dedicated to huff0's weights compression.
* The use case needs much less stack memory.
* Note : all elements within weightTable are supposed to be <= HUF_TABLELOG_MAX.
*/
#define MAX_FSE_TABLELOG_FOR_HUFF_HEADER 6
typedef struct {
FSE_CTable CTable[FSE_CTABLE_SIZE_U32(MAX_FSE_TABLELOG_FOR_HUFF_HEADER, HUF_TABLELOG_MAX)];
U32 scratchBuffer[FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(HUF_TABLELOG_MAX, MAX_FSE_TABLELOG_FOR_HUFF_HEADER)];
unsigned count[HUF_TABLELOG_MAX+1];
S16 norm[HUF_TABLELOG_MAX+1];
} HUF_CompressWeightsWksp;
static size_t
HUF_compressWeights(void* dst, size_t dstSize,
const void* weightTable, size_t wtSize,
void* workspace, size_t workspaceSize)
{
BYTE* const ostart = (BYTE*) dst;
BYTE* op = ostart;
BYTE* const oend = ostart + dstSize;
unsigned maxSymbolValue = HUF_TABLELOG_MAX;
U32 tableLog = MAX_FSE_TABLELOG_FOR_HUFF_HEADER;
HUF_CompressWeightsWksp* wksp = (HUF_CompressWeightsWksp*)HUF_alignUpWorkspace(workspace, &workspaceSize, ZSTD_ALIGNOF(U32));
if (workspaceSize < sizeof(HUF_CompressWeightsWksp)) return ERROR(GENERIC);
/* init conditions */
if (wtSize <= 1) return 0; /* Not compressible */
/* Scan input and build symbol stats */
{ unsigned const maxCount = HIST_count_simple(wksp->count, &maxSymbolValue, weightTable, wtSize); /* never fails */
if (maxCount == wtSize) return 1; /* only a single symbol in src : rle */
if (maxCount == 1) return 0; /* each symbol present maximum once => not compressible */
}
tableLog = FSE_optimalTableLog(tableLog, wtSize, maxSymbolValue);
CHECK_F( FSE_normalizeCount(wksp->norm, tableLog, wksp->count, wtSize, maxSymbolValue, /* useLowProbCount */ 0) );
/* Write table description header */
{ CHECK_V_F(hSize, FSE_writeNCount(op, (size_t)(oend-op), wksp->norm, maxSymbolValue, tableLog) );
op += hSize;
}
/* Compress */
CHECK_F( FSE_buildCTable_wksp(wksp->CTable, wksp->norm, maxSymbolValue, tableLog, wksp->scratchBuffer, sizeof(wksp->scratchBuffer)) );
{ CHECK_V_F(cSize, FSE_compress_usingCTable(op, (size_t)(oend - op), weightTable, wtSize, wksp->CTable) );
if (cSize == 0) return 0; /* not enough space for compressed data */
op += cSize;
}
return (size_t)(op-ostart);
}
static size_t HUF_getNbBits(HUF_CElt elt)
{
return elt & 0xFF;
}
static size_t HUF_getNbBitsFast(HUF_CElt elt)
{
return elt;
}
static size_t HUF_getValue(HUF_CElt elt)
{
return elt & ~(size_t)0xFF;
}
static size_t HUF_getValueFast(HUF_CElt elt)
{
return elt;
}
static void HUF_setNbBits(HUF_CElt* elt, size_t nbBits)
{
assert(nbBits <= HUF_TABLELOG_ABSOLUTEMAX);
*elt = nbBits;
}
static void HUF_setValue(HUF_CElt* elt, size_t value)
{
size_t const nbBits = HUF_getNbBits(*elt);
if (nbBits > 0) {
assert((value >> nbBits) == 0);
*elt |= value << (sizeof(HUF_CElt) * 8 - nbBits);
}
}
HUF_CTableHeader HUF_readCTableHeader(HUF_CElt const* ctable)
{
HUF_CTableHeader header;
ZSTD_memcpy(&header, ctable, sizeof(header));
return header;
}
static void HUF_writeCTableHeader(HUF_CElt* ctable, U32 tableLog, U32 maxSymbolValue)
{
HUF_CTableHeader header;
HUF_STATIC_ASSERT(sizeof(ctable[0]) == sizeof(header));
ZSTD_memset(&header, 0, sizeof(header));
assert(tableLog < 256);
header.tableLog = (BYTE)tableLog;
assert(maxSymbolValue < 256);
header.maxSymbolValue = (BYTE)maxSymbolValue;
ZSTD_memcpy(ctable, &header, sizeof(header));
}
typedef struct {
HUF_CompressWeightsWksp wksp;
BYTE bitsToWeight[HUF_TABLELOG_MAX + 1]; /* precomputed conversion table */
BYTE huffWeight[HUF_SYMBOLVALUE_MAX];
} HUF_WriteCTableWksp;
size_t HUF_writeCTable_wksp(void* dst, size_t maxDstSize,
const HUF_CElt* CTable, unsigned maxSymbolValue, unsigned huffLog,
void* workspace, size_t workspaceSize)
{
HUF_CElt const* const ct = CTable + 1;
BYTE* op = (BYTE*)dst;
U32 n;
HUF_WriteCTableWksp* wksp = (HUF_WriteCTableWksp*)HUF_alignUpWorkspace(workspace, &workspaceSize, ZSTD_ALIGNOF(U32));
HUF_STATIC_ASSERT(HUF_CTABLE_WORKSPACE_SIZE >= sizeof(HUF_WriteCTableWksp));
assert(HUF_readCTableHeader(CTable).maxSymbolValue == maxSymbolValue);
assert(HUF_readCTableHeader(CTable).tableLog == huffLog);
/* check conditions */
if (workspaceSize < sizeof(HUF_WriteCTableWksp)) return ERROR(GENERIC);
if (maxSymbolValue > HUF_SYMBOLVALUE_MAX) return ERROR(maxSymbolValue_tooLarge);
/* convert to weight */
wksp->bitsToWeight[0] = 0;
for (n=1; n<huffLog+1; n++)
wksp->bitsToWeight[n] = (BYTE)(huffLog + 1 - n);
for (n=0; n<maxSymbolValue; n++)
wksp->huffWeight[n] = wksp->bitsToWeight[HUF_getNbBits(ct[n])];
/* attempt weights compression by FSE */
if (maxDstSize < 1) return ERROR(dstSize_tooSmall);
{ CHECK_V_F(hSize, HUF_compressWeights(op+1, maxDstSize-1, wksp->huffWeight, maxSymbolValue, &wksp->wksp, sizeof(wksp->wksp)) );
if ((hSize>1) & (hSize < maxSymbolValue/2)) { /* FSE compressed */
op[0] = (BYTE)hSize;
return hSize+1;
} }
/* write raw values as 4-bits (max : 15) */
if (maxSymbolValue > (256-128)) return ERROR(GENERIC); /* should not happen : likely means source cannot be compressed */
if (((maxSymbolValue+1)/2) + 1 > maxDstSize) return ERROR(dstSize_tooSmall); /* not enough space within dst buffer */
op[0] = (BYTE)(128 /*special case*/ + (maxSymbolValue-1));
wksp->huffWeight[maxSymbolValue] = 0; /* to be sure it doesn't cause msan issue in final combination */
for (n=0; n<maxSymbolValue; n+=2)
op[(n/2)+1] = (BYTE)((wksp->huffWeight[n] << 4) + wksp->huffWeight[n+1]);
return ((maxSymbolValue+1)/2) + 1;
}
size_t HUF_readCTable (HUF_CElt* CTable, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize, unsigned* hasZeroWeights)
{
BYTE huffWeight[HUF_SYMBOLVALUE_MAX + 1]; /* init not required, even though some static analyzer may complain */
U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1]; /* large enough for values from 0 to 16 */
U32 tableLog = 0;
U32 nbSymbols = 0;
HUF_CElt* const ct = CTable + 1;
/* get symbol weights */
CHECK_V_F(readSize, HUF_readStats(huffWeight, HUF_SYMBOLVALUE_MAX+1, rankVal, &nbSymbols, &tableLog, src, srcSize));
*hasZeroWeights = (rankVal[0] > 0);
/* check result */
if (tableLog > HUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
if (nbSymbols > *maxSymbolValuePtr+1) return ERROR(maxSymbolValue_tooSmall);
*maxSymbolValuePtr = nbSymbols - 1;
HUF_writeCTableHeader(CTable, tableLog, *maxSymbolValuePtr);
/* Prepare base value per rank */
{ U32 n, nextRankStart = 0;
for (n=1; n<=tableLog; n++) {
U32 curr = nextRankStart;
nextRankStart += (rankVal[n] << (n-1));
rankVal[n] = curr;
} }
/* fill nbBits */
{ U32 n; for (n=0; n<nbSymbols; n++) {
const U32 w = huffWeight[n];
HUF_setNbBits(ct + n, (BYTE)(tableLog + 1 - w) & -(w != 0));
} }
/* fill val */
{ U16 nbPerRank[HUF_TABLELOG_MAX+2] = {0}; /* support w=0=>n=tableLog+1 */
U16 valPerRank[HUF_TABLELOG_MAX+2] = {0};
{ U32 n; for (n=0; n<nbSymbols; n++) nbPerRank[HUF_getNbBits(ct[n])]++; }
/* determine starting value per rank */
valPerRank[tableLog+1] = 0; /* for w==0 */
{ U16 min = 0;
U32 n; for (n=tableLog; n>0; n--) { /* start at n=tablelog <-> w=1 */
valPerRank[n] = min; /* get starting value within each rank */
min += nbPerRank[n];
min >>= 1;
} }
/* assign value within rank, symbol order */
{ U32 n; for (n=0; n<nbSymbols; n++) HUF_setValue(ct + n, valPerRank[HUF_getNbBits(ct[n])]++); }
}
return readSize;
}
U32 HUF_getNbBitsFromCTable(HUF_CElt const* CTable, U32 symbolValue)
{
const HUF_CElt* const ct = CTable + 1;
assert(symbolValue <= HUF_SYMBOLVALUE_MAX);
if (symbolValue > HUF_readCTableHeader(CTable).maxSymbolValue)
return 0;
return (U32)HUF_getNbBits(ct[symbolValue]);
}
/**
* HUF_setMaxHeight():
* Try to enforce @targetNbBits on the Huffman tree described in @huffNode.
*
* It attempts to convert all nodes with nbBits > @targetNbBits
* to employ @targetNbBits instead. Then it adjusts the tree
* so that it remains a valid canonical Huffman tree.
*
* @pre The sum of the ranks of each symbol == 2^largestBits,
* where largestBits == huffNode[lastNonNull].nbBits.
* @post The sum of the ranks of each symbol == 2^largestBits,
* where largestBits is the return value (expected <= targetNbBits).
*
* @param huffNode The Huffman tree modified in place to enforce targetNbBits.
* It's presumed sorted, from most frequent to rarest symbol.
* @param lastNonNull The symbol with the lowest count in the Huffman tree.
* @param targetNbBits The allowed number of bits, which the Huffman tree
* may not respect. After this function the Huffman tree will
* respect targetNbBits.
* @return The maximum number of bits of the Huffman tree after adjustment.
*/
static U32 HUF_setMaxHeight(nodeElt* huffNode, U32 lastNonNull, U32 targetNbBits)
{
const U32 largestBits = huffNode[lastNonNull].nbBits;
/* early exit : no elt > targetNbBits, so the tree is already valid. */
if (largestBits <= targetNbBits) return largestBits;
DEBUGLOG(5, "HUF_setMaxHeight (targetNbBits = %u)", targetNbBits);
/* there are several too large elements (at least >= 2) */
{ int totalCost = 0;
const U32 baseCost = 1 << (largestBits - targetNbBits);
int n = (int)lastNonNull;
/* Adjust any ranks > targetNbBits down to targetNbBits.
* Compute totalCost, i.e. how far the sum of the rank weights
* exceeds 2^largestBits after adjusting the offending ranks.
*/
while (huffNode[n].nbBits > targetNbBits) {
totalCost += baseCost - (1 << (largestBits - huffNode[n].nbBits));
huffNode[n].nbBits = (BYTE)targetNbBits;
n--;
}
/* n stops at huffNode[n].nbBits <= targetNbBits */
assert(huffNode[n].nbBits <= targetNbBits);
/* n ends at the index of the smallest symbol using < targetNbBits */
while (huffNode[n].nbBits == targetNbBits) --n;
/* renorm totalCost from 2^largestBits to 2^targetNbBits
* note : totalCost is necessarily a multiple of baseCost */
assert(((U32)totalCost & (baseCost - 1)) == 0);
totalCost >>= (largestBits - targetNbBits);
assert(totalCost > 0);
/* repay normalized cost */
{ U32 const noSymbol = 0xF0F0F0F0;
U32 rankLast[HUF_TABLELOG_MAX+2];
/* Get pos of last (smallest = lowest cum. count) symbol per rank */
ZSTD_memset(rankLast, 0xF0, sizeof(rankLast));
{ U32 currentNbBits = targetNbBits;
int pos;
for (pos=n ; pos >= 0; pos--) {
if (huffNode[pos].nbBits >= currentNbBits) continue;
currentNbBits = huffNode[pos].nbBits; /* < targetNbBits */
rankLast[targetNbBits-currentNbBits] = (U32)pos;
} }
while (totalCost > 0) {
/* Try to reduce the next power of 2 above totalCost because we
* gain back half the rank.
*/
U32 nBitsToDecrease = ZSTD_highbit32((U32)totalCost) + 1;
for ( ; nBitsToDecrease > 1; nBitsToDecrease--) {
U32 const highPos = rankLast[nBitsToDecrease];
U32 const lowPos = rankLast[nBitsToDecrease-1];
if (highPos == noSymbol) continue;
/* Decrease highPos if no symbols of lowPos or if it is
* not cheaper to remove 2 lowPos than highPos.
*/
if (lowPos == noSymbol) break;
{ U32 const highTotal = huffNode[highPos].count;
U32 const lowTotal = 2 * huffNode[lowPos].count;
if (highTotal <= lowTotal) break;
} }
/* only triggered when no more rank 1 symbol left => find closest one (note : there is necessarily at least one !) */
assert(rankLast[nBitsToDecrease] != noSymbol || nBitsToDecrease == 1);
/* HUF_MAX_TABLELOG test just to please gcc 5+; but it should not be necessary */
while ((nBitsToDecrease<=HUF_TABLELOG_MAX) && (rankLast[nBitsToDecrease] == noSymbol))
nBitsToDecrease++;
assert(rankLast[nBitsToDecrease] != noSymbol);
/* Increase the number of bits to gain back half the rank cost. */
totalCost -= 1 << (nBitsToDecrease-1);
huffNode[rankLast[nBitsToDecrease]].nbBits++;
/* Fix up the new rank.
* If the new rank was empty, this symbol is now its smallest.
* Otherwise, this symbol will be the largest in the new rank so no adjustment.
*/
if (rankLast[nBitsToDecrease-1] == noSymbol)
rankLast[nBitsToDecrease-1] = rankLast[nBitsToDecrease];
/* Fix up the old rank.
* If the symbol was at position 0, meaning it was the highest weight symbol in the tree,
* it must be the only symbol in its rank, so the old rank now has no symbols.
* Otherwise, since the Huffman nodes are sorted by count, the previous position is now
* the smallest node in the rank. If the previous position belongs to a different rank,
* then the rank is now empty.
*/
if (rankLast[nBitsToDecrease] == 0) /* special case, reached largest symbol */
rankLast[nBitsToDecrease] = noSymbol;
else {
rankLast[nBitsToDecrease]--;
if (huffNode[rankLast[nBitsToDecrease]].nbBits != targetNbBits-nBitsToDecrease)
rankLast[nBitsToDecrease] = noSymbol; /* this rank is now empty */
}
} /* while (totalCost > 0) */
/* If we've removed too much weight, then we have to add it back.
* To avoid overshooting again, we only adjust the smallest rank.
* We take the largest nodes from the lowest rank 0 and move them
* to rank 1. There's guaranteed to be enough rank 0 symbols because
* TODO.
*/
while (totalCost < 0) { /* Sometimes, cost correction overshoot */
/* special case : no rank 1 symbol (using targetNbBits-1);
* let's create one from largest rank 0 (using targetNbBits).
*/
if (rankLast[1] == noSymbol) {
while (huffNode[n].nbBits == targetNbBits) n--;
huffNode[n+1].nbBits--;
assert(n >= 0);
rankLast[1] = (U32)(n+1);
totalCost++;
continue;
}
huffNode[ rankLast[1] + 1 ].nbBits--;
rankLast[1]++;
totalCost ++;
}
} /* repay normalized cost */
} /* there are several too large elements (at least >= 2) */
return targetNbBits;
}
typedef struct {
U16 base;
U16 curr;
} rankPos;
typedef nodeElt huffNodeTable[2 * (HUF_SYMBOLVALUE_MAX + 1)];
/* Number of buckets available for HUF_sort() */
#define RANK_POSITION_TABLE_SIZE 192
typedef struct {
huffNodeTable huffNodeTbl;
rankPos rankPosition[RANK_POSITION_TABLE_SIZE];
} HUF_buildCTable_wksp_tables;
/* RANK_POSITION_DISTINCT_COUNT_CUTOFF == Cutoff point in HUF_sort() buckets for which we use log2 bucketing.
* Strategy is to use as many buckets as possible for representing distinct
* counts while using the remainder to represent all "large" counts.
*
* To satisfy this requirement for 192 buckets, we can do the following:
* Let buckets 0-166 represent distinct counts of [0, 166]
* Let buckets 166 to 192 represent all remaining counts up to RANK_POSITION_MAX_COUNT_LOG using log2 bucketing.
*/
#define RANK_POSITION_MAX_COUNT_LOG 32
#define RANK_POSITION_LOG_BUCKETS_BEGIN ((RANK_POSITION_TABLE_SIZE - 1) - RANK_POSITION_MAX_COUNT_LOG - 1 /* == 158 */)
#define RANK_POSITION_DISTINCT_COUNT_CUTOFF (RANK_POSITION_LOG_BUCKETS_BEGIN + ZSTD_highbit32(RANK_POSITION_LOG_BUCKETS_BEGIN) /* == 166 */)
/* Return the appropriate bucket index for a given count. See definition of
* RANK_POSITION_DISTINCT_COUNT_CUTOFF for explanation of bucketing strategy.
*/
static U32 HUF_getIndex(U32 const count) {
return (count < RANK_POSITION_DISTINCT_COUNT_CUTOFF)
? count
: ZSTD_highbit32(count) + RANK_POSITION_LOG_BUCKETS_BEGIN;
}
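/* e.g. a count of 150 lands in bucket 150 (distinct), while a count of 1000
 * lands in bucket ZSTD_highbit32(1000) + RANK_POSITION_LOG_BUCKETS_BEGIN == 9 + 158 == 167 (log2). */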
/* Helper swap function for HUF_quickSortPartition() */
static void HUF_swapNodes(nodeElt* a, nodeElt* b) {
nodeElt tmp = *a;
*a = *b;
*b = tmp;
}
/* Returns 0 if the huffNode array is not sorted by descending count */
MEM_STATIC int HUF_isSorted(nodeElt huffNode[], U32 const maxSymbolValue1) {
U32 i;
for (i = 1; i < maxSymbolValue1; ++i) {
if (huffNode[i].count > huffNode[i-1].count) {
return 0;
}
}
return 1;
}
/* Insertion sort by descending order */
HINT_INLINE void HUF_insertionSort(nodeElt huffNode[], int const low, int const high) {
int i;
int const size = high-low+1;
huffNode += low;
for (i = 1; i < size; ++i) {
nodeElt const key = huffNode[i];
int j = i - 1;
while (j >= 0 && huffNode[j].count < key.count) {
huffNode[j + 1] = huffNode[j];
j--;
}
huffNode[j + 1] = key;
}
}
/* Pivot helper function for quicksort. */
static int HUF_quickSortPartition(nodeElt arr[], int const low, int const high) {
/* Simply select rightmost element as pivot. "Better" selectors like
* median-of-three don't experimentally appear to have any benefit.
*/
U32 const pivot = arr[high].count;
int i = low - 1;
int j = low;
for ( ; j < high; j++) {
if (arr[j].count > pivot) {
i++;
HUF_swapNodes(&arr[i], &arr[j]);
}
}
HUF_swapNodes(&arr[i + 1], &arr[high]);
return i + 1;
}
/* Classic quicksort by descending with partially iterative calls
* to reduce worst case callstack size.
*/
static void HUF_simpleQuickSort(nodeElt arr[], int low, int high) {
int const kInsertionSortThreshold = 8;
if (high - low < kInsertionSortThreshold) {
HUF_insertionSort(arr, low, high);
return;
}
while (low < high) {
int const idx = HUF_quickSortPartition(arr, low, high);
if (idx - low < high - idx) {
HUF_simpleQuickSort(arr, low, idx - 1);
low = idx + 1;
} else {
HUF_simpleQuickSort(arr, idx + 1, high);
high = idx - 1;
}
}
}
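/* Note on the iteration strategy above: recursing only into the smaller
 * partition and looping on the larger one bounds the recursion depth to
 * roughly log2(n) even for adversarial inputs, while the insertion-sort
 * threshold keeps tiny partitions cheap. */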
/**
* HUF_sort():
* Sorts the symbols [0, maxSymbolValue] by count[symbol] in decreasing order.
* This is a typical bucket sorting strategy that uses either quicksort or insertion sort to sort each bucket.
*
* @param[out] huffNode Sorted symbols by decreasing count. Only members `.count` and `.byte` are filled.
* Must have (maxSymbolValue + 1) entries.
* @param[in] count Histogram of the symbols.
* @param[in] maxSymbolValue Maximum symbol value.
* @param rankPosition This is a scratch workspace. Must have RANK_POSITION_TABLE_SIZE entries.
*/
static void HUF_sort(nodeElt huffNode[], const unsigned count[], U32 const maxSymbolValue, rankPos rankPosition[]) {
U32 n;
U32 const maxSymbolValue1 = maxSymbolValue+1;
/* Compute base and set curr to base.
* For each symbol s, let lowerRank = HUF_getIndex(count[s]) and rank = lowerRank + 1.
* See HUF_getIndex for the bucketing strategy.
* We attribute each symbol to lowerRank's base value, because we want to know where
* each rank begins in the output, so for rank R we want to count ranks R+1 and above.
*/
ZSTD_memset(rankPosition, 0, sizeof(*rankPosition) * RANK_POSITION_TABLE_SIZE);
for (n = 0; n < maxSymbolValue1; ++n) {
U32 lowerRank = HUF_getIndex(count[n]);
assert(lowerRank < RANK_POSITION_TABLE_SIZE - 1);
rankPosition[lowerRank].base++;
}
assert(rankPosition[RANK_POSITION_TABLE_SIZE - 1].base == 0);
/* Set up the rankPosition table */
for (n = RANK_POSITION_TABLE_SIZE - 1; n > 0; --n) {
rankPosition[n-1].base += rankPosition[n].base;
rankPosition[n-1].curr = rankPosition[n-1].base;
}
/* Insert each symbol into its appropriate bucket, setting up the rankPosition table. */
for (n = 0; n < maxSymbolValue1; ++n) {
U32 const c = count[n];
U32 const r = HUF_getIndex(c) + 1;
U32 const pos = rankPosition[r].curr++;
assert(pos < maxSymbolValue1);
huffNode[pos].count = c;
huffNode[pos].byte = (BYTE)n;
}
/* Sort each bucket. */
for (n = RANK_POSITION_DISTINCT_COUNT_CUTOFF; n < RANK_POSITION_TABLE_SIZE - 1; ++n) {
int const bucketSize = rankPosition[n].curr - rankPosition[n].base;
U32 const bucketStartIdx = rankPosition[n].base;
if (bucketSize > 1) {
assert(bucketStartIdx < maxSymbolValue1);
HUF_simpleQuickSort(huffNode + bucketStartIdx, 0, bucketSize-1);
}
}
assert(HUF_isSorted(huffNode, maxSymbolValue1));
}
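/* Illustrative walk-through of HUF_sort() with hypothetical counts
 * {A:3, B:7, C:3, D:1} (all below the distinct-count cutoff, so each count
 * is its own bucket):
 * - after the first pass, base[7]=1, base[3]=2, base[1]=1;
 * - after the suffix sum, base[r] == number of symbols with bucket >= r,
 *   i.e. base[7]=1, base[3]=3, base[1]=4;
 * - placement uses rank = bucket+1, so B lands at position 0, A and C at
 *   positions 1 and 2, and D at position 3: descending order by count. */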
/** HUF_buildCTable_wksp() :
* Same as HUF_buildCTable(), but using externally allocated scratch buffer.
* `workSpace` must be aligned on a 4-byte boundary, and be at least as large as sizeof(HUF_buildCTable_wksp_tables).
*/
#define STARTNODE (HUF_SYMBOLVALUE_MAX+1)
/* HUF_buildTree():
* Takes the huffNode array sorted by HUF_sort() and builds an unlimited-depth Huffman tree.
*
* @param huffNode The array sorted by HUF_sort(). Builds the Huffman tree in this array.
* @param maxSymbolValue The maximum symbol value.
* @return The smallest node in the Huffman tree (by count).
*/
static int HUF_buildTree(nodeElt* huffNode, U32 maxSymbolValue)
{
nodeElt* const huffNode0 = huffNode - 1;
int nonNullRank;
int lowS, lowN;
int nodeNb = STARTNODE;
int n, nodeRoot;
DEBUGLOG(5, "HUF_buildTree (alphabet size = %u)", maxSymbolValue + 1);
/* init for parents */
nonNullRank = (int)maxSymbolValue;
while(huffNode[nonNullRank].count == 0) nonNullRank--;
lowS = nonNullRank; nodeRoot = nodeNb + lowS - 1; lowN = nodeNb;
huffNode[nodeNb].count = huffNode[lowS].count + huffNode[lowS-1].count;
huffNode[lowS].parent = huffNode[lowS-1].parent = (U16)nodeNb;
nodeNb++; lowS-=2;
for (n=nodeNb; n<=nodeRoot; n++) huffNode[n].count = (U32)(1U<<30);
huffNode0[0].count = (U32)(1U<<31); /* fake entry, strong barrier */
/* create parents */
while (nodeNb <= nodeRoot) {
int const n1 = (huffNode[lowS].count < huffNode[lowN].count) ? lowS-- : lowN++;
int const n2 = (huffNode[lowS].count < huffNode[lowN].count) ? lowS-- : lowN++;
huffNode[nodeNb].count = huffNode[n1].count + huffNode[n2].count;
huffNode[n1].parent = huffNode[n2].parent = (U16)nodeNb;
nodeNb++;
}
/* distribute weights (unlimited tree height) */
huffNode[nodeRoot].nbBits = 0;
for (n=nodeRoot-1; n>=STARTNODE; n--)
huffNode[n].nbBits = huffNode[ huffNode[n].parent ].nbBits + 1;
for (n=0; n<=nonNullRank; n++)
huffNode[n].nbBits = huffNode[ huffNode[n].parent ].nbBits + 1;
DEBUGLOG(6, "Initial distribution of bits completed (%zu sorted symbols)", showHNodeBits(huffNode, maxSymbolValue+1));
return nonNullRank;
}
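/* Illustration with hypothetical sorted counts {5, 3, 2, 1}: the init step
 * merges the two smallest leaves (2+1 -> 3), the loop then merges that node
 * with the count-3 leaf (-> 6) and finally with the count-5 leaf (root 11).
 * The resulting depths are 1, 2, 3 and 3 bits, whose Kraft sum
 * 1/2 + 1/4 + 1/8 + 1/8 equals 1, as expected for a Huffman tree. */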
/**
* HUF_buildCTableFromTree():
* Build the CTable given the Huffman tree in huffNode.
*
* @param[out] CTable The output Huffman CTable.
* @param huffNode The Huffman tree.
* @param nonNullRank The last and smallest node in the Huffman tree.
* @param maxSymbolValue The maximum symbol value.
* @param maxNbBits The exact maximum number of bits used in the Huffman tree.
*/
static void HUF_buildCTableFromTree(HUF_CElt* CTable, nodeElt const* huffNode, int nonNullRank, U32 maxSymbolValue, U32 maxNbBits)
{
HUF_CElt* const ct = CTable + 1;
/* fill result into ctable (val, nbBits) */
int n;
U16 nbPerRank[HUF_TABLELOG_MAX+1] = {0};
U16 valPerRank[HUF_TABLELOG_MAX+1] = {0};
int const alphabetSize = (int)(maxSymbolValue + 1);
for (n=0; n<=nonNullRank; n++)
nbPerRank[huffNode[n].nbBits]++;
/* determine starting value per rank */
{ U16 min = 0;
for (n=(int)maxNbBits; n>0; n--) {
valPerRank[n] = min; /* get starting value within each rank */
min += nbPerRank[n];
min >>= 1;
} }
for (n=0; n<alphabetSize; n++)
HUF_setNbBits(ct + huffNode[n].byte, huffNode[n].nbBits); /* push nbBits per symbol, symbol order */
for (n=0; n<alphabetSize; n++)
HUF_setValue(ct + n, valPerRank[HUF_getNbBits(ct[n])]++); /* assign value within rank, symbol order */
HUF_writeCTableHeader(CTable, maxNbBits, maxSymbolValue);
}
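/* Illustration of the canonical value assignment above, using hypothetical
 * values maxNbBits == 3, nbPerRank[1]=1, nbPerRank[2]=1, nbPerRank[3]=2:
 * the descending loop yields valPerRank[3]=0, valPerRank[2]=1, valPerRank[1]=1,
 * so the two 3-bit symbols get codes 000 and 001, the 2-bit symbol gets 01,
 * and the 1-bit symbol gets 1, i.e. a prefix-free canonical code. */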
size_t
HUF_buildCTable_wksp(HUF_CElt* CTable, const unsigned* count, U32 maxSymbolValue, U32 maxNbBits,
void* workSpace, size_t wkspSize)
{
HUF_buildCTable_wksp_tables* const wksp_tables =
(HUF_buildCTable_wksp_tables*)HUF_alignUpWorkspace(workSpace, &wkspSize, ZSTD_ALIGNOF(U32));
nodeElt* const huffNode0 = wksp_tables->huffNodeTbl;
nodeElt* const huffNode = huffNode0+1;
int nonNullRank;
HUF_STATIC_ASSERT(HUF_CTABLE_WORKSPACE_SIZE == sizeof(HUF_buildCTable_wksp_tables));
DEBUGLOG(5, "HUF_buildCTable_wksp (alphabet size = %u)", maxSymbolValue+1);
/* safety checks */
if (wkspSize < sizeof(HUF_buildCTable_wksp_tables))
return ERROR(workSpace_tooSmall);
if (maxNbBits == 0) maxNbBits = HUF_TABLELOG_DEFAULT;
if (maxSymbolValue > HUF_SYMBOLVALUE_MAX)
return ERROR(maxSymbolValue_tooLarge);
ZSTD_memset(huffNode0, 0, sizeof(huffNodeTable));
/* sort, decreasing order */
HUF_sort(huffNode, count, maxSymbolValue, wksp_tables->rankPosition);
DEBUGLOG(6, "sorted symbols completed (%zu symbols)", showHNodeSymbols(huffNode, maxSymbolValue+1));
/* build tree */
nonNullRank = HUF_buildTree(huffNode, maxSymbolValue);
/* determine and enforce maxTableLog */
maxNbBits = HUF_setMaxHeight(huffNode, (U32)nonNullRank, maxNbBits);
if (maxNbBits > HUF_TABLELOG_MAX) return ERROR(GENERIC); /* check fit into table */
HUF_buildCTableFromTree(CTable, huffNode, nonNullRank, maxSymbolValue, maxNbBits);
return maxNbBits;
}
size_t HUF_estimateCompressedSize(const HUF_CElt* CTable, const unsigned* count, unsigned maxSymbolValue)
{
HUF_CElt const* ct = CTable + 1;
size_t nbBits = 0;
int s;
for (s = 0; s <= (int)maxSymbolValue; ++s) {
nbBits += HUF_getNbBits(ct[s]) * count[s];
}
return nbBits >> 3;
}
int HUF_validateCTable(const HUF_CElt* CTable, const unsigned* count, unsigned maxSymbolValue) {
HUF_CTableHeader header = HUF_readCTableHeader(CTable);
HUF_CElt const* ct = CTable + 1;
int bad = 0;
int s;
assert(header.tableLog <= HUF_TABLELOG_ABSOLUTEMAX);
if (header.maxSymbolValue < maxSymbolValue)
return 0;
for (s = 0; s <= (int)maxSymbolValue; ++s) {
bad |= (count[s] != 0) & (HUF_getNbBits(ct[s]) == 0);
}
return !bad;
}
size_t HUF_compressBound(size_t size) { return HUF_COMPRESSBOUND(size); }
/** HUF_CStream_t:
* Huffman uses its own BIT_CStream_t implementation.
* There are three major differences from BIT_CStream_t:
* 1. HUF_addBits() takes a HUF_CElt (size_t) which is
* the pair (nbBits, value) in the following format:
* - Bits [0, 4) = nbBits
* - Bits [4, 64 - nbBits) = 0
* - Bits [64 - nbBits, 64) = value
* 2. The bitContainer is built from the upper bits and
* right shifted. E.g. to add a new value of N bits
* you right shift the bitContainer by N, then OR
* the new value into the N upper bits.
* 3. The bitstream has two bit containers. You can add
* bits to the second container and merge them into
* the first container.
*/
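/* Example of the HUF_CElt layout described above (64-bit size_t): a symbol
 * coded with nbBits == 5 and value 0b10110 is stored as
 * elt == ((size_t)0x16 << (64 - 5)) | 5,
 * i.e. the code value sits in the top 5 bits and nbBits in the low 4 bits. */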
#define HUF_BITS_IN_CONTAINER (sizeof(size_t) * 8)
typedef struct {
size_t bitContainer[2];
size_t bitPos[2];
BYTE* startPtr;
BYTE* ptr;
BYTE* endPtr;
} HUF_CStream_t;
/**! HUF_initCStream():
* Initializes the bitstream.
* @returns 0 or an error code.
*/
static size_t HUF_initCStream(HUF_CStream_t* bitC,
void* startPtr, size_t dstCapacity)
{
ZSTD_memset(bitC, 0, sizeof(*bitC));
bitC->startPtr = (BYTE*)startPtr;
bitC->ptr = bitC->startPtr;
bitC->endPtr = bitC->startPtr + dstCapacity - sizeof(bitC->bitContainer[0]);
if (dstCapacity <= sizeof(bitC->bitContainer[0])) return ERROR(dstSize_tooSmall);
return 0;
}
/*! HUF_addBits():
* Adds the symbol stored in HUF_CElt elt to the bitstream.
*
* @param elt The element we're adding. This is a (nbBits, value) pair.
* See the HUF_CStream_t docs for the format.
* @param idx Insert into the bitstream at this idx.
* @param kFast This is a template parameter. If the bitstream is guaranteed
* to have at least 4 unused bits after this call it may be 1,
* otherwise it must be 0. HUF_addBits() is faster when fast is set.
*/
FORCE_INLINE_TEMPLATE void HUF_addBits(HUF_CStream_t* bitC, HUF_CElt elt, int idx, int kFast)
{
assert(idx <= 1);
assert(HUF_getNbBits(elt) <= HUF_TABLELOG_ABSOLUTEMAX);
/* This is efficient on x86-64 with BMI2 because shrx
* only reads the low 6 bits of the register. The compiler
* knows this and elides the mask. When fast is set,
* every operation can use the same value loaded from elt.
*/
bitC->bitContainer[idx] >>= HUF_getNbBits(elt);
bitC->bitContainer[idx] |= kFast ? HUF_getValueFast(elt) : HUF_getValue(elt);
/* We only read the low 8 bits of bitC->bitPos[idx] so it
* doesn't matter that the high bits have noise from the value.
*/
bitC->bitPos[idx] += HUF_getNbBitsFast(elt);
assert((bitC->bitPos[idx] & 0xFF) <= HUF_BITS_IN_CONTAINER);
/* The last 4-bits of elt are dirty if fast is set,
* so we must not be overwriting bits that have already been
* inserted into the bit container.
*/
#if DEBUGLEVEL >= 1
{
size_t const nbBits = HUF_getNbBits(elt);
size_t const dirtyBits = nbBits == 0 ? 0 : ZSTD_highbit32((U32)nbBits) + 1;
(void)dirtyBits;
/* Middle bits are 0. */
assert(((elt >> dirtyBits) << (dirtyBits + nbBits)) == 0);
/* We didn't overwrite any bits in the bit container. */
assert(!kFast || (bitC->bitPos[idx] & 0xFF) <= HUF_BITS_IN_CONTAINER);
}
#endif
}
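/* Example of the shift-then-OR scheme above: starting from an empty container,
 * adding (nbBits=3, value=0b101) leaves the top 3 bits equal to 101 and
 * bitPos == 3; adding (nbBits=2, value=0b11) then shifts those bits down by 2
 * and ORs 11 into the freed top bits, so the container holds 11101 in its
 * top 5 bits and bitPos == 5. */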
FORCE_INLINE_TEMPLATE void HUF_zeroIndex1(HUF_CStream_t* bitC)
{
bitC->bitContainer[1] = 0;
bitC->bitPos[1] = 0;
}
/*! HUF_mergeIndex1() :
* Merges the bit container @ index 1 into the bit container @ index 0
* and zeros the bit container @ index 1.
*/
FORCE_INLINE_TEMPLATE void HUF_mergeIndex1(HUF_CStream_t* bitC)
{
assert((bitC->bitPos[1] & 0xFF) < HUF_BITS_IN_CONTAINER);
bitC->bitContainer[0] >>= (bitC->bitPos[1] & 0xFF);
bitC->bitContainer[0] |= bitC->bitContainer[1];
bitC->bitPos[0] += bitC->bitPos[1];
assert((bitC->bitPos[0] & 0xFF) <= HUF_BITS_IN_CONTAINER);
}
/*! HUF_flushBits() :
* Flushes the bits in the bit container @ index 0.
*
* @post bitPos will be < 8.
* @param kFast If kFast is set then we must know a-priori that
* the bit container will not overflow.
*/
FORCE_INLINE_TEMPLATE void HUF_flushBits(HUF_CStream_t* bitC, int kFast)
{
/* The upper bits of bitPos are noisy, so we must mask by 0xFF. */
size_t const nbBits = bitC->bitPos[0] & 0xFF;
size_t const nbBytes = nbBits >> 3;
/* The top nbBits bits of bitContainer are the ones we need. */
size_t const bitContainer = bitC->bitContainer[0] >> (HUF_BITS_IN_CONTAINER - nbBits);
/* Mask bitPos to account for the bytes we consumed. */
bitC->bitPos[0] &= 7;
assert(nbBits > 0);
assert(nbBits <= sizeof(bitC->bitContainer[0]) * 8);
assert(bitC->ptr <= bitC->endPtr);
MEM_writeLEST(bitC->ptr, bitContainer);
bitC->ptr += nbBytes;
assert(!kFast || bitC->ptr <= bitC->endPtr);
if (!kFast && bitC->ptr > bitC->endPtr) bitC->ptr = bitC->endPtr;
/* bitContainer doesn't need to be modified because the leftover
* bits are already the top bitPos bits. And we don't care about
* noise in the lower values.
*/
}
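/* Example: if bitPos[0] == 26 then nbBytes == 3, so 24 bits (3 bytes) are
 * written out, ptr advances by 3, bitPos[0] becomes 2, and the 2 leftover
 * bits remain as the top bits of the container for the next flush. */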
/*! HUF_endMark()
* @returns The Huffman stream end mark: A 1-bit value = 1.
*/
static HUF_CElt HUF_endMark(void)
{
HUF_CElt endMark;
HUF_setNbBits(&endMark, 1);
HUF_setValue(&endMark, 1);
return endMark;
}
/*! HUF_closeCStream() :
* @return Size of CStream, in bytes,
* or 0 if it could not fit into dstBuffer */
static size_t HUF_closeCStream(HUF_CStream_t* bitC)
{
HUF_addBits(bitC, HUF_endMark(), /* idx */ 0, /* kFast */ 0);
HUF_flushBits(bitC, /* kFast */ 0);
{
size_t const nbBits = bitC->bitPos[0] & 0xFF;
if (bitC->ptr >= bitC->endPtr) return 0; /* overflow detected */
return (size_t)(bitC->ptr - bitC->startPtr) + (nbBits > 0);
}
}
FORCE_INLINE_TEMPLATE void
HUF_encodeSymbol(HUF_CStream_t* bitCPtr, U32 symbol, const HUF_CElt* CTable, int idx, int fast)
{
HUF_addBits(bitCPtr, CTable[symbol], idx, fast);
}
FORCE_INLINE_TEMPLATE void
HUF_compress1X_usingCTable_internal_body_loop(HUF_CStream_t* bitC,
const BYTE* ip, size_t srcSize,
const HUF_CElt* ct,
int kUnroll, int kFastFlush, int kLastFast)
{
/* Join to kUnroll */
int n = (int)srcSize;
int rem = n % kUnroll;
if (rem > 0) {
for (; rem > 0; --rem) {
HUF_encodeSymbol(bitC, ip[--n], ct, 0, /* fast */ 0);
}
HUF_flushBits(bitC, kFastFlush);
}
assert(n % kUnroll == 0);
/* Join to 2 * kUnroll */
if (n % (2 * kUnroll)) {
int u;
for (u = 1; u < kUnroll; ++u) {
HUF_encodeSymbol(bitC, ip[n - u], ct, 0, 1);
}
HUF_encodeSymbol(bitC, ip[n - kUnroll], ct, 0, kLastFast);
HUF_flushBits(bitC, kFastFlush);
n -= kUnroll;
}
assert(n % (2 * kUnroll) == 0);
for (; n>0; n-= 2 * kUnroll) {
/* Encode kUnroll symbols into the bitstream @ index 0. */
int u;
for (u = 1; u < kUnroll; ++u) {
HUF_encodeSymbol(bitC, ip[n - u], ct, /* idx */ 0, /* fast */ 1);
}
HUF_encodeSymbol(bitC, ip[n - kUnroll], ct, /* idx */ 0, /* fast */ kLastFast);
HUF_flushBits(bitC, kFastFlush);
/* Encode kUnroll symbols into the bitstream @ index 1.
* This allows us to start filling the bit container
* without any data dependencies.
*/
HUF_zeroIndex1(bitC);
for (u = 1; u < kUnroll; ++u) {
HUF_encodeSymbol(bitC, ip[n - kUnroll - u], ct, /* idx */ 1, /* fast */ 1);
}
HUF_encodeSymbol(bitC, ip[n - kUnroll - kUnroll], ct, /* idx */ 1, /* fast */ kLastFast);
/* Merge bitstream @ index 1 into the bitstream @ index 0 */
HUF_mergeIndex1(bitC);
HUF_flushBits(bitC, kFastFlush);
}
assert(n == 0);
}
/**
* Returns a tight upper bound on the output space needed by Huffman,
* plus an 8-byte margin to handle over-writes. If the output is at least
* this large, we don't need to do bounds checks during Huffman encoding.
*/
static size_t HUF_tightCompressBound(size_t srcSize, size_t tableLog)
{
return ((srcSize * tableLog) >> 3) + 8;
}
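/* Example: for srcSize == 1000 and tableLog == 11 the bound is
 * ((1000 * 11) >> 3) + 8 == 1375 + 8 == 1383 bytes. */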
FORCE_INLINE_TEMPLATE size_t
HUF_compress1X_usingCTable_internal_body(void* dst, size_t dstSize,
const void* src, size_t srcSize,
const HUF_CElt* CTable)
{
U32 const tableLog = HUF_readCTableHeader(CTable).tableLog;
HUF_CElt const* ct = CTable + 1;
const BYTE* ip = (const BYTE*) src;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstSize;
HUF_CStream_t bitC;
/* init */
if (dstSize < 8) return 0; /* not enough space to compress */
{ BYTE* op = ostart;
size_t const initErr = HUF_initCStream(&bitC, op, (size_t)(oend-op));
if (HUF_isError(initErr)) return 0; }
if (dstSize < HUF_tightCompressBound(srcSize, (size_t)tableLog) || tableLog > 11)
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ MEM_32bits() ? 2 : 4, /* kFast */ 0, /* kLastFast */ 0);
else {
if (MEM_32bits()) {
switch (tableLog) {
case 11:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 2, /* kFastFlush */ 1, /* kLastFast */ 0);
break;
case 10: ZSTD_FALLTHROUGH;
case 9: ZSTD_FALLTHROUGH;
case 8:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 2, /* kFastFlush */ 1, /* kLastFast */ 1);
break;
case 7: ZSTD_FALLTHROUGH;
default:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 3, /* kFastFlush */ 1, /* kLastFast */ 1);
break;
}
} else {
switch (tableLog) {
case 11:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 5, /* kFastFlush */ 1, /* kLastFast */ 0);
break;
case 10:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 5, /* kFastFlush */ 1, /* kLastFast */ 1);
break;
case 9:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 6, /* kFastFlush */ 1, /* kLastFast */ 0);
break;
case 8:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 7, /* kFastFlush */ 1, /* kLastFast */ 0);
break;
case 7:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 8, /* kFastFlush */ 1, /* kLastFast */ 0);
break;
case 6: ZSTD_FALLTHROUGH;
default:
HUF_compress1X_usingCTable_internal_body_loop(&bitC, ip, srcSize, ct, /* kUnroll */ 9, /* kFastFlush */ 1, /* kLastFast */ 1);
break;
}
}
}
assert(bitC.ptr <= bitC.endPtr);
return HUF_closeCStream(&bitC);
}
#if DYNAMIC_BMI2
static BMI2_TARGET_ATTRIBUTE size_t
HUF_compress1X_usingCTable_internal_bmi2(void* dst, size_t dstSize,
const void* src, size_t srcSize,
const HUF_CElt* CTable)
{
return HUF_compress1X_usingCTable_internal_body(dst, dstSize, src, srcSize, CTable);
}
static size_t
HUF_compress1X_usingCTable_internal_default(void* dst, size_t dstSize,
const void* src, size_t srcSize,
const HUF_CElt* CTable)
{
return HUF_compress1X_usingCTable_internal_body(dst, dstSize, src, srcSize, CTable);
}
static size_t
HUF_compress1X_usingCTable_internal(void* dst, size_t dstSize,
const void* src, size_t srcSize,
const HUF_CElt* CTable, const int flags)
{
if (flags & HUF_flags_bmi2) {
return HUF_compress1X_usingCTable_internal_bmi2(dst, dstSize, src, srcSize, CTable);
}
return HUF_compress1X_usingCTable_internal_default(dst, dstSize, src, srcSize, CTable);
}
#else
static size_t
HUF_compress1X_usingCTable_internal(void* dst, size_t dstSize,
const void* src, size_t srcSize,
const HUF_CElt* CTable, const int flags)
{
(void)flags;
return HUF_compress1X_usingCTable_internal_body(dst, dstSize, src, srcSize, CTable);
}
#endif
size_t HUF_compress1X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable, int flags)
{
return HUF_compress1X_usingCTable_internal(dst, dstSize, src, srcSize, CTable, flags);
}
static size_t
HUF_compress4X_usingCTable_internal(void* dst, size_t dstSize,
const void* src, size_t srcSize,
const HUF_CElt* CTable, int flags)
{
size_t const segmentSize = (srcSize+3)/4; /* first 3 segments */
const BYTE* ip = (const BYTE*) src;
const BYTE* const iend = ip + srcSize;
BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ostart + dstSize;
BYTE* op = ostart;
if (dstSize < 6 + 1 + 1 + 1 + 8) return 0; /* minimum space to compress successfully */
if (srcSize < 12) return 0; /* no saving possible : too small input */
op += 6; /* jumpTable */
assert(op <= oend);
{ CHECK_V_F(cSize, HUF_compress1X_usingCTable_internal(op, (size_t)(oend-op), ip, segmentSize, CTable, flags) );
if (cSize == 0 || cSize > 65535) return 0;
MEM_writeLE16(ostart, (U16)cSize);
op += cSize;
}
ip += segmentSize;
assert(op <= oend);
{ CHECK_V_F(cSize, HUF_compress1X_usingCTable_internal(op, (size_t)(oend-op), ip, segmentSize, CTable, flags) );
if (cSize == 0 || cSize > 65535) return 0;
MEM_writeLE16(ostart+2, (U16)cSize);
op += cSize;
}
ip += segmentSize;
assert(op <= oend);
{ CHECK_V_F(cSize, HUF_compress1X_usingCTable_internal(op, (size_t)(oend-op), ip, segmentSize, CTable, flags) );
if (cSize == 0 || cSize > 65535) return 0;
MEM_writeLE16(ostart+4, (U16)cSize);
op += cSize;
}
ip += segmentSize;
assert(op <= oend);
assert(ip <= iend);
{ CHECK_V_F(cSize, HUF_compress1X_usingCTable_internal(op, (size_t)(oend-op), ip, (size_t)(iend-ip), CTable, flags) );
if (cSize == 0 || cSize > 65535) return 0;
op += cSize;
}
return (size_t)(op-ostart);
}
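/* Output layout produced above: a 6-byte jump table holding the compressed
 * sizes of the first three streams as little-endian 16-bit values, followed
 * by the four compressed streams. The first three streams each encode
 * (srcSize+3)/4 input bytes; the last stream encodes the remainder, and its
 * size is implied by the total. */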
size_t HUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable, int flags)
{
return HUF_compress4X_usingCTable_internal(dst, dstSize, src, srcSize, CTable, flags);
}
typedef enum { HUF_singleStream, HUF_fourStreams } HUF_nbStreams_e;
static size_t HUF_compressCTable_internal(
BYTE* const ostart, BYTE* op, BYTE* const oend,
const void* src, size_t srcSize,
HUF_nbStreams_e nbStreams, const HUF_CElt* CTable, const int flags)
{
size_t const cSize = (nbStreams==HUF_singleStream) ?
HUF_compress1X_usingCTable_internal(op, (size_t)(oend - op), src, srcSize, CTable, flags) :
HUF_compress4X_usingCTable_internal(op, (size_t)(oend - op), src, srcSize, CTable, flags);
if (HUF_isError(cSize)) { return cSize; }
if (cSize==0) { return 0; } /* uncompressible */
op += cSize;
/* check compressibility */
assert(op >= ostart);
if ((size_t)(op-ostart) >= srcSize-1) { return 0; }
return (size_t)(op-ostart);
}
typedef struct {
unsigned count[HUF_SYMBOLVALUE_MAX + 1];
HUF_CElt CTable[HUF_CTABLE_SIZE_ST(HUF_SYMBOLVALUE_MAX)];
union {
HUF_buildCTable_wksp_tables buildCTable_wksp;
HUF_WriteCTableWksp writeCTable_wksp;
U32 hist_wksp[HIST_WKSP_SIZE_U32];
} wksps;
} HUF_compress_tables_t;
#define SUSPECT_INCOMPRESSIBLE_SAMPLE_SIZE 4096
#define SUSPECT_INCOMPRESSIBLE_SAMPLE_RATIO 10 /* Must be >= 2 */
unsigned HUF_cardinality(const unsigned* count, unsigned maxSymbolValue)
{
unsigned cardinality = 0;
unsigned i;
for (i = 0; i < maxSymbolValue + 1; i++) {
if (count[i] != 0) cardinality += 1;
}
return cardinality;
}
unsigned HUF_minTableLog(unsigned symbolCardinality)
{
U32 minBitsSymbols = ZSTD_highbit32(symbolCardinality) + 1;
return minBitsSymbols;
}
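/* Example: a histogram with 20 distinct symbols gives
 * HUF_minTableLog(20) == ZSTD_highbit32(20) + 1 == 4 + 1 == 5,
 * i.e. at least 5 bits are needed to give every present symbol a code. */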
unsigned HUF_optimalTableLog(
unsigned maxTableLog,
size_t srcSize,
unsigned maxSymbolValue,
void* workSpace, size_t wkspSize,
HUF_CElt* table,
const unsigned* count,
int flags)
{
assert(srcSize > 1); /* Not supported, RLE should be used instead */
assert(wkspSize >= sizeof(HUF_buildCTable_wksp_tables));
if (!(flags & HUF_flags_optimalDepth)) {
/* cheap evaluation, based on FSE */
return FSE_optimalTableLog_internal(maxTableLog, srcSize, maxSymbolValue, 1);
}
{ BYTE* dst = (BYTE*)workSpace + sizeof(HUF_WriteCTableWksp);
size_t dstSize = wkspSize - sizeof(HUF_WriteCTableWksp);
size_t hSize, newSize;
const unsigned symbolCardinality = HUF_cardinality(count, maxSymbolValue);
const unsigned minTableLog = HUF_minTableLog(symbolCardinality);
size_t optSize = ((size_t) ~0) - 1;
unsigned optLog = maxTableLog, optLogGuess;
DEBUGLOG(6, "HUF_optimalTableLog: probing huf depth (srcSize=%zu)", srcSize);
/* Search until size increases */
for (optLogGuess = minTableLog; optLogGuess <= maxTableLog; optLogGuess++) {
DEBUGLOG(7, "checking for huffLog=%u", optLogGuess);
{ size_t maxBits = HUF_buildCTable_wksp(table, count, maxSymbolValue, optLogGuess, workSpace, wkspSize);
if (ERR_isError(maxBits)) continue;
if (maxBits < optLogGuess && optLogGuess > minTableLog) break;
hSize = HUF_writeCTable_wksp(dst, dstSize, table, maxSymbolValue, (U32)maxBits, workSpace, wkspSize);
}
if (ERR_isError(hSize)) continue;
newSize = HUF_estimateCompressedSize(table, count, maxSymbolValue) + hSize;
if (newSize > optSize + 1) {
break;
}
if (newSize < optSize) {
optSize = newSize;
optLog = optLogGuess;
}
}
assert(optLog <= HUF_TABLELOG_MAX);
return optLog;
}
}
/* HUF_compress_internal() :
* `workSpace` must be aligned on 4-byte boundaries,
* and occupies the same space as a table of HUF_WORKSPACE_SIZE_U64 unsigned */
static size_t
HUF_compress_internal (void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned huffLog,
HUF_nbStreams_e nbStreams,
void* workSpace, size_t wkspSize,
HUF_CElt* oldHufTable, HUF_repeat* repeat, int flags)
{
HUF_compress_tables_t* const table = (HUF_compress_tables_t*)HUF_alignUpWorkspace(workSpace, &wkspSize, ZSTD_ALIGNOF(size_t));
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstSize;
BYTE* op = ostart;
DEBUGLOG(5, "HUF_compress_internal (srcSize=%zu)", srcSize);
HUF_STATIC_ASSERT(sizeof(*table) + HUF_WORKSPACE_MAX_ALIGNMENT <= HUF_WORKSPACE_SIZE);
/* checks & inits */
if (wkspSize < sizeof(*table)) return ERROR(workSpace_tooSmall);
if (!srcSize) return 0; /* Uncompressed */
if (!dstSize) return 0; /* cannot fit anything within dst budget */
if (srcSize > HUF_BLOCKSIZE_MAX) return ERROR(srcSize_wrong); /* current block size limit */
if (huffLog > HUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
if (maxSymbolValue > HUF_SYMBOLVALUE_MAX) return ERROR(maxSymbolValue_tooLarge);
if (!maxSymbolValue) maxSymbolValue = HUF_SYMBOLVALUE_MAX;
if (!huffLog) huffLog = HUF_TABLELOG_DEFAULT;
/* Heuristic : If old table is valid, use it for small inputs */
if ((flags & HUF_flags_preferRepeat) && repeat && *repeat == HUF_repeat_valid) {
return HUF_compressCTable_internal(ostart, op, oend,
src, srcSize,
nbStreams, oldHufTable, flags);
}
/* If uncompressible data is suspected, do a smaller sampling first */
DEBUG_STATIC_ASSERT(SUSPECT_INCOMPRESSIBLE_SAMPLE_RATIO >= 2);
if ((flags & HUF_flags_suspectUncompressible) && srcSize >= (SUSPECT_INCOMPRESSIBLE_SAMPLE_SIZE * SUSPECT_INCOMPRESSIBLE_SAMPLE_RATIO)) {
size_t largestTotal = 0;
DEBUGLOG(5, "input suspected incompressible : sampling to check");
{ unsigned maxSymbolValueBegin = maxSymbolValue;
CHECK_V_F(largestBegin, HIST_count_simple (table->count, &maxSymbolValueBegin, (const BYTE*)src, SUSPECT_INCOMPRESSIBLE_SAMPLE_SIZE) );
largestTotal += largestBegin;
}
{ unsigned maxSymbolValueEnd = maxSymbolValue;
CHECK_V_F(largestEnd, HIST_count_simple (table->count, &maxSymbolValueEnd, (const BYTE*)src + srcSize - SUSPECT_INCOMPRESSIBLE_SAMPLE_SIZE, SUSPECT_INCOMPRESSIBLE_SAMPLE_SIZE) );
largestTotal += largestEnd;
}
if (largestTotal <= ((2 * SUSPECT_INCOMPRESSIBLE_SAMPLE_SIZE) >> 7)+4) return 0; /* heuristic : probably not compressible enough */
}
/* Scan input and build symbol stats */
{ CHECK_V_F(largest, HIST_count_wksp (table->count, &maxSymbolValue, (const BYTE*)src, srcSize, table->wksps.hist_wksp, sizeof(table->wksps.hist_wksp)) );
if (largest == srcSize) { *ostart = ((const BYTE*)src)[0]; return 1; } /* single symbol, rle */
if (largest <= (srcSize >> 7)+4) return 0; /* heuristic : probably not compressible enough */
}
DEBUGLOG(6, "histogram detail completed (%zu symbols)", showU32(table->count, maxSymbolValue+1));
/* Check validity of previous table */
if ( repeat
&& *repeat == HUF_repeat_check
&& !HUF_validateCTable(oldHufTable, table->count, maxSymbolValue)) {
*repeat = HUF_repeat_none;
}
/* Heuristic : use existing table for small inputs */
if ((flags & HUF_flags_preferRepeat) && repeat && *repeat != HUF_repeat_none) {
return HUF_compressCTable_internal(ostart, op, oend,
src, srcSize,
nbStreams, oldHufTable, flags);
}
/* Build Huffman Tree */
huffLog = HUF_optimalTableLog(huffLog, srcSize, maxSymbolValue, &table->wksps, sizeof(table->wksps), table->CTable, table->count, flags);
{ size_t const maxBits = HUF_buildCTable_wksp(table->CTable, table->count,
maxSymbolValue, huffLog,
&table->wksps.buildCTable_wksp, sizeof(table->wksps.buildCTable_wksp));
CHECK_F(maxBits);
huffLog = (U32)maxBits;
DEBUGLOG(6, "bit distribution completed (%zu symbols)", showCTableBits(table->CTable + 1, maxSymbolValue+1));
}
/* Write table description header */
{ CHECK_V_F(hSize, HUF_writeCTable_wksp(op, dstSize, table->CTable, maxSymbolValue, huffLog,
&table->wksps.writeCTable_wksp, sizeof(table->wksps.writeCTable_wksp)) );
/* Check if using previous huffman table is beneficial */
if (repeat && *repeat != HUF_repeat_none) {
size_t const oldSize = HUF_estimateCompressedSize(oldHufTable, table->count, maxSymbolValue);
size_t const newSize = HUF_estimateCompressedSize(table->CTable, table->count, maxSymbolValue);
if (oldSize <= hSize + newSize || hSize + 12 >= srcSize) {
return HUF_compressCTable_internal(ostart, op, oend,
src, srcSize,
nbStreams, oldHufTable, flags);
} }
/* Use the new huffman table */
if (hSize + 12ul >= srcSize) { return 0; }
op += hSize;
if (repeat) { *repeat = HUF_repeat_none; }
if (oldHufTable)
ZSTD_memcpy(oldHufTable, table->CTable, sizeof(table->CTable)); /* Save new table */
}
return HUF_compressCTable_internal(ostart, op, oend,
src, srcSize,
nbStreams, table->CTable, flags);
}
size_t HUF_compress1X_repeat (void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned huffLog,
void* workSpace, size_t wkspSize,
HUF_CElt* hufTable, HUF_repeat* repeat, int flags)
{
DEBUGLOG(5, "HUF_compress1X_repeat (srcSize = %zu)", srcSize);
return HUF_compress_internal(dst, dstSize, src, srcSize,
maxSymbolValue, huffLog, HUF_singleStream,
workSpace, wkspSize, hufTable,
repeat, flags);
}
/* HUF_compress4X_repeat():
* compresses input using 4 streams.
* may skip compression quickly when the input looks incompressible.
* can reuse an existing huffman compression table when that is beneficial. */
size_t HUF_compress4X_repeat (void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned huffLog,
void* workSpace, size_t wkspSize,
HUF_CElt* hufTable, HUF_repeat* repeat, int flags)
{
DEBUGLOG(5, "HUF_compress4X_repeat (srcSize = %zu)", srcSize);
return HUF_compress_internal(dst, dstSize, src, srcSize,
maxSymbolValue, huffLog, HUF_fourStreams,
workSpace, wkspSize,
hufTable, repeat, flags);
}
/**** ended inlining compress/huf_compress.c ****/
/**** start inlining compress/zstd_compress_literals.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-*************************************
* Dependencies
***************************************/
/**** start inlining zstd_compress_literals.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_COMPRESS_LITERALS_H
#define ZSTD_COMPRESS_LITERALS_H
/**** start inlining zstd_compress_internal.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* This header contains definitions
* that shall **only** be used by modules within lib/compress.
*/
#ifndef ZSTD_COMPRESS_H
#define ZSTD_COMPRESS_H
/*-*************************************
* Dependencies
***************************************/
/**** skipping file: ../common/zstd_internal.h ****/
/**** start inlining zstd_cwksp.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_CWKSP_H
#define ZSTD_CWKSP_H
/*-*************************************
* Dependencies
***************************************/
/**** skipping file: ../common/allocations.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: ../common/portability_macros.h ****/
/**** skipping file: ../common/compiler.h ****/
/*-*************************************
* Constants
***************************************/
/* Since the workspace is effectively its own little malloc implementation /
* arena, when we run under ASAN, we should similarly insert redzones between
* each internal element of the workspace, so ASAN will catch overruns that
* reach outside an object but that stay inside the workspace.
*
* This defines the size of that redzone.
*/
#ifndef ZSTD_CWKSP_ASAN_REDZONE_SIZE
#define ZSTD_CWKSP_ASAN_REDZONE_SIZE 128
#endif
/* Set our tables and aligneds to align by 64 bytes */
#define ZSTD_CWKSP_ALIGNMENT_BYTES 64
/*-*************************************
* Structures
***************************************/
typedef enum {
ZSTD_cwksp_alloc_objects,
ZSTD_cwksp_alloc_aligned_init_once,
ZSTD_cwksp_alloc_aligned,
ZSTD_cwksp_alloc_buffers
} ZSTD_cwksp_alloc_phase_e;
/**
* Used to describe whether the workspace is statically allocated (and will not
* necessarily ever be freed), or if it's dynamically allocated and we can
* expect a well-formed caller to free this.
*/
typedef enum {
ZSTD_cwksp_dynamic_alloc,
ZSTD_cwksp_static_alloc
} ZSTD_cwksp_static_alloc_e;
/**
* Zstd fits all its internal datastructures into a single contiguous buffer,
* so that it only needs to perform a single OS allocation (or so that a buffer
* can be provided to it and it can perform no allocations at all). This buffer
* is called the workspace.
*
* Several optimizations complicate that process of allocating memory ranges
* from this workspace for each internal datastructure:
*
* - These different internal datastructures have different setup requirements:
*
* - The static objects need to be cleared once and can then be trivially
* reused for each compression.
*
* - Various buffers don't need to be initialized at all--they are always
* written into before they're read.
*
* - The matchstate tables have a unique requirement that they don't need
* their memory to be totally cleared, but they do need the memory to have
* some bound, i.e., a guarantee that all values in the memory they've been
* allocated are less than some maximum value (which is the starting value
* for the indices that they will then use for compression). When this
* guarantee is provided to them, they can use the memory without any setup
* work. When it isn't, they have to clear the area.
*
* - These buffers also have different alignment requirements.
*
* - We would like to reuse the objects in the workspace for multiple
* compressions without having to perform any expensive reallocation or
* reinitialization work.
*
* - We would like to be able to efficiently reuse the workspace across
* multiple compressions **even when the compression parameters change** and
* we need to resize some of the objects (where possible).
*
* To attempt to manage this buffer, given these constraints, the ZSTD_cwksp
* abstraction was created. It works as follows:
*
* Workspace Layout:
*
* [ ... workspace ... ]
* [objects][tables ->] free space [<- buffers][<- aligned][<- init once]
*
* The various objects that live in the workspace are divided into the
* following categories, and are allocated separately:
*
* - Static objects: this is optionally the enclosing ZSTD_CCtx or ZSTD_CDict,
* so that literally everything fits in a single buffer. Note: if present,
* this must be the first object in the workspace, since ZSTD_customFree{CCtx,
* CDict}() rely on a pointer comparison to see whether one or two frees are
* required.
*
* - Fixed size objects: these are fixed-size, fixed-count objects that are
* nonetheless "dynamically" allocated in the workspace so that we can
* control how they're initialized separately from the broader ZSTD_CCtx.
* Examples:
* - Entropy Workspace
* - 2 x ZSTD_compressedBlockState_t
* - CDict dictionary contents
*
* - Tables: these are any of several different datastructures (hash tables,
* chain tables, binary trees) that all respect a common format: they are
* uint32_t arrays, all of whose values are between 0 and (nextSrc - base).
* Their sizes depend on the cparams. These tables are 64-byte aligned.
*
* - Init once: these buffers must be initialized at least once before
* use. They should be used when we want to skip memory initialization
* while not triggering memory checkers (like Valgrind) when reading
* from this memory without writing to it first.
* These buffers should be used carefully as they might contain data
* from previous compressions.
* Buffers are aligned to 64 bytes.
*
* - Aligned: these buffers don't require any initialization before they're
* used. The user of the buffer should make sure they write into a buffer
* location before reading from it.
* Buffers are aligned to 64 bytes.
*
* - Buffers: these buffers are used for various purposes that don't require
* any alignment or initialization before they're used. This means they can
* be moved around at no cost for a new compression.
*
* Allocating Memory:
*
* The various types of objects must be allocated in order, so they can be
* correctly packed into the workspace buffer. That order is:
*
* 1. Objects
* 2. Init once / Tables
* 3. Aligned / Tables
* 4. Buffers / Tables
*
* Attempts to reserve objects of different types out of order will fail.
*/
typedef struct {
void* workspace;
void* workspaceEnd;
void* objectEnd;
void* tableEnd;
void* tableValidEnd;
void* allocStart;
void* initOnceStart;
BYTE allocFailed;
int workspaceOversizedDuration;
ZSTD_cwksp_alloc_phase_e phase;
ZSTD_cwksp_static_alloc_e isStatic;
} ZSTD_cwksp;
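/* Illustrative usage sketch of the allocation-order rule documented above.
 * The variable names and sizes below are placeholders, not library code;
 * sizes must respect the per-phase expectations (e.g. table sizes are
 * multiples of 64 bytes).
 *
 *   ZSTD_cwksp ws;
 *   ZSTD_cwksp_init(&ws, mem, memSize, ZSTD_cwksp_static_alloc);
 *   void* obj = ZSTD_cwksp_reserve_object(&ws, objSize);      (1. objects)
 *   void* tbl = ZSTD_cwksp_reserve_table(&ws, tblSize);       (2. tables)
 *   void* aln = ZSTD_cwksp_reserve_aligned64(&ws, alnSize);   (3. aligned)
 *   BYTE* buf = ZSTD_cwksp_reserve_buffer(&ws, bufSize);      (4. buffers)
 *
 * As noted above, reserving object types out of order is not supported:
 * for instance ZSTD_cwksp_reserve_object() fails (returns NULL) once a
 * later allocation phase has begun. */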
/*-*************************************
* Functions
***************************************/
MEM_STATIC size_t ZSTD_cwksp_available_space(ZSTD_cwksp* ws);
MEM_STATIC void* ZSTD_cwksp_initialAllocStart(ZSTD_cwksp* ws);
MEM_STATIC void ZSTD_cwksp_assert_internal_consistency(ZSTD_cwksp* ws) {
(void)ws;
assert(ws->workspace <= ws->objectEnd);
assert(ws->objectEnd <= ws->tableEnd);
assert(ws->objectEnd <= ws->tableValidEnd);
assert(ws->tableEnd <= ws->allocStart);
assert(ws->tableValidEnd <= ws->allocStart);
assert(ws->allocStart <= ws->workspaceEnd);
assert(ws->initOnceStart <= ZSTD_cwksp_initialAllocStart(ws));
assert(ws->workspace <= ws->initOnceStart);
#if ZSTD_MEMORY_SANITIZER
{
intptr_t const offset = __msan_test_shadow(ws->initOnceStart,
(U8*)ZSTD_cwksp_initialAllocStart(ws) - (U8*)ws->initOnceStart);
(void)offset;
#if defined(ZSTD_MSAN_PRINT)
if(offset!=-1) {
__msan_print_shadow((U8*)ws->initOnceStart + offset - 8, 32);
}
#endif
assert(offset==-1);
};
#endif
}
/**
* Align must be a power of 2.
*/
MEM_STATIC size_t ZSTD_cwksp_align(size_t size, size_t align) {
size_t const mask = align - 1;
assert(ZSTD_isPower2(align));
return (size + mask) & ~mask;
}
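/* Example: ZSTD_cwksp_align(10, 8) == 16 and ZSTD_cwksp_align(16, 8) == 16. */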
/**
* Use this to determine how much space in the workspace we will consume to
* allocate this object. (Normally it should be exactly the size of the object,
* but under special conditions, like ASAN, where we pad each object, it might
* be larger.)
*
* Since tables aren't currently redzoned, you don't need to call through this
* to figure out how much space you need for the matchState tables. Everything
* else is though.
*
* Do not use for sizing aligned buffers. Instead, use ZSTD_cwksp_aligned64_alloc_size().
*/
MEM_STATIC size_t ZSTD_cwksp_alloc_size(size_t size) {
if (size == 0)
return 0;
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
return size + 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE;
#else
return size;
#endif
}
MEM_STATIC size_t ZSTD_cwksp_aligned_alloc_size(size_t size, size_t alignment) {
return ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(size, alignment));
}
/**
* Returns an adjusted alloc size that is the nearest larger multiple of 64 bytes.
* Used to determine the number of bytes required for a given "aligned".
*/
MEM_STATIC size_t ZSTD_cwksp_aligned64_alloc_size(size_t size) {
return ZSTD_cwksp_aligned_alloc_size(size, ZSTD_CWKSP_ALIGNMENT_BYTES);
}
/**
* Returns the amount of additional space the cwksp must allocate
* for internal purposes (currently only alignment).
*/
MEM_STATIC size_t ZSTD_cwksp_slack_space_required(void) {
/* For alignment, the wksp will always allocate an additional 2*ZSTD_CWKSP_ALIGNMENT_BYTES
* bytes to align the beginning of tables section and end of buffers;
*/
size_t const slackSpace = ZSTD_CWKSP_ALIGNMENT_BYTES * 2;
return slackSpace;
}
/**
* Return the number of additional bytes required to align a pointer to the given number of bytes.
* alignBytes must be a power of two.
*/
MEM_STATIC size_t ZSTD_cwksp_bytes_to_align_ptr(void* ptr, const size_t alignBytes) {
size_t const alignBytesMask = alignBytes - 1;
size_t const bytes = (alignBytes - ((size_t)ptr & (alignBytesMask))) & alignBytesMask;
assert(ZSTD_isPower2(alignBytes));
assert(bytes < alignBytes);
return bytes;
}
/**
* Returns the initial value for allocStart which is used to determine the position from
* which we can allocate from the end of the workspace.
*/
MEM_STATIC void* ZSTD_cwksp_initialAllocStart(ZSTD_cwksp* ws)
{
char* endPtr = (char*)ws->workspaceEnd;
assert(ZSTD_isPower2(ZSTD_CWKSP_ALIGNMENT_BYTES));
endPtr = endPtr - ((size_t)endPtr % ZSTD_CWKSP_ALIGNMENT_BYTES);
return (void*)endPtr;
}
/**
* Internal function. Do not use directly.
* Reserves the given number of bytes within the aligned/buffer segment of the wksp,
* which counts from the end of the wksp (as opposed to the object/table segment).
*
* Returns a pointer to the beginning of that space.
*/
MEM_STATIC void*
ZSTD_cwksp_reserve_internal_buffer_space(ZSTD_cwksp* ws, size_t const bytes)
{
void* const alloc = (BYTE*)ws->allocStart - bytes;
void* const bottom = ws->tableEnd;
DEBUGLOG(5, "cwksp: reserving [0x%p]:%zd bytes; %zd bytes remaining",
alloc, bytes, ZSTD_cwksp_available_space(ws) - bytes);
ZSTD_cwksp_assert_internal_consistency(ws);
assert(alloc >= bottom);
if (alloc < bottom) {
DEBUGLOG(4, "cwksp: alloc failed!");
ws->allocFailed = 1;
return NULL;
}
/* the area is reserved from the end of wksp.
* If it overlaps with tableValidEnd, it voids guarantees on values' range */
if (alloc < ws->tableValidEnd) {
ws->tableValidEnd = alloc;
}
ws->allocStart = alloc;
return alloc;
}
/**
* Moves the cwksp to the next phase, and does any necessary allocations.
* cwksp initialization must necessarily go through each phase in order.
* Returns a 0 on success, or zstd error
*/
MEM_STATIC size_t
ZSTD_cwksp_internal_advance_phase(ZSTD_cwksp* ws, ZSTD_cwksp_alloc_phase_e phase)
{
assert(phase >= ws->phase);
if (phase > ws->phase) {
/* Going from allocating objects to allocating initOnce / tables */
if (ws->phase < ZSTD_cwksp_alloc_aligned_init_once &&
phase >= ZSTD_cwksp_alloc_aligned_init_once) {
ws->tableValidEnd = ws->objectEnd;
ws->initOnceStart = ZSTD_cwksp_initialAllocStart(ws);
{ /* Align the start of the tables to 64 bytes. Use [0, 63] bytes */
void *const alloc = ws->objectEnd;
size_t const bytesToAlign = ZSTD_cwksp_bytes_to_align_ptr(alloc, ZSTD_CWKSP_ALIGNMENT_BYTES);
void *const objectEnd = (BYTE *) alloc + bytesToAlign;
DEBUGLOG(5, "reserving table alignment addtl space: %zu", bytesToAlign);
RETURN_ERROR_IF(objectEnd > ws->workspaceEnd, memory_allocation,
"table phase - alignment initial allocation failed!");
ws->objectEnd = objectEnd;
ws->tableEnd = objectEnd; /* table area starts being empty */
if (ws->tableValidEnd < ws->tableEnd) {
ws->tableValidEnd = ws->tableEnd;
}
}
}
ws->phase = phase;
ZSTD_cwksp_assert_internal_consistency(ws);
}
return 0;
}
/**
* Returns whether this object/buffer/etc was allocated in this workspace.
*/
MEM_STATIC int ZSTD_cwksp_owns_buffer(const ZSTD_cwksp* ws, const void* ptr)
{
return (ptr != NULL) && (ws->workspace <= ptr) && (ptr < ws->workspaceEnd);
}
/**
* Internal function. Do not use directly.
*/
MEM_STATIC void*
ZSTD_cwksp_reserve_internal(ZSTD_cwksp* ws, size_t bytes, ZSTD_cwksp_alloc_phase_e phase)
{
void* alloc;
if (ZSTD_isError(ZSTD_cwksp_internal_advance_phase(ws, phase)) || bytes == 0) {
return NULL;
}
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* over-reserve space */
bytes += 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE;
#endif
alloc = ZSTD_cwksp_reserve_internal_buffer_space(ws, bytes);
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* Move alloc so there's ZSTD_CWKSP_ASAN_REDZONE_SIZE unused space on
* either side. */
if (alloc) {
alloc = (BYTE *)alloc + ZSTD_CWKSP_ASAN_REDZONE_SIZE;
if (ws->isStatic == ZSTD_cwksp_dynamic_alloc) {
/* We need to keep the redzone poisoned while unpoisoning the bytes that
* are actually allocated. */
__asan_unpoison_memory_region(alloc, bytes - 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE);
}
}
#endif
return alloc;
}
/**
* Reserves and returns unaligned memory.
*/
MEM_STATIC BYTE* ZSTD_cwksp_reserve_buffer(ZSTD_cwksp* ws, size_t bytes)
{
return (BYTE*)ZSTD_cwksp_reserve_internal(ws, bytes, ZSTD_cwksp_alloc_buffers);
}
/**
* Reserves and returns memory sized on and aligned on ZSTD_CWKSP_ALIGNMENT_BYTES (64 bytes).
* This memory has been initialized at least once in the past.
* This doesn't mean it has been initialized this time, and it might contain data from previous
* operations.
* The main usage is for algorithms that might need read access into uninitialized memory.
* The algorithm must maintain safety under these conditions and must make sure it doesn't
* leak any of the past data (directly or in side channels).
*/
MEM_STATIC void* ZSTD_cwksp_reserve_aligned_init_once(ZSTD_cwksp* ws, size_t bytes)
{
size_t const alignedBytes = ZSTD_cwksp_align(bytes, ZSTD_CWKSP_ALIGNMENT_BYTES);
void* ptr = ZSTD_cwksp_reserve_internal(ws, alignedBytes, ZSTD_cwksp_alloc_aligned_init_once);
assert(((size_t)ptr & (ZSTD_CWKSP_ALIGNMENT_BYTES-1)) == 0);
if(ptr && ptr < ws->initOnceStart) {
/* We assume the memory following the current allocation is either:
* 1. Not usable as initOnce memory (end of workspace)
* 2. Another initOnce buffer that has been allocated before (and so was previously memset)
* 3. An ASAN redzone, in which case we don't want to write on it
* For these reasons it should be fine to not explicitly zero every byte up to ws->initOnceStart.
* Note that we assume here that MSAN and ASAN cannot run at the same time. */
ZSTD_memset(ptr, 0, MIN((size_t)((U8*)ws->initOnceStart - (U8*)ptr), alignedBytes));
ws->initOnceStart = ptr;
}
#if ZSTD_MEMORY_SANITIZER
assert(__msan_test_shadow(ptr, bytes) == -1);
#endif
return ptr;
}
/**
* Reserves and returns memory sized on and aligned on ZSTD_CWKSP_ALIGNMENT_BYTES (64 bytes).
*/
MEM_STATIC void* ZSTD_cwksp_reserve_aligned64(ZSTD_cwksp* ws, size_t bytes)
{
void* const ptr = ZSTD_cwksp_reserve_internal(ws,
ZSTD_cwksp_align(bytes, ZSTD_CWKSP_ALIGNMENT_BYTES),
ZSTD_cwksp_alloc_aligned);
assert(((size_t)ptr & (ZSTD_CWKSP_ALIGNMENT_BYTES-1)) == 0);
return ptr;
}
/**
* Aligned on 64 bytes. These buffers have the special property that
* their values remain constrained, allowing us to reuse them without
* memset()-ing them.
*/
MEM_STATIC void* ZSTD_cwksp_reserve_table(ZSTD_cwksp* ws, size_t bytes)
{
const ZSTD_cwksp_alloc_phase_e phase = ZSTD_cwksp_alloc_aligned_init_once;
void* alloc;
void* end;
void* top;
/* We can only start allocating tables after we are done reserving space for objects at the
* start of the workspace */
if(ws->phase < phase) {
if (ZSTD_isError(ZSTD_cwksp_internal_advance_phase(ws, phase))) {
return NULL;
}
}
alloc = ws->tableEnd;
end = (BYTE *)alloc + bytes;
top = ws->allocStart;
DEBUGLOG(5, "cwksp: reserving %p table %zd bytes, %zd bytes remaining",
alloc, bytes, ZSTD_cwksp_available_space(ws) - bytes);
assert((bytes & (sizeof(U32)-1)) == 0);
ZSTD_cwksp_assert_internal_consistency(ws);
assert(end <= top);
if (end > top) {
DEBUGLOG(4, "cwksp: table alloc failed!");
ws->allocFailed = 1;
return NULL;
}
ws->tableEnd = end;
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
if (ws->isStatic == ZSTD_cwksp_dynamic_alloc) {
__asan_unpoison_memory_region(alloc, bytes);
}
#endif
assert((bytes & (ZSTD_CWKSP_ALIGNMENT_BYTES-1)) == 0);
assert(((size_t)alloc & (ZSTD_CWKSP_ALIGNMENT_BYTES-1)) == 0);
return alloc;
}
/**
* Aligned on sizeof(void*).
* Note : should happen only once, at workspace first initialization
*/
MEM_STATIC void* ZSTD_cwksp_reserve_object(ZSTD_cwksp* ws, size_t bytes)
{
size_t const roundedBytes = ZSTD_cwksp_align(bytes, sizeof(void*));
void* alloc = ws->objectEnd;
void* end = (BYTE*)alloc + roundedBytes;
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* over-reserve space */
end = (BYTE *)end + 2 * ZSTD_CWKSP_ASAN_REDZONE_SIZE;
#endif
DEBUGLOG(4,
"cwksp: reserving %p object %zd bytes (rounded to %zd), %zd bytes remaining",
alloc, bytes, roundedBytes, ZSTD_cwksp_available_space(ws) - roundedBytes);
assert((size_t)alloc % ZSTD_ALIGNOF(void*) == 0);
assert(bytes % ZSTD_ALIGNOF(void*) == 0);
ZSTD_cwksp_assert_internal_consistency(ws);
/* we must be in the first phase, no advance is possible */
if (ws->phase != ZSTD_cwksp_alloc_objects || end > ws->workspaceEnd) {
DEBUGLOG(3, "cwksp: object alloc failed!");
ws->allocFailed = 1;
return NULL;
}
ws->objectEnd = end;
ws->tableEnd = end;
ws->tableValidEnd = end;
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* Move alloc so there's ZSTD_CWKSP_ASAN_REDZONE_SIZE unused space on
* either side. */
alloc = (BYTE*)alloc + ZSTD_CWKSP_ASAN_REDZONE_SIZE;
if (ws->isStatic == ZSTD_cwksp_dynamic_alloc) {
__asan_unpoison_memory_region(alloc, bytes);
}
#endif
return alloc;
}
/**
* with alignment control
* Note : should happen only once, at workspace first initialization
*/
MEM_STATIC void* ZSTD_cwksp_reserve_object_aligned(ZSTD_cwksp* ws, size_t byteSize, size_t alignment)
{
size_t const mask = alignment - 1;
size_t const surplus = (alignment > sizeof(void*)) ? alignment - sizeof(void*) : 0;
void* const start = ZSTD_cwksp_reserve_object(ws, byteSize + surplus);
if (start == NULL) return NULL;
if (surplus == 0) return start;
assert(ZSTD_isPower2(alignment));
return (void*)(((size_t)start + surplus) & ~mask);
}
MEM_STATIC void ZSTD_cwksp_mark_tables_dirty(ZSTD_cwksp* ws)
{
DEBUGLOG(4, "cwksp: ZSTD_cwksp_mark_tables_dirty");
#if ZSTD_MEMORY_SANITIZER && !defined (ZSTD_MSAN_DONT_POISON_WORKSPACE)
/* To validate that the table reuse logic is sound, and that we don't
* access table space that we haven't cleaned, we re-"poison" the table
* space every time we mark it dirty.
* Since tableValidEnd space and initOnce space may overlap we don't poison
* the initOnce portion as it would break its promise. This means that this poisoning
* check isn't always applied fully. */
{
size_t size = (BYTE*)ws->tableValidEnd - (BYTE*)ws->objectEnd;
assert(__msan_test_shadow(ws->objectEnd, size) == -1);
if((BYTE*)ws->tableValidEnd < (BYTE*)ws->initOnceStart) {
__msan_poison(ws->objectEnd, size);
} else {
assert(ws->initOnceStart >= ws->objectEnd);
__msan_poison(ws->objectEnd, (BYTE*)ws->initOnceStart - (BYTE*)ws->objectEnd);
}
}
#endif
assert(ws->tableValidEnd >= ws->objectEnd);
assert(ws->tableValidEnd <= ws->allocStart);
ws->tableValidEnd = ws->objectEnd;
ZSTD_cwksp_assert_internal_consistency(ws);
}
MEM_STATIC void ZSTD_cwksp_mark_tables_clean(ZSTD_cwksp* ws) {
DEBUGLOG(4, "cwksp: ZSTD_cwksp_mark_tables_clean");
assert(ws->tableValidEnd >= ws->objectEnd);
assert(ws->tableValidEnd <= ws->allocStart);
if (ws->tableValidEnd < ws->tableEnd) {
ws->tableValidEnd = ws->tableEnd;
}
ZSTD_cwksp_assert_internal_consistency(ws);
}
/**
* Zero the part of the allocated tables not already marked clean.
*/
MEM_STATIC void ZSTD_cwksp_clean_tables(ZSTD_cwksp* ws) {
DEBUGLOG(4, "cwksp: ZSTD_cwksp_clean_tables");
assert(ws->tableValidEnd >= ws->objectEnd);
assert(ws->tableValidEnd <= ws->allocStart);
if (ws->tableValidEnd < ws->tableEnd) {
ZSTD_memset(ws->tableValidEnd, 0, (size_t)((BYTE*)ws->tableEnd - (BYTE*)ws->tableValidEnd));
}
ZSTD_cwksp_mark_tables_clean(ws);
}
/**
* Invalidates table allocations.
* All other allocations remain valid.
*/
MEM_STATIC void ZSTD_cwksp_clear_tables(ZSTD_cwksp* ws)
{
DEBUGLOG(4, "cwksp: clearing tables!");
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* We don't do this when the workspace is statically allocated, because
* when that is the case, we have no capability to hook into the end of the
* workspace's lifecycle to unpoison the memory.
*/
if (ws->isStatic == ZSTD_cwksp_dynamic_alloc) {
size_t size = (BYTE*)ws->tableValidEnd - (BYTE*)ws->objectEnd;
__asan_poison_memory_region(ws->objectEnd, size);
}
#endif
ws->tableEnd = ws->objectEnd;
ZSTD_cwksp_assert_internal_consistency(ws);
}
/**
* Invalidates all buffer, aligned, and table allocations.
* Object allocations remain valid.
*/
MEM_STATIC void ZSTD_cwksp_clear(ZSTD_cwksp* ws) {
DEBUGLOG(4, "cwksp: clearing!");
#if ZSTD_MEMORY_SANITIZER && !defined (ZSTD_MSAN_DONT_POISON_WORKSPACE)
/* To validate that the context reuse logic is sound, and that we don't
* access stuff that this compression hasn't initialized, we re-"poison"
* the workspace except for the areas in which we expect memory reuse
* without initialization (objects, valid tables area and init once
* memory). */
{
if((BYTE*)ws->tableValidEnd < (BYTE*)ws->initOnceStart) {
size_t size = (BYTE*)ws->initOnceStart - (BYTE*)ws->tableValidEnd;
__msan_poison(ws->tableValidEnd, size);
}
}
#endif
#if ZSTD_ADDRESS_SANITIZER && !defined (ZSTD_ASAN_DONT_POISON_WORKSPACE)
/* We don't do this when the workspace is statically allocated, because
* when that is the case, we have no capability to hook into the end of the
* workspace's lifecycle to unpoison the memory.
*/
if (ws->isStatic == ZSTD_cwksp_dynamic_alloc) {
size_t size = (BYTE*)ws->workspaceEnd - (BYTE*)ws->objectEnd;
__asan_poison_memory_region(ws->objectEnd, size);
}
#endif
ws->tableEnd = ws->objectEnd;
ws->allocStart = ZSTD_cwksp_initialAllocStart(ws);
ws->allocFailed = 0;
if (ws->phase > ZSTD_cwksp_alloc_aligned_init_once) {
ws->phase = ZSTD_cwksp_alloc_aligned_init_once;
}
ZSTD_cwksp_assert_internal_consistency(ws);
}
MEM_STATIC size_t ZSTD_cwksp_sizeof(const ZSTD_cwksp* ws) {
return (size_t)((BYTE*)ws->workspaceEnd - (BYTE*)ws->workspace);
}
MEM_STATIC size_t ZSTD_cwksp_used(const ZSTD_cwksp* ws) {
return (size_t)((BYTE*)ws->tableEnd - (BYTE*)ws->workspace)
+ (size_t)((BYTE*)ws->workspaceEnd - (BYTE*)ws->allocStart);
}
/**
* The provided workspace takes ownership of the buffer [start, start+size).
* Any existing values in the workspace are ignored (the previously managed
* buffer, if present, must be separately freed).
*/
MEM_STATIC void ZSTD_cwksp_init(ZSTD_cwksp* ws, void* start, size_t size, ZSTD_cwksp_static_alloc_e isStatic) {
DEBUGLOG(4, "cwksp: init'ing workspace with %zd bytes", size);
assert(((size_t)start & (sizeof(void*)-1)) == 0); /* ensure correct alignment */
ws->workspace = start;
ws->workspaceEnd = (BYTE*)start + size;
ws->objectEnd = ws->workspace;
ws->tableValidEnd = ws->objectEnd;
ws->initOnceStart = ZSTD_cwksp_initialAllocStart(ws);
ws->phase = ZSTD_cwksp_alloc_objects;
ws->isStatic = isStatic;
ZSTD_cwksp_clear(ws);
ws->workspaceOversizedDuration = 0;
ZSTD_cwksp_assert_internal_consistency(ws);
}
MEM_STATIC size_t ZSTD_cwksp_create(ZSTD_cwksp* ws, size_t size, ZSTD_customMem customMem) {
void* workspace = ZSTD_customMalloc(size, customMem);
DEBUGLOG(4, "cwksp: creating new workspace with %zd bytes", size);
RETURN_ERROR_IF(workspace == NULL, memory_allocation, "NULL pointer!");
ZSTD_cwksp_init(ws, workspace, size, ZSTD_cwksp_dynamic_alloc);
return 0;
}
MEM_STATIC void ZSTD_cwksp_free(ZSTD_cwksp* ws, ZSTD_customMem customMem) {
void *ptr = ws->workspace;
DEBUGLOG(4, "cwksp: freeing workspace");
#if ZSTD_MEMORY_SANITIZER && !defined(ZSTD_MSAN_DONT_POISON_WORKSPACE)
if (ptr != NULL && customMem.customFree != NULL) {
__msan_unpoison(ptr, ZSTD_cwksp_sizeof(ws));
}
#endif
ZSTD_memset(ws, 0, sizeof(ZSTD_cwksp));
ZSTD_customFree(ptr, customMem);
}
/**
* Moves the management of a workspace from one cwksp to another. The src cwksp
* is left in an invalid state (src must be re-init()'ed before it's used again).
*/
MEM_STATIC void ZSTD_cwksp_move(ZSTD_cwksp* dst, ZSTD_cwksp* src) {
*dst = *src;
ZSTD_memset(src, 0, sizeof(ZSTD_cwksp));
}
MEM_STATIC int ZSTD_cwksp_reserve_failed(const ZSTD_cwksp* ws) {
return ws->allocFailed;
}
/*-*************************************
* Functions Checking Free Space
***************************************/
/* ZSTD_cwksp_estimated_space_within_bounds() :
* Returns whether the estimated space needed for a wksp is within an acceptable limit of the
* actual amount of space used.
*/
MEM_STATIC int ZSTD_cwksp_estimated_space_within_bounds(const ZSTD_cwksp *const ws, size_t const estimatedSpace) {
/* We have alignment space between objects and tables, and between tables and buffers, so we can have up to twice
* the alignment bytes of difference between estimation and actual usage */
return (estimatedSpace - ZSTD_cwksp_slack_space_required()) <= ZSTD_cwksp_used(ws) &&
ZSTD_cwksp_used(ws) <= estimatedSpace;
}
MEM_STATIC size_t ZSTD_cwksp_available_space(ZSTD_cwksp* ws) {
return (size_t)((BYTE*)ws->allocStart - (BYTE*)ws->tableEnd);
}
MEM_STATIC int ZSTD_cwksp_check_available(ZSTD_cwksp* ws, size_t additionalNeededSpace) {
return ZSTD_cwksp_available_space(ws) >= additionalNeededSpace;
}
MEM_STATIC int ZSTD_cwksp_check_too_large(ZSTD_cwksp* ws, size_t additionalNeededSpace) {
return ZSTD_cwksp_check_available(
ws, additionalNeededSpace * ZSTD_WORKSPACETOOLARGE_FACTOR);
}
MEM_STATIC int ZSTD_cwksp_check_wasteful(ZSTD_cwksp* ws, size_t additionalNeededSpace) {
return ZSTD_cwksp_check_too_large(ws, additionalNeededSpace)
&& ws->workspaceOversizedDuration > ZSTD_WORKSPACETOOLARGE_MAXDURATION;
}
MEM_STATIC void ZSTD_cwksp_bump_oversized_duration(
ZSTD_cwksp* ws, size_t additionalNeededSpace) {
if (ZSTD_cwksp_check_too_large(ws, additionalNeededSpace)) {
ws->workspaceOversizedDuration++;
} else {
ws->workspaceOversizedDuration = 0;
}
}
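/* Worked example (illustrative; the factor and max duration are macros defined earlier
* in this file, typically 3 and 128): if a compression needs N bytes of workspace but the
* current workspace could satisfy 3*N, ZSTD_cwksp_check_too_large() reports it and the
* oversized-duration counter is bumped; once that counter exceeds the max duration,
* ZSTD_cwksp_check_wasteful() signals that the workspace should be shrunk. */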
#endif /* ZSTD_CWKSP_H */
/**** ended inlining zstd_cwksp.h ****/
#ifdef ZSTD_MULTITHREAD
/**** start inlining zstdmt_compress.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTDMT_COMPRESS_H
#define ZSTDMT_COMPRESS_H
/* === Dependencies === */
/**** skipping file: ../common/zstd_deps.h ****/
#define ZSTD_STATIC_LINKING_ONLY /* ZSTD_parameters */
/**** skipping file: ../zstd.h ****/
/* Note : This is an internal API.
* These APIs used to be exposed with ZSTDLIB_API,
* because it used to be the only way to invoke MT compression.
* Now, you must use ZSTD_compress2 and ZSTD_compressStream2() instead.
*
* This API requires ZSTD_MULTITHREAD to be defined during compilation,
* otherwise ZSTDMT_createCCtx*() will fail.
*/
/* === Constants === */
#ifndef ZSTDMT_NBWORKERS_MAX /* a different value can be selected at compile time */
# define ZSTDMT_NBWORKERS_MAX ((sizeof(void*)==4) /*32-bit*/ ? 64 : 256)
#endif
#ifndef ZSTDMT_JOBSIZE_MIN /* a different value can be selected at compile time */
# define ZSTDMT_JOBSIZE_MIN (512 KB)
#endif
#define ZSTDMT_JOBLOG_MAX (MEM_32bits() ? 29 : 30)
#define ZSTDMT_JOBSIZE_MAX (MEM_32bits() ? (512 MB) : (1024 MB))
/* ========================================================
* === Private interface, for use by ZSTD_compress.c ===
* === Not exposed in libzstd. Never invoke directly ===
* ======================================================== */
/* === Memory management === */
typedef struct ZSTDMT_CCtx_s ZSTDMT_CCtx;
/* Requires ZSTD_MULTITHREAD to be defined during compilation, otherwise it will return NULL. */
ZSTDMT_CCtx* ZSTDMT_createCCtx_advanced(unsigned nbWorkers,
ZSTD_customMem cMem,
ZSTD_threadPool *pool);
size_t ZSTDMT_freeCCtx(ZSTDMT_CCtx* mtctx);
size_t ZSTDMT_sizeof_CCtx(ZSTDMT_CCtx* mtctx);
/* === Streaming functions === */
size_t ZSTDMT_nextInputSizeHint(const ZSTDMT_CCtx* mtctx);
/*! ZSTDMT_initCStream_internal() :
* Private use only. Init streaming operation.
* expects params to be valid.
* must receive dict, or cdict, or none, but not both.
* mtctx can be freshly constructed or reused from a prior compression.
* If mtctx is reused, memory allocations from the prior compression may not be freed,
* even if they are not needed for the current compression.
* @return : 0, or an error code */
size_t ZSTDMT_initCStream_internal(ZSTDMT_CCtx* mtctx,
const void* dict, size_t dictSize, ZSTD_dictContentType_e dictContentType,
const ZSTD_CDict* cdict,
ZSTD_CCtx_params params, unsigned long long pledgedSrcSize);
/*! ZSTDMT_compressStream_generic() :
* Combines ZSTDMT_compressStream() with optional ZSTDMT_flushStream() or ZSTDMT_endStream()
* depending on flush directive.
* @return : minimum amount of data still to be flushed
* 0 if fully flushed
* or an error code
* note : needs to be init using any ZSTD_initCStream*() variant */
size_t ZSTDMT_compressStream_generic(ZSTDMT_CCtx* mtctx,
ZSTD_outBuffer* output,
ZSTD_inBuffer* input,
ZSTD_EndDirective endOp);
/*! ZSTDMT_toFlushNow()
* Tell how many bytes are ready to be flushed immediately.
* Probe the oldest active job (not yet entirely flushed) and check its output buffer.
* If it returns 0, it means either that there is no active job,
* or that the oldest job is still active but everything it has produced has already been flushed,
* in which case flushing is limited by the speed of the oldest job. */
size_t ZSTDMT_toFlushNow(ZSTDMT_CCtx* mtctx);
/*! ZSTDMT_updateCParams_whileCompressing() :
* Updates only a selected set of compression parameters, to remain compatible with current frame.
* New parameters will be applied to next compression job. */
void ZSTDMT_updateCParams_whileCompressing(ZSTDMT_CCtx* mtctx, const ZSTD_CCtx_params* cctxParams);
/*! ZSTDMT_getFrameProgression():
* Tells how much data has been consumed (input) and produced (output) for the current frame.
* Able to count progression inside worker threads.
*/
ZSTD_frameProgression ZSTDMT_getFrameProgression(ZSTDMT_CCtx* mtctx);
#endif /* ZSTDMT_COMPRESS_H */
/**** ended inlining zstdmt_compress.h ****/
#endif
/**** skipping file: ../common/bits.h ****/
/**** start inlining zstd_preSplit.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_PRESPLIT_H
#define ZSTD_PRESPLIT_H
#include <stddef.h> /* size_t */
#define ZSTD_SLIPBLOCK_WORKSPACESIZE 8208
/* ZSTD_splitBlock():
* @level must be a value between 0 and 4.
* Higher levels spend more energy to detect block boundaries.
* @workspace must be aligned for size_t.
* @wkspSize must be >= ZSTD_SLIPBLOCK_WORKSPACESIZE
* note:
* For the time being, this function only accepts full 128 KB blocks.
* Therefore, @blockSize must be == 128 KB.
* While this could be extended to smaller sizes in the future,
* it is not yet clear if this would be useful. TBD.
*/
size_t ZSTD_splitBlock(const void* blockStart, size_t blockSize,
int level,
void* workspace, size_t wkspSize);
#endif /* ZSTD_PRESPLIT_H */
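/* Illustrative call sketch (an assumption of how a caller might use it; the buffer name,
* level value, and interpretation of the result as the proposed first sub-block size are
* part of the example, not a specification):
*   size_t wksp[ZSTD_SLIPBLOCK_WORKSPACESIZE / sizeof(size_t)];
*   size_t const firstPartSize = ZSTD_splitBlock(blockStart, (size_t)(128 KB), 2, wksp, sizeof(wksp));
*   if (!ZSTD_isError(firstPartSize)) { ... emit [blockStart, blockStart + firstPartSize) as its own block ... }
*/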
/**** ended inlining zstd_preSplit.h ****/
/*-*************************************
* Constants
***************************************/
#define kSearchStrength 8
#define HASH_READ_SIZE 8
#define ZSTD_DUBT_UNSORTED_MARK 1 /* For btlazy2 strategy, index ZSTD_DUBT_UNSORTED_MARK==1 means "unsorted".
It could be confused for a real successor at index "1", if sorted as larger than its predecessor.
It's not a big deal though : candidate will just be sorted again.
Additionally, candidate position 1 will be lost.
But candidate 1 cannot hide a large tree of candidates, so it's a minimal loss.
The benefit is that ZSTD_DUBT_UNSORTED_MARK cannot be mishandled after table reuse with a different strategy.
This constant is required by ZSTD_compressBlock_btlazy2() and ZSTD_reduceTable_internal() */
/*-*************************************
* Context memory management
***************************************/
typedef enum { ZSTDcs_created=0, ZSTDcs_init, ZSTDcs_ongoing, ZSTDcs_ending } ZSTD_compressionStage_e;
typedef enum { zcss_init=0, zcss_load, zcss_flush } ZSTD_cStreamStage;
typedef struct ZSTD_prefixDict_s {
const void* dict;
size_t dictSize;
ZSTD_dictContentType_e dictContentType;
} ZSTD_prefixDict;
typedef struct {
void* dictBuffer;
void const* dict;
size_t dictSize;
ZSTD_dictContentType_e dictContentType;
ZSTD_CDict* cdict;
} ZSTD_localDict;
typedef struct {
HUF_CElt CTable[HUF_CTABLE_SIZE_ST(255)];
HUF_repeat repeatMode;
} ZSTD_hufCTables_t;
typedef struct {
FSE_CTable offcodeCTable[FSE_CTABLE_SIZE_U32(OffFSELog, MaxOff)];
FSE_CTable matchlengthCTable[FSE_CTABLE_SIZE_U32(MLFSELog, MaxML)];
FSE_CTable litlengthCTable[FSE_CTABLE_SIZE_U32(LLFSELog, MaxLL)];
FSE_repeat offcode_repeatMode;
FSE_repeat matchlength_repeatMode;
FSE_repeat litlength_repeatMode;
} ZSTD_fseCTables_t;
typedef struct {
ZSTD_hufCTables_t huf;
ZSTD_fseCTables_t fse;
} ZSTD_entropyCTables_t;
/***********************************************
* Sequences *
***********************************************/
typedef struct SeqDef_s {
U32 offBase; /* offBase == Offset + ZSTD_REP_NUM, or repcode 1,2,3 */
U16 litLength;
U16 mlBase; /* mlBase == matchLength - MINMATCH */
} SeqDef;
/* Controls whether seqStore has a single "long" litLength or matchLength. See SeqStore_t. */
typedef enum {
ZSTD_llt_none = 0, /* no longLengthType */
ZSTD_llt_literalLength = 1, /* represents a long literal */
ZSTD_llt_matchLength = 2 /* represents a long match */
} ZSTD_longLengthType_e;
typedef struct {
SeqDef* sequencesStart;
SeqDef* sequences; /* ptr to end of sequences */
BYTE* litStart;
BYTE* lit; /* ptr to end of literals */
BYTE* llCode;
BYTE* mlCode;
BYTE* ofCode;
size_t maxNbSeq;
size_t maxNbLit;
/* longLengthPos and longLengthType allow us to represent a single litLength or matchLength
* in the seqStore whose value is larger than a U16 can hold (if it exists). To do so, we increment
* the existing value of the litLength or matchLength by 0x10000.
*/
ZSTD_longLengthType_e longLengthType;
U32 longLengthPos; /* Index of the sequence to apply long length modification to */
} SeqStore_t;
typedef struct {
U32 litLength;
U32 matchLength;
} ZSTD_SequenceLength;
/**
* Returns the ZSTD_SequenceLength for the given sequences. It handles the decoding of long sequences
* indicated by longLengthPos and longLengthType, and adds MINMATCH back to matchLength.
*/
MEM_STATIC ZSTD_SequenceLength ZSTD_getSequenceLength(SeqStore_t const* seqStore, SeqDef const* seq)
{
ZSTD_SequenceLength seqLen;
seqLen.litLength = seq->litLength;
seqLen.matchLength = seq->mlBase + MINMATCH;
if (seqStore->longLengthPos == (U32)(seq - seqStore->sequencesStart)) {
if (seqStore->longLengthType == ZSTD_llt_literalLength) {
seqLen.litLength += 0x10000;
}
if (seqStore->longLengthType == ZSTD_llt_matchLength) {
seqLen.matchLength += 0x10000;
}
}
return seqLen;
}
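/* Worked example (illustrative): a literal run of 70000 bytes does not fit the U16
* litLength field, so it is stored truncated as (U16)70000 == 4464, with
* longLengthType == ZSTD_llt_literalLength and longLengthPos pointing at that sequence;
* ZSTD_getSequenceLength() then adds 0x10000 back to recover the original 70000. */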
const SeqStore_t* ZSTD_getSeqStore(const ZSTD_CCtx* ctx); /* compress & dictBuilder */
int ZSTD_seqToCodes(const SeqStore_t* seqStorePtr); /* compress, dictBuilder, decodeCorpus (shouldn't get its definition from here) */
/***********************************************
* Entropy buffer statistics structs and funcs *
***********************************************/
/** ZSTD_hufCTablesMetadata_t :
* Stores Literals Block Type for a super-block in hType, and
* huffman tree description in hufDesBuffer.
* hufDesSize refers to the size of huffman tree description in bytes.
* This metadata is populated in ZSTD_buildBlockEntropyStats_literals() */
typedef struct {
SymbolEncodingType_e hType;
BYTE hufDesBuffer[ZSTD_MAX_HUF_HEADER_SIZE];
size_t hufDesSize;
} ZSTD_hufCTablesMetadata_t;
/** ZSTD_fseCTablesMetadata_t :
* Stores symbol compression modes for a super-block in {ll, ol, ml}Type, and
* fse tables in fseTablesBuffer.
* fseTablesSize refers to the size of fse tables in bytes.
* This metadata is populated in ZSTD_buildBlockEntropyStats_sequences() */
typedef struct {
SymbolEncodingType_e llType;
SymbolEncodingType_e ofType;
SymbolEncodingType_e mlType;
BYTE fseTablesBuffer[ZSTD_MAX_FSE_HEADERS_SIZE];
size_t fseTablesSize;
size_t lastCountSize; /* This is to account for a bug in v1.3.4. More detail in ZSTD_entropyCompressSeqStore_internal() */
} ZSTD_fseCTablesMetadata_t;
typedef struct {
ZSTD_hufCTablesMetadata_t hufMetadata;
ZSTD_fseCTablesMetadata_t fseMetadata;
} ZSTD_entropyCTablesMetadata_t;
/** ZSTD_buildBlockEntropyStats() :
* Builds entropy for the block.
* @return : 0 on success or error code */
size_t ZSTD_buildBlockEntropyStats(
const SeqStore_t* seqStorePtr,
const ZSTD_entropyCTables_t* prevEntropy,
ZSTD_entropyCTables_t* nextEntropy,
const ZSTD_CCtx_params* cctxParams,
ZSTD_entropyCTablesMetadata_t* entropyMetadata,
void* workspace, size_t wkspSize);
/*********************************
* Compression internals structs *
*********************************/
typedef struct {
U32 off; /* Offset sumtype code for the match, using ZSTD_storeSeq() format */
U32 len; /* Raw length of match */
} ZSTD_match_t;
typedef struct {
U32 offset; /* Offset of sequence */
U32 litLength; /* Length of literals prior to match */
U32 matchLength; /* Raw length of match */
} rawSeq;
typedef struct {
rawSeq* seq; /* The start of the sequences */
size_t pos; /* The index in seq where reading stopped. pos <= size. */
size_t posInSequence; /* The position within the sequence at seq[pos] where reading
stopped. posInSequence <= seq[pos].litLength + seq[pos].matchLength */
size_t size; /* The number of sequences. <= capacity. */
size_t capacity; /* The capacity starting from `seq` pointer */
} RawSeqStore_t;
UNUSED_ATTR static const RawSeqStore_t kNullRawSeqStore = {NULL, 0, 0, 0, 0};
typedef struct {
int price; /* price from beginning of segment to this position */
U32 off; /* offset of previous match */
U32 mlen; /* length of previous match */
U32 litlen; /* nb of literals since previous match */
U32 rep[ZSTD_REP_NUM]; /* offset history after previous match */
} ZSTD_optimal_t;
typedef enum { zop_dynamic=0, zop_predef } ZSTD_OptPrice_e;
#define ZSTD_OPT_SIZE (ZSTD_OPT_NUM+3)
typedef struct {
/* All tables are allocated inside cctx->workspace by ZSTD_resetCCtx_internal() */
unsigned* litFreq; /* table of literals statistics, of size 256 */
unsigned* litLengthFreq; /* table of litLength statistics, of size (MaxLL+1) */
unsigned* matchLengthFreq; /* table of matchLength statistics, of size (MaxML+1) */
unsigned* offCodeFreq; /* table of offCode statistics, of size (MaxOff+1) */
ZSTD_match_t* matchTable; /* list of found matches, of size ZSTD_OPT_SIZE */
ZSTD_optimal_t* priceTable; /* All positions tracked by optimal parser, of size ZSTD_OPT_SIZE */
U32 litSum; /* nb of literals */
U32 litLengthSum; /* nb of litLength codes */
U32 matchLengthSum; /* nb of matchLength codes */
U32 offCodeSum; /* nb of offset codes */
U32 litSumBasePrice; /* to compare to log2(litfreq) */
U32 litLengthSumBasePrice; /* to compare to log2(llfreq) */
U32 matchLengthSumBasePrice;/* to compare to log2(mlfreq) */
U32 offCodeSumBasePrice; /* to compare to log2(offreq) */
ZSTD_OptPrice_e priceType; /* prices can be determined dynamically, or follow a pre-defined cost structure */
const ZSTD_entropyCTables_t* symbolCosts; /* pre-calculated dictionary statistics */
ZSTD_ParamSwitch_e literalCompressionMode;
} optState_t;
typedef struct {
ZSTD_entropyCTables_t entropy;
U32 rep[ZSTD_REP_NUM];
} ZSTD_compressedBlockState_t;
typedef struct {
BYTE const* nextSrc; /* next block here to continue on current prefix */
BYTE const* base; /* All regular indexes relative to this position */
BYTE const* dictBase; /* extDict indexes relative to this position */
U32 dictLimit; /* below that point, need extDict */
U32 lowLimit; /* below that point, no more valid data */
U32 nbOverflowCorrections; /* Number of times overflow correction has run since
* ZSTD_window_init(). Useful for debugging coredumps
* and for ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY.
*/
} ZSTD_window_t;
#define ZSTD_WINDOW_START_INDEX 2
typedef struct ZSTD_MatchState_t ZSTD_MatchState_t;
#define ZSTD_ROW_HASH_CACHE_SIZE 8 /* Size of prefetching hash cache for row-based matchfinder */
struct ZSTD_MatchState_t {
ZSTD_window_t window; /* State for window round buffer management */
U32 loadedDictEnd; /* index of end of dictionary, within context's referential.
* When loadedDictEnd != 0, a dictionary is in use, and still valid.
* This relies on a mechanism to set loadedDictEnd=0 when dictionary is no longer within distance.
* Such mechanism is provided within ZSTD_window_enforceMaxDist() and ZSTD_checkDictValidity().
* When dict referential is copied into active context (i.e. not attached),
* loadedDictEnd == dictSize, since referential starts from zero.
*/
U32 nextToUpdate; /* index from which to continue table update */
U32 hashLog3; /* dispatch table for matches of len==3 : larger == faster, more memory */
U32 rowHashLog; /* For row-based matchfinder: Hashlog based on nb of rows in the hashTable.*/
BYTE* tagTable; /* For row-based matchFinder: A row-based table containing the hashes and head index. */
U32 hashCache[ZSTD_ROW_HASH_CACHE_SIZE]; /* For row-based matchFinder: a cache of hashes to improve speed */
U64 hashSalt; /* For row-based matchFinder: salts the hash for reuse of tag table */
U32 hashSaltEntropy; /* For row-based matchFinder: collects entropy for salt generation */
U32* hashTable;
U32* hashTable3;
U32* chainTable;
int forceNonContiguous; /* Non-zero if we should force non-contiguous load for the next window update. */
int dedicatedDictSearch; /* Indicates whether this matchState is using the
* dedicated dictionary search structure.
*/
optState_t opt; /* optimal parser state */
const ZSTD_MatchState_t* dictMatchState;
ZSTD_compressionParameters cParams;
const RawSeqStore_t* ldmSeqStore;
/* Controls prefetching in some dictMatchState matchfinders.
* This behavior is controlled from the cctx ms.
* This parameter has no effect in the cdict ms. */
int prefetchCDictTables;
/* When == 0, lazy match finders insert every position.
* When != 0, lazy match finders only insert positions they search.
* This allows them to skip much faster over incompressible data,
* at a small cost to compression ratio.
*/
int lazySkipping;
};
typedef struct {
ZSTD_compressedBlockState_t* prevCBlock;
ZSTD_compressedBlockState_t* nextCBlock;
ZSTD_MatchState_t matchState;
} ZSTD_blockState_t;
typedef struct {
U32 offset;
U32 checksum;
} ldmEntry_t;
typedef struct {
BYTE const* split;
U32 hash;
U32 checksum;
ldmEntry_t* bucket;
} ldmMatchCandidate_t;
#define LDM_BATCH_SIZE 64
typedef struct {
ZSTD_window_t window; /* State for the window round buffer management */
ldmEntry_t* hashTable;
U32 loadedDictEnd;
BYTE* bucketOffsets; /* Next position in bucket to insert entry */
size_t splitIndices[LDM_BATCH_SIZE];
ldmMatchCandidate_t matchCandidates[LDM_BATCH_SIZE];
} ldmState_t;
typedef struct {
ZSTD_ParamSwitch_e enableLdm; /* ZSTD_ps_enable to enable LDM. ZSTD_ps_auto by default */
U32 hashLog; /* Log size of hashTable */
U32 bucketSizeLog; /* Log bucket size for collision resolution, at most 8 */
U32 minMatchLength; /* Minimum match length */
U32 hashRateLog; /* Log number of entries to skip */
U32 windowLog; /* Window log for the LDM */
} ldmParams_t;
typedef struct {
int collectSequences;
ZSTD_Sequence* seqStart;
size_t seqIndex;
size_t maxSequences;
} SeqCollector;
struct ZSTD_CCtx_params_s {
ZSTD_format_e format;
ZSTD_compressionParameters cParams;
ZSTD_frameParameters fParams;
int compressionLevel;
int forceWindow; /* force back-references to respect limit of
* 1<<wLog, even for dictionary */
size_t targetCBlockSize; /* Tries to fit compressed block size to be around targetCBlockSize.
* No target when targetCBlockSize == 0.
* There is no guarantee on compressed block size */
int srcSizeHint; /* User's best guess of source size.
* Hint is not valid when srcSizeHint == 0.
* There is no guarantee that hint is close to actual source size */
ZSTD_dictAttachPref_e attachDictPref;
ZSTD_ParamSwitch_e literalCompressionMode;
/* Multithreading: used to pass parameters to mtctx */
int nbWorkers;
size_t jobSize;
int overlapLog;
int rsyncable;
/* Long distance matching parameters */
ldmParams_t ldmParams;
/* Dedicated dict search algorithm trigger */
int enableDedicatedDictSearch;
/* Input/output buffer modes */
ZSTD_bufferMode_e inBufferMode;
ZSTD_bufferMode_e outBufferMode;
/* Sequence compression API */
ZSTD_SequenceFormat_e blockDelimiters;
int validateSequences;
/* Block splitting
* @postBlockSplitter executes split analysis after sequences are produced,
* it's more accurate but consumes more resources.
* @preBlockSplitter_level splits before knowing sequences,
* it's more approximate but also cheaper.
* Valid @preBlockSplitter_level values range from 0 to 6 (included).
* 0 means auto, 1 means do not split,
* then levels are sorted in increasing cpu budget, from 2 (fastest) to 6 (slowest).
* Highest @preBlockSplitter_level combines well with @postBlockSplitter.
*/
ZSTD_ParamSwitch_e postBlockSplitter;
int preBlockSplitter_level;
/* Adjust the max block size*/
size_t maxBlockSize;
/* Param for deciding whether to use row-based matchfinder */
ZSTD_ParamSwitch_e useRowMatchFinder;
/* Always load a dictionary in ext-dict mode (not prefix mode)? */
int deterministicRefPrefix;
/* Internal use, for createCCtxParams() and freeCCtxParams() only */
ZSTD_customMem customMem;
/* Controls prefetching in some dictMatchState matchfinders */
ZSTD_ParamSwitch_e prefetchCDictTables;
/* Controls whether zstd will fall back to an internal matchfinder
* if the external matchfinder returns an error code. */
int enableMatchFinderFallback;
/* Parameters for the external sequence producer API.
* Users set these parameters through ZSTD_registerSequenceProducer().
* It is not possible to set these parameters individually through the public API. */
void* extSeqProdState;
ZSTD_sequenceProducer_F extSeqProdFunc;
/* Controls repcode search in external sequence parsing */
ZSTD_ParamSwitch_e searchForExternalRepcodes;
}; /* typedef'd to ZSTD_CCtx_params within "zstd.h" */
#define COMPRESS_SEQUENCES_WORKSPACE_SIZE (sizeof(unsigned) * (MaxSeq + 2))
#define ENTROPY_WORKSPACE_SIZE (HUF_WORKSPACE_SIZE + COMPRESS_SEQUENCES_WORKSPACE_SIZE)
#define TMP_WORKSPACE_SIZE (MAX(ENTROPY_WORKSPACE_SIZE, ZSTD_SLIPBLOCK_WORKSPACESIZE))
/**
* Indicates whether this compression proceeds directly from user-provided
* source buffer to user-provided destination buffer (ZSTDb_not_buffered), or
* whether the context needs to buffer the input/output (ZSTDb_buffered).
*/
typedef enum {
ZSTDb_not_buffered,
ZSTDb_buffered
} ZSTD_buffered_policy_e;
/**
* Struct that contains all elements of block splitter that should be allocated
* in a wksp.
*/
#define ZSTD_MAX_NB_BLOCK_SPLITS 196
typedef struct {
SeqStore_t fullSeqStoreChunk;
SeqStore_t firstHalfSeqStore;
SeqStore_t secondHalfSeqStore;
SeqStore_t currSeqStore;
SeqStore_t nextSeqStore;
U32 partitions[ZSTD_MAX_NB_BLOCK_SPLITS];
ZSTD_entropyCTablesMetadata_t entropyMetadata;
} ZSTD_blockSplitCtx;
struct ZSTD_CCtx_s {
ZSTD_compressionStage_e stage;
int cParamsChanged; /* == 1 if cParams(except wlog) or compression level are changed in requestedParams. Triggers transmission of new params to ZSTDMT (if available) then reset to 0. */
int bmi2; /* == 1 if the CPU supports BMI2 and 0 otherwise. CPU support is determined dynamically once per context lifetime. */
ZSTD_CCtx_params requestedParams;
ZSTD_CCtx_params appliedParams;
ZSTD_CCtx_params simpleApiParams; /* Param storage used by the simple API - not sticky. Must only be used in top-level simple API functions for storage. */
U32 dictID;
size_t dictContentSize;
ZSTD_cwksp workspace; /* manages buffer for dynamic allocations */
size_t blockSizeMax;
unsigned long long pledgedSrcSizePlusOne; /* this way, 0 (default) == unknown */
unsigned long long consumedSrcSize;
unsigned long long producedCSize;
XXH64_state_t xxhState;
ZSTD_customMem customMem;
ZSTD_threadPool* pool;
size_t staticSize;
SeqCollector seqCollector;
int isFirstBlock;
int initialized;
SeqStore_t seqStore; /* sequences storage ptrs */
ldmState_t ldmState; /* long distance matching state */
rawSeq* ldmSequences; /* Storage for the ldm output sequences */
size_t maxNbLdmSequences;
RawSeqStore_t externSeqStore; /* Mutable reference to external sequences */
ZSTD_blockState_t blockState;
void* tmpWorkspace; /* used as substitute of stack space - must be aligned for S64 type */
size_t tmpWkspSize;
/* Whether we are streaming or not */
ZSTD_buffered_policy_e bufferedPolicy;
/* streaming */
char* inBuff;
size_t inBuffSize;
size_t inToCompress;
size_t inBuffPos;
size_t inBuffTarget;
char* outBuff;
size_t outBuffSize;
size_t outBuffContentSize;
size_t outBuffFlushedSize;
ZSTD_cStreamStage streamStage;
U32 frameEnded;
/* Stable in/out buffer verification */
ZSTD_inBuffer expectedInBuffer;
size_t stableIn_notConsumed; /* nb bytes within stable input buffer that are said to be consumed but are not */
size_t expectedOutBufferSize;
/* Dictionary */
ZSTD_localDict localDict;
const ZSTD_CDict* cdict;
ZSTD_prefixDict prefixDict; /* single-usage dictionary */
/* Multi-threading */
#ifdef ZSTD_MULTITHREAD
ZSTDMT_CCtx* mtctx;
#endif
/* Tracing */
#if ZSTD_TRACE
ZSTD_TraceCtx traceCtx;
#endif
/* Workspace for block splitter */
ZSTD_blockSplitCtx blockSplitCtx;
/* Buffer for output from external sequence producer */
ZSTD_Sequence* extSeqBuf;
size_t extSeqBufCapacity;
};
typedef enum { ZSTD_dtlm_fast, ZSTD_dtlm_full } ZSTD_dictTableLoadMethod_e;
typedef enum { ZSTD_tfp_forCCtx, ZSTD_tfp_forCDict } ZSTD_tableFillPurpose_e;
typedef enum {
ZSTD_noDict = 0,
ZSTD_extDict = 1,
ZSTD_dictMatchState = 2,
ZSTD_dedicatedDictSearch = 3
} ZSTD_dictMode_e;
typedef enum {
ZSTD_cpm_noAttachDict = 0, /* Compression with ZSTD_noDict or ZSTD_extDict.
* In this mode we use both the srcSize and the dictSize
* when selecting and adjusting parameters.
*/
ZSTD_cpm_attachDict = 1, /* Compression with ZSTD_dictMatchState or ZSTD_dedicatedDictSearch.
* In this mode we only take the srcSize into account when selecting
* and adjusting parameters.
*/
ZSTD_cpm_createCDict = 2, /* Creating a CDict.
* In this mode we take both the source size and the dictionary size
* into account when selecting and adjusting the parameters.
*/
ZSTD_cpm_unknown = 3 /* ZSTD_getCParams, ZSTD_getParams, ZSTD_adjustParams.
* We don't know what these parameters are for. We default to the legacy
* behavior of taking both the source size and the dict size into account
* when selecting and adjusting parameters.
*/
} ZSTD_CParamMode_e;
typedef size_t (*ZSTD_BlockCompressor_f) (
ZSTD_MatchState_t* bs, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
ZSTD_BlockCompressor_f ZSTD_selectBlockCompressor(ZSTD_strategy strat, ZSTD_ParamSwitch_e rowMatchfinderMode, ZSTD_dictMode_e dictMode);
MEM_STATIC U32 ZSTD_LLcode(U32 litLength)
{
static const BYTE LL_Code[64] = { 0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15,
16, 16, 17, 17, 18, 18, 19, 19,
20, 20, 20, 20, 21, 21, 21, 21,
22, 22, 22, 22, 22, 22, 22, 22,
23, 23, 23, 23, 23, 23, 23, 23,
24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24 };
static const U32 LL_deltaCode = 19;
return (litLength > 63) ? ZSTD_highbit32(litLength) + LL_deltaCode : LL_Code[litLength];
}
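/* Illustrative mapping (derived from the table and formula above): litLength 18 maps to
* code 17 via LL_Code[], while litLength 100 (> 63) maps to ZSTD_highbit32(100) + 19 == 25. */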
/* ZSTD_MLcode() :
* note : mlBase = matchLength - MINMATCH;
* because that's the format in which it's stored in seqStore->sequences */
MEM_STATIC U32 ZSTD_MLcode(U32 mlBase)
{
static const BYTE ML_Code[128] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
32, 32, 33, 33, 34, 34, 35, 35, 36, 36, 36, 36, 37, 37, 37, 37,
38, 38, 38, 38, 38, 38, 38, 38, 39, 39, 39, 39, 39, 39, 39, 39,
40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40,
41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41,
42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42,
42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42 };
static const U32 ML_deltaCode = 36;
return (mlBase > 127) ? ZSTD_highbit32(mlBase) + ML_deltaCode : ML_Code[mlBase];
}
/* ZSTD_cParam_withinBounds:
* @return 1 if value is within cParam bounds,
* 0 otherwise */
MEM_STATIC int ZSTD_cParam_withinBounds(ZSTD_cParameter cParam, int value)
{
ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam);
if (ZSTD_isError(bounds.error)) return 0;
if (value < bounds.lowerBound) return 0;
if (value > bounds.upperBound) return 0;
return 1;
}
/* ZSTD_selectAddr:
* @return index >= lowLimit ? candidate : backup,
* tries to force branchless codegen. */
MEM_STATIC const BYTE*
ZSTD_selectAddr(U32 index, U32 lowLimit, const BYTE* candidate, const BYTE* backup)
{
#if defined(__GNUC__) && defined(__x86_64__)
__asm__ (
"cmp %1, %2\n"
"cmova %3, %0\n"
: "+r"(candidate)
: "r"(index), "r"(lowLimit), "r"(backup)
);
return candidate;
#else
return index >= lowLimit ? candidate : backup;
#endif
}
/* ZSTD_noCompressBlock() :
* Writes uncompressed block to dst buffer from given src.
* Returns the size of the block */
MEM_STATIC size_t
ZSTD_noCompressBlock(void* dst, size_t dstCapacity, const void* src, size_t srcSize, U32 lastBlock)
{
U32 const cBlockHeader24 = lastBlock + (((U32)bt_raw)<<1) + (U32)(srcSize << 3);
DEBUGLOG(5, "ZSTD_noCompressBlock (srcSize=%zu, dstCapacity=%zu)", srcSize, dstCapacity);
RETURN_ERROR_IF(srcSize + ZSTD_blockHeaderSize > dstCapacity,
dstSize_tooSmall, "dst buf too small for uncompressed block");
MEM_writeLE24(dst, cBlockHeader24);
ZSTD_memcpy((BYTE*)dst + ZSTD_blockHeaderSize, src, srcSize);
return ZSTD_blockHeaderSize + srcSize;
}
MEM_STATIC size_t
ZSTD_rleCompressBlock(void* dst, size_t dstCapacity, BYTE src, size_t srcSize, U32 lastBlock)
{
BYTE* const op = (BYTE*)dst;
U32 const cBlockHeader = lastBlock + (((U32)bt_rle)<<1) + (U32)(srcSize << 3);
RETURN_ERROR_IF(dstCapacity < 4, dstSize_tooSmall, "");
MEM_writeLE24(op, cBlockHeader);
op[3] = src;
return 4;
}
/* ZSTD_minGain() :
* minimum compression required
* to generate a compress block or a compressed literals section.
* note : use same formula for both situations */
MEM_STATIC size_t ZSTD_minGain(size_t srcSize, ZSTD_strategy strat)
{
U32 const minlog = (strat>=ZSTD_btultra) ? (U32)(strat) - 1 : 6;
ZSTD_STATIC_ASSERT(ZSTD_btultra == 8);
assert(ZSTD_cParam_withinBounds(ZSTD_c_strategy, (int)strat));
return (srcSize >> minlog) + 2;
}
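/* Worked example (illustrative): for a 10000-byte input at a fast strategy (minlog == 6),
* ZSTD_minGain() returns (10000 >> 6) + 2 == 158, i.e. a compressed representation must
* save at least 158 bytes to be preferred over storing the data raw. */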
MEM_STATIC int ZSTD_literalsCompressionIsDisabled(const ZSTD_CCtx_params* cctxParams)
{
switch (cctxParams->literalCompressionMode) {
case ZSTD_ps_enable:
return 0;
case ZSTD_ps_disable:
return 1;
default:
assert(0 /* impossible: pre-validated */);
ZSTD_FALLTHROUGH;
case ZSTD_ps_auto:
return (cctxParams->cParams.strategy == ZSTD_fast) && (cctxParams->cParams.targetLength > 0);
}
}
/*! ZSTD_safecopyLiterals() :
* memcpy() variant that won't read more than WILDCOPY_OVERLENGTH bytes past ilimit_w.
* Only called when the sequence ends past ilimit_w, so it only needs to be optimized for single
* large copies.
*/
static void
ZSTD_safecopyLiterals(BYTE* op, BYTE const* ip, BYTE const* const iend, BYTE const* ilimit_w)
{
assert(iend > ilimit_w);
if (ip <= ilimit_w) {
ZSTD_wildcopy(op, ip, ilimit_w - ip, ZSTD_no_overlap);
op += ilimit_w - ip;
ip = ilimit_w;
}
while (ip < iend) *op++ = *ip++;
}
#define REPCODE1_TO_OFFBASE REPCODE_TO_OFFBASE(1)
#define REPCODE2_TO_OFFBASE REPCODE_TO_OFFBASE(2)
#define REPCODE3_TO_OFFBASE REPCODE_TO_OFFBASE(3)
#define REPCODE_TO_OFFBASE(r) (assert((r)>=1), assert((r)<=ZSTD_REP_NUM), (r)) /* accepts IDs 1,2,3 */
#define OFFSET_TO_OFFBASE(o) (assert((o)>0), o + ZSTD_REP_NUM)
#define OFFBASE_IS_OFFSET(o) ((o) > ZSTD_REP_NUM)
#define OFFBASE_IS_REPCODE(o) ( 1 <= (o) && (o) <= ZSTD_REP_NUM)
#define OFFBASE_TO_OFFSET(o) (assert(OFFBASE_IS_OFFSET(o)), (o) - ZSTD_REP_NUM)
#define OFFBASE_TO_REPCODE(o) (assert(OFFBASE_IS_REPCODE(o)), (o)) /* returns ID 1,2,3 */
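/* Worked example (illustrative, with ZSTD_REP_NUM == 3): a real match offset of 100 is
* stored as OFFSET_TO_OFFBASE(100) == 103, while repcode 2 is stored as
* REPCODE2_TO_OFFBASE == 2. Decoding reverses this: OFFBASE_IS_OFFSET(103) holds and
* OFFBASE_TO_OFFSET(103) == 100; OFFBASE_IS_REPCODE(2) holds and OFFBASE_TO_REPCODE(2) == 2. */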
/*! ZSTD_storeSeqOnly() :
* Store a sequence (litlen, litPtr, offBase and matchLength) into SeqStore_t.
* Literals themselves are not copied, but @litPtr is updated.
* @offBase : Users should employ macros REPCODE_TO_OFFBASE() and OFFSET_TO_OFFBASE().
* @matchLength : must be >= MINMATCH
*/
HINT_INLINE UNUSED_ATTR void
ZSTD_storeSeqOnly(SeqStore_t* seqStorePtr,
size_t litLength,
U32 offBase,
size_t matchLength)
{
assert((size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart) < seqStorePtr->maxNbSeq);
/* literal Length */
assert(litLength <= ZSTD_BLOCKSIZE_MAX);
if (UNLIKELY(litLength>0xFFFF)) {
assert(seqStorePtr->longLengthType == ZSTD_llt_none); /* there can only be a single long length */
seqStorePtr->longLengthType = ZSTD_llt_literalLength;
seqStorePtr->longLengthPos = (U32)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
}
seqStorePtr->sequences[0].litLength = (U16)litLength;
/* match offset */
seqStorePtr->sequences[0].offBase = offBase;
/* match Length */
assert(matchLength <= ZSTD_BLOCKSIZE_MAX);
assert(matchLength >= MINMATCH);
{ size_t const mlBase = matchLength - MINMATCH;
if (UNLIKELY(mlBase>0xFFFF)) {
assert(seqStorePtr->longLengthType == ZSTD_llt_none); /* there can only be a single long length */
seqStorePtr->longLengthType = ZSTD_llt_matchLength;
seqStorePtr->longLengthPos = (U32)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
}
seqStorePtr->sequences[0].mlBase = (U16)mlBase;
}
seqStorePtr->sequences++;
}
/*! ZSTD_storeSeq() :
* Store a sequence (litlen, litPtr, offBase and matchLength) into SeqStore_t.
* @offBase : Users should employ macros REPCODE_TO_OFFBASE() and OFFSET_TO_OFFBASE().
* @matchLength : must be >= MINMATCH
* Allowed to over-read literals up to litLimit.
*/
HINT_INLINE UNUSED_ATTR void
ZSTD_storeSeq(SeqStore_t* seqStorePtr,
size_t litLength, const BYTE* literals, const BYTE* litLimit,
U32 offBase,
size_t matchLength)
{
BYTE const* const litLimit_w = litLimit - WILDCOPY_OVERLENGTH;
BYTE const* const litEnd = literals + litLength;
#if defined(DEBUGLEVEL) && (DEBUGLEVEL >= 6)
static const BYTE* g_start = NULL;
if (g_start==NULL) g_start = (const BYTE*)literals; /* note : index only works for compression within a single segment */
{ U32 const pos = (U32)((const BYTE*)literals - g_start);
DEBUGLOG(6, "Cpos%7u :%3u literals, match%4u bytes at offBase%7u",
pos, (U32)litLength, (U32)matchLength, (U32)offBase);
}
#endif
assert((size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart) < seqStorePtr->maxNbSeq);
/* copy Literals */
assert(seqStorePtr->maxNbLit <= 128 KB);
assert(seqStorePtr->lit + litLength <= seqStorePtr->litStart + seqStorePtr->maxNbLit);
assert(literals + litLength <= litLimit);
if (litEnd <= litLimit_w) {
/* Common case we can use wildcopy.
* First copy 16 bytes, because literals are likely short.
*/
ZSTD_STATIC_ASSERT(WILDCOPY_OVERLENGTH >= 16);
ZSTD_copy16(seqStorePtr->lit, literals);
if (litLength > 16) {
ZSTD_wildcopy(seqStorePtr->lit+16, literals+16, (ptrdiff_t)litLength-16, ZSTD_no_overlap);
}
} else {
ZSTD_safecopyLiterals(seqStorePtr->lit, literals, litEnd, litLimit_w);
}
seqStorePtr->lit += litLength;
ZSTD_storeSeqOnly(seqStorePtr, litLength, offBase, matchLength);
}
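/* Illustrative usage sketch from a hypothetical matchfinder loop (the names anchor, ip,
* iend, offset and mLength are assumptions for the example, not library API):
*   ZSTD_storeSeq(seqStore, (size_t)(ip - anchor), anchor, iend,
*                 OFFSET_TO_OFFBASE(offset), mLength);
*   anchor = ip + mLength;
* The literals [anchor, ip) are copied into the seqStore, and one sequence
* (litLength, offBase, matchLength) is appended. */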
/* ZSTD_updateRep() :
* updates in-place @rep (array of repeat offsets)
* @offBase : sum-type, using numeric representation of ZSTD_storeSeq()
*/
MEM_STATIC void
ZSTD_updateRep(U32 rep[ZSTD_REP_NUM], U32 const offBase, U32 const ll0)
{
if (OFFBASE_IS_OFFSET(offBase)) { /* full offset */
rep[2] = rep[1];
rep[1] = rep[0];
rep[0] = OFFBASE_TO_OFFSET(offBase);
} else { /* repcode */
U32 const repCode = OFFBASE_TO_REPCODE(offBase) - 1 + ll0;
if (repCode > 0) { /* note : if repCode==0, no change */
U32 const currentOffset = (repCode==ZSTD_REP_NUM) ? (rep[0] - 1) : rep[repCode];
rep[2] = (repCode >= 2) ? rep[1] : rep[2];
rep[1] = rep[0];
rep[0] = currentOffset;
} else { /* repCode == 0 */
/* nothing to do */
}
}
}
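/* Worked example (illustrative): starting from rep == {8, 4, 12},
* - a new full offset of 20 (offBase == OFFSET_TO_OFFBASE(20)) yields {20, 8, 4};
* - repcode 2 with ll0 == 0 (offBase == REPCODE2_TO_OFFBASE) yields {4, 8, 12};
* - repcode 1 with ll0 == 0 leaves the history unchanged at {8, 4, 12}. */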
typedef struct repcodes_s {
U32 rep[3];
} Repcodes_t;
MEM_STATIC Repcodes_t
ZSTD_newRep(U32 const rep[ZSTD_REP_NUM], U32 const offBase, U32 const ll0)
{
Repcodes_t newReps;
ZSTD_memcpy(&newReps, rep, sizeof(newReps));
ZSTD_updateRep(newReps.rep, offBase, ll0);
return newReps;
}
/*-*************************************
* Match length counter
***************************************/
MEM_STATIC size_t ZSTD_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* const pInLimit)
{
const BYTE* const pStart = pIn;
const BYTE* const pInLoopLimit = pInLimit - (sizeof(size_t)-1);
if (pIn < pInLoopLimit) {
{ size_t const diff = MEM_readST(pMatch) ^ MEM_readST(pIn);
if (diff) return ZSTD_NbCommonBytes(diff); }
pIn+=sizeof(size_t); pMatch+=sizeof(size_t);
while (pIn < pInLoopLimit) {
size_t const diff = MEM_readST(pMatch) ^ MEM_readST(pIn);
if (!diff) { pIn+=sizeof(size_t); pMatch+=sizeof(size_t); continue; }
pIn += ZSTD_NbCommonBytes(diff);
return (size_t)(pIn - pStart);
} }
if (MEM_64bits() && (pIn<(pInLimit-3)) && (MEM_read32(pMatch) == MEM_read32(pIn))) { pIn+=4; pMatch+=4; }
if ((pIn<(pInLimit-1)) && (MEM_read16(pMatch) == MEM_read16(pIn))) { pIn+=2; pMatch+=2; }
if ((pIn<pInLimit) && (*pMatch == *pIn)) pIn++;
return (size_t)(pIn - pStart);
}
/** ZSTD_count_2segments() :
* can count match length with `ip` & `match` in 2 different segments.
* convention : on reaching mEnd, the match count continues starting from iStart
*/
MEM_STATIC size_t
ZSTD_count_2segments(const BYTE* ip, const BYTE* match,
const BYTE* iEnd, const BYTE* mEnd, const BYTE* iStart)
{
const BYTE* const vEnd = MIN( ip + (mEnd - match), iEnd);
size_t const matchLength = ZSTD_count(ip, match, vEnd);
if (match + matchLength != mEnd) return matchLength;
DEBUGLOG(7, "ZSTD_count_2segments: found a 2-parts match (current length==%zu)", matchLength);
DEBUGLOG(7, "distance from match beginning to end dictionary = %i", (int)(mEnd - match));
DEBUGLOG(7, "distance from current pos to end buffer = %i", (int)(iEnd - ip));
DEBUGLOG(7, "next byte : ip==%02X, istart==%02X", ip[matchLength], *iStart);
DEBUGLOG(7, "final match length = %zu", matchLength + ZSTD_count(ip+matchLength, iStart, iEnd));
return matchLength + ZSTD_count(ip+matchLength, iStart, iEnd);
}
/*-*************************************
* Hashes
***************************************/
static const U32 prime3bytes = 506832829U;
static U32 ZSTD_hash3(U32 u, U32 h, U32 s) { assert(h <= 32); return (((u << (32-24)) * prime3bytes) ^ s) >> (32-h) ; }
MEM_STATIC size_t ZSTD_hash3Ptr(const void* ptr, U32 h) { return ZSTD_hash3(MEM_readLE32(ptr), h, 0); } /* only in zstd_opt.h */
MEM_STATIC size_t ZSTD_hash3PtrS(const void* ptr, U32 h, U32 s) { return ZSTD_hash3(MEM_readLE32(ptr), h, s); }
static const U32 prime4bytes = 2654435761U;
static U32 ZSTD_hash4(U32 u, U32 h, U32 s) { assert(h <= 32); return ((u * prime4bytes) ^ s) >> (32-h) ; }
static size_t ZSTD_hash4Ptr(const void* ptr, U32 h) { return ZSTD_hash4(MEM_readLE32(ptr), h, 0); }
static size_t ZSTD_hash4PtrS(const void* ptr, U32 h, U32 s) { return ZSTD_hash4(MEM_readLE32(ptr), h, s); }
static const U64 prime5bytes = 889523592379ULL;
static size_t ZSTD_hash5(U64 u, U32 h, U64 s) { assert(h <= 64); return (size_t)((((u << (64-40)) * prime5bytes) ^ s) >> (64-h)) ; }
static size_t ZSTD_hash5Ptr(const void* p, U32 h) { return ZSTD_hash5(MEM_readLE64(p), h, 0); }
static size_t ZSTD_hash5PtrS(const void* p, U32 h, U64 s) { return ZSTD_hash5(MEM_readLE64(p), h, s); }
static const U64 prime6bytes = 227718039650203ULL;
static size_t ZSTD_hash6(U64 u, U32 h, U64 s) { assert(h <= 64); return (size_t)((((u << (64-48)) * prime6bytes) ^ s) >> (64-h)) ; }
static size_t ZSTD_hash6Ptr(const void* p, U32 h) { return ZSTD_hash6(MEM_readLE64(p), h, 0); }
static size_t ZSTD_hash6PtrS(const void* p, U32 h, U64 s) { return ZSTD_hash6(MEM_readLE64(p), h, s); }
static const U64 prime7bytes = 58295818150454627ULL;
static size_t ZSTD_hash7(U64 u, U32 h, U64 s) { assert(h <= 64); return (size_t)((((u << (64-56)) * prime7bytes) ^ s) >> (64-h)) ; }
static size_t ZSTD_hash7Ptr(const void* p, U32 h) { return ZSTD_hash7(MEM_readLE64(p), h, 0); }
static size_t ZSTD_hash7PtrS(const void* p, U32 h, U64 s) { return ZSTD_hash7(MEM_readLE64(p), h, s); }
static const U64 prime8bytes = 0xCF1BBCDCB7A56463ULL;
static size_t ZSTD_hash8(U64 u, U32 h, U64 s) { assert(h <= 64); return (size_t)((((u) * prime8bytes) ^ s) >> (64-h)) ; }
static size_t ZSTD_hash8Ptr(const void* p, U32 h) { return ZSTD_hash8(MEM_readLE64(p), h, 0); }
static size_t ZSTD_hash8PtrS(const void* p, U32 h, U64 s) { return ZSTD_hash8(MEM_readLE64(p), h, s); }
MEM_STATIC FORCE_INLINE_ATTR
size_t ZSTD_hashPtr(const void* p, U32 hBits, U32 mls)
{
/* Although some of these hashes do support hBits up to 64, some do not.
* To be on the safe side, always avoid hBits > 32. */
assert(hBits <= 32);
switch(mls)
{
default:
case 4: return ZSTD_hash4Ptr(p, hBits);
case 5: return ZSTD_hash5Ptr(p, hBits);
case 6: return ZSTD_hash6Ptr(p, hBits);
case 7: return ZSTD_hash7Ptr(p, hBits);
case 8: return ZSTD_hash8Ptr(p, hBits);
}
}
MEM_STATIC FORCE_INLINE_ATTR
size_t ZSTD_hashPtrSalted(const void* p, U32 hBits, U32 mls, const U64 hashSalt) {
/* Although some of these hashes do support hBits up to 64, some do not.
* To be on the safe side, always avoid hBits > 32. */
assert(hBits <= 32);
switch(mls)
{
default:
case 4: return ZSTD_hash4PtrS(p, hBits, (U32)hashSalt);
case 5: return ZSTD_hash5PtrS(p, hBits, hashSalt);
case 6: return ZSTD_hash6PtrS(p, hBits, hashSalt);
case 7: return ZSTD_hash7PtrS(p, hBits, hashSalt);
case 8: return ZSTD_hash8PtrS(p, hBits, hashSalt);
}
}
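/* Illustrative usage sketch (a hypothetical hash-table update, not library code; ip, curr,
* hashLog and mls are assumptions for the example):
*   size_t const h = ZSTD_hashPtr(ip, hashLog, mls);  // index into a table of (1 << hashLog) entries
*   hashTable[h] = curr;                              // remember the current position for this hash
*/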
/** ZSTD_ipow() :
* Return base^exponent.
*/
static U64 ZSTD_ipow(U64 base, U64 exponent)
{
U64 power = 1;
while (exponent) {
if (exponent & 1) power *= base;
exponent >>= 1;
base *= base;
}
return power;
}
#define ZSTD_ROLL_HASH_CHAR_OFFSET 10
/** ZSTD_rollingHash_append() :
* Add the buffer to the hash value.
*/
static U64 ZSTD_rollingHash_append(U64 hash, void const* buf, size_t size)
{
BYTE const* istart = (BYTE const*)buf;
size_t pos;
for (pos = 0; pos < size; ++pos) {
hash *= prime8bytes;
hash += istart[pos] + ZSTD_ROLL_HASH_CHAR_OFFSET;
}
return hash;
}
/** ZSTD_rollingHash_compute() :
* Compute the rolling hash value of the buffer.
*/
MEM_STATIC U64 ZSTD_rollingHash_compute(void const* buf, size_t size)
{
return ZSTD_rollingHash_append(0, buf, size);
}
/** ZSTD_rollingHash_primePower() :
* Compute the primePower to be passed to ZSTD_rollingHash_rotate() for a hash
* over a window of @length bytes.
*/
MEM_STATIC U64 ZSTD_rollingHash_primePower(U32 length)
{
return ZSTD_ipow(prime8bytes, length - 1);
}
/** ZSTD_rollingHash_rotate() :
* Rotate the rolling hash by one byte.
*/
MEM_STATIC U64 ZSTD_rollingHash_rotate(U64 hash, BYTE toRemove, BYTE toAdd, U64 primePower)
{
hash -= (toRemove + ZSTD_ROLL_HASH_CHAR_OFFSET) * primePower;
hash *= prime8bytes;
hash += toAdd + ZSTD_ROLL_HASH_CHAR_OFFSET;
return hash;
}
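/* Invariant sketch (illustrative): for a window of n bytes starting at buf,
*   U64 const pp = ZSTD_rollingHash_primePower(n);
*   U64 const h0 = ZSTD_rollingHash_compute(buf, n);
*   U64 const h1 = ZSTD_rollingHash_rotate(h0, buf[0], buf[n], pp);
* h1 equals ZSTD_rollingHash_compute(buf + 1, n): the oldest byte's contribution,
* weighted by prime8bytes^(n-1), is removed and the incoming byte is appended. */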
/*-*************************************
* Round buffer management
***************************************/
/* Max @current value allowed:
* In 32-bit mode: we want to avoid crossing the 2 GB limit,
* reducing risks of side effects in case of signed operations on indexes.
* In 64-bit mode: we want to ensure that adding the maximum job size (512 MB)
* doesn't overflow U32 index capacity (4 GB) */
#define ZSTD_CURRENT_MAX (MEM_64bits() ? 3500U MB : 2000U MB)
/* Maximum chunk size before overflow correction needs to be called again */
#define ZSTD_CHUNKSIZE_MAX \
( ((U32)-1) /* Maximum ending current index */ \
- ZSTD_CURRENT_MAX) /* Maximum beginning lowLimit */
/**
* ZSTD_window_clear():
* Clears the window containing the history by simply setting it to empty.
*/
MEM_STATIC void ZSTD_window_clear(ZSTD_window_t* window)
{
size_t const endT = (size_t)(window->nextSrc - window->base);
U32 const end = (U32)endT;
window->lowLimit = end;
window->dictLimit = end;
}
MEM_STATIC U32 ZSTD_window_isEmpty(ZSTD_window_t const window)
{
return window.dictLimit == ZSTD_WINDOW_START_INDEX &&
window.lowLimit == ZSTD_WINDOW_START_INDEX &&
(window.nextSrc - window.base) == ZSTD_WINDOW_START_INDEX;
}
/**
* ZSTD_window_hasExtDict():
* Returns non-zero if the window has a non-empty extDict.
*/
MEM_STATIC U32 ZSTD_window_hasExtDict(ZSTD_window_t const window)
{
return window.lowLimit < window.dictLimit;
}
/**
* ZSTD_matchState_dictMode():
* Inspects the provided matchState and figures out what dictMode should be
* passed to the compressor.
*/
MEM_STATIC ZSTD_dictMode_e ZSTD_matchState_dictMode(const ZSTD_MatchState_t *ms)
{
return ZSTD_window_hasExtDict(ms->window) ?
ZSTD_extDict :
ms->dictMatchState != NULL ?
(ms->dictMatchState->dedicatedDictSearch ? ZSTD_dedicatedDictSearch : ZSTD_dictMatchState) :
ZSTD_noDict;
}
/* Defining this macro to non-zero tells zstd to run the overflow correction
* code much more frequently. This is very inefficient, and should only be
* used for tests and fuzzers.
*/
#ifndef ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY
# ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
# define ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY 1
# else
# define ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY 0
# endif
#endif
/**
* ZSTD_window_canOverflowCorrect():
* Returns non-zero if the indices are large enough for overflow correction
* to work correctly without impacting compression ratio.
*/
MEM_STATIC U32 ZSTD_window_canOverflowCorrect(ZSTD_window_t const window,
U32 cycleLog,
U32 maxDist,
U32 loadedDictEnd,
void const* src)
{
U32 const cycleSize = 1u << cycleLog;
U32 const curr = (U32)((BYTE const*)src - window.base);
U32 const minIndexToOverflowCorrect = cycleSize
+ MAX(maxDist, cycleSize)
+ ZSTD_WINDOW_START_INDEX;
/* Adjust the min index to backoff the overflow correction frequency,
* so we don't waste too much CPU in overflow correction. If this
* computation overflows we don't really care, we just need to make
* sure it is at least minIndexToOverflowCorrect.
*/
U32 const adjustment = window.nbOverflowCorrections + 1;
U32 const adjustedIndex = MAX(minIndexToOverflowCorrect * adjustment,
minIndexToOverflowCorrect);
U32 const indexLargeEnough = curr > adjustedIndex;
/* Only overflow correct early if the dictionary is invalidated already,
* so we don't hurt compression ratio.
*/
U32 const dictionaryInvalidated = curr > maxDist + loadedDictEnd;
return indexLargeEnough && dictionaryInvalidated;
}
/**
* ZSTD_window_needOverflowCorrection():
* Returns non-zero if the indices are getting too large and need overflow
* protection.
*/
MEM_STATIC U32 ZSTD_window_needOverflowCorrection(ZSTD_window_t const window,
U32 cycleLog,
U32 maxDist,
U32 loadedDictEnd,
void const* src,
void const* srcEnd)
{
U32 const curr = (U32)((BYTE const*)srcEnd - window.base);
if (ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY) {
if (ZSTD_window_canOverflowCorrect(window, cycleLog, maxDist, loadedDictEnd, src)) {
return 1;
}
}
return curr > ZSTD_CURRENT_MAX;
}
/**
* ZSTD_window_correctOverflow():
* Reduces the indices to protect from index overflow.
* Returns the correction made to the indices, which must be applied to every
* stored index.
*
* The least significant cycleLog bits of the indices must remain the same,
* which may be 0. Every index up to maxDist in the past must be valid.
*/
MEM_STATIC
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_window_correctOverflow(ZSTD_window_t* window, U32 cycleLog,
U32 maxDist, void const* src)
{
/* preemptive overflow correction:
* 1. correction is large enough:
* lowLimit > (3<<29) ==> current > 3<<29 + 1<<windowLog
* 1<<windowLog <= newCurrent < 1<<chainLog + 1<<windowLog
*
* current - newCurrent
* > (3<<29 + 1<<windowLog) - (1<<windowLog + 1<<chainLog)
* > (3<<29) - (1<<chainLog)
* > (3<<29) - (1<<30) (NOTE: chainLog <= 30)
* > 1<<29
*
* 2. (ip+ZSTD_CHUNKSIZE_MAX - cctx->base) doesn't overflow:
* After correction, current is less than (1<<chainLog + 1<<windowLog).
* In 64-bit mode we are safe, because we have 64-bit ptrdiff_t.
* In 32-bit mode we are safe, because (chainLog <= 29), so
* ip+ZSTD_CHUNKSIZE_MAX - cctx->base < 1<<32.
* 3. (cctx->lowLimit + 1<<windowLog) < 1<<32:
* windowLog <= 31 ==> 3<<29 + 1<<windowLog < 7<<29 < 1<<32.
*/
U32 const cycleSize = 1u << cycleLog;
U32 const cycleMask = cycleSize - 1;
U32 const curr = (U32)((BYTE const*)src - window->base);
U32 const currentCycle = curr & cycleMask;
/* Ensure newCurrent - maxDist >= ZSTD_WINDOW_START_INDEX. */
U32 const currentCycleCorrection = currentCycle < ZSTD_WINDOW_START_INDEX
? MAX(cycleSize, ZSTD_WINDOW_START_INDEX)
: 0;
U32 const newCurrent = currentCycle
+ currentCycleCorrection
+ MAX(maxDist, cycleSize);
U32 const correction = curr - newCurrent;
/* maxDist must be a power of two so that:
* (newCurrent & cycleMask) == (curr & cycleMask)
* This is required to not corrupt the chains / binary tree.
*/
assert((maxDist & (maxDist - 1)) == 0);
assert((curr & cycleMask) == (newCurrent & cycleMask));
assert(curr > newCurrent);
if (!ZSTD_WINDOW_OVERFLOW_CORRECT_FREQUENTLY) {
/* Loose bound, should be around 1<<29 (see above) */
assert(correction > 1<<28);
}
window->base += correction;
window->dictBase += correction;
if (window->lowLimit < correction + ZSTD_WINDOW_START_INDEX) {
window->lowLimit = ZSTD_WINDOW_START_INDEX;
} else {
window->lowLimit -= correction;
}
if (window->dictLimit < correction + ZSTD_WINDOW_START_INDEX) {
window->dictLimit = ZSTD_WINDOW_START_INDEX;
} else {
window->dictLimit -= correction;
}
/* Ensure we can still reference the full window. */
assert(newCurrent >= maxDist);
assert(newCurrent - maxDist >= ZSTD_WINDOW_START_INDEX);
/* Ensure that lowLimit and dictLimit didn't underflow. */
assert(window->lowLimit <= newCurrent);
assert(window->dictLimit <= newCurrent);
++window->nbOverflowCorrections;
DEBUGLOG(4, "Correction of 0x%x bytes to lowLimit=0x%x", correction,
window->lowLimit);
return correction;
}
/**
* ZSTD_window_enforceMaxDist():
* Updates lowLimit so that:
* (srcEnd - base) - lowLimit == maxDist + loadedDictEnd
*
* It ensures index is valid as long as index >= lowLimit.
* This must be called before a block compression call.
*
* loadedDictEnd is only defined if a dictionary is in use for current compression.
* As the name implies, loadedDictEnd represents the index at end of dictionary.
* The value lies within context's referential, it can be directly compared to blockEndIdx.
*
* If loadedDictEndPtr is NULL, no dictionary is in use, and we use loadedDictEnd == 0.
* If loadedDictEndPtr is not NULL, we set it to zero after updating lowLimit.
* This is because dictionaries are allowed to be referenced fully
* as long as the last byte of the dictionary is in the window.
* Once input has progressed beyond window size, dictionary cannot be referenced anymore.
*
* In normal dict mode, the dictionary lies between lowLimit and dictLimit.
* In dictMatchState mode, lowLimit and dictLimit are the same,
* and the dictionary is below them.
* forceWindow and dictMatchState are therefore incompatible.
*/
MEM_STATIC void
ZSTD_window_enforceMaxDist(ZSTD_window_t* window,
const void* blockEnd,
U32 maxDist,
U32* loadedDictEndPtr,
const ZSTD_MatchState_t** dictMatchStatePtr)
{
U32 const blockEndIdx = (U32)((BYTE const*)blockEnd - window->base);
U32 const loadedDictEnd = (loadedDictEndPtr != NULL) ? *loadedDictEndPtr : 0;
DEBUGLOG(5, "ZSTD_window_enforceMaxDist: blockEndIdx=%u, maxDist=%u, loadedDictEnd=%u",
(unsigned)blockEndIdx, (unsigned)maxDist, (unsigned)loadedDictEnd);
/* - When there is no dictionary : loadedDictEnd == 0.
In which case, the test (blockEndIdx > maxDist) is merely to avoid
overflowing next operation `newLowLimit = blockEndIdx - maxDist`.
- When there is a standard dictionary :
Index referential is copied from the dictionary,
which means it starts from 0.
In which case, loadedDictEnd == dictSize,
and it makes sense to compare `blockEndIdx > maxDist + dictSize`
since `blockEndIdx` also starts from zero.
- When there is an attached dictionary :
loadedDictEnd is expressed within the referential of the context,
so it can be directly compared against blockEndIdx.
*/
if (blockEndIdx > maxDist + loadedDictEnd) {
U32 const newLowLimit = blockEndIdx - maxDist;
if (window->lowLimit < newLowLimit) window->lowLimit = newLowLimit;
if (window->dictLimit < window->lowLimit) {
DEBUGLOG(5, "Update dictLimit to match lowLimit, from %u to %u",
(unsigned)window->dictLimit, (unsigned)window->lowLimit);
window->dictLimit = window->lowLimit;
}
/* On reaching window size, dictionaries are invalidated */
if (loadedDictEndPtr) *loadedDictEndPtr = 0;
if (dictMatchStatePtr) *dictMatchStatePtr = NULL;
}
}
/* Similar to ZSTD_window_enforceMaxDist(),
* but only invalidates dictionary
* when input progresses beyond window size.
* assumption : loadedDictEndPtr and dictMatchStatePtr are valid (non NULL)
* loadedDictEnd uses same referential as window->base
* maxDist is the window size */
MEM_STATIC void
ZSTD_checkDictValidity(const ZSTD_window_t* window,
const void* blockEnd,
U32 maxDist,
U32* loadedDictEndPtr,
const ZSTD_MatchState_t** dictMatchStatePtr)
{
assert(loadedDictEndPtr != NULL);
assert(dictMatchStatePtr != NULL);
{ U32 const blockEndIdx = (U32)((BYTE const*)blockEnd - window->base);
U32 const loadedDictEnd = *loadedDictEndPtr;
DEBUGLOG(5, "ZSTD_checkDictValidity: blockEndIdx=%u, maxDist=%u, loadedDictEnd=%u",
(unsigned)blockEndIdx, (unsigned)maxDist, (unsigned)loadedDictEnd);
assert(blockEndIdx >= loadedDictEnd);
if (blockEndIdx > loadedDictEnd + maxDist || loadedDictEnd != window->dictLimit) {
/* On reaching window size, dictionaries are invalidated.
* For simplification, if window size is reached anywhere within next block,
* the dictionary is invalidated for the full block.
*
* We also have to invalidate the dictionary if ZSTD_window_update() has detected
* non-contiguous segments, which means that loadedDictEnd != window->dictLimit.
* loadedDictEnd may be 0, if forceWindow is true, but in that case we never use
* dictMatchState, so setting it to NULL is not a problem.
*/
DEBUGLOG(6, "invalidating dictionary for current block (distance > windowSize)");
*loadedDictEndPtr = 0;
*dictMatchStatePtr = NULL;
} else {
if (*loadedDictEndPtr != 0) {
DEBUGLOG(6, "dictionary considered valid for current block");
} } }
}
MEM_STATIC void ZSTD_window_init(ZSTD_window_t* window) {
ZSTD_memset(window, 0, sizeof(*window));
window->base = (BYTE const*)" ";
window->dictBase = (BYTE const*)" ";
ZSTD_STATIC_ASSERT(ZSTD_DUBT_UNSORTED_MARK < ZSTD_WINDOW_START_INDEX); /* Start above ZSTD_DUBT_UNSORTED_MARK */
window->dictLimit = ZSTD_WINDOW_START_INDEX; /* start from >0, so that 1st position is valid */
window->lowLimit = ZSTD_WINDOW_START_INDEX; /* it ensures first and later CCtx usages compress the same */
window->nextSrc = window->base + ZSTD_WINDOW_START_INDEX; /* see issue #1241 */
window->nbOverflowCorrections = 0;
}
/**
* ZSTD_window_update():
* Updates the window by appending [src, src + srcSize) to the window.
* If it is not contiguous, the current prefix becomes the extDict, and we
* forget about the extDict. Handles overlap of the prefix and extDict.
* Returns non-zero if the segment is contiguous.
*/
MEM_STATIC
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_window_update(ZSTD_window_t* window,
const void* src, size_t srcSize,
int forceNonContiguous)
{
BYTE const* const ip = (BYTE const*)src;
U32 contiguous = 1;
DEBUGLOG(5, "ZSTD_window_update");
if (srcSize == 0)
return contiguous;
assert(window->base != NULL);
assert(window->dictBase != NULL);
/* Check if blocks follow each other */
if (src != window->nextSrc || forceNonContiguous) {
/* not contiguous */
size_t const distanceFromBase = (size_t)(window->nextSrc - window->base);
DEBUGLOG(5, "Non contiguous blocks, new segment starts at %u", window->dictLimit);
window->lowLimit = window->dictLimit;
assert(distanceFromBase == (size_t)(U32)distanceFromBase); /* should never overflow */
window->dictLimit = (U32)distanceFromBase;
window->dictBase = window->base;
window->base = ip - distanceFromBase;
/* ms->nextToUpdate = window->dictLimit; */
if (window->dictLimit - window->lowLimit < HASH_READ_SIZE) window->lowLimit = window->dictLimit; /* too small extDict */
contiguous = 0;
}
window->nextSrc = ip + srcSize;
/* if input and dictionary overlap : reduce dictionary (area presumed modified by input) */
if ( (ip+srcSize > window->dictBase + window->lowLimit)
& (ip < window->dictBase + window->dictLimit)) {
size_t const highInputIdx = (size_t)((ip + srcSize) - window->dictBase);
U32 const lowLimitMax = (highInputIdx > (size_t)window->dictLimit) ? window->dictLimit : (U32)highInputIdx;
assert(highInputIdx < UINT_MAX);
window->lowLimit = lowLimitMax;
DEBUGLOG(5, "Overlapping extDict and input : new lowLimit = %u", window->lowLimit);
}
return contiguous;
}
/**
* Returns the lowest allowed match index. It may either be in the ext-dict or the prefix.
*/
MEM_STATIC U32 ZSTD_getLowestMatchIndex(const ZSTD_MatchState_t* ms, U32 curr, unsigned windowLog)
{
U32 const maxDistance = 1U << windowLog;
U32 const lowestValid = ms->window.lowLimit;
U32 const withinWindow = (curr - lowestValid > maxDistance) ? curr - maxDistance : lowestValid;
U32 const isDictionary = (ms->loadedDictEnd != 0);
/* When using a dictionary the entire dictionary is valid if a single byte of the dictionary
* is within the window. We invalidate the dictionary (and set loadedDictEnd to 0) when it isn't
* valid for the entire block. So this check is sufficient to find the lowest valid match index.
*/
U32 const matchLowest = isDictionary ? lowestValid : withinWindow;
return matchLowest;
}
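/* Worked example (illustration only): with windowLog=17 (maxDistance=131072), curr=1000000
 * and lowLimit=2, curr - lowestValid exceeds maxDistance, so withinWindow = 1000000 - 131072 = 868928.
 * Without a dictionary that clamped value is returned; with a loaded dictionary (loadedDictEnd != 0)
 * the whole range down to lowLimit stays usable, since an out-of-window dictionary is invalidated
 * as a whole by ZSTD_checkDictValidity() instead. */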
/**
* Returns the lowest allowed match index in the prefix.
*/
MEM_STATIC U32 ZSTD_getLowestPrefixIndex(const ZSTD_MatchState_t* ms, U32 curr, unsigned windowLog)
{
U32 const maxDistance = 1U << windowLog;
U32 const lowestValid = ms->window.dictLimit;
U32 const withinWindow = (curr - lowestValid > maxDistance) ? curr - maxDistance : lowestValid;
U32 const isDictionary = (ms->loadedDictEnd != 0);
/* When computing the lowest prefix index we need to take the dictionary into account to handle
* the edge case where the dictionary and the source are contiguous in memory.
*/
U32 const matchLowest = isDictionary ? lowestValid : withinWindow;
return matchLowest;
}
/* index_safety_check:
* intentional underflow : ensure repIndex isn't overlapping dict + prefix
* @return 1 if values are not overlapping,
* 0 otherwise */
MEM_STATIC int ZSTD_index_overlap_check(const U32 prefixLowestIndex, const U32 repIndex) {
return ((U32)((prefixLowestIndex-1) - repIndex) >= 3);
}
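/* Worked example (illustration only): with prefixLowestIndex=100,
 * repIndex=98  : (99 - 98)  = 1  <  3 => 0 (the match would straddle the dict/prefix boundary)
 * repIndex=50  : (99 - 50)  = 49 >= 3 => 1 (safely inside the extDict)
 * repIndex=100 : (99 - 100) wraps to a huge U32 => 1 (entirely inside the prefix) */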
/* debug functions */
#if (DEBUGLEVEL>=2)
MEM_STATIC double ZSTD_fWeight(U32 rawStat)
{
U32 const fp_accuracy = 8;
U32 const fp_multiplier = (1 << fp_accuracy);
U32 const newStat = rawStat + 1;
U32 const hb = ZSTD_highbit32(newStat);
U32 const BWeight = hb * fp_multiplier;
U32 const FWeight = (newStat << fp_accuracy) >> hb;
U32 const weight = BWeight + FWeight;
assert(hb + fp_accuracy < 31);
return (double)weight / fp_multiplier;
}
/* display a table content,
* listing each element, its frequency, and its predicted bit cost */
MEM_STATIC void ZSTD_debugTable(const U32* table, U32 max)
{
unsigned u, sum;
for (u=0, sum=0; u<=max; u++) sum += table[u];
DEBUGLOG(2, "total nb elts: %u", sum);
for (u=0; u<=max; u++) {
DEBUGLOG(2, "%2u: %5u (%.2f)",
u, table[u], ZSTD_fWeight(sum) - ZSTD_fWeight(table[u]) );
}
}
#endif
/* Short Cache */
/* Normally, zstd matchfinders follow this flow:
* 1. Compute hash at ip
* 2. Load index from hashTable[hash]
* 3. Check if *ip == *(base + index)
* In dictionary compression, loading *(base + index) is often an L2 or even L3 miss.
*
* Short cache is an optimization which allows us to avoid step 3 most of the time
* when the data doesn't actually match. With short cache, the flow becomes:
* 1. Compute (hash, currentTag) at ip. currentTag is an 8-bit independent hash at ip.
* 2. Load (index, matchTag) from hashTable[hash]. See ZSTD_writeTaggedIndex to understand how this works.
* 3. Only if currentTag == matchTag, check *ip == *(base + index). Otherwise, continue.
*
* Currently, short cache is only implemented in CDict hashtables. Thus, its use is limited to
* dictMatchState matchfinders.
*/
#define ZSTD_SHORT_CACHE_TAG_BITS 8
#define ZSTD_SHORT_CACHE_TAG_MASK ((1u << ZSTD_SHORT_CACHE_TAG_BITS) - 1)
/* Helper function for ZSTD_fillHashTable and ZSTD_fillDoubleHashTable.
* Unpacks hashAndTag into (hash, tag), then packs (index, tag) into hashTable[hash]. */
MEM_STATIC void ZSTD_writeTaggedIndex(U32* const hashTable, size_t hashAndTag, U32 index) {
size_t const hash = hashAndTag >> ZSTD_SHORT_CACHE_TAG_BITS;
U32 const tag = (U32)(hashAndTag & ZSTD_SHORT_CACHE_TAG_MASK);
assert(index >> (32 - ZSTD_SHORT_CACHE_TAG_BITS) == 0);
hashTable[hash] = (index << ZSTD_SHORT_CACHE_TAG_BITS) | tag;
}
/* Helper function for short cache matchfinders.
* Unpacks tag1 and tag2 from lower bits of packedTag1 and packedTag2, then checks if the tags match. */
MEM_STATIC int ZSTD_comparePackedTags(size_t packedTag1, size_t packedTag2) {
U32 const tag1 = packedTag1 & ZSTD_SHORT_CACHE_TAG_MASK;
U32 const tag2 = packedTag2 & ZSTD_SHORT_CACHE_TAG_MASK;
return tag1 == tag2;
}
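/* Short cache round-trip sketch (illustration only, with made-up values; ZSTD_shortCache_example
 * is a hypothetical helper, kept out of the build with #if 0): it shows how the 8-bit tag travels
 * packed next to the index, so a mismatching tag lets the matchfinder skip the expensive
 * *(base + index) load. */
#if 0
static void ZSTD_shortCache_example(U32* hashTable)
{
    size_t const hashAndTag = 0xABCD;   /* upper bits: table position 0xAB ; lower 8 bits: tag 0xCD */
    ZSTD_writeTaggedIndex(hashTable, hashAndTag, 42);
    /* hashTable[0xAB] now holds (42 << 8) | 0xCD == 0x2ACD */
    assert(ZSTD_comparePackedTags(hashTable[0xAB], hashAndTag));      /* tags agree: 0xCD == 0xCD */
    assert((hashTable[0xAB] >> ZSTD_SHORT_CACHE_TAG_BITS) == 42);     /* the index is recovered */
}
#endif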
/* ===============================================================
* Shared internal declarations
* These prototypes may be called from sources not in lib/compress
* =============================================================== */
/* ZSTD_loadCEntropy() :
* dict : must point at beginning of a valid zstd dictionary.
* return : size of dictionary header (size of magic number + dict ID + entropy tables)
* assumptions : magic number supposed already checked
* and dictSize >= 8 */
size_t ZSTD_loadCEntropy(ZSTD_compressedBlockState_t* bs, void* workspace,
const void* const dict, size_t dictSize);
void ZSTD_reset_compressedBlockState(ZSTD_compressedBlockState_t* bs);
typedef struct {
U32 idx; /* Index in array of ZSTD_Sequence */
U32 posInSequence; /* Position within sequence at idx */
size_t posInSrc; /* Number of bytes given by sequences provided so far */
} ZSTD_SequencePosition;
/* for benchmark */
size_t ZSTD_convertBlockSequences(ZSTD_CCtx* cctx,
const ZSTD_Sequence* const inSeqs, size_t nbSequences,
int const repcodeResolution);
typedef struct {
size_t nbSequences;
size_t blockSize;
size_t litSize;
} BlockSummary;
BlockSummary ZSTD_get1BlockSummary(const ZSTD_Sequence* seqs, size_t nbSeqs);
/* ==============================================================
* Private declarations
* These prototypes shall only be called from within lib/compress
* ============================================================== */
/* ZSTD_getCParamsFromCCtxParams() :
* cParams are built depending on compressionLevel, src size hints,
* LDM and manually set compression parameters.
* Note: srcSizeHint == 0 means 0!
*/
ZSTD_compressionParameters ZSTD_getCParamsFromCCtxParams(
const ZSTD_CCtx_params* CCtxParams, U64 srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode);
/*! ZSTD_initCStream_internal() :
* Private use only. Init streaming operation.
* expects params to be valid.
* must receive dict, or cdict, or neither, but not both at the same time.
* @return : 0, or an error code */
size_t ZSTD_initCStream_internal(ZSTD_CStream* zcs,
const void* dict, size_t dictSize,
const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params, unsigned long long pledgedSrcSize);
void ZSTD_resetSeqStore(SeqStore_t* ssPtr);
/*! ZSTD_getCParamsFromCDict() :
* as the name implies */
ZSTD_compressionParameters ZSTD_getCParamsFromCDict(const ZSTD_CDict* cdict);
/* ZSTD_compressBegin_advanced_internal() :
* Private use only. To be called from zstdmt_compress.c. */
size_t ZSTD_compressBegin_advanced_internal(ZSTD_CCtx* cctx,
const void* dict, size_t dictSize,
ZSTD_dictContentType_e dictContentType,
ZSTD_dictTableLoadMethod_e dtlm,
const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params,
unsigned long long pledgedSrcSize);
/* ZSTD_compress_advanced_internal() :
* Private use only. To be called from zstdmt_compress.c. */
size_t ZSTD_compress_advanced_internal(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const void* dict,size_t dictSize,
const ZSTD_CCtx_params* params);
/* ZSTD_writeLastEmptyBlock() :
* output an empty Block with end-of-frame mark to complete a frame
* @return : size of data written into `dst` (== ZSTD_blockHeaderSize (defined in zstd_internal.h))
* or an error code if `dstCapacity` is too small (<ZSTD_blockHeaderSize)
*/
size_t ZSTD_writeLastEmptyBlock(void* dst, size_t dstCapacity);
/* ZSTD_referenceExternalSequences() :
* Must be called before starting a compression operation.
* seqs must parse a prefix of the source.
* This cannot be used when long range matching is enabled.
* Zstd will use these sequences, and pass the literals to a secondary block
* compressor.
* NOTE: seqs are not verified! Invalid sequences can cause out-of-bounds memory
* access and data corruption.
*/
void ZSTD_referenceExternalSequences(ZSTD_CCtx* cctx, rawSeq* seq, size_t nbSeq);
/** ZSTD_cycleLog() :
* condition for correct operation : hashLog > 1 */
U32 ZSTD_cycleLog(U32 hashLog, ZSTD_strategy strat);
/** ZSTD_CCtx_trace() :
* Trace the end of a compression call.
*/
void ZSTD_CCtx_trace(ZSTD_CCtx* cctx, size_t extraCSize);
/* Returns 1 if an external sequence producer is registered, otherwise returns 0. */
MEM_STATIC int ZSTD_hasExtSeqProd(const ZSTD_CCtx_params* params) {
return params->extSeqProdFunc != NULL;
}
/* ===============================================================
* Deprecated definitions that are still used internally.
* These functions are exactly equivalent to their public variants,
* but calling them does not trigger deprecation warnings.
* =============================================================== */
size_t ZSTD_compressBegin_usingCDict_deprecated(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict);
size_t ZSTD_compressContinue_public(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize);
size_t ZSTD_compressEnd_public(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize);
size_t ZSTD_compressBlock_deprecated(ZSTD_CCtx* cctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize);
#endif /* ZSTD_COMPRESS_H */
/**** ended inlining zstd_compress_internal.h ****/
size_t ZSTD_noCompressLiterals (void* dst, size_t dstCapacity, const void* src, size_t srcSize);
/* ZSTD_compressRleLiteralsBlock() :
* Conditions :
* - All bytes in @src are identical
* - dstCapacity >= 4 */
size_t ZSTD_compressRleLiteralsBlock (void* dst, size_t dstCapacity, const void* src, size_t srcSize);
/* ZSTD_compressLiterals():
* @entropyWorkspace: must be aligned on 4-bytes boundaries
* @entropyWorkspaceSize : must be >= HUF_WORKSPACE_SIZE
* @suspectUncompressible: sampling checks, to potentially skip huffman coding
*/
size_t ZSTD_compressLiterals (void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
void* entropyWorkspace, size_t entropyWorkspaceSize,
const ZSTD_hufCTables_t* prevHuf,
ZSTD_hufCTables_t* nextHuf,
ZSTD_strategy strategy, int disableLiteralCompression,
int suspectUncompressible,
int bmi2);
#endif /* ZSTD_COMPRESS_LITERALS_H */
/**** ended inlining zstd_compress_literals.h ****/
/* **************************************************************
* Debug Traces
****************************************************************/
#if DEBUGLEVEL >= 2
static size_t showHexa(const void* src, size_t srcSize)
{
const BYTE* const ip = (const BYTE*)src;
size_t u;
for (u=0; u<srcSize; u++) {
RAWLOG(5, " %02X", ip[u]); (void)ip;
}
RAWLOG(5, " \n");
return srcSize;
}
#endif
/* **************************************************************
* Literals compression - special cases
****************************************************************/
size_t ZSTD_noCompressLiterals (void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
BYTE* const ostart = (BYTE*)dst;
U32 const flSize = 1 + (srcSize>31) + (srcSize>4095);
DEBUGLOG(5, "ZSTD_noCompressLiterals: srcSize=%zu, dstCapacity=%zu", srcSize, dstCapacity);
RETURN_ERROR_IF(srcSize + flSize > dstCapacity, dstSize_tooSmall, "");
switch(flSize)
{
case 1: /* 2 - 1 - 5 */
ostart[0] = (BYTE)((U32)set_basic + (srcSize<<3));
break;
case 2: /* 2 - 2 - 12 */
MEM_writeLE16(ostart, (U16)((U32)set_basic + (1<<2) + (srcSize<<4)));
break;
case 3: /* 2 - 2 - 20 */
MEM_writeLE32(ostart, (U32)((U32)set_basic + (3<<2) + (srcSize<<4)));
break;
default: /* not necessary : flSize is {1,2,3} */
assert(0);
}
ZSTD_memcpy(ostart + flSize, src, srcSize);
DEBUGLOG(5, "Raw (uncompressed) literals: %u -> %u", (U32)srcSize, (U32)(srcSize + flSize));
return srcSize + flSize;
}
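/* Worked example (illustration only): for srcSize=10 the single-byte header applies, so
 * ostart[0] = set_basic + (10<<3) = 0x50. A decoder reads back:
 * block type  = 0x50 & 3        = 0 (Raw_Literals_Block),
 * size format = (0x50 >> 2) & 1 = 0 (1-byte header),
 * regenerated size = 0x50 >> 3  = 10. */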
static int allBytesIdentical(const void* src, size_t srcSize)
{
assert(srcSize >= 1);
assert(src != NULL);
{ const BYTE b = ((const BYTE*)src)[0];
size_t p;
for (p=1; p<srcSize; p++) {
if (((const BYTE*)src)[p] != b) return 0;
}
return 1;
}
}
size_t ZSTD_compressRleLiteralsBlock (void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
BYTE* const ostart = (BYTE*)dst;
U32 const flSize = 1 + (srcSize>31) + (srcSize>4095);
assert(dstCapacity >= 4); (void)dstCapacity;
assert(allBytesIdentical(src, srcSize));
switch(flSize)
{
case 1: /* 2 - 1 - 5 */
ostart[0] = (BYTE)((U32)set_rle + (srcSize<<3));
break;
case 2: /* 2 - 2 - 12 */
MEM_writeLE16(ostart, (U16)((U32)set_rle + (1<<2) + (srcSize<<4)));
break;
case 3: /* 2 - 2 - 20 */
MEM_writeLE32(ostart, (U32)((U32)set_rle + (3<<2) + (srcSize<<4)));
break;
default: /* not necessary : flSize is {1,2,3} */
assert(0);
}
ostart[flSize] = *(const BYTE*)src;
DEBUGLOG(5, "RLE : Repeated Literal (%02X: %u times) -> %u bytes encoded", ((const BYTE*)src)[0], (U32)srcSize, (U32)flSize + 1);
return flSize+1;
}
/* ZSTD_minLiteralsToCompress() :
* returns minimal amount of literals
* for literal compression to even be attempted.
* Minimum is made tighter as compression strategy increases.
*/
static size_t
ZSTD_minLiteralsToCompress(ZSTD_strategy strategy, HUF_repeat huf_repeat)
{
assert((int)strategy >= 0);
assert((int)strategy <= 9);
/* btultra2 : min 8 bytes;
* then 2x larger for each successive compression strategy
* max threshold 64 bytes */
{ int const shift = MIN(9-(int)strategy, 3);
size_t const mintc = (huf_repeat == HUF_repeat_valid) ? 6 : (size_t)8 << shift;
DEBUGLOG(7, "minLiteralsToCompress = %zu", mintc);
return mintc;
}
}
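/* Worked values (illustration only): ZSTD_btultra2 (9) => shift=0, threshold 8 bytes;
 * ZSTD_btopt (7) => shift=2, threshold 32 bytes; ZSTD_fast (1) => shift capped at 3, threshold 64 bytes.
 * When a previously validated Huffman table can be reused (HUF_repeat_valid),
 * the threshold drops to 6 bytes regardless of strategy. */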
size_t ZSTD_compressLiterals (
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
void* entropyWorkspace, size_t entropyWorkspaceSize,
const ZSTD_hufCTables_t* prevHuf,
ZSTD_hufCTables_t* nextHuf,
ZSTD_strategy strategy,
int disableLiteralCompression,
int suspectUncompressible,
int bmi2)
{
size_t const lhSize = 3 + (srcSize >= 1 KB) + (srcSize >= 16 KB);
BYTE* const ostart = (BYTE*)dst;
U32 singleStream = srcSize < 256;
SymbolEncodingType_e hType = set_compressed;
size_t cLitSize;
DEBUGLOG(5,"ZSTD_compressLiterals (disableLiteralCompression=%i, srcSize=%u, dstCapacity=%zu)",
disableLiteralCompression, (U32)srcSize, dstCapacity);
DEBUGLOG(6, "Completed literals listing (%zu bytes)", showHexa(src, srcSize));
/* Prepare nextEntropy assuming reusing the existing table */
ZSTD_memcpy(nextHuf, prevHuf, sizeof(*prevHuf));
if (disableLiteralCompression)
return ZSTD_noCompressLiterals(dst, dstCapacity, src, srcSize);
/* if too small, don't even attempt compression (speed opt) */
if (srcSize < ZSTD_minLiteralsToCompress(strategy, prevHuf->repeatMode))
return ZSTD_noCompressLiterals(dst, dstCapacity, src, srcSize);
RETURN_ERROR_IF(dstCapacity < lhSize+1, dstSize_tooSmall, "not enough space for compression");
{ HUF_repeat repeat = prevHuf->repeatMode;
int const flags = 0
| (bmi2 ? HUF_flags_bmi2 : 0)
| (strategy < ZSTD_lazy && srcSize <= 1024 ? HUF_flags_preferRepeat : 0)
| (strategy >= HUF_OPTIMAL_DEPTH_THRESHOLD ? HUF_flags_optimalDepth : 0)
| (suspectUncompressible ? HUF_flags_suspectUncompressible : 0);
typedef size_t (*huf_compress_f)(void*, size_t, const void*, size_t, unsigned, unsigned, void*, size_t, HUF_CElt*, HUF_repeat*, int);
huf_compress_f huf_compress;
if (repeat == HUF_repeat_valid && lhSize == 3) singleStream = 1;
huf_compress = singleStream ? HUF_compress1X_repeat : HUF_compress4X_repeat;
cLitSize = huf_compress(ostart+lhSize, dstCapacity-lhSize,
src, srcSize,
HUF_SYMBOLVALUE_MAX, LitHufLog,
entropyWorkspace, entropyWorkspaceSize,
(HUF_CElt*)nextHuf->CTable,
&repeat, flags);
DEBUGLOG(5, "%zu literals compressed into %zu bytes (before header)", srcSize, cLitSize);
if (repeat != HUF_repeat_none) {
/* reused the existing table */
DEBUGLOG(5, "reusing statistics from previous huffman block");
hType = set_repeat;
}
}
{ size_t const minGain = ZSTD_minGain(srcSize, strategy);
if ((cLitSize==0) || (cLitSize >= srcSize - minGain) || ERR_isError(cLitSize)) {
ZSTD_memcpy(nextHuf, prevHuf, sizeof(*prevHuf));
return ZSTD_noCompressLiterals(dst, dstCapacity, src, srcSize);
} }
if (cLitSize==1) {
/* A return value of 1 signals that the alphabet consists of a single symbol.
* However, in some rare circumstances, it could be the compressed size (a single byte).
* For that outcome to have a chance to happen, it's necessary that `srcSize < 8`.
* (it's also necessary to not generate statistics).
* Therefore, in such a case, actively check that all bytes are identical. */
if ((srcSize >= 8) || allBytesIdentical(src, srcSize)) {
ZSTD_memcpy(nextHuf, prevHuf, sizeof(*prevHuf));
return ZSTD_compressRleLiteralsBlock(dst, dstCapacity, src, srcSize);
} }
if (hType == set_compressed) {
/* using a newly constructed table */
nextHuf->repeatMode = HUF_repeat_check;
}
/* Build header */
switch(lhSize)
{
case 3: /* 2 - 2 - 10 - 10 */
if (!singleStream) assert(srcSize >= MIN_LITERALS_FOR_4_STREAMS);
{ U32 const lhc = hType + ((U32)(!singleStream) << 2) + ((U32)srcSize<<4) + ((U32)cLitSize<<14);
MEM_writeLE24(ostart, lhc);
break;
}
case 4: /* 2 - 2 - 14 - 14 */
assert(srcSize >= MIN_LITERALS_FOR_4_STREAMS);
{ U32 const lhc = hType + (2 << 2) + ((U32)srcSize<<4) + ((U32)cLitSize<<18);
MEM_writeLE32(ostart, lhc);
break;
}
case 5: /* 2 - 2 - 18 - 18 */
assert(srcSize >= MIN_LITERALS_FOR_4_STREAMS);
{ U32 const lhc = hType + (3 << 2) + ((U32)srcSize<<4) + ((U32)cLitSize<<22);
MEM_writeLE32(ostart, lhc);
ostart[4] = (BYTE)(cLitSize >> 10);
break;
}
default: /* not possible : lhSize is {3,4,5} */
assert(0);
}
DEBUGLOG(5, "Compressed literals: %u -> %u", (U32)srcSize, (U32)(lhSize+cLitSize));
return lhSize+cLitSize;
}
/**** ended inlining compress/zstd_compress_literals.c ****/
/**** start inlining compress/zstd_compress_sequences.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-*************************************
* Dependencies
***************************************/
/**** start inlining zstd_compress_sequences.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_COMPRESS_SEQUENCES_H
#define ZSTD_COMPRESS_SEQUENCES_H
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
typedef enum {
ZSTD_defaultDisallowed = 0,
ZSTD_defaultAllowed = 1
} ZSTD_DefaultPolicy_e;
SymbolEncodingType_e
ZSTD_selectEncodingType(
FSE_repeat* repeatMode, unsigned const* count, unsigned const max,
size_t const mostFrequent, size_t nbSeq, unsigned const FSELog,
FSE_CTable const* prevCTable,
short const* defaultNorm, U32 defaultNormLog,
ZSTD_DefaultPolicy_e const isDefaultAllowed,
ZSTD_strategy const strategy);
size_t
ZSTD_buildCTable(void* dst, size_t dstCapacity,
FSE_CTable* nextCTable, U32 FSELog, SymbolEncodingType_e type,
unsigned* count, U32 max,
const BYTE* codeTable, size_t nbSeq,
const S16* defaultNorm, U32 defaultNormLog, U32 defaultMax,
const FSE_CTable* prevCTable, size_t prevCTableSize,
void* entropyWorkspace, size_t entropyWorkspaceSize);
size_t ZSTD_encodeSequences(
void* dst, size_t dstCapacity,
FSE_CTable const* CTable_MatchLength, BYTE const* mlCodeTable,
FSE_CTable const* CTable_OffsetBits, BYTE const* ofCodeTable,
FSE_CTable const* CTable_LitLength, BYTE const* llCodeTable,
SeqDef const* sequences, size_t nbSeq, int longOffsets, int bmi2);
size_t ZSTD_fseBitCost(
FSE_CTable const* ctable,
unsigned const* count,
unsigned const max);
size_t ZSTD_crossEntropyCost(short const* norm, unsigned accuracyLog,
unsigned const* count, unsigned const max);
#endif /* ZSTD_COMPRESS_SEQUENCES_H */
/**** ended inlining zstd_compress_sequences.h ****/
/**
* -log2(x / 256) lookup table for x in [0, 256).
* If x == 0: Return 0
* Else: Return floor(-log2(x / 256) * 256)
*/
static unsigned const kInverseProbabilityLog256[256] = {
0, 2048, 1792, 1642, 1536, 1453, 1386, 1329, 1280, 1236, 1197, 1162,
1130, 1100, 1073, 1047, 1024, 1001, 980, 960, 941, 923, 906, 889,
874, 859, 844, 830, 817, 804, 791, 779, 768, 756, 745, 734,
724, 714, 704, 694, 685, 676, 667, 658, 650, 642, 633, 626,
618, 610, 603, 595, 588, 581, 574, 567, 561, 554, 548, 542,
535, 529, 523, 517, 512, 506, 500, 495, 489, 484, 478, 473,
468, 463, 458, 453, 448, 443, 438, 434, 429, 424, 420, 415,
411, 407, 402, 398, 394, 390, 386, 382, 377, 373, 370, 366,
362, 358, 354, 350, 347, 343, 339, 336, 332, 329, 325, 322,
318, 315, 311, 308, 305, 302, 298, 295, 292, 289, 286, 282,
279, 276, 273, 270, 267, 264, 261, 258, 256, 253, 250, 247,
244, 241, 239, 236, 233, 230, 228, 225, 222, 220, 217, 215,
212, 209, 207, 204, 202, 199, 197, 194, 192, 190, 187, 185,
182, 180, 178, 175, 173, 171, 168, 166, 164, 162, 159, 157,
155, 153, 151, 149, 146, 144, 142, 140, 138, 136, 134, 132,
130, 128, 126, 123, 121, 119, 117, 115, 114, 112, 110, 108,
106, 104, 102, 100, 98, 96, 94, 93, 91, 89, 87, 85,
83, 82, 80, 78, 76, 74, 73, 71, 69, 67, 66, 64,
62, 61, 59, 57, 55, 54, 52, 50, 49, 47, 46, 44,
42, 41, 39, 37, 36, 34, 33, 31, 30, 28, 26, 25,
23, 22, 20, 19, 17, 16, 14, 13, 11, 10, 8, 7,
5, 4, 2, 1,
};
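/* Sanity checks of the table (illustration only):
 * x=1   : -log2(1/256)   = 8 => 8*256 = 2048 == kInverseProbabilityLog256[1]
 * x=64  : -log2(64/256)  = 2 => 2*256 =  512 == kInverseProbabilityLog256[64]
 * x=128 : -log2(128/256) = 1 => 1*256 =  256 == kInverseProbabilityLog256[128] */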
static unsigned ZSTD_getFSEMaxSymbolValue(FSE_CTable const* ctable) {
void const* ptr = ctable;
U16 const* u16ptr = (U16 const*)ptr;
U32 const maxSymbolValue = MEM_read16(u16ptr + 1);
return maxSymbolValue;
}
/**
* Returns true if we should use ncount=-1 else we should
* use ncount=1 for low probability symbols instead.
*/
static unsigned ZSTD_useLowProbCount(size_t const nbSeq)
{
/* Heuristic: This should cover most blocks <= 16K and
* start to fade out after 16K to about 32K depending on
* compressibility.
*/
return nbSeq >= 2048;
}
/**
* Returns the cost in bytes of encoding the normalized count header.
* Returns an error if any of the helper functions return an error.
*/
static size_t ZSTD_NCountCost(unsigned const* count, unsigned const max,
size_t const nbSeq, unsigned const FSELog)
{
BYTE wksp[FSE_NCOUNTBOUND];
S16 norm[MaxSeq + 1];
const U32 tableLog = FSE_optimalTableLog(FSELog, nbSeq, max);
FORWARD_IF_ERROR(FSE_normalizeCount(norm, tableLog, count, nbSeq, max, ZSTD_useLowProbCount(nbSeq)), "");
return FSE_writeNCount(wksp, sizeof(wksp), norm, max, tableLog);
}
/**
* Returns the cost in bits of encoding the distribution described by count
* using the entropy bound.
*/
static size_t ZSTD_entropyCost(unsigned const* count, unsigned const max, size_t const total)
{
unsigned cost = 0;
unsigned s;
assert(total > 0);
for (s = 0; s <= max; ++s) {
unsigned norm = (unsigned)((256 * count[s]) / total);
if (count[s] != 0 && norm == 0)
norm = 1;
assert(count[s] < total);
cost += count[s] * kInverseProbabilityLog256[norm];
}
return cost >> 8;
}
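/* Worked example (illustration only): a uniform distribution over 256 symbols with count[s]=1
 * and total=256 gives norm=1 for every symbol, so
 * cost = (256 * kInverseProbabilityLog256[1]) >> 8 = (256 * 2048) >> 8 = 2048 bits,
 * i.e. 8 bits per symbol, as expected for incompressible byte data. */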
/**
* Returns the cost in bits of encoding the distribution in count using ctable.
* Returns an error if ctable cannot represent all the symbols in count.
*/
size_t ZSTD_fseBitCost(
FSE_CTable const* ctable,
unsigned const* count,
unsigned const max)
{
unsigned const kAccuracyLog = 8;
size_t cost = 0;
unsigned s;
FSE_CState_t cstate;
FSE_initCState(&cstate, ctable);
if (ZSTD_getFSEMaxSymbolValue(ctable) < max) {
DEBUGLOG(5, "Repeat FSE_CTable has maxSymbolValue %u < %u",
ZSTD_getFSEMaxSymbolValue(ctable), max);
return ERROR(GENERIC);
}
for (s = 0; s <= max; ++s) {
unsigned const tableLog = cstate.stateLog;
unsigned const badCost = (tableLog + 1) << kAccuracyLog;
unsigned const bitCost = FSE_bitCost(cstate.symbolTT, tableLog, s, kAccuracyLog);
if (count[s] == 0)
continue;
if (bitCost >= badCost) {
DEBUGLOG(5, "Repeat FSE_CTable has Prob[%u] == 0", s);
return ERROR(GENERIC);
}
cost += (size_t)count[s] * bitCost;
}
return cost >> kAccuracyLog;
}
/**
* Returns the cost in bits of encoding the distribution in count using the
* table described by norm. The max symbol supported by norm is assumed >= max.
* norm must be valid for every symbol with non-zero probability in count.
*/
size_t ZSTD_crossEntropyCost(short const* norm, unsigned accuracyLog,
unsigned const* count, unsigned const max)
{
unsigned const shift = 8 - accuracyLog;
size_t cost = 0;
unsigned s;
assert(accuracyLog <= 8);
for (s = 0; s <= max; ++s) {
unsigned const normAcc = (norm[s] != -1) ? (unsigned)norm[s] : 1;
unsigned const norm256 = normAcc << shift;
assert(norm256 > 0);
assert(norm256 < 256);
cost += count[s] * kInverseProbabilityLog256[norm256];
}
return cost >> 8;
}
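/* Worked example (illustration only): with accuracyLog=6 (shift=2), a symbol with norm[s]=4
 * has probability 4/64 = 1/16 ; norm256 = 4<<2 = 16, and kInverseProbabilityLog256[16] = 1024,
 * i.e. 4 bits per occurrence, matching -log2(1/16). */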
SymbolEncodingType_e
ZSTD_selectEncodingType(
FSE_repeat* repeatMode, unsigned const* count, unsigned const max,
size_t const mostFrequent, size_t nbSeq, unsigned const FSELog,
FSE_CTable const* prevCTable,
short const* defaultNorm, U32 defaultNormLog,
ZSTD_DefaultPolicy_e const isDefaultAllowed,
ZSTD_strategy const strategy)
{
ZSTD_STATIC_ASSERT(ZSTD_defaultDisallowed == 0 && ZSTD_defaultAllowed != 0);
if (mostFrequent == nbSeq) {
*repeatMode = FSE_repeat_none;
if (isDefaultAllowed && nbSeq <= 2) {
/* Prefer set_basic over set_rle when there are 2 or fewer symbols,
* since RLE uses 1 byte, but set_basic uses 5-6 bits per symbol.
* If basic encoding isn't possible, always choose RLE.
*/
DEBUGLOG(5, "Selected set_basic");
return set_basic;
}
DEBUGLOG(5, "Selected set_rle");
return set_rle;
}
if (strategy < ZSTD_lazy) {
if (isDefaultAllowed) {
size_t const staticFse_nbSeq_max = 1000;
size_t const mult = 10 - strategy;
size_t const baseLog = 3;
size_t const dynamicFse_nbSeq_min = (((size_t)1 << defaultNormLog) * mult) >> baseLog; /* 28-36 for offset, 56-72 for lengths */
assert(defaultNormLog >= 5 && defaultNormLog <= 6); /* xx_DEFAULTNORMLOG */
assert(mult <= 9 && mult >= 7);
if ( (*repeatMode == FSE_repeat_valid)
&& (nbSeq < staticFse_nbSeq_max) ) {
DEBUGLOG(5, "Selected set_repeat");
return set_repeat;
}
if ( (nbSeq < dynamicFse_nbSeq_min)
|| (mostFrequent < (nbSeq >> (defaultNormLog-1))) ) {
DEBUGLOG(5, "Selected set_basic");
/* The format allows default tables to be repeated, but it isn't useful.
* When using simple heuristics to select encoding type, we don't want
* to confuse these tables with dictionaries. When running more careful
* analysis, we don't need to waste time checking both repeating tables
* and default tables.
*/
*repeatMode = FSE_repeat_none;
return set_basic;
}
}
} else {
size_t const basicCost = isDefaultAllowed ? ZSTD_crossEntropyCost(defaultNorm, defaultNormLog, count, max) : ERROR(GENERIC);
size_t const repeatCost = *repeatMode != FSE_repeat_none ? ZSTD_fseBitCost(prevCTable, count, max) : ERROR(GENERIC);
size_t const NCountCost = ZSTD_NCountCost(count, max, nbSeq, FSELog);
size_t const compressedCost = (NCountCost << 3) + ZSTD_entropyCost(count, max, nbSeq);
if (isDefaultAllowed) {
assert(!ZSTD_isError(basicCost));
assert(!(*repeatMode == FSE_repeat_valid && ZSTD_isError(repeatCost)));
}
assert(!ZSTD_isError(NCountCost));
assert(compressedCost < ERROR(maxCode));
DEBUGLOG(5, "Estimated bit costs: basic=%u\trepeat=%u\tcompressed=%u",
(unsigned)basicCost, (unsigned)repeatCost, (unsigned)compressedCost);
if (basicCost <= repeatCost && basicCost <= compressedCost) {
DEBUGLOG(5, "Selected set_basic");
assert(isDefaultAllowed);
*repeatMode = FSE_repeat_none;
return set_basic;
}
if (repeatCost <= compressedCost) {
DEBUGLOG(5, "Selected set_repeat");
assert(!ZSTD_isError(repeatCost));
return set_repeat;
}
assert(compressedCost < basicCost && compressedCost < repeatCost);
}
DEBUGLOG(5, "Selected set_compressed");
*repeatMode = FSE_repeat_check;
return set_compressed;
}
typedef struct {
S16 norm[MaxSeq + 1];
U32 wksp[FSE_BUILD_CTABLE_WORKSPACE_SIZE_U32(MaxSeq, MaxFSELog)];
} ZSTD_BuildCTableWksp;
size_t
ZSTD_buildCTable(void* dst, size_t dstCapacity,
FSE_CTable* nextCTable, U32 FSELog, SymbolEncodingType_e type,
unsigned* count, U32 max,
const BYTE* codeTable, size_t nbSeq,
const S16* defaultNorm, U32 defaultNormLog, U32 defaultMax,
const FSE_CTable* prevCTable, size_t prevCTableSize,
void* entropyWorkspace, size_t entropyWorkspaceSize)
{
BYTE* op = (BYTE*)dst;
const BYTE* const oend = op + dstCapacity;
DEBUGLOG(6, "ZSTD_buildCTable (dstCapacity=%u)", (unsigned)dstCapacity);
switch (type) {
case set_rle:
FORWARD_IF_ERROR(FSE_buildCTable_rle(nextCTable, (BYTE)max), "");
RETURN_ERROR_IF(dstCapacity==0, dstSize_tooSmall, "not enough space");
*op = codeTable[0];
return 1;
case set_repeat:
ZSTD_memcpy(nextCTable, prevCTable, prevCTableSize);
return 0;
case set_basic:
FORWARD_IF_ERROR(FSE_buildCTable_wksp(nextCTable, defaultNorm, defaultMax, defaultNormLog, entropyWorkspace, entropyWorkspaceSize), ""); /* note : could be pre-calculated */
return 0;
case set_compressed: {
ZSTD_BuildCTableWksp* wksp = (ZSTD_BuildCTableWksp*)entropyWorkspace;
size_t nbSeq_1 = nbSeq;
const U32 tableLog = FSE_optimalTableLog(FSELog, nbSeq, max);
if (count[codeTable[nbSeq-1]] > 1) {
count[codeTable[nbSeq-1]]--;
nbSeq_1--;
}
assert(nbSeq_1 > 1);
assert(entropyWorkspaceSize >= sizeof(ZSTD_BuildCTableWksp));
(void)entropyWorkspaceSize;
FORWARD_IF_ERROR(FSE_normalizeCount(wksp->norm, tableLog, count, nbSeq_1, max, ZSTD_useLowProbCount(nbSeq_1)), "FSE_normalizeCount failed");
assert(oend >= op);
{ size_t const NCountSize = FSE_writeNCount(op, (size_t)(oend - op), wksp->norm, max, tableLog); /* overflow protected */
FORWARD_IF_ERROR(NCountSize, "FSE_writeNCount failed");
FORWARD_IF_ERROR(FSE_buildCTable_wksp(nextCTable, wksp->norm, max, tableLog, wksp->wksp, sizeof(wksp->wksp)), "FSE_buildCTable_wksp failed");
return NCountSize;
}
}
default: assert(0); RETURN_ERROR(GENERIC, "impossible to reach");
}
}
FORCE_INLINE_TEMPLATE size_t
ZSTD_encodeSequences_body(
void* dst, size_t dstCapacity,
FSE_CTable const* CTable_MatchLength, BYTE const* mlCodeTable,
FSE_CTable const* CTable_OffsetBits, BYTE const* ofCodeTable,
FSE_CTable const* CTable_LitLength, BYTE const* llCodeTable,
SeqDef const* sequences, size_t nbSeq, int longOffsets)
{
BIT_CStream_t blockStream;
FSE_CState_t stateMatchLength;
FSE_CState_t stateOffsetBits;
FSE_CState_t stateLitLength;
RETURN_ERROR_IF(
ERR_isError(BIT_initCStream(&blockStream, dst, dstCapacity)),
dstSize_tooSmall, "not enough space remaining");
DEBUGLOG(6, "available space for bitstream : %i (dstCapacity=%u)",
(int)(blockStream.endPtr - blockStream.startPtr),
(unsigned)dstCapacity);
/* first symbols */
FSE_initCState2(&stateMatchLength, CTable_MatchLength, mlCodeTable[nbSeq-1]);
FSE_initCState2(&stateOffsetBits, CTable_OffsetBits, ofCodeTable[nbSeq-1]);
FSE_initCState2(&stateLitLength, CTable_LitLength, llCodeTable[nbSeq-1]);
BIT_addBits(&blockStream, sequences[nbSeq-1].litLength, LL_bits[llCodeTable[nbSeq-1]]);
if (MEM_32bits()) BIT_flushBits(&blockStream);
BIT_addBits(&blockStream, sequences[nbSeq-1].mlBase, ML_bits[mlCodeTable[nbSeq-1]]);
if (MEM_32bits()) BIT_flushBits(&blockStream);
if (longOffsets) {
U32 const ofBits = ofCodeTable[nbSeq-1];
unsigned const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN-1);
if (extraBits) {
BIT_addBits(&blockStream, sequences[nbSeq-1].offBase, extraBits);
BIT_flushBits(&blockStream);
}
BIT_addBits(&blockStream, sequences[nbSeq-1].offBase >> extraBits,
ofBits - extraBits);
} else {
BIT_addBits(&blockStream, sequences[nbSeq-1].offBase, ofCodeTable[nbSeq-1]);
}
BIT_flushBits(&blockStream);
{ size_t n;
for (n=nbSeq-2 ; n<nbSeq ; n--) { /* intentional underflow */
BYTE const llCode = llCodeTable[n];
BYTE const ofCode = ofCodeTable[n];
BYTE const mlCode = mlCodeTable[n];
U32 const llBits = LL_bits[llCode];
U32 const ofBits = ofCode;
U32 const mlBits = ML_bits[mlCode];
DEBUGLOG(6, "encoding: litlen:%2u - matchlen:%2u - offCode:%7u",
(unsigned)sequences[n].litLength,
(unsigned)sequences[n].mlBase + MINMATCH,
(unsigned)sequences[n].offBase);
/* 32b*/ /* 64b*/
/* (7)*/ /* (7)*/
FSE_encodeSymbol(&blockStream, &stateOffsetBits, ofCode); /* 15 */ /* 15 */
FSE_encodeSymbol(&blockStream, &stateMatchLength, mlCode); /* 24 */ /* 24 */
if (MEM_32bits()) BIT_flushBits(&blockStream); /* (7)*/
FSE_encodeSymbol(&blockStream, &stateLitLength, llCode); /* 16 */ /* 33 */
if (MEM_32bits() || (ofBits+mlBits+llBits >= 64-7-(LLFSELog+MLFSELog+OffFSELog)))
BIT_flushBits(&blockStream); /* (7)*/
BIT_addBits(&blockStream, sequences[n].litLength, llBits);
if (MEM_32bits() && ((llBits+mlBits)>24)) BIT_flushBits(&blockStream);
BIT_addBits(&blockStream, sequences[n].mlBase, mlBits);
if (MEM_32bits() || (ofBits+mlBits+llBits > 56)) BIT_flushBits(&blockStream);
if (longOffsets) {
unsigned const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN-1);
if (extraBits) {
BIT_addBits(&blockStream, sequences[n].offBase, extraBits);
BIT_flushBits(&blockStream); /* (7)*/
}
BIT_addBits(&blockStream, sequences[n].offBase >> extraBits,
ofBits - extraBits); /* 31 */
} else {
BIT_addBits(&blockStream, sequences[n].offBase, ofBits); /* 31 */
}
BIT_flushBits(&blockStream); /* (7)*/
DEBUGLOG(7, "remaining space : %i", (int)(blockStream.endPtr - blockStream.ptr));
} }
DEBUGLOG(6, "ZSTD_encodeSequences: flushing ML state with %u bits", stateMatchLength.stateLog);
FSE_flushCState(&blockStream, &stateMatchLength);
DEBUGLOG(6, "ZSTD_encodeSequences: flushing Off state with %u bits", stateOffsetBits.stateLog);
FSE_flushCState(&blockStream, &stateOffsetBits);
DEBUGLOG(6, "ZSTD_encodeSequences: flushing LL state with %u bits", stateLitLength.stateLog);
FSE_flushCState(&blockStream, &stateLitLength);
{ size_t const streamSize = BIT_closeCStream(&blockStream);
RETURN_ERROR_IF(streamSize==0, dstSize_tooSmall, "not enough space");
return streamSize;
}
}
static size_t
ZSTD_encodeSequences_default(
void* dst, size_t dstCapacity,
FSE_CTable const* CTable_MatchLength, BYTE const* mlCodeTable,
FSE_CTable const* CTable_OffsetBits, BYTE const* ofCodeTable,
FSE_CTable const* CTable_LitLength, BYTE const* llCodeTable,
SeqDef const* sequences, size_t nbSeq, int longOffsets)
{
return ZSTD_encodeSequences_body(dst, dstCapacity,
CTable_MatchLength, mlCodeTable,
CTable_OffsetBits, ofCodeTable,
CTable_LitLength, llCodeTable,
sequences, nbSeq, longOffsets);
}
#if DYNAMIC_BMI2
static BMI2_TARGET_ATTRIBUTE size_t
ZSTD_encodeSequences_bmi2(
void* dst, size_t dstCapacity,
FSE_CTable const* CTable_MatchLength, BYTE const* mlCodeTable,
FSE_CTable const* CTable_OffsetBits, BYTE const* ofCodeTable,
FSE_CTable const* CTable_LitLength, BYTE const* llCodeTable,
SeqDef const* sequences, size_t nbSeq, int longOffsets)
{
return ZSTD_encodeSequences_body(dst, dstCapacity,
CTable_MatchLength, mlCodeTable,
CTable_OffsetBits, ofCodeTable,
CTable_LitLength, llCodeTable,
sequences, nbSeq, longOffsets);
}
#endif
size_t ZSTD_encodeSequences(
void* dst, size_t dstCapacity,
FSE_CTable const* CTable_MatchLength, BYTE const* mlCodeTable,
FSE_CTable const* CTable_OffsetBits, BYTE const* ofCodeTable,
FSE_CTable const* CTable_LitLength, BYTE const* llCodeTable,
SeqDef const* sequences, size_t nbSeq, int longOffsets, int bmi2)
{
DEBUGLOG(5, "ZSTD_encodeSequences: dstCapacity = %u", (unsigned)dstCapacity);
#if DYNAMIC_BMI2
if (bmi2) {
return ZSTD_encodeSequences_bmi2(dst, dstCapacity,
CTable_MatchLength, mlCodeTable,
CTable_OffsetBits, ofCodeTable,
CTable_LitLength, llCodeTable,
sequences, nbSeq, longOffsets);
}
#endif
(void)bmi2;
return ZSTD_encodeSequences_default(dst, dstCapacity,
CTable_MatchLength, mlCodeTable,
CTable_OffsetBits, ofCodeTable,
CTable_LitLength, llCodeTable,
sequences, nbSeq, longOffsets);
}
/**** ended inlining compress/zstd_compress_sequences.c ****/
/**** start inlining compress/zstd_compress_superblock.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-*************************************
* Dependencies
***************************************/
/**** start inlining zstd_compress_superblock.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_COMPRESS_ADVANCED_H
#define ZSTD_COMPRESS_ADVANCED_H
/*-*************************************
* Dependencies
***************************************/
/**** skipping file: ../zstd.h ****/
/*-*************************************
* Target Compressed Block Size
***************************************/
/* ZSTD_compressSuperBlock() :
* Used to compress a super block when targetCBlockSize is being used.
* The given block will be compressed into multiple sub blocks that are around targetCBlockSize. */
size_t ZSTD_compressSuperBlock(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
void const* src, size_t srcSize,
unsigned lastBlock);
#endif /* ZSTD_COMPRESS_ADVANCED_H */
/**** ended inlining zstd_compress_superblock.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: hist.h ****/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: zstd_compress_sequences.h ****/
/**** skipping file: zstd_compress_literals.h ****/
/** ZSTD_compressSubBlock_literal() :
* Compresses literals section for a sub-block.
* When we have to write the Huffman table we will sometimes choose a header
* size larger than necessary. This is because we have to pick the header size
* before we know the table size + compressed size, so we have a bound on the
* table size. If we guessed incorrectly, we fall back to uncompressed literals.
*
* We write the header when writeEntropy=1 and set entropyWritten=1 when we succeeded
* in writing the header, otherwise it is set to 0.
*
* hufMetadata->hType has literals block type info.
* If it is set_basic, all sub-blocks' literals sections will be Raw_Literals_Block.
* If it is set_rle, all sub-blocks' literals sections will be RLE_Literals_Block.
* If it is set_compressed, the first sub-block's literals section will be Compressed_Literals_Block
* and the following sub-blocks' literals sections will be Treeless_Literals_Block.
* If it is set_repeat, all sub-blocks' literals sections will be Treeless_Literals_Block.
* @return : compressed size of literals section of a sub-block
* Or 0 if unable to compress.
* Or error code */
static size_t
ZSTD_compressSubBlock_literal(const HUF_CElt* hufTable,
const ZSTD_hufCTablesMetadata_t* hufMetadata,
const BYTE* literals, size_t litSize,
void* dst, size_t dstSize,
const int bmi2, int writeEntropy, int* entropyWritten)
{
size_t const header = writeEntropy ? 200 : 0;
size_t const lhSize = 3 + (litSize >= (1 KB - header)) + (litSize >= (16 KB - header));
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstSize;
BYTE* op = ostart + lhSize;
U32 const singleStream = lhSize == 3;
SymbolEncodingType_e hType = writeEntropy ? hufMetadata->hType : set_repeat;
size_t cLitSize = 0;
DEBUGLOG(5, "ZSTD_compressSubBlock_literal (litSize=%zu, lhSize=%zu, writeEntropy=%d)", litSize, lhSize, writeEntropy);
*entropyWritten = 0;
if (litSize == 0 || hufMetadata->hType == set_basic) {
DEBUGLOG(5, "ZSTD_compressSubBlock_literal using raw literal");
return ZSTD_noCompressLiterals(dst, dstSize, literals, litSize);
} else if (hufMetadata->hType == set_rle) {
DEBUGLOG(5, "ZSTD_compressSubBlock_literal using rle literal");
return ZSTD_compressRleLiteralsBlock(dst, dstSize, literals, litSize);
}
assert(litSize > 0);
assert(hufMetadata->hType == set_compressed || hufMetadata->hType == set_repeat);
if (writeEntropy && hufMetadata->hType == set_compressed) {
ZSTD_memcpy(op, hufMetadata->hufDesBuffer, hufMetadata->hufDesSize);
op += hufMetadata->hufDesSize;
cLitSize += hufMetadata->hufDesSize;
DEBUGLOG(5, "ZSTD_compressSubBlock_literal (hSize=%zu)", hufMetadata->hufDesSize);
}
{ int const flags = bmi2 ? HUF_flags_bmi2 : 0;
const size_t cSize = singleStream ? HUF_compress1X_usingCTable(op, (size_t)(oend-op), literals, litSize, hufTable, flags)
: HUF_compress4X_usingCTable(op, (size_t)(oend-op), literals, litSize, hufTable, flags);
op += cSize;
cLitSize += cSize;
if (cSize == 0 || ERR_isError(cSize)) {
DEBUGLOG(5, "Failed to write entropy tables %s", ZSTD_getErrorName(cSize));
return 0;
}
/* If we expand and we aren't writing a header then emit uncompressed */
if (!writeEntropy && cLitSize >= litSize) {
DEBUGLOG(5, "ZSTD_compressSubBlock_literal using raw literal because uncompressible");
return ZSTD_noCompressLiterals(dst, dstSize, literals, litSize);
}
/* If we are writing headers then allow expansion that doesn't change our header size. */
if (lhSize < (size_t)(3 + (cLitSize >= 1 KB) + (cLitSize >= 16 KB))) {
assert(cLitSize > litSize);
DEBUGLOG(5, "Literals expanded beyond allowed header size");
return ZSTD_noCompressLiterals(dst, dstSize, literals, litSize);
}
DEBUGLOG(5, "ZSTD_compressSubBlock_literal (cSize=%zu)", cSize);
}
/* Build header */
switch(lhSize)
{
case 3: /* 2 - 2 - 10 - 10 */
{ U32 const lhc = hType + ((U32)(!singleStream) << 2) + ((U32)litSize<<4) + ((U32)cLitSize<<14);
MEM_writeLE24(ostart, lhc);
break;
}
case 4: /* 2 - 2 - 14 - 14 */
{ U32 const lhc = hType + (2 << 2) + ((U32)litSize<<4) + ((U32)cLitSize<<18);
MEM_writeLE32(ostart, lhc);
break;
}
case 5: /* 2 - 2 - 18 - 18 */
{ U32 const lhc = hType + (3 << 2) + ((U32)litSize<<4) + ((U32)cLitSize<<22);
MEM_writeLE32(ostart, lhc);
ostart[4] = (BYTE)(cLitSize >> 10);
break;
}
default: /* not possible : lhSize is {3,4,5} */
assert(0);
}
*entropyWritten = 1;
DEBUGLOG(5, "Compressed literals: %u -> %u", (U32)litSize, (U32)(op-ostart));
return (size_t)(op-ostart);
}
static size_t
ZSTD_seqDecompressedSize(SeqStore_t const* seqStore,
const SeqDef* sequences, size_t nbSeqs,
size_t litSize, int lastSubBlock)
{
size_t matchLengthSum = 0;
size_t litLengthSum = 0;
size_t n;
for (n=0; n<nbSeqs; n++) {
const ZSTD_SequenceLength seqLen = ZSTD_getSequenceLength(seqStore, sequences+n);
litLengthSum += seqLen.litLength;
matchLengthSum += seqLen.matchLength;
}
DEBUGLOG(5, "ZSTD_seqDecompressedSize: %u sequences from %p: %u literals + %u matchlength",
(unsigned)nbSeqs, (const void*)sequences,
(unsigned)litLengthSum, (unsigned)matchLengthSum);
if (!lastSubBlock)
assert(litLengthSum == litSize);
else
assert(litLengthSum <= litSize);
(void)litLengthSum;
return matchLengthSum + litSize;
}
/** ZSTD_compressSubBlock_sequences() :
* Compresses sequences section for a sub-block.
* fseMetadata->llType, fseMetadata->ofType, and fseMetadata->mlType have
* symbol compression modes for the super-block.
* The first successfully compressed block will have these in its header.
* We set entropyWritten=1 when we succeed in compressing the sequences.
* The following sub-blocks will always have repeat mode.
* @return : compressed size of sequences section of a sub-block
* Or 0 if it is unable to compress
* Or error code. */
static size_t
ZSTD_compressSubBlock_sequences(const ZSTD_fseCTables_t* fseTables,
const ZSTD_fseCTablesMetadata_t* fseMetadata,
const SeqDef* sequences, size_t nbSeq,
const BYTE* llCode, const BYTE* mlCode, const BYTE* ofCode,
const ZSTD_CCtx_params* cctxParams,
void* dst, size_t dstCapacity,
const int bmi2, int writeEntropy, int* entropyWritten)
{
const int longOffsets = cctxParams->cParams.windowLog > STREAM_ACCUMULATOR_MIN;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstCapacity;
BYTE* op = ostart;
BYTE* seqHead;
DEBUGLOG(5, "ZSTD_compressSubBlock_sequences (nbSeq=%zu, writeEntropy=%d, longOffsets=%d)", nbSeq, writeEntropy, longOffsets);
*entropyWritten = 0;
/* Sequences Header */
RETURN_ERROR_IF((oend-op) < 3 /*max nbSeq Size*/ + 1 /*seqHead*/,
dstSize_tooSmall, "");
if (nbSeq < 128)
*op++ = (BYTE)nbSeq;
else if (nbSeq < LONGNBSEQ)
op[0] = (BYTE)((nbSeq>>8) + 0x80), op[1] = (BYTE)nbSeq, op+=2;
else
op[0]=0xFF, MEM_writeLE16(op+1, (U16)(nbSeq - LONGNBSEQ)), op+=3;
if (nbSeq==0) {
return (size_t)(op - ostart);
}
/* seqHead : flags for FSE encoding type */
seqHead = op++;
DEBUGLOG(5, "ZSTD_compressSubBlock_sequences (seqHeadSize=%u)", (unsigned)(op-ostart));
if (writeEntropy) {
const U32 LLtype = fseMetadata->llType;
const U32 Offtype = fseMetadata->ofType;
const U32 MLtype = fseMetadata->mlType;
DEBUGLOG(5, "ZSTD_compressSubBlock_sequences (fseTablesSize=%zu)", fseMetadata->fseTablesSize);
*seqHead = (BYTE)((LLtype<<6) + (Offtype<<4) + (MLtype<<2));
ZSTD_memcpy(op, fseMetadata->fseTablesBuffer, fseMetadata->fseTablesSize);
op += fseMetadata->fseTablesSize;
} else {
const U32 repeat = set_repeat;
*seqHead = (BYTE)((repeat<<6) + (repeat<<4) + (repeat<<2));
}
{ size_t const bitstreamSize = ZSTD_encodeSequences(
op, (size_t)(oend - op),
fseTables->matchlengthCTable, mlCode,
fseTables->offcodeCTable, ofCode,
fseTables->litlengthCTable, llCode,
sequences, nbSeq,
longOffsets, bmi2);
FORWARD_IF_ERROR(bitstreamSize, "ZSTD_encodeSequences failed");
op += bitstreamSize;
/* zstd versions <= 1.3.4 mistakenly report corruption when
* FSE_readNCount() receives a buffer < 4 bytes.
* Fixed by https://github.com/facebook/zstd/pull/1146.
* This can happen when the last set_compressed table present is 2
* bytes and the bitstream is only one byte.
* In this exceedingly rare case, we will simply emit an uncompressed
* block, since it isn't worth optimizing.
*/
#ifndef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
if (writeEntropy && fseMetadata->lastCountSize && fseMetadata->lastCountSize + bitstreamSize < 4) {
/* NCountSize >= 2 && bitstreamSize > 0 ==> lastCountSize == 3 */
assert(fseMetadata->lastCountSize + bitstreamSize == 3);
DEBUGLOG(5, "Avoiding bug in zstd decoder in versions <= 1.3.4 by "
"emitting an uncompressed block.");
return 0;
}
#endif
DEBUGLOG(5, "ZSTD_compressSubBlock_sequences (bitstreamSize=%zu)", bitstreamSize);
}
/* zstd versions <= 1.4.0 mistakenly report error when
* sequences section body size is less than 3 bytes.
* Fixed by https://github.com/facebook/zstd/pull/1664.
* This can happen when the previous sequences section block is compressed
* with rle mode and the current block's sequences section is compressed
* with repeat mode where sequences section body size can be 1 byte.
*/
#ifndef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
if (op-seqHead < 4) {
DEBUGLOG(5, "Avoiding bug in zstd decoder in versions <= 1.4.0 by emitting "
"an uncompressed block when sequences are < 4 bytes");
return 0;
}
#endif
*entropyWritten = 1;
return (size_t)(op - ostart);
}
/** ZSTD_compressSubBlock() :
* Compresses a single sub-block.
* @return : compressed size of the sub-block
* Or 0 if it failed to compress. */
static size_t ZSTD_compressSubBlock(const ZSTD_entropyCTables_t* entropy,
const ZSTD_entropyCTablesMetadata_t* entropyMetadata,
const SeqDef* sequences, size_t nbSeq,
const BYTE* literals, size_t litSize,
const BYTE* llCode, const BYTE* mlCode, const BYTE* ofCode,
const ZSTD_CCtx_params* cctxParams,
void* dst, size_t dstCapacity,
const int bmi2,
int writeLitEntropy, int writeSeqEntropy,
int* litEntropyWritten, int* seqEntropyWritten,
U32 lastBlock)
{
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstCapacity;
BYTE* op = ostart + ZSTD_blockHeaderSize;
DEBUGLOG(5, "ZSTD_compressSubBlock (litSize=%zu, nbSeq=%zu, writeLitEntropy=%d, writeSeqEntropy=%d, lastBlock=%d)",
litSize, nbSeq, writeLitEntropy, writeSeqEntropy, lastBlock);
{ size_t cLitSize = ZSTD_compressSubBlock_literal((const HUF_CElt*)entropy->huf.CTable,
&entropyMetadata->hufMetadata, literals, litSize,
op, (size_t)(oend-op),
bmi2, writeLitEntropy, litEntropyWritten);
FORWARD_IF_ERROR(cLitSize, "ZSTD_compressSubBlock_literal failed");
if (cLitSize == 0) return 0;
op += cLitSize;
}
{ size_t cSeqSize = ZSTD_compressSubBlock_sequences(&entropy->fse,
&entropyMetadata->fseMetadata,
sequences, nbSeq,
llCode, mlCode, ofCode,
cctxParams,
op, (size_t)(oend-op),
bmi2, writeSeqEntropy, seqEntropyWritten);
FORWARD_IF_ERROR(cSeqSize, "ZSTD_compressSubBlock_sequences failed");
if (cSeqSize == 0) return 0;
op += cSeqSize;
}
/* Write block header */
{ size_t cSize = (size_t)(op-ostart) - ZSTD_blockHeaderSize;
U32 const cBlockHeader24 = lastBlock + (((U32)bt_compressed)<<1) + (U32)(cSize << 3);
MEM_writeLE24(ostart, cBlockHeader24);
}
return (size_t)(op-ostart);
}
static size_t ZSTD_estimateSubBlockSize_literal(const BYTE* literals, size_t litSize,
const ZSTD_hufCTables_t* huf,
const ZSTD_hufCTablesMetadata_t* hufMetadata,
void* workspace, size_t wkspSize,
int writeEntropy)
{
unsigned* const countWksp = (unsigned*)workspace;
unsigned maxSymbolValue = 255;
size_t literalSectionHeaderSize = 3; /* Use hard coded size of 3 bytes */
if (hufMetadata->hType == set_basic) return litSize;
else if (hufMetadata->hType == set_rle) return 1;
else if (hufMetadata->hType == set_compressed || hufMetadata->hType == set_repeat) {
size_t const largest = HIST_count_wksp (countWksp, &maxSymbolValue, (const BYTE*)literals, litSize, workspace, wkspSize);
if (ZSTD_isError(largest)) return litSize;
{ size_t cLitSizeEstimate = HUF_estimateCompressedSize((const HUF_CElt*)huf->CTable, countWksp, maxSymbolValue);
if (writeEntropy) cLitSizeEstimate += hufMetadata->hufDesSize;
return cLitSizeEstimate + literalSectionHeaderSize;
} }
assert(0); /* impossible */
return 0;
}
static size_t ZSTD_estimateSubBlockSize_symbolType(SymbolEncodingType_e type,
const BYTE* codeTable, unsigned maxCode,
size_t nbSeq, const FSE_CTable* fseCTable,
const U8* additionalBits,
short const* defaultNorm, U32 defaultNormLog, U32 defaultMax,
void* workspace, size_t wkspSize)
{
unsigned* const countWksp = (unsigned*)workspace;
const BYTE* ctp = codeTable;
const BYTE* const ctStart = ctp;
const BYTE* const ctEnd = ctStart + nbSeq;
size_t cSymbolTypeSizeEstimateInBits = 0;
unsigned max = maxCode;
HIST_countFast_wksp(countWksp, &max, codeTable, nbSeq, workspace, wkspSize); /* can't fail */
if (type == set_basic) {
/* We selected this encoding type, so it must be valid. */
assert(max <= defaultMax);
cSymbolTypeSizeEstimateInBits = max <= defaultMax
? ZSTD_crossEntropyCost(defaultNorm, defaultNormLog, countWksp, max)
: ERROR(GENERIC);
} else if (type == set_rle) {
cSymbolTypeSizeEstimateInBits = 0;
} else if (type == set_compressed || type == set_repeat) {
cSymbolTypeSizeEstimateInBits = ZSTD_fseBitCost(fseCTable, countWksp, max);
}
if (ZSTD_isError(cSymbolTypeSizeEstimateInBits)) return nbSeq * 10;
while (ctp < ctEnd) {
if (additionalBits) cSymbolTypeSizeEstimateInBits += additionalBits[*ctp];
else cSymbolTypeSizeEstimateInBits += *ctp; /* for offset, offset code is also the number of additional bits */
ctp++;
}
return cSymbolTypeSizeEstimateInBits / 8;
}
static size_t ZSTD_estimateSubBlockSize_sequences(const BYTE* ofCodeTable,
const BYTE* llCodeTable,
const BYTE* mlCodeTable,
size_t nbSeq,
const ZSTD_fseCTables_t* fseTables,
const ZSTD_fseCTablesMetadata_t* fseMetadata,
void* workspace, size_t wkspSize,
int writeEntropy)
{
size_t const sequencesSectionHeaderSize = 3; /* Use hard coded size of 3 bytes */
size_t cSeqSizeEstimate = 0;
if (nbSeq == 0) return sequencesSectionHeaderSize;
cSeqSizeEstimate += ZSTD_estimateSubBlockSize_symbolType(fseMetadata->ofType, ofCodeTable, MaxOff,
nbSeq, fseTables->offcodeCTable, NULL,
OF_defaultNorm, OF_defaultNormLog, DefaultMaxOff,
workspace, wkspSize);
cSeqSizeEstimate += ZSTD_estimateSubBlockSize_symbolType(fseMetadata->llType, llCodeTable, MaxLL,
nbSeq, fseTables->litlengthCTable, LL_bits,
LL_defaultNorm, LL_defaultNormLog, MaxLL,
workspace, wkspSize);
cSeqSizeEstimate += ZSTD_estimateSubBlockSize_symbolType(fseMetadata->mlType, mlCodeTable, MaxML,
nbSeq, fseTables->matchlengthCTable, ML_bits,
ML_defaultNorm, ML_defaultNormLog, MaxML,
workspace, wkspSize);
if (writeEntropy) cSeqSizeEstimate += fseMetadata->fseTablesSize;
return cSeqSizeEstimate + sequencesSectionHeaderSize;
}
typedef struct {
size_t estLitSize;
size_t estBlockSize;
} EstimatedBlockSize;
static EstimatedBlockSize ZSTD_estimateSubBlockSize(const BYTE* literals, size_t litSize,
const BYTE* ofCodeTable,
const BYTE* llCodeTable,
const BYTE* mlCodeTable,
size_t nbSeq,
const ZSTD_entropyCTables_t* entropy,
const ZSTD_entropyCTablesMetadata_t* entropyMetadata,
void* workspace, size_t wkspSize,
int writeLitEntropy, int writeSeqEntropy)
{
EstimatedBlockSize ebs;
ebs.estLitSize = ZSTD_estimateSubBlockSize_literal(literals, litSize,
&entropy->huf, &entropyMetadata->hufMetadata,
workspace, wkspSize, writeLitEntropy);
ebs.estBlockSize = ZSTD_estimateSubBlockSize_sequences(ofCodeTable, llCodeTable, mlCodeTable,
nbSeq, &entropy->fse, &entropyMetadata->fseMetadata,
workspace, wkspSize, writeSeqEntropy);
ebs.estBlockSize += ebs.estLitSize + ZSTD_blockHeaderSize;
return ebs;
}
static int ZSTD_needSequenceEntropyTables(ZSTD_fseCTablesMetadata_t const* fseMetadata)
{
if (fseMetadata->llType == set_compressed || fseMetadata->llType == set_rle)
return 1;
if (fseMetadata->mlType == set_compressed || fseMetadata->mlType == set_rle)
return 1;
if (fseMetadata->ofType == set_compressed || fseMetadata->ofType == set_rle)
return 1;
return 0;
}
static size_t countLiterals(SeqStore_t const* seqStore, const SeqDef* sp, size_t seqCount)
{
size_t n, total = 0;
assert(sp != NULL);
for (n=0; n<seqCount; n++) {
total += ZSTD_getSequenceLength(seqStore, sp+n).litLength;
}
DEBUGLOG(6, "countLiterals for %zu sequences from %p => %zu bytes", seqCount, (const void*)sp, total);
return total;
}
#define BYTESCALE 256
static size_t sizeBlockSequences(const SeqDef* sp, size_t nbSeqs,
size_t targetBudget, size_t avgLitCost, size_t avgSeqCost,
int firstSubBlock)
{
size_t n, budget = 0, inSize=0;
/* entropy headers */
size_t const headerSize = (size_t)firstSubBlock * 120 * BYTESCALE; /* generous estimate */
assert(firstSubBlock==0 || firstSubBlock==1);
budget += headerSize;
/* first sequence => at least one sequence*/
budget += sp[0].litLength * avgLitCost + avgSeqCost;
if (budget > targetBudget) return 1;
inSize = sp[0].litLength + (sp[0].mlBase+MINMATCH);
/* loop over sequences */
for (n=1; n<nbSeqs; n++) {
size_t currentCost = sp[n].litLength * avgLitCost + avgSeqCost;
budget += currentCost;
inSize += sp[n].litLength + (sp[n].mlBase+MINMATCH);
/* stop when sub-block budget is reached */
if ( (budget > targetBudget)
/* though continue to expand until the sub-block is deemed compressible */
&& (budget < inSize * BYTESCALE) )
break;
}
return n;
}
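/* Note on units (illustration only): budgets and costs above are fixed-point values scaled by
 * BYTESCALE=256. For instance avgLitCost=128 means an estimated 0.5 byte per literal, so a
 * sequence carrying 20 literals adds 20*128 + avgSeqCost to the running budget, which is then
 * compared against targetBudget, the per-sub-block target also expressed in 1/256th of a byte. */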
/** ZSTD_compressSubBlock_multi() :
* Breaks super-block into multiple sub-blocks and compresses them.
* Entropy will be written into the first block.
* The following blocks use repeat_mode to compress.
* Sub-blocks are all compressed, except the last one when beneficial.
* @return : compressed size of the super block (which features multiple ZSTD blocks)
* or 0 if it failed to compress. */
static size_t ZSTD_compressSubBlock_multi(const SeqStore_t* seqStorePtr,
const ZSTD_compressedBlockState_t* prevCBlock,
ZSTD_compressedBlockState_t* nextCBlock,
const ZSTD_entropyCTablesMetadata_t* entropyMetadata,
const ZSTD_CCtx_params* cctxParams,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const int bmi2, U32 lastBlock,
void* workspace, size_t wkspSize)
{
const SeqDef* const sstart = seqStorePtr->sequencesStart;
const SeqDef* const send = seqStorePtr->sequences;
const SeqDef* sp = sstart; /* tracks progress within seqStorePtr->sequences */
size_t const nbSeqs = (size_t)(send - sstart);
const BYTE* const lstart = seqStorePtr->litStart;
const BYTE* const lend = seqStorePtr->lit;
const BYTE* lp = lstart;
size_t const nbLiterals = (size_t)(lend - lstart);
BYTE const* ip = (BYTE const*)src;
BYTE const* const iend = ip + srcSize;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstCapacity;
BYTE* op = ostart;
const BYTE* llCodePtr = seqStorePtr->llCode;
const BYTE* mlCodePtr = seqStorePtr->mlCode;
const BYTE* ofCodePtr = seqStorePtr->ofCode;
size_t const minTarget = ZSTD_TARGETCBLOCKSIZE_MIN; /* enforce minimum size, to reduce undesirable side effects */
size_t const targetCBlockSize = MAX(minTarget, cctxParams->targetCBlockSize);
int writeLitEntropy = (entropyMetadata->hufMetadata.hType == set_compressed);
int writeSeqEntropy = 1;
DEBUGLOG(5, "ZSTD_compressSubBlock_multi (srcSize=%u, litSize=%u, nbSeq=%u)",
(unsigned)srcSize, (unsigned)(lend-lstart), (unsigned)(send-sstart));
/* let's start by a general estimation for the full block */
if (nbSeqs > 0) {
EstimatedBlockSize const ebs =
ZSTD_estimateSubBlockSize(lp, nbLiterals,
ofCodePtr, llCodePtr, mlCodePtr, nbSeqs,
&nextCBlock->entropy, entropyMetadata,
workspace, wkspSize,
writeLitEntropy, writeSeqEntropy);
/* quick estimation */
size_t const avgLitCost = nbLiterals ? (ebs.estLitSize * BYTESCALE) / nbLiterals : BYTESCALE;
size_t const avgSeqCost = ((ebs.estBlockSize - ebs.estLitSize) * BYTESCALE) / nbSeqs;
const size_t nbSubBlocks = MAX((ebs.estBlockSize + (targetCBlockSize/2)) / targetCBlockSize, 1);
size_t n, avgBlockBudget, blockBudgetSupp=0;
avgBlockBudget = (ebs.estBlockSize * BYTESCALE) / nbSubBlocks;
DEBUGLOG(5, "estimated fullblock size=%u bytes ; avgLitCost=%.2f ; avgSeqCost=%.2f ; targetCBlockSize=%u, nbSubBlocks=%u ; avgBlockBudget=%.0f bytes",
(unsigned)ebs.estBlockSize, (double)avgLitCost/BYTESCALE, (double)avgSeqCost/BYTESCALE,
(unsigned)targetCBlockSize, (unsigned)nbSubBlocks, (double)avgBlockBudget/BYTESCALE);
/* simplification: if the estimate states that the full superblock doesn't compress, just bail out immediately;
* this will result in the production of a single uncompressed block covering @srcSize. */
if (ebs.estBlockSize > srcSize) return 0;
/* compress and write sub-blocks */
assert(nbSubBlocks>0);
for (n=0; n < nbSubBlocks-1; n++) {
/* determine nb of sequences for current sub-block + nbLiterals from next sequence */
size_t const seqCount = sizeBlockSequences(sp, (size_t)(send-sp),
avgBlockBudget + blockBudgetSupp, avgLitCost, avgSeqCost, n==0);
/* if reached last sequence : break to last sub-block (simplification) */
assert(seqCount <= (size_t)(send-sp));
if (sp + seqCount == send) break;
assert(seqCount > 0);
/* compress sub-block */
{ int litEntropyWritten = 0;
int seqEntropyWritten = 0;
size_t litSize = countLiterals(seqStorePtr, sp, seqCount);
const size_t decompressedSize =
ZSTD_seqDecompressedSize(seqStorePtr, sp, seqCount, litSize, 0);
size_t const cSize = ZSTD_compressSubBlock(&nextCBlock->entropy, entropyMetadata,
sp, seqCount,
lp, litSize,
llCodePtr, mlCodePtr, ofCodePtr,
cctxParams,
op, (size_t)(oend-op),
bmi2, writeLitEntropy, writeSeqEntropy,
&litEntropyWritten, &seqEntropyWritten,
0);
FORWARD_IF_ERROR(cSize, "ZSTD_compressSubBlock failed");
/* check compressibility, update state components */
if (cSize > 0 && cSize < decompressedSize) {
DEBUGLOG(5, "Committed sub-block compressing %u bytes => %u bytes",
(unsigned)decompressedSize, (unsigned)cSize);
assert(ip + decompressedSize <= iend);
ip += decompressedSize;
lp += litSize;
op += cSize;
llCodePtr += seqCount;
mlCodePtr += seqCount;
ofCodePtr += seqCount;
/* Entropy only needs to be written once */
if (litEntropyWritten) {
writeLitEntropy = 0;
}
if (seqEntropyWritten) {
writeSeqEntropy = 0;
}
sp += seqCount;
blockBudgetSupp = 0;
} }
/* otherwise : do not compress yet, coalesce current sub-block with following one */
}
} /* if (nbSeqs > 0) */
/* write last block */
DEBUGLOG(5, "Generate last sub-block: %u sequences remaining", (unsigned)(send - sp));
{ int litEntropyWritten = 0;
int seqEntropyWritten = 0;
size_t litSize = (size_t)(lend - lp);
size_t seqCount = (size_t)(send - sp);
const size_t decompressedSize =
ZSTD_seqDecompressedSize(seqStorePtr, sp, seqCount, litSize, 1);
size_t const cSize = ZSTD_compressSubBlock(&nextCBlock->entropy, entropyMetadata,
sp, seqCount,
lp, litSize,
llCodePtr, mlCodePtr, ofCodePtr,
cctxParams,
op, (size_t)(oend-op),
bmi2, writeLitEntropy, writeSeqEntropy,
&litEntropyWritten, &seqEntropyWritten,
lastBlock);
FORWARD_IF_ERROR(cSize, "ZSTD_compressSubBlock failed");
/* update pointers, the nb of literals borrowed from next sequence must be preserved */
if (cSize > 0 && cSize < decompressedSize) {
DEBUGLOG(5, "Last sub-block compressed %u bytes => %u bytes",
(unsigned)decompressedSize, (unsigned)cSize);
assert(ip + decompressedSize <= iend);
ip += decompressedSize;
lp += litSize;
op += cSize;
llCodePtr += seqCount;
mlCodePtr += seqCount;
ofCodePtr += seqCount;
/* Entropy only needs to be written once */
if (litEntropyWritten) {
writeLitEntropy = 0;
}
if (seqEntropyWritten) {
writeSeqEntropy = 0;
}
sp += seqCount;
}
}
if (writeLitEntropy) {
DEBUGLOG(5, "Literal entropy tables were never written");
ZSTD_memcpy(&nextCBlock->entropy.huf, &prevCBlock->entropy.huf, sizeof(prevCBlock->entropy.huf));
}
if (writeSeqEntropy && ZSTD_needSequenceEntropyTables(&entropyMetadata->fseMetadata)) {
/* If we haven't written our entropy tables, then we've violated our contract and
* must emit an uncompressed block.
*/
DEBUGLOG(5, "Sequence entropy tables were never written => cancel, emit an uncompressed block");
return 0;
}
if (ip < iend) {
/* some data left : last part of the block sent uncompressed */
size_t const rSize = (size_t)((iend - ip));
size_t const cSize = ZSTD_noCompressBlock(op, (size_t)(oend - op), ip, rSize, lastBlock);
DEBUGLOG(5, "Generate last uncompressed sub-block of %u bytes", (unsigned)(rSize));
FORWARD_IF_ERROR(cSize, "ZSTD_noCompressBlock failed");
assert(cSize != 0);
op += cSize;
/* We have to regenerate the repcodes because we've skipped some sequences */
if (sp < send) {
const SeqDef* seq;
Repcodes_t rep;
ZSTD_memcpy(&rep, prevCBlock->rep, sizeof(rep));
for (seq = sstart; seq < sp; ++seq) {
ZSTD_updateRep(rep.rep, seq->offBase, ZSTD_getSequenceLength(seqStorePtr, seq).litLength == 0);
}
ZSTD_memcpy(nextCBlock->rep, &rep, sizeof(rep));
}
}
DEBUGLOG(5, "ZSTD_compressSubBlock_multi compressed all subBlocks: total compressed size = %u",
(unsigned)(op-ostart));
return (size_t)(op-ostart);
}
size_t ZSTD_compressSuperBlock(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
unsigned lastBlock)
{
ZSTD_entropyCTablesMetadata_t entropyMetadata;
FORWARD_IF_ERROR(ZSTD_buildBlockEntropyStats(&zc->seqStore,
&zc->blockState.prevCBlock->entropy,
&zc->blockState.nextCBlock->entropy,
&zc->appliedParams,
&entropyMetadata,
zc->tmpWorkspace, zc->tmpWkspSize /* statically allocated in resetCCtx */), "");
return ZSTD_compressSubBlock_multi(&zc->seqStore,
zc->blockState.prevCBlock,
zc->blockState.nextCBlock,
&entropyMetadata,
&zc->appliedParams,
dst, dstCapacity,
src, srcSize,
zc->bmi2, lastBlock,
zc->tmpWorkspace, zc->tmpWkspSize /* statically allocated in resetCCtx */);
}
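/* Editor's sketch (illustrative, not part of the library): the super-block
 * path above is driven by the targetCBlockSize compression parameter. A
 * caller wanting small compressed blocks (e.g. to bound per-block decoding
 * latency) would request it through the public API, roughly as follows;
 * the 4096-byte target is an arbitrary example value and must lie within
 * [ZSTD_TARGETCBLOCKSIZE_MIN, ZSTD_TARGETCBLOCKSIZE_MAX]. */
#if 0
static size_t compress_with_small_cblocks(ZSTD_CCtx* cctx,
                                          void* dst, size_t dstCapacity,
                                          const void* src, size_t srcSize)
{
    FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_targetCBlockSize, 4096), "");
    return ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
}
#endif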
/**** ended inlining compress/zstd_compress_superblock.c ****/
/**** start inlining compress/zstd_preSplit.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**** skipping file: ../common/compiler.h ****/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: hist.h ****/
/**** skipping file: zstd_preSplit.h ****/
#define BLOCKSIZE_MIN 3500
#define THRESHOLD_PENALTY_RATE 16
#define THRESHOLD_BASE (THRESHOLD_PENALTY_RATE - 2)
#define THRESHOLD_PENALTY 3
#define HASHLENGTH 2
#define HASHLOG_MAX 10
#define HASHTABLESIZE (1 << HASHLOG_MAX)
#define HASHMASK (HASHTABLESIZE - 1)
#define KNUTH 0x9e3779b9
/* for hashLog > 8, hash 2 bytes.
* for hashLog == 8, just take the byte, no hashing.
* The speed of this method relies on compile-time constant propagation */
FORCE_INLINE_TEMPLATE unsigned hash2(const void *p, unsigned hashLog)
{
assert(hashLog >= 8);
if (hashLog == 8) return (U32)((const BYTE*)p)[0];
assert(hashLog <= HASHLOG_MAX);
return (U32)(MEM_read16(p)) * KNUTH >> (32 - hashLog);
}
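/* Editor's note (illustrative): with hashLog == 10, hash2() reads two bytes
 * as a 16-bit value v (via MEM_read16), multiplies by the Knuth constant
 * 0x9e3779b9, and keeps the top 10 bits: (v * KNUTH) >> 22, an index in
 * [0, 1023], matching HASHTABLESIZE. With hashLog == 8, the first byte itself
 * is the index, so the fingerprint degenerates to a plain byte histogram. */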
typedef struct {
unsigned events[HASHTABLESIZE];
size_t nbEvents;
} Fingerprint;
typedef struct {
Fingerprint pastEvents;
Fingerprint newEvents;
} FPStats;
static void initStats(FPStats* fpstats)
{
ZSTD_memset(fpstats, 0, sizeof(FPStats));
}
FORCE_INLINE_TEMPLATE void
addEvents_generic(Fingerprint* fp, const void* src, size_t srcSize, size_t samplingRate, unsigned hashLog)
{
const char* p = (const char*)src;
size_t limit = srcSize - HASHLENGTH + 1;
size_t n;
assert(srcSize >= HASHLENGTH);
for (n = 0; n < limit; n+=samplingRate) {
fp->events[hash2(p+n, hashLog)]++;
}
fp->nbEvents += limit/samplingRate;
}
FORCE_INLINE_TEMPLATE void
recordFingerprint_generic(Fingerprint* fp, const void* src, size_t srcSize, size_t samplingRate, unsigned hashLog)
{
ZSTD_memset(fp, 0, sizeof(unsigned) * ((size_t)1 << hashLog));
fp->nbEvents = 0;
addEvents_generic(fp, src, srcSize, samplingRate, hashLog);
}
typedef void (*RecordEvents_f)(Fingerprint* fp, const void* src, size_t srcSize);
#define FP_RECORD(_rate) ZSTD_recordFingerprint_##_rate
#define ZSTD_GEN_RECORD_FINGERPRINT(_rate, _hSize) \
static void FP_RECORD(_rate)(Fingerprint* fp, const void* src, size_t srcSize) \
{ \
recordFingerprint_generic(fp, src, srcSize, _rate, _hSize); \
}
ZSTD_GEN_RECORD_FINGERPRINT(1, 10)
ZSTD_GEN_RECORD_FINGERPRINT(5, 10)
ZSTD_GEN_RECORD_FINGERPRINT(11, 9)
ZSTD_GEN_RECORD_FINGERPRINT(43, 8)
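/* Editor's note (illustrative): the instantiations above generate
 * ZSTD_recordFingerprint_1/_5/_11/_43, which sample respectively every
 * position, 1 in 5, 1 in 11 and 1 in 43, over histograms of 2^10, 2^10, 2^9
 * and 2^8 buckets. ZSTD_splitBlock_byChunks() below maps its cheapest split
 * level to the sparsest sampler (rate 43, 256 buckets) and its most accurate
 * level to the exhaustive one (rate 1, 1024 buckets). */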
static U64 abs64(S64 s64) { return (U64)((s64 < 0) ? -s64 : s64); }
static U64 fpDistance(const Fingerprint* fp1, const Fingerprint* fp2, unsigned hashLog)
{
U64 distance = 0;
size_t n;
assert(hashLog <= HASHLOG_MAX);
for (n = 0; n < ((size_t)1 << hashLog); n++) {
distance +=
abs64((S64)fp1->events[n] * (S64)fp2->nbEvents - (S64)fp2->events[n] * (S64)fp1->nbEvents);
}
return distance;
}
/* Compare newEvents with pastEvents
* return 1 when considered "too different"
*/
static int compareFingerprints(const Fingerprint* ref,
const Fingerprint* newfp,
int penalty,
unsigned hashLog)
{
assert(ref->nbEvents > 0);
assert(newfp->nbEvents > 0);
{ U64 p50 = (U64)ref->nbEvents * (U64)newfp->nbEvents;
U64 deviation = fpDistance(ref, newfp, hashLog);
U64 threshold = p50 * (U64)(THRESHOLD_BASE + penalty) / THRESHOLD_PENALTY_RATE;
return deviation >= threshold;
}
}
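/* Editor's note (illustrative): fpDistance() computes
 *   sum_n | fp1->events[n] * fp2->nbEvents - fp2->events[n] * fp1->nbEvents |,
 * i.e. an L1 distance between the two histograms after cross-normalization,
 * and p50 == ref->nbEvents * newfp->nbEvents is the matching scale.
 * With THRESHOLD_BASE == 14 and THRESHOLD_PENALTY_RATE == 16, a fresh chunk
 * (penalty == 3) must deviate by at least 17/16 of p50 to be declared "too
 * different", while after a few merged chunks (penalty == 0) a deviation of
 * 14/16 of p50 is enough. */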
static void mergeEvents(Fingerprint* acc, const Fingerprint* newfp)
{
size_t n;
for (n = 0; n < HASHTABLESIZE; n++) {
acc->events[n] += newfp->events[n];
}
acc->nbEvents += newfp->nbEvents;
}
static void flushEvents(FPStats* fpstats)
{
size_t n;
for (n = 0; n < HASHTABLESIZE; n++) {
fpstats->pastEvents.events[n] = fpstats->newEvents.events[n];
}
fpstats->pastEvents.nbEvents = fpstats->newEvents.nbEvents;
ZSTD_memset(&fpstats->newEvents, 0, sizeof(fpstats->newEvents));
}
static void removeEvents(Fingerprint* acc, const Fingerprint* slice)
{
size_t n;
for (n = 0; n < HASHTABLESIZE; n++) {
assert(acc->events[n] >= slice->events[n]);
acc->events[n] -= slice->events[n];
}
acc->nbEvents -= slice->nbEvents;
}
#define CHUNKSIZE (8 << 10)
static size_t ZSTD_splitBlock_byChunks(const void* blockStart, size_t blockSize,
int level,
void* workspace, size_t wkspSize)
{
static const RecordEvents_f records_fs[] = {
FP_RECORD(43), FP_RECORD(11), FP_RECORD(5), FP_RECORD(1)
};
static const unsigned hashParams[] = { 8, 9, 10, 10 };
const RecordEvents_f record_f = (assert(0<=level && level<=3), records_fs[level]);
FPStats* const fpstats = (FPStats*)workspace;
const char* p = (const char*)blockStart;
int penalty = THRESHOLD_PENALTY;
size_t pos = 0;
assert(blockSize == (128 << 10));
assert(workspace != NULL);
assert((size_t)workspace % ZSTD_ALIGNOF(FPStats) == 0);
ZSTD_STATIC_ASSERT(ZSTD_SLIPBLOCK_WORKSPACESIZE >= sizeof(FPStats));
assert(wkspSize >= sizeof(FPStats)); (void)wkspSize;
initStats(fpstats);
record_f(&fpstats->pastEvents, p, CHUNKSIZE);
for (pos = CHUNKSIZE; pos <= blockSize - CHUNKSIZE; pos += CHUNKSIZE) {
record_f(&fpstats->newEvents, p + pos, CHUNKSIZE);
if (compareFingerprints(&fpstats->pastEvents, &fpstats->newEvents, penalty, hashParams[level])) {
return pos;
} else {
mergeEvents(&fpstats->pastEvents, &fpstats->newEvents);
if (penalty > 0) penalty--;
}
}
assert(pos == blockSize);
return blockSize;
(void)flushEvents; (void)removeEvents;
}
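/* Editor's note (illustrative): the 128 KB block is scanned in 8 KB chunks
 * (CHUNKSIZE). Each chunk's fingerprint is compared against the accumulated
 * fingerprint of everything preceding it; the first chunk boundary where they
 * diverge becomes the proposed split point. The penalty starts at
 * THRESHOLD_PENALTY and decreases by one per merged chunk, so early splits
 * require stronger evidence than later ones. */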
/* ZSTD_splitBlock_fromBorders(): very fast strategy :
* compare fingerprint from beginning and end of the block,
* derive from their difference if it's preferable to split in the middle,
* repeat the process a second time, for a finer-grained decision.
* Doing it 3 times did not bring improvements, so I stopped at 2.
* Benefits are good enough for a cheap heuristic.
* More accurate splitting saves more, but speed impact is also more perceptible.
* For better accuracy, use more elaborate variant *_byChunks.
*/
static size_t ZSTD_splitBlock_fromBorders(const void* blockStart, size_t blockSize,
void* workspace, size_t wkspSize)
{
#define SEGMENT_SIZE 512
FPStats* const fpstats = (FPStats*)workspace;
Fingerprint* middleEvents = (Fingerprint*)(void*)((char*)workspace + 512 * sizeof(unsigned));
assert(blockSize == (128 << 10));
assert(workspace != NULL);
assert((size_t)workspace % ZSTD_ALIGNOF(FPStats) == 0);
ZSTD_STATIC_ASSERT(ZSTD_SLIPBLOCK_WORKSPACESIZE >= sizeof(FPStats));
assert(wkspSize >= sizeof(FPStats)); (void)wkspSize;
initStats(fpstats);
HIST_add(fpstats->pastEvents.events, blockStart, SEGMENT_SIZE);
HIST_add(fpstats->newEvents.events, (const char*)blockStart + blockSize - SEGMENT_SIZE, SEGMENT_SIZE);
fpstats->pastEvents.nbEvents = fpstats->newEvents.nbEvents = SEGMENT_SIZE;
if (!compareFingerprints(&fpstats->pastEvents, &fpstats->newEvents, 0, 8))
return blockSize;
HIST_add(middleEvents->events, (const char*)blockStart + blockSize/2 - SEGMENT_SIZE/2, SEGMENT_SIZE);
middleEvents->nbEvents = SEGMENT_SIZE;
{ U64 const distFromBegin = fpDistance(&fpstats->pastEvents, middleEvents, 8);
U64 const distFromEnd = fpDistance(&fpstats->newEvents, middleEvents, 8);
U64 const minDistance = SEGMENT_SIZE * SEGMENT_SIZE / 3;
if (abs64((S64)distFromBegin - (S64)distFromEnd) < minDistance)
return 64 KB;
return (distFromBegin > distFromEnd) ? 32 KB : 96 KB;
}
}
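/* Editor's note (illustrative): the borders heuristic samples three 512-byte
 * windows: the start, the middle and the end of the 128 KB block. If start
 * and end look alike, the block is left whole. Otherwise, if the middle
 * resembles the end (distFromBegin > distFromEnd) the split point moves early,
 * to 32 KB; if it resembles the start, it moves late, to 96 KB; and when
 * neither side clearly wins (difference below SEGMENT_SIZE*SEGMENT_SIZE/3)
 * it falls back to the midpoint, 64 KB. */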
size_t ZSTD_splitBlock(const void* blockStart, size_t blockSize,
int level,
void* workspace, size_t wkspSize)
{
DEBUGLOG(6, "ZSTD_splitBlock (level=%i)", level);
assert(0<=level && level<=4);
if (level == 0)
return ZSTD_splitBlock_fromBorders(blockStart, blockSize, workspace, wkspSize);
/* level >= 1*/
return ZSTD_splitBlock_byChunks(blockStart, blockSize, level-1, workspace, wkspSize);
}
/**** ended inlining compress/zstd_preSplit.c ****/
/**** start inlining compress/zstd_compress.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-*************************************
* Dependencies
***************************************/
/**** skipping file: ../common/allocations.h ****/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/error_private.h ****/
/**** skipping file: hist.h ****/
#define FSE_STATIC_LINKING_ONLY /* FSE_encodeSymbol */
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: zstd_compress_sequences.h ****/
/**** skipping file: zstd_compress_literals.h ****/
/**** start inlining zstd_fast.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_FAST_H
#define ZSTD_FAST_H
/**** skipping file: ../common/mem.h ****/
/**** skipping file: zstd_compress_internal.h ****/
void ZSTD_fillHashTable(ZSTD_MatchState_t* ms,
void const* end, ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp);
size_t ZSTD_compressBlock_fast(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_fast_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_fast_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#endif /* ZSTD_FAST_H */
/**** ended inlining zstd_fast.h ****/
/**** start inlining zstd_double_fast.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_DOUBLE_FAST_H
#define ZSTD_DOUBLE_FAST_H
/**** skipping file: ../common/mem.h ****/
/**** skipping file: zstd_compress_internal.h ****/
#ifndef ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR
void ZSTD_fillDoubleHashTable(ZSTD_MatchState_t* ms,
void const* end, ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp);
size_t ZSTD_compressBlock_doubleFast(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_doubleFast_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_doubleFast_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_DOUBLEFAST ZSTD_compressBlock_doubleFast
#define ZSTD_COMPRESSBLOCK_DOUBLEFAST_DICTMATCHSTATE ZSTD_compressBlock_doubleFast_dictMatchState
#define ZSTD_COMPRESSBLOCK_DOUBLEFAST_EXTDICT ZSTD_compressBlock_doubleFast_extDict
#else
#define ZSTD_COMPRESSBLOCK_DOUBLEFAST NULL
#define ZSTD_COMPRESSBLOCK_DOUBLEFAST_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_DOUBLEFAST_EXTDICT NULL
#endif /* ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR */
#endif /* ZSTD_DOUBLE_FAST_H */
/**** ended inlining zstd_double_fast.h ****/
/**** start inlining zstd_lazy.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_LAZY_H
#define ZSTD_LAZY_H
/**** skipping file: zstd_compress_internal.h ****/
/**
* Dedicated Dictionary Search Structure bucket log. In the
* ZSTD_dedicatedDictSearch mode, the hashTable has
* 2 ** ZSTD_LAZY_DDSS_BUCKET_LOG entries in each bucket, rather than just
* one.
*/
#define ZSTD_LAZY_DDSS_BUCKET_LOG 2
#define ZSTD_ROW_HASH_TAG_BITS 8 /* nb bits to use for the tag */
#if !defined(ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR)
U32 ZSTD_insertAndFindFirstIndex(ZSTD_MatchState_t* ms, const BYTE* ip);
void ZSTD_row_update(ZSTD_MatchState_t* const ms, const BYTE* ip);
void ZSTD_dedicatedDictSearch_lazy_loadDictionary(ZSTD_MatchState_t* ms, const BYTE* const ip);
void ZSTD_preserveUnsortedMark (U32* const table, U32 const size, U32 const reducerValue); /*! used in ZSTD_reduceIndex(). preemptively increase value of ZSTD_DUBT_UNSORTED_MARK */
#endif
#ifndef ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_greedy(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_dictMatchState_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_dedicatedDictSearch(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_dedicatedDictSearch_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_greedy_extDict_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_GREEDY ZSTD_compressBlock_greedy
#define ZSTD_COMPRESSBLOCK_GREEDY_ROW ZSTD_compressBlock_greedy_row
#define ZSTD_COMPRESSBLOCK_GREEDY_DICTMATCHSTATE ZSTD_compressBlock_greedy_dictMatchState
#define ZSTD_COMPRESSBLOCK_GREEDY_DICTMATCHSTATE_ROW ZSTD_compressBlock_greedy_dictMatchState_row
#define ZSTD_COMPRESSBLOCK_GREEDY_DEDICATEDDICTSEARCH ZSTD_compressBlock_greedy_dedicatedDictSearch
#define ZSTD_COMPRESSBLOCK_GREEDY_DEDICATEDDICTSEARCH_ROW ZSTD_compressBlock_greedy_dedicatedDictSearch_row
#define ZSTD_COMPRESSBLOCK_GREEDY_EXTDICT ZSTD_compressBlock_greedy_extDict
#define ZSTD_COMPRESSBLOCK_GREEDY_EXTDICT_ROW ZSTD_compressBlock_greedy_extDict_row
#else
#define ZSTD_COMPRESSBLOCK_GREEDY NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_ROW NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_DICTMATCHSTATE_ROW NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_DEDICATEDDICTSEARCH NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_DEDICATEDDICTSEARCH_ROW NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_EXTDICT NULL
#define ZSTD_COMPRESSBLOCK_GREEDY_EXTDICT_ROW NULL
#endif
#ifndef ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_lazy(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_dictMatchState_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_dedicatedDictSearch(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_dedicatedDictSearch_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy_extDict_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_LAZY ZSTD_compressBlock_lazy
#define ZSTD_COMPRESSBLOCK_LAZY_ROW ZSTD_compressBlock_lazy_row
#define ZSTD_COMPRESSBLOCK_LAZY_DICTMATCHSTATE ZSTD_compressBlock_lazy_dictMatchState
#define ZSTD_COMPRESSBLOCK_LAZY_DICTMATCHSTATE_ROW ZSTD_compressBlock_lazy_dictMatchState_row
#define ZSTD_COMPRESSBLOCK_LAZY_DEDICATEDDICTSEARCH ZSTD_compressBlock_lazy_dedicatedDictSearch
#define ZSTD_COMPRESSBLOCK_LAZY_DEDICATEDDICTSEARCH_ROW ZSTD_compressBlock_lazy_dedicatedDictSearch_row
#define ZSTD_COMPRESSBLOCK_LAZY_EXTDICT ZSTD_compressBlock_lazy_extDict
#define ZSTD_COMPRESSBLOCK_LAZY_EXTDICT_ROW ZSTD_compressBlock_lazy_extDict_row
#else
#define ZSTD_COMPRESSBLOCK_LAZY NULL
#define ZSTD_COMPRESSBLOCK_LAZY_ROW NULL
#define ZSTD_COMPRESSBLOCK_LAZY_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_LAZY_DICTMATCHSTATE_ROW NULL
#define ZSTD_COMPRESSBLOCK_LAZY_DEDICATEDDICTSEARCH NULL
#define ZSTD_COMPRESSBLOCK_LAZY_DEDICATEDDICTSEARCH_ROW NULL
#define ZSTD_COMPRESSBLOCK_LAZY_EXTDICT NULL
#define ZSTD_COMPRESSBLOCK_LAZY_EXTDICT_ROW NULL
#endif
#ifndef ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_lazy2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_dictMatchState_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_dedicatedDictSearch(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_dedicatedDictSearch_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_lazy2_extDict_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_LAZY2 ZSTD_compressBlock_lazy2
#define ZSTD_COMPRESSBLOCK_LAZY2_ROW ZSTD_compressBlock_lazy2_row
#define ZSTD_COMPRESSBLOCK_LAZY2_DICTMATCHSTATE ZSTD_compressBlock_lazy2_dictMatchState
#define ZSTD_COMPRESSBLOCK_LAZY2_DICTMATCHSTATE_ROW ZSTD_compressBlock_lazy2_dictMatchState_row
#define ZSTD_COMPRESSBLOCK_LAZY2_DEDICATEDDICTSEARCH ZSTD_compressBlock_lazy2_dedicatedDictSearch
#define ZSTD_COMPRESSBLOCK_LAZY2_DEDICATEDDICTSEARCH_ROW ZSTD_compressBlock_lazy2_dedicatedDictSearch_row
#define ZSTD_COMPRESSBLOCK_LAZY2_EXTDICT ZSTD_compressBlock_lazy2_extDict
#define ZSTD_COMPRESSBLOCK_LAZY2_EXTDICT_ROW ZSTD_compressBlock_lazy2_extDict_row
#else
#define ZSTD_COMPRESSBLOCK_LAZY2 NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_ROW NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_DICTMATCHSTATE_ROW NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_DEDICATEDDICTSEARCH NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_DEDICATEDDICTSEARCH_ROW NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_EXTDICT NULL
#define ZSTD_COMPRESSBLOCK_LAZY2_EXTDICT_ROW NULL
#endif
#ifndef ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btlazy2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_btlazy2_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_btlazy2_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_BTLAZY2 ZSTD_compressBlock_btlazy2
#define ZSTD_COMPRESSBLOCK_BTLAZY2_DICTMATCHSTATE ZSTD_compressBlock_btlazy2_dictMatchState
#define ZSTD_COMPRESSBLOCK_BTLAZY2_EXTDICT ZSTD_compressBlock_btlazy2_extDict
#else
#define ZSTD_COMPRESSBLOCK_BTLAZY2 NULL
#define ZSTD_COMPRESSBLOCK_BTLAZY2_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_BTLAZY2_EXTDICT NULL
#endif
#endif /* ZSTD_LAZY_H */
/**** ended inlining zstd_lazy.h ****/
/**** start inlining zstd_opt.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_OPT_H
#define ZSTD_OPT_H
/**** skipping file: zstd_compress_internal.h ****/
#if !defined(ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR)
/* used in ZSTD_loadDictionaryContent() */
void ZSTD_updateTree(ZSTD_MatchState_t* ms, const BYTE* ip, const BYTE* iend);
#endif
#ifndef ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btopt(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_btopt_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_btopt_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_BTOPT ZSTD_compressBlock_btopt
#define ZSTD_COMPRESSBLOCK_BTOPT_DICTMATCHSTATE ZSTD_compressBlock_btopt_dictMatchState
#define ZSTD_COMPRESSBLOCK_BTOPT_EXTDICT ZSTD_compressBlock_btopt_extDict
#else
#define ZSTD_COMPRESSBLOCK_BTOPT NULL
#define ZSTD_COMPRESSBLOCK_BTOPT_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_BTOPT_EXTDICT NULL
#endif
#ifndef ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btultra(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_btultra_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
size_t ZSTD_compressBlock_btultra_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
/* note : no btultra2 variant for extDict nor dictMatchState,
* because btultra2 is not meant to work with dictionaries
* and is specific to the first block (no prefix) */
size_t ZSTD_compressBlock_btultra2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize);
#define ZSTD_COMPRESSBLOCK_BTULTRA ZSTD_compressBlock_btultra
#define ZSTD_COMPRESSBLOCK_BTULTRA_DICTMATCHSTATE ZSTD_compressBlock_btultra_dictMatchState
#define ZSTD_COMPRESSBLOCK_BTULTRA_EXTDICT ZSTD_compressBlock_btultra_extDict
#define ZSTD_COMPRESSBLOCK_BTULTRA2 ZSTD_compressBlock_btultra2
#else
#define ZSTD_COMPRESSBLOCK_BTULTRA NULL
#define ZSTD_COMPRESSBLOCK_BTULTRA_DICTMATCHSTATE NULL
#define ZSTD_COMPRESSBLOCK_BTULTRA_EXTDICT NULL
#define ZSTD_COMPRESSBLOCK_BTULTRA2 NULL
#endif
#endif /* ZSTD_OPT_H */
/**** ended inlining zstd_opt.h ****/
/**** start inlining zstd_ldm.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_LDM_H
#define ZSTD_LDM_H
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: ../zstd.h ****/
/*-*************************************
* Long distance matching
***************************************/
#define ZSTD_LDM_DEFAULT_WINDOW_LOG ZSTD_WINDOWLOG_LIMIT_DEFAULT
void ZSTD_ldm_fillHashTable(
ldmState_t* state, const BYTE* ip,
const BYTE* iend, ldmParams_t const* params);
/**
* ZSTD_ldm_generateSequences():
*
* Generates the sequences using the long distance match finder.
* Generates long range matching sequences in `sequences`, which parse a prefix
* of the source. `sequences` must be large enough to store every sequence,
* which can be checked with `ZSTD_ldm_getMaxNbSeq()`.
* @returns 0 or an error code.
*
* NOTE: The user must have called ZSTD_window_update() for all of the input
* they have, even if they pass it to ZSTD_ldm_generateSequences() in chunks.
* NOTE: This function returns an error if it runs out of space to store
* sequences.
*/
size_t ZSTD_ldm_generateSequences(
ldmState_t* ldms, RawSeqStore_t* sequences,
ldmParams_t const* params, void const* src, size_t srcSize);
/**
* ZSTD_ldm_blockCompress():
*
* Compresses a block using the predefined sequences, along with a secondary
* block compressor. The literals section of every sequence is passed to the
* secondary block compressor, and those sequences are interspersed with the
* predefined sequences. Returns the length of the last literals.
* Updates `rawSeqStore.pos` to indicate how many sequences have been consumed.
* `rawSeqStore.seq` may also be updated to split the last sequence between two
* blocks.
* @return The length of the last literals.
*
* NOTE: The source must be at most the maximum block size, but the predefined
* sequences can be any size, and may be longer than the block. In the case that
* they are longer than the block, the last sequences may need to be split into
* two. We handle that case correctly, and update `rawSeqStore` appropriately.
* NOTE: This function does not return any errors.
*/
size_t ZSTD_ldm_blockCompress(RawSeqStore_t* rawSeqStore,
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
ZSTD_ParamSwitch_e useRowMatchFinder,
void const* src, size_t srcSize);
/**
* ZSTD_ldm_skipSequences():
*
* Skip past `srcSize` bytes worth of sequences in `rawSeqStore`.
* Avoids emitting matches less than `minMatch` bytes.
* Must be called for data that is not passed to ZSTD_ldm_blockCompress().
*/
void ZSTD_ldm_skipSequences(RawSeqStore_t* rawSeqStore, size_t srcSize,
U32 const minMatch);
/* ZSTD_ldm_skipRawSeqStoreBytes():
* Moves forward in rawSeqStore by nbBytes, updating fields 'pos' and 'posInSequence'.
* Not to be used in conjunction with ZSTD_ldm_skipSequences().
* Must be called for data which is not passed to ZSTD_ldm_blockCompress().
*/
void ZSTD_ldm_skipRawSeqStoreBytes(RawSeqStore_t* rawSeqStore, size_t nbBytes);
/** ZSTD_ldm_getTableSize() :
* Estimate the space needed for long distance matching tables or 0 if LDM is
* disabled.
*/
size_t ZSTD_ldm_getTableSize(ldmParams_t params);
/** ZSTD_ldm_getMaxNbSeq() :
* Return an upper bound on the number of sequences that can be produced by
* the long distance matcher, or 0 if LDM is disabled.
*/
size_t ZSTD_ldm_getMaxNbSeq(ldmParams_t params, size_t maxChunkSize);
/** ZSTD_ldm_adjustParameters() :
* If the params->hashRateLog is not set, set it to its default value based on
* windowLog and params->hashLog.
*
* Ensures that params->bucketSizeLog is <= params->hashLog (setting it to
* params->hashLog if it is not).
*
* Ensures that the minMatchLength >= targetLength during optimal parsing.
*/
void ZSTD_ldm_adjustParameters(ldmParams_t* params,
ZSTD_compressionParameters const* cParams);
#endif /* ZSTD_LDM_H */
/**** ended inlining zstd_ldm.h ****/
/**** skipping file: zstd_compress_superblock.h ****/
/**** skipping file: ../common/bits.h ****/
/* ***************************************************************
* Tuning parameters
*****************************************************************/
/*!
* COMPRESS_HEAPMODE :
* Select how the default compression function ZSTD_compress() allocates its context,
* on stack (0, default), or into heap (1).
* Note that functions with explicit context such as ZSTD_compressCCtx() are unaffected.
*/
#ifndef ZSTD_COMPRESS_HEAPMODE
# define ZSTD_COMPRESS_HEAPMODE 0
#endif
/*!
* ZSTD_HASHLOG3_MAX :
* Maximum size of the hash table dedicated to finding 3-byte matches,
* in log format, aka 17 => 1 << 17 == 128Ki positions.
* This structure is only used in zstd_opt.
* Since allocation is centralized for all strategies, it has to be known here.
* The actual (selected) size of the hash table is then stored in ZSTD_MatchState_t.hashLog3,
* so that zstd_opt.c doesn't need to know about this constant.
*/
#ifndef ZSTD_HASHLOG3_MAX
# define ZSTD_HASHLOG3_MAX 17
#endif
/*-*************************************
* Helper functions
***************************************/
/* ZSTD_compressBound()
* Note that the result from this function is only valid for
* the one-pass compression functions.
* When employing the streaming mode,
* if flushes frequently alter the size of blocks,
* the overhead from block headers can make the compressed data larger
* than the return value of ZSTD_compressBound().
*/
size_t ZSTD_compressBound(size_t srcSize) {
size_t const r = ZSTD_COMPRESSBOUND(srcSize);
if (r==0) return ERROR(srcSize_wrong);
return r;
}
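/* Editor's sketch (illustrative usage, not part of the library): a one-pass
 * caller typically sizes its destination buffer with ZSTD_compressBound(),
 * which guarantees that ZSTD_compress() cannot fail for lack of output space. */
#if 0
#include <stdlib.h>
/* Hypothetical helper: compress `src` into a freshly allocated buffer.
 * On success, *dstPtr owns the buffer and the compressed size is returned;
 * returns 0 on allocation or compression failure. */
static size_t compress_once(void** dstPtr, const void* src, size_t srcSize, int level)
{
    size_t const bound = ZSTD_compressBound(srcSize);
    void* dst;
    size_t cSize;
    if (ZSTD_isError(bound)) return 0;
    dst = malloc(bound);
    if (dst == NULL) return 0;
    cSize = ZSTD_compress(dst, bound, src, srcSize, level);
    if (ZSTD_isError(cSize)) { free(dst); return 0; }
    *dstPtr = dst;
    return cSize;
}
#endif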
/*-*************************************
* Context memory management
***************************************/
struct ZSTD_CDict_s {
const void* dictContent;
size_t dictContentSize;
ZSTD_dictContentType_e dictContentType; /* The dictContentType the CDict was created with */
U32* entropyWorkspace; /* entropy workspace of HUF_WORKSPACE_SIZE bytes */
ZSTD_cwksp workspace;
ZSTD_MatchState_t matchState;
ZSTD_compressedBlockState_t cBlockState;
ZSTD_customMem customMem;
U32 dictID;
int compressionLevel; /* 0 indicates that advanced API was used to select CDict params */
ZSTD_ParamSwitch_e useRowMatchFinder; /* Indicates whether the CDict was created with params that would use
* row-based matchfinder. Unless the cdict is reloaded, we will use
* the same greedy/lazy matchfinder at compression time.
*/
}; /* typedef'd to ZSTD_CDict within "zstd.h" */
ZSTD_CCtx* ZSTD_createCCtx(void)
{
return ZSTD_createCCtx_advanced(ZSTD_defaultCMem);
}
static void ZSTD_initCCtx(ZSTD_CCtx* cctx, ZSTD_customMem memManager)
{
assert(cctx != NULL);
ZSTD_memset(cctx, 0, sizeof(*cctx));
cctx->customMem = memManager;
cctx->bmi2 = ZSTD_cpuSupportsBmi2();
{ size_t const err = ZSTD_CCtx_reset(cctx, ZSTD_reset_parameters);
assert(!ZSTD_isError(err));
(void)err;
}
}
ZSTD_CCtx* ZSTD_createCCtx_advanced(ZSTD_customMem customMem)
{
ZSTD_STATIC_ASSERT(zcss_init==0);
ZSTD_STATIC_ASSERT(ZSTD_CONTENTSIZE_UNKNOWN==(0ULL - 1));
if ((!customMem.customAlloc) ^ (!customMem.customFree)) return NULL;
{ ZSTD_CCtx* const cctx = (ZSTD_CCtx*)ZSTD_customMalloc(sizeof(ZSTD_CCtx), customMem);
if (!cctx) return NULL;
ZSTD_initCCtx(cctx, customMem);
return cctx;
}
}
ZSTD_CCtx* ZSTD_initStaticCCtx(void* workspace, size_t workspaceSize)
{
ZSTD_cwksp ws;
ZSTD_CCtx* cctx;
if (workspaceSize <= sizeof(ZSTD_CCtx)) return NULL; /* minimum size */
if ((size_t)workspace & 7) return NULL; /* must be 8-aligned */
ZSTD_cwksp_init(&ws, workspace, workspaceSize, ZSTD_cwksp_static_alloc);
cctx = (ZSTD_CCtx*)ZSTD_cwksp_reserve_object(&ws, sizeof(ZSTD_CCtx));
if (cctx == NULL) return NULL;
ZSTD_memset(cctx, 0, sizeof(ZSTD_CCtx));
ZSTD_cwksp_move(&cctx->workspace, &ws);
cctx->staticSize = workspaceSize;
/* statically sized space. tmpWorkspace never moves (but prev/next block swap places) */
if (!ZSTD_cwksp_check_available(&cctx->workspace, TMP_WORKSPACE_SIZE + 2 * sizeof(ZSTD_compressedBlockState_t))) return NULL;
cctx->blockState.prevCBlock = (ZSTD_compressedBlockState_t*)ZSTD_cwksp_reserve_object(&cctx->workspace, sizeof(ZSTD_compressedBlockState_t));
cctx->blockState.nextCBlock = (ZSTD_compressedBlockState_t*)ZSTD_cwksp_reserve_object(&cctx->workspace, sizeof(ZSTD_compressedBlockState_t));
cctx->tmpWorkspace = ZSTD_cwksp_reserve_object(&cctx->workspace, TMP_WORKSPACE_SIZE);
cctx->tmpWkspSize = TMP_WORKSPACE_SIZE;
cctx->bmi2 = ZSTD_cpuid_bmi2(ZSTD_cpuid());
return cctx;
}
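/* Editor's sketch (illustrative, not part of the library): typical usage of
 * the static CCtx pairs ZSTD_estimateCCtxSize() with ZSTD_initStaticCCtx(),
 * so that compression can run without any dynamic allocation. */
#if 0
static ZSTD_CCtx* make_static_cctx(void* buffer, size_t bufferSize, int level)
{
    /* buffer must be 8-byte aligned and at least ZSTD_estimateCCtxSize(level) bytes */
    if (bufferSize < ZSTD_estimateCCtxSize(level)) return NULL;
    return ZSTD_initStaticCCtx(buffer, bufferSize);
}
#endif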
/**
* Clears and frees all of the dictionaries in the CCtx.
*/
static void ZSTD_clearAllDicts(ZSTD_CCtx* cctx)
{
ZSTD_customFree(cctx->localDict.dictBuffer, cctx->customMem);
ZSTD_freeCDict(cctx->localDict.cdict);
ZSTD_memset(&cctx->localDict, 0, sizeof(cctx->localDict));
ZSTD_memset(&cctx->prefixDict, 0, sizeof(cctx->prefixDict));
cctx->cdict = NULL;
}
static size_t ZSTD_sizeof_localDict(ZSTD_localDict dict)
{
size_t const bufferSize = dict.dictBuffer != NULL ? dict.dictSize : 0;
size_t const cdictSize = ZSTD_sizeof_CDict(dict.cdict);
return bufferSize + cdictSize;
}
static void ZSTD_freeCCtxContent(ZSTD_CCtx* cctx)
{
assert(cctx != NULL);
assert(cctx->staticSize == 0);
ZSTD_clearAllDicts(cctx);
#ifdef ZSTD_MULTITHREAD
ZSTDMT_freeCCtx(cctx->mtctx); cctx->mtctx = NULL;
#endif
ZSTD_cwksp_free(&cctx->workspace, cctx->customMem);
}
size_t ZSTD_freeCCtx(ZSTD_CCtx* cctx)
{
DEBUGLOG(3, "ZSTD_freeCCtx (address: %p)", (void*)cctx);
if (cctx==NULL) return 0; /* support free on NULL */
RETURN_ERROR_IF(cctx->staticSize, memory_allocation,
"not compatible with static CCtx");
{ int cctxInWorkspace = ZSTD_cwksp_owns_buffer(&cctx->workspace, cctx);
ZSTD_freeCCtxContent(cctx);
if (!cctxInWorkspace) ZSTD_customFree(cctx, cctx->customMem);
}
return 0;
}
static size_t ZSTD_sizeof_mtctx(const ZSTD_CCtx* cctx)
{
#ifdef ZSTD_MULTITHREAD
return ZSTDMT_sizeof_CCtx(cctx->mtctx);
#else
(void)cctx;
return 0;
#endif
}
size_t ZSTD_sizeof_CCtx(const ZSTD_CCtx* cctx)
{
if (cctx==NULL) return 0; /* support sizeof on NULL */
/* cctx may be in the workspace */
return (cctx->workspace.workspace == cctx ? 0 : sizeof(*cctx))
+ ZSTD_cwksp_sizeof(&cctx->workspace)
+ ZSTD_sizeof_localDict(cctx->localDict)
+ ZSTD_sizeof_mtctx(cctx);
}
size_t ZSTD_sizeof_CStream(const ZSTD_CStream* zcs)
{
return ZSTD_sizeof_CCtx(zcs); /* same object */
}
/* private API call, for dictBuilder only */
const SeqStore_t* ZSTD_getSeqStore(const ZSTD_CCtx* ctx) { return &(ctx->seqStore); }
/* Returns true if the strategy supports using a row based matchfinder */
static int ZSTD_rowMatchFinderSupported(const ZSTD_strategy strategy) {
return (strategy >= ZSTD_greedy && strategy <= ZSTD_lazy2);
}
/* Returns true if the strategy and useRowMatchFinder mode indicate that we will use the row based matchfinder
* for this compression.
*/
static int ZSTD_rowMatchFinderUsed(const ZSTD_strategy strategy, const ZSTD_ParamSwitch_e mode) {
assert(mode != ZSTD_ps_auto);
return ZSTD_rowMatchFinderSupported(strategy) && (mode == ZSTD_ps_enable);
}
/* Returns row matchfinder usage given an initial mode and cParams */
static ZSTD_ParamSwitch_e ZSTD_resolveRowMatchFinderMode(ZSTD_ParamSwitch_e mode,
const ZSTD_compressionParameters* const cParams) {
if (mode != ZSTD_ps_auto) return mode; /* if explicitly requested, honor it: even without SIMD, we will still use the row matchfinder */
mode = ZSTD_ps_disable;
if (!ZSTD_rowMatchFinderSupported(cParams->strategy)) return mode;
if (cParams->windowLog > 14) mode = ZSTD_ps_enable;
return mode;
}
/* Returns block splitter usage (generally speaking, when using slower/stronger compression modes) */
static ZSTD_ParamSwitch_e ZSTD_resolveBlockSplitterMode(ZSTD_ParamSwitch_e mode,
const ZSTD_compressionParameters* const cParams) {
if (mode != ZSTD_ps_auto) return mode;
return (cParams->strategy >= ZSTD_btopt && cParams->windowLog >= 17) ? ZSTD_ps_enable : ZSTD_ps_disable;
}
/* Returns 1 if the arguments indicate that we should allocate a chainTable, 0 otherwise */
static int ZSTD_allocateChainTable(const ZSTD_strategy strategy,
const ZSTD_ParamSwitch_e useRowMatchFinder,
const U32 forDDSDict) {
assert(useRowMatchFinder != ZSTD_ps_auto);
/* We should always allocate a chaintable if we are allocating a matchstate for a DDS dictionary.
* We do not allocate a chaintable if we are using ZSTD_fast, or are using the row-based matchfinder.
*/
return forDDSDict || ((strategy != ZSTD_fast) && !ZSTD_rowMatchFinderUsed(strategy, useRowMatchFinder));
}
/* Returns ZSTD_ps_enable if compression parameters are such that we should
* enable long distance matching (wlog >= 27, strategy >= btopt).
* Returns ZSTD_ps_disable otherwise.
*/
static ZSTD_ParamSwitch_e ZSTD_resolveEnableLdm(ZSTD_ParamSwitch_e mode,
const ZSTD_compressionParameters* const cParams) {
if (mode != ZSTD_ps_auto) return mode;
return (cParams->strategy >= ZSTD_btopt && cParams->windowLog >= 27) ? ZSTD_ps_enable : ZSTD_ps_disable;
}
static int ZSTD_resolveExternalSequenceValidation(int mode) {
return mode;
}
/* Resolves maxBlockSize to the default if no value is present. */
static size_t ZSTD_resolveMaxBlockSize(size_t maxBlockSize) {
if (maxBlockSize == 0) {
return ZSTD_BLOCKSIZE_MAX;
} else {
return maxBlockSize;
}
}
static ZSTD_ParamSwitch_e ZSTD_resolveExternalRepcodeSearch(ZSTD_ParamSwitch_e value, int cLevel) {
if (value != ZSTD_ps_auto) return value;
if (cLevel < 10) {
return ZSTD_ps_disable;
} else {
return ZSTD_ps_enable;
}
}
/* Returns 1 if compression parameters are such that CDict hashtable and chaintable indices are tagged.
* If so, the tags need to be removed in ZSTD_resetCCtx_byCopyingCDict. */
static int ZSTD_CDictIndicesAreTagged(const ZSTD_compressionParameters* const cParams) {
return cParams->strategy == ZSTD_fast || cParams->strategy == ZSTD_dfast;
}
static ZSTD_CCtx_params ZSTD_makeCCtxParamsFromCParams(
ZSTD_compressionParameters cParams)
{
ZSTD_CCtx_params cctxParams;
/* should not matter, as all cParams are presumed properly defined */
ZSTD_CCtxParams_init(&cctxParams, ZSTD_CLEVEL_DEFAULT);
cctxParams.cParams = cParams;
/* Adjust advanced params according to cParams */
cctxParams.ldmParams.enableLdm = ZSTD_resolveEnableLdm(cctxParams.ldmParams.enableLdm, &cParams);
if (cctxParams.ldmParams.enableLdm == ZSTD_ps_enable) {
ZSTD_ldm_adjustParameters(&cctxParams.ldmParams, &cParams);
assert(cctxParams.ldmParams.hashLog >= cctxParams.ldmParams.bucketSizeLog);
assert(cctxParams.ldmParams.hashRateLog < 32);
}
cctxParams.postBlockSplitter = ZSTD_resolveBlockSplitterMode(cctxParams.postBlockSplitter, &cParams);
cctxParams.useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(cctxParams.useRowMatchFinder, &cParams);
cctxParams.validateSequences = ZSTD_resolveExternalSequenceValidation(cctxParams.validateSequences);
cctxParams.maxBlockSize = ZSTD_resolveMaxBlockSize(cctxParams.maxBlockSize);
cctxParams.searchForExternalRepcodes = ZSTD_resolveExternalRepcodeSearch(cctxParams.searchForExternalRepcodes,
cctxParams.compressionLevel);
assert(!ZSTD_checkCParams(cParams));
return cctxParams;
}
static ZSTD_CCtx_params* ZSTD_createCCtxParams_advanced(
ZSTD_customMem customMem)
{
ZSTD_CCtx_params* params;
if ((!customMem.customAlloc) ^ (!customMem.customFree)) return NULL;
params = (ZSTD_CCtx_params*)ZSTD_customCalloc(
sizeof(ZSTD_CCtx_params), customMem);
if (!params) { return NULL; }
ZSTD_CCtxParams_init(params, ZSTD_CLEVEL_DEFAULT);
params->customMem = customMem;
return params;
}
ZSTD_CCtx_params* ZSTD_createCCtxParams(void)
{
return ZSTD_createCCtxParams_advanced(ZSTD_defaultCMem);
}
size_t ZSTD_freeCCtxParams(ZSTD_CCtx_params* params)
{
if (params == NULL) { return 0; }
ZSTD_customFree(params, params->customMem);
return 0;
}
size_t ZSTD_CCtxParams_reset(ZSTD_CCtx_params* params)
{
return ZSTD_CCtxParams_init(params, ZSTD_CLEVEL_DEFAULT);
}
size_t ZSTD_CCtxParams_init(ZSTD_CCtx_params* cctxParams, int compressionLevel) {
RETURN_ERROR_IF(!cctxParams, GENERIC, "NULL pointer!");
ZSTD_memset(cctxParams, 0, sizeof(*cctxParams));
cctxParams->compressionLevel = compressionLevel;
cctxParams->fParams.contentSizeFlag = 1;
return 0;
}
#define ZSTD_NO_CLEVEL 0
/**
* Initializes `cctxParams` from `params` and `compressionLevel`.
* @param compressionLevel If params are derived from a compression level then that compression level, otherwise ZSTD_NO_CLEVEL.
*/
static void
ZSTD_CCtxParams_init_internal(ZSTD_CCtx_params* cctxParams,
const ZSTD_parameters* params,
int compressionLevel)
{
assert(!ZSTD_checkCParams(params->cParams));
ZSTD_memset(cctxParams, 0, sizeof(*cctxParams));
cctxParams->cParams = params->cParams;
cctxParams->fParams = params->fParams;
/* Should not matter, as all cParams are presumed properly defined.
* But, set it for tracing anyway.
*/
cctxParams->compressionLevel = compressionLevel;
cctxParams->useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(cctxParams->useRowMatchFinder, &params->cParams);
cctxParams->postBlockSplitter = ZSTD_resolveBlockSplitterMode(cctxParams->postBlockSplitter, &params->cParams);
cctxParams->ldmParams.enableLdm = ZSTD_resolveEnableLdm(cctxParams->ldmParams.enableLdm, &params->cParams);
cctxParams->validateSequences = ZSTD_resolveExternalSequenceValidation(cctxParams->validateSequences);
cctxParams->maxBlockSize = ZSTD_resolveMaxBlockSize(cctxParams->maxBlockSize);
cctxParams->searchForExternalRepcodes = ZSTD_resolveExternalRepcodeSearch(cctxParams->searchForExternalRepcodes, compressionLevel);
DEBUGLOG(4, "ZSTD_CCtxParams_init_internal: useRowMatchFinder=%d, useBlockSplitter=%d ldm=%d",
cctxParams->useRowMatchFinder, cctxParams->postBlockSplitter, cctxParams->ldmParams.enableLdm);
}
size_t ZSTD_CCtxParams_init_advanced(ZSTD_CCtx_params* cctxParams, ZSTD_parameters params)
{
RETURN_ERROR_IF(!cctxParams, GENERIC, "NULL pointer!");
FORWARD_IF_ERROR( ZSTD_checkCParams(params.cParams) , "");
ZSTD_CCtxParams_init_internal(cctxParams, &params, ZSTD_NO_CLEVEL);
return 0;
}
/**
* Sets cctxParams' cParams and fParams from params, but otherwise leaves them alone.
* @param params Validated zstd parameters.
*/
static void ZSTD_CCtxParams_setZstdParams(
ZSTD_CCtx_params* cctxParams, const ZSTD_parameters* params)
{
assert(!ZSTD_checkCParams(params->cParams));
cctxParams->cParams = params->cParams;
cctxParams->fParams = params->fParams;
/* Should not matter, as all cParams are presumed properly defined.
* But, set it for tracing anyway.
*/
cctxParams->compressionLevel = ZSTD_NO_CLEVEL;
}
ZSTD_bounds ZSTD_cParam_getBounds(ZSTD_cParameter param)
{
ZSTD_bounds bounds = { 0, 0, 0 };
switch(param)
{
case ZSTD_c_compressionLevel:
bounds.lowerBound = ZSTD_minCLevel();
bounds.upperBound = ZSTD_maxCLevel();
return bounds;
case ZSTD_c_windowLog:
bounds.lowerBound = ZSTD_WINDOWLOG_MIN;
bounds.upperBound = ZSTD_WINDOWLOG_MAX;
return bounds;
case ZSTD_c_hashLog:
bounds.lowerBound = ZSTD_HASHLOG_MIN;
bounds.upperBound = ZSTD_HASHLOG_MAX;
return bounds;
case ZSTD_c_chainLog:
bounds.lowerBound = ZSTD_CHAINLOG_MIN;
bounds.upperBound = ZSTD_CHAINLOG_MAX;
return bounds;
case ZSTD_c_searchLog:
bounds.lowerBound = ZSTD_SEARCHLOG_MIN;
bounds.upperBound = ZSTD_SEARCHLOG_MAX;
return bounds;
case ZSTD_c_minMatch:
bounds.lowerBound = ZSTD_MINMATCH_MIN;
bounds.upperBound = ZSTD_MINMATCH_MAX;
return bounds;
case ZSTD_c_targetLength:
bounds.lowerBound = ZSTD_TARGETLENGTH_MIN;
bounds.upperBound = ZSTD_TARGETLENGTH_MAX;
return bounds;
case ZSTD_c_strategy:
bounds.lowerBound = ZSTD_STRATEGY_MIN;
bounds.upperBound = ZSTD_STRATEGY_MAX;
return bounds;
case ZSTD_c_contentSizeFlag:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_checksumFlag:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_dictIDFlag:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_nbWorkers:
bounds.lowerBound = 0;
#ifdef ZSTD_MULTITHREAD
bounds.upperBound = ZSTDMT_NBWORKERS_MAX;
#else
bounds.upperBound = 0;
#endif
return bounds;
case ZSTD_c_jobSize:
bounds.lowerBound = 0;
#ifdef ZSTD_MULTITHREAD
bounds.upperBound = ZSTDMT_JOBSIZE_MAX;
#else
bounds.upperBound = 0;
#endif
return bounds;
case ZSTD_c_overlapLog:
#ifdef ZSTD_MULTITHREAD
bounds.lowerBound = ZSTD_OVERLAPLOG_MIN;
bounds.upperBound = ZSTD_OVERLAPLOG_MAX;
#else
bounds.lowerBound = 0;
bounds.upperBound = 0;
#endif
return bounds;
case ZSTD_c_enableDedicatedDictSearch:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_enableLongDistanceMatching:
bounds.lowerBound = (int)ZSTD_ps_auto;
bounds.upperBound = (int)ZSTD_ps_disable;
return bounds;
case ZSTD_c_ldmHashLog:
bounds.lowerBound = ZSTD_LDM_HASHLOG_MIN;
bounds.upperBound = ZSTD_LDM_HASHLOG_MAX;
return bounds;
case ZSTD_c_ldmMinMatch:
bounds.lowerBound = ZSTD_LDM_MINMATCH_MIN;
bounds.upperBound = ZSTD_LDM_MINMATCH_MAX;
return bounds;
case ZSTD_c_ldmBucketSizeLog:
bounds.lowerBound = ZSTD_LDM_BUCKETSIZELOG_MIN;
bounds.upperBound = ZSTD_LDM_BUCKETSIZELOG_MAX;
return bounds;
case ZSTD_c_ldmHashRateLog:
bounds.lowerBound = ZSTD_LDM_HASHRATELOG_MIN;
bounds.upperBound = ZSTD_LDM_HASHRATELOG_MAX;
return bounds;
/* experimental parameters */
case ZSTD_c_rsyncable:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_forceMaxWindow :
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_format:
ZSTD_STATIC_ASSERT(ZSTD_f_zstd1 < ZSTD_f_zstd1_magicless);
bounds.lowerBound = ZSTD_f_zstd1;
bounds.upperBound = ZSTD_f_zstd1_magicless; /* note : how to ensure at compile time that this is the highest value enum ? */
return bounds;
case ZSTD_c_forceAttachDict:
ZSTD_STATIC_ASSERT(ZSTD_dictDefaultAttach < ZSTD_dictForceLoad);
bounds.lowerBound = ZSTD_dictDefaultAttach;
bounds.upperBound = ZSTD_dictForceLoad; /* note : how to ensure at compile time that this is the highest value enum ? */
return bounds;
case ZSTD_c_literalCompressionMode:
ZSTD_STATIC_ASSERT(ZSTD_ps_auto < ZSTD_ps_enable && ZSTD_ps_enable < ZSTD_ps_disable);
bounds.lowerBound = (int)ZSTD_ps_auto;
bounds.upperBound = (int)ZSTD_ps_disable;
return bounds;
case ZSTD_c_targetCBlockSize:
bounds.lowerBound = ZSTD_TARGETCBLOCKSIZE_MIN;
bounds.upperBound = ZSTD_TARGETCBLOCKSIZE_MAX;
return bounds;
case ZSTD_c_srcSizeHint:
bounds.lowerBound = ZSTD_SRCSIZEHINT_MIN;
bounds.upperBound = ZSTD_SRCSIZEHINT_MAX;
return bounds;
case ZSTD_c_stableInBuffer:
case ZSTD_c_stableOutBuffer:
bounds.lowerBound = (int)ZSTD_bm_buffered;
bounds.upperBound = (int)ZSTD_bm_stable;
return bounds;
case ZSTD_c_blockDelimiters:
bounds.lowerBound = (int)ZSTD_sf_noBlockDelimiters;
bounds.upperBound = (int)ZSTD_sf_explicitBlockDelimiters;
return bounds;
case ZSTD_c_validateSequences:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_splitAfterSequences:
bounds.lowerBound = (int)ZSTD_ps_auto;
bounds.upperBound = (int)ZSTD_ps_disable;
return bounds;
case ZSTD_c_blockSplitterLevel:
bounds.lowerBound = 0;
bounds.upperBound = ZSTD_BLOCKSPLITTER_LEVEL_MAX;
return bounds;
case ZSTD_c_useRowMatchFinder:
bounds.lowerBound = (int)ZSTD_ps_auto;
bounds.upperBound = (int)ZSTD_ps_disable;
return bounds;
case ZSTD_c_deterministicRefPrefix:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_prefetchCDictTables:
bounds.lowerBound = (int)ZSTD_ps_auto;
bounds.upperBound = (int)ZSTD_ps_disable;
return bounds;
case ZSTD_c_enableSeqProducerFallback:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_c_maxBlockSize:
bounds.lowerBound = ZSTD_BLOCKSIZE_MAX_MIN;
bounds.upperBound = ZSTD_BLOCKSIZE_MAX;
return bounds;
case ZSTD_c_repcodeResolution:
bounds.lowerBound = (int)ZSTD_ps_auto;
bounds.upperBound = (int)ZSTD_ps_disable;
return bounds;
default:
bounds.error = ERROR(parameter_unsupported);
return bounds;
}
}
/* ZSTD_cParam_clampBounds:
* Clamps the value into the bounded range.
*/
static size_t ZSTD_cParam_clampBounds(ZSTD_cParameter cParam, int* value)
{
ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam);
if (ZSTD_isError(bounds.error)) return bounds.error;
if (*value < bounds.lowerBound) *value = bounds.lowerBound;
if (*value > bounds.upperBound) *value = bounds.upperBound;
return 0;
}
#define BOUNDCHECK(cParam, val) \
do { \
RETURN_ERROR_IF(!ZSTD_cParam_withinBounds(cParam,val), \
parameter_outOfBound, "Param out of bounds"); \
} while (0)
static int ZSTD_isUpdateAuthorized(ZSTD_cParameter param)
{
switch(param)
{
case ZSTD_c_compressionLevel:
case ZSTD_c_hashLog:
case ZSTD_c_chainLog:
case ZSTD_c_searchLog:
case ZSTD_c_minMatch:
case ZSTD_c_targetLength:
case ZSTD_c_strategy:
case ZSTD_c_blockSplitterLevel:
return 1;
case ZSTD_c_format:
case ZSTD_c_windowLog:
case ZSTD_c_contentSizeFlag:
case ZSTD_c_checksumFlag:
case ZSTD_c_dictIDFlag:
case ZSTD_c_forceMaxWindow :
case ZSTD_c_nbWorkers:
case ZSTD_c_jobSize:
case ZSTD_c_overlapLog:
case ZSTD_c_rsyncable:
case ZSTD_c_enableDedicatedDictSearch:
case ZSTD_c_enableLongDistanceMatching:
case ZSTD_c_ldmHashLog:
case ZSTD_c_ldmMinMatch:
case ZSTD_c_ldmBucketSizeLog:
case ZSTD_c_ldmHashRateLog:
case ZSTD_c_forceAttachDict:
case ZSTD_c_literalCompressionMode:
case ZSTD_c_targetCBlockSize:
case ZSTD_c_srcSizeHint:
case ZSTD_c_stableInBuffer:
case ZSTD_c_stableOutBuffer:
case ZSTD_c_blockDelimiters:
case ZSTD_c_validateSequences:
case ZSTD_c_splitAfterSequences:
case ZSTD_c_useRowMatchFinder:
case ZSTD_c_deterministicRefPrefix:
case ZSTD_c_prefetchCDictTables:
case ZSTD_c_enableSeqProducerFallback:
case ZSTD_c_maxBlockSize:
case ZSTD_c_repcodeResolution:
default:
return 0;
}
}
size_t ZSTD_CCtx_setParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, int value)
{
DEBUGLOG(4, "ZSTD_CCtx_setParameter (%i, %i)", (int)param, value);
if (cctx->streamStage != zcss_init) {
if (ZSTD_isUpdateAuthorized(param)) {
cctx->cParamsChanged = 1;
} else {
RETURN_ERROR(stage_wrong, "can only set params in cctx init stage");
} }
switch(param)
{
case ZSTD_c_nbWorkers:
RETURN_ERROR_IF((value!=0) && cctx->staticSize, parameter_unsupported,
"MT not compatible with static alloc");
break;
case ZSTD_c_compressionLevel:
case ZSTD_c_windowLog:
case ZSTD_c_hashLog:
case ZSTD_c_chainLog:
case ZSTD_c_searchLog:
case ZSTD_c_minMatch:
case ZSTD_c_targetLength:
case ZSTD_c_strategy:
case ZSTD_c_ldmHashRateLog:
case ZSTD_c_format:
case ZSTD_c_contentSizeFlag:
case ZSTD_c_checksumFlag:
case ZSTD_c_dictIDFlag:
case ZSTD_c_forceMaxWindow:
case ZSTD_c_forceAttachDict:
case ZSTD_c_literalCompressionMode:
case ZSTD_c_jobSize:
case ZSTD_c_overlapLog:
case ZSTD_c_rsyncable:
case ZSTD_c_enableDedicatedDictSearch:
case ZSTD_c_enableLongDistanceMatching:
case ZSTD_c_ldmHashLog:
case ZSTD_c_ldmMinMatch:
case ZSTD_c_ldmBucketSizeLog:
case ZSTD_c_targetCBlockSize:
case ZSTD_c_srcSizeHint:
case ZSTD_c_stableInBuffer:
case ZSTD_c_stableOutBuffer:
case ZSTD_c_blockDelimiters:
case ZSTD_c_validateSequences:
case ZSTD_c_splitAfterSequences:
case ZSTD_c_blockSplitterLevel:
case ZSTD_c_useRowMatchFinder:
case ZSTD_c_deterministicRefPrefix:
case ZSTD_c_prefetchCDictTables:
case ZSTD_c_enableSeqProducerFallback:
case ZSTD_c_maxBlockSize:
case ZSTD_c_repcodeResolution:
break;
default: RETURN_ERROR(parameter_unsupported, "unknown parameter");
}
return ZSTD_CCtxParams_setParameter(&cctx->requestedParams, param, value);
}
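/* Example usage (illustrative sketch, not part of the library; assumes the public
* <zstd.h> declarations and a caller-managed buffer) :
* advanced parameters are set one by one on a ZSTD_CCtx, normally while the
* context is still in its init stage, and each return code should be checked.
*
*   ZSTD_CCtx* const cctx = ZSTD_createCCtx();
*   size_t err;
*   err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
*   if (ZSTD_isError(err)) { .. handle error .. }
*   err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
*   if (ZSTD_isError(err)) { .. handle error .. }
*   .. compress with ZSTD_compress2() or ZSTD_compressStream2() ..
*   ZSTD_freeCCtx(cctx);
*/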
size_t ZSTD_CCtxParams_setParameter(ZSTD_CCtx_params* CCtxParams,
ZSTD_cParameter param, int value)
{
DEBUGLOG(4, "ZSTD_CCtxParams_setParameter (%i, %i)", (int)param, value);
switch(param)
{
case ZSTD_c_format :
BOUNDCHECK(ZSTD_c_format, value);
CCtxParams->format = (ZSTD_format_e)value;
return (size_t)CCtxParams->format;
case ZSTD_c_compressionLevel : {
FORWARD_IF_ERROR(ZSTD_cParam_clampBounds(param, &value), "");
if (value == 0)
CCtxParams->compressionLevel = ZSTD_CLEVEL_DEFAULT; /* 0 == default */
else
CCtxParams->compressionLevel = value;
if (CCtxParams->compressionLevel >= 0) return (size_t)CCtxParams->compressionLevel;
return 0; /* return type (size_t) cannot represent negative values */
}
case ZSTD_c_windowLog :
if (value!=0) /* 0 => use default */
BOUNDCHECK(ZSTD_c_windowLog, value);
CCtxParams->cParams.windowLog = (U32)value;
return CCtxParams->cParams.windowLog;
case ZSTD_c_hashLog :
if (value!=0) /* 0 => use default */
BOUNDCHECK(ZSTD_c_hashLog, value);
CCtxParams->cParams.hashLog = (U32)value;
return CCtxParams->cParams.hashLog;
case ZSTD_c_chainLog :
if (value!=0) /* 0 => use default */
BOUNDCHECK(ZSTD_c_chainLog, value);
CCtxParams->cParams.chainLog = (U32)value;
return CCtxParams->cParams.chainLog;
case ZSTD_c_searchLog :
if (value!=0) /* 0 => use default */
BOUNDCHECK(ZSTD_c_searchLog, value);
CCtxParams->cParams.searchLog = (U32)value;
return (size_t)value;
case ZSTD_c_minMatch :
if (value!=0) /* 0 => use default */
BOUNDCHECK(ZSTD_c_minMatch, value);
CCtxParams->cParams.minMatch = (U32)value;
return CCtxParams->cParams.minMatch;
case ZSTD_c_targetLength :
BOUNDCHECK(ZSTD_c_targetLength, value);
CCtxParams->cParams.targetLength = (U32)value;
return CCtxParams->cParams.targetLength;
case ZSTD_c_strategy :
if (value!=0) /* 0 => use default */
BOUNDCHECK(ZSTD_c_strategy, value);
CCtxParams->cParams.strategy = (ZSTD_strategy)value;
return (size_t)CCtxParams->cParams.strategy;
case ZSTD_c_contentSizeFlag :
/* Content size written in frame header _when known_ (default:1) */
DEBUGLOG(4, "set content size flag = %u", (value!=0));
CCtxParams->fParams.contentSizeFlag = value != 0;
return (size_t)CCtxParams->fParams.contentSizeFlag;
case ZSTD_c_checksumFlag :
/* A 32-bit content checksum will be calculated and written at end of frame (default:0) */
CCtxParams->fParams.checksumFlag = value != 0;
return (size_t)CCtxParams->fParams.checksumFlag;
case ZSTD_c_dictIDFlag : /* When applicable, dictionary's dictID is provided in frame header (default:1) */
DEBUGLOG(4, "set dictIDFlag = %u", (value!=0));
CCtxParams->fParams.noDictIDFlag = !value;
return !CCtxParams->fParams.noDictIDFlag;
case ZSTD_c_forceMaxWindow :
CCtxParams->forceWindow = (value != 0);
return (size_t)CCtxParams->forceWindow;
case ZSTD_c_forceAttachDict : {
const ZSTD_dictAttachPref_e pref = (ZSTD_dictAttachPref_e)value;
BOUNDCHECK(ZSTD_c_forceAttachDict, (int)pref);
CCtxParams->attachDictPref = pref;
return CCtxParams->attachDictPref;
}
case ZSTD_c_literalCompressionMode : {
const ZSTD_ParamSwitch_e lcm = (ZSTD_ParamSwitch_e)value;
BOUNDCHECK(ZSTD_c_literalCompressionMode, (int)lcm);
CCtxParams->literalCompressionMode = lcm;
return CCtxParams->literalCompressionMode;
}
case ZSTD_c_nbWorkers :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR_IF(value!=0, parameter_unsupported, "not compiled with multithreading");
return 0;
#else
FORWARD_IF_ERROR(ZSTD_cParam_clampBounds(param, &value), "");
CCtxParams->nbWorkers = value;
return (size_t)(CCtxParams->nbWorkers);
#endif
case ZSTD_c_jobSize :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR_IF(value!=0, parameter_unsupported, "not compiled with multithreading");
return 0;
#else
/* Adjust to the minimum non-default value. */
if (value != 0 && value < ZSTDMT_JOBSIZE_MIN)
value = ZSTDMT_JOBSIZE_MIN;
FORWARD_IF_ERROR(ZSTD_cParam_clampBounds(param, &value), "");
assert(value >= 0);
CCtxParams->jobSize = (size_t)value;
return CCtxParams->jobSize;
#endif
case ZSTD_c_overlapLog :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR_IF(value!=0, parameter_unsupported, "not compiled with multithreading");
return 0;
#else
FORWARD_IF_ERROR(ZSTD_cParam_clampBounds(ZSTD_c_overlapLog, &value), "");
CCtxParams->overlapLog = value;
return (size_t)CCtxParams->overlapLog;
#endif
case ZSTD_c_rsyncable :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR_IF(value!=0, parameter_unsupported, "not compiled with multithreading");
return 0;
#else
FORWARD_IF_ERROR(ZSTD_cParam_clampBounds(ZSTD_c_overlapLog, &value), "");
CCtxParams->rsyncable = value;
return (size_t)CCtxParams->rsyncable;
#endif
case ZSTD_c_enableDedicatedDictSearch :
CCtxParams->enableDedicatedDictSearch = (value!=0);
return (size_t)CCtxParams->enableDedicatedDictSearch;
case ZSTD_c_enableLongDistanceMatching :
BOUNDCHECK(ZSTD_c_enableLongDistanceMatching, value);
CCtxParams->ldmParams.enableLdm = (ZSTD_ParamSwitch_e)value;
return CCtxParams->ldmParams.enableLdm;
case ZSTD_c_ldmHashLog :
if (value!=0) /* 0 ==> auto */
BOUNDCHECK(ZSTD_c_ldmHashLog, value);
CCtxParams->ldmParams.hashLog = (U32)value;
return CCtxParams->ldmParams.hashLog;
case ZSTD_c_ldmMinMatch :
if (value!=0) /* 0 ==> default */
BOUNDCHECK(ZSTD_c_ldmMinMatch, value);
CCtxParams->ldmParams.minMatchLength = (U32)value;
return CCtxParams->ldmParams.minMatchLength;
case ZSTD_c_ldmBucketSizeLog :
if (value!=0) /* 0 ==> default */
BOUNDCHECK(ZSTD_c_ldmBucketSizeLog, value);
CCtxParams->ldmParams.bucketSizeLog = (U32)value;
return CCtxParams->ldmParams.bucketSizeLog;
case ZSTD_c_ldmHashRateLog :
if (value!=0) /* 0 ==> default */
BOUNDCHECK(ZSTD_c_ldmHashRateLog, value);
CCtxParams->ldmParams.hashRateLog = (U32)value;
return CCtxParams->ldmParams.hashRateLog;
case ZSTD_c_targetCBlockSize :
if (value!=0) { /* 0 ==> default */
value = MAX(value, ZSTD_TARGETCBLOCKSIZE_MIN);
BOUNDCHECK(ZSTD_c_targetCBlockSize, value);
}
CCtxParams->targetCBlockSize = (U32)value;
return CCtxParams->targetCBlockSize;
case ZSTD_c_srcSizeHint :
if (value!=0) /* 0 ==> default */
BOUNDCHECK(ZSTD_c_srcSizeHint, value);
CCtxParams->srcSizeHint = value;
return (size_t)CCtxParams->srcSizeHint;
case ZSTD_c_stableInBuffer:
BOUNDCHECK(ZSTD_c_stableInBuffer, value);
CCtxParams->inBufferMode = (ZSTD_bufferMode_e)value;
return CCtxParams->inBufferMode;
case ZSTD_c_stableOutBuffer:
BOUNDCHECK(ZSTD_c_stableOutBuffer, value);
CCtxParams->outBufferMode = (ZSTD_bufferMode_e)value;
return CCtxParams->outBufferMode;
case ZSTD_c_blockDelimiters:
BOUNDCHECK(ZSTD_c_blockDelimiters, value);
CCtxParams->blockDelimiters = (ZSTD_SequenceFormat_e)value;
return CCtxParams->blockDelimiters;
case ZSTD_c_validateSequences:
BOUNDCHECK(ZSTD_c_validateSequences, value);
CCtxParams->validateSequences = value;
return (size_t)CCtxParams->validateSequences;
case ZSTD_c_splitAfterSequences:
BOUNDCHECK(ZSTD_c_splitAfterSequences, value);
CCtxParams->postBlockSplitter = (ZSTD_ParamSwitch_e)value;
return CCtxParams->postBlockSplitter;
case ZSTD_c_blockSplitterLevel:
BOUNDCHECK(ZSTD_c_blockSplitterLevel, value);
CCtxParams->preBlockSplitter_level = value;
return (size_t)CCtxParams->preBlockSplitter_level;
case ZSTD_c_useRowMatchFinder:
BOUNDCHECK(ZSTD_c_useRowMatchFinder, value);
CCtxParams->useRowMatchFinder = (ZSTD_ParamSwitch_e)value;
return CCtxParams->useRowMatchFinder;
case ZSTD_c_deterministicRefPrefix:
BOUNDCHECK(ZSTD_c_deterministicRefPrefix, value);
CCtxParams->deterministicRefPrefix = !!value;
return (size_t)CCtxParams->deterministicRefPrefix;
case ZSTD_c_prefetchCDictTables:
BOUNDCHECK(ZSTD_c_prefetchCDictTables, value);
CCtxParams->prefetchCDictTables = (ZSTD_ParamSwitch_e)value;
return CCtxParams->prefetchCDictTables;
case ZSTD_c_enableSeqProducerFallback:
BOUNDCHECK(ZSTD_c_enableSeqProducerFallback, value);
CCtxParams->enableMatchFinderFallback = value;
return (size_t)CCtxParams->enableMatchFinderFallback;
case ZSTD_c_maxBlockSize:
if (value!=0) /* 0 ==> default */
BOUNDCHECK(ZSTD_c_maxBlockSize, value);
assert(value>=0);
CCtxParams->maxBlockSize = (size_t)value;
return CCtxParams->maxBlockSize;
case ZSTD_c_repcodeResolution:
BOUNDCHECK(ZSTD_c_repcodeResolution, value);
CCtxParams->searchForExternalRepcodes = (ZSTD_ParamSwitch_e)value;
return CCtxParams->searchForExternalRepcodes;
default: RETURN_ERROR(parameter_unsupported, "unknown parameter");
}
}
size_t ZSTD_CCtx_getParameter(ZSTD_CCtx const* cctx, ZSTD_cParameter param, int* value)
{
return ZSTD_CCtxParams_getParameter(&cctx->requestedParams, param, value);
}
size_t ZSTD_CCtxParams_getParameter(
ZSTD_CCtx_params const* CCtxParams, ZSTD_cParameter param, int* value)
{
switch(param)
{
case ZSTD_c_format :
*value = (int)CCtxParams->format;
break;
case ZSTD_c_compressionLevel :
*value = CCtxParams->compressionLevel;
break;
case ZSTD_c_windowLog :
*value = (int)CCtxParams->cParams.windowLog;
break;
case ZSTD_c_hashLog :
*value = (int)CCtxParams->cParams.hashLog;
break;
case ZSTD_c_chainLog :
*value = (int)CCtxParams->cParams.chainLog;
break;
case ZSTD_c_searchLog :
*value = (int)CCtxParams->cParams.searchLog;
break;
case ZSTD_c_minMatch :
*value = (int)CCtxParams->cParams.minMatch;
break;
case ZSTD_c_targetLength :
*value = (int)CCtxParams->cParams.targetLength;
break;
case ZSTD_c_strategy :
*value = (int)CCtxParams->cParams.strategy;
break;
case ZSTD_c_contentSizeFlag :
*value = CCtxParams->fParams.contentSizeFlag;
break;
case ZSTD_c_checksumFlag :
*value = CCtxParams->fParams.checksumFlag;
break;
case ZSTD_c_dictIDFlag :
*value = !CCtxParams->fParams.noDictIDFlag;
break;
case ZSTD_c_forceMaxWindow :
*value = CCtxParams->forceWindow;
break;
case ZSTD_c_forceAttachDict :
*value = (int)CCtxParams->attachDictPref;
break;
case ZSTD_c_literalCompressionMode :
*value = (int)CCtxParams->literalCompressionMode;
break;
case ZSTD_c_nbWorkers :
#ifndef ZSTD_MULTITHREAD
assert(CCtxParams->nbWorkers == 0);
#endif
*value = CCtxParams->nbWorkers;
break;
case ZSTD_c_jobSize :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR(parameter_unsupported, "not compiled with multithreading");
#else
assert(CCtxParams->jobSize <= INT_MAX);
*value = (int)CCtxParams->jobSize;
break;
#endif
case ZSTD_c_overlapLog :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR(parameter_unsupported, "not compiled with multithreading");
#else
*value = CCtxParams->overlapLog;
break;
#endif
case ZSTD_c_rsyncable :
#ifndef ZSTD_MULTITHREAD
RETURN_ERROR(parameter_unsupported, "not compiled with multithreading");
#else
*value = CCtxParams->rsyncable;
break;
#endif
case ZSTD_c_enableDedicatedDictSearch :
*value = CCtxParams->enableDedicatedDictSearch;
break;
case ZSTD_c_enableLongDistanceMatching :
*value = (int)CCtxParams->ldmParams.enableLdm;
break;
case ZSTD_c_ldmHashLog :
*value = (int)CCtxParams->ldmParams.hashLog;
break;
case ZSTD_c_ldmMinMatch :
*value = (int)CCtxParams->ldmParams.minMatchLength;
break;
case ZSTD_c_ldmBucketSizeLog :
*value = (int)CCtxParams->ldmParams.bucketSizeLog;
break;
case ZSTD_c_ldmHashRateLog :
*value = (int)CCtxParams->ldmParams.hashRateLog;
break;
case ZSTD_c_targetCBlockSize :
*value = (int)CCtxParams->targetCBlockSize;
break;
case ZSTD_c_srcSizeHint :
*value = (int)CCtxParams->srcSizeHint;
break;
case ZSTD_c_stableInBuffer :
*value = (int)CCtxParams->inBufferMode;
break;
case ZSTD_c_stableOutBuffer :
*value = (int)CCtxParams->outBufferMode;
break;
case ZSTD_c_blockDelimiters :
*value = (int)CCtxParams->blockDelimiters;
break;
case ZSTD_c_validateSequences :
*value = (int)CCtxParams->validateSequences;
break;
case ZSTD_c_splitAfterSequences :
*value = (int)CCtxParams->postBlockSplitter;
break;
case ZSTD_c_blockSplitterLevel :
*value = CCtxParams->preBlockSplitter_level;
break;
case ZSTD_c_useRowMatchFinder :
*value = (int)CCtxParams->useRowMatchFinder;
break;
case ZSTD_c_deterministicRefPrefix:
*value = (int)CCtxParams->deterministicRefPrefix;
break;
case ZSTD_c_prefetchCDictTables:
*value = (int)CCtxParams->prefetchCDictTables;
break;
case ZSTD_c_enableSeqProducerFallback:
*value = CCtxParams->enableMatchFinderFallback;
break;
case ZSTD_c_maxBlockSize:
*value = (int)CCtxParams->maxBlockSize;
break;
case ZSTD_c_repcodeResolution:
*value = (int)CCtxParams->searchForExternalRepcodes;
break;
default: RETURN_ERROR(parameter_unsupported, "unknown parameter");
}
return 0;
}
/** ZSTD_CCtx_setParametersUsingCCtxParams() :
* just copies `params` into `cctx` ;
* no compression action is performed, the parameters are merely stored.
* If ZSTDMT is enabled, parameters are pushed to cctx->mtctx.
* This is possible even if a compression is ongoing.
* In which case, new parameters will be applied on the fly, starting with next compression job.
*/
size_t ZSTD_CCtx_setParametersUsingCCtxParams(
ZSTD_CCtx* cctx, const ZSTD_CCtx_params* params)
{
DEBUGLOG(4, "ZSTD_CCtx_setParametersUsingCCtxParams");
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"The context is in the wrong stage!");
RETURN_ERROR_IF(cctx->cdict, stage_wrong,
"Can't override parameters with cdict attached (some must "
"be inherited from the cdict).");
cctx->requestedParams = *params;
return 0;
}
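/* Example usage (illustrative sketch, assuming an existing ZSTD_CCtx* cctx in init stage) :
* a ZSTD_CCtx_params object can be prepared once and then applied to a cctx in a
* single call, which is convenient when the same configuration is reused across
* many contexts.
*
*   ZSTD_CCtx_params* const cctxParams = ZSTD_createCCtxParams();
*   ZSTD_CCtxParams_setParameter(cctxParams, ZSTD_c_compressionLevel, 6);
*   ZSTD_CCtxParams_setParameter(cctxParams, ZSTD_c_checksumFlag, 1);
*   {   int level;
*       ZSTD_CCtxParams_getParameter(cctxParams, ZSTD_c_compressionLevel, &level);
*       assert(level == 6);
*   }
*   ZSTD_CCtx_setParametersUsingCCtxParams(cctx, cctxParams);   (cctx must be in init stage)
*   ZSTD_freeCCtxParams(cctxParams);
*/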
size_t ZSTD_CCtx_setCParams(ZSTD_CCtx* cctx, ZSTD_compressionParameters cparams)
{
ZSTD_STATIC_ASSERT(sizeof(cparams) == 7 * 4 /* all params are listed below */);
DEBUGLOG(4, "ZSTD_CCtx_setCParams");
/* only update if all parameters are valid */
FORWARD_IF_ERROR(ZSTD_checkCParams(cparams), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, (int)cparams.windowLog), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_chainLog, (int)cparams.chainLog), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_hashLog, (int)cparams.hashLog), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_searchLog, (int)cparams.searchLog), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_minMatch, (int)cparams.minMatch), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_targetLength, (int)cparams.targetLength), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_strategy, (int)cparams.strategy), "");
return 0;
}
size_t ZSTD_CCtx_setFParams(ZSTD_CCtx* cctx, ZSTD_frameParameters fparams)
{
ZSTD_STATIC_ASSERT(sizeof(fparams) == 3 * 4 /* all params are listed below */);
DEBUGLOG(4, "ZSTD_CCtx_setFParams");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_contentSizeFlag, fparams.contentSizeFlag != 0), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, fparams.checksumFlag != 0), "");
FORWARD_IF_ERROR(ZSTD_CCtx_setParameter(cctx, ZSTD_c_dictIDFlag, fparams.noDictIDFlag == 0), "");
return 0;
}
size_t ZSTD_CCtx_setParams(ZSTD_CCtx* cctx, ZSTD_parameters params)
{
DEBUGLOG(4, "ZSTD_CCtx_setParams");
/* First check cParams, because we want to update all or none. */
FORWARD_IF_ERROR(ZSTD_checkCParams(params.cParams), "");
/* Next set fParams, because this could fail if the cctx isn't in init stage. */
FORWARD_IF_ERROR(ZSTD_CCtx_setFParams(cctx, params.fParams), "");
/* Finally set cParams, which should succeed. */
FORWARD_IF_ERROR(ZSTD_CCtx_setCParams(cctx, params.cParams), "");
return 0;
}
size_t ZSTD_CCtx_setPledgedSrcSize(ZSTD_CCtx* cctx, unsigned long long pledgedSrcSize)
{
DEBUGLOG(4, "ZSTD_CCtx_setPledgedSrcSize to %llu bytes", pledgedSrcSize);
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"Can't set pledgedSrcSize when not in init stage.");
cctx->pledgedSrcSizePlusOne = pledgedSrcSize+1;
return 0;
}
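/* Example usage (illustrative sketch, assuming a streaming compression loop elsewhere) :
* announcing the total source size before a streaming compression lets the frame
* header carry the content size and lets parameter adaptation pick smaller tables
* for small inputs; the pledged size must then match the actual number of bytes fed.
*
*   ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
*   ZSTD_CCtx_setPledgedSrcSize(cctx, (unsigned long long)totalSrcSize);
*   .. then feed exactly totalSrcSize bytes via ZSTD_compressStream2() ..
*/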
static ZSTD_compressionParameters ZSTD_dedicatedDictSearch_getCParams(
int const compressionLevel,
size_t const dictSize);
static int ZSTD_dedicatedDictSearch_isSupported(
const ZSTD_compressionParameters* cParams);
static void ZSTD_dedicatedDictSearch_revertCParams(
ZSTD_compressionParameters* cParams);
/**
* Initializes the local dictionary using requested parameters.
* NOTE: Initialization does not employ the pledged src size,
* because the dictionary may be used for multiple compressions.
*/
static size_t ZSTD_initLocalDict(ZSTD_CCtx* cctx)
{
ZSTD_localDict* const dl = &cctx->localDict;
if (dl->dict == NULL) {
/* No local dictionary. */
assert(dl->dictBuffer == NULL);
assert(dl->cdict == NULL);
assert(dl->dictSize == 0);
return 0;
}
if (dl->cdict != NULL) {
/* Local dictionary already initialized. */
assert(cctx->cdict == dl->cdict);
return 0;
}
assert(dl->dictSize > 0);
assert(cctx->cdict == NULL);
assert(cctx->prefixDict.dict == NULL);
dl->cdict = ZSTD_createCDict_advanced2(
dl->dict,
dl->dictSize,
ZSTD_dlm_byRef,
dl->dictContentType,
&cctx->requestedParams,
cctx->customMem);
RETURN_ERROR_IF(!dl->cdict, memory_allocation, "ZSTD_createCDict_advanced failed");
cctx->cdict = dl->cdict;
return 0;
}
size_t ZSTD_CCtx_loadDictionary_advanced(
ZSTD_CCtx* cctx,
const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType)
{
DEBUGLOG(4, "ZSTD_CCtx_loadDictionary_advanced (size: %u)", (U32)dictSize);
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"Can't load a dictionary when cctx is not in init stage.");
ZSTD_clearAllDicts(cctx); /* erase any previously set dictionary */
if (dict == NULL || dictSize == 0) /* no dictionary */
return 0;
if (dictLoadMethod == ZSTD_dlm_byRef) {
cctx->localDict.dict = dict;
} else {
/* copy dictionary content inside CCtx to own its lifetime */
void* dictBuffer;
RETURN_ERROR_IF(cctx->staticSize, memory_allocation,
"static CCtx can't allocate for an internal copy of dictionary");
dictBuffer = ZSTD_customMalloc(dictSize, cctx->customMem);
RETURN_ERROR_IF(dictBuffer==NULL, memory_allocation,
"allocation failed for dictionary content");
ZSTD_memcpy(dictBuffer, dict, dictSize);
cctx->localDict.dictBuffer = dictBuffer; /* owned ptr to free */
cctx->localDict.dict = dictBuffer; /* read-only reference */
}
cctx->localDict.dictSize = dictSize;
cctx->localDict.dictContentType = dictContentType;
return 0;
}
size_t ZSTD_CCtx_loadDictionary_byReference(
ZSTD_CCtx* cctx, const void* dict, size_t dictSize)
{
return ZSTD_CCtx_loadDictionary_advanced(
cctx, dict, dictSize, ZSTD_dlm_byRef, ZSTD_dct_auto);
}
size_t ZSTD_CCtx_loadDictionary(ZSTD_CCtx* cctx, const void* dict, size_t dictSize)
{
return ZSTD_CCtx_loadDictionary_advanced(
cctx, dict, dictSize, ZSTD_dlm_byCopy, ZSTD_dct_auto);
}
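/* Example usage (illustrative sketch, assuming dictBuffer/dictSize and dst/src buffers) :
* with ZSTD_CCtx_loadDictionary() the dictionary content is copied into the cctx
* (ZSTD_dlm_byCopy), so the caller's buffer may be freed right after the call; the
* dictionary then applies to all subsequent frames until it is reset or replaced.
*
*   size_t const r = ZSTD_CCtx_loadDictionary(cctx, dictBuffer, dictSize);
*   if (ZSTD_isError(r)) { .. handle error .. }
*   cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
*/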
size_t ZSTD_CCtx_refCDict(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict)
{
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"Can't ref a dict when ctx not in init stage.");
/* Free the existing local cdict (if any) to save memory. */
ZSTD_clearAllDicts(cctx);
cctx->cdict = cdict;
return 0;
}
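/* Example usage (illustrative sketch, assuming dictBuffer/dictSize and dst/src buffers) :
* a pre-digested ZSTD_CDict avoids re-loading the dictionary for every context;
* the cdict must stay alive for as long as the cctx references it.
*
*   ZSTD_CDict* const cdict = ZSTD_createCDict(dictBuffer, dictSize, compressionLevel);
*   ZSTD_CCtx_refCDict(cctx, cdict);
*   cSize1 = ZSTD_compress2(cctx, dst, dstCapacity, src1, src1Size);
*   cSize2 = ZSTD_compress2(cctx, dst, dstCapacity, src2, src2Size);   (cdict still referenced)
*   ZSTD_CCtx_refCDict(cctx, NULL);   (drop the reference)
*   ZSTD_freeCDict(cdict);
*/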
size_t ZSTD_CCtx_refThreadPool(ZSTD_CCtx* cctx, ZSTD_threadPool* pool)
{
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"Can't ref a pool when ctx not in init stage.");
cctx->pool = pool;
return 0;
}
size_t ZSTD_CCtx_refPrefix(ZSTD_CCtx* cctx, const void* prefix, size_t prefixSize)
{
return ZSTD_CCtx_refPrefix_advanced(cctx, prefix, prefixSize, ZSTD_dct_rawContent);
}
size_t ZSTD_CCtx_refPrefix_advanced(
ZSTD_CCtx* cctx, const void* prefix, size_t prefixSize, ZSTD_dictContentType_e dictContentType)
{
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"Can't ref a prefix when ctx not in init stage.");
ZSTD_clearAllDicts(cctx);
if (prefix != NULL && prefixSize > 0) {
cctx->prefixDict.dict = prefix;
cctx->prefixDict.dictSize = prefixSize;
cctx->prefixDict.dictContentType = dictContentType;
}
return 0;
}
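/* Example usage (illustrative sketch, assuming prefixBuffer/prefixSize stay valid
* during compression) :
* a prefix acts as a single-use, by-reference dictionary : it is forgotten after
* the next frame, so it must be referenced again before each frame that needs it.
*
*   ZSTD_CCtx_refPrefix(cctx, prefixBuffer, prefixSize);
*   cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
*   .. next frame : call ZSTD_CCtx_refPrefix() again if still wanted ..
*/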
/*! ZSTD_CCtx_reset() :
* Also dumps dictionary */
size_t ZSTD_CCtx_reset(ZSTD_CCtx* cctx, ZSTD_ResetDirective reset)
{
if ( (reset == ZSTD_reset_session_only)
|| (reset == ZSTD_reset_session_and_parameters) ) {
cctx->streamStage = zcss_init;
cctx->pledgedSrcSizePlusOne = 0;
}
if ( (reset == ZSTD_reset_parameters)
|| (reset == ZSTD_reset_session_and_parameters) ) {
RETURN_ERROR_IF(cctx->streamStage != zcss_init, stage_wrong,
"Reset parameters is only possible during init stage.");
ZSTD_clearAllDicts(cctx);
return ZSTD_CCtxParams_reset(&cctx->requestedParams);
}
return 0;
}
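/* Example usage (illustrative sketch) :
* ZSTD_reset_session_only returns the context to the init stage while keeping
* parameters and dictionary; ZSTD_reset_session_and_parameters additionally
* returns all parameters to their defaults and drops any dictionary.
*
*   ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);             (abort current frame, keep settings)
*   ZSTD_CCtx_reset(cctx, ZSTD_reset_session_and_parameters);   (return to a pristine state)
*/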
/** ZSTD_checkCParams() :
check that CParam values remain within the authorized range.
@return : 0, or an error code if one value is beyond authorized range */
size_t ZSTD_checkCParams(ZSTD_compressionParameters cParams)
{
BOUNDCHECK(ZSTD_c_windowLog, (int)cParams.windowLog);
BOUNDCHECK(ZSTD_c_chainLog, (int)cParams.chainLog);
BOUNDCHECK(ZSTD_c_hashLog, (int)cParams.hashLog);
BOUNDCHECK(ZSTD_c_searchLog, (int)cParams.searchLog);
BOUNDCHECK(ZSTD_c_minMatch, (int)cParams.minMatch);
BOUNDCHECK(ZSTD_c_targetLength,(int)cParams.targetLength);
BOUNDCHECK(ZSTD_c_strategy, (int)cParams.strategy);
return 0;
}
/** ZSTD_clampCParams() :
* clamp CParam values into the valid range.
* @return : valid CParams */
static ZSTD_compressionParameters
ZSTD_clampCParams(ZSTD_compressionParameters cParams)
{
# define CLAMP_TYPE(cParam, val, type) \
do { \
ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam); \
if ((int)val<bounds.lowerBound) val=(type)bounds.lowerBound; \
else if ((int)val>bounds.upperBound) val=(type)bounds.upperBound; \
} while (0)
# define CLAMP(cParam, val) CLAMP_TYPE(cParam, val, unsigned)
CLAMP(ZSTD_c_windowLog, cParams.windowLog);
CLAMP(ZSTD_c_chainLog, cParams.chainLog);
CLAMP(ZSTD_c_hashLog, cParams.hashLog);
CLAMP(ZSTD_c_searchLog, cParams.searchLog);
CLAMP(ZSTD_c_minMatch, cParams.minMatch);
CLAMP(ZSTD_c_targetLength,cParams.targetLength);
CLAMP_TYPE(ZSTD_c_strategy,cParams.strategy, ZSTD_strategy);
return cParams;
}
/** ZSTD_cycleLog() :
* condition for correct operation : hashLog > 1 */
U32 ZSTD_cycleLog(U32 hashLog, ZSTD_strategy strat)
{
U32 const btScale = ((U32)strat >= (U32)ZSTD_btlazy2);
return hashLog - btScale;
}
/** ZSTD_dictAndWindowLog() :
* Returns an adjusted window log that is large enough to fit the source and the dictionary.
* The zstd format says that the entire dictionary is valid if one byte of the dictionary
* is within the window. So the hashLog and chainLog must be large enough to reference both
* the dictionary and the window, and this adjusted dictAndWindowLog must be used when downsizing
* the hashLog and chainLog.
* NOTE: srcSize must not be ZSTD_CONTENTSIZE_UNKNOWN.
*/
static U32 ZSTD_dictAndWindowLog(U32 windowLog, U64 srcSize, U64 dictSize)
{
const U64 maxWindowSize = 1ULL << ZSTD_WINDOWLOG_MAX;
/* No dictionary ==> No change */
if (dictSize == 0) {
return windowLog;
}
assert(windowLog <= ZSTD_WINDOWLOG_MAX);
assert(srcSize != ZSTD_CONTENTSIZE_UNKNOWN); /* Handled in ZSTD_adjustCParams_internal() */
{
U64 const windowSize = 1ULL << windowLog;
U64 const dictAndWindowSize = dictSize + windowSize;
/* If the window size is already large enough to fit both the source and the dictionary
* then just use the window size. Otherwise adjust so that it fits the dictionary and
* the window.
*/
if (windowSize >= dictSize + srcSize) {
return windowLog; /* Window size large enough already */
} else if (dictAndWindowSize >= maxWindowSize) {
return ZSTD_WINDOWLOG_MAX; /* Larger than max window log */
} else {
return ZSTD_highbit32((U32)dictAndWindowSize - 1) + 1;
}
}
}
/** ZSTD_adjustCParams_internal() :
* optimize `cPar` for a specified input (`srcSize` and `dictSize`).
* mostly downsize to reduce memory consumption and initialization latency.
* `srcSize` can be ZSTD_CONTENTSIZE_UNKNOWN when not known.
* `mode` is the mode for parameter adjustment. See docs for `ZSTD_CParamMode_e`.
* note : `srcSize==0` means 0!
* condition : cPar is presumed validated (can be checked using ZSTD_checkCParams()). */
static ZSTD_compressionParameters
ZSTD_adjustCParams_internal(ZSTD_compressionParameters cPar,
unsigned long long srcSize,
size_t dictSize,
ZSTD_CParamMode_e mode,
ZSTD_ParamSwitch_e useRowMatchFinder)
{
const U64 minSrcSize = 513; /* (1<<9) + 1 */
const U64 maxWindowResize = 1ULL << (ZSTD_WINDOWLOG_MAX-1);
assert(ZSTD_checkCParams(cPar)==0);
/* Cascade the selected strategy down to the next-highest one built into
* this binary. */
#ifdef ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_btultra2) {
cPar.strategy = ZSTD_btultra;
}
if (cPar.strategy == ZSTD_btultra) {
cPar.strategy = ZSTD_btopt;
}
#endif
#ifdef ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_btopt) {
cPar.strategy = ZSTD_btlazy2;
}
#endif
#ifdef ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_btlazy2) {
cPar.strategy = ZSTD_lazy2;
}
#endif
#ifdef ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_lazy2) {
cPar.strategy = ZSTD_lazy;
}
#endif
#ifdef ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_lazy) {
cPar.strategy = ZSTD_greedy;
}
#endif
#ifdef ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_greedy) {
cPar.strategy = ZSTD_dfast;
}
#endif
#ifdef ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR
if (cPar.strategy == ZSTD_dfast) {
cPar.strategy = ZSTD_fast;
cPar.targetLength = 0;
}
#endif
switch (mode) {
case ZSTD_cpm_unknown:
case ZSTD_cpm_noAttachDict:
/* If we don't know the source size, don't make any
* assumptions about it. We will already have selected
* smaller parameters if a dictionary is in use.
*/
break;
case ZSTD_cpm_createCDict:
/* Assume a small source size when creating a dictionary
* with an unknown source size.
*/
if (dictSize && srcSize == ZSTD_CONTENTSIZE_UNKNOWN)
srcSize = minSrcSize;
break;
case ZSTD_cpm_attachDict:
/* Dictionary has its own dedicated parameters which have
* already been selected. We are selecting parameters
* for only the source.
*/
dictSize = 0;
break;
default:
assert(0);
break;
}
/* resize windowLog if input is small enough, to use less memory */
if ( (srcSize <= maxWindowResize)
&& (dictSize <= maxWindowResize) ) {
U32 const tSize = (U32)(srcSize + dictSize);
static U32 const hashSizeMin = 1 << ZSTD_HASHLOG_MIN;
U32 const srcLog = (tSize < hashSizeMin) ? ZSTD_HASHLOG_MIN :
ZSTD_highbit32(tSize-1) + 1;
if (cPar.windowLog > srcLog) cPar.windowLog = srcLog;
}
if (srcSize != ZSTD_CONTENTSIZE_UNKNOWN) {
U32 const dictAndWindowLog = ZSTD_dictAndWindowLog(cPar.windowLog, (U64)srcSize, (U64)dictSize);
U32 const cycleLog = ZSTD_cycleLog(cPar.chainLog, cPar.strategy);
if (cPar.hashLog > dictAndWindowLog+1) cPar.hashLog = dictAndWindowLog+1;
if (cycleLog > dictAndWindowLog)
cPar.chainLog -= (cycleLog - dictAndWindowLog);
}
if (cPar.windowLog < ZSTD_WINDOWLOG_ABSOLUTEMIN)
cPar.windowLog = ZSTD_WINDOWLOG_ABSOLUTEMIN; /* minimum wlog required for valid frame header */
/* We can't use more than 32 bits of hash in total, so that means that we require:
* (hashLog + 8) <= 32 && (chainLog + 8) <= 32
*/
if (mode == ZSTD_cpm_createCDict && ZSTD_CDictIndicesAreTagged(&cPar)) {
U32 const maxShortCacheHashLog = 32 - ZSTD_SHORT_CACHE_TAG_BITS;
if (cPar.hashLog > maxShortCacheHashLog) {
cPar.hashLog = maxShortCacheHashLog;
}
if (cPar.chainLog > maxShortCacheHashLog) {
cPar.chainLog = maxShortCacheHashLog;
}
}
/* At this point, we aren't 100% sure if we are using the row match finder.
* Unless it is explicitly disabled, conservatively assume that it is enabled.
* In this case it will only be disabled for small sources, so shrinking the
* hash log a little bit shouldn't result in any ratio loss.
*/
if (useRowMatchFinder == ZSTD_ps_auto)
useRowMatchFinder = ZSTD_ps_enable;
/* We can't hash more than 32-bits in total. So that means that we require:
* (hashLog - rowLog + 8) <= 32
*/
if (ZSTD_rowMatchFinderUsed(cPar.strategy, useRowMatchFinder)) {
/* Switch to 32-entry rows if searchLog is 5 (or more) */
U32 const rowLog = BOUNDED(4, cPar.searchLog, 6);
U32 const maxRowHashLog = 32 - ZSTD_ROW_HASH_TAG_BITS;
U32 const maxHashLog = maxRowHashLog + rowLog;
assert(cPar.hashLog >= rowLog);
if (cPar.hashLog > maxHashLog) {
cPar.hashLog = maxHashLog;
}
}
return cPar;
}
ZSTD_compressionParameters
ZSTD_adjustCParams(ZSTD_compressionParameters cPar,
unsigned long long srcSize,
size_t dictSize)
{
cPar = ZSTD_clampCParams(cPar); /* resulting cPar is necessarily valid (all parameters within range) */
if (srcSize == 0) srcSize = ZSTD_CONTENTSIZE_UNKNOWN;
return ZSTD_adjustCParams_internal(cPar, srcSize, dictSize, ZSTD_cpm_unknown, ZSTD_ps_auto);
}
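/* Example usage (illustrative sketch, assuming a known srcSize and no dictionary) :
* starting from the generic parameters of a compression level, the set can be
* shrunk to fit a known (small) input, reducing memory usage and init latency.
*
*   ZSTD_compressionParameters cParams = ZSTD_getCParams(3, srcSize, 0);
*   cParams = ZSTD_adjustCParams(cParams, srcSize, 0);
*   .. use cParams, e.g. via ZSTD_CCtx_setCParams(cctx, cParams) ..
*/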
static ZSTD_compressionParameters ZSTD_getCParams_internal(int compressionLevel, unsigned long long srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode);
static ZSTD_parameters ZSTD_getParams_internal(int compressionLevel, unsigned long long srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode);
static void ZSTD_overrideCParams(
ZSTD_compressionParameters* cParams,
const ZSTD_compressionParameters* overrides)
{
if (overrides->windowLog) cParams->windowLog = overrides->windowLog;
if (overrides->hashLog) cParams->hashLog = overrides->hashLog;
if (overrides->chainLog) cParams->chainLog = overrides->chainLog;
if (overrides->searchLog) cParams->searchLog = overrides->searchLog;
if (overrides->minMatch) cParams->minMatch = overrides->minMatch;
if (overrides->targetLength) cParams->targetLength = overrides->targetLength;
if (overrides->strategy) cParams->strategy = overrides->strategy;
}
ZSTD_compressionParameters ZSTD_getCParamsFromCCtxParams(
const ZSTD_CCtx_params* CCtxParams, U64 srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode)
{
ZSTD_compressionParameters cParams;
if (srcSizeHint == ZSTD_CONTENTSIZE_UNKNOWN && CCtxParams->srcSizeHint > 0) {
assert(CCtxParams->srcSizeHint>=0);
srcSizeHint = (U64)CCtxParams->srcSizeHint;
}
cParams = ZSTD_getCParams_internal(CCtxParams->compressionLevel, srcSizeHint, dictSize, mode);
if (CCtxParams->ldmParams.enableLdm == ZSTD_ps_enable) cParams.windowLog = ZSTD_LDM_DEFAULT_WINDOW_LOG;
ZSTD_overrideCParams(&cParams, &CCtxParams->cParams);
assert(!ZSTD_checkCParams(cParams));
/* srcSizeHint == 0 means 0 */
return ZSTD_adjustCParams_internal(cParams, srcSizeHint, dictSize, mode, CCtxParams->useRowMatchFinder);
}
static size_t
ZSTD_sizeof_matchState(const ZSTD_compressionParameters* const cParams,
const ZSTD_ParamSwitch_e useRowMatchFinder,
const int enableDedicatedDictSearch,
const U32 forCCtx)
{
/* chain table size should be 0 for fast or row-hash strategies */
size_t const chainSize = ZSTD_allocateChainTable(cParams->strategy, useRowMatchFinder, enableDedicatedDictSearch && !forCCtx)
? ((size_t)1 << cParams->chainLog)
: 0;
size_t const hSize = ((size_t)1) << cParams->hashLog;
U32 const hashLog3 = (forCCtx && cParams->minMatch==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
size_t const h3Size = hashLog3 ? ((size_t)1) << hashLog3 : 0;
/* We don't use ZSTD_cwksp_alloc_size() here because the tables aren't
* surrounded by redzones in ASAN. */
size_t const tableSpace = chainSize * sizeof(U32)
+ hSize * sizeof(U32)
+ h3Size * sizeof(U32);
size_t const optPotentialSpace =
ZSTD_cwksp_aligned64_alloc_size((MaxML+1) * sizeof(U32))
+ ZSTD_cwksp_aligned64_alloc_size((MaxLL+1) * sizeof(U32))
+ ZSTD_cwksp_aligned64_alloc_size((MaxOff+1) * sizeof(U32))
+ ZSTD_cwksp_aligned64_alloc_size((1<<Litbits) * sizeof(U32))
+ ZSTD_cwksp_aligned64_alloc_size(ZSTD_OPT_SIZE * sizeof(ZSTD_match_t))
+ ZSTD_cwksp_aligned64_alloc_size(ZSTD_OPT_SIZE * sizeof(ZSTD_optimal_t));
size_t const lazyAdditionalSpace = ZSTD_rowMatchFinderUsed(cParams->strategy, useRowMatchFinder)
? ZSTD_cwksp_aligned64_alloc_size(hSize)
: 0;
size_t const optSpace = (forCCtx && (cParams->strategy >= ZSTD_btopt))
? optPotentialSpace
: 0;
size_t const slackSpace = ZSTD_cwksp_slack_space_required();
/* tables are guaranteed to be sized in multiples of 64 bytes (or 16 uint32_t) */
ZSTD_STATIC_ASSERT(ZSTD_HASHLOG_MIN >= 4 && ZSTD_WINDOWLOG_MIN >= 4 && ZSTD_CHAINLOG_MIN >= 4);
assert(useRowMatchFinder != ZSTD_ps_auto);
DEBUGLOG(4, "chainSize: %u - hSize: %u - h3Size: %u",
(U32)chainSize, (U32)hSize, (U32)h3Size);
return tableSpace + optSpace + slackSpace + lazyAdditionalSpace;
}
/* Helper function for calculating memory requirements.
* Gives a tighter bound than ZSTD_sequenceBound() by taking minMatch into account. */
static size_t ZSTD_maxNbSeq(size_t blockSize, unsigned minMatch, int useSequenceProducer) {
U32 const divider = (minMatch==3 || useSequenceProducer) ? 3 : 4;
return blockSize / divider;
}
static size_t ZSTD_estimateCCtxSize_usingCCtxParams_internal(
const ZSTD_compressionParameters* cParams,
const ldmParams_t* ldmParams,
const int isStatic,
const ZSTD_ParamSwitch_e useRowMatchFinder,
const size_t buffInSize,
const size_t buffOutSize,
const U64 pledgedSrcSize,
int useSequenceProducer,
size_t maxBlockSize)
{
size_t const windowSize = (size_t) BOUNDED(1ULL, 1ULL << cParams->windowLog, pledgedSrcSize);
size_t const blockSize = MIN(ZSTD_resolveMaxBlockSize(maxBlockSize), windowSize);
size_t const maxNbSeq = ZSTD_maxNbSeq(blockSize, cParams->minMatch, useSequenceProducer);
size_t const tokenSpace = ZSTD_cwksp_alloc_size(WILDCOPY_OVERLENGTH + blockSize)
+ ZSTD_cwksp_aligned64_alloc_size(maxNbSeq * sizeof(SeqDef))
+ 3 * ZSTD_cwksp_alloc_size(maxNbSeq * sizeof(BYTE));
size_t const tmpWorkSpace = ZSTD_cwksp_alloc_size(TMP_WORKSPACE_SIZE);
size_t const blockStateSpace = 2 * ZSTD_cwksp_alloc_size(sizeof(ZSTD_compressedBlockState_t));
size_t const matchStateSize = ZSTD_sizeof_matchState(cParams, useRowMatchFinder, /* enableDedicatedDictSearch */ 0, /* forCCtx */ 1);
size_t const ldmSpace = ZSTD_ldm_getTableSize(*ldmParams);
size_t const maxNbLdmSeq = ZSTD_ldm_getMaxNbSeq(*ldmParams, blockSize);
size_t const ldmSeqSpace = ldmParams->enableLdm == ZSTD_ps_enable ?
ZSTD_cwksp_aligned64_alloc_size(maxNbLdmSeq * sizeof(rawSeq)) : 0;
size_t const bufferSpace = ZSTD_cwksp_alloc_size(buffInSize)
+ ZSTD_cwksp_alloc_size(buffOutSize);
size_t const cctxSpace = isStatic ? ZSTD_cwksp_alloc_size(sizeof(ZSTD_CCtx)) : 0;
size_t const maxNbExternalSeq = ZSTD_sequenceBound(blockSize);
size_t const externalSeqSpace = useSequenceProducer
? ZSTD_cwksp_aligned64_alloc_size(maxNbExternalSeq * sizeof(ZSTD_Sequence))
: 0;
size_t const neededSpace =
cctxSpace +
tmpWorkSpace +
blockStateSpace +
ldmSpace +
ldmSeqSpace +
matchStateSize +
tokenSpace +
bufferSpace +
externalSeqSpace;
DEBUGLOG(5, "estimate workspace : %u", (U32)neededSpace);
return neededSpace;
}
size_t ZSTD_estimateCCtxSize_usingCCtxParams(const ZSTD_CCtx_params* params)
{
ZSTD_compressionParameters const cParams =
ZSTD_getCParamsFromCCtxParams(params, ZSTD_CONTENTSIZE_UNKNOWN, 0, ZSTD_cpm_noAttachDict);
ZSTD_ParamSwitch_e const useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(params->useRowMatchFinder,
&cParams);
RETURN_ERROR_IF(params->nbWorkers > 0, GENERIC, "Estimate CCtx size is supported for single-threaded compression only.");
/* estimateCCtxSize is for one-shot compression. So no buffers should
* be needed. However, we still allocate two 0-sized buffers, which can
* take space under ASAN. */
return ZSTD_estimateCCtxSize_usingCCtxParams_internal(
&cParams, &params->ldmParams, 1, useRowMatchFinder, 0, 0, ZSTD_CONTENTSIZE_UNKNOWN, ZSTD_hasExtSeqProd(params), params->maxBlockSize);
}
size_t ZSTD_estimateCCtxSize_usingCParams(ZSTD_compressionParameters cParams)
{
ZSTD_CCtx_params initialParams = ZSTD_makeCCtxParamsFromCParams(cParams);
if (ZSTD_rowMatchFinderSupported(cParams.strategy)) {
/* Pick bigger of not using and using row-based matchfinder for greedy and lazy strategies */
size_t noRowCCtxSize;
size_t rowCCtxSize;
initialParams.useRowMatchFinder = ZSTD_ps_disable;
noRowCCtxSize = ZSTD_estimateCCtxSize_usingCCtxParams(&initialParams);
initialParams.useRowMatchFinder = ZSTD_ps_enable;
rowCCtxSize = ZSTD_estimateCCtxSize_usingCCtxParams(&initialParams);
return MAX(noRowCCtxSize, rowCCtxSize);
} else {
return ZSTD_estimateCCtxSize_usingCCtxParams(&initialParams);
}
}
static size_t ZSTD_estimateCCtxSize_internal(int compressionLevel)
{
int tier = 0;
size_t largestSize = 0;
static const unsigned long long srcSizeTiers[4] = {16 KB, 128 KB, 256 KB, ZSTD_CONTENTSIZE_UNKNOWN};
for (; tier < 4; ++tier) {
/* Choose the set of cParams for a given level across all srcSizes that give the largest cctxSize */
ZSTD_compressionParameters const cParams = ZSTD_getCParams_internal(compressionLevel, srcSizeTiers[tier], 0, ZSTD_cpm_noAttachDict);
largestSize = MAX(ZSTD_estimateCCtxSize_usingCParams(cParams), largestSize);
}
return largestSize;
}
size_t ZSTD_estimateCCtxSize(int compressionLevel)
{
int level;
size_t memBudget = 0;
for (level=MIN(compressionLevel, 1); level<=compressionLevel; level++) {
/* Ensure monotonically increasing memory usage as compression level increases */
size_t const newMB = ZSTD_estimateCCtxSize_internal(level);
if (newMB > memBudget) memBudget = newMB;
}
return memBudget;
}
size_t ZSTD_estimateCStreamSize_usingCCtxParams(const ZSTD_CCtx_params* params)
{
RETURN_ERROR_IF(params->nbWorkers > 0, GENERIC, "Estimate CCtx size is supported for single-threaded compression only.");
{ ZSTD_compressionParameters const cParams =
ZSTD_getCParamsFromCCtxParams(params, ZSTD_CONTENTSIZE_UNKNOWN, 0, ZSTD_cpm_noAttachDict);
size_t const blockSize = MIN(ZSTD_resolveMaxBlockSize(params->maxBlockSize), (size_t)1 << cParams.windowLog);
size_t const inBuffSize = (params->inBufferMode == ZSTD_bm_buffered)
? ((size_t)1 << cParams.windowLog) + blockSize
: 0;
size_t const outBuffSize = (params->outBufferMode == ZSTD_bm_buffered)
? ZSTD_compressBound(blockSize) + 1
: 0;
ZSTD_ParamSwitch_e const useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(params->useRowMatchFinder, &params->cParams);
return ZSTD_estimateCCtxSize_usingCCtxParams_internal(
&cParams, &params->ldmParams, 1, useRowMatchFinder, inBuffSize, outBuffSize,
ZSTD_CONTENTSIZE_UNKNOWN, ZSTD_hasExtSeqProd(params), params->maxBlockSize);
}
}
size_t ZSTD_estimateCStreamSize_usingCParams(ZSTD_compressionParameters cParams)
{
ZSTD_CCtx_params initialParams = ZSTD_makeCCtxParamsFromCParams(cParams);
if (ZSTD_rowMatchFinderSupported(cParams.strategy)) {
/* Pick bigger of not using and using row-based matchfinder for greedy and lazy strategies */
size_t noRowCCtxSize;
size_t rowCCtxSize;
initialParams.useRowMatchFinder = ZSTD_ps_disable;
noRowCCtxSize = ZSTD_estimateCStreamSize_usingCCtxParams(&initialParams);
initialParams.useRowMatchFinder = ZSTD_ps_enable;
rowCCtxSize = ZSTD_estimateCStreamSize_usingCCtxParams(&initialParams);
return MAX(noRowCCtxSize, rowCCtxSize);
} else {
return ZSTD_estimateCStreamSize_usingCCtxParams(&initialParams);
}
}
static size_t ZSTD_estimateCStreamSize_internal(int compressionLevel)
{
ZSTD_compressionParameters const cParams = ZSTD_getCParams_internal(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, 0, ZSTD_cpm_noAttachDict);
return ZSTD_estimateCStreamSize_usingCParams(cParams);
}
size_t ZSTD_estimateCStreamSize(int compressionLevel)
{
int level;
size_t memBudget = 0;
for (level=MIN(compressionLevel, 1); level<=compressionLevel; level++) {
size_t const newMB = ZSTD_estimateCStreamSize_internal(level);
if (newMB > memBudget) memBudget = newMB;
}
return memBudget;
}
/* ZSTD_getFrameProgression():
* tells how much data has been consumed (input) and produced (output) for current frame.
* able to count progression inside worker threads (non-blocking mode).
*/
ZSTD_frameProgression ZSTD_getFrameProgression(const ZSTD_CCtx* cctx)
{
#ifdef ZSTD_MULTITHREAD
if (cctx->appliedParams.nbWorkers > 0) {
return ZSTDMT_getFrameProgression(cctx->mtctx);
}
#endif
{ ZSTD_frameProgression fp;
size_t const buffered = (cctx->inBuff == NULL) ? 0 :
cctx->inBuffPos - cctx->inToCompress;
if (buffered) assert(cctx->inBuffPos >= cctx->inToCompress);
assert(buffered <= ZSTD_BLOCKSIZE_MAX);
fp.ingested = cctx->consumedSrcSize + buffered;
fp.consumed = cctx->consumedSrcSize;
fp.produced = cctx->producedCSize;
fp.flushed = cctx->producedCSize; /* simplified; some data might still be left within streaming output buffer */
fp.currentJobID = 0;
fp.nbActiveWorkers = 0;
return fp;
} }
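/* Example usage (illustrative sketch) :
* progress can be polled between successive calls to ZSTD_compressStream2(),
* typically to drive a progress indicator when nbWorkers >= 1 and calls may
* return before all input has been consumed.
*
*   ZSTD_frameProgression const fp = ZSTD_getFrameProgression(cctx);
*   printf("ingested %llu, consumed %llu, produced %llu\n",
*          (unsigned long long)fp.ingested,
*          (unsigned long long)fp.consumed,
*          (unsigned long long)fp.produced);
*/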
/*! ZSTD_toFlushNow()
* Only useful for multithreading scenarios currently (nbWorkers >= 1).
*/
size_t ZSTD_toFlushNow(ZSTD_CCtx* cctx)
{
#ifdef ZSTD_MULTITHREAD
if (cctx->appliedParams.nbWorkers > 0) {
return ZSTDMT_toFlushNow(cctx->mtctx);
}
#endif
(void)cctx;
return 0; /* over-simplification; could also check whether the context is currently running in streaming mode, in which case it could report how many bytes are left to be flushed within the output buffer */
}
static void ZSTD_assertEqualCParams(ZSTD_compressionParameters cParams1,
ZSTD_compressionParameters cParams2)
{
(void)cParams1;
(void)cParams2;
assert(cParams1.windowLog == cParams2.windowLog);
assert(cParams1.chainLog == cParams2.chainLog);
assert(cParams1.hashLog == cParams2.hashLog);
assert(cParams1.searchLog == cParams2.searchLog);
assert(cParams1.minMatch == cParams2.minMatch);
assert(cParams1.targetLength == cParams2.targetLength);
assert(cParams1.strategy == cParams2.strategy);
}
void ZSTD_reset_compressedBlockState(ZSTD_compressedBlockState_t* bs)
{
int i;
for (i = 0; i < ZSTD_REP_NUM; ++i)
bs->rep[i] = repStartValue[i];
bs->entropy.huf.repeatMode = HUF_repeat_none;
bs->entropy.fse.offcode_repeatMode = FSE_repeat_none;
bs->entropy.fse.matchlength_repeatMode = FSE_repeat_none;
bs->entropy.fse.litlength_repeatMode = FSE_repeat_none;
}
/*! ZSTD_invalidateMatchState()
* Invalidate all the matches in the match finder tables.
* Requires nextSrc and base to be set (can be NULL).
*/
static void ZSTD_invalidateMatchState(ZSTD_MatchState_t* ms)
{
ZSTD_window_clear(&ms->window);
ms->nextToUpdate = ms->window.dictLimit;
ms->loadedDictEnd = 0;
ms->opt.litLengthSum = 0; /* force reset of btopt stats */
ms->dictMatchState = NULL;
}
/**
* Controls, for this matchState reset, whether the tables need to be cleared /
* prepared for the coming compression (ZSTDcrp_makeClean), or whether the
* tables can be left unclean (ZSTDcrp_leaveDirty), because we know that a
* subsequent operation will overwrite the table space anyways (e.g., copying
* the matchState contents in from a CDict).
*/
typedef enum {
ZSTDcrp_makeClean,
ZSTDcrp_leaveDirty
} ZSTD_compResetPolicy_e;
/**
* Controls, for this matchState reset, whether indexing can continue where it
* left off (ZSTDirp_continue), or whether it needs to be restarted from zero
* (ZSTDirp_reset).
*/
typedef enum {
ZSTDirp_continue,
ZSTDirp_reset
} ZSTD_indexResetPolicy_e;
typedef enum {
ZSTD_resetTarget_CDict,
ZSTD_resetTarget_CCtx
} ZSTD_resetTarget_e;
/* Mixes the bits of a 64-bit value, based on XXH3_rrmxmx */
static U64 ZSTD_bitmix(U64 val, U64 len) {
val ^= ZSTD_rotateRight_U64(val, 49) ^ ZSTD_rotateRight_U64(val, 24);
val *= 0x9FB21C651E98DF25ULL;
val ^= (val >> 35) + len ;
val *= 0x9FB21C651E98DF25ULL;
return val ^ (val >> 28);
}
/* Mixes in the hashSalt and hashSaltEntropy to create a new hashSalt */
static void ZSTD_advanceHashSalt(ZSTD_MatchState_t* ms) {
ms->hashSalt = ZSTD_bitmix(ms->hashSalt, 8) ^ ZSTD_bitmix((U64) ms->hashSaltEntropy, 4);
}
static size_t
ZSTD_reset_matchState(ZSTD_MatchState_t* ms,
ZSTD_cwksp* ws,
const ZSTD_compressionParameters* cParams,
const ZSTD_ParamSwitch_e useRowMatchFinder,
const ZSTD_compResetPolicy_e crp,
const ZSTD_indexResetPolicy_e forceResetIndex,
const ZSTD_resetTarget_e forWho)
{
/* disable chain table allocation for fast or row-based strategies */
size_t const chainSize = ZSTD_allocateChainTable(cParams->strategy, useRowMatchFinder,
ms->dedicatedDictSearch && (forWho == ZSTD_resetTarget_CDict))
? ((size_t)1 << cParams->chainLog)
: 0;
size_t const hSize = ((size_t)1) << cParams->hashLog;
U32 const hashLog3 = ((forWho == ZSTD_resetTarget_CCtx) && cParams->minMatch==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
size_t const h3Size = hashLog3 ? ((size_t)1) << hashLog3 : 0;
DEBUGLOG(4, "reset indices : %u", forceResetIndex == ZSTDirp_reset);
assert(useRowMatchFinder != ZSTD_ps_auto);
if (forceResetIndex == ZSTDirp_reset) {
ZSTD_window_init(&ms->window);
ZSTD_cwksp_mark_tables_dirty(ws);
}
ms->hashLog3 = hashLog3;
ms->lazySkipping = 0;
ZSTD_invalidateMatchState(ms);
assert(!ZSTD_cwksp_reserve_failed(ws)); /* check that allocation hasn't already failed */
ZSTD_cwksp_clear_tables(ws);
DEBUGLOG(5, "reserving table space");
/* table Space */
ms->hashTable = (U32*)ZSTD_cwksp_reserve_table(ws, hSize * sizeof(U32));
ms->chainTable = (U32*)ZSTD_cwksp_reserve_table(ws, chainSize * sizeof(U32));
ms->hashTable3 = (U32*)ZSTD_cwksp_reserve_table(ws, h3Size * sizeof(U32));
RETURN_ERROR_IF(ZSTD_cwksp_reserve_failed(ws), memory_allocation,
"failed a workspace allocation in ZSTD_reset_matchState");
DEBUGLOG(4, "reset table : %u", crp!=ZSTDcrp_leaveDirty);
if (crp!=ZSTDcrp_leaveDirty) {
/* reset tables only */
ZSTD_cwksp_clean_tables(ws);
}
if (ZSTD_rowMatchFinderUsed(cParams->strategy, useRowMatchFinder)) {
/* Row match finder needs an additional table of hashes ("tags") */
size_t const tagTableSize = hSize;
/* We want to generate a new salt in case we reset a Cctx, but we always want to use
* 0 when we reset a Cdict */
if(forWho == ZSTD_resetTarget_CCtx) {
ms->tagTable = (BYTE*) ZSTD_cwksp_reserve_aligned_init_once(ws, tagTableSize);
ZSTD_advanceHashSalt(ms);
} else {
/* When we are not salting we want to always memset the memory */
ms->tagTable = (BYTE*) ZSTD_cwksp_reserve_aligned64(ws, tagTableSize);
ZSTD_memset(ms->tagTable, 0, tagTableSize);
ms->hashSalt = 0;
}
{ /* Switch to 32-entry rows if searchLog is 5 (or more) */
U32 const rowLog = BOUNDED(4, cParams->searchLog, 6);
assert(cParams->hashLog >= rowLog);
ms->rowHashLog = cParams->hashLog - rowLog;
}
}
/* opt parser space */
if ((forWho == ZSTD_resetTarget_CCtx) && (cParams->strategy >= ZSTD_btopt)) {
DEBUGLOG(4, "reserving optimal parser space");
ms->opt.litFreq = (unsigned*)ZSTD_cwksp_reserve_aligned64(ws, (1<<Litbits) * sizeof(unsigned));
ms->opt.litLengthFreq = (unsigned*)ZSTD_cwksp_reserve_aligned64(ws, (MaxLL+1) * sizeof(unsigned));
ms->opt.matchLengthFreq = (unsigned*)ZSTD_cwksp_reserve_aligned64(ws, (MaxML+1) * sizeof(unsigned));
ms->opt.offCodeFreq = (unsigned*)ZSTD_cwksp_reserve_aligned64(ws, (MaxOff+1) * sizeof(unsigned));
ms->opt.matchTable = (ZSTD_match_t*)ZSTD_cwksp_reserve_aligned64(ws, ZSTD_OPT_SIZE * sizeof(ZSTD_match_t));
ms->opt.priceTable = (ZSTD_optimal_t*)ZSTD_cwksp_reserve_aligned64(ws, ZSTD_OPT_SIZE * sizeof(ZSTD_optimal_t));
}
ms->cParams = *cParams;
RETURN_ERROR_IF(ZSTD_cwksp_reserve_failed(ws), memory_allocation,
"failed a workspace allocation in ZSTD_reset_matchState");
return 0;
}
/* ZSTD_indexTooCloseToMax() :
* minor optimization : prefer memset() rather than reduceIndex()
* which is measurably slow in some circumstances (reported for Visual Studio).
* Works when re-using a context for a lot of smallish inputs :
* if all inputs are smaller than ZSTD_INDEXOVERFLOW_MARGIN,
* memset() will be triggered before reduceIndex().
*/
#define ZSTD_INDEXOVERFLOW_MARGIN (16 MB)
static int ZSTD_indexTooCloseToMax(ZSTD_window_t w)
{
return (size_t)(w.nextSrc - w.base) > (ZSTD_CURRENT_MAX - ZSTD_INDEXOVERFLOW_MARGIN);
}
/** ZSTD_dictTooBig():
* When dictionaries are larger than ZSTD_CHUNKSIZE_MAX they can't be loaded in
* one go generically. So we ensure that in that case we reset the tables to zero,
* so that we can load as much of the dictionary as possible.
*/
static int ZSTD_dictTooBig(size_t const loadedDictSize)
{
return loadedDictSize > ZSTD_CHUNKSIZE_MAX;
}
/*! ZSTD_resetCCtx_internal() :
* @param loadedDictSize The size of the dictionary to be loaded
* into the context, if any. If no dictionary is used, or the
* dictionary is being attached / copied, then pass 0.
* note : `params` are assumed fully validated at this stage.
*/
static size_t ZSTD_resetCCtx_internal(ZSTD_CCtx* zc,
ZSTD_CCtx_params const* params,
U64 const pledgedSrcSize,
size_t const loadedDictSize,
ZSTD_compResetPolicy_e const crp,
ZSTD_buffered_policy_e const zbuff)
{
ZSTD_cwksp* const ws = &zc->workspace;
DEBUGLOG(4, "ZSTD_resetCCtx_internal: pledgedSrcSize=%u, wlog=%u, useRowMatchFinder=%d useBlockSplitter=%d",
(U32)pledgedSrcSize, params->cParams.windowLog, (int)params->useRowMatchFinder, (int)params->postBlockSplitter);
assert(!ZSTD_isError(ZSTD_checkCParams(params->cParams)));
zc->isFirstBlock = 1;
/* Set applied params early so we can modify them for LDM,
* and point params at the applied params.
*/
zc->appliedParams = *params;
params = &zc->appliedParams;
assert(params->useRowMatchFinder != ZSTD_ps_auto);
assert(params->postBlockSplitter != ZSTD_ps_auto);
assert(params->ldmParams.enableLdm != ZSTD_ps_auto);
assert(params->maxBlockSize != 0);
if (params->ldmParams.enableLdm == ZSTD_ps_enable) {
/* Adjust long distance matching parameters */
ZSTD_ldm_adjustParameters(&zc->appliedParams.ldmParams, &params->cParams);
assert(params->ldmParams.hashLog >= params->ldmParams.bucketSizeLog);
assert(params->ldmParams.hashRateLog < 32);
}
{ size_t const windowSize = MAX(1, (size_t)MIN(((U64)1 << params->cParams.windowLog), pledgedSrcSize));
size_t const blockSize = MIN(params->maxBlockSize, windowSize);
size_t const maxNbSeq = ZSTD_maxNbSeq(blockSize, params->cParams.minMatch, ZSTD_hasExtSeqProd(params));
size_t const buffOutSize = (zbuff == ZSTDb_buffered && params->outBufferMode == ZSTD_bm_buffered)
? ZSTD_compressBound(blockSize) + 1
: 0;
size_t const buffInSize = (zbuff == ZSTDb_buffered && params->inBufferMode == ZSTD_bm_buffered)
? windowSize + blockSize
: 0;
size_t const maxNbLdmSeq = ZSTD_ldm_getMaxNbSeq(params->ldmParams, blockSize);
int const indexTooClose = ZSTD_indexTooCloseToMax(zc->blockState.matchState.window);
int const dictTooBig = ZSTD_dictTooBig(loadedDictSize);
ZSTD_indexResetPolicy_e needsIndexReset =
(indexTooClose || dictTooBig || !zc->initialized) ? ZSTDirp_reset : ZSTDirp_continue;
size_t const neededSpace =
ZSTD_estimateCCtxSize_usingCCtxParams_internal(
&params->cParams, &params->ldmParams, zc->staticSize != 0, params->useRowMatchFinder,
buffInSize, buffOutSize, pledgedSrcSize, ZSTD_hasExtSeqProd(params), params->maxBlockSize);
FORWARD_IF_ERROR(neededSpace, "cctx size estimate failed!");
if (!zc->staticSize) ZSTD_cwksp_bump_oversized_duration(ws, 0);
{ /* Check if workspace is large enough, alloc a new one if needed */
int const workspaceTooSmall = ZSTD_cwksp_sizeof(ws) < neededSpace;
int const workspaceWasteful = ZSTD_cwksp_check_wasteful(ws, neededSpace);
int resizeWorkspace = workspaceTooSmall || workspaceWasteful;
DEBUGLOG(4, "Need %zu B workspace", neededSpace);
DEBUGLOG(4, "windowSize: %zu - blockSize: %zu", windowSize, blockSize);
if (resizeWorkspace) {
DEBUGLOG(4, "Resize workspaceSize from %zuKB to %zuKB",
ZSTD_cwksp_sizeof(ws) >> 10,
neededSpace >> 10);
RETURN_ERROR_IF(zc->staticSize, memory_allocation, "static cctx : no resize");
needsIndexReset = ZSTDirp_reset;
ZSTD_cwksp_free(ws, zc->customMem);
FORWARD_IF_ERROR(ZSTD_cwksp_create(ws, neededSpace, zc->customMem), "");
DEBUGLOG(5, "reserving object space");
/* Statically sized space.
* tmpWorkspace never moves,
* though prev/next block swap places */
assert(ZSTD_cwksp_check_available(ws, 2 * sizeof(ZSTD_compressedBlockState_t)));
zc->blockState.prevCBlock = (ZSTD_compressedBlockState_t*) ZSTD_cwksp_reserve_object(ws, sizeof(ZSTD_compressedBlockState_t));
RETURN_ERROR_IF(zc->blockState.prevCBlock == NULL, memory_allocation, "couldn't allocate prevCBlock");
zc->blockState.nextCBlock = (ZSTD_compressedBlockState_t*) ZSTD_cwksp_reserve_object(ws, sizeof(ZSTD_compressedBlockState_t));
RETURN_ERROR_IF(zc->blockState.nextCBlock == NULL, memory_allocation, "couldn't allocate nextCBlock");
zc->tmpWorkspace = ZSTD_cwksp_reserve_object(ws, TMP_WORKSPACE_SIZE);
RETURN_ERROR_IF(zc->tmpWorkspace == NULL, memory_allocation, "couldn't allocate tmpWorkspace");
zc->tmpWkspSize = TMP_WORKSPACE_SIZE;
} }
ZSTD_cwksp_clear(ws);
/* init params */
zc->blockState.matchState.cParams = params->cParams;
zc->blockState.matchState.prefetchCDictTables = params->prefetchCDictTables == ZSTD_ps_enable;
zc->pledgedSrcSizePlusOne = pledgedSrcSize+1;
zc->consumedSrcSize = 0;
zc->producedCSize = 0;
if (pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN)
zc->appliedParams.fParams.contentSizeFlag = 0;
DEBUGLOG(4, "pledged content size : %u ; flag : %u",
(unsigned)pledgedSrcSize, zc->appliedParams.fParams.contentSizeFlag);
zc->blockSizeMax = blockSize;
XXH64_reset(&zc->xxhState, 0);
zc->stage = ZSTDcs_init;
zc->dictID = 0;
zc->dictContentSize = 0;
ZSTD_reset_compressedBlockState(zc->blockState.prevCBlock);
FORWARD_IF_ERROR(ZSTD_reset_matchState(
&zc->blockState.matchState,
ws,
&params->cParams,
params->useRowMatchFinder,
crp,
needsIndexReset,
ZSTD_resetTarget_CCtx), "");
zc->seqStore.sequencesStart = (SeqDef*)ZSTD_cwksp_reserve_aligned64(ws, maxNbSeq * sizeof(SeqDef));
/* ldm hash table */
if (params->ldmParams.enableLdm == ZSTD_ps_enable) {
/* TODO: avoid memset? */
size_t const ldmHSize = ((size_t)1) << params->ldmParams.hashLog;
zc->ldmState.hashTable = (ldmEntry_t*)ZSTD_cwksp_reserve_aligned64(ws, ldmHSize * sizeof(ldmEntry_t));
ZSTD_memset(zc->ldmState.hashTable, 0, ldmHSize * sizeof(ldmEntry_t));
zc->ldmSequences = (rawSeq*)ZSTD_cwksp_reserve_aligned64(ws, maxNbLdmSeq * sizeof(rawSeq));
zc->maxNbLdmSequences = maxNbLdmSeq;
ZSTD_window_init(&zc->ldmState.window);
zc->ldmState.loadedDictEnd = 0;
}
/* reserve space for block-level external sequences */
if (ZSTD_hasExtSeqProd(params)) {
size_t const maxNbExternalSeq = ZSTD_sequenceBound(blockSize);
zc->extSeqBufCapacity = maxNbExternalSeq;
zc->extSeqBuf =
(ZSTD_Sequence*)ZSTD_cwksp_reserve_aligned64(ws, maxNbExternalSeq * sizeof(ZSTD_Sequence));
}
/* buffers */
/* ZSTD_wildcopy() is used to copy into the literals buffer,
* so we have to oversize the buffer by WILDCOPY_OVERLENGTH bytes.
*/
zc->seqStore.litStart = ZSTD_cwksp_reserve_buffer(ws, blockSize + WILDCOPY_OVERLENGTH);
zc->seqStore.maxNbLit = blockSize;
zc->bufferedPolicy = zbuff;
zc->inBuffSize = buffInSize;
zc->inBuff = (char*)ZSTD_cwksp_reserve_buffer(ws, buffInSize);
zc->outBuffSize = buffOutSize;
zc->outBuff = (char*)ZSTD_cwksp_reserve_buffer(ws, buffOutSize);
/* ldm bucketOffsets table */
if (params->ldmParams.enableLdm == ZSTD_ps_enable) {
/* TODO: avoid memset? */
size_t const numBuckets =
((size_t)1) << (params->ldmParams.hashLog -
params->ldmParams.bucketSizeLog);
zc->ldmState.bucketOffsets = ZSTD_cwksp_reserve_buffer(ws, numBuckets);
ZSTD_memset(zc->ldmState.bucketOffsets, 0, numBuckets);
}
/* sequences storage */
ZSTD_referenceExternalSequences(zc, NULL, 0);
zc->seqStore.maxNbSeq = maxNbSeq;
zc->seqStore.llCode = ZSTD_cwksp_reserve_buffer(ws, maxNbSeq * sizeof(BYTE));
zc->seqStore.mlCode = ZSTD_cwksp_reserve_buffer(ws, maxNbSeq * sizeof(BYTE));
zc->seqStore.ofCode = ZSTD_cwksp_reserve_buffer(ws, maxNbSeq * sizeof(BYTE));
DEBUGLOG(3, "wksp: finished allocating, %zd bytes remain available", ZSTD_cwksp_available_space(ws));
assert(ZSTD_cwksp_estimated_space_within_bounds(ws, neededSpace));
zc->initialized = 1;
return 0;
}
}
/* ZSTD_invalidateRepCodes() :
* ensures next compression will not use repcodes from previous block.
* Note : only works with regular variant;
* do not use with extDict variant ! */
void ZSTD_invalidateRepCodes(ZSTD_CCtx* cctx) {
int i;
for (i=0; i<ZSTD_REP_NUM; i++) cctx->blockState.prevCBlock->rep[i] = 0;
assert(!ZSTD_window_hasExtDict(cctx->blockState.matchState.window));
}
/* These are the approximate sizes for each strategy past which copying the
* dictionary tables into the working context is faster than using them
* in-place.
*/
static const size_t attachDictSizeCutoffs[ZSTD_STRATEGY_MAX+1] = {
8 KB, /* unused */
8 KB, /* ZSTD_fast */
16 KB, /* ZSTD_dfast */
32 KB, /* ZSTD_greedy */
32 KB, /* ZSTD_lazy */
32 KB, /* ZSTD_lazy2 */
32 KB, /* ZSTD_btlazy2 */
32 KB, /* ZSTD_btopt */
8 KB, /* ZSTD_btultra */
8 KB /* ZSTD_btultra2 */
};
static int ZSTD_shouldAttachDict(const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params,
U64 pledgedSrcSize)
{
size_t cutoff = attachDictSizeCutoffs[cdict->matchState.cParams.strategy];
int const dedicatedDictSearch = cdict->matchState.dedicatedDictSearch;
return dedicatedDictSearch
|| ( ( pledgedSrcSize <= cutoff
|| pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
|| params->attachDictPref == ZSTD_dictForceAttach )
&& params->attachDictPref != ZSTD_dictForceCopy
&& !params->forceWindow ); /* dictMatchState isn't correctly
* handled in _enforceMaxDist */
}
static size_t
ZSTD_resetCCtx_byAttachingCDict(ZSTD_CCtx* cctx,
const ZSTD_CDict* cdict,
ZSTD_CCtx_params params,
U64 pledgedSrcSize,
ZSTD_buffered_policy_e zbuff)
{
DEBUGLOG(4, "ZSTD_resetCCtx_byAttachingCDict() pledgedSrcSize=%llu",
(unsigned long long)pledgedSrcSize);
{
ZSTD_compressionParameters adjusted_cdict_cParams = cdict->matchState.cParams;
unsigned const windowLog = params.cParams.windowLog;
assert(windowLog != 0);
/* Resize working context table params for input only, since the dict
* has its own tables. */
/* pledgedSrcSize == 0 means 0! */
if (cdict->matchState.dedicatedDictSearch) {
ZSTD_dedicatedDictSearch_revertCParams(&adjusted_cdict_cParams);
}
params.cParams = ZSTD_adjustCParams_internal(adjusted_cdict_cParams, pledgedSrcSize,
cdict->dictContentSize, ZSTD_cpm_attachDict,
params.useRowMatchFinder);
params.cParams.windowLog = windowLog;
params.useRowMatchFinder = cdict->useRowMatchFinder; /* cdict overrides */
FORWARD_IF_ERROR(ZSTD_resetCCtx_internal(cctx, &params, pledgedSrcSize,
/* loadedDictSize */ 0,
ZSTDcrp_makeClean, zbuff), "");
assert(cctx->appliedParams.cParams.strategy == adjusted_cdict_cParams.strategy);
}
{ const U32 cdictEnd = (U32)( cdict->matchState.window.nextSrc
- cdict->matchState.window.base);
const U32 cdictLen = cdictEnd - cdict->matchState.window.dictLimit;
if (cdictLen == 0) {
/* don't even attach dictionaries with no contents */
DEBUGLOG(4, "skipping attaching empty dictionary");
} else {
DEBUGLOG(4, "attaching dictionary into context");
cctx->blockState.matchState.dictMatchState = &cdict->matchState;
/* prep working match state so dict matches never have negative indices
* when they are translated to the working context's index space. */
if (cctx->blockState.matchState.window.dictLimit < cdictEnd) {
cctx->blockState.matchState.window.nextSrc =
cctx->blockState.matchState.window.base + cdictEnd;
ZSTD_window_clear(&cctx->blockState.matchState.window);
}
/* loadedDictEnd is expressed within the referential of the active context */
cctx->blockState.matchState.loadedDictEnd = cctx->blockState.matchState.window.dictLimit;
} }
cctx->dictID = cdict->dictID;
cctx->dictContentSize = cdict->dictContentSize;
/* copy block state */
ZSTD_memcpy(cctx->blockState.prevCBlock, &cdict->cBlockState, sizeof(cdict->cBlockState));
return 0;
}
static void ZSTD_copyCDictTableIntoCCtx(U32* dst, U32 const* src, size_t tableSize,
ZSTD_compressionParameters const* cParams) {
if (ZSTD_CDictIndicesAreTagged(cParams)){
/* Remove tags from the CDict table if they are present.
* See docs on "short cache" in zstd_compress_internal.h for context. */
size_t i;
for (i = 0; i < tableSize; i++) {
U32 const taggedIndex = src[i];
U32 const index = taggedIndex >> ZSTD_SHORT_CACHE_TAG_BITS;
dst[i] = index;
}
} else {
ZSTD_memcpy(dst, src, tableSize * sizeof(U32));
}
}
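/* Sketch of the tagged-index layout handled above (see the "short cache" notes
 * in zstd_compress_internal.h): when the CDict is built with short cache
 * enabled, each hash-table entry packs a small hash tag into its low bits,
 * roughly
 *
 *   tagged = (index << ZSTD_SHORT_CACHE_TAG_BITS) | tag;
 *   index  =  tagged >> ZSTD_SHORT_CACHE_TAG_BITS;   // what the loop above recovers
 *
 * The working context never uses tagged indices, hence the stripping copy. */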
static size_t ZSTD_resetCCtx_byCopyingCDict(ZSTD_CCtx* cctx,
const ZSTD_CDict* cdict,
ZSTD_CCtx_params params,
U64 pledgedSrcSize,
ZSTD_buffered_policy_e zbuff)
{
const ZSTD_compressionParameters *cdict_cParams = &cdict->matchState.cParams;
assert(!cdict->matchState.dedicatedDictSearch);
DEBUGLOG(4, "ZSTD_resetCCtx_byCopyingCDict() pledgedSrcSize=%llu",
(unsigned long long)pledgedSrcSize);
{ unsigned const windowLog = params.cParams.windowLog;
assert(windowLog != 0);
/* Copy only compression parameters related to tables. */
params.cParams = *cdict_cParams;
params.cParams.windowLog = windowLog;
params.useRowMatchFinder = cdict->useRowMatchFinder;
FORWARD_IF_ERROR(ZSTD_resetCCtx_internal(cctx, &params, pledgedSrcSize,
/* loadedDictSize */ 0,
ZSTDcrp_leaveDirty, zbuff), "");
assert(cctx->appliedParams.cParams.strategy == cdict_cParams->strategy);
assert(cctx->appliedParams.cParams.hashLog == cdict_cParams->hashLog);
assert(cctx->appliedParams.cParams.chainLog == cdict_cParams->chainLog);
}
ZSTD_cwksp_mark_tables_dirty(&cctx->workspace);
assert(params.useRowMatchFinder != ZSTD_ps_auto);
/* copy tables */
{ size_t const chainSize = ZSTD_allocateChainTable(cdict_cParams->strategy, cdict->useRowMatchFinder, 0 /* DDS guaranteed disabled */)
? ((size_t)1 << cdict_cParams->chainLog)
: 0;
size_t const hSize = (size_t)1 << cdict_cParams->hashLog;
ZSTD_copyCDictTableIntoCCtx(cctx->blockState.matchState.hashTable,
cdict->matchState.hashTable,
hSize, cdict_cParams);
/* Do not copy cdict's chainTable if cctx has parameters such that it would not use chainTable */
if (ZSTD_allocateChainTable(cctx->appliedParams.cParams.strategy, cctx->appliedParams.useRowMatchFinder, 0 /* forDDSDict */)) {
ZSTD_copyCDictTableIntoCCtx(cctx->blockState.matchState.chainTable,
cdict->matchState.chainTable,
chainSize, cdict_cParams);
}
/* copy tag table */
if (ZSTD_rowMatchFinderUsed(cdict_cParams->strategy, cdict->useRowMatchFinder)) {
size_t const tagTableSize = hSize;
ZSTD_memcpy(cctx->blockState.matchState.tagTable,
cdict->matchState.tagTable,
tagTableSize);
cctx->blockState.matchState.hashSalt = cdict->matchState.hashSalt;
}
}
/* Zero the hashTable3, since the cdict never fills it */
assert(cctx->blockState.matchState.hashLog3 <= 31);
{ U32 const h3log = cctx->blockState.matchState.hashLog3;
size_t const h3Size = h3log ? ((size_t)1 << h3log) : 0;
assert(cdict->matchState.hashLog3 == 0);
ZSTD_memset(cctx->blockState.matchState.hashTable3, 0, h3Size * sizeof(U32));
}
ZSTD_cwksp_mark_tables_clean(&cctx->workspace);
/* copy dictionary offsets */
{ ZSTD_MatchState_t const* srcMatchState = &cdict->matchState;
ZSTD_MatchState_t* dstMatchState = &cctx->blockState.matchState;
dstMatchState->window = srcMatchState->window;
dstMatchState->nextToUpdate = srcMatchState->nextToUpdate;
dstMatchState->loadedDictEnd= srcMatchState->loadedDictEnd;
}
cctx->dictID = cdict->dictID;
cctx->dictContentSize = cdict->dictContentSize;
/* copy block state */
ZSTD_memcpy(cctx->blockState.prevCBlock, &cdict->cBlockState, sizeof(cdict->cBlockState));
return 0;
}
/* We have a choice between copying the dictionary context into the working
* context, or referencing the dictionary context from the working context
* in-place. We decide here which strategy to use. */
static size_t ZSTD_resetCCtx_usingCDict(ZSTD_CCtx* cctx,
const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params,
U64 pledgedSrcSize,
ZSTD_buffered_policy_e zbuff)
{
DEBUGLOG(4, "ZSTD_resetCCtx_usingCDict (pledgedSrcSize=%u)",
(unsigned)pledgedSrcSize);
if (ZSTD_shouldAttachDict(cdict, params, pledgedSrcSize)) {
return ZSTD_resetCCtx_byAttachingCDict(
cctx, cdict, *params, pledgedSrcSize, zbuff);
} else {
return ZSTD_resetCCtx_byCopyingCDict(
cctx, cdict, *params, pledgedSrcSize, zbuff);
}
}
/*! ZSTD_copyCCtx_internal() :
* Duplicate an existing context `srcCCtx` into another one `dstCCtx`.
* Only works during stage ZSTDcs_init (i.e. after creation, but before first call to ZSTD_compressContinue()).
* The "context", in this case, refers to the hash and chain tables,
* entropy tables, and dictionary references.
* `windowLog` value is enforced if != 0, otherwise value is copied from srcCCtx.
* @return : 0, or an error code */
static size_t ZSTD_copyCCtx_internal(ZSTD_CCtx* dstCCtx,
const ZSTD_CCtx* srcCCtx,
ZSTD_frameParameters fParams,
U64 pledgedSrcSize,
ZSTD_buffered_policy_e zbuff)
{
RETURN_ERROR_IF(srcCCtx->stage!=ZSTDcs_init, stage_wrong,
"Can't copy a ctx that's not in init stage.");
DEBUGLOG(5, "ZSTD_copyCCtx_internal");
ZSTD_memcpy(&dstCCtx->customMem, &srcCCtx->customMem, sizeof(ZSTD_customMem));
{ ZSTD_CCtx_params params = dstCCtx->requestedParams;
/* Copy only compression parameters related to tables. */
params.cParams = srcCCtx->appliedParams.cParams;
assert(srcCCtx->appliedParams.useRowMatchFinder != ZSTD_ps_auto);
assert(srcCCtx->appliedParams.postBlockSplitter != ZSTD_ps_auto);
assert(srcCCtx->appliedParams.ldmParams.enableLdm != ZSTD_ps_auto);
params.useRowMatchFinder = srcCCtx->appliedParams.useRowMatchFinder;
params.postBlockSplitter = srcCCtx->appliedParams.postBlockSplitter;
params.ldmParams = srcCCtx->appliedParams.ldmParams;
params.fParams = fParams;
params.maxBlockSize = srcCCtx->appliedParams.maxBlockSize;
ZSTD_resetCCtx_internal(dstCCtx, &params, pledgedSrcSize,
/* loadedDictSize */ 0,
ZSTDcrp_leaveDirty, zbuff);
assert(dstCCtx->appliedParams.cParams.windowLog == srcCCtx->appliedParams.cParams.windowLog);
assert(dstCCtx->appliedParams.cParams.strategy == srcCCtx->appliedParams.cParams.strategy);
assert(dstCCtx->appliedParams.cParams.hashLog == srcCCtx->appliedParams.cParams.hashLog);
assert(dstCCtx->appliedParams.cParams.chainLog == srcCCtx->appliedParams.cParams.chainLog);
assert(dstCCtx->blockState.matchState.hashLog3 == srcCCtx->blockState.matchState.hashLog3);
}
ZSTD_cwksp_mark_tables_dirty(&dstCCtx->workspace);
/* copy tables */
{ size_t const chainSize = ZSTD_allocateChainTable(srcCCtx->appliedParams.cParams.strategy,
srcCCtx->appliedParams.useRowMatchFinder,
0 /* forDDSDict */)
? ((size_t)1 << srcCCtx->appliedParams.cParams.chainLog)
: 0;
size_t const hSize = (size_t)1 << srcCCtx->appliedParams.cParams.hashLog;
U32 const h3log = srcCCtx->blockState.matchState.hashLog3;
size_t const h3Size = h3log ? ((size_t)1 << h3log) : 0;
ZSTD_memcpy(dstCCtx->blockState.matchState.hashTable,
srcCCtx->blockState.matchState.hashTable,
hSize * sizeof(U32));
ZSTD_memcpy(dstCCtx->blockState.matchState.chainTable,
srcCCtx->blockState.matchState.chainTable,
chainSize * sizeof(U32));
ZSTD_memcpy(dstCCtx->blockState.matchState.hashTable3,
srcCCtx->blockState.matchState.hashTable3,
h3Size * sizeof(U32));
}
ZSTD_cwksp_mark_tables_clean(&dstCCtx->workspace);
/* copy dictionary offsets */
{
const ZSTD_MatchState_t* srcMatchState = &srcCCtx->blockState.matchState;
ZSTD_MatchState_t* dstMatchState = &dstCCtx->blockState.matchState;
dstMatchState->window = srcMatchState->window;
dstMatchState->nextToUpdate = srcMatchState->nextToUpdate;
dstMatchState->loadedDictEnd= srcMatchState->loadedDictEnd;
}
dstCCtx->dictID = srcCCtx->dictID;
dstCCtx->dictContentSize = srcCCtx->dictContentSize;
/* copy block state */
ZSTD_memcpy(dstCCtx->blockState.prevCBlock, srcCCtx->blockState.prevCBlock, sizeof(*srcCCtx->blockState.prevCBlock));
return 0;
}
/*! ZSTD_copyCCtx() :
* Duplicate an existing context `srcCCtx` into another one `dstCCtx`.
* Only works during stage ZSTDcs_init (i.e. after creation, but before first call to ZSTD_compressContinue()).
* pledgedSrcSize==0 means "unknown".
* @return : 0, or an error code */
size_t ZSTD_copyCCtx(ZSTD_CCtx* dstCCtx, const ZSTD_CCtx* srcCCtx, unsigned long long pledgedSrcSize)
{
ZSTD_frameParameters fParams = { 1 /*content*/, 0 /*checksum*/, 0 /*noDictID*/ };
ZSTD_buffered_policy_e const zbuff = srcCCtx->bufferedPolicy;
ZSTD_STATIC_ASSERT((U32)ZSTDb_buffered==1);
if (pledgedSrcSize==0) pledgedSrcSize = ZSTD_CONTENTSIZE_UNKNOWN;
fParams.contentSizeFlag = (pledgedSrcSize != ZSTD_CONTENTSIZE_UNKNOWN);
return ZSTD_copyCCtx_internal(dstCCtx, srcCCtx,
fParams, pledgedSrcSize,
zbuff);
}
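/* Illustrative sketch (buffer-less streaming API, experimental; names below are
 * placeholders and error checks are omitted): ZSTD_copyCCtx() is typically used
 * to amortize dictionary loading across many small frames, e.g.
 *
 *   ZSTD_compressBegin_usingDict(srcCCtx, dict, dictSize, cLevel);   // load once
 *   // for each frame :
 *   ZSTD_copyCCtx(dstCCtx, srcCCtx, frameSrcSize);                   // cheap duplicate
 *   ZSTD_compressEnd(dstCCtx, dst, dstCapacity, src, frameSrcSize);
 */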
#define ZSTD_ROWSIZE 16
/*! ZSTD_reduceTable() :
* reduce table indexes by `reducerValue`, or squash to zero.
* PreserveMark preserves "unsorted mark" for btlazy2 strategy.
* It must be set to a clear 0/1 value, to remove branch during inlining.
* Presume table size is a multiple of ZSTD_ROWSIZE
* to help auto-vectorization */
FORCE_INLINE_TEMPLATE void
ZSTD_reduceTable_internal (U32* const table, U32 const size, U32 const reducerValue, int const preserveMark)
{
int const nbRows = (int)size / ZSTD_ROWSIZE;
int cellNb = 0;
int rowNb;
/* Protect special index values < ZSTD_WINDOW_START_INDEX. */
U32 const reducerThreshold = reducerValue + ZSTD_WINDOW_START_INDEX;
assert((size & (ZSTD_ROWSIZE-1)) == 0); /* multiple of ZSTD_ROWSIZE */
assert(size < (1U<<31)); /* can be cast to int */
#if ZSTD_MEMORY_SANITIZER && !defined (ZSTD_MSAN_DONT_POISON_WORKSPACE)
/* To validate that the table reuse logic is sound, and that we don't
* access table space that we haven't cleaned, we re-"poison" the table
* space every time we mark it dirty.
*
* This function however is intended to operate on those dirty tables and
* re-clean them. So when this function is used correctly, we can unpoison
* the memory it operated on. This introduces a blind spot though, since
* if we now try to operate on __actually__ poisoned memory, we will not
* detect that. */
__msan_unpoison(table, size * sizeof(U32));
#endif
for (rowNb=0 ; rowNb < nbRows ; rowNb++) {
int column;
for (column=0; column<ZSTD_ROWSIZE; column++) {
U32 newVal;
if (preserveMark && table[cellNb] == ZSTD_DUBT_UNSORTED_MARK) {
/* This write is pointless, but is required(?) for the compiler
* to auto-vectorize the loop. */
newVal = ZSTD_DUBT_UNSORTED_MARK;
} else if (table[cellNb] < reducerThreshold) {
newVal = 0;
} else {
newVal = table[cellNb] - reducerValue;
}
table[cellNb] = newVal;
cellNb++;
} }
}
static void ZSTD_reduceTable(U32* const table, U32 const size, U32 const reducerValue)
{
ZSTD_reduceTable_internal(table, size, reducerValue, 0);
}
static void ZSTD_reduceTable_btlazy2(U32* const table, U32 const size, U32 const reducerValue)
{
ZSTD_reduceTable_internal(table, size, reducerValue, 1);
}
/*! ZSTD_reduceIndex() :
* rescale all indexes to avoid future overflow (indexes are U32) */
static void ZSTD_reduceIndex (ZSTD_MatchState_t* ms, ZSTD_CCtx_params const* params, const U32 reducerValue)
{
{ U32 const hSize = (U32)1 << params->cParams.hashLog;
ZSTD_reduceTable(ms->hashTable, hSize, reducerValue);
}
if (ZSTD_allocateChainTable(params->cParams.strategy, params->useRowMatchFinder, (U32)ms->dedicatedDictSearch)) {
U32 const chainSize = (U32)1 << params->cParams.chainLog;
if (params->cParams.strategy == ZSTD_btlazy2)
ZSTD_reduceTable_btlazy2(ms->chainTable, chainSize, reducerValue);
else
ZSTD_reduceTable(ms->chainTable, chainSize, reducerValue);
}
if (ms->hashLog3) {
U32 const h3Size = (U32)1 << ms->hashLog3;
ZSTD_reduceTable(ms->hashTable3, h3Size, reducerValue);
}
}
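/* Worked example of the index reduction above: with reducerValue = 0x80000000,
 * a table entry of 0x80001234 becomes 0x1234, while any entry below
 * reducerValue + ZSTD_WINDOW_START_INDEX is squashed to 0; the btlazy2 variant
 * additionally preserves ZSTD_DUBT_UNSORTED_MARK entries untouched. */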
/*-*******************************************************
* Block entropic compression
*********************************************************/
/* See doc/zstd_compression_format.md for detailed format description */
int ZSTD_seqToCodes(const SeqStore_t* seqStorePtr)
{
const SeqDef* const sequences = seqStorePtr->sequencesStart;
BYTE* const llCodeTable = seqStorePtr->llCode;
BYTE* const ofCodeTable = seqStorePtr->ofCode;
BYTE* const mlCodeTable = seqStorePtr->mlCode;
U32 const nbSeq = (U32)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
U32 u;
int longOffsets = 0;
assert(nbSeq <= seqStorePtr->maxNbSeq);
for (u=0; u<nbSeq; u++) {
U32 const llv = sequences[u].litLength;
U32 const ofCode = ZSTD_highbit32(sequences[u].offBase);
U32 const mlv = sequences[u].mlBase;
llCodeTable[u] = (BYTE)ZSTD_LLcode(llv);
ofCodeTable[u] = (BYTE)ofCode;
mlCodeTable[u] = (BYTE)ZSTD_MLcode(mlv);
assert(!(MEM_64bits() && ofCode >= STREAM_ACCUMULATOR_MIN));
if (MEM_32bits() && ofCode >= STREAM_ACCUMULATOR_MIN)
longOffsets = 1;
}
if (seqStorePtr->longLengthType==ZSTD_llt_literalLength)
llCodeTable[seqStorePtr->longLengthPos] = MaxLL;
if (seqStorePtr->longLengthType==ZSTD_llt_matchLength)
mlCodeTable[seqStorePtr->longLengthPos] = MaxML;
return longOffsets;
}
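/* Worked example of the conversion above: a sequence with litLength=5,
 * offBase=0x12345 and mlBase=10 gets llCode=5 (literal lengths 0-15 map to
 * their own code), ofCode=ZSTD_highbit32(0x12345)=16 (which is also the number
 * of extra offset bits to emit), and mlCode=ZSTD_MLcode(10)=10 (small match
 * lengths also map one-to-one). On 32-bit builds, an ofCode reaching
 * STREAM_ACCUMULATOR_MIN switches the encoder to the longOffsets path. */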
/* ZSTD_useTargetCBlockSize():
* Returns whether the target compressed block size parameter is in use.
* If it is, compression makes a best effort to produce compressed blocks of around targetCBlockSize bytes.
* Returns 1 if in use, 0 otherwise. */
static int ZSTD_useTargetCBlockSize(const ZSTD_CCtx_params* cctxParams)
{
DEBUGLOG(5, "ZSTD_useTargetCBlockSize (targetCBlockSize=%zu)", cctxParams->targetCBlockSize);
return (cctxParams->targetCBlockSize != 0);
}
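/* Illustrative sketch (not part of the library source): an application
 * targeting small network packets can request this behavior with
 *
 *   ZSTD_CCtx_setParameter(cctx, ZSTD_c_targetCBlockSize, 1400);
 *
 * after which blocks are emitted as sub-blocks aiming at roughly that
 * compressed size; the default of 0 disables the feature, as tested above. */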
/* ZSTD_blockSplitterEnabled():
* Returns whether the block splitting parameter is enabled.
* If it is, compression makes a best effort to split blocks in order to improve compression ratio.
* At the time this function is called, the parameter must be finalized.
* Returns 1 if enabled, 0 otherwise. */
static int ZSTD_blockSplitterEnabled(ZSTD_CCtx_params* cctxParams)
{
DEBUGLOG(5, "ZSTD_blockSplitterEnabled (postBlockSplitter=%d)", cctxParams->postBlockSplitter);
assert(cctxParams->postBlockSplitter != ZSTD_ps_auto);
return (cctxParams->postBlockSplitter == ZSTD_ps_enable);
}
/* Type returned by ZSTD_buildSequencesStatistics containing finalized symbol encoding types
* and size of the sequences statistics
*/
typedef struct {
U32 LLtype;
U32 Offtype;
U32 MLtype;
size_t size;
size_t lastCountSize; /* Accounts for bug in 1.3.4. More detail in ZSTD_entropyCompressSeqStore_internal() */
int longOffsets;
} ZSTD_symbolEncodingTypeStats_t;
/* ZSTD_buildSequencesStatistics():
* Returns a ZSTD_symbolEncodingTypeStats_t, or a zstd error code in the `size` field.
* Modifies `nextEntropy` to have the appropriate values as a side effect.
* nbSeq must be greater than 0.
*
* entropyWkspSize must be of size at least ENTROPY_WORKSPACE_SIZE - (MaxSeq + 1)*sizeof(U32)
*/
static ZSTD_symbolEncodingTypeStats_t
ZSTD_buildSequencesStatistics(
const SeqStore_t* seqStorePtr, size_t nbSeq,
const ZSTD_fseCTables_t* prevEntropy, ZSTD_fseCTables_t* nextEntropy,
BYTE* dst, const BYTE* const dstEnd,
ZSTD_strategy strategy, unsigned* countWorkspace,
void* entropyWorkspace, size_t entropyWkspSize)
{
BYTE* const ostart = dst;
const BYTE* const oend = dstEnd;
BYTE* op = ostart;
FSE_CTable* CTable_LitLength = nextEntropy->litlengthCTable;
FSE_CTable* CTable_OffsetBits = nextEntropy->offcodeCTable;
FSE_CTable* CTable_MatchLength = nextEntropy->matchlengthCTable;
const BYTE* const ofCodeTable = seqStorePtr->ofCode;
const BYTE* const llCodeTable = seqStorePtr->llCode;
const BYTE* const mlCodeTable = seqStorePtr->mlCode;
ZSTD_symbolEncodingTypeStats_t stats;
stats.lastCountSize = 0;
/* convert length/distances into codes */
stats.longOffsets = ZSTD_seqToCodes(seqStorePtr);
assert(op <= oend);
assert(nbSeq != 0); /* ZSTD_selectEncodingType() divides by nbSeq */
/* build CTable for Literal Lengths */
{ unsigned max = MaxLL;
size_t const mostFrequent = HIST_countFast_wksp(countWorkspace, &max, llCodeTable, nbSeq, entropyWorkspace, entropyWkspSize); /* can't fail */
DEBUGLOG(5, "Building LL table");
nextEntropy->litlength_repeatMode = prevEntropy->litlength_repeatMode;
stats.LLtype = ZSTD_selectEncodingType(&nextEntropy->litlength_repeatMode,
countWorkspace, max, mostFrequent, nbSeq,
LLFSELog, prevEntropy->litlengthCTable,
LL_defaultNorm, LL_defaultNormLog,
ZSTD_defaultAllowed, strategy);
assert(set_basic < set_compressed && set_rle < set_compressed);
assert(!(stats.LLtype < set_compressed && nextEntropy->litlength_repeatMode != FSE_repeat_none)); /* We don't copy tables */
{ size_t const countSize = ZSTD_buildCTable(
op, (size_t)(oend - op),
CTable_LitLength, LLFSELog, (SymbolEncodingType_e)stats.LLtype,
countWorkspace, max, llCodeTable, nbSeq,
LL_defaultNorm, LL_defaultNormLog, MaxLL,
prevEntropy->litlengthCTable,
sizeof(prevEntropy->litlengthCTable),
entropyWorkspace, entropyWkspSize);
if (ZSTD_isError(countSize)) {
DEBUGLOG(3, "ZSTD_buildCTable for LitLens failed");
stats.size = countSize;
return stats;
}
if (stats.LLtype == set_compressed)
stats.lastCountSize = countSize;
op += countSize;
assert(op <= oend);
} }
/* build CTable for Offsets */
{ unsigned max = MaxOff;
size_t const mostFrequent = HIST_countFast_wksp(
countWorkspace, &max, ofCodeTable, nbSeq, entropyWorkspace, entropyWkspSize); /* can't fail */
/* We can only use the basic table if max <= DefaultMaxOff, otherwise the offsets are too large */
ZSTD_DefaultPolicy_e const defaultPolicy = (max <= DefaultMaxOff) ? ZSTD_defaultAllowed : ZSTD_defaultDisallowed;
DEBUGLOG(5, "Building OF table");
nextEntropy->offcode_repeatMode = prevEntropy->offcode_repeatMode;
stats.Offtype = ZSTD_selectEncodingType(&nextEntropy->offcode_repeatMode,
countWorkspace, max, mostFrequent, nbSeq,
OffFSELog, prevEntropy->offcodeCTable,
OF_defaultNorm, OF_defaultNormLog,
defaultPolicy, strategy);
assert(!(stats.Offtype < set_compressed && nextEntropy->offcode_repeatMode != FSE_repeat_none)); /* We don't copy tables */
{ size_t const countSize = ZSTD_buildCTable(
op, (size_t)(oend - op),
CTable_OffsetBits, OffFSELog, (SymbolEncodingType_e)stats.Offtype,
countWorkspace, max, ofCodeTable, nbSeq,
OF_defaultNorm, OF_defaultNormLog, DefaultMaxOff,
prevEntropy->offcodeCTable,
sizeof(prevEntropy->offcodeCTable),
entropyWorkspace, entropyWkspSize);
if (ZSTD_isError(countSize)) {
DEBUGLOG(3, "ZSTD_buildCTable for Offsets failed");
stats.size = countSize;
return stats;
}
if (stats.Offtype == set_compressed)
stats.lastCountSize = countSize;
op += countSize;
assert(op <= oend);
} }
/* build CTable for MatchLengths */
{ unsigned max = MaxML;
size_t const mostFrequent = HIST_countFast_wksp(
countWorkspace, &max, mlCodeTable, nbSeq, entropyWorkspace, entropyWkspSize); /* can't fail */
DEBUGLOG(5, "Building ML table (remaining space : %i)", (int)(oend-op));
nextEntropy->matchlength_repeatMode = prevEntropy->matchlength_repeatMode;
stats.MLtype = ZSTD_selectEncodingType(&nextEntropy->matchlength_repeatMode,
countWorkspace, max, mostFrequent, nbSeq,
MLFSELog, prevEntropy->matchlengthCTable,
ML_defaultNorm, ML_defaultNormLog,
ZSTD_defaultAllowed, strategy);
assert(!(stats.MLtype < set_compressed && nextEntropy->matchlength_repeatMode != FSE_repeat_none)); /* We don't copy tables */
{ size_t const countSize = ZSTD_buildCTable(
op, (size_t)(oend - op),
CTable_MatchLength, MLFSELog, (SymbolEncodingType_e)stats.MLtype,
countWorkspace, max, mlCodeTable, nbSeq,
ML_defaultNorm, ML_defaultNormLog, MaxML,
prevEntropy->matchlengthCTable,
sizeof(prevEntropy->matchlengthCTable),
entropyWorkspace, entropyWkspSize);
if (ZSTD_isError(countSize)) {
DEBUGLOG(3, "ZSTD_buildCTable for MatchLengths failed");
stats.size = countSize;
return stats;
}
if (stats.MLtype == set_compressed)
stats.lastCountSize = countSize;
op += countSize;
assert(op <= oend);
} }
stats.size = (size_t)(op-ostart);
return stats;
}
/* ZSTD_entropyCompressSeqStore_internal():
* compresses both literals and sequences
* Returns compressed size of block, or a zstd error.
*/
#define SUSPECT_UNCOMPRESSIBLE_LITERAL_RATIO 20
MEM_STATIC size_t
ZSTD_entropyCompressSeqStore_internal(
void* dst, size_t dstCapacity,
const void* literals, size_t litSize,
const SeqStore_t* seqStorePtr,
const ZSTD_entropyCTables_t* prevEntropy,
ZSTD_entropyCTables_t* nextEntropy,
const ZSTD_CCtx_params* cctxParams,
void* entropyWorkspace, size_t entropyWkspSize,
const int bmi2)
{
ZSTD_strategy const strategy = cctxParams->cParams.strategy;
unsigned* count = (unsigned*)entropyWorkspace;
FSE_CTable* CTable_LitLength = nextEntropy->fse.litlengthCTable;
FSE_CTable* CTable_OffsetBits = nextEntropy->fse.offcodeCTable;
FSE_CTable* CTable_MatchLength = nextEntropy->fse.matchlengthCTable;
const SeqDef* const sequences = seqStorePtr->sequencesStart;
const size_t nbSeq = (size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
const BYTE* const ofCodeTable = seqStorePtr->ofCode;
const BYTE* const llCodeTable = seqStorePtr->llCode;
const BYTE* const mlCodeTable = seqStorePtr->mlCode;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstCapacity;
BYTE* op = ostart;
size_t lastCountSize;
int longOffsets = 0;
entropyWorkspace = count + (MaxSeq + 1);
entropyWkspSize -= (MaxSeq + 1) * sizeof(*count);
DEBUGLOG(5, "ZSTD_entropyCompressSeqStore_internal (nbSeq=%zu, dstCapacity=%zu)", nbSeq, dstCapacity);
ZSTD_STATIC_ASSERT(HUF_WORKSPACE_SIZE >= (1<<MAX(MLFSELog,LLFSELog)));
assert(entropyWkspSize >= HUF_WORKSPACE_SIZE);
/* Compress literals */
{ size_t const numSequences = (size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
/* Base suspicion of uncompressibility on ratio of literals to sequences */
int const suspectUncompressible = (numSequences == 0) || (litSize / numSequences >= SUSPECT_UNCOMPRESSIBLE_LITERAL_RATIO);
size_t const cSize = ZSTD_compressLiterals(
op, dstCapacity,
literals, litSize,
entropyWorkspace, entropyWkspSize,
&prevEntropy->huf, &nextEntropy->huf,
cctxParams->cParams.strategy,
ZSTD_literalsCompressionIsDisabled(cctxParams),
suspectUncompressible, bmi2);
FORWARD_IF_ERROR(cSize, "ZSTD_compressLiterals failed");
assert(cSize <= dstCapacity);
op += cSize;
}
/* Sequences Header */
RETURN_ERROR_IF((oend-op) < 3 /*max nbSeq Size*/ + 1 /*seqHead*/,
dstSize_tooSmall, "Can't fit seq hdr in output buf!");
if (nbSeq < 128) {
*op++ = (BYTE)nbSeq;
} else if (nbSeq < LONGNBSEQ) {
op[0] = (BYTE)((nbSeq>>8) + 0x80);
op[1] = (BYTE)nbSeq;
op+=2;
} else {
op[0]=0xFF;
MEM_writeLE16(op+1, (U16)(nbSeq - LONGNBSEQ));
op+=3;
}
assert(op <= oend);
if (nbSeq==0) {
/* Copy the old tables over as if we repeated them */
ZSTD_memcpy(&nextEntropy->fse, &prevEntropy->fse, sizeof(prevEntropy->fse));
return (size_t)(op - ostart);
}
{ BYTE* const seqHead = op++;
/* build stats for sequences */
const ZSTD_symbolEncodingTypeStats_t stats =
ZSTD_buildSequencesStatistics(seqStorePtr, nbSeq,
&prevEntropy->fse, &nextEntropy->fse,
op, oend,
strategy, count,
entropyWorkspace, entropyWkspSize);
FORWARD_IF_ERROR(stats.size, "ZSTD_buildSequencesStatistics failed!");
*seqHead = (BYTE)((stats.LLtype<<6) + (stats.Offtype<<4) + (stats.MLtype<<2));
lastCountSize = stats.lastCountSize;
op += stats.size;
longOffsets = stats.longOffsets;
}
{ size_t const bitstreamSize = ZSTD_encodeSequences(
op, (size_t)(oend - op),
CTable_MatchLength, mlCodeTable,
CTable_OffsetBits, ofCodeTable,
CTable_LitLength, llCodeTable,
sequences, nbSeq,
longOffsets, bmi2);
FORWARD_IF_ERROR(bitstreamSize, "ZSTD_encodeSequences failed");
op += bitstreamSize;
assert(op <= oend);
/* zstd versions <= 1.3.4 mistakenly report corruption when
* FSE_readNCount() receives a buffer < 4 bytes.
* Fixed by https://github.com/facebook/zstd/pull/1146.
* This can happen when the last set_compressed table present is 2
* bytes and the bitstream is only one byte.
* In this exceedingly rare case, we will simply emit an uncompressed
* block, since it isn't worth optimizing.
*/
if (lastCountSize && (lastCountSize + bitstreamSize) < 4) {
/* lastCountSize >= 2 && bitstreamSize > 0 ==> lastCountSize == 3 */
assert(lastCountSize + bitstreamSize == 3);
DEBUGLOG(5, "Avoiding bug in zstd decoder in versions <= 1.3.4 by "
"emitting an uncompressed block.");
return 0;
}
}
DEBUGLOG(5, "compressed block size : %u", (unsigned)(op - ostart));
return (size_t)(op - ostart);
}
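/* Worked example of the Sequences_Section header emitted above: nbSeq=300 is
 * written on two bytes as 0x81 0x2C, decoded as ((0x81-0x80)<<8)+0x2C = 300;
 * a count of at least LONGNBSEQ (e.g. 40000) takes the three-byte form 0xFF
 * followed by LE16(nbSeq - LONGNBSEQ); a single byte suffices only for
 * nbSeq < 128. */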
static size_t
ZSTD_entropyCompressSeqStore_wExtLitBuffer(
void* dst, size_t dstCapacity,
const void* literals, size_t litSize,
size_t blockSize,
const SeqStore_t* seqStorePtr,
const ZSTD_entropyCTables_t* prevEntropy,
ZSTD_entropyCTables_t* nextEntropy,
const ZSTD_CCtx_params* cctxParams,
void* entropyWorkspace, size_t entropyWkspSize,
int bmi2)
{
size_t const cSize = ZSTD_entropyCompressSeqStore_internal(
dst, dstCapacity,
literals, litSize,
seqStorePtr, prevEntropy, nextEntropy, cctxParams,
entropyWorkspace, entropyWkspSize, bmi2);
if (cSize == 0) return 0;
/* When srcSize <= dstCapacity, there is enough space to write a raw uncompressed block.
* Since we ran out of space, the block must not be compressible, so fall back to a raw uncompressed block.
*/
if ((cSize == ERROR(dstSize_tooSmall)) & (blockSize <= dstCapacity)) {
DEBUGLOG(4, "not enough dstCapacity (%zu) for ZSTD_entropyCompressSeqStore_internal()=> do not compress block", dstCapacity);
return 0; /* block not compressed */
}
FORWARD_IF_ERROR(cSize, "ZSTD_entropyCompressSeqStore_internal failed");
/* Check compressibility */
{ size_t const maxCSize = blockSize - ZSTD_minGain(blockSize, cctxParams->cParams.strategy);
if (cSize >= maxCSize) return 0; /* block not compressed */
}
DEBUGLOG(5, "ZSTD_entropyCompressSeqStore() cSize: %zu", cSize);
/* libzstd decoders prior to v1.5.4 are not compatible with compressed blocks of size ZSTD_BLOCKSIZE_MAX exactly.
* This restriction is indirectly already fulfilled by respecting ZSTD_minGain() condition above.
*/
assert(cSize < ZSTD_BLOCKSIZE_MAX);
return cSize;
}
static size_t
ZSTD_entropyCompressSeqStore(
const SeqStore_t* seqStorePtr,
const ZSTD_entropyCTables_t* prevEntropy,
ZSTD_entropyCTables_t* nextEntropy,
const ZSTD_CCtx_params* cctxParams,
void* dst, size_t dstCapacity,
size_t srcSize,
void* entropyWorkspace, size_t entropyWkspSize,
int bmi2)
{
return ZSTD_entropyCompressSeqStore_wExtLitBuffer(
dst, dstCapacity,
seqStorePtr->litStart, (size_t)(seqStorePtr->lit - seqStorePtr->litStart),
srcSize,
seqStorePtr,
prevEntropy, nextEntropy,
cctxParams,
entropyWorkspace, entropyWkspSize,
bmi2);
}
/* ZSTD_selectBlockCompressor() :
* Not static, but internal use only (used by long distance matcher)
* assumption : strat is a valid strategy */
ZSTD_BlockCompressor_f ZSTD_selectBlockCompressor(ZSTD_strategy strat, ZSTD_ParamSwitch_e useRowMatchFinder, ZSTD_dictMode_e dictMode)
{
static const ZSTD_BlockCompressor_f blockCompressor[4][ZSTD_STRATEGY_MAX+1] = {
{ ZSTD_compressBlock_fast /* default for 0 */,
ZSTD_compressBlock_fast,
ZSTD_COMPRESSBLOCK_DOUBLEFAST,
ZSTD_COMPRESSBLOCK_GREEDY,
ZSTD_COMPRESSBLOCK_LAZY,
ZSTD_COMPRESSBLOCK_LAZY2,
ZSTD_COMPRESSBLOCK_BTLAZY2,
ZSTD_COMPRESSBLOCK_BTOPT,
ZSTD_COMPRESSBLOCK_BTULTRA,
ZSTD_COMPRESSBLOCK_BTULTRA2
},
{ ZSTD_compressBlock_fast_extDict /* default for 0 */,
ZSTD_compressBlock_fast_extDict,
ZSTD_COMPRESSBLOCK_DOUBLEFAST_EXTDICT,
ZSTD_COMPRESSBLOCK_GREEDY_EXTDICT,
ZSTD_COMPRESSBLOCK_LAZY_EXTDICT,
ZSTD_COMPRESSBLOCK_LAZY2_EXTDICT,
ZSTD_COMPRESSBLOCK_BTLAZY2_EXTDICT,
ZSTD_COMPRESSBLOCK_BTOPT_EXTDICT,
ZSTD_COMPRESSBLOCK_BTULTRA_EXTDICT,
ZSTD_COMPRESSBLOCK_BTULTRA_EXTDICT
},
{ ZSTD_compressBlock_fast_dictMatchState /* default for 0 */,
ZSTD_compressBlock_fast_dictMatchState,
ZSTD_COMPRESSBLOCK_DOUBLEFAST_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_GREEDY_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_LAZY_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_LAZY2_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_BTLAZY2_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_BTOPT_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_BTULTRA_DICTMATCHSTATE,
ZSTD_COMPRESSBLOCK_BTULTRA_DICTMATCHSTATE
},
{ NULL /* default for 0 */,
NULL,
NULL,
ZSTD_COMPRESSBLOCK_GREEDY_DEDICATEDDICTSEARCH,
ZSTD_COMPRESSBLOCK_LAZY_DEDICATEDDICTSEARCH,
ZSTD_COMPRESSBLOCK_LAZY2_DEDICATEDDICTSEARCH,
NULL,
NULL,
NULL,
NULL }
};
ZSTD_BlockCompressor_f selectedCompressor;
ZSTD_STATIC_ASSERT((unsigned)ZSTD_fast == 1);
assert(ZSTD_cParam_withinBounds(ZSTD_c_strategy, (int)strat));
DEBUGLOG(5, "Selected block compressor: dictMode=%d strat=%d rowMatchfinder=%d", (int)dictMode, (int)strat, (int)useRowMatchFinder);
if (ZSTD_rowMatchFinderUsed(strat, useRowMatchFinder)) {
static const ZSTD_BlockCompressor_f rowBasedBlockCompressors[4][3] = {
{
ZSTD_COMPRESSBLOCK_GREEDY_ROW,
ZSTD_COMPRESSBLOCK_LAZY_ROW,
ZSTD_COMPRESSBLOCK_LAZY2_ROW
},
{
ZSTD_COMPRESSBLOCK_GREEDY_EXTDICT_ROW,
ZSTD_COMPRESSBLOCK_LAZY_EXTDICT_ROW,
ZSTD_COMPRESSBLOCK_LAZY2_EXTDICT_ROW
},
{
ZSTD_COMPRESSBLOCK_GREEDY_DICTMATCHSTATE_ROW,
ZSTD_COMPRESSBLOCK_LAZY_DICTMATCHSTATE_ROW,
ZSTD_COMPRESSBLOCK_LAZY2_DICTMATCHSTATE_ROW
},
{
ZSTD_COMPRESSBLOCK_GREEDY_DEDICATEDDICTSEARCH_ROW,
ZSTD_COMPRESSBLOCK_LAZY_DEDICATEDDICTSEARCH_ROW,
ZSTD_COMPRESSBLOCK_LAZY2_DEDICATEDDICTSEARCH_ROW
}
};
DEBUGLOG(5, "Selecting a row-based matchfinder");
assert(useRowMatchFinder != ZSTD_ps_auto);
selectedCompressor = rowBasedBlockCompressors[(int)dictMode][(int)strat - (int)ZSTD_greedy];
} else {
selectedCompressor = blockCompressor[(int)dictMode][(int)strat];
}
assert(selectedCompressor != NULL);
return selectedCompressor;
}
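/* Worked example of the table lookups above: strat=ZSTD_lazy2 with
 * useRowMatchFinder enabled and dictMode=ZSTD_dictMatchState selects
 * rowBasedBlockCompressors[2][ZSTD_lazy2 - ZSTD_greedy], i.e. the
 * lazy2/dictMatchState/row variant; with the row matchfinder disabled, the
 * same inputs fall through to blockCompressor[2][ZSTD_lazy2]. */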
static void ZSTD_storeLastLiterals(SeqStore_t* seqStorePtr,
const BYTE* anchor, size_t lastLLSize)
{
ZSTD_memcpy(seqStorePtr->lit, anchor, lastLLSize);
seqStorePtr->lit += lastLLSize;
}
void ZSTD_resetSeqStore(SeqStore_t* ssPtr)
{
ssPtr->lit = ssPtr->litStart;
ssPtr->sequences = ssPtr->sequencesStart;
ssPtr->longLengthType = ZSTD_llt_none;
}
/* ZSTD_postProcessSequenceProducerResult() :
* Validates and post-processes sequences obtained through the external matchfinder API:
* - Checks whether nbExternalSeqs represents an error condition.
* - Appends a block delimiter to outSeqs if one is not already present.
* See zstd.h for context regarding block delimiters.
* Returns the number of sequences after post-processing, or an error code. */
static size_t ZSTD_postProcessSequenceProducerResult(
ZSTD_Sequence* outSeqs, size_t nbExternalSeqs, size_t outSeqsCapacity, size_t srcSize
) {
RETURN_ERROR_IF(
nbExternalSeqs > outSeqsCapacity,
sequenceProducer_failed,
"External sequence producer returned error code %lu",
(unsigned long)nbExternalSeqs
);
RETURN_ERROR_IF(
nbExternalSeqs == 0 && srcSize > 0,
sequenceProducer_failed,
"Got zero sequences from external sequence producer for a non-empty src buffer!"
);
if (srcSize == 0) {
ZSTD_memset(&outSeqs[0], 0, sizeof(ZSTD_Sequence));
return 1;
}
{
ZSTD_Sequence const lastSeq = outSeqs[nbExternalSeqs - 1];
/* We can return early if lastSeq is already a block delimiter. */
if (lastSeq.offset == 0 && lastSeq.matchLength == 0) {
return nbExternalSeqs;
}
/* This error condition is only possible if the external matchfinder
* produced an invalid parse, by definition of ZSTD_sequenceBound(). */
RETURN_ERROR_IF(
nbExternalSeqs == outSeqsCapacity,
sequenceProducer_failed,
"nbExternalSeqs == outSeqsCapacity but lastSeq is not a block delimiter!"
);
/* lastSeq is not a block delimiter, so we need to append one. */
ZSTD_memset(&outSeqs[nbExternalSeqs], 0, sizeof(ZSTD_Sequence));
return nbExternalSeqs + 1;
}
}
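/* Illustrative sketch (experimental API; myState/myProducer are placeholder
 * names): an external sequence producer is plugged in with
 *
 *   ZSTD_registerSequenceProducer(cctx, myState, myProducer);
 *
 * where myProducer matches ZSTD_sequenceProducer_F and returns either the
 * number of ZSTD_Sequence entries written to outSeqs, or a value larger than
 * outSeqsCapacity to signal failure - exactly the first condition rejected
 * above. */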
/* ZSTD_fastSequenceLengthSum() :
* Returns sum(litLen) + sum(matchLen) + lastLits for *seqBuf*.
* Similar to another function in zstd_compress.c (determine_blockSize),
* except it doesn't check for a block delimiter to end summation.
* Removing the early exit allows the compiler to auto-vectorize (https://godbolt.org/z/cY1cajz9P).
* This function can be deleted and replaced by determine_blockSize after we resolve issue #3456. */
static size_t ZSTD_fastSequenceLengthSum(ZSTD_Sequence const* seqBuf, size_t seqBufSize) {
size_t matchLenSum, litLenSum, i;
matchLenSum = 0;
litLenSum = 0;
for (i = 0; i < seqBufSize; i++) {
litLenSum += seqBuf[i].litLength;
matchLenSum += seqBuf[i].matchLength;
}
return litLenSum + matchLenSum;
}
/**
* Function to validate sequences produced by a block compressor.
*/
static void ZSTD_validateSeqStore(const SeqStore_t* seqStore, const ZSTD_compressionParameters* cParams)
{
#if DEBUGLEVEL >= 1
const SeqDef* seq = seqStore->sequencesStart;
const SeqDef* const seqEnd = seqStore->sequences;
size_t const matchLenLowerBound = cParams->minMatch == 3 ? 3 : 4;
for (; seq < seqEnd; ++seq) {
const ZSTD_SequenceLength seqLength = ZSTD_getSequenceLength(seqStore, seq);
assert(seqLength.matchLength >= matchLenLowerBound);
(void)seqLength;
(void)matchLenLowerBound;
}
#else
(void)seqStore;
(void)cParams;
#endif
}
static size_t
ZSTD_transferSequences_wBlockDelim(ZSTD_CCtx* cctx,
ZSTD_SequencePosition* seqPos,
const ZSTD_Sequence* const inSeqs, size_t inSeqsSize,
const void* src, size_t blockSize,
ZSTD_ParamSwitch_e externalRepSearch);
typedef enum { ZSTDbss_compress, ZSTDbss_noCompress } ZSTD_BuildSeqStore_e;
static size_t ZSTD_buildSeqStore(ZSTD_CCtx* zc, const void* src, size_t srcSize)
{
ZSTD_MatchState_t* const ms = &zc->blockState.matchState;
DEBUGLOG(5, "ZSTD_buildSeqStore (srcSize=%zu)", srcSize);
assert(srcSize <= ZSTD_BLOCKSIZE_MAX);
/* Assert that we have correctly flushed the ctx params into the ms's copy */
ZSTD_assertEqualCParams(zc->appliedParams.cParams, ms->cParams);
/* TODO: See 3090. We reduced MIN_CBLOCK_SIZE from 3 to 2, so to compensate we add
* an additional 1. We need to revisit this logic and make it more consistent. */
if (srcSize < MIN_CBLOCK_SIZE+ZSTD_blockHeaderSize+1+1) {
if (zc->appliedParams.cParams.strategy >= ZSTD_btopt) {
ZSTD_ldm_skipRawSeqStoreBytes(&zc->externSeqStore, srcSize);
} else {
ZSTD_ldm_skipSequences(&zc->externSeqStore, srcSize, zc->appliedParams.cParams.minMatch);
}
return ZSTDbss_noCompress; /* don't even attempt compression below a certain srcSize */
}
ZSTD_resetSeqStore(&(zc->seqStore));
/* required for optimal parser to read stats from dictionary */
ms->opt.symbolCosts = &zc->blockState.prevCBlock->entropy;
/* tell the optimal parser how we expect to compress literals */
ms->opt.literalCompressionMode = zc->appliedParams.literalCompressionMode;
/* a gap between an attached dict and the current window is not safe,
* they must remain adjacent,
* and when that stops being the case, the dict must be unset */
assert(ms->dictMatchState == NULL || ms->loadedDictEnd == ms->window.dictLimit);
/* limited update after a very long match */
{ const BYTE* const base = ms->window.base;
const BYTE* const istart = (const BYTE*)src;
const U32 curr = (U32)(istart-base);
if (sizeof(ptrdiff_t)==8) assert(istart - base < (ptrdiff_t)(U32)(-1)); /* ensure no overflow */
if (curr > ms->nextToUpdate + 384)
ms->nextToUpdate = curr - MIN(192, (U32)(curr - ms->nextToUpdate - 384));
}
/* select and store sequences */
{ ZSTD_dictMode_e const dictMode = ZSTD_matchState_dictMode(ms);
size_t lastLLSize;
{ int i;
for (i = 0; i < ZSTD_REP_NUM; ++i)
zc->blockState.nextCBlock->rep[i] = zc->blockState.prevCBlock->rep[i];
}
if (zc->externSeqStore.pos < zc->externSeqStore.size) {
assert(zc->appliedParams.ldmParams.enableLdm == ZSTD_ps_disable);
/* External matchfinder + LDM is technically possible, just not implemented yet.
* We need to revisit soon and implement it. */
RETURN_ERROR_IF(
ZSTD_hasExtSeqProd(&zc->appliedParams),
parameter_combination_unsupported,
"Long-distance matching with external sequence producer enabled is not currently supported."
);
/* Updates ldmSeqStore.pos */
lastLLSize =
ZSTD_ldm_blockCompress(&zc->externSeqStore,
ms, &zc->seqStore,
zc->blockState.nextCBlock->rep,
zc->appliedParams.useRowMatchFinder,
src, srcSize);
assert(zc->externSeqStore.pos <= zc->externSeqStore.size);
} else if (zc->appliedParams.ldmParams.enableLdm == ZSTD_ps_enable) {
RawSeqStore_t ldmSeqStore = kNullRawSeqStore;
/* External matchfinder + LDM is technically possible, just not implemented yet.
* We need to revisit soon and implement it. */
RETURN_ERROR_IF(
ZSTD_hasExtSeqProd(&zc->appliedParams),
parameter_combination_unsupported,
"Long-distance matching with external sequence producer enabled is not currently supported."
);
ldmSeqStore.seq = zc->ldmSequences;
ldmSeqStore.capacity = zc->maxNbLdmSequences;
/* Updates ldmSeqStore.size */
FORWARD_IF_ERROR(ZSTD_ldm_generateSequences(&zc->ldmState, &ldmSeqStore,
&zc->appliedParams.ldmParams,
src, srcSize), "");
/* Updates ldmSeqStore.pos */
lastLLSize =
ZSTD_ldm_blockCompress(&ldmSeqStore,
ms, &zc->seqStore,
zc->blockState.nextCBlock->rep,
zc->appliedParams.useRowMatchFinder,
src, srcSize);
assert(ldmSeqStore.pos == ldmSeqStore.size);
} else if (ZSTD_hasExtSeqProd(&zc->appliedParams)) {
assert(
zc->extSeqBufCapacity >= ZSTD_sequenceBound(srcSize)
);
assert(zc->appliedParams.extSeqProdFunc != NULL);
{ U32 const windowSize = (U32)1 << zc->appliedParams.cParams.windowLog;
size_t const nbExternalSeqs = (zc->appliedParams.extSeqProdFunc)(
zc->appliedParams.extSeqProdState,
zc->extSeqBuf,
zc->extSeqBufCapacity,
src, srcSize,
NULL, 0, /* dict and dictSize, currently not supported */
zc->appliedParams.compressionLevel,
windowSize
);
size_t const nbPostProcessedSeqs = ZSTD_postProcessSequenceProducerResult(
zc->extSeqBuf,
nbExternalSeqs,
zc->extSeqBufCapacity,
srcSize
);
/* Return early if there is no error, since we don't need to worry about last literals */
if (!ZSTD_isError(nbPostProcessedSeqs)) {
ZSTD_SequencePosition seqPos = {0,0,0};
size_t const seqLenSum = ZSTD_fastSequenceLengthSum(zc->extSeqBuf, nbPostProcessedSeqs);
RETURN_ERROR_IF(seqLenSum > srcSize, externalSequences_invalid, "External sequences imply too large a block!");
FORWARD_IF_ERROR(
ZSTD_transferSequences_wBlockDelim(
zc, &seqPos,
zc->extSeqBuf, nbPostProcessedSeqs,
src, srcSize,
zc->appliedParams.searchForExternalRepcodes
),
"Failed to copy external sequences to seqStore!"
);
ms->ldmSeqStore = NULL;
DEBUGLOG(5, "Copied %lu sequences from external sequence producer to internal seqStore.", (unsigned long)nbExternalSeqs);
return ZSTDbss_compress;
}
/* Propagate the error if fallback is disabled */
if (!zc->appliedParams.enableMatchFinderFallback) {
return nbPostProcessedSeqs;
}
/* Fallback to software matchfinder */
{ ZSTD_BlockCompressor_f const blockCompressor =
ZSTD_selectBlockCompressor(
zc->appliedParams.cParams.strategy,
zc->appliedParams.useRowMatchFinder,
dictMode);
ms->ldmSeqStore = NULL;
DEBUGLOG(
5,
"External sequence producer returned error code %lu. Falling back to internal parser.",
(unsigned long)nbExternalSeqs
);
lastLLSize = blockCompressor(ms, &zc->seqStore, zc->blockState.nextCBlock->rep, src, srcSize);
} }
} else { /* not long range mode and no external matchfinder */
ZSTD_BlockCompressor_f const blockCompressor = ZSTD_selectBlockCompressor(
zc->appliedParams.cParams.strategy,
zc->appliedParams.useRowMatchFinder,
dictMode);
ms->ldmSeqStore = NULL;
lastLLSize = blockCompressor(ms, &zc->seqStore, zc->blockState.nextCBlock->rep, src, srcSize);
}
{ const BYTE* const lastLiterals = (const BYTE*)src + srcSize - lastLLSize;
ZSTD_storeLastLiterals(&zc->seqStore, lastLiterals, lastLLSize);
} }
ZSTD_validateSeqStore(&zc->seqStore, &zc->appliedParams.cParams);
return ZSTDbss_compress;
}
static size_t ZSTD_copyBlockSequences(SeqCollector* seqCollector, const SeqStore_t* seqStore, const U32 prevRepcodes[ZSTD_REP_NUM])
{
const SeqDef* inSeqs = seqStore->sequencesStart;
const size_t nbInSequences = (size_t)(seqStore->sequences - inSeqs);
const size_t nbInLiterals = (size_t)(seqStore->lit - seqStore->litStart);
ZSTD_Sequence* outSeqs = seqCollector->seqIndex == 0 ? seqCollector->seqStart : seqCollector->seqStart + seqCollector->seqIndex;
const size_t nbOutSequences = nbInSequences + 1;
size_t nbOutLiterals = 0;
Repcodes_t repcodes;
size_t i;
/* Bounds check that we have enough space for every input sequence
* and the block delimiter
*/
assert(seqCollector->seqIndex <= seqCollector->maxSequences);
RETURN_ERROR_IF(
nbOutSequences > (size_t)(seqCollector->maxSequences - seqCollector->seqIndex),
dstSize_tooSmall,
"Not enough space to copy sequences");
ZSTD_memcpy(&repcodes, prevRepcodes, sizeof(repcodes));
for (i = 0; i < nbInSequences; ++i) {
U32 rawOffset;
outSeqs[i].litLength = inSeqs[i].litLength;
outSeqs[i].matchLength = inSeqs[i].mlBase + MINMATCH;
outSeqs[i].rep = 0;
/* Handle the possible single length >= 64K
* There can only be one because we add MINMATCH to every match length,
* and blocks are at most 128K.
*/
if (i == seqStore->longLengthPos) {
if (seqStore->longLengthType == ZSTD_llt_literalLength) {
outSeqs[i].litLength += 0x10000;
} else if (seqStore->longLengthType == ZSTD_llt_matchLength) {
outSeqs[i].matchLength += 0x10000;
}
}
/* Determine the raw offset given the offBase, which may be a repcode. */
if (OFFBASE_IS_REPCODE(inSeqs[i].offBase)) {
const U32 repcode = OFFBASE_TO_REPCODE(inSeqs[i].offBase);
assert(repcode > 0);
outSeqs[i].rep = repcode;
if (outSeqs[i].litLength != 0) {
rawOffset = repcodes.rep[repcode - 1];
} else {
if (repcode == 3) {
assert(repcodes.rep[0] > 1);
rawOffset = repcodes.rep[0] - 1;
} else {
rawOffset = repcodes.rep[repcode];
}
}
} else {
rawOffset = OFFBASE_TO_OFFSET(inSeqs[i].offBase);
}
outSeqs[i].offset = rawOffset;
/* Update repcode history for the sequence */
ZSTD_updateRep(repcodes.rep,
inSeqs[i].offBase,
inSeqs[i].litLength == 0);
nbOutLiterals += outSeqs[i].litLength;
}
/* Insert last literals (if any exist) in the block as a sequence with ml == off == 0.
* If there are no last literals, then we'll emit (of: 0, ml: 0, ll: 0), which is a marker
* for the block boundary, according to the API.
*/
assert(nbInLiterals >= nbOutLiterals);
{
const size_t lastLLSize = nbInLiterals - nbOutLiterals;
outSeqs[nbInSequences].litLength = (U32)lastLLSize;
outSeqs[nbInSequences].matchLength = 0;
outSeqs[nbInSequences].offset = 0;
assert(nbOutSequences == nbInSequences + 1);
}
seqCollector->seqIndex += nbOutSequences;
assert(seqCollector->seqIndex <= seqCollector->maxSequences);
return 0;
}
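/* Worked example of the repcode translation above: with repcode history
 * {rep[0]=10, rep[1]=20, rep[2]=30}, a sequence whose offBase encodes repcode 1
 * and has litLength > 0 resolves to rawOffset=10, while repcode 1 with
 * litLength == 0 resolves to rep[1]=20, and repcode 3 with litLength == 0 is
 * the special case rep[0]-1 = 9. */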
size_t ZSTD_sequenceBound(size_t srcSize) {
const size_t maxNbSeq = (srcSize / ZSTD_MINMATCH_MIN) + 1;
const size_t maxNbDelims = (srcSize / ZSTD_BLOCKSIZE_MAX_MIN) + 1;
return maxNbSeq + maxNbDelims;
}
size_t ZSTD_generateSequences(ZSTD_CCtx* zc, ZSTD_Sequence* outSeqs,
size_t outSeqsSize, const void* src, size_t srcSize)
{
const size_t dstCapacity = ZSTD_compressBound(srcSize);
void* dst; /* Make C90 happy. */
SeqCollector seqCollector;
{
int targetCBlockSize;
FORWARD_IF_ERROR(ZSTD_CCtx_getParameter(zc, ZSTD_c_targetCBlockSize, &targetCBlockSize), "");
RETURN_ERROR_IF(targetCBlockSize != 0, parameter_unsupported, "targetCBlockSize != 0");
}
{
int nbWorkers;
FORWARD_IF_ERROR(ZSTD_CCtx_getParameter(zc, ZSTD_c_nbWorkers, &nbWorkers), "");
RETURN_ERROR_IF(nbWorkers != 0, parameter_unsupported, "nbWorkers != 0");
}
dst = ZSTD_customMalloc(dstCapacity, ZSTD_defaultCMem);
RETURN_ERROR_IF(dst == NULL, memory_allocation, "NULL pointer!");
seqCollector.collectSequences = 1;
seqCollector.seqStart = outSeqs;
seqCollector.seqIndex = 0;
seqCollector.maxSequences = outSeqsSize;
zc->seqCollector = seqCollector;
{
const size_t ret = ZSTD_compress2(zc, dst, dstCapacity, src, srcSize);
ZSTD_customFree(dst, ZSTD_defaultCMem);
FORWARD_IF_ERROR(ret, "ZSTD_compress2 failed");
}
assert(zc->seqCollector.seqIndex <= ZSTD_sequenceBound(srcSize));
return zc->seqCollector.seqIndex;
}
size_t ZSTD_mergeBlockDelimiters(ZSTD_Sequence* sequences, size_t seqsSize) {
size_t in = 0;
size_t out = 0;
for (; in < seqsSize; ++in) {
if (sequences[in].offset == 0 && sequences[in].matchLength == 0) {
if (in != seqsSize - 1) {
sequences[in+1].litLength += sequences[in].litLength;
}
} else {
sequences[out] = sequences[in];
++out;
}
}
return out;
}
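/* Illustrative sketch (names are placeholders, error checks omitted):
 * collecting and then flattening sequences for a whole source buffer:
 *
 *   size_t const maxSeqs = ZSTD_sequenceBound(srcSize);
 *   ZSTD_Sequence* const seqs = malloc(maxSeqs * sizeof(ZSTD_Sequence));
 *   size_t nbSeqs = ZSTD_generateSequences(cctx, seqs, maxSeqs, src, srcSize);
 *   nbSeqs = ZSTD_mergeBlockDelimiters(seqs, nbSeqs);  // drop (offset=0, ml=0) delimiters
 *
 * After merging, each removed delimiter's litLength has been folded into the
 * following sequence, as implemented above. */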
/* Unrolled loop reading four size_ts of input at a time. Returns 1 if the input is RLE, 0 if not. */
static int ZSTD_isRLE(const BYTE* src, size_t length) {
const BYTE* ip = src;
const BYTE value = ip[0];
const size_t valueST = (size_t)((U64)value * 0x0101010101010101ULL);
const size_t unrollSize = sizeof(size_t) * 4;
const size_t unrollMask = unrollSize - 1;
const size_t prefixLength = length & unrollMask;
size_t i;
if (length == 1) return 1;
/* Check if prefix is RLE first before using unrolled loop */
if (prefixLength && ZSTD_count(ip+1, ip, ip+prefixLength) != prefixLength-1) {
return 0;
}
for (i = prefixLength; i != length; i += unrollSize) {
size_t u;
for (u = 0; u < unrollSize; u += sizeof(size_t)) {
if (MEM_readST(ip + i + u) != valueST) {
return 0;
} } }
return 1;
}
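/* Worked example of the broadcast above: for value=0xAB, valueST is
 * 0xABABABABABABABAB on a 64-bit build (0xABABABAB with a 32-bit size_t), so
 * each word compare checks sizeof(size_t) bytes at once and the unrolled loop
 * checks 4 words per iteration. */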
/* Returns true if the given block may be RLE.
* This is just a heuristic based on the compressibility.
* It may return both false positives and false negatives.
*/
static int ZSTD_maybeRLE(SeqStore_t const* seqStore)
{
size_t const nbSeqs = (size_t)(seqStore->sequences - seqStore->sequencesStart);
size_t const nbLits = (size_t)(seqStore->lit - seqStore->litStart);
return nbSeqs < 4 && nbLits < 10;
}
static void
ZSTD_blockState_confirmRepcodesAndEntropyTables(ZSTD_blockState_t* const bs)
{
ZSTD_compressedBlockState_t* const tmp = bs->prevCBlock;
bs->prevCBlock = bs->nextCBlock;
bs->nextCBlock = tmp;
}
/* Writes the block header */
static void
writeBlockHeader(void* op, size_t cSize, size_t blockSize, U32 lastBlock)
{
U32 const cBlockHeader = cSize == 1 ?
lastBlock + (((U32)bt_rle)<<1) + (U32)(blockSize << 3) :
lastBlock + (((U32)bt_compressed)<<1) + (U32)(cSize << 3);
MEM_writeLE24(op, cBlockHeader);
DEBUGLOG(5, "writeBlockHeader: cSize: %zu blockSize: %zu lastBlock: %u", cSize, blockSize, lastBlock);
}
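/* Worked example of the 3-byte header above: a compressed block of cSize=1000
 * that is not the last block stores 0 + (bt_compressed<<1) + (1000<<3) =
 * 0x1F44, written little-endian as 44 1F 00; a cSize of exactly 1 switches to
 * bt_rle and stores the original blockSize in the size field instead. */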
/** ZSTD_buildBlockEntropyStats_literals() :
* Builds entropy for the literals.
* Stores literals block type (raw, rle, compressed, repeat) and
* huffman description table to hufMetadata.
* Requires ENTROPY_WORKSPACE_SIZE workspace
* @return : size of huffman description table, or an error code
*/
static size_t
ZSTD_buildBlockEntropyStats_literals(void* const src, size_t srcSize,
const ZSTD_hufCTables_t* prevHuf,
ZSTD_hufCTables_t* nextHuf,
ZSTD_hufCTablesMetadata_t* hufMetadata,
const int literalsCompressionIsDisabled,
void* workspace, size_t wkspSize,
int hufFlags)
{
BYTE* const wkspStart = (BYTE*)workspace;
BYTE* const wkspEnd = wkspStart + wkspSize;
BYTE* const countWkspStart = wkspStart;
unsigned* const countWksp = (unsigned*)workspace;
const size_t countWkspSize = (HUF_SYMBOLVALUE_MAX + 1) * sizeof(unsigned);
BYTE* const nodeWksp = countWkspStart + countWkspSize;
const size_t nodeWkspSize = (size_t)(wkspEnd - nodeWksp);
unsigned maxSymbolValue = HUF_SYMBOLVALUE_MAX;
unsigned huffLog = LitHufLog;
HUF_repeat repeat = prevHuf->repeatMode;
DEBUGLOG(5, "ZSTD_buildBlockEntropyStats_literals (srcSize=%zu)", srcSize);
/* Prepare nextEntropy assuming reusing the existing table */
ZSTD_memcpy(nextHuf, prevHuf, sizeof(*prevHuf));
if (literalsCompressionIsDisabled) {
DEBUGLOG(5, "set_basic - disabled");
hufMetadata->hType = set_basic;
return 0;
}
/* small ? don't even attempt compression (speed opt) */
#ifndef COMPRESS_LITERALS_SIZE_MIN
# define COMPRESS_LITERALS_SIZE_MIN 63 /* heuristic */
#endif
{ size_t const minLitSize = (prevHuf->repeatMode == HUF_repeat_valid) ? 6 : COMPRESS_LITERALS_SIZE_MIN;
if (srcSize <= minLitSize) {
DEBUGLOG(5, "set_basic - too small");
hufMetadata->hType = set_basic;
return 0;
} }
/* Scan input and build symbol stats */
{ size_t const largest =
HIST_count_wksp (countWksp, &maxSymbolValue,
(const BYTE*)src, srcSize,
workspace, wkspSize);
FORWARD_IF_ERROR(largest, "HIST_count_wksp failed");
if (largest == srcSize) {
/* only one literal symbol */
DEBUGLOG(5, "set_rle");
hufMetadata->hType = set_rle;
return 0;
}
if (largest <= (srcSize >> 7)+4) {
/* heuristic: likely not compressible */
DEBUGLOG(5, "set_basic - no gain");
hufMetadata->hType = set_basic;
return 0;
} }
/* Validate the previous Huffman table */
if (repeat == HUF_repeat_check
&& !HUF_validateCTable((HUF_CElt const*)prevHuf->CTable, countWksp, maxSymbolValue)) {
repeat = HUF_repeat_none;
}
/* Build Huffman Tree */
ZSTD_memset(nextHuf->CTable, 0, sizeof(nextHuf->CTable));
huffLog = HUF_optimalTableLog(huffLog, srcSize, maxSymbolValue, nodeWksp, nodeWkspSize, nextHuf->CTable, countWksp, hufFlags);
assert(huffLog <= LitHufLog);
{ size_t const maxBits = HUF_buildCTable_wksp((HUF_CElt*)nextHuf->CTable, countWksp,
maxSymbolValue, huffLog,
nodeWksp, nodeWkspSize);
FORWARD_IF_ERROR(maxBits, "HUF_buildCTable_wksp");
huffLog = (U32)maxBits;
}
{ /* Build and write the CTable */
size_t const newCSize = HUF_estimateCompressedSize(
(HUF_CElt*)nextHuf->CTable, countWksp, maxSymbolValue);
size_t const hSize = HUF_writeCTable_wksp(
hufMetadata->hufDesBuffer, sizeof(hufMetadata->hufDesBuffer),
(HUF_CElt*)nextHuf->CTable, maxSymbolValue, huffLog,
nodeWksp, nodeWkspSize);
/* Check against repeating the previous CTable */
if (repeat != HUF_repeat_none) {
size_t const oldCSize = HUF_estimateCompressedSize(
(HUF_CElt const*)prevHuf->CTable, countWksp, maxSymbolValue);
if (oldCSize < srcSize && (oldCSize <= hSize + newCSize || hSize + 12 >= srcSize)) {
DEBUGLOG(5, "set_repeat - smaller");
ZSTD_memcpy(nextHuf, prevHuf, sizeof(*prevHuf));
hufMetadata->hType = set_repeat;
return 0;
} }
if (newCSize + hSize >= srcSize) {
DEBUGLOG(5, "set_basic - no gains");
ZSTD_memcpy(nextHuf, prevHuf, sizeof(*prevHuf));
hufMetadata->hType = set_basic;
return 0;
}
DEBUGLOG(5, "set_compressed (hSize=%u)", (U32)hSize);
hufMetadata->hType = set_compressed;
nextHuf->repeatMode = HUF_repeat_check;
return hSize;
}
}
/* ZSTD_buildDummySequencesStatistics():
* Returns a ZSTD_symbolEncodingTypeStats_t with all encoding types as set_basic,
* and updates nextEntropy to the appropriate repeatMode.
*/
static ZSTD_symbolEncodingTypeStats_t
ZSTD_buildDummySequencesStatistics(ZSTD_fseCTables_t* nextEntropy)
{
ZSTD_symbolEncodingTypeStats_t stats = {set_basic, set_basic, set_basic, 0, 0, 0};
nextEntropy->litlength_repeatMode = FSE_repeat_none;
nextEntropy->offcode_repeatMode = FSE_repeat_none;
nextEntropy->matchlength_repeatMode = FSE_repeat_none;
return stats;
}
/** ZSTD_buildBlockEntropyStats_sequences() :
* Builds entropy for the sequences.
* Stores symbol compression modes and fse table to fseMetadata.
* Requires ENTROPY_WORKSPACE_SIZE wksp.
* @return : size of fse tables or error code */
static size_t
ZSTD_buildBlockEntropyStats_sequences(
const SeqStore_t* seqStorePtr,
const ZSTD_fseCTables_t* prevEntropy,
ZSTD_fseCTables_t* nextEntropy,
const ZSTD_CCtx_params* cctxParams,
ZSTD_fseCTablesMetadata_t* fseMetadata,
void* workspace, size_t wkspSize)
{
ZSTD_strategy const strategy = cctxParams->cParams.strategy;
size_t const nbSeq = (size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
BYTE* const ostart = fseMetadata->fseTablesBuffer;
BYTE* const oend = ostart + sizeof(fseMetadata->fseTablesBuffer);
BYTE* op = ostart;
unsigned* countWorkspace = (unsigned*)workspace;
unsigned* entropyWorkspace = countWorkspace + (MaxSeq + 1);
size_t entropyWorkspaceSize = wkspSize - (MaxSeq + 1) * sizeof(*countWorkspace);
ZSTD_symbolEncodingTypeStats_t stats;
DEBUGLOG(5, "ZSTD_buildBlockEntropyStats_sequences (nbSeq=%zu)", nbSeq);
stats = nbSeq != 0 ? ZSTD_buildSequencesStatistics(seqStorePtr, nbSeq,
prevEntropy, nextEntropy, op, oend,
strategy, countWorkspace,
entropyWorkspace, entropyWorkspaceSize)
: ZSTD_buildDummySequencesStatistics(nextEntropy);
FORWARD_IF_ERROR(stats.size, "ZSTD_buildSequencesStatistics failed!");
fseMetadata->llType = (SymbolEncodingType_e) stats.LLtype;
fseMetadata->ofType = (SymbolEncodingType_e) stats.Offtype;
fseMetadata->mlType = (SymbolEncodingType_e) stats.MLtype;
fseMetadata->lastCountSize = stats.lastCountSize;
return stats.size;
}
/** ZSTD_buildBlockEntropyStats() :
* Builds entropy for the block.
* Requires workspace size ENTROPY_WORKSPACE_SIZE
* @return : 0 on success, or an error code
* Note : also employed in superblock
*/
size_t ZSTD_buildBlockEntropyStats(
const SeqStore_t* seqStorePtr,
const ZSTD_entropyCTables_t* prevEntropy,
ZSTD_entropyCTables_t* nextEntropy,
const ZSTD_CCtx_params* cctxParams,
ZSTD_entropyCTablesMetadata_t* entropyMetadata,
void* workspace, size_t wkspSize)
{
size_t const litSize = (size_t)(seqStorePtr->lit - seqStorePtr->litStart);
int const huf_useOptDepth = (cctxParams->cParams.strategy >= HUF_OPTIMAL_DEPTH_THRESHOLD);
int const hufFlags = huf_useOptDepth ? HUF_flags_optimalDepth : 0;
entropyMetadata->hufMetadata.hufDesSize =
ZSTD_buildBlockEntropyStats_literals(seqStorePtr->litStart, litSize,
&prevEntropy->huf, &nextEntropy->huf,
&entropyMetadata->hufMetadata,
ZSTD_literalsCompressionIsDisabled(cctxParams),
workspace, wkspSize, hufFlags);
FORWARD_IF_ERROR(entropyMetadata->hufMetadata.hufDesSize, "ZSTD_buildBlockEntropyStats_literals failed");
entropyMetadata->fseMetadata.fseTablesSize =
ZSTD_buildBlockEntropyStats_sequences(seqStorePtr,
&prevEntropy->fse, &nextEntropy->fse,
cctxParams,
&entropyMetadata->fseMetadata,
workspace, wkspSize);
FORWARD_IF_ERROR(entropyMetadata->fseMetadata.fseTablesSize, "ZSTD_buildBlockEntropyStats_sequences failed");
return 0;
}
/* Returns the size estimate for the literals section (header + content) of a block */
static size_t
ZSTD_estimateBlockSize_literal(const BYTE* literals, size_t litSize,
const ZSTD_hufCTables_t* huf,
const ZSTD_hufCTablesMetadata_t* hufMetadata,
void* workspace, size_t wkspSize,
int writeEntropy)
{
unsigned* const countWksp = (unsigned*)workspace;
unsigned maxSymbolValue = HUF_SYMBOLVALUE_MAX;
size_t literalSectionHeaderSize = 3 + (litSize >= 1 KB) + (litSize >= 16 KB);
U32 singleStream = litSize < 256;
if (hufMetadata->hType == set_basic) return litSize;
else if (hufMetadata->hType == set_rle) return 1;
else if (hufMetadata->hType == set_compressed || hufMetadata->hType == set_repeat) {
size_t const largest = HIST_count_wksp (countWksp, &maxSymbolValue, (const BYTE*)literals, litSize, workspace, wkspSize);
if (ZSTD_isError(largest)) return litSize;
{ size_t cLitSizeEstimate = HUF_estimateCompressedSize((const HUF_CElt*)huf->CTable, countWksp, maxSymbolValue);
if (writeEntropy) cLitSizeEstimate += hufMetadata->hufDesSize;
if (!singleStream) cLitSizeEstimate += 6; /* multi-stream huffman uses 6-byte jump table */
return cLitSizeEstimate + literalSectionHeaderSize;
} }
assert(0); /* impossible */
return 0;
}
/* Returns the size estimate for the FSE-compressed symbols (of, ml, ll) of a block */
static size_t
ZSTD_estimateBlockSize_symbolType(SymbolEncodingType_e type,
const BYTE* codeTable, size_t nbSeq, unsigned maxCode,
const FSE_CTable* fseCTable,
const U8* additionalBits,
short const* defaultNorm, U32 defaultNormLog, U32 defaultMax,
void* workspace, size_t wkspSize)
{
unsigned* const countWksp = (unsigned*)workspace;
const BYTE* ctp = codeTable;
const BYTE* const ctStart = ctp;
const BYTE* const ctEnd = ctStart + nbSeq;
size_t cSymbolTypeSizeEstimateInBits = 0;
unsigned max = maxCode;
HIST_countFast_wksp(countWksp, &max, codeTable, nbSeq, workspace, wkspSize); /* can't fail */
if (type == set_basic) {
/* We selected this encoding type, so it must be valid. */
assert(max <= defaultMax);
(void)defaultMax;
cSymbolTypeSizeEstimateInBits = ZSTD_crossEntropyCost(defaultNorm, defaultNormLog, countWksp, max);
} else if (type == set_rle) {
cSymbolTypeSizeEstimateInBits = 0;
} else if (type == set_compressed || type == set_repeat) {
cSymbolTypeSizeEstimateInBits = ZSTD_fseBitCost(fseCTable, countWksp, max);
}
if (ZSTD_isError(cSymbolTypeSizeEstimateInBits)) {
return nbSeq * 10;
}
while (ctp < ctEnd) {
if (additionalBits) cSymbolTypeSizeEstimateInBits += additionalBits[*ctp];
else cSymbolTypeSizeEstimateInBits += *ctp; /* for offset, offset code is also the number of additional bits */
ctp++;
}
return cSymbolTypeSizeEstimateInBits >> 3;
}
/* Returns the size estimate for the sequences section (header + content) of a block */
static size_t
ZSTD_estimateBlockSize_sequences(const BYTE* ofCodeTable,
const BYTE* llCodeTable,
const BYTE* mlCodeTable,
size_t nbSeq,
const ZSTD_fseCTables_t* fseTables,
const ZSTD_fseCTablesMetadata_t* fseMetadata,
void* workspace, size_t wkspSize,
int writeEntropy)
{
size_t sequencesSectionHeaderSize = 1 /* seqHead */ + 1 /* min seqSize size */ + (nbSeq >= 128) + (nbSeq >= LONGNBSEQ);
size_t cSeqSizeEstimate = 0;
cSeqSizeEstimate += ZSTD_estimateBlockSize_symbolType(fseMetadata->ofType, ofCodeTable, nbSeq, MaxOff,
fseTables->offcodeCTable, NULL,
OF_defaultNorm, OF_defaultNormLog, DefaultMaxOff,
workspace, wkspSize);
cSeqSizeEstimate += ZSTD_estimateBlockSize_symbolType(fseMetadata->llType, llCodeTable, nbSeq, MaxLL,
fseTables->litlengthCTable, LL_bits,
LL_defaultNorm, LL_defaultNormLog, MaxLL,
workspace, wkspSize);
cSeqSizeEstimate += ZSTD_estimateBlockSize_symbolType(fseMetadata->mlType, mlCodeTable, nbSeq, MaxML,
fseTables->matchlengthCTable, ML_bits,
ML_defaultNorm, ML_defaultNormLog, MaxML,
workspace, wkspSize);
if (writeEntropy) cSeqSizeEstimate += fseMetadata->fseTablesSize;
return cSeqSizeEstimate + sequencesSectionHeaderSize;
}
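/* Illustrative sizing for ZSTD_estimateBlockSize_sequences() above (hypothetical nbSeq) :
* for nbSeq == 200, sequencesSectionHeaderSize == 1 + 1 + 1 + 0 == 3 bytes
* (the symbol-compression-modes byte plus a 2-byte nbSeq field, since 128 <= nbSeq < LONGNBSEQ);
* the three symbolType estimates are then added, plus fseMetadata->fseTablesSize
* when the FSE table descriptions must be written (writeEntropy != 0). */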
/* Returns the size estimate for a given stream of literals, of, ll, ml */
static size_t
ZSTD_estimateBlockSize(const BYTE* literals, size_t litSize,
const BYTE* ofCodeTable,
const BYTE* llCodeTable,
const BYTE* mlCodeTable,
size_t nbSeq,
const ZSTD_entropyCTables_t* entropy,
const ZSTD_entropyCTablesMetadata_t* entropyMetadata,
void* workspace, size_t wkspSize,
int writeLitEntropy, int writeSeqEntropy)
{
size_t const literalsSize = ZSTD_estimateBlockSize_literal(literals, litSize,
&entropy->huf, &entropyMetadata->hufMetadata,
workspace, wkspSize, writeLitEntropy);
size_t const seqSize = ZSTD_estimateBlockSize_sequences(ofCodeTable, llCodeTable, mlCodeTable,
nbSeq, &entropy->fse, &entropyMetadata->fseMetadata,
workspace, wkspSize, writeSeqEntropy);
return seqSize + literalsSize + ZSTD_blockHeaderSize;
}
/* Builds entropy statistics and uses them for blocksize estimation.
*
* @return: estimated compressed size of the seqStore, or a zstd error.
*/
static size_t
ZSTD_buildEntropyStatisticsAndEstimateSubBlockSize(SeqStore_t* seqStore, ZSTD_CCtx* zc)
{
ZSTD_entropyCTablesMetadata_t* const entropyMetadata = &zc->blockSplitCtx.entropyMetadata;
DEBUGLOG(6, "ZSTD_buildEntropyStatisticsAndEstimateSubBlockSize()");
FORWARD_IF_ERROR(ZSTD_buildBlockEntropyStats(seqStore,
&zc->blockState.prevCBlock->entropy,
&zc->blockState.nextCBlock->entropy,
&zc->appliedParams,
entropyMetadata,
zc->tmpWorkspace, zc->tmpWkspSize), "");
return ZSTD_estimateBlockSize(
seqStore->litStart, (size_t)(seqStore->lit - seqStore->litStart),
seqStore->ofCode, seqStore->llCode, seqStore->mlCode,
(size_t)(seqStore->sequences - seqStore->sequencesStart),
&zc->blockState.nextCBlock->entropy,
entropyMetadata,
zc->tmpWorkspace, zc->tmpWkspSize,
(int)(entropyMetadata->hufMetadata.hType == set_compressed), 1);
}
/* Returns the number of literal bytes represented in a seqStore */
static size_t ZSTD_countSeqStoreLiteralsBytes(const SeqStore_t* const seqStore)
{
size_t literalsBytes = 0;
size_t const nbSeqs = (size_t)(seqStore->sequences - seqStore->sequencesStart);
size_t i;
for (i = 0; i < nbSeqs; ++i) {
SeqDef const seq = seqStore->sequencesStart[i];
literalsBytes += seq.litLength;
if (i == seqStore->longLengthPos && seqStore->longLengthType == ZSTD_llt_literalLength) {
literalsBytes += 0x10000;
} }
return literalsBytes;
}
/* Returns the number of match bytes represented in a seqStore */
static size_t ZSTD_countSeqStoreMatchBytes(const SeqStore_t* const seqStore)
{
size_t matchBytes = 0;
size_t const nbSeqs = (size_t)(seqStore->sequences - seqStore->sequencesStart);
size_t i;
for (i = 0; i < nbSeqs; ++i) {
SeqDef seq = seqStore->sequencesStart[i];
matchBytes += seq.mlBase + MINMATCH;
if (i == seqStore->longLengthPos && seqStore->longLengthType == ZSTD_llt_matchLength) {
matchBytes += 0x10000;
} }
return matchBytes;
}
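/* Note on the 0x10000 term in both counters above : SeqDef stores litLength and mlBase
* as 16-bit fields, so the single sequence whose true length overflows 16 bits is flagged
* via seqStore->longLengthType / longLengthPos, and the missing 0x10000 is added back here. */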
/* Derives the seqStore that is a chunk of the originalSeqStore from [startIdx, endIdx).
* Stores the result in resultSeqStore.
*/
static void ZSTD_deriveSeqStoreChunk(SeqStore_t* resultSeqStore,
const SeqStore_t* originalSeqStore,
size_t startIdx, size_t endIdx)
{
*resultSeqStore = *originalSeqStore;
if (startIdx > 0) {
resultSeqStore->sequences = originalSeqStore->sequencesStart + startIdx;
resultSeqStore->litStart += ZSTD_countSeqStoreLiteralsBytes(resultSeqStore);
}
/* Move longLengthPos into the correct position if necessary */
if (originalSeqStore->longLengthType != ZSTD_llt_none) {
if (originalSeqStore->longLengthPos < startIdx || originalSeqStore->longLengthPos > endIdx) {
resultSeqStore->longLengthType = ZSTD_llt_none;
} else {
resultSeqStore->longLengthPos -= (U32)startIdx;
}
}
resultSeqStore->sequencesStart = originalSeqStore->sequencesStart + startIdx;
resultSeqStore->sequences = originalSeqStore->sequencesStart + endIdx;
if (endIdx == (size_t)(originalSeqStore->sequences - originalSeqStore->sequencesStart)) {
/* This accounts for possible last literals if the derived chunk reaches the end of the block */
assert(resultSeqStore->lit == originalSeqStore->lit);
} else {
size_t const literalsBytes = ZSTD_countSeqStoreLiteralsBytes(resultSeqStore);
resultSeqStore->lit = resultSeqStore->litStart + literalsBytes;
}
resultSeqStore->llCode += startIdx;
resultSeqStore->mlCode += startIdx;
resultSeqStore->ofCode += startIdx;
}
/**
* Returns the raw offset represented by the combination of offBase, ll0, and repcode history.
* offBase must represent a repcode in the numeric representation of ZSTD_storeSeq().
*/
static U32
ZSTD_resolveRepcodeToRawOffset(const U32 rep[ZSTD_REP_NUM], const U32 offBase, const U32 ll0)
{
U32 const adjustedRepCode = OFFBASE_TO_REPCODE(offBase) - 1 + ll0; /* [ 0 - 3 ] */
assert(OFFBASE_IS_REPCODE(offBase));
if (adjustedRepCode == ZSTD_REP_NUM) {
assert(ll0);
/* litlength == 0 and offCode == 2 implies selection of first repcode - 1
* This is only valid if it results in a valid offset value, aka > 0.
* Note : it may happen that `rep[0]==1` in exceptional circumstances.
* In which case this function will return 0, which is an invalid offset.
* It's not an issue though, since this value will be
* compared and discarded within ZSTD_seqStore_resolveOffCodes().
*/
return rep[0] - 1;
}
return rep[adjustedRepCode];
}
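/* Worked example for ZSTD_resolveRepcodeToRawOffset() (illustrative repcode history) :
* with rep == { 100, 50, 24 } :
* offBase == 1 (repcode 1), ll0 == 0 -> adjustedRepCode == 0 -> returns rep[0] == 100
* offBase == 1 (repcode 1), ll0 == 1 -> adjustedRepCode == 1 -> returns rep[1] == 50
* offBase == 3 (repcode 3), ll0 == 1 -> adjustedRepCode == 3 == ZSTD_REP_NUM -> returns rep[0] - 1 == 99 */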
/**
* ZSTD_seqStore_resolveOffCodes() reconciles any possible divergences in offset history that may arise
* due to emission of RLE/raw blocks that disturb the offset history,
* and replaces any repcodes within the seqStore that may be invalid.
*
* dRepcodes are updated as would be on the decompression side.
* cRepcodes are updated exactly in accordance with the seqStore.
*
* Note : this function assumes seq->offBase respects the following numbering scheme :
* 0 : invalid
* 1-3 : repcode 1-3
* 4+ : real_offset+3
*/
static void
ZSTD_seqStore_resolveOffCodes(Repcodes_t* const dRepcodes, Repcodes_t* const cRepcodes,
const SeqStore_t* const seqStore, U32 const nbSeq)
{
U32 idx = 0;
U32 const longLitLenIdx = seqStore->longLengthType == ZSTD_llt_literalLength ? seqStore->longLengthPos : nbSeq;
for (; idx < nbSeq; ++idx) {
SeqDef* const seq = seqStore->sequencesStart + idx;
U32 const ll0 = (seq->litLength == 0) && (idx != longLitLenIdx);
U32 const offBase = seq->offBase;
assert(offBase > 0);
if (OFFBASE_IS_REPCODE(offBase)) {
U32 const dRawOffset = ZSTD_resolveRepcodeToRawOffset(dRepcodes->rep, offBase, ll0);
U32 const cRawOffset = ZSTD_resolveRepcodeToRawOffset(cRepcodes->rep, offBase, ll0);
/* Adjust simulated decompression repcode history if we come across a mismatch. Replace
* the repcode with the offset it actually references, determined by the compression
* repcode history.
*/
if (dRawOffset != cRawOffset) {
seq->offBase = OFFSET_TO_OFFBASE(cRawOffset);
}
}
/* Compression repcode history is always updated with values directly from the unmodified seqStore.
* Decompression repcode history may use modified seq->offset value taken from compression repcode history.
*/
ZSTD_updateRep(dRepcodes->rep, seq->offBase, ll0);
ZSTD_updateRep(cRepcodes->rep, offBase, ll0);
}
}
/* ZSTD_compressSeqStore_singleBlock():
* Compresses a seqStore into a block with a block header, into the buffer dst.
*
* Returns the total size of that block (including header) or a ZSTD error code.
*/
static size_t
ZSTD_compressSeqStore_singleBlock(ZSTD_CCtx* zc,
const SeqStore_t* const seqStore,
Repcodes_t* const dRep, Repcodes_t* const cRep,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
U32 lastBlock, U32 isPartition)
{
const U32 rleMaxLength = 25;
BYTE* op = (BYTE*)dst;
const BYTE* ip = (const BYTE*)src;
size_t cSize;
size_t cSeqsSize;
/* In case of an RLE or raw block, the simulated decompression repcode history must be reset */
Repcodes_t const dRepOriginal = *dRep;
DEBUGLOG(5, "ZSTD_compressSeqStore_singleBlock");
if (isPartition)
ZSTD_seqStore_resolveOffCodes(dRep, cRep, seqStore, (U32)(seqStore->sequences - seqStore->sequencesStart));
RETURN_ERROR_IF(dstCapacity < ZSTD_blockHeaderSize, dstSize_tooSmall, "Block header doesn't fit");
cSeqsSize = ZSTD_entropyCompressSeqStore(seqStore,
&zc->blockState.prevCBlock->entropy, &zc->blockState.nextCBlock->entropy,
&zc->appliedParams,
op + ZSTD_blockHeaderSize, dstCapacity - ZSTD_blockHeaderSize,
srcSize,
zc->tmpWorkspace, zc->tmpWkspSize /* statically allocated in resetCCtx */,
zc->bmi2);
FORWARD_IF_ERROR(cSeqsSize, "ZSTD_entropyCompressSeqStore failed!");
if (!zc->isFirstBlock &&
cSeqsSize < rleMaxLength &&
ZSTD_isRLE((BYTE const*)src, srcSize)) {
/* We don't want to emit our first block as an RLE block even if it qualifies, because
* doing so would cause the decoder (cli only) to throw a "should consume all input" error.
* This is only an issue for zstd <= v1.4.3.
*/
cSeqsSize = 1;
}
/* Sequence collection is not supported when block splitting is active */
if (zc->seqCollector.collectSequences) {
FORWARD_IF_ERROR(ZSTD_copyBlockSequences(&zc->seqCollector, seqStore, dRepOriginal.rep), "copyBlockSequences failed");
ZSTD_blockState_confirmRepcodesAndEntropyTables(&zc->blockState);
return 0;
}
if (cSeqsSize == 0) {
cSize = ZSTD_noCompressBlock(op, dstCapacity, ip, srcSize, lastBlock);
FORWARD_IF_ERROR(cSize, "Nocompress block failed");
DEBUGLOG(5, "Writing out nocompress block, size: %zu", cSize);
*dRep = dRepOriginal; /* reset simulated decompression repcode history */
} else if (cSeqsSize == 1) {
cSize = ZSTD_rleCompressBlock(op, dstCapacity, *ip, srcSize, lastBlock);
FORWARD_IF_ERROR(cSize, "RLE compress block failed");
DEBUGLOG(5, "Writing out RLE block, size: %zu", cSize);
*dRep = dRepOriginal; /* reset simulated decompression repcode history */
} else {
ZSTD_blockState_confirmRepcodesAndEntropyTables(&zc->blockState);
writeBlockHeader(op, cSeqsSize, srcSize, lastBlock);
cSize = ZSTD_blockHeaderSize + cSeqsSize;
DEBUGLOG(5, "Writing out compressed block, size: %zu", cSize);
}
if (zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode == FSE_repeat_valid)
zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode = FSE_repeat_check;
return cSize;
}
/* Struct to keep track of where we are in our recursive calls. */
typedef struct {
U32* splitLocations; /* Array of split indices */
size_t idx; /* The current index within splitLocations being worked on */
} seqStoreSplits;
#define MIN_SEQUENCES_BLOCK_SPLITTING 300
/* Helper function to perform the recursive search for block splits.
* Estimates the cost of seqStore prior to split, and estimates the cost of splitting the sequences in half.
* If advantageous to split, then we recurse down the two sub-blocks.
* If not, or if an error occurred in estimation, then we do not recurse.
*
* Note: The recursion depth is capped by a heuristic minimum number of sequences,
* defined by MIN_SEQUENCES_BLOCK_SPLITTING.
* In theory, this means the absolute largest recursion depth is 10 == log2(maxNbSeqInBlock/MIN_SEQUENCES_BLOCK_SPLITTING).
* In practice, recursion depth usually doesn't go beyond 4.
*
* Furthermore, the number of splits is capped by ZSTD_MAX_NB_BLOCK_SPLITS.
* At ZSTD_MAX_NB_BLOCK_SPLITS == 196 with the current existing blockSize
* maximum of 128 KB, this value is actually impossible to reach.
*/
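/* Illustrative decision (hypothetical estimates) : if the whole chunk is estimated at
* 40000 bytes while each half is estimated at 19500 bytes, then 19500 + 19500 < 40000,
* so midIdx is recorded as a split and both halves are searched recursively;
* otherwise the chunk is kept whole and the recursion stops at this level. */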
static void
ZSTD_deriveBlockSplitsHelper(seqStoreSplits* splits, size_t startIdx, size_t endIdx,
ZSTD_CCtx* zc, const SeqStore_t* origSeqStore)
{
SeqStore_t* const fullSeqStoreChunk = &zc->blockSplitCtx.fullSeqStoreChunk;
SeqStore_t* const firstHalfSeqStore = &zc->blockSplitCtx.firstHalfSeqStore;
SeqStore_t* const secondHalfSeqStore = &zc->blockSplitCtx.secondHalfSeqStore;
size_t estimatedOriginalSize;
size_t estimatedFirstHalfSize;
size_t estimatedSecondHalfSize;
size_t midIdx = (startIdx + endIdx)/2;
DEBUGLOG(5, "ZSTD_deriveBlockSplitsHelper: startIdx=%zu endIdx=%zu", startIdx, endIdx);
assert(endIdx >= startIdx);
if (endIdx - startIdx < MIN_SEQUENCES_BLOCK_SPLITTING || splits->idx >= ZSTD_MAX_NB_BLOCK_SPLITS) {
DEBUGLOG(6, "ZSTD_deriveBlockSplitsHelper: Too few sequences (%zu)", endIdx - startIdx);
return;
}
ZSTD_deriveSeqStoreChunk(fullSeqStoreChunk, origSeqStore, startIdx, endIdx);
ZSTD_deriveSeqStoreChunk(firstHalfSeqStore, origSeqStore, startIdx, midIdx);
ZSTD_deriveSeqStoreChunk(secondHalfSeqStore, origSeqStore, midIdx, endIdx);
estimatedOriginalSize = ZSTD_buildEntropyStatisticsAndEstimateSubBlockSize(fullSeqStoreChunk, zc);
estimatedFirstHalfSize = ZSTD_buildEntropyStatisticsAndEstimateSubBlockSize(firstHalfSeqStore, zc);
estimatedSecondHalfSize = ZSTD_buildEntropyStatisticsAndEstimateSubBlockSize(secondHalfSeqStore, zc);
DEBUGLOG(5, "Estimated original block size: %zu -- First half split: %zu -- Second half split: %zu",
estimatedOriginalSize, estimatedFirstHalfSize, estimatedSecondHalfSize);
if (ZSTD_isError(estimatedOriginalSize) || ZSTD_isError(estimatedFirstHalfSize) || ZSTD_isError(estimatedSecondHalfSize)) {
return;
}
if (estimatedFirstHalfSize + estimatedSecondHalfSize < estimatedOriginalSize) {
DEBUGLOG(5, "split decided at seqNb:%zu", midIdx);
ZSTD_deriveBlockSplitsHelper(splits, startIdx, midIdx, zc, origSeqStore);
splits->splitLocations[splits->idx] = (U32)midIdx;
splits->idx++;
ZSTD_deriveBlockSplitsHelper(splits, midIdx, endIdx, zc, origSeqStore);
}
}
/* Base recursive function.
* Populates a table with intra-block partition indices that can improve compression ratio.
*
* @return: number of splits made (which equals the size of the partition table - 1).
*/
static size_t ZSTD_deriveBlockSplits(ZSTD_CCtx* zc, U32 partitions[], U32 nbSeq)
{
seqStoreSplits splits;
splits.splitLocations = partitions;
splits.idx = 0;
if (nbSeq <= 4) {
DEBUGLOG(5, "ZSTD_deriveBlockSplits: Too few sequences to split (%u <= 4)", nbSeq);
/* Refuse to split anything with 4 or fewer sequences */
return 0;
}
ZSTD_deriveBlockSplitsHelper(&splits, 0, nbSeq, zc, &zc->seqStore);
splits.splitLocations[splits.idx] = nbSeq;
DEBUGLOG(5, "ZSTD_deriveBlockSplits: final nb partitions: %zu", splits.idx+1);
return splits.idx;
}
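/* Illustrative result (hypothetical indices) : for nbSeq == 1000 with splits decided at
* sequence indices 250 and 500, partitions == { 250, 500, 1000 } and the function returns 2,
* i.e. the block will be emitted as 3 partitions : [0,250), [250,500), [500,1000). */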
/* ZSTD_compressBlock_splitBlock():
* Attempts to split a given block into multiple blocks to improve compression ratio.
*
* Returns combined size of all blocks (which includes headers), or a ZSTD error code.
*/
static size_t
ZSTD_compressBlock_splitBlock_internal(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
const void* src, size_t blockSize,
U32 lastBlock, U32 nbSeq)
{
size_t cSize = 0;
const BYTE* ip = (const BYTE*)src;
BYTE* op = (BYTE*)dst;
size_t i = 0;
size_t srcBytesTotal = 0;
U32* const partitions = zc->blockSplitCtx.partitions; /* size == ZSTD_MAX_NB_BLOCK_SPLITS */
SeqStore_t* const nextSeqStore = &zc->blockSplitCtx.nextSeqStore;
SeqStore_t* const currSeqStore = &zc->blockSplitCtx.currSeqStore;
size_t const numSplits = ZSTD_deriveBlockSplits(zc, partitions, nbSeq);
/* If a block is split and some partitions are emitted as RLE/uncompressed, then repcode history
* may become invalid. In order to reconcile potentially invalid repcodes, we keep track of two
* separate repcode histories that simulate repcode history on compression and decompression side,
* and use the histories to determine whether we must replace a particular repcode with its raw offset.
*
* 1) cRep gets updated for each partition, regardless of whether the block was emitted as uncompressed
* or RLE. This allows us to retrieve the offset value that an invalid repcode references within
* a nocompress/RLE block.
* 2) dRep gets updated only for compressed partitions, and when a repcode gets replaced, will use
* the replacement offset value rather than the original repcode to update the repcode history.
* dRep also will be the final repcode history sent to the next block.
*
* See ZSTD_seqStore_resolveOffCodes() for more details.
*/
Repcodes_t dRep;
Repcodes_t cRep;
ZSTD_memcpy(dRep.rep, zc->blockState.prevCBlock->rep, sizeof(Repcodes_t));
ZSTD_memcpy(cRep.rep, zc->blockState.prevCBlock->rep, sizeof(Repcodes_t));
ZSTD_memset(nextSeqStore, 0, sizeof(SeqStore_t));
DEBUGLOG(5, "ZSTD_compressBlock_splitBlock_internal (dstCapacity=%u, dictLimit=%u, nextToUpdate=%u)",
(unsigned)dstCapacity, (unsigned)zc->blockState.matchState.window.dictLimit,
(unsigned)zc->blockState.matchState.nextToUpdate);
if (numSplits == 0) {
size_t cSizeSingleBlock =
ZSTD_compressSeqStore_singleBlock(zc, &zc->seqStore,
&dRep, &cRep,
op, dstCapacity,
ip, blockSize,
lastBlock, 0 /* isPartition */);
FORWARD_IF_ERROR(cSizeSingleBlock, "Compressing single block from splitBlock_internal() failed!");
DEBUGLOG(5, "ZSTD_compressBlock_splitBlock_internal: No splits");
assert(zc->blockSizeMax <= ZSTD_BLOCKSIZE_MAX);
assert(cSizeSingleBlock <= zc->blockSizeMax + ZSTD_blockHeaderSize);
return cSizeSingleBlock;
}
ZSTD_deriveSeqStoreChunk(currSeqStore, &zc->seqStore, 0, partitions[0]);
for (i = 0; i <= numSplits; ++i) {
size_t cSizeChunk;
U32 const lastPartition = (i == numSplits);
U32 lastBlockEntireSrc = 0;
size_t srcBytes = ZSTD_countSeqStoreLiteralsBytes(currSeqStore) + ZSTD_countSeqStoreMatchBytes(currSeqStore);
srcBytesTotal += srcBytes;
if (lastPartition) {
/* This is the final partition, need to account for possible last literals */
srcBytes += blockSize - srcBytesTotal;
lastBlockEntireSrc = lastBlock;
} else {
ZSTD_deriveSeqStoreChunk(nextSeqStore, &zc->seqStore, partitions[i], partitions[i+1]);
}
cSizeChunk = ZSTD_compressSeqStore_singleBlock(zc, currSeqStore,
&dRep, &cRep,
op, dstCapacity,
ip, srcBytes,
lastBlockEntireSrc, 1 /* isPartition */);
DEBUGLOG(5, "Estimated size: %zu vs %zu : actual size",
ZSTD_buildEntropyStatisticsAndEstimateSubBlockSize(currSeqStore, zc), cSizeChunk);
FORWARD_IF_ERROR(cSizeChunk, "Compressing chunk failed!");
ip += srcBytes;
op += cSizeChunk;
dstCapacity -= cSizeChunk;
cSize += cSizeChunk;
*currSeqStore = *nextSeqStore;
assert(cSizeChunk <= zc->blockSizeMax + ZSTD_blockHeaderSize);
}
/* cRep and dRep may have diverged during the compression.
* If so, we use the dRep repcodes for the next block.
*/
ZSTD_memcpy(zc->blockState.prevCBlock->rep, dRep.rep, sizeof(Repcodes_t));
return cSize;
}
static size_t
ZSTD_compressBlock_splitBlock(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize, U32 lastBlock)
{
U32 nbSeq;
size_t cSize;
DEBUGLOG(5, "ZSTD_compressBlock_splitBlock");
assert(zc->appliedParams.postBlockSplitter == ZSTD_ps_enable);
{ const size_t bss = ZSTD_buildSeqStore(zc, src, srcSize);
FORWARD_IF_ERROR(bss, "ZSTD_buildSeqStore failed");
if (bss == ZSTDbss_noCompress) {
if (zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode == FSE_repeat_valid)
zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode = FSE_repeat_check;
RETURN_ERROR_IF(zc->seqCollector.collectSequences, sequenceProducer_failed, "Uncompressible block");
cSize = ZSTD_noCompressBlock(dst, dstCapacity, src, srcSize, lastBlock);
FORWARD_IF_ERROR(cSize, "ZSTD_noCompressBlock failed");
DEBUGLOG(5, "ZSTD_compressBlock_splitBlock: Nocompress block");
return cSize;
}
nbSeq = (U32)(zc->seqStore.sequences - zc->seqStore.sequencesStart);
}
cSize = ZSTD_compressBlock_splitBlock_internal(zc, dst, dstCapacity, src, srcSize, lastBlock, nbSeq);
FORWARD_IF_ERROR(cSize, "Splitting blocks failed!");
return cSize;
}
static size_t
ZSTD_compressBlock_internal(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize, U32 frame)
{
/* This is an estimated upper bound for the length of an RLE block;
* it isn't the actual upper bound.
* Finding the real threshold needs further investigation.
*/
const U32 rleMaxLength = 25;
size_t cSize;
const BYTE* ip = (const BYTE*)src;
BYTE* op = (BYTE*)dst;
DEBUGLOG(5, "ZSTD_compressBlock_internal (dstCapacity=%u, dictLimit=%u, nextToUpdate=%u)",
(unsigned)dstCapacity, (unsigned)zc->blockState.matchState.window.dictLimit,
(unsigned)zc->blockState.matchState.nextToUpdate);
{ const size_t bss = ZSTD_buildSeqStore(zc, src, srcSize);
FORWARD_IF_ERROR(bss, "ZSTD_buildSeqStore failed");
if (bss == ZSTDbss_noCompress) {
RETURN_ERROR_IF(zc->seqCollector.collectSequences, sequenceProducer_failed, "Uncompressible block");
cSize = 0;
goto out;
}
}
if (zc->seqCollector.collectSequences) {
FORWARD_IF_ERROR(ZSTD_copyBlockSequences(&zc->seqCollector, ZSTD_getSeqStore(zc), zc->blockState.prevCBlock->rep), "copyBlockSequences failed");
ZSTD_blockState_confirmRepcodesAndEntropyTables(&zc->blockState);
return 0;
}
/* encode sequences and literals */
cSize = ZSTD_entropyCompressSeqStore(&zc->seqStore,
&zc->blockState.prevCBlock->entropy, &zc->blockState.nextCBlock->entropy,
&zc->appliedParams,
dst, dstCapacity,
srcSize,
zc->tmpWorkspace, zc->tmpWkspSize /* statically allocated in resetCCtx */,
zc->bmi2);
if (frame &&
/* We don't want to emit our first block as an RLE block even if it qualifies, because
* doing so would cause the decoder (cli only) to throw a "should consume all input" error.
* This is only an issue for zstd <= v1.4.3.
*/
!zc->isFirstBlock &&
cSize < rleMaxLength &&
ZSTD_isRLE(ip, srcSize))
{
cSize = 1;
op[0] = ip[0];
}
out:
if (!ZSTD_isError(cSize) && cSize > 1) {
ZSTD_blockState_confirmRepcodesAndEntropyTables(&zc->blockState);
}
/* We check that dictionaries have offset codes available for the first
* block. After the first block, the offcode table might not have large
* enough codes to represent the offsets in the data.
*/
if (zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode == FSE_repeat_valid)
zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode = FSE_repeat_check;
return cSize;
}
static size_t ZSTD_compressBlock_targetCBlockSize_body(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const size_t bss, U32 lastBlock)
{
DEBUGLOG(6, "Attempting ZSTD_compressSuperBlock()");
if (bss == ZSTDbss_compress) {
if (/* We don't want to emit our first block as an RLE block even if it qualifies, because
* doing so would cause the decoder (cli only) to throw a "should consume all input" error.
* This is only an issue for zstd <= v1.4.3.
*/
!zc->isFirstBlock &&
ZSTD_maybeRLE(&zc->seqStore) &&
ZSTD_isRLE((BYTE const*)src, srcSize))
{
return ZSTD_rleCompressBlock(dst, dstCapacity, *(BYTE const*)src, srcSize, lastBlock);
}
/* Attempt superblock compression.
*
* Note that compressed size of ZSTD_compressSuperBlock() is not bound by the
* standard ZSTD_compressBound(). This is a problem, because even if we have
* space now, taking an extra byte now could cause us to run out of space later
* and violate ZSTD_compressBound().
*
* Define blockBound(blockSize) = blockSize + ZSTD_blockHeaderSize.
*
* In order to respect ZSTD_compressBound() we must attempt to emit a raw
* uncompressed block in these cases:
* * cSize == 0: Return code for an uncompressed block.
* * cSize == dstSize_tooSmall: We may have expanded beyond blockBound(srcSize).
* ZSTD_noCompressBlock() will return dstSize_tooSmall if we are really out of
* output space.
* * cSize >= blockBound(srcSize): We have expanded the block too much so
* emit an uncompressed block.
*/
{ size_t const cSize =
ZSTD_compressSuperBlock(zc, dst, dstCapacity, src, srcSize, lastBlock);
if (cSize != ERROR(dstSize_tooSmall)) {
size_t const maxCSize =
srcSize - ZSTD_minGain(srcSize, zc->appliedParams.cParams.strategy);
FORWARD_IF_ERROR(cSize, "ZSTD_compressSuperBlock failed");
if (cSize != 0 && cSize < maxCSize + ZSTD_blockHeaderSize) {
ZSTD_blockState_confirmRepcodesAndEntropyTables(&zc->blockState);
return cSize;
}
}
}
} /* if (bss == ZSTDbss_compress)*/
DEBUGLOG(6, "Resorting to ZSTD_noCompressBlock()");
/* Superblock compression failed, attempt to emit a single no compress block.
* The decoder will be able to stream this block since it is uncompressed.
*/
return ZSTD_noCompressBlock(dst, dstCapacity, src, srcSize, lastBlock);
}
static size_t ZSTD_compressBlock_targetCBlockSize(ZSTD_CCtx* zc,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
U32 lastBlock)
{
size_t cSize = 0;
const size_t bss = ZSTD_buildSeqStore(zc, src, srcSize);
DEBUGLOG(5, "ZSTD_compressBlock_targetCBlockSize (dstCapacity=%u, dictLimit=%u, nextToUpdate=%u, srcSize=%zu)",
(unsigned)dstCapacity, (unsigned)zc->blockState.matchState.window.dictLimit, (unsigned)zc->blockState.matchState.nextToUpdate, srcSize);
FORWARD_IF_ERROR(bss, "ZSTD_buildSeqStore failed");
cSize = ZSTD_compressBlock_targetCBlockSize_body(zc, dst, dstCapacity, src, srcSize, bss, lastBlock);
FORWARD_IF_ERROR(cSize, "ZSTD_compressBlock_targetCBlockSize_body failed");
if (zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode == FSE_repeat_valid)
zc->blockState.prevCBlock->entropy.fse.offcode_repeatMode = FSE_repeat_check;
return cSize;
}
static void ZSTD_overflowCorrectIfNeeded(ZSTD_MatchState_t* ms,
ZSTD_cwksp* ws,
ZSTD_CCtx_params const* params,
void const* ip,
void const* iend)
{
U32 const cycleLog = ZSTD_cycleLog(params->cParams.chainLog, params->cParams.strategy);
U32 const maxDist = (U32)1 << params->cParams.windowLog;
if (ZSTD_window_needOverflowCorrection(ms->window, cycleLog, maxDist, ms->loadedDictEnd, ip, iend)) {
U32 const correction = ZSTD_window_correctOverflow(&ms->window, cycleLog, maxDist, ip);
ZSTD_STATIC_ASSERT(ZSTD_CHAINLOG_MAX <= 30);
ZSTD_STATIC_ASSERT(ZSTD_WINDOWLOG_MAX_32 <= 30);
ZSTD_STATIC_ASSERT(ZSTD_WINDOWLOG_MAX <= 31);
ZSTD_cwksp_mark_tables_dirty(ws);
ZSTD_reduceIndex(ms, params, correction);
ZSTD_cwksp_mark_tables_clean(ws);
if (ms->nextToUpdate < correction) ms->nextToUpdate = 0;
else ms->nextToUpdate -= correction;
/* invalidate dictionaries on overflow correction */
ms->loadedDictEnd = 0;
ms->dictMatchState = NULL;
}
}
/**** skipping file: zstd_preSplit.h ****/
static size_t ZSTD_optimalBlockSize(ZSTD_CCtx* cctx, const void* src, size_t srcSize, size_t blockSizeMax, int splitLevel, ZSTD_strategy strat, S64 savings)
{
/* split level based on compression strategy, from `fast` to `btultra2` */
static const int splitLevels[] = { 0, 0, 1, 2, 2, 3, 3, 4, 4, 4 };
/* note: conservatively only split full blocks (128 KB) currently.
* While it's possible to go lower, let's keep it simple for a first implementation.
* Besides, benefits of splitting are reduced when blocks are already small.
*/
if (srcSize < 128 KB || blockSizeMax < 128 KB)
return MIN(srcSize, blockSizeMax);
/* do not split incompressible data though:
* require verified savings to allow pre-splitting.
* Note: as a consequence, the first full block is not split.
*/
if (savings < 3) {
DEBUGLOG(6, "don't attempt splitting: savings (%i) too low", (int)savings);
return 128 KB;
}
/* apply @splitLevel, or use default value (which depends on @strat).
* note that splitting heuristic is still conditioned by @savings >= 3,
* so the first block will not reach this code path */
if (splitLevel == 1) return 128 KB;
if (splitLevel == 0) {
assert(ZSTD_fast <= strat && strat <= ZSTD_btultra2);
splitLevel = splitLevels[strat];
} else {
assert(2 <= splitLevel && splitLevel <= 6);
splitLevel -= 2;
}
return ZSTD_splitBlock(src, blockSizeMax, splitLevel, cctx->tmpWorkspace, cctx->tmpWkspSize);
}
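/* For reference (based on the public ZSTD_strategy enum values 1..9) : the splitLevels[]
* defaults used above map fast -> 0, dfast -> 1, greedy/lazy -> 2, lazy2/btlazy2 -> 3,
* and btopt/btultra/btultra2 -> 4. An explicit @splitLevel of 1 keeps full 128 KB blocks
* (no pre-splitting), while values 2-6 select internal split levels 0-4 directly. */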
/*! ZSTD_compress_frameChunk() :
* Compress a chunk of data into one or multiple blocks.
* All blocks will be terminated, all input will be consumed.
* Function will issue an error if there is not enough `dstCapacity` to hold the compressed content.
* Frame is supposed already started (header already produced)
* @return : compressed size, or an error code
*/
static size_t ZSTD_compress_frameChunk(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
U32 lastFrameChunk)
{
size_t blockSizeMax = cctx->blockSizeMax;
size_t remaining = srcSize;
const BYTE* ip = (const BYTE*)src;
BYTE* const ostart = (BYTE*)dst;
BYTE* op = ostart;
U32 const maxDist = (U32)1 << cctx->appliedParams.cParams.windowLog;
S64 savings = (S64)cctx->consumedSrcSize - (S64)cctx->producedCSize;
assert(cctx->appliedParams.cParams.windowLog <= ZSTD_WINDOWLOG_MAX);
DEBUGLOG(5, "ZSTD_compress_frameChunk (srcSize=%u, blockSizeMax=%u)", (unsigned)srcSize, (unsigned)blockSizeMax);
if (cctx->appliedParams.fParams.checksumFlag && srcSize)
XXH64_update(&cctx->xxhState, src, srcSize);
while (remaining) {
ZSTD_MatchState_t* const ms = &cctx->blockState.matchState;
size_t const blockSize = ZSTD_optimalBlockSize(cctx,
ip, remaining,
blockSizeMax,
cctx->appliedParams.preBlockSplitter_level,
cctx->appliedParams.cParams.strategy,
savings);
U32 const lastBlock = lastFrameChunk & (blockSize == remaining);
assert(blockSize <= remaining);
/* TODO: See 3090. We reduced MIN_CBLOCK_SIZE from 3 to 2, so to compensate we add
* an additional 1. We need to revisit this logic and make it more consistent. */
RETURN_ERROR_IF(dstCapacity < ZSTD_blockHeaderSize + MIN_CBLOCK_SIZE + 1,
dstSize_tooSmall,
"not enough space to store compressed block");
ZSTD_overflowCorrectIfNeeded(
ms, &cctx->workspace, &cctx->appliedParams, ip, ip + blockSize);
ZSTD_checkDictValidity(&ms->window, ip + blockSize, maxDist, &ms->loadedDictEnd, &ms->dictMatchState);
ZSTD_window_enforceMaxDist(&ms->window, ip, maxDist, &ms->loadedDictEnd, &ms->dictMatchState);
/* Ensure hash/chain table insertion resumes no sooner than lowLimit */
if (ms->nextToUpdate < ms->window.lowLimit) ms->nextToUpdate = ms->window.lowLimit;
{ size_t cSize;
if (ZSTD_useTargetCBlockSize(&cctx->appliedParams)) {
cSize = ZSTD_compressBlock_targetCBlockSize(cctx, op, dstCapacity, ip, blockSize, lastBlock);
FORWARD_IF_ERROR(cSize, "ZSTD_compressBlock_targetCBlockSize failed");
assert(cSize > 0);
assert(cSize <= blockSize + ZSTD_blockHeaderSize);
} else if (ZSTD_blockSplitterEnabled(&cctx->appliedParams)) {
cSize = ZSTD_compressBlock_splitBlock(cctx, op, dstCapacity, ip, blockSize, lastBlock);
FORWARD_IF_ERROR(cSize, "ZSTD_compressBlock_splitBlock failed");
assert(cSize > 0 || cctx->seqCollector.collectSequences == 1);
} else {
cSize = ZSTD_compressBlock_internal(cctx,
op+ZSTD_blockHeaderSize, dstCapacity-ZSTD_blockHeaderSize,
ip, blockSize, 1 /* frame */);
FORWARD_IF_ERROR(cSize, "ZSTD_compressBlock_internal failed");
if (cSize == 0) { /* block is not compressible */
cSize = ZSTD_noCompressBlock(op, dstCapacity, ip, blockSize, lastBlock);
FORWARD_IF_ERROR(cSize, "ZSTD_noCompressBlock failed");
} else {
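/* A zstd block header is 3 bytes, written little-endian :
* bit 0 = last-block flag, bits 1-2 = block type (bt_raw / bt_rle / bt_compressed),
* bits 3-23 = block size. For bt_rle the size field carries the regenerated size
* (blockSize), whereas for bt_compressed it carries the compressed size (cSize). */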
U32 const cBlockHeader = cSize == 1 ?
lastBlock + (((U32)bt_rle)<<1) + (U32)(blockSize << 3) :
lastBlock + (((U32)bt_compressed)<<1) + (U32)(cSize << 3);
MEM_writeLE24(op, cBlockHeader);
cSize += ZSTD_blockHeaderSize;
}
} /* if (ZSTD_useTargetCBlockSize(&cctx->appliedParams))*/
/* @savings is employed to ensure that splitting doesn't worsen expansion of incompressible data.
* Without splitting, the maximum expansion is 3 bytes per full block.
* An adversarial input could attempt to fudge the split detector
* and make it split incompressible data, resulting in more block headers.
* Note that, since ZSTD_COMPRESSBOUND() assumes a worst-case scenario of 1 KB per block,
* and the splitter never creates blocks that small (current lower limit is 8 KB),
* there is no risk of expanding beyond the ZSTD_COMPRESSBOUND() limit.
* But if the goal is to not expand by more than 3 bytes per 128 KB full block,
* then yes, it becomes possible to make the block splitter oversplit incompressible data.
* Using @savings, we enforce an even more conservative condition,
* requiring enough verified savings (at least 3 bytes) to authorize splitting;
* otherwise only full blocks are used.
* Being conservative is fine,
* since splitting barely compressible blocks is not fruitful anyway */
savings += (S64)blockSize - (S64)cSize;
ip += blockSize;
assert(remaining >= blockSize);
remaining -= blockSize;
op += cSize;
assert(dstCapacity >= cSize);
dstCapacity -= cSize;
cctx->isFirstBlock = 0;
DEBUGLOG(5, "ZSTD_compress_frameChunk: adding a block of size %u",
(unsigned)cSize);
} }
if (lastFrameChunk && (op>ostart)) cctx->stage = ZSTDcs_ending;
return (size_t)(op-ostart);
}
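/* Frame_Header_Descriptor byte layout, as assembled by ZSTD_writeFrameHeader() below :
* bits 0-1 = Dictionary_ID field size code, bit 2 = Content_Checksum flag,
* bit 5 = Single_Segment flag (window descriptor omitted, frame content size then mandatory),
* bits 6-7 = Frame_Content_Size field size code; bits 3-4 are reserved/unused and left at 0. */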
static size_t ZSTD_writeFrameHeader(void* dst, size_t dstCapacity,
const ZSTD_CCtx_params* params,
U64 pledgedSrcSize, U32 dictID)
{
BYTE* const op = (BYTE*)dst;
U32 const dictIDSizeCodeLength = (dictID>0) + (dictID>=256) + (dictID>=65536); /* 0-3 */
U32 const dictIDSizeCode = params->fParams.noDictIDFlag ? 0 : dictIDSizeCodeLength; /* 0-3 */
U32 const checksumFlag = params->fParams.checksumFlag>0;
U32 const windowSize = (U32)1 << params->cParams.windowLog;
U32 const singleSegment = params->fParams.contentSizeFlag && (windowSize >= pledgedSrcSize);
BYTE const windowLogByte = (BYTE)((params->cParams.windowLog - ZSTD_WINDOWLOG_ABSOLUTEMIN) << 3);
U32 const fcsCode = params->fParams.contentSizeFlag ?
(pledgedSrcSize>=256) + (pledgedSrcSize>=65536+256) + (pledgedSrcSize>=0xFFFFFFFFU) : 0; /* 0-3 */
BYTE const frameHeaderDescriptionByte = (BYTE)(dictIDSizeCode + (checksumFlag<<2) + (singleSegment<<5) + (fcsCode<<6) );
size_t pos=0;
assert(!(params->fParams.contentSizeFlag && pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN));
RETURN_ERROR_IF(dstCapacity < ZSTD_FRAMEHEADERSIZE_MAX, dstSize_tooSmall,
"dst buf is too small to fit worst-case frame header size.");
DEBUGLOG(4, "ZSTD_writeFrameHeader : dictIDFlag : %u ; dictID : %u ; dictIDSizeCode : %u",
!params->fParams.noDictIDFlag, (unsigned)dictID, (unsigned)dictIDSizeCode);
if (params->format == ZSTD_f_zstd1) {
MEM_writeLE32(dst, ZSTD_MAGICNUMBER);
pos = 4;
}
op[pos++] = frameHeaderDescriptionByte;
if (!singleSegment) op[pos++] = windowLogByte;
switch(dictIDSizeCode)
{
default:
assert(0); /* impossible */
ZSTD_FALLTHROUGH;
case 0 : break;
case 1 : op[pos] = (BYTE)(dictID); pos++; break;
case 2 : MEM_writeLE16(op+pos, (U16)dictID); pos+=2; break;
case 3 : MEM_writeLE32(op+pos, dictID); pos+=4; break;
}
switch(fcsCode)
{
default:
assert(0); /* impossible */
ZSTD_FALLTHROUGH;
case 0 : if (singleSegment) op[pos++] = (BYTE)(pledgedSrcSize); break;
case 1 : MEM_writeLE16(op+pos, (U16)(pledgedSrcSize-256)); pos+=2; break;
case 2 : MEM_writeLE32(op+pos, (U32)(pledgedSrcSize)); pos+=4; break;
case 3 : MEM_writeLE64(op+pos, (U64)(pledgedSrcSize)); pos+=8; break;
}
return pos;
}
/* ZSTD_writeSkippableFrame_advanced() :
* Writes out a skippable frame with the specified magic number variant (16 are supported),
* from ZSTD_MAGIC_SKIPPABLE_START to ZSTD_MAGIC_SKIPPABLE_START+15, and the desired source data.
*
* Returns the total number of bytes written, or a ZSTD error code.
*/
size_t ZSTD_writeSkippableFrame(void* dst, size_t dstCapacity,
const void* src, size_t srcSize, unsigned magicVariant) {
BYTE* op = (BYTE*)dst;
RETURN_ERROR_IF(dstCapacity < srcSize + ZSTD_SKIPPABLEHEADERSIZE /* Skippable frame overhead */,
dstSize_tooSmall, "Not enough room for skippable frame");
RETURN_ERROR_IF(srcSize > (unsigned)0xFFFFFFFF, srcSize_wrong, "Src size too large for skippable frame");
RETURN_ERROR_IF(magicVariant > 15, parameter_outOfBound, "Skippable frame magic number variant not supported");
MEM_writeLE32(op, (U32)(ZSTD_MAGIC_SKIPPABLE_START + magicVariant));
MEM_writeLE32(op+4, (U32)srcSize);
ZSTD_memcpy(op+8, src, srcSize);
return srcSize + ZSTD_SKIPPABLEHEADERSIZE;
}
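/* Usage sketch (illustrative, with a hypothetical buffer and payload) :
*   char buf[64];
*   size_t const r = ZSTD_writeSkippableFrame(buf, sizeof(buf), "meta", 4, 0);
* On success, r == 4 + ZSTD_SKIPPABLEHEADERSIZE, and buf starts with the little-endian
* magic 0x184D2A50 (ZSTD_MAGIC_SKIPPABLE_START), the 4-byte content size, then "meta". */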
/* ZSTD_writeLastEmptyBlock() :
* output an empty Block with end-of-frame mark to complete a frame
* @return : size of data written into `dst` (== ZSTD_blockHeaderSize (defined in zstd_internal.h))
* or an error code if `dstCapacity` is too small (<ZSTD_blockHeaderSize)
*/
size_t ZSTD_writeLastEmptyBlock(void* dst, size_t dstCapacity)
{
RETURN_ERROR_IF(dstCapacity < ZSTD_blockHeaderSize, dstSize_tooSmall,
"dst buf is too small to write frame trailer empty block.");
{ U32 const cBlockHeader24 = 1 /*lastBlock*/ + (((U32)bt_raw)<<1); /* 0 size */
MEM_writeLE24(dst, cBlockHeader24);
return ZSTD_blockHeaderSize;
}
}
void ZSTD_referenceExternalSequences(ZSTD_CCtx* cctx, rawSeq* seq, size_t nbSeq)
{
assert(cctx->stage == ZSTDcs_init);
assert(nbSeq == 0 || cctx->appliedParams.ldmParams.enableLdm != ZSTD_ps_enable);
cctx->externSeqStore.seq = seq;
cctx->externSeqStore.size = nbSeq;
cctx->externSeqStore.capacity = nbSeq;
cctx->externSeqStore.pos = 0;
cctx->externSeqStore.posInSequence = 0;
}
static size_t ZSTD_compressContinue_internal (ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
U32 frame, U32 lastFrameChunk)
{
ZSTD_MatchState_t* const ms = &cctx->blockState.matchState;
size_t fhSize = 0;
DEBUGLOG(5, "ZSTD_compressContinue_internal, stage: %u, srcSize: %u",
cctx->stage, (unsigned)srcSize);
RETURN_ERROR_IF(cctx->stage==ZSTDcs_created, stage_wrong,
"missing init (ZSTD_compressBegin)");
if (frame && (cctx->stage==ZSTDcs_init)) {
fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, &cctx->appliedParams,
cctx->pledgedSrcSizePlusOne-1, cctx->dictID);
FORWARD_IF_ERROR(fhSize, "ZSTD_writeFrameHeader failed");
assert(fhSize <= dstCapacity);
dstCapacity -= fhSize;
dst = (char*)dst + fhSize;
cctx->stage = ZSTDcs_ongoing;
}
if (!srcSize) return fhSize; /* do not generate an empty block if no input */
if (!ZSTD_window_update(&ms->window, src, srcSize, ms->forceNonContiguous)) {
ms->forceNonContiguous = 0;
ms->nextToUpdate = ms->window.dictLimit;
}
if (cctx->appliedParams.ldmParams.enableLdm == ZSTD_ps_enable) {
ZSTD_window_update(&cctx->ldmState.window, src, srcSize, /* forceNonContiguous */ 0);
}
if (!frame) {
/* overflow check and correction for block mode */
ZSTD_overflowCorrectIfNeeded(
ms, &cctx->workspace, &cctx->appliedParams,
src, (BYTE const*)src + srcSize);
}
DEBUGLOG(5, "ZSTD_compressContinue_internal (blockSize=%u)", (unsigned)cctx->blockSizeMax);
{ size_t const cSize = frame ?
ZSTD_compress_frameChunk (cctx, dst, dstCapacity, src, srcSize, lastFrameChunk) :
ZSTD_compressBlock_internal (cctx, dst, dstCapacity, src, srcSize, 0 /* frame */);
FORWARD_IF_ERROR(cSize, "%s", frame ? "ZSTD_compress_frameChunk failed" : "ZSTD_compressBlock_internal failed");
cctx->consumedSrcSize += srcSize;
cctx->producedCSize += (cSize + fhSize);
assert(!(cctx->appliedParams.fParams.contentSizeFlag && cctx->pledgedSrcSizePlusOne == 0));
if (cctx->pledgedSrcSizePlusOne != 0) { /* control src size */
ZSTD_STATIC_ASSERT(ZSTD_CONTENTSIZE_UNKNOWN == (unsigned long long)-1);
RETURN_ERROR_IF(
cctx->consumedSrcSize+1 > cctx->pledgedSrcSizePlusOne,
srcSize_wrong,
"error : pledgedSrcSize = %u, while realSrcSize >= %u",
(unsigned)cctx->pledgedSrcSizePlusOne-1,
(unsigned)cctx->consumedSrcSize);
}
return cSize + fhSize;
}
}
size_t ZSTD_compressContinue_public(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
DEBUGLOG(5, "ZSTD_compressContinue (srcSize=%u)", (unsigned)srcSize);
return ZSTD_compressContinue_internal(cctx, dst, dstCapacity, src, srcSize, 1 /* frame mode */, 0 /* last chunk */);
}
/* NOTE: Must just wrap ZSTD_compressContinue_public() */
size_t ZSTD_compressContinue(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
return ZSTD_compressContinue_public(cctx, dst, dstCapacity, src, srcSize);
}
static size_t ZSTD_getBlockSize_deprecated(const ZSTD_CCtx* cctx)
{
ZSTD_compressionParameters const cParams = cctx->appliedParams.cParams;
assert(!ZSTD_checkCParams(cParams));
return MIN(cctx->appliedParams.maxBlockSize, (size_t)1 << cParams.windowLog);
}
/* NOTE: Must just wrap ZSTD_getBlockSize_deprecated() */
size_t ZSTD_getBlockSize(const ZSTD_CCtx* cctx)
{
return ZSTD_getBlockSize_deprecated(cctx);
}
/* NOTE: Must just wrap ZSTD_compressBlock_deprecated() */
size_t ZSTD_compressBlock_deprecated(ZSTD_CCtx* cctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
DEBUGLOG(5, "ZSTD_compressBlock: srcSize = %u", (unsigned)srcSize);
{ size_t const blockSizeMax = ZSTD_getBlockSize_deprecated(cctx);
RETURN_ERROR_IF(srcSize > blockSizeMax, srcSize_wrong, "input is larger than a block"); }
return ZSTD_compressContinue_internal(cctx, dst, dstCapacity, src, srcSize, 0 /* frame mode */, 0 /* last chunk */);
}
/* NOTE: Must just wrap ZSTD_compressBlock_deprecated() */
size_t ZSTD_compressBlock(ZSTD_CCtx* cctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
return ZSTD_compressBlock_deprecated(cctx, dst, dstCapacity, src, srcSize);
}
/*! ZSTD_loadDictionaryContent() :
* @return : 0, or an error code
*/
static size_t
ZSTD_loadDictionaryContent(ZSTD_MatchState_t* ms,
ldmState_t* ls,
ZSTD_cwksp* ws,
ZSTD_CCtx_params const* params,
const void* src, size_t srcSize,
ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp)
{
const BYTE* ip = (const BYTE*) src;
const BYTE* const iend = ip + srcSize;
int const loadLdmDict = params->ldmParams.enableLdm == ZSTD_ps_enable && ls != NULL;
/* Assert that the ms params match the params we're being given */
ZSTD_assertEqualCParams(params->cParams, ms->cParams);
{ /* Ensure large dictionaries can't cause index overflow */
/* Allow the dictionary to set indices up to exactly ZSTD_CURRENT_MAX.
* Dictionaries right at the edge will immediately trigger overflow
* correction, but I don't want to insert extra constraints here.
*/
U32 maxDictSize = ZSTD_CURRENT_MAX - ZSTD_WINDOW_START_INDEX;
int const CDictTaggedIndices = ZSTD_CDictIndicesAreTagged(¶ms->cParams);
if (CDictTaggedIndices && tfp == ZSTD_tfp_forCDict) {
/* Some dictionary matchfinders in zstd use "short cache",
* which treats the lower ZSTD_SHORT_CACHE_TAG_BITS of each
* CDict hashtable entry as a tag rather than as part of an index.
* When short cache is used, we need to truncate the dictionary
* so that its indices don't overlap with the tag. */
U32 const shortCacheMaxDictSize = (1u << (32 - ZSTD_SHORT_CACHE_TAG_BITS)) - ZSTD_WINDOW_START_INDEX;
maxDictSize = MIN(maxDictSize, shortCacheMaxDictSize);
assert(!loadLdmDict);
}
/* If the dictionary is too large, only load the suffix of the dictionary. */
if (srcSize > maxDictSize) {
ip = iend - maxDictSize;
src = ip;
srcSize = maxDictSize;
}
}
if (srcSize > ZSTD_CHUNKSIZE_MAX) {
/* We must have cleared our windows when our source is this large. */
assert(ZSTD_window_isEmpty(ms->window));
if (loadLdmDict) assert(ZSTD_window_isEmpty(ls->window));
}
ZSTD_window_update(&ms->window, src, srcSize, /* forceNonContiguous */ 0);
DEBUGLOG(4, "ZSTD_loadDictionaryContent: useRowMatchFinder=%d", (int)params->useRowMatchFinder);
if (loadLdmDict) { /* Load the entire dict into LDM matchfinders. */
DEBUGLOG(4, "ZSTD_loadDictionaryContent: Trigger loadLdmDict");
ZSTD_window_update(&ls->window, src, srcSize, /* forceNonContiguous */ 0);
ls->loadedDictEnd = params->forceWindow ? 0 : (U32)(iend - ls->window.base);
ZSTD_ldm_fillHashTable(ls, ip, iend, ¶ms->ldmParams);
DEBUGLOG(4, "ZSTD_loadDictionaryContent: ZSTD_ldm_fillHashTable completes");
}
/* If the dict is larger than we can reasonably index in our tables, only load the suffix. */
{ U32 maxDictSize = 1U << MIN(MAX(params->cParams.hashLog + 3, params->cParams.chainLog + 1), 31);
if (srcSize > maxDictSize) {
ip = iend - maxDictSize;
src = ip;
srcSize = maxDictSize;
}
}
ms->nextToUpdate = (U32)(ip - ms->window.base);
ms->loadedDictEnd = params->forceWindow ? 0 : (U32)(iend - ms->window.base);
ms->forceNonContiguous = params->deterministicRefPrefix;
if (srcSize <= HASH_READ_SIZE) return 0;
ZSTD_overflowCorrectIfNeeded(ms, ws, params, ip, iend);
switch(params->cParams.strategy)
{
case ZSTD_fast:
ZSTD_fillHashTable(ms, iend, dtlm, tfp);
break;
case ZSTD_dfast:
#ifndef ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR
ZSTD_fillDoubleHashTable(ms, iend, dtlm, tfp);
#else
assert(0); /* shouldn't be called: cparams should've been adjusted. */
#endif
break;
case ZSTD_greedy:
case ZSTD_lazy:
case ZSTD_lazy2:
#if !defined(ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR)
assert(srcSize >= HASH_READ_SIZE);
if (ms->dedicatedDictSearch) {
assert(ms->chainTable != NULL);
ZSTD_dedicatedDictSearch_lazy_loadDictionary(ms, iend-HASH_READ_SIZE);
} else {
assert(params->useRowMatchFinder != ZSTD_ps_auto);
if (params->useRowMatchFinder == ZSTD_ps_enable) {
size_t const tagTableSize = ((size_t)1 << params->cParams.hashLog);
ZSTD_memset(ms->tagTable, 0, tagTableSize);
ZSTD_row_update(ms, iend-HASH_READ_SIZE);
DEBUGLOG(4, "Using row-based hash table for lazy dict");
} else {
ZSTD_insertAndFindFirstIndex(ms, iend-HASH_READ_SIZE);
DEBUGLOG(4, "Using chain-based hash table for lazy dict");
}
}
#else
assert(0); /* shouldn't be called: cparams should've been adjusted. */
#endif
break;
case ZSTD_btlazy2: /* we want the dictionary table fully sorted */
case ZSTD_btopt:
case ZSTD_btultra:
case ZSTD_btultra2:
#if !defined(ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR)
assert(srcSize >= HASH_READ_SIZE);
DEBUGLOG(4, "Fill %u bytes into the Binary Tree", (unsigned)srcSize);
ZSTD_updateTree(ms, iend-HASH_READ_SIZE, iend);
#else
assert(0); /* shouldn't be called: cparams should've been adjusted. */
#endif
break;
default:
assert(0); /* not possible : not a valid strategy id */
}
ms->nextToUpdate = (U32)(iend - ms->window.base);
return 0;
}
/* Dictionaries that assign zero probability to symbols that do show up cause problems
* during FSE encoding. Mark dictionaries containing zero-probability symbols as FSE_repeat_check;
* only dictionaries where every symbol has a non-zero probability can be assumed valid.
*/
static FSE_repeat ZSTD_dictNCountRepeat(short* normalizedCounter, unsigned dictMaxSymbolValue, unsigned maxSymbolValue)
{
U32 s;
if (dictMaxSymbolValue < maxSymbolValue) {
return FSE_repeat_check;
}
for (s = 0; s <= maxSymbolValue; ++s) {
if (normalizedCounter[s] == 0) {
return FSE_repeat_check;
}
}
return FSE_repeat_valid;
}
size_t ZSTD_loadCEntropy(ZSTD_compressedBlockState_t* bs, void* workspace,
const void* const dict, size_t dictSize)
{
short offcodeNCount[MaxOff+1];
unsigned offcodeMaxValue = MaxOff;
const BYTE* dictPtr = (const BYTE*)dict; /* skip magic num and dict ID */
const BYTE* const dictEnd = dictPtr + dictSize;
dictPtr += 8;
bs->entropy.huf.repeatMode = HUF_repeat_check;
{ unsigned maxSymbolValue = 255;
unsigned hasZeroWeights = 1;
size_t const hufHeaderSize = HUF_readCTable((HUF_CElt*)bs->entropy.huf.CTable, &maxSymbolValue, dictPtr,
(size_t)(dictEnd-dictPtr), &hasZeroWeights);
/* We only set the loaded table as valid if it contains all non-zero
* weights. Otherwise, we set it to check */
if (!hasZeroWeights && maxSymbolValue == 255)
bs->entropy.huf.repeatMode = HUF_repeat_valid;
RETURN_ERROR_IF(HUF_isError(hufHeaderSize), dictionary_corrupted, "");
dictPtr += hufHeaderSize;
}
{ unsigned offcodeLog;
size_t const offcodeHeaderSize = FSE_readNCount(offcodeNCount, &offcodeMaxValue, &offcodeLog, dictPtr, (size_t)(dictEnd-dictPtr));
RETURN_ERROR_IF(FSE_isError(offcodeHeaderSize), dictionary_corrupted, "");
RETURN_ERROR_IF(offcodeLog > OffFSELog, dictionary_corrupted, "");
/* fill all offset symbols to avoid garbage at end of table */
RETURN_ERROR_IF(FSE_isError(FSE_buildCTable_wksp(
bs->entropy.fse.offcodeCTable,
offcodeNCount, MaxOff, offcodeLog,
workspace, HUF_WORKSPACE_SIZE)),
dictionary_corrupted, "");
/* Defer checking offcodeMaxValue because we need to know the size of the dictionary content */
dictPtr += offcodeHeaderSize;
}
{ short matchlengthNCount[MaxML+1];
unsigned matchlengthMaxValue = MaxML, matchlengthLog;
size_t const matchlengthHeaderSize = FSE_readNCount(matchlengthNCount, &matchlengthMaxValue, &matchlengthLog, dictPtr, (size_t)(dictEnd-dictPtr));
RETURN_ERROR_IF(FSE_isError(matchlengthHeaderSize), dictionary_corrupted, "");
RETURN_ERROR_IF(matchlengthLog > MLFSELog, dictionary_corrupted, "");
RETURN_ERROR_IF(FSE_isError(FSE_buildCTable_wksp(
bs->entropy.fse.matchlengthCTable,
matchlengthNCount, matchlengthMaxValue, matchlengthLog,
workspace, HUF_WORKSPACE_SIZE)),
dictionary_corrupted, "");
bs->entropy.fse.matchlength_repeatMode = ZSTD_dictNCountRepeat(matchlengthNCount, matchlengthMaxValue, MaxML);
dictPtr += matchlengthHeaderSize;
}
{ short litlengthNCount[MaxLL+1];
unsigned litlengthMaxValue = MaxLL, litlengthLog;
size_t const litlengthHeaderSize = FSE_readNCount(litlengthNCount, &litlengthMaxValue, &litlengthLog, dictPtr, (size_t)(dictEnd-dictPtr));
RETURN_ERROR_IF(FSE_isError(litlengthHeaderSize), dictionary_corrupted, "");
RETURN_ERROR_IF(litlengthLog > LLFSELog, dictionary_corrupted, "");
RETURN_ERROR_IF(FSE_isError(FSE_buildCTable_wksp(
bs->entropy.fse.litlengthCTable,
litlengthNCount, litlengthMaxValue, litlengthLog,
workspace, HUF_WORKSPACE_SIZE)),
dictionary_corrupted, "");
bs->entropy.fse.litlength_repeatMode = ZSTD_dictNCountRepeat(litlengthNCount, litlengthMaxValue, MaxLL);
dictPtr += litlengthHeaderSize;
}
RETURN_ERROR_IF(dictPtr+12 > dictEnd, dictionary_corrupted, "");
bs->rep[0] = MEM_readLE32(dictPtr+0);
bs->rep[1] = MEM_readLE32(dictPtr+4);
bs->rep[2] = MEM_readLE32(dictPtr+8);
dictPtr += 12;
{ size_t const dictContentSize = (size_t)(dictEnd - dictPtr);
U32 offcodeMax = MaxOff;
if (dictContentSize <= ((U32)-1) - 128 KB) {
U32 const maxOffset = (U32)dictContentSize + 128 KB; /* The maximum offset that must be supported */
offcodeMax = ZSTD_highbit32(maxOffset); /* Calculate minimum offset code required to represent maxOffset */
}
/* All offset values <= dictContentSize + 128 KB must be representable for a valid table */
bs->entropy.fse.offcode_repeatMode = ZSTD_dictNCountRepeat(offcodeNCount, offcodeMaxValue, MIN(offcodeMax, MaxOff));
/* All repCodes must be <= dictContentSize and != 0 */
{ U32 u;
for (u=0; u<3; u++) {
RETURN_ERROR_IF(bs->rep[u] == 0, dictionary_corrupted, "");
RETURN_ERROR_IF(bs->rep[u] > dictContentSize, dictionary_corrupted, "");
} } }
return (size_t)(dictPtr - (const BYTE*)dict);
}
/* Dictionary format :
* See :
* https://github.com/facebook/zstd/blob/release/doc/zstd_compression_format.md#dictionary-format
*/
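/* Layout consumed by ZSTD_loadCEntropy(), per the format document linked above :
* [ magic ZSTD_MAGIC_DICTIONARY (4 bytes) ][ dictID (4 bytes) ]
* [ Huffman table for literals ][ FSE table for offsets ][ FSE table for match lengths ]
* [ FSE table for literal lengths ][ rep[0..2] (3 x 4 bytes, little-endian) ][ raw content ... ] */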
/*! ZSTD_loadZstdDictionary() :
* @return : dictID, or an error code
* assumptions : the magic number is assumed to have been checked already ;
* dictSize is assumed >= 8
*/
static size_t ZSTD_loadZstdDictionary(ZSTD_compressedBlockState_t* bs,
ZSTD_MatchState_t* ms,
ZSTD_cwksp* ws,
ZSTD_CCtx_params const* params,
const void* dict, size_t dictSize,
ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp,
void* workspace)
{
const BYTE* dictPtr = (const BYTE*)dict;
const BYTE* const dictEnd = dictPtr + dictSize;
size_t dictID;
size_t eSize;
ZSTD_STATIC_ASSERT(HUF_WORKSPACE_SIZE >= (1<<MAX(MLFSELog,LLFSELog)));
assert(dictSize >= 8);
assert(MEM_readLE32(dictPtr) == ZSTD_MAGIC_DICTIONARY);
dictID = params->fParams.noDictIDFlag ? 0 : MEM_readLE32(dictPtr + 4 /* skip magic number */ );
eSize = ZSTD_loadCEntropy(bs, workspace, dict, dictSize);
FORWARD_IF_ERROR(eSize, "ZSTD_loadCEntropy failed");
dictPtr += eSize;
{
size_t const dictContentSize = (size_t)(dictEnd - dictPtr);
FORWARD_IF_ERROR(ZSTD_loadDictionaryContent(
ms, NULL, ws, params, dictPtr, dictContentSize, dtlm, tfp), "");
}
return dictID;
}
/** ZSTD_compress_insertDictionary() :
* @return : dictID, or an error code */
static size_t
ZSTD_compress_insertDictionary(ZSTD_compressedBlockState_t* bs,
ZSTD_MatchState_t* ms,
ldmState_t* ls,
ZSTD_cwksp* ws,
const ZSTD_CCtx_params* params,
const void* dict, size_t dictSize,
ZSTD_dictContentType_e dictContentType,
ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp,
void* workspace)
{
DEBUGLOG(4, "ZSTD_compress_insertDictionary (dictSize=%u)", (U32)dictSize);
if ((dict==NULL) || (dictSize<8)) {
RETURN_ERROR_IF(dictContentType == ZSTD_dct_fullDict, dictionary_wrong, "");
return 0;
}
ZSTD_reset_compressedBlockState(bs);
/* dict restricted modes */
if (dictContentType == ZSTD_dct_rawContent)
return ZSTD_loadDictionaryContent(ms, ls, ws, params, dict, dictSize, dtlm, tfp);
if (MEM_readLE32(dict) != ZSTD_MAGIC_DICTIONARY) {
if (dictContentType == ZSTD_dct_auto) {
DEBUGLOG(4, "raw content dictionary detected");
return ZSTD_loadDictionaryContent(
ms, ls, ws, params, dict, dictSize, dtlm, tfp);
}
RETURN_ERROR_IF(dictContentType == ZSTD_dct_fullDict, dictionary_wrong, "");
assert(0); /* impossible */
}
/* dict as full zstd dictionary */
return ZSTD_loadZstdDictionary(
bs, ms, ws, params, dict, dictSize, dtlm, tfp, workspace);
}
#define ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF (128 KB)
#define ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER (6ULL)
/*! ZSTD_compressBegin_internal() :
* Assumption : either @dict OR @cdict (or none) is non-NULL, never both
* @return : 0, or an error code */
static size_t ZSTD_compressBegin_internal(ZSTD_CCtx* cctx,
const void* dict, size_t dictSize,
ZSTD_dictContentType_e dictContentType,
ZSTD_dictTableLoadMethod_e dtlm,
const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params, U64 pledgedSrcSize,
ZSTD_buffered_policy_e zbuff)
{
size_t const dictContentSize = cdict ? cdict->dictContentSize : dictSize;
#if ZSTD_TRACE
cctx->traceCtx = (ZSTD_trace_compress_begin != NULL) ? ZSTD_trace_compress_begin(cctx) : 0;
#endif
DEBUGLOG(4, "ZSTD_compressBegin_internal: wlog=%u", params->cParams.windowLog);
/* params are supposed to be fully validated at this point */
assert(!ZSTD_isError(ZSTD_checkCParams(params->cParams)));
assert(!((dict) && (cdict))); /* either dict or cdict, not both */
if ( (cdict)
&& (cdict->dictContentSize > 0)
&& ( pledgedSrcSize < ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF
|| pledgedSrcSize < cdict->dictContentSize * ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER
|| pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
|| cdict->compressionLevel == 0)
&& (params->attachDictPref != ZSTD_dictForceLoad) ) {
return ZSTD_resetCCtx_usingCDict(cctx, cdict, params, pledgedSrcSize, zbuff);
}
FORWARD_IF_ERROR( ZSTD_resetCCtx_internal(cctx, params, pledgedSrcSize,
dictContentSize,
ZSTDcrp_makeClean, zbuff) , "");
{ size_t const dictID = cdict ?
ZSTD_compress_insertDictionary(
cctx->blockState.prevCBlock, &cctx->blockState.matchState,
&cctx->ldmState, &cctx->workspace, &cctx->appliedParams, cdict->dictContent,
cdict->dictContentSize, cdict->dictContentType, dtlm,
ZSTD_tfp_forCCtx, cctx->tmpWorkspace)
: ZSTD_compress_insertDictionary(
cctx->blockState.prevCBlock, &cctx->blockState.matchState,
&cctx->ldmState, &cctx->workspace, &cctx->appliedParams, dict, dictSize,
dictContentType, dtlm, ZSTD_tfp_forCCtx, cctx->tmpWorkspace);
FORWARD_IF_ERROR(dictID, "ZSTD_compress_insertDictionary failed");
assert(dictID <= UINT_MAX);
cctx->dictID = (U32)dictID;
cctx->dictContentSize = dictContentSize;
}
return 0;
}
size_t ZSTD_compressBegin_advanced_internal(ZSTD_CCtx* cctx,
const void* dict, size_t dictSize,
ZSTD_dictContentType_e dictContentType,
ZSTD_dictTableLoadMethod_e dtlm,
const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params,
unsigned long long pledgedSrcSize)
{
DEBUGLOG(4, "ZSTD_compressBegin_advanced_internal: wlog=%u", params->cParams.windowLog);
/* compression parameters verification and optimization */
FORWARD_IF_ERROR( ZSTD_checkCParams(params->cParams) , "");
return ZSTD_compressBegin_internal(cctx,
dict, dictSize, dictContentType, dtlm,
cdict,
params, pledgedSrcSize,
ZSTDb_not_buffered);
}
/*! ZSTD_compressBegin_advanced() :
* @return : 0, or an error code */
size_t ZSTD_compressBegin_advanced(ZSTD_CCtx* cctx,
const void* dict, size_t dictSize,
ZSTD_parameters params, unsigned long long pledgedSrcSize)
{
ZSTD_CCtx_params cctxParams;
ZSTD_CCtxParams_init_internal(&cctxParams, ¶ms, ZSTD_NO_CLEVEL);
return ZSTD_compressBegin_advanced_internal(cctx,
dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast,
NULL /*cdict*/,
&cctxParams, pledgedSrcSize);
}
static size_t
ZSTD_compressBegin_usingDict_deprecated(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, int compressionLevel)
{
ZSTD_CCtx_params cctxParams;
{ ZSTD_parameters const params = ZSTD_getParams_internal(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, dictSize, ZSTD_cpm_noAttachDict);
ZSTD_CCtxParams_init_internal(&cctxParams, ¶ms, (compressionLevel == 0) ? ZSTD_CLEVEL_DEFAULT : compressionLevel);
}
DEBUGLOG(4, "ZSTD_compressBegin_usingDict (dictSize=%u)", (unsigned)dictSize);
return ZSTD_compressBegin_internal(cctx, dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL,
&cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, ZSTDb_not_buffered);
}
size_t
ZSTD_compressBegin_usingDict(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, int compressionLevel)
{
return ZSTD_compressBegin_usingDict_deprecated(cctx, dict, dictSize, compressionLevel);
}
size_t ZSTD_compressBegin(ZSTD_CCtx* cctx, int compressionLevel)
{
return ZSTD_compressBegin_usingDict_deprecated(cctx, NULL, 0, compressionLevel);
}
/*! ZSTD_writeEpilogue() :
* Ends a frame.
* @return : nb of bytes written into dst (or an error code) */
static size_t ZSTD_writeEpilogue(ZSTD_CCtx* cctx, void* dst, size_t dstCapacity)
{
BYTE* const ostart = (BYTE*)dst;
BYTE* op = ostart;
DEBUGLOG(4, "ZSTD_writeEpilogue");
RETURN_ERROR_IF(cctx->stage == ZSTDcs_created, stage_wrong, "init missing");
/* special case : empty frame */
if (cctx->stage == ZSTDcs_init) {
size_t fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, &cctx->appliedParams, 0, 0);
FORWARD_IF_ERROR(fhSize, "ZSTD_writeFrameHeader failed");
dstCapacity -= fhSize;
op += fhSize;
cctx->stage = ZSTDcs_ongoing;
}
if (cctx->stage != ZSTDcs_ending) {
/* write one last empty block, make it the "last" block */
U32 const cBlockHeader24 = 1 /* last block */ + (((U32)bt_raw)<<1) + 0;
ZSTD_STATIC_ASSERT(ZSTD_BLOCKHEADERSIZE == 3);
RETURN_ERROR_IF(dstCapacity<3, dstSize_tooSmall, "no room for epilogue");
MEM_writeLE24(op, cBlockHeader24);
op += ZSTD_blockHeaderSize;
dstCapacity -= ZSTD_blockHeaderSize;
}
if (cctx->appliedParams.fParams.checksumFlag) {
U32 const checksum = (U32) XXH64_digest(&cctx->xxhState);
RETURN_ERROR_IF(dstCapacity<4, dstSize_tooSmall, "no room for checksum");
DEBUGLOG(4, "ZSTD_writeEpilogue: write checksum : %08X", (unsigned)checksum);
MEM_writeLE32(op, checksum);
op += 4;
}
cctx->stage = ZSTDcs_created; /* return to "created but no init" status */
return (size_t)(op-ostart);
}
void ZSTD_CCtx_trace(ZSTD_CCtx* cctx, size_t extraCSize)
{
#if ZSTD_TRACE
if (cctx->traceCtx && ZSTD_trace_compress_end != NULL) {
int const streaming = cctx->inBuffSize > 0 || cctx->outBuffSize > 0 || cctx->appliedParams.nbWorkers > 0;
ZSTD_Trace trace;
ZSTD_memset(&trace, 0, sizeof(trace));
trace.version = ZSTD_VERSION_NUMBER;
trace.streaming = streaming;
trace.dictionaryID = cctx->dictID;
trace.dictionarySize = cctx->dictContentSize;
trace.uncompressedSize = cctx->consumedSrcSize;
trace.compressedSize = cctx->producedCSize + extraCSize;
trace.params = &cctx->appliedParams;
trace.cctx = cctx;
ZSTD_trace_compress_end(cctx->traceCtx, &trace);
}
cctx->traceCtx = 0;
#else
(void)cctx;
(void)extraCSize;
#endif
}
size_t ZSTD_compressEnd_public(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
size_t endResult;
size_t const cSize = ZSTD_compressContinue_internal(cctx,
dst, dstCapacity, src, srcSize,
1 /* frame mode */, 1 /* last chunk */);
FORWARD_IF_ERROR(cSize, "ZSTD_compressContinue_internal failed");
endResult = ZSTD_writeEpilogue(cctx, (char*)dst + cSize, dstCapacity-cSize);
FORWARD_IF_ERROR(endResult, "ZSTD_writeEpilogue failed");
assert(!(cctx->appliedParams.fParams.contentSizeFlag && cctx->pledgedSrcSizePlusOne == 0));
if (cctx->pledgedSrcSizePlusOne != 0) { /* control src size */
ZSTD_STATIC_ASSERT(ZSTD_CONTENTSIZE_UNKNOWN == (unsigned long long)-1);
DEBUGLOG(4, "end of frame : controlling src size");
RETURN_ERROR_IF(
cctx->pledgedSrcSizePlusOne != cctx->consumedSrcSize+1,
srcSize_wrong,
"error : pledgedSrcSize = %u, while realSrcSize = %u",
(unsigned)cctx->pledgedSrcSizePlusOne-1,
(unsigned)cctx->consumedSrcSize);
}
ZSTD_CCtx_trace(cctx, endResult);
return cSize + endResult;
}
/* NOTE: Must just wrap ZSTD_compressEnd_public() */
size_t ZSTD_compressEnd(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
return ZSTD_compressEnd_public(cctx, dst, dstCapacity, src, srcSize);
}
size_t ZSTD_compress_advanced (ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const void* dict,size_t dictSize,
ZSTD_parameters params)
{
DEBUGLOG(4, "ZSTD_compress_advanced");
FORWARD_IF_ERROR(ZSTD_checkCParams(params.cParams), "");
ZSTD_CCtxParams_init_internal(&cctx->simpleApiParams, &params, ZSTD_NO_CLEVEL);
return ZSTD_compress_advanced_internal(cctx,
dst, dstCapacity,
src, srcSize,
dict, dictSize,
&cctx->simpleApiParams);
}
/* Internal */
size_t ZSTD_compress_advanced_internal(
ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const void* dict,size_t dictSize,
const ZSTD_CCtx_params* params)
{
DEBUGLOG(4, "ZSTD_compress_advanced_internal (srcSize:%u)", (unsigned)srcSize);
FORWARD_IF_ERROR( ZSTD_compressBegin_internal(cctx,
dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL,
params, srcSize, ZSTDb_not_buffered) , "");
return ZSTD_compressEnd_public(cctx, dst, dstCapacity, src, srcSize);
}
size_t ZSTD_compress_usingDict(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const void* dict, size_t dictSize,
int compressionLevel)
{
{
ZSTD_parameters const params = ZSTD_getParams_internal(compressionLevel, srcSize, dict ? dictSize : 0, ZSTD_cpm_noAttachDict);
assert(params.fParams.contentSizeFlag == 1);
ZSTD_CCtxParams_init_internal(&cctx->simpleApiParams, &params, (compressionLevel == 0) ? ZSTD_CLEVEL_DEFAULT : compressionLevel);
}
DEBUGLOG(4, "ZSTD_compress_usingDict (srcSize=%u)", (unsigned)srcSize);
return ZSTD_compress_advanced_internal(cctx, dst, dstCapacity, src, srcSize, dict, dictSize, &cctx->simpleApiParams);
}
size_t ZSTD_compressCCtx(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
int compressionLevel)
{
DEBUGLOG(4, "ZSTD_compressCCtx (srcSize=%u)", (unsigned)srcSize);
assert(cctx != NULL);
return ZSTD_compress_usingDict(cctx, dst, dstCapacity, src, srcSize, NULL, 0, compressionLevel);
}
size_t ZSTD_compress(void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
int compressionLevel)
{
size_t result;
#if ZSTD_COMPRESS_HEAPMODE
ZSTD_CCtx* cctx = ZSTD_createCCtx();
RETURN_ERROR_IF(!cctx, memory_allocation, "ZSTD_createCCtx failed");
result = ZSTD_compressCCtx(cctx, dst, dstCapacity, src, srcSize, compressionLevel);
ZSTD_freeCCtx(cctx);
#else
ZSTD_CCtx ctxBody;
ZSTD_initCCtx(&ctxBody, ZSTD_defaultCMem);
result = ZSTD_compressCCtx(&ctxBody, dst, dstCapacity, src, srcSize, compressionLevel);
ZSTD_freeCCtxContent(&ctxBody); /* can't free ctxBody itself, as it's on stack; free only heap content */
#endif
return result;
}
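/* Illustrative usage sketch (not part of the library) of the single-call entry
* point above, assuming <zstd.h> and <stdlib.h>:
*
*     static void* example_compress_once(const void* src, size_t srcSize, size_t* cSizePtr)
*     {
*         size_t const dstCapacity = ZSTD_compressBound(srcSize);   // worst-case compressed size
*         void* const dst = malloc(dstCapacity);
*         if (dst == NULL) return NULL;
*         *cSizePtr = ZSTD_compress(dst, dstCapacity, src, srcSize, 3);   // level 3
*         if (ZSTD_isError(*cSizePtr)) { free(dst); return NULL; }
*         return dst;   // caller frees; *cSizePtr holds the compressed size
*     }
*/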
/* ===== Dictionary API ===== */
/*! ZSTD_estimateCDictSize_advanced() :
* Estimate the amount of memory needed to create a dictionary with the following arguments */
size_t ZSTD_estimateCDictSize_advanced(
size_t dictSize, ZSTD_compressionParameters cParams,
ZSTD_dictLoadMethod_e dictLoadMethod)
{
DEBUGLOG(5, "sizeof(ZSTD_CDict) : %u", (unsigned)sizeof(ZSTD_CDict));
return ZSTD_cwksp_alloc_size(sizeof(ZSTD_CDict))
+ ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE)
/* enableDedicatedDictSearch == 1 ensures that CDict estimation will not be too small
* in case we are using DDS with row-hash. */
+ ZSTD_sizeof_matchState(&cParams, ZSTD_resolveRowMatchFinderMode(ZSTD_ps_auto, &cParams),
/* enableDedicatedDictSearch */ 1, /* forCCtx */ 0)
+ (dictLoadMethod == ZSTD_dlm_byRef ? 0
: ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(dictSize, sizeof(void *))));
}
size_t ZSTD_estimateCDictSize(size_t dictSize, int compressionLevel)
{
ZSTD_compressionParameters const cParams = ZSTD_getCParams_internal(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, dictSize, ZSTD_cpm_createCDict);
return ZSTD_estimateCDictSize_advanced(dictSize, cParams, ZSTD_dlm_byCopy);
}
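/* Brief illustration (not part of the library): the estimate above is the
* workspaceSize that ZSTD_initStaticCDict() below expects. For example, for a
* hypothetical 16 KB dictionary at level 3:
*
*     size_t const need = ZSTD_estimateCDictSize(16 * 1024, 3);
*     // reserve at least `need` bytes, 8-byte aligned, before calling ZSTD_initStaticCDict()
*/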
size_t ZSTD_sizeof_CDict(const ZSTD_CDict* cdict)
{
if (cdict==NULL) return 0; /* support sizeof on NULL */
DEBUGLOG(5, "sizeof(*cdict) : %u", (unsigned)sizeof(*cdict));
/* cdict may be in the workspace */
return (cdict->workspace.workspace == cdict ? 0 : sizeof(*cdict))
+ ZSTD_cwksp_sizeof(&cdict->workspace);
}
static size_t ZSTD_initCDict_internal(
ZSTD_CDict* cdict,
const void* dictBuffer, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType,
ZSTD_CCtx_params params)
{
DEBUGLOG(3, "ZSTD_initCDict_internal (dictContentType:%u)", (unsigned)dictContentType);
assert(!ZSTD_checkCParams(params.cParams));
cdict->matchState.cParams = params.cParams;
cdict->matchState.dedicatedDictSearch = params.enableDedicatedDictSearch;
if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dictBuffer) || (!dictSize)) {
cdict->dictContent = dictBuffer;
} else {
void *internalBuffer = ZSTD_cwksp_reserve_object(&cdict->workspace, ZSTD_cwksp_align(dictSize, sizeof(void*)));
RETURN_ERROR_IF(!internalBuffer, memory_allocation, "NULL pointer!");
cdict->dictContent = internalBuffer;
ZSTD_memcpy(internalBuffer, dictBuffer, dictSize);
}
cdict->dictContentSize = dictSize;
cdict->dictContentType = dictContentType;
cdict->entropyWorkspace = (U32*)ZSTD_cwksp_reserve_object(&cdict->workspace, HUF_WORKSPACE_SIZE);
/* Reset the state to no dictionary */
ZSTD_reset_compressedBlockState(&cdict->cBlockState);
FORWARD_IF_ERROR(ZSTD_reset_matchState(
&cdict->matchState,
&cdict->workspace,
&params.cParams,
params.useRowMatchFinder,
ZSTDcrp_makeClean,
ZSTDirp_reset,
ZSTD_resetTarget_CDict), "");
/* (Maybe) load the dictionary
* Skips loading the dictionary if it is < 8 bytes.
*/
{ params.compressionLevel = ZSTD_CLEVEL_DEFAULT;
params.fParams.contentSizeFlag = 1;
{ size_t const dictID = ZSTD_compress_insertDictionary(
&cdict->cBlockState, &cdict->matchState, NULL, &cdict->workspace,
&params, cdict->dictContent, cdict->dictContentSize,
dictContentType, ZSTD_dtlm_full, ZSTD_tfp_forCDict, cdict->entropyWorkspace);
FORWARD_IF_ERROR(dictID, "ZSTD_compress_insertDictionary failed");
assert(dictID <= (size_t)(U32)-1);
cdict->dictID = (U32)dictID;
}
}
return 0;
}
static ZSTD_CDict*
ZSTD_createCDict_advanced_internal(size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_compressionParameters cParams,
ZSTD_ParamSwitch_e useRowMatchFinder,
int enableDedicatedDictSearch,
ZSTD_customMem customMem)
{
if ((!customMem.customAlloc) ^ (!customMem.customFree)) return NULL;
DEBUGLOG(3, "ZSTD_createCDict_advanced_internal (dictSize=%u)", (unsigned)dictSize);
{ size_t const workspaceSize =
ZSTD_cwksp_alloc_size(sizeof(ZSTD_CDict)) +
ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE) +
ZSTD_sizeof_matchState(&cParams, useRowMatchFinder, enableDedicatedDictSearch, /* forCCtx */ 0) +
(dictLoadMethod == ZSTD_dlm_byRef ? 0
: ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(dictSize, sizeof(void*))));
void* const workspace = ZSTD_customMalloc(workspaceSize, customMem);
ZSTD_cwksp ws;
ZSTD_CDict* cdict;
if (!workspace) {
ZSTD_customFree(workspace, customMem);
return NULL;
}
ZSTD_cwksp_init(&ws, workspace, workspaceSize, ZSTD_cwksp_dynamic_alloc);
cdict = (ZSTD_CDict*)ZSTD_cwksp_reserve_object(&ws, sizeof(ZSTD_CDict));
assert(cdict != NULL);
ZSTD_cwksp_move(&cdict->workspace, &ws);
cdict->customMem = customMem;
cdict->compressionLevel = ZSTD_NO_CLEVEL; /* signals advanced API usage */
cdict->useRowMatchFinder = useRowMatchFinder;
return cdict;
}
}
ZSTD_CDict* ZSTD_createCDict_advanced(const void* dictBuffer, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType,
ZSTD_compressionParameters cParams,
ZSTD_customMem customMem)
{
ZSTD_CCtx_params cctxParams;
ZSTD_memset(&cctxParams, 0, sizeof(cctxParams));
DEBUGLOG(3, "ZSTD_createCDict_advanced, dictSize=%u, mode=%u", (unsigned)dictSize, (unsigned)dictContentType);
ZSTD_CCtxParams_init(&cctxParams, 0);
cctxParams.cParams = cParams;
cctxParams.customMem = customMem;
return ZSTD_createCDict_advanced2(
dictBuffer, dictSize,
dictLoadMethod, dictContentType,
&cctxParams, customMem);
}
ZSTD_CDict* ZSTD_createCDict_advanced2(
const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType,
const ZSTD_CCtx_params* originalCctxParams,
ZSTD_customMem customMem)
{
ZSTD_CCtx_params cctxParams = *originalCctxParams;
ZSTD_compressionParameters cParams;
ZSTD_CDict* cdict;
DEBUGLOG(3, "ZSTD_createCDict_advanced2, dictSize=%u, mode=%u", (unsigned)dictSize, (unsigned)dictContentType);
if (!customMem.customAlloc ^ !customMem.customFree) return NULL;
if (cctxParams.enableDedicatedDictSearch) {
cParams = ZSTD_dedicatedDictSearch_getCParams(
cctxParams.compressionLevel, dictSize);
ZSTD_overrideCParams(&cParams, &cctxParams.cParams);
} else {
cParams = ZSTD_getCParamsFromCCtxParams(
&cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, dictSize, ZSTD_cpm_createCDict);
}
if (!ZSTD_dedicatedDictSearch_isSupported(&cParams)) {
/* Fall back to non-DDSS params */
cctxParams.enableDedicatedDictSearch = 0;
cParams = ZSTD_getCParamsFromCCtxParams(
&cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, dictSize, ZSTD_cpm_createCDict);
}
DEBUGLOG(3, "ZSTD_createCDict_advanced2: DedicatedDictSearch=%u", cctxParams.enableDedicatedDictSearch);
cctxParams.cParams = cParams;
cctxParams.useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(cctxParams.useRowMatchFinder, &cParams);
cdict = ZSTD_createCDict_advanced_internal(dictSize,
dictLoadMethod, cctxParams.cParams,
cctxParams.useRowMatchFinder, cctxParams.enableDedicatedDictSearch,
customMem);
if (!cdict || ZSTD_isError( ZSTD_initCDict_internal(cdict,
dict, dictSize,
dictLoadMethod, dictContentType,
cctxParams) )) {
ZSTD_freeCDict(cdict);
return NULL;
}
return cdict;
}
ZSTD_CDict* ZSTD_createCDict(const void* dict, size_t dictSize, int compressionLevel)
{
ZSTD_compressionParameters cParams = ZSTD_getCParams_internal(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, dictSize, ZSTD_cpm_createCDict);
ZSTD_CDict* const cdict = ZSTD_createCDict_advanced(dict, dictSize,
ZSTD_dlm_byCopy, ZSTD_dct_auto,
cParams, ZSTD_defaultCMem);
if (cdict)
cdict->compressionLevel = (compressionLevel == 0) ? ZSTD_CLEVEL_DEFAULT : compressionLevel;
return cdict;
}
ZSTD_CDict* ZSTD_createCDict_byReference(const void* dict, size_t dictSize, int compressionLevel)
{
ZSTD_compressionParameters cParams = ZSTD_getCParams_internal(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, dictSize, ZSTD_cpm_createCDict);
ZSTD_CDict* const cdict = ZSTD_createCDict_advanced(dict, dictSize,
ZSTD_dlm_byRef, ZSTD_dct_auto,
cParams, ZSTD_defaultCMem);
if (cdict)
cdict->compressionLevel = (compressionLevel == 0) ? ZSTD_CLEVEL_DEFAULT : compressionLevel;
return cdict;
}
size_t ZSTD_freeCDict(ZSTD_CDict* cdict)
{
if (cdict==NULL) return 0; /* support free on NULL */
{ ZSTD_customMem const cMem = cdict->customMem;
int cdictInWorkspace = ZSTD_cwksp_owns_buffer(&cdict->workspace, cdict);
ZSTD_cwksp_free(&cdict->workspace, cMem);
if (!cdictInWorkspace) {
ZSTD_customFree(cdict, cMem);
}
return 0;
}
}
/*! ZSTD_initStaticCDict_advanced() :
* Generate a digested dictionary in provided memory area.
* workspace: The memory area to emplace the dictionary into.
* The provided pointer must be 8-byte aligned.
* It must outlive dictionary usage.
* workspaceSize: Use ZSTD_estimateCDictSize()
* to determine how large workspace must be.
* cParams : use ZSTD_getCParams() to transform a compression level
* into its relevant cParams.
* @return : pointer to ZSTD_CDict*, or NULL if error (size too small)
* Note : there is no corresponding "free" function.
* Since workspace was allocated externally, it must be freed externally.
*/
const ZSTD_CDict* ZSTD_initStaticCDict(
void* workspace, size_t workspaceSize,
const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType,
ZSTD_compressionParameters cParams)
{
ZSTD_ParamSwitch_e const useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(ZSTD_ps_auto, &cParams);
/* enableDedicatedDictSearch == 1 ensures matchstate is not too small in case this CDict will be used for DDS + row hash */
size_t const matchStateSize = ZSTD_sizeof_matchState(&cParams, useRowMatchFinder, /* enableDedicatedDictSearch */ 1, /* forCCtx */ 0);
size_t const neededSize = ZSTD_cwksp_alloc_size(sizeof(ZSTD_CDict))
+ (dictLoadMethod == ZSTD_dlm_byRef ? 0
: ZSTD_cwksp_alloc_size(ZSTD_cwksp_align(dictSize, sizeof(void*))))
+ ZSTD_cwksp_alloc_size(HUF_WORKSPACE_SIZE)
+ matchStateSize;
ZSTD_CDict* cdict;
ZSTD_CCtx_params params;
DEBUGLOG(4, "ZSTD_initStaticCDict (dictSize==%u)", (unsigned)dictSize);
if ((size_t)workspace & 7) return NULL; /* 8-aligned */
{
ZSTD_cwksp ws;
ZSTD_cwksp_init(&ws, workspace, workspaceSize, ZSTD_cwksp_static_alloc);
cdict = (ZSTD_CDict*)ZSTD_cwksp_reserve_object(&ws, sizeof(ZSTD_CDict));
if (cdict == NULL) return NULL;
ZSTD_cwksp_move(&cdict->workspace, &ws);
}
if (workspaceSize < neededSize) return NULL;
ZSTD_CCtxParams_init(&params, 0);
params.cParams = cParams;
params.useRowMatchFinder = useRowMatchFinder;
cdict->useRowMatchFinder = useRowMatchFinder;
cdict->compressionLevel = ZSTD_NO_CLEVEL;
if (ZSTD_isError( ZSTD_initCDict_internal(cdict,
dict, dictSize,
dictLoadMethod, dictContentType,
params) ))
return NULL;
return cdict;
}
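/* Illustrative usage sketch (not part of the library) of ZSTD_initStaticCDict(),
* assuming <zstd.h> and <stdlib.h>; malloc() is assumed to return 8-byte aligned memory:
*
*     static const ZSTD_CDict* example_staticCDict(const void* dict, size_t dictSize,
*                                                  int level, void** wkspOut)
*     {
*         ZSTD_compressionParameters const cParams = ZSTD_getCParams(level, 0, dictSize);
*         size_t const wkspSize = ZSTD_estimateCDictSize(dictSize, level);
*         void* const wksp = malloc(wkspSize);
*         const ZSTD_CDict* cdict = NULL;
*         if (wksp != NULL)
*             cdict = ZSTD_initStaticCDict(wksp, wkspSize,
*                                          dict, dictSize,
*                                          ZSTD_dlm_byCopy, ZSTD_dct_auto,
*                                          cParams);
*         *wkspOut = wksp;   // the workspace must outlive the cdict; free it when done
*         return cdict;      // NULL if the workspace is too small or misaligned
*     }
*/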
ZSTD_compressionParameters ZSTD_getCParamsFromCDict(const ZSTD_CDict* cdict)
{
assert(cdict != NULL);
return cdict->matchState.cParams;
}
/*! ZSTD_getDictID_fromCDict() :
* Provides the dictID of the dictionary loaded into `cdict`.
* If @return == 0, the dictionary is not conformant to Zstandard specification, or empty.
* Non-conformant dictionaries can still be loaded, but as content-only dictionaries. */
unsigned ZSTD_getDictID_fromCDict(const ZSTD_CDict* cdict)
{
if (cdict==NULL) return 0;
return cdict->dictID;
}
/* ZSTD_compressBegin_usingCDict_internal() :
* Implementation of various ZSTD_compressBegin_usingCDict* functions.
*/
static size_t ZSTD_compressBegin_usingCDict_internal(
ZSTD_CCtx* const cctx, const ZSTD_CDict* const cdict,
ZSTD_frameParameters const fParams, unsigned long long const pledgedSrcSize)
{
ZSTD_CCtx_params cctxParams;
DEBUGLOG(4, "ZSTD_compressBegin_usingCDict_internal");
RETURN_ERROR_IF(cdict==NULL, dictionary_wrong, "NULL pointer!");
/* Initialize the cctxParams from the cdict */
{
ZSTD_parameters params;
params.fParams = fParams;
params.cParams = ( pledgedSrcSize < ZSTD_USE_CDICT_PARAMS_SRCSIZE_CUTOFF
|| pledgedSrcSize < cdict->dictContentSize * ZSTD_USE_CDICT_PARAMS_DICTSIZE_MULTIPLIER
|| pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN
|| cdict->compressionLevel == 0 ) ?
ZSTD_getCParamsFromCDict(cdict)
: ZSTD_getCParams(cdict->compressionLevel,
pledgedSrcSize,
cdict->dictContentSize);
ZSTD_CCtxParams_init_internal(&cctxParams, &params, cdict->compressionLevel);
}
/* Increase window log to fit the entire dictionary and source if the
* source size is known. Limit the increase to 19, which is the
* window log for compression level 1 with the largest source size.
*/
if (pledgedSrcSize != ZSTD_CONTENTSIZE_UNKNOWN) {
U32 const limitedSrcSize = (U32)MIN(pledgedSrcSize, 1U << 19);
U32 const limitedSrcLog = limitedSrcSize > 1 ? ZSTD_highbit32(limitedSrcSize - 1) + 1 : 1;
cctxParams.cParams.windowLog = MAX(cctxParams.cParams.windowLog, limitedSrcLog);
}
return ZSTD_compressBegin_internal(cctx,
NULL, 0, ZSTD_dct_auto, ZSTD_dtlm_fast,
cdict,
&cctxParams, pledgedSrcSize,
ZSTDb_not_buffered);
}
/* ZSTD_compressBegin_usingCDict_advanced() :
* This function is DEPRECATED.
* cdict must be != NULL */
size_t ZSTD_compressBegin_usingCDict_advanced(
ZSTD_CCtx* const cctx, const ZSTD_CDict* const cdict,
ZSTD_frameParameters const fParams, unsigned long long const pledgedSrcSize)
{
return ZSTD_compressBegin_usingCDict_internal(cctx, cdict, fParams, pledgedSrcSize);
}
/* ZSTD_compressBegin_usingCDict() :
* cdict must be != NULL */
size_t ZSTD_compressBegin_usingCDict_deprecated(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict)
{
ZSTD_frameParameters const fParams = { 0 /*content*/, 0 /*checksum*/, 0 /*noDictID*/ };
return ZSTD_compressBegin_usingCDict_internal(cctx, cdict, fParams, ZSTD_CONTENTSIZE_UNKNOWN);
}
size_t ZSTD_compressBegin_usingCDict(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict)
{
return ZSTD_compressBegin_usingCDict_deprecated(cctx, cdict);
}
/*! ZSTD_compress_usingCDict_internal():
* Implementation of various ZSTD_compress_usingCDict* functions.
*/
static size_t ZSTD_compress_usingCDict_internal(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const ZSTD_CDict* cdict, ZSTD_frameParameters fParams)
{
FORWARD_IF_ERROR(ZSTD_compressBegin_usingCDict_internal(cctx, cdict, fParams, srcSize), ""); /* will check if cdict != NULL */
return ZSTD_compressEnd_public(cctx, dst, dstCapacity, src, srcSize);
}
/*! ZSTD_compress_usingCDict_advanced():
* This function is DEPRECATED.
*/
size_t ZSTD_compress_usingCDict_advanced(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const ZSTD_CDict* cdict, ZSTD_frameParameters fParams)
{
return ZSTD_compress_usingCDict_internal(cctx, dst, dstCapacity, src, srcSize, cdict, fParams);
}
/*! ZSTD_compress_usingCDict() :
* Compression using a digested Dictionary.
* Faster startup than ZSTD_compress_usingDict(), recommended when the same dictionary is used multiple times.
* Note that compression parameters are decided at CDict creation time
* while frame parameters are hardcoded */
size_t ZSTD_compress_usingCDict(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const ZSTD_CDict* cdict)
{
ZSTD_frameParameters const fParams = { 1 /*content*/, 0 /*checksum*/, 0 /*noDictID*/ };
return ZSTD_compress_usingCDict_internal(cctx, dst, dstCapacity, src, srcSize, cdict, fParams);
}
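/* Illustrative usage sketch (not part of the library): digest a dictionary once,
* then reuse it for many compressions. Assumes <zstd.h>; 0 is used here as the
* sketch's failure value:
*
*     static size_t example_compress_withCDict(ZSTD_CCtx* cctx,
*                                              void* dst, size_t dstCapacity,
*                                              const void* src, size_t srcSize,
*                                              const void* dictBuf, size_t dictSize)
*     {
*         ZSTD_CDict* const cdict = ZSTD_createCDict(dictBuf, dictSize, 3);   // level 3
*         size_t cSize;
*         if (cdict == NULL) return 0;
*         cSize = ZSTD_compress_usingCDict(cctx, dst, dstCapacity, src, srcSize, cdict);
*         ZSTD_freeCDict(cdict);   // in real usage, keep the cdict alive and reuse it
*         return cSize;
*     }
*/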
/* ******************************************************************
* Streaming
********************************************************************/
ZSTD_CStream* ZSTD_createCStream(void)
{
DEBUGLOG(3, "ZSTD_createCStream");
return ZSTD_createCStream_advanced(ZSTD_defaultCMem);
}
ZSTD_CStream* ZSTD_initStaticCStream(void *workspace, size_t workspaceSize)
{
return ZSTD_initStaticCCtx(workspace, workspaceSize);
}
ZSTD_CStream* ZSTD_createCStream_advanced(ZSTD_customMem customMem)
{ /* CStream and CCtx are now same object */
return ZSTD_createCCtx_advanced(customMem);
}
size_t ZSTD_freeCStream(ZSTD_CStream* zcs)
{
return ZSTD_freeCCtx(zcs); /* same object */
}
/*====== Initialization ======*/
size_t ZSTD_CStreamInSize(void) { return ZSTD_BLOCKSIZE_MAX; }
size_t ZSTD_CStreamOutSize(void)
{
return ZSTD_compressBound(ZSTD_BLOCKSIZE_MAX) + ZSTD_blockHeaderSize + 4 /* 32-bits hash */ ;
}
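/* Brief illustration (not part of the library): these two hints size the staging
* buffers of a streaming loop (see the sketch after ZSTD_compressStream2() further below):
*
*     size_t const inCap  = ZSTD_CStreamInSize();    // one full block of input
*     size_t const outCap = ZSTD_CStreamOutSize();   // guaranteed to hold at least one flushed block
*     void*  const inBuf  = malloc(inCap);
*     void*  const outBuf = malloc(outCap);
*/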
static ZSTD_CParamMode_e ZSTD_getCParamMode(ZSTD_CDict const* cdict, ZSTD_CCtx_params const* params, U64 pledgedSrcSize)
{
if (cdict != NULL && ZSTD_shouldAttachDict(cdict, params, pledgedSrcSize))
return ZSTD_cpm_attachDict;
else
return ZSTD_cpm_noAttachDict;
}
/* ZSTD_resetCStream():
* pledgedSrcSize == 0 means "unknown" */
size_t ZSTD_resetCStream(ZSTD_CStream* zcs, unsigned long long pss)
{
/* temporary : 0 interpreted as "unknown" during transition period.
* Users wishing to specify "unknown" **must** use ZSTD_CONTENTSIZE_UNKNOWN.
* 0 will be interpreted as "empty" in the future.
*/
U64 const pledgedSrcSize = (pss==0) ? ZSTD_CONTENTSIZE_UNKNOWN : pss;
DEBUGLOG(4, "ZSTD_resetCStream: pledgedSrcSize = %u", (unsigned)pledgedSrcSize);
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) , "");
return 0;
}
/*! ZSTD_initCStream_internal() :
* Note : for lib/compress only. Used by zstdmt_compress.c.
* Assumption 1 : params are valid
* Assumption 2 : either dict, or cdict, is defined, not both */
size_t ZSTD_initCStream_internal(ZSTD_CStream* zcs,
const void* dict, size_t dictSize, const ZSTD_CDict* cdict,
const ZSTD_CCtx_params* params,
unsigned long long pledgedSrcSize)
{
DEBUGLOG(4, "ZSTD_initCStream_internal");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) , "");
assert(!ZSTD_isError(ZSTD_checkCParams(params->cParams)));
zcs->requestedParams = *params;
assert(!((dict) && (cdict))); /* either dict or cdict, not both */
if (dict) {
FORWARD_IF_ERROR( ZSTD_CCtx_loadDictionary(zcs, dict, dictSize) , "");
} else {
/* Dictionary is cleared if !cdict */
FORWARD_IF_ERROR( ZSTD_CCtx_refCDict(zcs, cdict) , "");
}
return 0;
}
/* ZSTD_initCStream_usingCDict_advanced() :
* same as ZSTD_initCStream_usingCDict(), with control over frame parameters */
size_t ZSTD_initCStream_usingCDict_advanced(ZSTD_CStream* zcs,
const ZSTD_CDict* cdict,
ZSTD_frameParameters fParams,
unsigned long long pledgedSrcSize)
{
DEBUGLOG(4, "ZSTD_initCStream_usingCDict_advanced");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) , "");
zcs->requestedParams.fParams = fParams;
FORWARD_IF_ERROR( ZSTD_CCtx_refCDict(zcs, cdict) , "");
return 0;
}
/* note : cdict must outlive compression session */
size_t ZSTD_initCStream_usingCDict(ZSTD_CStream* zcs, const ZSTD_CDict* cdict)
{
DEBUGLOG(4, "ZSTD_initCStream_usingCDict");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_refCDict(zcs, cdict) , "");
return 0;
}
/* ZSTD_initCStream_advanced() :
* pledgedSrcSize must be exact.
* if srcSize is not known at init time, use value ZSTD_CONTENTSIZE_UNKNOWN.
* dict is loaded with default parameters ZSTD_dct_auto and ZSTD_dlm_byCopy. */
size_t ZSTD_initCStream_advanced(ZSTD_CStream* zcs,
const void* dict, size_t dictSize,
ZSTD_parameters params, unsigned long long pss)
{
/* for compatibility with older programs relying on this behavior.
* Users should now specify ZSTD_CONTENTSIZE_UNKNOWN.
* This line will be removed in the future.
*/
U64 const pledgedSrcSize = (pss==0 && params.fParams.contentSizeFlag==0) ? ZSTD_CONTENTSIZE_UNKNOWN : pss;
DEBUGLOG(4, "ZSTD_initCStream_advanced");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) , "");
FORWARD_IF_ERROR( ZSTD_checkCParams(params.cParams) , "");
ZSTD_CCtxParams_setZstdParams(&zcs->requestedParams, &params);
FORWARD_IF_ERROR( ZSTD_CCtx_loadDictionary(zcs, dict, dictSize) , "");
return 0;
}
size_t ZSTD_initCStream_usingDict(ZSTD_CStream* zcs, const void* dict, size_t dictSize, int compressionLevel)
{
DEBUGLOG(4, "ZSTD_initCStream_usingDict");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setParameter(zcs, ZSTD_c_compressionLevel, compressionLevel) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_loadDictionary(zcs, dict, dictSize) , "");
return 0;
}
size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs, int compressionLevel, unsigned long long pss)
{
/* temporary : 0 interpreted as "unknown" during transition period.
* Users wishing to specify "unknown" **must** use ZSTD_CONTENTSIZE_UNKNOWN.
* 0 will be interpreted as "empty" in the future.
*/
U64 const pledgedSrcSize = (pss==0) ? ZSTD_CONTENTSIZE_UNKNOWN : pss;
DEBUGLOG(4, "ZSTD_initCStream_srcSize");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_refCDict(zcs, NULL) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setParameter(zcs, ZSTD_c_compressionLevel, compressionLevel) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setPledgedSrcSize(zcs, pledgedSrcSize) , "");
return 0;
}
size_t ZSTD_initCStream(ZSTD_CStream* zcs, int compressionLevel)
{
DEBUGLOG(4, "ZSTD_initCStream");
FORWARD_IF_ERROR( ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_refCDict(zcs, NULL) , "");
FORWARD_IF_ERROR( ZSTD_CCtx_setParameter(zcs, ZSTD_c_compressionLevel, compressionLevel) , "");
return 0;
}
/*====== Compression ======*/
static size_t ZSTD_nextInputSizeHint(const ZSTD_CCtx* cctx)
{
if (cctx->appliedParams.inBufferMode == ZSTD_bm_stable) {
return cctx->blockSizeMax - cctx->stableIn_notConsumed;
}
assert(cctx->appliedParams.inBufferMode == ZSTD_bm_buffered);
{ size_t hintInSize = cctx->inBuffTarget - cctx->inBuffPos;
if (hintInSize==0) hintInSize = cctx->blockSizeMax;
return hintInSize;
}
}
/** ZSTD_compressStream_generic():
* internal function for all *compressStream*() variants
* @return : hint size for next input to complete ongoing block */
static size_t ZSTD_compressStream_generic(ZSTD_CStream* zcs,
ZSTD_outBuffer* output,
ZSTD_inBuffer* input,
ZSTD_EndDirective const flushMode)
{
const char* const istart = (assert(input != NULL), (const char*)input->src);
const char* const iend = (istart != NULL) ? istart + input->size : istart;
const char* ip = (istart != NULL) ? istart + input->pos : istart;
char* const ostart = (assert(output != NULL), (char*)output->dst);
char* const oend = (ostart != NULL) ? ostart + output->size : ostart;
char* op = (ostart != NULL) ? ostart + output->pos : ostart;
U32 someMoreWork = 1;
/* check expectations */
DEBUGLOG(5, "ZSTD_compressStream_generic, flush=%i, srcSize = %zu", (int)flushMode, input->size - input->pos);
assert(zcs != NULL);
if (zcs->appliedParams.inBufferMode == ZSTD_bm_stable) {
assert(input->pos >= zcs->stableIn_notConsumed);
input->pos -= zcs->stableIn_notConsumed;
if (ip) ip -= zcs->stableIn_notConsumed;
zcs->stableIn_notConsumed = 0;
}
if (zcs->appliedParams.inBufferMode == ZSTD_bm_buffered) {
assert(zcs->inBuff != NULL);
assert(zcs->inBuffSize > 0);
}
if (zcs->appliedParams.outBufferMode == ZSTD_bm_buffered) {
assert(zcs->outBuff != NULL);
assert(zcs->outBuffSize > 0);
}
if (input->src == NULL) assert(input->size == 0);
assert(input->pos <= input->size);
if (output->dst == NULL) assert(output->size == 0);
assert(output->pos <= output->size);
assert((U32)flushMode <= (U32)ZSTD_e_end);
while (someMoreWork) {
switch(zcs->streamStage)
{
case zcss_init:
RETURN_ERROR(init_missing, "call ZSTD_initCStream() first!");
case zcss_load:
if ( (flushMode == ZSTD_e_end)
&& ( (size_t)(oend-op) >= ZSTD_compressBound((size_t)(iend-ip)) /* Enough output space */
|| zcs->appliedParams.outBufferMode == ZSTD_bm_stable) /* OR we are allowed to return dstSizeTooSmall */
&& (zcs->inBuffPos == 0) ) {
/* shortcut to compression pass directly into output buffer */
size_t const cSize = ZSTD_compressEnd_public(zcs,
op, (size_t)(oend-op),
ip, (size_t)(iend-ip));
DEBUGLOG(4, "ZSTD_compressEnd : cSize=%u", (unsigned)cSize);
FORWARD_IF_ERROR(cSize, "ZSTD_compressEnd failed");
ip = iend;
op += cSize;
zcs->frameEnded = 1;
ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
someMoreWork = 0; break;
}
/* complete loading into inBuffer in buffered mode */
if (zcs->appliedParams.inBufferMode == ZSTD_bm_buffered) {
size_t const toLoad = zcs->inBuffTarget - zcs->inBuffPos;
size_t const loaded = ZSTD_limitCopy(
zcs->inBuff + zcs->inBuffPos, toLoad,
ip, (size_t)(iend-ip));
zcs->inBuffPos += loaded;
if (ip) ip += loaded;
if ( (flushMode == ZSTD_e_continue)
&& (zcs->inBuffPos < zcs->inBuffTarget) ) {
/* not enough input to fill full block : stop here */
someMoreWork = 0; break;
}
if ( (flushMode == ZSTD_e_flush)
&& (zcs->inBuffPos == zcs->inToCompress) ) {
/* empty */
someMoreWork = 0; break;
}
} else {
assert(zcs->appliedParams.inBufferMode == ZSTD_bm_stable);
if ( (flushMode == ZSTD_e_continue)
&& ( (size_t)(iend - ip) < zcs->blockSizeMax) ) {
/* can't compress a full block : stop here */
zcs->stableIn_notConsumed = (size_t)(iend - ip);
ip = iend; /* pretend to have consumed input */
someMoreWork = 0; break;
}
if ( (flushMode == ZSTD_e_flush)
&& (ip == iend) ) {
/* empty */
someMoreWork = 0; break;
}
}
/* compress current block (note : this stage cannot be stopped in the middle) */
DEBUGLOG(5, "stream compression stage (flushMode==%u)", flushMode);
{ int const inputBuffered = (zcs->appliedParams.inBufferMode == ZSTD_bm_buffered);
void* cDst;
size_t cSize;
size_t oSize = (size_t)(oend-op);
size_t const iSize = inputBuffered ? zcs->inBuffPos - zcs->inToCompress
: MIN((size_t)(iend - ip), zcs->blockSizeMax);
if (oSize >= ZSTD_compressBound(iSize) || zcs->appliedParams.outBufferMode == ZSTD_bm_stable)
cDst = op; /* compress into output buffer, to skip flush stage */
else
cDst = zcs->outBuff, oSize = zcs->outBuffSize;
if (inputBuffered) {
unsigned const lastBlock = (flushMode == ZSTD_e_end) && (ip==iend);
cSize = lastBlock ?
ZSTD_compressEnd_public(zcs, cDst, oSize,
zcs->inBuff + zcs->inToCompress, iSize) :
ZSTD_compressContinue_public(zcs, cDst, oSize,
zcs->inBuff + zcs->inToCompress, iSize);
FORWARD_IF_ERROR(cSize, "%s", lastBlock ? "ZSTD_compressEnd failed" : "ZSTD_compressContinue failed");
zcs->frameEnded = lastBlock;
/* prepare next block */
zcs->inBuffTarget = zcs->inBuffPos + zcs->blockSizeMax;
if (zcs->inBuffTarget > zcs->inBuffSize)
zcs->inBuffPos = 0, zcs->inBuffTarget = zcs->blockSizeMax;
DEBUGLOG(5, "inBuffTarget:%u / inBuffSize:%u",
(unsigned)zcs->inBuffTarget, (unsigned)zcs->inBuffSize);
if (!lastBlock)
assert(zcs->inBuffTarget <= zcs->inBuffSize);
zcs->inToCompress = zcs->inBuffPos;
} else { /* !inputBuffered, hence ZSTD_bm_stable */
unsigned const lastBlock = (flushMode == ZSTD_e_end) && (ip + iSize == iend);
cSize = lastBlock ?
ZSTD_compressEnd_public(zcs, cDst, oSize, ip, iSize) :
ZSTD_compressContinue_public(zcs, cDst, oSize, ip, iSize);
/* Consume the input prior to error checking to mirror buffered mode. */
if (ip) ip += iSize;
FORWARD_IF_ERROR(cSize, "%s", lastBlock ? "ZSTD_compressEnd failed" : "ZSTD_compressContinue failed");
zcs->frameEnded = lastBlock;
if (lastBlock) assert(ip == iend);
}
if (cDst == op) { /* no need to flush */
op += cSize;
if (zcs->frameEnded) {
DEBUGLOG(5, "Frame completed directly in outBuffer");
someMoreWork = 0;
ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
}
break;
}
zcs->outBuffContentSize = cSize;
zcs->outBuffFlushedSize = 0;
zcs->streamStage = zcss_flush; /* pass-through to flush stage */
}
ZSTD_FALLTHROUGH;
case zcss_flush:
DEBUGLOG(5, "flush stage");
assert(zcs->appliedParams.outBufferMode == ZSTD_bm_buffered);
{ size_t const toFlush = zcs->outBuffContentSize - zcs->outBuffFlushedSize;
size_t const flushed = ZSTD_limitCopy(op, (size_t)(oend-op),
zcs->outBuff + zcs->outBuffFlushedSize, toFlush);
DEBUGLOG(5, "toFlush: %u into %u ==> flushed: %u",
(unsigned)toFlush, (unsigned)(oend-op), (unsigned)flushed);
if (flushed)
op += flushed;
zcs->outBuffFlushedSize += flushed;
if (toFlush!=flushed) {
/* flush not fully completed, presumably because dst is too small */
assert(op==oend);
someMoreWork = 0;
break;
}
zcs->outBuffContentSize = zcs->outBuffFlushedSize = 0;
if (zcs->frameEnded) {
DEBUGLOG(5, "Frame completed on flush");
someMoreWork = 0;
ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
break;
}
zcs->streamStage = zcss_load;
break;
}
default: /* impossible */
assert(0);
}
}
input->pos = (size_t)(ip - istart);
output->pos = (size_t)(op - ostart);
if (zcs->frameEnded) return 0;
return ZSTD_nextInputSizeHint(zcs);
}
static size_t ZSTD_nextInputSizeHint_MTorST(const ZSTD_CCtx* cctx)
{
#ifdef ZSTD_MULTITHREAD
if (cctx->appliedParams.nbWorkers >= 1) {
assert(cctx->mtctx != NULL);
return ZSTDMT_nextInputSizeHint(cctx->mtctx);
}
#endif
return ZSTD_nextInputSizeHint(cctx);
}
size_t ZSTD_compressStream(ZSTD_CStream* zcs, ZSTD_outBuffer* output, ZSTD_inBuffer* input)
{
FORWARD_IF_ERROR( ZSTD_compressStream2(zcs, output, input, ZSTD_e_continue) , "");
return ZSTD_nextInputSizeHint_MTorST(zcs);
}
/* After a compression call set the expected input/output buffer.
* This is validated at the start of the next compression call.
*/
static void
ZSTD_setBufferExpectations(ZSTD_CCtx* cctx, const ZSTD_outBuffer* output, const ZSTD_inBuffer* input)
{
DEBUGLOG(5, "ZSTD_setBufferExpectations (for advanced stable in/out modes)");
if (cctx->appliedParams.inBufferMode == ZSTD_bm_stable) {
cctx->expectedInBuffer = *input;
}
if (cctx->appliedParams.outBufferMode == ZSTD_bm_stable) {
cctx->expectedOutBufferSize = output->size - output->pos;
}
}
/* Validate that the input/output buffers match the expectations set by
* ZSTD_setBufferExpectations.
*/
static size_t ZSTD_checkBufferStability(ZSTD_CCtx const* cctx,
ZSTD_outBuffer const* output,
ZSTD_inBuffer const* input,
ZSTD_EndDirective endOp)
{
if (cctx->appliedParams.inBufferMode == ZSTD_bm_stable) {
ZSTD_inBuffer const expect = cctx->expectedInBuffer;
if (expect.src != input->src || expect.pos != input->pos)
RETURN_ERROR(stabilityCondition_notRespected, "ZSTD_c_stableInBuffer enabled but input differs!");
}
(void)endOp;
if (cctx->appliedParams.outBufferMode == ZSTD_bm_stable) {
size_t const outBufferSize = output->size - output->pos;
if (cctx->expectedOutBufferSize != outBufferSize)
RETURN_ERROR(stabilityCondition_notRespected, "ZSTD_c_stableOutBuffer enabled but output size differs!");
}
return 0;
}
/*
* If @endOp == ZSTD_e_end, @inSize becomes pledgedSrcSize.
* Otherwise, it's ignored.
* @return: 0 on success, or a ZSTD_error code otherwise.
*/
static size_t ZSTD_CCtx_init_compressStream2(ZSTD_CCtx* cctx,
ZSTD_EndDirective endOp,
size_t inSize)
{
ZSTD_CCtx_params params = cctx->requestedParams;
ZSTD_prefixDict const prefixDict = cctx->prefixDict;
FORWARD_IF_ERROR( ZSTD_initLocalDict(cctx) , ""); /* Init the local dict if present. */
ZSTD_memset(&cctx->prefixDict, 0, sizeof(cctx->prefixDict)); /* single usage */
assert(prefixDict.dict==NULL || cctx->cdict==NULL); /* only one can be set */
if (cctx->cdict && !cctx->localDict.cdict) {
/* Let the cdict's compression level take priority over the requested params.
* But do not take the cdict's compression level if the "cdict" is actually a localDict
* generated from ZSTD_initLocalDict().
*/
params.compressionLevel = cctx->cdict->compressionLevel;
}
DEBUGLOG(4, "ZSTD_CCtx_init_compressStream2 : transparent init stage");
if (endOp == ZSTD_e_end) cctx->pledgedSrcSizePlusOne = inSize + 1; /* auto-determine pledgedSrcSize */
{ size_t const dictSize = prefixDict.dict
? prefixDict.dictSize
: (cctx->cdict ? cctx->cdict->dictContentSize : 0);
ZSTD_CParamMode_e const mode = ZSTD_getCParamMode(cctx->cdict, &params, cctx->pledgedSrcSizePlusOne - 1);
params.cParams = ZSTD_getCParamsFromCCtxParams(
&params, cctx->pledgedSrcSizePlusOne-1,
dictSize, mode);
}
params.postBlockSplitter = ZSTD_resolveBlockSplitterMode(params.postBlockSplitter, &params.cParams);
params.ldmParams.enableLdm = ZSTD_resolveEnableLdm(params.ldmParams.enableLdm, &params.cParams);
params.useRowMatchFinder = ZSTD_resolveRowMatchFinderMode(params.useRowMatchFinder, &params.cParams);
params.validateSequences = ZSTD_resolveExternalSequenceValidation(params.validateSequences);
params.maxBlockSize = ZSTD_resolveMaxBlockSize(params.maxBlockSize);
params.searchForExternalRepcodes = ZSTD_resolveExternalRepcodeSearch(params.searchForExternalRepcodes, params.compressionLevel);
#ifdef ZSTD_MULTITHREAD
/* If external matchfinder is enabled, make sure to fail before checking job size (for consistency) */
RETURN_ERROR_IF(
ZSTD_hasExtSeqProd(&params) && params.nbWorkers >= 1,
parameter_combination_unsupported,
"External sequence producer isn't supported with nbWorkers >= 1"
);
if ((cctx->pledgedSrcSizePlusOne-1) <= ZSTDMT_JOBSIZE_MIN) {
params.nbWorkers = 0; /* do not invoke multi-threading when src size is too small */
}
if (params.nbWorkers > 0) {
# if ZSTD_TRACE
cctx->traceCtx = (ZSTD_trace_compress_begin != NULL) ? ZSTD_trace_compress_begin(cctx) : 0;
# endif
/* mt context creation */
if (cctx->mtctx == NULL) {
DEBUGLOG(4, "ZSTD_compressStream2: creating new mtctx for nbWorkers=%u",
params.nbWorkers);
cctx->mtctx = ZSTDMT_createCCtx_advanced((U32)params.nbWorkers, cctx->customMem, cctx->pool);
RETURN_ERROR_IF(cctx->mtctx == NULL, memory_allocation, "NULL pointer!");
}
/* mt compression */
DEBUGLOG(4, "call ZSTDMT_initCStream_internal as nbWorkers=%u", params.nbWorkers);
FORWARD_IF_ERROR( ZSTDMT_initCStream_internal(
cctx->mtctx,
prefixDict.dict, prefixDict.dictSize, prefixDict.dictContentType,
cctx->cdict, params, cctx->pledgedSrcSizePlusOne-1) , "");
cctx->dictID = cctx->cdict ? cctx->cdict->dictID : 0;
cctx->dictContentSize = cctx->cdict ? cctx->cdict->dictContentSize : prefixDict.dictSize;
cctx->consumedSrcSize = 0;
cctx->producedCSize = 0;
cctx->streamStage = zcss_load;
cctx->appliedParams = params;
} else
#endif /* ZSTD_MULTITHREAD */
{ U64 const pledgedSrcSize = cctx->pledgedSrcSizePlusOne - 1;
assert(!ZSTD_isError(ZSTD_checkCParams(params.cParams)));
FORWARD_IF_ERROR( ZSTD_compressBegin_internal(cctx,
prefixDict.dict, prefixDict.dictSize, prefixDict.dictContentType, ZSTD_dtlm_fast,
cctx->cdict,
&params, pledgedSrcSize,
ZSTDb_buffered) , "");
assert(cctx->appliedParams.nbWorkers == 0);
cctx->inToCompress = 0;
cctx->inBuffPos = 0;
if (cctx->appliedParams.inBufferMode == ZSTD_bm_buffered) {
/* for small inputs: avoid automatic flush on reaching end of block, since
* it would require adding a 3-byte null block to end the frame
*/
cctx->inBuffTarget = cctx->blockSizeMax + (cctx->blockSizeMax == pledgedSrcSize);
} else {
cctx->inBuffTarget = 0;
}
cctx->outBuffContentSize = cctx->outBuffFlushedSize = 0;
cctx->streamStage = zcss_load;
cctx->frameEnded = 0;
}
return 0;
}
/* @return provides a minimum amount of data remaining to be flushed from internal buffers
*/
size_t ZSTD_compressStream2( ZSTD_CCtx* cctx,
ZSTD_outBuffer* output,
ZSTD_inBuffer* input,
ZSTD_EndDirective endOp)
{
DEBUGLOG(5, "ZSTD_compressStream2, endOp=%u ", (unsigned)endOp);
/* check conditions */
RETURN_ERROR_IF(output->pos > output->size, dstSize_tooSmall, "invalid output buffer");
RETURN_ERROR_IF(input->pos > input->size, srcSize_wrong, "invalid input buffer");
RETURN_ERROR_IF((U32)endOp > (U32)ZSTD_e_end, parameter_outOfBound, "invalid endDirective");
assert(cctx != NULL);
/* transparent initialization stage */
if (cctx->streamStage == zcss_init) {
size_t const inputSize = input->size - input->pos; /* no obligation to start from pos==0 */
size_t const totalInputSize = inputSize + cctx->stableIn_notConsumed;
if ( (cctx->requestedParams.inBufferMode == ZSTD_bm_stable) /* input is presumed stable, across invocations */
&& (endOp == ZSTD_e_continue) /* no flush requested, more input to come */
&& (totalInputSize < ZSTD_BLOCKSIZE_MAX) ) { /* not even reached one block yet */
if (cctx->stableIn_notConsumed) { /* not the first time */
/* check stable source guarantees */
RETURN_ERROR_IF(input->src != cctx->expectedInBuffer.src, stabilityCondition_notRespected, "stableInBuffer condition not respected: wrong src pointer");
RETURN_ERROR_IF(input->pos != cctx->expectedInBuffer.size, stabilityCondition_notRespected, "stableInBuffer condition not respected: externally modified pos");
}
/* pretend input was consumed, to give a sense of forward progress */
input->pos = input->size;
/* save stable inBuffer, for later control, and flush/end */
cctx->expectedInBuffer = *input;
/* but actually input wasn't consumed, so keep track of position from where compression shall resume */
cctx->stableIn_notConsumed += inputSize;
/* don't initialize yet; wait for the first full block or a flush order, to allow better parameter adaptation */
return ZSTD_FRAMEHEADERSIZE_MIN(cctx->requestedParams.format); /* at least some header to produce */
}
FORWARD_IF_ERROR(ZSTD_CCtx_init_compressStream2(cctx, endOp, totalInputSize), "compressStream2 initialization failed");
ZSTD_setBufferExpectations(cctx, output, input); /* Set initial buffer expectations now that we've initialized */
}
/* end of transparent initialization stage */
FORWARD_IF_ERROR(ZSTD_checkBufferStability(cctx, output, input, endOp), "invalid buffers");
/* compression stage */
#ifdef ZSTD_MULTITHREAD
if (cctx->appliedParams.nbWorkers > 0) {
size_t flushMin;
if (cctx->cParamsChanged) {
ZSTDMT_updateCParams_whileCompressing(cctx->mtctx, &cctx->requestedParams);
cctx->cParamsChanged = 0;
}
if (cctx->stableIn_notConsumed) {
assert(cctx->appliedParams.inBufferMode == ZSTD_bm_stable);
/* some early data was skipped - make it available for consumption */
assert(input->pos >= cctx->stableIn_notConsumed);
input->pos -= cctx->stableIn_notConsumed;
cctx->stableIn_notConsumed = 0;
}
for (;;) {
size_t const ipos = input->pos;
size_t const opos = output->pos;
flushMin = ZSTDMT_compressStream_generic(cctx->mtctx, output, input, endOp);
cctx->consumedSrcSize += (U64)(input->pos - ipos);
cctx->producedCSize += (U64)(output->pos - opos);
if ( ZSTD_isError(flushMin)
|| (endOp == ZSTD_e_end && flushMin == 0) ) { /* compression completed */
if (flushMin == 0)
ZSTD_CCtx_trace(cctx, 0);
ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
}
FORWARD_IF_ERROR(flushMin, "ZSTDMT_compressStream_generic failed");
if (endOp == ZSTD_e_continue) {
/* We only require some progress with ZSTD_e_continue, not maximal progress.
* We're done if we've consumed or produced any bytes, or either buffer is
* full.
*/
if (input->pos != ipos || output->pos != opos || input->pos == input->size || output->pos == output->size)
break;
} else {
assert(endOp == ZSTD_e_flush || endOp == ZSTD_e_end);
/* We require maximal progress. We're done when the flush is complete or the
* output buffer is full.
*/
if (flushMin == 0 || output->pos == output->size)
break;
}
}
DEBUGLOG(5, "completed ZSTD_compressStream2 delegating to ZSTDMT_compressStream_generic");
/* Either we don't require maximum forward progress, we've finished the
* flush, or we are out of output space.
*/
assert(endOp == ZSTD_e_continue || flushMin == 0 || output->pos == output->size);
ZSTD_setBufferExpectations(cctx, output, input);
return flushMin;
}
#endif /* ZSTD_MULTITHREAD */
FORWARD_IF_ERROR( ZSTD_compressStream_generic(cctx, output, input, endOp) , "");
DEBUGLOG(5, "completed ZSTD_compressStream2");
ZSTD_setBufferExpectations(cctx, output, input);
return cctx->outBuffContentSize - cctx->outBuffFlushedSize; /* remaining to flush */
}
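/* Illustrative usage sketch (not part of the library) of a complete
* ZSTD_compressStream2() loop, assuming <zstd.h>, <stdio.h> and <stdlib.h>;
* buffer sizes follow ZSTD_CStreamInSize()/ZSTD_CStreamOutSize(), and error
* handling is reduced to early exits:
*
*     static int example_streamFile(ZSTD_CCtx* cctx, FILE* fin, FILE* fout)
*     {
*         size_t const inCap  = ZSTD_CStreamInSize();
*         size_t const outCap = ZSTD_CStreamOutSize();
*         void* const inBuf   = malloc(inCap);
*         void* const outBuf  = malloc(outCap);
*         int ret = -1;
*         if (inBuf && outBuf) {
*             for (;;) {
*                 size_t const readSize = fread(inBuf, 1, inCap, fin);
*                 ZSTD_EndDirective const mode = (readSize < inCap) ? ZSTD_e_end : ZSTD_e_continue;
*                 ZSTD_inBuffer input = { inBuf, readSize, 0 };
*                 int finished;
*                 do {
*                     ZSTD_outBuffer output = { outBuf, outCap, 0 };
*                     size_t const remaining = ZSTD_compressStream2(cctx, &output, &input, mode);
*                     if (ZSTD_isError(remaining)) goto _cleanup;
*                     fwrite(outBuf, 1, output.pos, fout);
*                     // ZSTD_e_end: loop until the frame is fully flushed (remaining == 0);
*                     // ZSTD_e_continue: loop until this input chunk is fully consumed
*                     finished = (mode == ZSTD_e_end) ? (remaining == 0) : (input.pos == input.size);
*                 } while (!finished);
*                 if (mode == ZSTD_e_end) { ret = 0; break; }
*             }
*         }
*     _cleanup:
*         free(inBuf);
*         free(outBuf);
*         return ret;
*     }
*/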
size_t ZSTD_compressStream2_simpleArgs (
ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity, size_t* dstPos,
const void* src, size_t srcSize, size_t* srcPos,
ZSTD_EndDirective endOp)
{
ZSTD_outBuffer output;
ZSTD_inBuffer input;
output.dst = dst;
output.size = dstCapacity;
output.pos = *dstPos;
input.src = src;
input.size = srcSize;
input.pos = *srcPos;
/* ZSTD_compressStream2() will check validity of dstPos and srcPos */
{ size_t const cErr = ZSTD_compressStream2(cctx, &output, &input, endOp);
*dstPos = output.pos;
*srcPos = input.pos;
return cErr;
}
}
size_t ZSTD_compress2(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
ZSTD_bufferMode_e const originalInBufferMode = cctx->requestedParams.inBufferMode;
ZSTD_bufferMode_e const originalOutBufferMode = cctx->requestedParams.outBufferMode;
DEBUGLOG(4, "ZSTD_compress2 (srcSize=%u)", (unsigned)srcSize);
ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
/* Enable stable input/output buffers. */
cctx->requestedParams.inBufferMode = ZSTD_bm_stable;
cctx->requestedParams.outBufferMode = ZSTD_bm_stable;
{ size_t oPos = 0;
size_t iPos = 0;
size_t const result = ZSTD_compressStream2_simpleArgs(cctx,
dst, dstCapacity, &oPos,
src, srcSize, &iPos,
ZSTD_e_end);
/* Reset to the original values. */
cctx->requestedParams.inBufferMode = originalInBufferMode;
cctx->requestedParams.outBufferMode = originalOutBufferMode;
FORWARD_IF_ERROR(result, "ZSTD_compressStream2_simpleArgs failed");
if (result != 0) { /* compression not completed, due to lack of output space */
assert(oPos == dstCapacity);
RETURN_ERROR(dstSize_tooSmall, "");
}
assert(iPos == srcSize); /* all input is expected consumed */
return oPos;
}
}
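/* Illustrative usage sketch (not part of the library) of the advanced one-shot path:
* parameters are set on the context, and ZSTD_compress2() above picks them up.
* Assumes <zstd.h>:
*
*     static size_t example_compress2(ZSTD_CCtx* cctx,
*                                     void* dst, size_t dstCapacity,
*                                     const void* src, size_t srcSize)
*     {
*         ZSTD_CCtx_reset(cctx, ZSTD_reset_session_and_parameters);
*         ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
*         ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
*         return ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);   // one frame, or an error code
*     }
*/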
/* ZSTD_validateSequence() :
* @offBase : must use the format required by ZSTD_storeSeq()
* @returns a ZSTD error code if sequence is not valid
*/
static size_t
ZSTD_validateSequence(U32 offBase, U32 matchLength, U32 minMatch,
size_t posInSrc, U32 windowLog, size_t dictSize, int useSequenceProducer)
{
U32 const windowSize = 1u << windowLog;
/* posInSrc represents the amount of data the decoder would decode up to this point.
* As long as the amount of data decoded is less than or equal to window size, offsets may be
* larger than the total length of output decoded in order to reference the dict, even larger than
* window size. After output surpasses windowSize, we're limited to windowSize offsets again.
*/
size_t const offsetBound = posInSrc > windowSize ? (size_t)windowSize : posInSrc + (size_t)dictSize;
size_t const matchLenLowerBound = (minMatch == 3 || useSequenceProducer) ? 3 : 4;
RETURN_ERROR_IF(offBase > OFFSET_TO_OFFBASE(offsetBound), externalSequences_invalid, "Offset too large!");
/* Validate maxNbSeq is large enough for the given matchLength and minMatch */
RETURN_ERROR_IF(matchLength < matchLenLowerBound, externalSequences_invalid, "Matchlength too small for the minMatch");
return 0;
}
/* Returns an offset code, given a sequence's raw offset, the ongoing repcode array, and whether litLength == 0 */
static U32 ZSTD_finalizeOffBase(U32 rawOffset, const U32 rep[ZSTD_REP_NUM], U32 ll0)
{
U32 offBase = OFFSET_TO_OFFBASE(rawOffset);
if (!ll0 && rawOffset == rep[0]) {
offBase = REPCODE1_TO_OFFBASE;
} else if (rawOffset == rep[1]) {
offBase = REPCODE_TO_OFFBASE(2 - ll0);
} else if (rawOffset == rep[2]) {
offBase = REPCODE_TO_OFFBASE(3 - ll0);
} else if (ll0 && rawOffset == rep[0] - 1) {
offBase = REPCODE3_TO_OFFBASE;
}
return offBase;
}
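/* Worked example (illustration only), with repcode history rep = {8, 4, 2} :
* - rawOffset == 8, ll0 == 0  => repcode 1 (REPCODE1_TO_OFFBASE)
* - rawOffset == 4, ll0 == 0  => repcode 2 ; with ll0 == 1, it becomes repcode 1
* - rawOffset == 7, ll0 == 1  => the rep[0]-1 special case, repcode 3
* - any other rawOffset       => literal offset, OFFSET_TO_OFFBASE(rawOffset)
*/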
/* This function scans through an array of ZSTD_Sequence,
* storing the sequences it reads, until it reaches a block delimiter.
* Note that the block delimiter includes the last literals of the block.
* @blockSize must be == sum(sequence_lengths).
* @returns @blockSize on success, and a ZSTD_error otherwise.
*/
static size_t
ZSTD_transferSequences_wBlockDelim(ZSTD_CCtx* cctx,
ZSTD_SequencePosition* seqPos,
const ZSTD_Sequence* const inSeqs, size_t inSeqsSize,
const void* src, size_t blockSize,
ZSTD_ParamSwitch_e externalRepSearch)
{
U32 idx = seqPos->idx;
U32 const startIdx = idx;
BYTE const* ip = (BYTE const*)(src);
const BYTE* const iend = ip + blockSize;
Repcodes_t updatedRepcodes;
U32 dictSize;
DEBUGLOG(5, "ZSTD_transferSequences_wBlockDelim (blockSize = %zu)", blockSize);
if (cctx->cdict) {
dictSize = (U32)cctx->cdict->dictContentSize;
} else if (cctx->prefixDict.dict) {
dictSize = (U32)cctx->prefixDict.dictSize;
} else {
dictSize = 0;
}
ZSTD_memcpy(updatedRepcodes.rep, cctx->blockState.prevCBlock->rep, sizeof(Repcodes_t));
for (; idx < inSeqsSize && (inSeqs[idx].matchLength != 0 || inSeqs[idx].offset != 0); ++idx) {
U32 const litLength = inSeqs[idx].litLength;
U32 const matchLength = inSeqs[idx].matchLength;
U32 offBase;
if (externalRepSearch == ZSTD_ps_disable) {
offBase = OFFSET_TO_OFFBASE(inSeqs[idx].offset);
} else {
U32 const ll0 = (litLength == 0);
offBase = ZSTD_finalizeOffBase(inSeqs[idx].offset, updatedRepcodes.rep, ll0);
ZSTD_updateRep(updatedRepcodes.rep, offBase, ll0);
}
DEBUGLOG(6, "Storing sequence: (of: %u, ml: %u, ll: %u)", offBase, matchLength, litLength);
if (cctx->appliedParams.validateSequences) {
seqPos->posInSrc += litLength + matchLength;
FORWARD_IF_ERROR(ZSTD_validateSequence(offBase, matchLength, cctx->appliedParams.cParams.minMatch,
seqPos->posInSrc,
cctx->appliedParams.cParams.windowLog, dictSize,
ZSTD_hasExtSeqProd(&cctx->appliedParams)),
"Sequence validation failed");
}
RETURN_ERROR_IF(idx - seqPos->idx >= cctx->seqStore.maxNbSeq, externalSequences_invalid,
"Not enough memory allocated. Try adjusting ZSTD_c_minMatch.");
ZSTD_storeSeq(&cctx->seqStore, litLength, ip, iend, offBase, matchLength);
ip += matchLength + litLength;
}
RETURN_ERROR_IF(idx == inSeqsSize, externalSequences_invalid, "Block delimiter not found.");
/* If we skipped repcode search while parsing, we need to update repcodes now */
assert(externalRepSearch != ZSTD_ps_auto);
assert(idx >= startIdx);
if (externalRepSearch == ZSTD_ps_disable && idx != startIdx) {
U32* const rep = updatedRepcodes.rep;
U32 lastSeqIdx = idx - 1; /* index of last non-block-delimiter sequence */
if (lastSeqIdx >= startIdx + 2) {
rep[2] = inSeqs[lastSeqIdx - 2].offset;
rep[1] = inSeqs[lastSeqIdx - 1].offset;
rep[0] = inSeqs[lastSeqIdx].offset;
} else if (lastSeqIdx == startIdx + 1) {
rep[2] = rep[0];
rep[1] = inSeqs[lastSeqIdx - 1].offset;
rep[0] = inSeqs[lastSeqIdx].offset;
} else {
assert(lastSeqIdx == startIdx);
rep[2] = rep[1];
rep[1] = rep[0];
rep[0] = inSeqs[lastSeqIdx].offset;
}
}
ZSTD_memcpy(cctx->blockState.nextCBlock->rep, updatedRepcodes.rep, sizeof(Repcodes_t));
if (inSeqs[idx].litLength) {
DEBUGLOG(6, "Storing last literals of size: %u", inSeqs[idx].litLength);
ZSTD_storeLastLiterals(&cctx->seqStore, ip, inSeqs[idx].litLength);
ip += inSeqs[idx].litLength;
seqPos->posInSrc += inSeqs[idx].litLength;
}
RETURN_ERROR_IF(ip != iend, externalSequences_invalid, "Blocksize doesn't agree with block delimiter!");
seqPos->idx = idx+1;
return blockSize;
}
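/* Illustrative input layout (not from the library) for the explicit-delimiter mode
* consumed above: a 130-byte block could be described by (fields are
* offset, litLength, matchLength, rep):
*
*     ZSTD_Sequence const seqs[] = {
*         {  8, 10, 20, 0 },   // 10 literals, then a 20-byte match at offset 8
*         { 16, 40, 50, 0 },   // 40 literals, then a 50-byte match at offset 16
*         {  0, 10,  0, 0 },   // block delimiter: offset==0 && matchLength==0;
*     };                       //   its litLength (10) covers the block's trailing literals
*
* 10+20 + 40+50 + 10 == 130 == blockSize, the invariant enforced above.
*/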
/*
* This function attempts to scan through @blockSize bytes in @src
* represented by the sequences in @inSeqs,
* storing any (partial) sequences.
*
* Occasionally, we may want to reduce the actual number of bytes consumed from @src
* to avoid splitting a match, notably if it would produce a match smaller than MINMATCH.
*
* @returns the number of bytes consumed from @src, necessarily <= @blockSize.
* Otherwise, it may return a ZSTD error if something went wrong.
*/
static size_t
ZSTD_transferSequences_noDelim(ZSTD_CCtx* cctx,
ZSTD_SequencePosition* seqPos,
const ZSTD_Sequence* const inSeqs, size_t inSeqsSize,
const void* src, size_t blockSize,
ZSTD_ParamSwitch_e externalRepSearch)
{
U32 idx = seqPos->idx;
U32 startPosInSequence = seqPos->posInSequence;
U32 endPosInSequence = seqPos->posInSequence + (U32)blockSize;
size_t dictSize;
const BYTE* const istart = (const BYTE*)(src);
const BYTE* ip = istart;
const BYTE* iend = istart + blockSize; /* May be adjusted if we decide to process fewer than blockSize bytes */
Repcodes_t updatedRepcodes;
U32 bytesAdjustment = 0;
U32 finalMatchSplit = 0;
/* TODO(embg) support fast parsing mode in noBlockDelim mode */
(void)externalRepSearch;
if (cctx->cdict) {
dictSize = cctx->cdict->dictContentSize;
} else if (cctx->prefixDict.dict) {
dictSize = cctx->prefixDict.dictSize;
} else {
dictSize = 0;
}
DEBUGLOG(5, "ZSTD_transferSequences_noDelim: idx: %u PIS: %u blockSize: %zu", idx, startPosInSequence, blockSize);
DEBUGLOG(5, "Start seq: idx: %u (of: %u ml: %u ll: %u)", idx, inSeqs[idx].offset, inSeqs[idx].matchLength, inSeqs[idx].litLength);
ZSTD_memcpy(updatedRepcodes.rep, cctx->blockState.prevCBlock->rep, sizeof(Repcodes_t));
while (endPosInSequence && idx < inSeqsSize && !finalMatchSplit) {
const ZSTD_Sequence currSeq = inSeqs[idx];
U32 litLength = currSeq.litLength;
U32 matchLength = currSeq.matchLength;
U32 const rawOffset = currSeq.offset;
U32 offBase;
/* Modify the sequence depending on where endPosInSequence lies */
if (endPosInSequence >= currSeq.litLength + currSeq.matchLength) {
if (startPosInSequence >= litLength) {
startPosInSequence -= litLength;
litLength = 0;
matchLength -= startPosInSequence;
} else {
litLength -= startPosInSequence;
}
/* Move to the next sequence */
endPosInSequence -= currSeq.litLength + currSeq.matchLength;
startPosInSequence = 0;
} else {
/* This is the final (partial) sequence we're adding from inSeqs, and endPosInSequence
does not reach the end of the match. So, we have to split the sequence */
DEBUGLOG(6, "Require a split: diff: %u, idx: %u PIS: %u",
currSeq.litLength + currSeq.matchLength - endPosInSequence, idx, endPosInSequence);
if (endPosInSequence > litLength) {
U32 firstHalfMatchLength;
litLength = startPosInSequence >= litLength ? 0 : litLength - startPosInSequence;
firstHalfMatchLength = endPosInSequence - startPosInSequence - litLength;
if (matchLength > blockSize && firstHalfMatchLength >= cctx->appliedParams.cParams.minMatch) {
/* Only ever split the match if it is larger than the block size */
U32 secondHalfMatchLength = currSeq.matchLength + currSeq.litLength - endPosInSequence;
if (secondHalfMatchLength < cctx->appliedParams.cParams.minMatch) {
/* Move the endPosInSequence backward so that it creates a match of minMatch length */
endPosInSequence -= cctx->appliedParams.cParams.minMatch - secondHalfMatchLength;
bytesAdjustment = cctx->appliedParams.cParams.minMatch - secondHalfMatchLength;
firstHalfMatchLength -= bytesAdjustment;
}
matchLength = firstHalfMatchLength;
/* Flag that we split the last match - after storing the sequence, exit the loop,
but keep the value of endPosInSequence */
finalMatchSplit = 1;
} else {
/* Move the position in sequence backwards so that we don't split match, and break to store
* the last literals. We use the original currSeq.litLength as a marker for where endPosInSequence
* should go. We prefer to do this whenever it is not necessary to split the match, or if doing so
* would cause the first half of the match to be too small
*/
bytesAdjustment = endPosInSequence - currSeq.litLength;
endPosInSequence = currSeq.litLength;
break;
}
} else {
/* This sequence ends inside the literals, break to store the last literals */
break;
}
}
/* Check if this offset can be represented with a repcode */
{ U32 const ll0 = (litLength == 0);
offBase = ZSTD_finalizeOffBase(rawOffset, updatedRepcodes.rep, ll0);
ZSTD_updateRep(updatedRepcodes.rep, offBase, ll0);
}
if (cctx->appliedParams.validateSequences) {
seqPos->posInSrc += litLength + matchLength;
FORWARD_IF_ERROR(ZSTD_validateSequence(offBase, matchLength, cctx->appliedParams.cParams.minMatch, seqPos->posInSrc,
cctx->appliedParams.cParams.windowLog, dictSize, ZSTD_hasExtSeqProd(&cctx->appliedParams)),
"Sequence validation failed");
}
DEBUGLOG(6, "Storing sequence: (of: %u, ml: %u, ll: %u)", offBase, matchLength, litLength);
RETURN_ERROR_IF(idx - seqPos->idx >= cctx->seqStore.maxNbSeq, externalSequences_invalid,
"Not enough memory allocated. Try adjusting ZSTD_c_minMatch.");
ZSTD_storeSeq(&cctx->seqStore, litLength, ip, iend, offBase, matchLength);
ip += matchLength + litLength;
if (!finalMatchSplit)
idx++; /* Next Sequence */
}
DEBUGLOG(5, "Ending seq: idx: %u (of: %u ml: %u ll: %u)", idx, inSeqs[idx].offset, inSeqs[idx].matchLength, inSeqs[idx].litLength);
assert(idx == inSeqsSize || endPosInSequence <= inSeqs[idx].litLength + inSeqs[idx].matchLength);
seqPos->idx = idx;
seqPos->posInSequence = endPosInSequence;
ZSTD_memcpy(cctx->blockState.nextCBlock->rep, updatedRepcodes.rep, sizeof(Repcodes_t));
iend -= bytesAdjustment;
if (ip != iend) {
/* Store any last literals */
U32 const lastLLSize = (U32)(iend - ip);
assert(ip <= iend);
DEBUGLOG(6, "Storing last literals of size: %u", lastLLSize);
ZSTD_storeLastLiterals(&cctx->seqStore, ip, lastLLSize);
seqPos->posInSrc += lastLLSize;
}
return (size_t)(iend-istart);
}
/* @seqPos represents a position within @inSeqs;
 * it is read and updated by this function,
 * which stops once its goal of producing a block of size @blockSize is reached.
 * @return: nb of bytes consumed from @src, necessarily <= @blockSize.
 */
typedef size_t (*ZSTD_SequenceCopier_f)(ZSTD_CCtx* cctx,
ZSTD_SequencePosition* seqPos,
const ZSTD_Sequence* const inSeqs, size_t inSeqsSize,
const void* src, size_t blockSize,
ZSTD_ParamSwitch_e externalRepSearch);
static ZSTD_SequenceCopier_f ZSTD_selectSequenceCopier(ZSTD_SequenceFormat_e mode)
{
assert(ZSTD_cParam_withinBounds(ZSTD_c_blockDelimiters, (int)mode));
if (mode == ZSTD_sf_explicitBlockDelimiters) {
return ZSTD_transferSequences_wBlockDelim;
}
assert(mode == ZSTD_sf_noBlockDelimiters);
return ZSTD_transferSequences_noDelim;
}
/* Discover the size of the next block by searching for the delimiter.
* Note that a block delimiter **must** exist in this mode,
* otherwise it's an input error.
* The block size retrieved will be later compared to ensure it remains within bounds */
static size_t
blockSize_explicitDelimiter(const ZSTD_Sequence* inSeqs, size_t inSeqsSize, ZSTD_SequencePosition seqPos)
{
int end = 0;
size_t blockSize = 0;
size_t spos = seqPos.idx;
DEBUGLOG(6, "blockSize_explicitDelimiter : seq %zu / %zu", spos, inSeqsSize);
assert(spos <= inSeqsSize);
while (spos < inSeqsSize) {
end = (inSeqs[spos].offset == 0);
blockSize += inSeqs[spos].litLength + inSeqs[spos].matchLength;
if (end) {
if (inSeqs[spos].matchLength != 0)
RETURN_ERROR(externalSequences_invalid, "delimiter format error : both matchlength and offset must be == 0");
break;
}
spos++;
}
if (!end)
RETURN_ERROR(externalSequences_invalid, "Reached end of sequences without finding a block delimiter");
return blockSize;
}
static size_t determine_blockSize(ZSTD_SequenceFormat_e mode,
size_t blockSize, size_t remaining,
const ZSTD_Sequence* inSeqs, size_t inSeqsSize,
ZSTD_SequencePosition seqPos)
{
DEBUGLOG(6, "determine_blockSize : remainingSize = %zu", remaining);
if (mode == ZSTD_sf_noBlockDelimiters) {
/* Note: this is more of a "target" block size */
return MIN(remaining, blockSize);
}
assert(mode == ZSTD_sf_explicitBlockDelimiters);
{ size_t const explicitBlockSize = blockSize_explicitDelimiter(inSeqs, inSeqsSize, seqPos);
FORWARD_IF_ERROR(explicitBlockSize, "Error while determining block size with explicit delimiters");
if (explicitBlockSize > blockSize)
RETURN_ERROR(externalSequences_invalid, "sequences incorrectly define a too large block");
if (explicitBlockSize > remaining)
RETURN_ERROR(externalSequences_invalid, "sequences define a frame longer than source");
return explicitBlockSize;
}
}
/* Compress all provided sequences, block-by-block.
*
* Returns the cumulative size of all compressed blocks (including their headers),
* or a ZSTD error code.
*/
static size_t
ZSTD_compressSequences_internal(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const ZSTD_Sequence* inSeqs, size_t inSeqsSize,
const void* src, size_t srcSize)
{
size_t cSize = 0;
size_t remaining = srcSize;
ZSTD_SequencePosition seqPos = {0, 0, 0};
const BYTE* ip = (BYTE const*)src;
BYTE* op = (BYTE*)dst;
ZSTD_SequenceCopier_f const sequenceCopier = ZSTD_selectSequenceCopier(cctx->appliedParams.blockDelimiters);
DEBUGLOG(4, "ZSTD_compressSequences_internal srcSize: %zu, inSeqsSize: %zu", srcSize, inSeqsSize);
/* Special case: empty frame */
if (remaining == 0) {
U32 const cBlockHeader24 = 1 /* last block */ + (((U32)bt_raw)<<1);
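/* A block header is 3 bytes, little-endian:
 * bit 0 = Last_Block flag, bits 1-2 = Block_Type (bt_raw here), bits 3-23 = Block_Size.
 * For this empty raw last block the value is simply 1 (bytes 01 00 00);
 * MEM_writeLE32 writes 4 bytes, hence the 4-byte capacity check just below. */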
RETURN_ERROR_IF(dstCapacity<4, dstSize_tooSmall, "No room for empty frame block header");
MEM_writeLE32(op, cBlockHeader24);
op += ZSTD_blockHeaderSize;
dstCapacity -= ZSTD_blockHeaderSize;
cSize += ZSTD_blockHeaderSize;
}
while (remaining) {
size_t compressedSeqsSize;
size_t cBlockSize;
size_t blockSize = determine_blockSize(cctx->appliedParams.blockDelimiters,
cctx->blockSizeMax, remaining,
inSeqs, inSeqsSize, seqPos);
U32 const lastBlock = (blockSize == remaining);
FORWARD_IF_ERROR(blockSize, "Error while trying to determine block size");
assert(blockSize <= remaining);
ZSTD_resetSeqStore(&cctx->seqStore);
blockSize = sequenceCopier(cctx,
&seqPos, inSeqs, inSeqsSize,
ip, blockSize,
cctx->appliedParams.searchForExternalRepcodes);
FORWARD_IF_ERROR(blockSize, "Bad sequence copy");
/* If blocks are too small, emit as a nocompress block */
/* TODO: See 3090. We reduced MIN_CBLOCK_SIZE from 3 to 2, so to compensate we add
 * an additional 1. We need to revisit this logic and make it more consistent. */
if (blockSize < MIN_CBLOCK_SIZE+ZSTD_blockHeaderSize+1+1) {
cBlockSize = ZSTD_noCompressBlock(op, dstCapacity, ip, blockSize, lastBlock);
FORWARD_IF_ERROR(cBlockSize, "Nocompress block failed");
DEBUGLOG(5, "Block too small (%zu): data remains uncompressed: cSize=%zu", blockSize, cBlockSize);
cSize += cBlockSize;
ip += blockSize;
op += cBlockSize;
remaining -= blockSize;
dstCapacity -= cBlockSize;
continue;
}
RETURN_ERROR_IF(dstCapacity < ZSTD_blockHeaderSize, dstSize_tooSmall, "not enough dstCapacity to write a new compressed block");
compressedSeqsSize = ZSTD_entropyCompressSeqStore(&cctx->seqStore,
&cctx->blockState.prevCBlock->entropy, &cctx->blockState.nextCBlock->entropy,
&cctx->appliedParams,
op + ZSTD_blockHeaderSize /* Leave space for block header */, dstCapacity - ZSTD_blockHeaderSize,
blockSize,
cctx->tmpWorkspace, cctx->tmpWkspSize /* statically allocated in resetCCtx */,
cctx->bmi2);
FORWARD_IF_ERROR(compressedSeqsSize, "Compressing sequences of block failed");
DEBUGLOG(5, "Compressed sequences size: %zu", compressedSeqsSize);
if (!cctx->isFirstBlock &&
ZSTD_maybeRLE(&cctx->seqStore) &&
ZSTD_isRLE(ip, blockSize)) {
/* Note: don't emit the first block as RLE even if it qualifies, because
 * doing so would cause older decoders (zstd cli <= v1.4.3 only) to report a spurious
 * "should consume all input" error.
 */
compressedSeqsSize = 1;
}
if (compressedSeqsSize == 0) {
/* ZSTD_noCompressBlock writes the block header as well */
cBlockSize = ZSTD_noCompressBlock(op, dstCapacity, ip, blockSize, lastBlock);
FORWARD_IF_ERROR(cBlockSize, "ZSTD_noCompressBlock failed");
DEBUGLOG(5, "Writing out nocompress block, size: %zu", cBlockSize);
} else if (compressedSeqsSize == 1) {
cBlockSize = ZSTD_rleCompressBlock(op, dstCapacity, *ip, blockSize, lastBlock);
FORWARD_IF_ERROR(cBlockSize, "ZSTD_rleCompressBlock failed");
DEBUGLOG(5, "Writing out RLE block, size: %zu", cBlockSize);
} else {
U32 cBlockHeader;
/* Error checking and repcodes update */
ZSTD_blockState_confirmRepcodesAndEntropyTables(&cctx->blockState);
if (cctx->blockState.prevCBlock->entropy.fse.offcode_repeatMode == FSE_repeat_valid)
cctx->blockState.prevCBlock->entropy.fse.offcode_repeatMode = FSE_repeat_check;
/* Write block header into beginning of block*/
cBlockHeader = lastBlock + (((U32)bt_compressed)<<1) + (U32)(compressedSeqsSize << 3);
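/* Example: a non-last compressed block with compressedSeqsSize = 1000 gives
 * cBlockHeader = 0 + (2<<1) + (1000<<3) = 8004 = 0x1F44,
 * written below as the 3 little-endian bytes 44 1F 00. */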
MEM_writeLE24(op, cBlockHeader);
cBlockSize = ZSTD_blockHeaderSize + compressedSeqsSize;
DEBUGLOG(5, "Writing out compressed block, size: %zu", cBlockSize);
}
cSize += cBlockSize;
if (lastBlock) {
break;
} else {
ip += blockSize;
op += cBlockSize;
remaining -= blockSize;
dstCapacity -= cBlockSize;
cctx->isFirstBlock = 0;
}
DEBUGLOG(5, "cSize running total: %zu (remaining dstCapacity=%zu)", cSize, dstCapacity);
}
DEBUGLOG(4, "cSize final total: %zu", cSize);
return cSize;
}
size_t ZSTD_compressSequences(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const ZSTD_Sequence* inSeqs, size_t inSeqsSize,
const void* src, size_t srcSize)
{
BYTE* op = (BYTE*)dst;
size_t cSize = 0;
/* Transparent initialization stage, same as compressStream2() */
DEBUGLOG(4, "ZSTD_compressSequences (nbSeqs=%zu,dstCapacity=%zu)", inSeqsSize, dstCapacity);
assert(cctx != NULL);
FORWARD_IF_ERROR(ZSTD_CCtx_init_compressStream2(cctx, ZSTD_e_end, srcSize), "CCtx initialization failed");
/* Begin writing output, starting with frame header */
{ size_t const frameHeaderSize = ZSTD_writeFrameHeader(op, dstCapacity,
&cctx->appliedParams, srcSize, cctx->dictID);
op += frameHeaderSize;
assert(frameHeaderSize <= dstCapacity);
dstCapacity -= frameHeaderSize;
cSize += frameHeaderSize;
}
if (cctx->appliedParams.fParams.checksumFlag && srcSize) {
XXH64_update(&cctx->xxhState, src, srcSize);
}
/* Now generate compressed blocks */
{ size_t const cBlocksSize = ZSTD_compressSequences_internal(cctx,
op, dstCapacity,
inSeqs, inSeqsSize,
src, srcSize);
FORWARD_IF_ERROR(cBlocksSize, "Compressing blocks failed!");
cSize += cBlocksSize;
assert(cBlocksSize <= dstCapacity);
dstCapacity -= cBlocksSize;
}
/* Complete with frame checksum, if needed */
if (cctx->appliedParams.fParams.checksumFlag) {
U32 const checksum = (U32) XXH64_digest(&cctx->xxhState);
RETURN_ERROR_IF(dstCapacity<4, dstSize_tooSmall, "no room for checksum");
DEBUGLOG(4, "Write checksum : %08X", (unsigned)checksum);
MEM_writeLE32((char*)dst + cSize, checksum);
cSize += 4;
}
DEBUGLOG(4, "Final compressed size: %zu", cSize);
return cSize;
}
#if defined(__AVX2__)
#include <immintrin.h> /* AVX2 intrinsics */
/*
* Convert 2 sequences per iteration, using AVX2 intrinsics:
* - offset -> offBase = offset + ZSTD_REP_NUM
* - litLength -> (U16) litLength
* - matchLength -> (U16)(matchLength - 3)
* - rep is ignored
* Store only 8 bytes per SeqDef (offBase[4], litLength[2], mlBase[2]).
*
* At the end, instead of extracting two __m128i,
* we use _mm256_permute4x64_epi64(..., 0xE8) to move lane2 into lane1,
* then store the lower 16 bytes in one go.
*
* @returns 0 on success, when no long length (> 65535) is detected.
* @returns > 0 if there is one long length (> 65535); the value encodes both its position
* and its type: a result <= nbSequences points to a long match length at index (result - 1),
* a larger result points to a long literals length at index (result - nbSequences - 1).
*/
static size_t convertSequences_noRepcodes(
SeqDef* dstSeqs,
const ZSTD_Sequence* inSeqs,
size_t nbSequences)
{
/*
* addition:
* For each 128-bit half: (offset+ZSTD_REP_NUM, litLength+0, matchLength-MINMATCH, rep+0)
*/
const __m256i addition = _mm256_setr_epi32(
ZSTD_REP_NUM, 0, -MINMATCH, 0, /* for sequence i */
ZSTD_REP_NUM, 0, -MINMATCH, 0 /* for sequence i+1 */
);
/* limit: check if there is a long length */
const __m256i limit = _mm256_set1_epi32(65535);
/*
* shuffle mask for byte-level rearrangement in each 128-bit half:
*
* Input layout (after addition) per 128-bit half:
* [ offset+ZSTD_REP_NUM (4 bytes) | litLength (4 bytes) | matchLength-MINMATCH (4 bytes) | rep (4 bytes) ]
* We only need:
* offBase (4 bytes) = offset+ZSTD_REP_NUM
* litLength (2 bytes) = low 2 bytes of litLength
* mlBase (2 bytes) = low 2 bytes of (matchLength-MINMATCH)
* => Bytes [0..3, 4..5, 8..9], zero the rest.
*/
const __m256i mask = _mm256_setr_epi8(
/* For the lower 128 bits => sequence i */
0, 1, 2, 3, /* offBase (offset+ZSTD_REP_NUM) */
4, 5, /* litLength (16 bits) */
8, 9, /* matchLength (16 bits) */
(BYTE)0x80, (BYTE)0x80, (BYTE)0x80, (BYTE)0x80,
(BYTE)0x80, (BYTE)0x80, (BYTE)0x80, (BYTE)0x80,
/* For the upper 128 bits => sequence i+1 */
16,17,18,19, /* offBase (offset+ZSTD_REP_NUM) */
20,21, /* litLength */
24,25, /* matchLength */
(BYTE)0x80, (BYTE)0x80, (BYTE)0x80, (BYTE)0x80,
(BYTE)0x80, (BYTE)0x80, (BYTE)0x80, (BYTE)0x80
);
/*
* Next, we'll use _mm256_permute4x64_epi64(vshf, 0xE8).
* Explanation of 0xE8 = 11101000b => [lane0, lane2, lane2, lane3].
* So the lower 128 bits become [lane0, lane2] => combining seq0 and seq1.
*/
#define PERM_LANE_0X_E8 0xE8 /* [0,2,2,3] in lane indices */
size_t longLen = 0, i = 0;
/* AVX permutation depends on the specific definition of target structures */
ZSTD_STATIC_ASSERT(sizeof(ZSTD_Sequence) == 16);
ZSTD_STATIC_ASSERT(offsetof(ZSTD_Sequence, offset) == 0);
ZSTD_STATIC_ASSERT(offsetof(ZSTD_Sequence, litLength) == 4);
ZSTD_STATIC_ASSERT(offsetof(ZSTD_Sequence, matchLength) == 8);
ZSTD_STATIC_ASSERT(sizeof(SeqDef) == 8);
ZSTD_STATIC_ASSERT(offsetof(SeqDef, offBase) == 0);
ZSTD_STATIC_ASSERT(offsetof(SeqDef, litLength) == 4);
ZSTD_STATIC_ASSERT(offsetof(SeqDef, mlBase) == 6);
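/* Worked example for a single sequence (illustrative values):
 * input ZSTD_Sequence {offset=100, litLength=5, matchLength=20, rep=0}
 * + addition {ZSTD_REP_NUM, 0, -MINMATCH, 0} = {103, 5, 17, 0}
 * -> packed SeqDef {offBase=103, litLength=5, mlBase=17}, i.e. 8 bytes kept out of the original 16. */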
/* Process 2 sequences per loop iteration */
for (; i + 1 < nbSequences; i += 2) {
/* Load 2 ZSTD_Sequence (32 bytes) */
__m256i vin = _mm256_loadu_si256((const __m256i*)(const void*)&inSeqs[i]);
/* Add {ZSTD_REP_NUM, 0, -MINMATCH, 0} in each 128-bit half */
__m256i vadd = _mm256_add_epi32(vin, addition);
/* Check for long length */
__m256i ll_cmp = _mm256_cmpgt_epi32(vadd, limit); /* 0xFFFFFFFF for element > 65535 */
int ll_res = _mm256_movemask_epi8(ll_cmp);
/* Shuffle bytes so each half gives us the 8 bytes we need */
__m256i vshf = _mm256_shuffle_epi8(vadd, mask);
/*
* Now:
* Lane0 = seq0's 8 bytes
* Lane1 = 0
* Lane2 = seq1's 8 bytes
* Lane3 = 0
*/
/* Permute 64-bit lanes => move Lane2 down into Lane1. */
__m256i vperm = _mm256_permute4x64_epi64(vshf, PERM_LANE_0X_E8);
/*
* Now the lower 16 bytes (Lane0+Lane1) = [seq0, seq1].
* The upper 16 bytes are [Lane2, Lane3] = [seq1, 0], but we won't use them.
*/
/* Store only the lower 16 bytes => 2 SeqDef (8 bytes each) */
_mm_storeu_si128((__m128i *)(void*)&dstSeqs[i], _mm256_castsi256_si128(vperm));
/*
* This writes out 16 bytes total:
* - offset 0..7 => seq0 (offBase, litLength, mlBase)
* - offset 8..15 => seq1 (offBase, litLength, mlBase)
*/
/* check (unlikely) long lengths > 65535
* indices for lengths correspond to bits [4..7], [8..11], [20..23], [24..27]
* => combined mask = 0x0FF00FF0
*/
if (UNLIKELY((ll_res & 0x0FF00FF0) != 0)) {
/* long length detected: let's figure out which one*/
if (inSeqs[i].matchLength > 65535+MINMATCH) {
assert(longLen == 0);
longLen = i + 1;
}
if (inSeqs[i].litLength > 65535) {
assert(longLen == 0);
longLen = i + nbSequences + 1;
}
if (inSeqs[i+1].matchLength > 65535+MINMATCH) {
assert(longLen == 0);
longLen = i + 1 + 1;
}
if (inSeqs[i+1].litLength > 65535) {
assert(longLen == 0);
longLen = i + 1 + nbSequences + 1;
}
}
}
/* Handle leftover if @nbSequences is odd */
if (i < nbSequences) {
/* process last sequence */
assert(i == nbSequences - 1);
dstSeqs[i].offBase = OFFSET_TO_OFFBASE(inSeqs[i].offset);
dstSeqs[i].litLength = (U16)inSeqs[i].litLength;
dstSeqs[i].mlBase = (U16)(inSeqs[i].matchLength - MINMATCH);
/* check (unlikely) long lengths > 65535 */
if (UNLIKELY(inSeqs[i].matchLength > 65535+MINMATCH)) {
assert(longLen == 0);
longLen = i + 1;
}
if (UNLIKELY(inSeqs[i].litLength > 65535)) {
assert(longLen == 0);
longLen = i + nbSequences + 1;
}
}
return longLen;
}
/* the vector implementation could also be ported to SSSE3,
* but since this implementation is targeting modern systems (>= Sapphire Rapids),
* it's not useful to develop and maintain code for older pre-AVX2 platforms */
#else /* no AVX2 */
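/* Scalar fallback, with the same contract as the AVX2 variant above:
 * converts public ZSTD_Sequence entries into internal SeqDef entries,
 * and reports at most one long length (> 65535) using the same return-value encoding. */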
static size_t convertSequences_noRepcodes(
SeqDef* dstSeqs,
const ZSTD_Sequence* inSeqs,
size_t nbSequences)
{
size_t longLen = 0;
size_t n;
for (n=0; n<nbSequences; n++) {
dstSeqs[n].offBase = OFFSET_TO_OFFBASE(inSeqs[n].offset);
dstSeqs[n].litLength = (U16)inSeqs[n].litLength;
dstSeqs[n].mlBase = (U16)(inSeqs[n].matchLength - MINMATCH);
/* check for long length > 65535 */
if (UNLIKELY(inSeqs[n].matchLength > 65535+MINMATCH)) {
assert(longLen == 0);
longLen = n + 1;
}
if (UNLIKELY(inSeqs[n].litLength > 65535)) {
assert(longLen == 0);
longLen = n + nbSequences + 1;
}
}
return longLen;
}
#endif
/*
* Precondition: Sequences must end on an explicit Block Delimiter
* @return: 0 on success, or an error code.
* Note: Sequence validation functionality has been disabled (removed)
* in order to keep the main pipeline lean, which improves performance.
* It may be re-introduced later.
*/
size_t ZSTD_convertBlockSequences(ZSTD_CCtx* cctx,
const ZSTD_Sequence* const inSeqs, size_t nbSequences,
int repcodeResolution)
{
Repcodes_t updatedRepcodes;
size_t seqNb = 0;
DEBUGLOG(5, "ZSTD_convertBlockSequences (nbSequences = %zu)", nbSequences);
RETURN_ERROR_IF(nbSequences >= cctx->seqStore.maxNbSeq, externalSequences_invalid,
"Not enough memory allocated. Try adjusting ZSTD_c_minMatch.");
ZSTD_memcpy(updatedRepcodes.rep, cctx->blockState.prevCBlock->rep, sizeof(Repcodes_t));
/* check end condition */
assert(nbSequences >= 1);
assert(inSeqs[nbSequences-1].matchLength == 0);
assert(inSeqs[nbSequences-1].offset == 0);
/* Convert Sequences from public format to internal format */
if (!repcodeResolution) {
size_t const longl = convertSequences_noRepcodes(cctx->seqStore.sequencesStart, inSeqs, nbSequences-1);
cctx->seqStore.sequences = cctx->seqStore.sequencesStart + nbSequences-1;
if (longl) {
DEBUGLOG(5, "long length");
assert(cctx->seqStore.longLengthType == ZSTD_llt_none);
if (longl <= nbSequences-1) {
DEBUGLOG(5, "long match length detected at pos %zu", longl-1);
cctx->seqStore.longLengthType = ZSTD_llt_matchLength;
cctx->seqStore.longLengthPos = (U32)(longl-1);
} else {
DEBUGLOG(5, "long literals length detected at pos %zu", longl-nbSequences);
assert(longl <= 2* (nbSequences-1));
cctx->seqStore.longLengthType = ZSTD_llt_literalLength;
cctx->seqStore.longLengthPos = (U32)(longl-(nbSequences-1)-1);
}
}
} else {
for (seqNb = 0; seqNb < nbSequences - 1 ; seqNb++) {
U32 const litLength = inSeqs[seqNb].litLength;
U32 const matchLength = inSeqs[seqNb].matchLength;
U32 const ll0 = (litLength == 0);
U32 const offBase = ZSTD_finalizeOffBase(inSeqs[seqNb].offset, updatedRepcodes.rep, ll0);
DEBUGLOG(6, "Storing sequence: (of: %u, ml: %u, ll: %u)", offBase, matchLength, litLength);
ZSTD_storeSeqOnly(&cctx->seqStore, litLength, offBase, matchLength);
ZSTD_updateRep(updatedRepcodes.rep, offBase, ll0);
}
}
/* If we skipped repcode search while parsing, we need to update repcodes now */
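/* Example: with nbSequences == 6 (5 full sequences + the end-of-block delimiter),
 * lastSeqIdx == 4, so rep[0], rep[1] and rep[2] receive the offsets of sequences 4, 3 and 2,
 * i.e. the three most recent offsets, newest first. */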
if (!repcodeResolution && nbSequences > 1) {
U32* const rep = updatedRepcodes.rep;
if (nbSequences >= 4) {
U32 lastSeqIdx = (U32)nbSequences - 2; /* index of last full sequence */
rep[2] = inSeqs[lastSeqIdx - 2].offset;
rep[1] = inSeqs[lastSeqIdx - 1].offset;
rep[0] = inSeqs[lastSeqIdx].offset;
} else if (nbSequences == 3) {
rep[2] = rep[0];
rep[1] = inSeqs[0].offset;
rep[0] = inSeqs[1].offset;
} else {
assert(nbSequences == 2);
rep[2] = rep[1];
rep[1] = rep[0];
rep[0] = inSeqs[0].offset;
}
}
ZSTD_memcpy(cctx->blockState.nextCBlock->rep, updatedRepcodes.rep, sizeof(Repcodes_t));
return 0;
}
#if defined(ZSTD_ARCH_X86_AVX2)
BlockSummary ZSTD_get1BlockSummary(const ZSTD_Sequence* seqs, size_t nbSeqs)
{
size_t i;
__m256i const zeroVec = _mm256_setzero_si256();
__m256i sumVec = zeroVec; /* accumulates match+lit in 32-bit lanes */
ZSTD_ALIGNED(32) U32 tmp[8]; /* temporary buffer for reduction */
size_t mSum = 0, lSum = 0;
ZSTD_STATIC_ASSERT(sizeof(ZSTD_Sequence) == 16);
/* Process 2 structs (32 bytes) at a time */
for (i = 0; i + 2 <= nbSeqs; i += 2) {
/* Load two consecutive ZSTD_Sequence (8×4 = 32 bytes) */
__m256i data = _mm256_loadu_si256((const __m256i*)(const void*)&seqs[i]);
/* check end of block signal */
__m256i cmp = _mm256_cmpeq_epi32(data, zeroVec);
int cmp_res = _mm256_movemask_epi8(cmp);
/* indices for match lengths correspond to bits [8..11], [24..27]
* => combined mask = 0x0F000F00 */
ZSTD_STATIC_ASSERT(offsetof(ZSTD_Sequence, matchLength) == 8);
if (cmp_res & 0x0F000F00) break;
/* Accumulate in sumVec */
sumVec = _mm256_add_epi32(sumVec, data);
}
/* Horizontal reduction */
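/* sumVec holds 8 32-bit lanes: field-wise sums for two interleaved groups of sequences.
 * Lanes {0,1,2,3} = {offset, litLength, matchLength, rep} sums over even-indexed sequences,
 * lanes {4,5,6,7} = the same sums over odd-indexed sequences,
 * hence litLength total = tmp[1] + tmp[5] and matchLength total = tmp[2] + tmp[6] below. */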
_mm256_store_si256((__m256i*)tmp, sumVec);
lSum = tmp[1] + tmp[5];
mSum = tmp[2] + tmp[6];
/* Handle the leftover */
for (; i < nbSeqs; i++) {
lSum += seqs[i].litLength;
mSum += seqs[i].matchLength;
if (seqs[i].matchLength == 0) break; /* end of block */
}
if (i==nbSeqs) {
/* reaching end of sequences: end of block signal was not present */
BlockSummary bs;
bs.nbSequences = ERROR(externalSequences_invalid);
return bs;
}
{ BlockSummary bs;
bs.nbSequences = i+1;
bs.blockSize = lSum + mSum;
bs.litSize = lSum;
return bs;
}
}
#else
BlockSummary ZSTD_get1BlockSummary(const ZSTD_Sequence* seqs, size_t nbSeqs)
{
size_t totalMatchSize = 0;
size_t litSize = 0;
size_t n;
assert(seqs);
for (n=0; n<nbSeqs; n++) {
totalMatchSize += seqs[n].matchLength;
litSize += seqs[n].litLength;
if (seqs[n].matchLength == 0) {
assert(seqs[n].offset == 0);
break;
}
}
if (n==nbSeqs) {
BlockSummary bs;
bs.nbSequences = ERROR(externalSequences_invalid);
return bs;
}
{ BlockSummary bs;
bs.nbSequences = n+1;
bs.blockSize = litSize + totalMatchSize;
bs.litSize = litSize;
return bs;
}
}
#endif
static size_t
ZSTD_compressSequencesAndLiterals_internal(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const ZSTD_Sequence* inSeqs, size_t nbSequences,
const void* literals, size_t litSize, size_t srcSize)
{
size_t remaining = srcSize;
size_t cSize = 0;
BYTE* op = (BYTE*)dst;
int const repcodeResolution = (cctx->appliedParams.searchForExternalRepcodes == ZSTD_ps_enable);
assert(cctx->appliedParams.searchForExternalRepcodes != ZSTD_ps_auto);
DEBUGLOG(4, "ZSTD_compressSequencesAndLiterals_internal: nbSeqs=%zu, litSize=%zu", nbSequences, litSize);
RETURN_ERROR_IF(nbSequences == 0, externalSequences_invalid, "Requires at least 1 end-of-block");
/* Special case: empty frame */
if ((nbSequences == 1) && (inSeqs[0].litLength == 0)) {
U32 const cBlockHeader24 = 1 /* last block */ + (((U32)bt_raw)<<1);
RETURN_ERROR_IF(dstCapacity<3, dstSize_tooSmall, "No room for empty frame block header");
MEM_writeLE24(op, cBlockHeader24);
op += ZSTD_blockHeaderSize;
dstCapacity -= ZSTD_blockHeaderSize;
cSize += ZSTD_blockHeaderSize;
}
while (nbSequences) {
size_t compressedSeqsSize, cBlockSize, conversionStatus;
BlockSummary const block = ZSTD_get1BlockSummary(inSeqs, nbSequences);
U32 const lastBlock = (block.nbSequences == nbSequences);
FORWARD_IF_ERROR(block.nbSequences, "Error while trying to determine nb of sequences for a block");
assert(block.nbSequences <= nbSequences);
RETURN_ERROR_IF(block.litSize > litSize, externalSequences_invalid, "discrepancy: Sequences require more literals than present in buffer");
ZSTD_resetSeqStore(&cctx->seqStore);
conversionStatus = ZSTD_convertBlockSequences(cctx,
inSeqs, block.nbSequences,
repcodeResolution);
FORWARD_IF_ERROR(conversionStatus, "Bad sequence conversion");
inSeqs += block.nbSequences;
nbSequences -= block.nbSequences;
remaining -= block.blockSize;
/* Note: when blockSize is very small, the other variants emit it as an uncompressed block.
 * Here, we still send the sequences, because we don't have the original source to send uncompressed.
 * One could imagine reproducing the source from the sequences,
 * but that's complex, costly and memory-intensive, and it goes against the objectives of this variant. */
RETURN_ERROR_IF(dstCapacity < ZSTD_blockHeaderSize, dstSize_tooSmall, "not enough dstCapacity to write a new compressed block");
compressedSeqsSize = ZSTD_entropyCompressSeqStore_internal(
op + ZSTD_blockHeaderSize /* Leave space for block header */, dstCapacity - ZSTD_blockHeaderSize,
literals, block.litSize,
&cctx->seqStore,
&cctx->blockState.prevCBlock->entropy, &cctx->blockState.nextCBlock->entropy,
&cctx->appliedParams,
cctx->tmpWorkspace, cctx->tmpWkspSize /* statically allocated in resetCCtx */,
cctx->bmi2);
FORWARD_IF_ERROR(compressedSeqsSize, "Compressing sequences of block failed");
/* note: the spec forbids for any compressed block to be larger than maximum block size */
if (compressedSeqsSize > cctx->blockSizeMax) compressedSeqsSize = 0;
DEBUGLOG(5, "Compressed sequences size: %zu", compressedSeqsSize);
litSize -= block.litSize;
literals = (const char*)literals + block.litSize;
/* Note: it's difficult to check whether a block is RLE when only the literals are provided,
 * but it could conceivably be determined by analyzing the sequences directly */
if (compressedSeqsSize == 0) {
/* Sending an uncompressed block is out of reach, because the source is not provided.
 * In theory, one could use the sequences to regenerate the source, like a decompressor would,
 * but that's complex and memory-hungry, and it would defeat the purpose of this variant.
 * Current outcome: generate an error code.
 */
RETURN_ERROR(cannotProduce_uncompressedBlock, "ZSTD_compressSequencesAndLiterals cannot generate an uncompressed block");
} else {
U32 cBlockHeader;
assert(compressedSeqsSize > 1); /* no RLE */
/* Error checking and repcodes update */
ZSTD_blockState_confirmRepcodesAndEntropyTables(&cctx->blockState);
if (cctx->blockState.prevCBlock->entropy.fse.offcode_repeatMode == FSE_repeat_valid)
cctx->blockState.prevCBlock->entropy.fse.offcode_repeatMode = FSE_repeat_check;
/* Write block header into beginning of block*/
cBlockHeader = lastBlock + (((U32)bt_compressed)<<1) + (U32)(compressedSeqsSize << 3);
MEM_writeLE24(op, cBlockHeader);
cBlockSize = ZSTD_blockHeaderSize + compressedSeqsSize;
DEBUGLOG(5, "Writing out compressed block, size: %zu", cBlockSize);
}
cSize += cBlockSize;
op += cBlockSize;
dstCapacity -= cBlockSize;
cctx->isFirstBlock = 0;
DEBUGLOG(5, "cSize running total: %zu (remaining dstCapacity=%zu)", cSize, dstCapacity);
if (lastBlock) {
assert(nbSequences == 0);
break;
}
}
RETURN_ERROR_IF(litSize != 0, externalSequences_invalid, "literals must be entirely and exactly consumed");
RETURN_ERROR_IF(remaining != 0, externalSequences_invalid, "Sequences must represent a total of exactly srcSize=%zu", srcSize);
DEBUGLOG(4, "cSize final total: %zu", cSize);
return cSize;
}
size_t
ZSTD_compressSequencesAndLiterals(ZSTD_CCtx* cctx,
void* dst, size_t dstCapacity,
const ZSTD_Sequence* inSeqs, size_t inSeqsSize,
const void* literals, size_t litSize, size_t litCapacity,
size_t decompressedSize)
{
BYTE* op = (BYTE*)dst;
size_t cSize = 0;
/* Transparent initialization stage, same as compressStream2() */
DEBUGLOG(4, "ZSTD_compressSequencesAndLiterals (dstCapacity=%zu)", dstCapacity);
assert(cctx != NULL);
if (litCapacity < litSize) {
RETURN_ERROR(workSpace_tooSmall, "literals buffer is not large enough: must be at least 8 bytes larger than litSize (risk of read out-of-bound)");
}
FORWARD_IF_ERROR(ZSTD_CCtx_init_compressStream2(cctx, ZSTD_e_end, decompressedSize), "CCtx initialization failed");
if (cctx->appliedParams.blockDelimiters == ZSTD_sf_noBlockDelimiters) {
RETURN_ERROR(frameParameter_unsupported, "This mode is only compatible with explicit delimiters");
}
if (cctx->appliedParams.validateSequences) {
RETURN_ERROR(parameter_unsupported, "This mode is not compatible with Sequence validation");
}
if (cctx->appliedParams.fParams.checksumFlag) {
RETURN_ERROR(frameParameter_unsupported, "this mode is not compatible with frame checksum");
}
/* Begin writing output, starting with frame header */
{ size_t const frameHeaderSize = ZSTD_writeFrameHeader(op, dstCapacity,
&cctx->appliedParams, decompressedSize, cctx->dictID);
op += frameHeaderSize;
assert(frameHeaderSize <= dstCapacity);
dstCapacity -= frameHeaderSize;
cSize += frameHeaderSize;
}
/* Now generate compressed blocks */
{ size_t const cBlocksSize = ZSTD_compressSequencesAndLiterals_internal(cctx,
op, dstCapacity,
inSeqs, inSeqsSize,
literals, litSize, decompressedSize);
FORWARD_IF_ERROR(cBlocksSize, "Compressing blocks failed!");
cSize += cBlocksSize;
assert(cBlocksSize <= dstCapacity);
dstCapacity -= cBlocksSize;
}
DEBUGLOG(4, "Final compressed size: %zu", cSize);
return cSize;
}
/*====== Finalize ======*/
static ZSTD_inBuffer inBuffer_forEndFlush(const ZSTD_CStream* zcs)
{
const ZSTD_inBuffer nullInput = { NULL, 0, 0 };
const int stableInput = (zcs->appliedParams.inBufferMode == ZSTD_bm_stable);
return stableInput ? zcs->expectedInBuffer : nullInput;
}
/*! ZSTD_flushStream() :
* @return : amount of data remaining to flush */
size_t ZSTD_flushStream(ZSTD_CStream* zcs, ZSTD_outBuffer* output)
{
ZSTD_inBuffer input = inBuffer_forEndFlush(zcs);
input.size = input.pos; /* do not ingest more input during flush */
return ZSTD_compressStream2(zcs, output, &input, ZSTD_e_flush);
}
size_t ZSTD_endStream(ZSTD_CStream* zcs, ZSTD_outBuffer* output)
{
ZSTD_inBuffer input = inBuffer_forEndFlush(zcs);
size_t const remainingToFlush = ZSTD_compressStream2(zcs, output, &input, ZSTD_e_end);
FORWARD_IF_ERROR(remainingToFlush , "ZSTD_compressStream2(,,ZSTD_e_end) failed");
if (zcs->appliedParams.nbWorkers > 0) return remainingToFlush; /* minimal estimation */
/* single thread mode : attempt to calculate remaining to flush more precisely */
{ size_t const lastBlockSize = zcs->frameEnded ? 0 : ZSTD_BLOCKHEADERSIZE;
size_t const checksumSize = (size_t)(zcs->frameEnded ? 0 : zcs->appliedParams.fParams.checksumFlag * 4);
size_t const toFlush = remainingToFlush + lastBlockSize + checksumSize;
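/* e.g. if the frame is not yet ended and checksum is enabled:
 * toFlush = remainingToFlush + 3 (last block header) + 4 (frame checksum). */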
DEBUGLOG(4, "ZSTD_endStream : remaining to flush : %u", (unsigned)toFlush);
return toFlush;
}
}
/*-===== Pre-defined compression levels =====-*/
/**** start inlining clevels.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_CLEVELS_H
#define ZSTD_CLEVELS_H
#define ZSTD_STATIC_LINKING_ONLY /* ZSTD_compressionParameters */
/**** skipping file: ../zstd.h ****/
/*-===== Pre-defined compression levels =====-*/
#define ZSTD_MAX_CLEVEL 22
#ifdef __GNUC__
__attribute__((__unused__))
#endif
static const ZSTD_compressionParameters ZSTD_defaultCParameters[4][ZSTD_MAX_CLEVEL+1] = {
{ /* "default" - for any srcSize > 256 KB */
/* W, C, H, S, L, TL, strat */
{ 19, 12, 13, 1, 6, 1, ZSTD_fast }, /* base for negative levels */
{ 19, 13, 14, 1, 7, 0, ZSTD_fast }, /* level 1 */
{ 20, 15, 16, 1, 6, 0, ZSTD_fast }, /* level 2 */
{ 21, 16, 17, 1, 5, 0, ZSTD_dfast }, /* level 3 */
{ 21, 18, 18, 1, 5, 0, ZSTD_dfast }, /* level 4 */
{ 21, 18, 19, 3, 5, 2, ZSTD_greedy }, /* level 5 */
{ 21, 18, 19, 3, 5, 4, ZSTD_lazy }, /* level 6 */
{ 21, 19, 20, 4, 5, 8, ZSTD_lazy }, /* level 7 */
{ 21, 19, 20, 4, 5, 16, ZSTD_lazy2 }, /* level 8 */
{ 22, 20, 21, 4, 5, 16, ZSTD_lazy2 }, /* level 9 */
{ 22, 21, 22, 5, 5, 16, ZSTD_lazy2 }, /* level 10 */
{ 22, 21, 22, 6, 5, 16, ZSTD_lazy2 }, /* level 11 */
{ 22, 22, 23, 6, 5, 32, ZSTD_lazy2 }, /* level 12 */
{ 22, 22, 22, 4, 5, 32, ZSTD_btlazy2 }, /* level 13 */
{ 22, 22, 23, 5, 5, 32, ZSTD_btlazy2 }, /* level 14 */
{ 22, 23, 23, 6, 5, 32, ZSTD_btlazy2 }, /* level 15 */
{ 22, 22, 22, 5, 5, 48, ZSTD_btopt }, /* level 16 */
{ 23, 23, 22, 5, 4, 64, ZSTD_btopt }, /* level 17 */
{ 23, 23, 22, 6, 3, 64, ZSTD_btultra }, /* level 18 */
{ 23, 24, 22, 7, 3,256, ZSTD_btultra2}, /* level 19 */
{ 25, 25, 23, 7, 3,256, ZSTD_btultra2}, /* level 20 */
{ 26, 26, 24, 7, 3,512, ZSTD_btultra2}, /* level 21 */
{ 27, 27, 25, 9, 3,999, ZSTD_btultra2}, /* level 22 */
},
{ /* for srcSize <= 256 KB */
/* W, C, H, S, L, T, strat */
{ 18, 12, 13, 1, 5, 1, ZSTD_fast }, /* base for negative levels */
{ 18, 13, 14, 1, 6, 0, ZSTD_fast }, /* level 1 */
{ 18, 14, 14, 1, 5, 0, ZSTD_dfast }, /* level 2 */
{ 18, 16, 16, 1, 4, 0, ZSTD_dfast }, /* level 3 */
{ 18, 16, 17, 3, 5, 2, ZSTD_greedy }, /* level 4.*/
{ 18, 17, 18, 5, 5, 2, ZSTD_greedy }, /* level 5.*/
{ 18, 18, 19, 3, 5, 4, ZSTD_lazy }, /* level 6.*/
{ 18, 18, 19, 4, 4, 4, ZSTD_lazy }, /* level 7 */
{ 18, 18, 19, 4, 4, 8, ZSTD_lazy2 }, /* level 8 */
{ 18, 18, 19, 5, 4, 8, ZSTD_lazy2 }, /* level 9 */
{ 18, 18, 19, 6, 4, 8, ZSTD_lazy2 }, /* level 10 */
{ 18, 18, 19, 5, 4, 12, ZSTD_btlazy2 }, /* level 11.*/
{ 18, 19, 19, 7, 4, 12, ZSTD_btlazy2 }, /* level 12.*/
{ 18, 18, 19, 4, 4, 16, ZSTD_btopt }, /* level 13 */
{ 18, 18, 19, 4, 3, 32, ZSTD_btopt }, /* level 14.*/
{ 18, 18, 19, 6, 3,128, ZSTD_btopt }, /* level 15.*/
{ 18, 19, 19, 6, 3,128, ZSTD_btultra }, /* level 16.*/
{ 18, 19, 19, 8, 3,256, ZSTD_btultra }, /* level 17.*/
{ 18, 19, 19, 6, 3,128, ZSTD_btultra2}, /* level 18.*/
{ 18, 19, 19, 8, 3,256, ZSTD_btultra2}, /* level 19.*/
{ 18, 19, 19, 10, 3,512, ZSTD_btultra2}, /* level 20.*/
{ 18, 19, 19, 12, 3,512, ZSTD_btultra2}, /* level 21.*/
{ 18, 19, 19, 13, 3,999, ZSTD_btultra2}, /* level 22.*/
},
{ /* for srcSize <= 128 KB */
/* W, C, H, S, L, T, strat */
{ 17, 12, 12, 1, 5, 1, ZSTD_fast }, /* base for negative levels */
{ 17, 12, 13, 1, 6, 0, ZSTD_fast }, /* level 1 */
{ 17, 13, 15, 1, 5, 0, ZSTD_fast }, /* level 2 */
{ 17, 15, 16, 2, 5, 0, ZSTD_dfast }, /* level 3 */
{ 17, 17, 17, 2, 4, 0, ZSTD_dfast }, /* level 4 */
{ 17, 16, 17, 3, 4, 2, ZSTD_greedy }, /* level 5 */
{ 17, 16, 17, 3, 4, 4, ZSTD_lazy }, /* level 6 */
{ 17, 16, 17, 3, 4, 8, ZSTD_lazy2 }, /* level 7 */
{ 17, 16, 17, 4, 4, 8, ZSTD_lazy2 }, /* level 8 */
{ 17, 16, 17, 5, 4, 8, ZSTD_lazy2 }, /* level 9 */
{ 17, 16, 17, 6, 4, 8, ZSTD_lazy2 }, /* level 10 */
{ 17, 17, 17, 5, 4, 8, ZSTD_btlazy2 }, /* level 11 */
{ 17, 18, 17, 7, 4, 12, ZSTD_btlazy2 }, /* level 12 */
{ 17, 18, 17, 3, 4, 12, ZSTD_btopt }, /* level 13.*/
{ 17, 18, 17, 4, 3, 32, ZSTD_btopt }, /* level 14.*/
{ 17, 18, 17, 6, 3,256, ZSTD_btopt }, /* level 15.*/
{ 17, 18, 17, 6, 3,128, ZSTD_btultra }, /* level 16.*/
{ 17, 18, 17, 8, 3,256, ZSTD_btultra }, /* level 17.*/
{ 17, 18, 17, 10, 3,512, ZSTD_btultra }, /* level 18.*/
{ 17, 18, 17, 5, 3,256, ZSTD_btultra2}, /* level 19.*/
{ 17, 18, 17, 7, 3,512, ZSTD_btultra2}, /* level 20.*/
{ 17, 18, 17, 9, 3,512, ZSTD_btultra2}, /* level 21.*/
{ 17, 18, 17, 11, 3,999, ZSTD_btultra2}, /* level 22.*/
},
{ /* for srcSize <= 16 KB */
/* W, C, H, S, L, T, strat */
{ 14, 12, 13, 1, 5, 1, ZSTD_fast }, /* base for negative levels */
{ 14, 14, 15, 1, 5, 0, ZSTD_fast }, /* level 1 */
{ 14, 14, 15, 1, 4, 0, ZSTD_fast }, /* level 2 */
{ 14, 14, 15, 2, 4, 0, ZSTD_dfast }, /* level 3 */
{ 14, 14, 14, 4, 4, 2, ZSTD_greedy }, /* level 4 */
{ 14, 14, 14, 3, 4, 4, ZSTD_lazy }, /* level 5.*/
{ 14, 14, 14, 4, 4, 8, ZSTD_lazy2 }, /* level 6 */
{ 14, 14, 14, 6, 4, 8, ZSTD_lazy2 }, /* level 7 */
{ 14, 14, 14, 8, 4, 8, ZSTD_lazy2 }, /* level 8.*/
{ 14, 15, 14, 5, 4, 8, ZSTD_btlazy2 }, /* level 9.*/
{ 14, 15, 14, 9, 4, 8, ZSTD_btlazy2 }, /* level 10.*/
{ 14, 15, 14, 3, 4, 12, ZSTD_btopt }, /* level 11.*/
{ 14, 15, 14, 4, 3, 24, ZSTD_btopt }, /* level 12.*/
{ 14, 15, 14, 5, 3, 32, ZSTD_btultra }, /* level 13.*/
{ 14, 15, 15, 6, 3, 64, ZSTD_btultra }, /* level 14.*/
{ 14, 15, 15, 7, 3,256, ZSTD_btultra }, /* level 15.*/
{ 14, 15, 15, 5, 3, 48, ZSTD_btultra2}, /* level 16.*/
{ 14, 15, 15, 6, 3,128, ZSTD_btultra2}, /* level 17.*/
{ 14, 15, 15, 7, 3,256, ZSTD_btultra2}, /* level 18.*/
{ 14, 15, 15, 8, 3,256, ZSTD_btultra2}, /* level 19.*/
{ 14, 15, 15, 8, 3,512, ZSTD_btultra2}, /* level 20.*/
{ 14, 15, 15, 9, 3,512, ZSTD_btultra2}, /* level 21.*/
{ 14, 15, 15, 10, 3,999, ZSTD_btultra2}, /* level 22.*/
},
};
#endif /* ZSTD_CLEVELS_H */
/**** ended inlining clevels.h ****/
int ZSTD_maxCLevel(void) { return ZSTD_MAX_CLEVEL; }
int ZSTD_minCLevel(void) { return (int)-ZSTD_TARGETLENGTH_MAX; }
int ZSTD_defaultCLevel(void) { return ZSTD_CLEVEL_DEFAULT; }
static ZSTD_compressionParameters ZSTD_dedicatedDictSearch_getCParams(int const compressionLevel, size_t const dictSize)
{
ZSTD_compressionParameters cParams = ZSTD_getCParams_internal(compressionLevel, 0, dictSize, ZSTD_cpm_createCDict);
switch (cParams.strategy) {
case ZSTD_fast:
case ZSTD_dfast:
break;
case ZSTD_greedy:
case ZSTD_lazy:
case ZSTD_lazy2:
cParams.hashLog += ZSTD_LAZY_DDSS_BUCKET_LOG;
break;
case ZSTD_btlazy2:
case ZSTD_btopt:
case ZSTD_btultra:
case ZSTD_btultra2:
break;
}
return cParams;
}
static int ZSTD_dedicatedDictSearch_isSupported(
ZSTD_compressionParameters const* cParams)
{
return (cParams->strategy >= ZSTD_greedy)
&& (cParams->strategy <= ZSTD_lazy2)
&& (cParams->hashLog > cParams->chainLog)
&& (cParams->chainLog <= 24);
}
/**
* Reverses the adjustment applied to cparams when enabling dedicated dict
* search. This is used to recover the params set to be used in the working
* context. (Otherwise, those tables would also grow.)
*/
static void ZSTD_dedicatedDictSearch_revertCParams(
ZSTD_compressionParameters* cParams) {
switch (cParams->strategy) {
case ZSTD_fast:
case ZSTD_dfast:
break;
case ZSTD_greedy:
case ZSTD_lazy:
case ZSTD_lazy2:
cParams->hashLog -= ZSTD_LAZY_DDSS_BUCKET_LOG;
if (cParams->hashLog < ZSTD_HASHLOG_MIN) {
cParams->hashLog = ZSTD_HASHLOG_MIN;
}
break;
case ZSTD_btlazy2:
case ZSTD_btopt:
case ZSTD_btultra:
case ZSTD_btultra2:
break;
}
}
static U64 ZSTD_getCParamRowSize(U64 srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode)
{
switch (mode) {
case ZSTD_cpm_unknown:
case ZSTD_cpm_noAttachDict:
case ZSTD_cpm_createCDict:
break;
case ZSTD_cpm_attachDict:
dictSize = 0;
break;
default:
assert(0);
break;
}
{ int const unknown = srcSizeHint == ZSTD_CONTENTSIZE_UNKNOWN;
size_t const addedSize = unknown && dictSize > 0 ? 500 : 0;
return unknown && dictSize == 0 ? ZSTD_CONTENTSIZE_UNKNOWN : srcSizeHint+dictSize+addedSize;
}
}
/*! ZSTD_getCParams_internal() :
* @return ZSTD_compressionParameters structure for a selected compression level, srcSize and dictSize.
* Note: srcSizeHint 0 means 0, use ZSTD_CONTENTSIZE_UNKNOWN for unknown.
* Use dictSize == 0 for unknown or unused.
* Note: `mode` controls how we treat the `dictSize`. See docs for `ZSTD_CParamMode_e`. */
static ZSTD_compressionParameters ZSTD_getCParams_internal(int compressionLevel, unsigned long long srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode)
{
U64 const rSize = ZSTD_getCParamRowSize(srcSizeHint, dictSize, mode);
U32 const tableID = (rSize <= 256 KB) + (rSize <= 128 KB) + (rSize <= 16 KB);
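/* e.g. rSize = 100 KB => tableID = 1 + 1 + 0 = 2, selecting the "srcSize <= 128 KB" table;
 * rSize = ZSTD_CONTENTSIZE_UNKNOWN (very large) => tableID = 0, the default table. */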
int row;
DEBUGLOG(5, "ZSTD_getCParams_internal (cLevel=%i)", compressionLevel);
/* row */
if (compressionLevel == 0) row = ZSTD_CLEVEL_DEFAULT; /* 0 == default */
else if (compressionLevel < 0) row = 0; /* entry 0 is baseline for fast mode */
else if (compressionLevel > ZSTD_MAX_CLEVEL) row = ZSTD_MAX_CLEVEL;
else row = compressionLevel;
{ ZSTD_compressionParameters cp = ZSTD_defaultCParameters[tableID][row];
DEBUGLOG(5, "ZSTD_getCParams_internal selected tableID: %u row: %u strat: %u", tableID, row, (U32)cp.strategy);
/* acceleration factor */
if (compressionLevel < 0) {
int const clampedCompressionLevel = MAX(ZSTD_minCLevel(), compressionLevel);
cp.targetLength = (unsigned)(-clampedCompressionLevel);
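/* e.g. compressionLevel = -5 => targetLength = 5,
 * which the fastest strategies use as an acceleration factor
 * (larger value => faster compression, lower ratio). */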
}
/* refine parameters based on srcSize & dictSize */
return ZSTD_adjustCParams_internal(cp, srcSizeHint, dictSize, mode, ZSTD_ps_auto);
}
}
/*! ZSTD_getCParams() :
* @return ZSTD_compressionParameters structure for a selected compression level, srcSize and dictSize.
* Size values are optional, provide 0 if not known or unused */
ZSTD_compressionParameters ZSTD_getCParams(int compressionLevel, unsigned long long srcSizeHint, size_t dictSize)
{
if (srcSizeHint == 0) srcSizeHint = ZSTD_CONTENTSIZE_UNKNOWN;
return ZSTD_getCParams_internal(compressionLevel, srcSizeHint, dictSize, ZSTD_cpm_unknown);
}
/*! ZSTD_getParams() :
* same idea as ZSTD_getCParams()
* @return a `ZSTD_parameters` structure (instead of `ZSTD_compressionParameters`).
* Fields of `ZSTD_frameParameters` are set to default values */
static ZSTD_parameters
ZSTD_getParams_internal(int compressionLevel, unsigned long long srcSizeHint, size_t dictSize, ZSTD_CParamMode_e mode)
{
ZSTD_parameters params;
ZSTD_compressionParameters const cParams = ZSTD_getCParams_internal(compressionLevel, srcSizeHint, dictSize, mode);
DEBUGLOG(5, "ZSTD_getParams (cLevel=%i)", compressionLevel);
ZSTD_memset(&params, 0, sizeof(params));
params.cParams = cParams;
params.fParams.contentSizeFlag = 1;
return params;
}
/*! ZSTD_getParams() :
* same idea as ZSTD_getCParams()
* @return a `ZSTD_parameters` structure (instead of `ZSTD_compressionParameters`).
* Fields of `ZSTD_frameParameters` are set to default values */
ZSTD_parameters ZSTD_getParams(int compressionLevel, unsigned long long srcSizeHint, size_t dictSize)
{
if (srcSizeHint == 0) srcSizeHint = ZSTD_CONTENTSIZE_UNKNOWN;
return ZSTD_getParams_internal(compressionLevel, srcSizeHint, dictSize, ZSTD_cpm_unknown);
}
void ZSTD_registerSequenceProducer(
ZSTD_CCtx* zc,
void* extSeqProdState,
ZSTD_sequenceProducer_F extSeqProdFunc)
{
assert(zc != NULL);
ZSTD_CCtxParams_registerSequenceProducer(
&zc->requestedParams, extSeqProdState, extSeqProdFunc
);
}
void ZSTD_CCtxParams_registerSequenceProducer(
ZSTD_CCtx_params* params,
void* extSeqProdState,
ZSTD_sequenceProducer_F extSeqProdFunc)
{
assert(params != NULL);
if (extSeqProdFunc != NULL) {
params->extSeqProdFunc = extSeqProdFunc;
params->extSeqProdState = extSeqProdState;
} else {
params->extSeqProdFunc = NULL;
params->extSeqProdState = NULL;
}
}
/**** ended inlining compress/zstd_compress.c ****/
/**** start inlining compress/zstd_double_fast.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: zstd_double_fast.h ****/
#ifndef ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_fillDoubleHashTableForCDict(ZSTD_MatchState_t* ms,
void const* end, ZSTD_dictTableLoadMethod_e dtlm)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashLarge = ms->hashTable;
U32 const hBitsL = cParams->hashLog + ZSTD_SHORT_CACHE_TAG_BITS;
U32 const mls = cParams->minMatch;
U32* const hashSmall = ms->chainTable;
U32 const hBitsS = cParams->chainLog + ZSTD_SHORT_CACHE_TAG_BITS;
const BYTE* const base = ms->window.base;
const BYTE* ip = base + ms->nextToUpdate;
const BYTE* const iend = ((const BYTE*)end) - HASH_READ_SIZE;
const U32 fastHashFillStep = 3;
/* Always insert every fastHashFillStep position into the hash tables.
* Insert the other positions into the large hash table if their entry
* is empty.
*/
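/* e.g. with fastHashFillStep == 3, each iteration considers positions p, p+1 and p+2:
 * p is inserted into both tables; p+1 and p+2 are inserted into the large table only
 * when their slot is still empty, and only in ZSTD_dtlm_full mode
 * (ZSTD_dtlm_fast breaks out after p). */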
for (; ip + fastHashFillStep - 1 <= iend; ip += fastHashFillStep) {
U32 const curr = (U32)(ip - base);
U32 i;
for (i = 0; i < fastHashFillStep; ++i) {
size_t const smHashAndTag = ZSTD_hashPtr(ip + i, hBitsS, mls);
size_t const lgHashAndTag = ZSTD_hashPtr(ip + i, hBitsL, 8);
if (i == 0) {
ZSTD_writeTaggedIndex(hashSmall, smHashAndTag, curr + i);
}
if (i == 0 || hashLarge[lgHashAndTag >> ZSTD_SHORT_CACHE_TAG_BITS] == 0) {
ZSTD_writeTaggedIndex(hashLarge, lgHashAndTag, curr + i);
}
/* Only load extra positions for ZSTD_dtlm_full */
if (dtlm == ZSTD_dtlm_fast)
break;
} }
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_fillDoubleHashTableForCCtx(ZSTD_MatchState_t* ms,
void const* end, ZSTD_dictTableLoadMethod_e dtlm)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashLarge = ms->hashTable;
U32 const hBitsL = cParams->hashLog;
U32 const mls = cParams->minMatch;
U32* const hashSmall = ms->chainTable;
U32 const hBitsS = cParams->chainLog;
const BYTE* const base = ms->window.base;
const BYTE* ip = base + ms->nextToUpdate;
const BYTE* const iend = ((const BYTE*)end) - HASH_READ_SIZE;
const U32 fastHashFillStep = 3;
/* Always insert every fastHashFillStep position into the hash tables.
* Insert the other positions into the large hash table if their entry
* is empty.
*/
for (; ip + fastHashFillStep - 1 <= iend; ip += fastHashFillStep) {
U32 const curr = (U32)(ip - base);
U32 i;
for (i = 0; i < fastHashFillStep; ++i) {
size_t const smHash = ZSTD_hashPtr(ip + i, hBitsS, mls);
size_t const lgHash = ZSTD_hashPtr(ip + i, hBitsL, 8);
if (i == 0)
hashSmall[smHash] = curr + i;
if (i == 0 || hashLarge[lgHash] == 0)
hashLarge[lgHash] = curr + i;
/* Only load extra positions for ZSTD_dtlm_full */
if (dtlm == ZSTD_dtlm_fast)
break;
} }
}
void ZSTD_fillDoubleHashTable(ZSTD_MatchState_t* ms,
const void* const end,
ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp)
{
if (tfp == ZSTD_tfp_forCDict) {
ZSTD_fillDoubleHashTableForCDict(ms, end, dtlm);
} else {
ZSTD_fillDoubleHashTableForCCtx(ms, end, dtlm);
}
}
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_doubleFast_noDict_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize, U32 const mls /* template */)
{
ZSTD_compressionParameters const* cParams = &ms->cParams;
U32* const hashLong = ms->hashTable;
const U32 hBitsL = cParams->hashLog;
U32* const hashSmall = ms->chainTable;
const U32 hBitsS = cParams->chainLog;
const BYTE* const base = ms->window.base;
const BYTE* const istart = (const BYTE*)src;
const BYTE* anchor = istart;
const U32 endIndex = (U32)((size_t)(istart - base) + srcSize);
/* presumes that, if there is a dictionary, it must be using Attach mode */
const U32 prefixLowestIndex = ZSTD_getLowestPrefixIndex(ms, endIndex, cParams->windowLog);
const BYTE* const prefixLowest = base + prefixLowestIndex;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - HASH_READ_SIZE;
U32 offset_1=rep[0], offset_2=rep[1];
U32 offsetSaved1 = 0, offsetSaved2 = 0;
size_t mLength;
U32 offset;
U32 curr;
/* how many positions to search before increasing step size */
const size_t kStepIncr = 1 << kSearchStrength;
/* the position at which to increment the step size if no match is found */
const BYTE* nextStep;
size_t step; /* the current step size */
size_t hl0; /* the long hash at ip */
size_t hl1; /* the long hash at ip1 */
U32 idxl0; /* the long match index for ip */
U32 idxl1; /* the long match index for ip1 */
const BYTE* matchl0; /* the long match for ip */
const BYTE* matchs0; /* the short match for ip */
const BYTE* matchl1; /* the long match for ip1 */
const BYTE* matchs0_safe; /* matchs0 or safe address */
const BYTE* ip = istart; /* the current position */
const BYTE* ip1; /* the next position */
/* Array of ~random data, should have low probability of matching data
* we load from here instead of from tables, if matchl0/matchl1 are
* invalid indices. Used to avoid unpredictable branches. */
const BYTE dummy[] = {0x12,0x34,0x56,0x78,0x9a,0xbc,0xde,0xf0,0xe2,0xb4};
DEBUGLOG(5, "ZSTD_compressBlock_doubleFast_noDict_generic");
/* init */
ip += ((ip - prefixLowest) == 0);
{
U32 const current = (U32)(ip - base);
U32 const windowLow = ZSTD_getLowestPrefixIndex(ms, current, cParams->windowLog);
U32 const maxRep = current - windowLow;
if (offset_2 > maxRep) offsetSaved2 = offset_2, offset_2 = 0;
if (offset_1 > maxRep) offsetSaved1 = offset_1, offset_1 = 0;
}
/* Outer Loop: one iteration per match found and stored */
while (1) {
step = 1;
nextStep = ip + kStepIncr;
ip1 = ip + step;
if (ip1 > ilimit) {
goto _cleanup;
}
hl0 = ZSTD_hashPtr(ip, hBitsL, 8);
idxl0 = hashLong[hl0];
matchl0 = base + idxl0;
/* Inner Loop: one iteration per search / position */
do {
const size_t hs0 = ZSTD_hashPtr(ip, hBitsS, mls);
const U32 idxs0 = hashSmall[hs0];
curr = (U32)(ip-base);
matchs0 = base + idxs0;
hashLong[hl0] = hashSmall[hs0] = curr; /* update hash tables */
/* check noDict repcode */
if ((offset_1 > 0) & (MEM_read32(ip+1-offset_1) == MEM_read32(ip+1))) {
mLength = ZSTD_count(ip+1+4, ip+1+4-offset_1, iend) + 4;
ip++;
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, REPCODE1_TO_OFFBASE, mLength);
goto _match_stored;
}
hl1 = ZSTD_hashPtr(ip1, hBitsL, 8);
/* idxl0 > prefixLowestIndex is a (somewhat) unpredictable branch.
* However, the expression below compiles into a conditional move. Since
* a match is unlikely and we only *branch* on idxl0 > prefixLowestIndex
* if there is a match, all branches become predictable. */
{ const BYTE* const matchl0_safe = ZSTD_selectAddr(idxl0, prefixLowestIndex, matchl0, &dummy[0]);
/* check prefix long match */
if (MEM_read64(matchl0_safe) == MEM_read64(ip) && matchl0_safe == matchl0) {
mLength = ZSTD_count(ip+8, matchl0+8, iend) + 8;
offset = (U32)(ip-matchl0);
while (((ip>anchor) & (matchl0>prefixLowest)) && (ip[-1] == matchl0[-1])) { ip--; matchl0--; mLength++; } /* catch up */
goto _match_found;
} }
idxl1 = hashLong[hl1];
matchl1 = base + idxl1;
/* Same optimization as matchl0 above */
matchs0_safe = ZSTD_selectAddr(idxs0, prefixLowestIndex, matchs0, &dummy[0]);
/* check prefix short match */
if(MEM_read32(matchs0_safe) == MEM_read32(ip) && matchs0_safe == matchs0) {
goto _search_next_long;
}
if (ip1 >= nextStep) {
PREFETCH_L1(ip1 + 64);
PREFETCH_L1(ip1 + 128);
step++;
nextStep += kStepIncr;
}
ip = ip1;
ip1 += step;
hl0 = hl1;
idxl0 = idxl1;
matchl0 = matchl1;
#if defined(__aarch64__)
PREFETCH_L1(ip+256);
#endif
} while (ip1 <= ilimit);
_cleanup:
/* If offset_1 started invalid (offsetSaved1 != 0) and became valid (offset_1 != 0),
* rotate saved offsets. See comment in ZSTD_compressBlock_fast_noDict for more context. */
offsetSaved2 = ((offsetSaved1 != 0) && (offset_1 != 0)) ? offsetSaved1 : offsetSaved2;
/* save reps for next block */
rep[0] = offset_1 ? offset_1 : offsetSaved1;
rep[1] = offset_2 ? offset_2 : offsetSaved2;
/* Return the last literals size */
return (size_t)(iend - anchor);
_search_next_long:
/* short match found: let's check for a longer one */
mLength = ZSTD_count(ip+4, matchs0+4, iend) + 4;
offset = (U32)(ip - matchs0);
/* check long match at +1 position */
if ((idxl1 > prefixLowestIndex) && (MEM_read64(matchl1) == MEM_read64(ip1))) {
size_t const l1len = ZSTD_count(ip1+8, matchl1+8, iend) + 8;
if (l1len > mLength) {
/* use the long match instead */
ip = ip1;
mLength = l1len;
offset = (U32)(ip-matchl1);
matchs0 = matchl1;
}
}
while (((ip>anchor) & (matchs0>prefixLowest)) && (ip[-1] == matchs0[-1])) { ip--; matchs0--; mLength++; } /* complete backward */
/* fall-through */
_match_found: /* requires ip, offset, mLength */
offset_2 = offset_1;
offset_1 = offset;
if (step < 4) {
/* It is unsafe to write this value back to the hashtable when ip1 is
* greater than or equal to the new ip we will have after we're done
* processing this match. Rather than perform that test directly
* (ip1 >= ip + mLength), which costs speed in practice, we do a simpler,
* more predictable test. The minimum match length, even for a short match, is
* 4 bytes, so as long as step (the initial distance between ip and ip1)
* is less than 4, we know ip1 < the new ip. */
hashLong[hl1] = (U32)(ip1 - base);
}
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, OFFSET_TO_OFFBASE(offset), mLength);
_match_stored:
/* match found */
ip += mLength;
anchor = ip;
if (ip <= ilimit) {
/* Complementary insertion */
/* done after iLimit test, as candidates could be > iend-8 */
{ U32 const indexToInsert = curr+2;
hashLong[ZSTD_hashPtr(base+indexToInsert, hBitsL, 8)] = indexToInsert;
hashLong[ZSTD_hashPtr(ip-2, hBitsL, 8)] = (U32)(ip-2-base);
hashSmall[ZSTD_hashPtr(base+indexToInsert, hBitsS, mls)] = indexToInsert;
hashSmall[ZSTD_hashPtr(ip-1, hBitsS, mls)] = (U32)(ip-1-base);
}
/* check immediate repcode */
while ( (ip <= ilimit)
&& ( (offset_2>0)
& (MEM_read32(ip) == MEM_read32(ip - offset_2)) )) {
/* store sequence */
size_t const rLength = ZSTD_count(ip+4, ip+4-offset_2, iend) + 4;
U32 const tmpOff = offset_2; offset_2 = offset_1; offset_1 = tmpOff; /* swap offset_2 <=> offset_1 */
hashSmall[ZSTD_hashPtr(ip, hBitsS, mls)] = (U32)(ip-base);
hashLong[ZSTD_hashPtr(ip, hBitsL, 8)] = (U32)(ip-base);
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, rLength);
ip += rLength;
anchor = ip;
continue; /* faster when present ... (?) */
}
}
}
}
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_doubleFast_dictMatchState_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize,
U32 const mls /* template */)
{
ZSTD_compressionParameters const* cParams = &ms->cParams;
U32* const hashLong = ms->hashTable;
const U32 hBitsL = cParams->hashLog;
U32* const hashSmall = ms->chainTable;
const U32 hBitsS = cParams->chainLog;
const BYTE* const base = ms->window.base;
const BYTE* const istart = (const BYTE*)src;
const BYTE* ip = istart;
const BYTE* anchor = istart;
const U32 endIndex = (U32)((size_t)(istart - base) + srcSize);
/* presumes that, if there is a dictionary, it must be using Attach mode */
const U32 prefixLowestIndex = ZSTD_getLowestPrefixIndex(ms, endIndex, cParams->windowLog);
const BYTE* const prefixLowest = base + prefixLowestIndex;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - HASH_READ_SIZE;
U32 offset_1=rep[0], offset_2=rep[1];
const ZSTD_MatchState_t* const dms = ms->dictMatchState;
const ZSTD_compressionParameters* const dictCParams = &dms->cParams;
const U32* const dictHashLong = dms->hashTable;
const U32* const dictHashSmall = dms->chainTable;
const U32 dictStartIndex = dms->window.dictLimit;
const BYTE* const dictBase = dms->window.base;
const BYTE* const dictStart = dictBase + dictStartIndex;
const BYTE* const dictEnd = dms->window.nextSrc;
const U32 dictIndexDelta = prefixLowestIndex - (U32)(dictEnd - dictBase);
const U32 dictHBitsL = dictCParams->hashLog + ZSTD_SHORT_CACHE_TAG_BITS;
const U32 dictHBitsS = dictCParams->chainLog + ZSTD_SHORT_CACHE_TAG_BITS;
const U32 dictAndPrefixLength = (U32)((ip - prefixLowest) + (dictEnd - dictStart));
DEBUGLOG(5, "ZSTD_compressBlock_doubleFast_dictMatchState_generic");
/* if a dictionary is attached, it must be within window range */
assert(ms->window.dictLimit + (1U << cParams->windowLog) >= endIndex);
if (ms->prefetchCDictTables) {
size_t const hashTableBytes = (((size_t)1) << dictCParams->hashLog) * sizeof(U32);
size_t const chainTableBytes = (((size_t)1) << dictCParams->chainLog) * sizeof(U32);
PREFETCH_AREA(dictHashLong, hashTableBytes);
PREFETCH_AREA(dictHashSmall, chainTableBytes);
}
/* init */
ip += (dictAndPrefixLength == 0);
/* dictMatchState repCode checks don't currently handle repCode == 0
* disabling. */
assert(offset_1 <= dictAndPrefixLength);
assert(offset_2 <= dictAndPrefixLength);
/* Main Search Loop */
while (ip < ilimit) { /* < instead of <=, because repcode check at (ip+1) */
size_t mLength;
U32 offset;
size_t const h2 = ZSTD_hashPtr(ip, hBitsL, 8);
size_t const h = ZSTD_hashPtr(ip, hBitsS, mls);
size_t const dictHashAndTagL = ZSTD_hashPtr(ip, dictHBitsL, 8);
size_t const dictHashAndTagS = ZSTD_hashPtr(ip, dictHBitsS, mls);
U32 const dictMatchIndexAndTagL = dictHashLong[dictHashAndTagL >> ZSTD_SHORT_CACHE_TAG_BITS];
U32 const dictMatchIndexAndTagS = dictHashSmall[dictHashAndTagS >> ZSTD_SHORT_CACHE_TAG_BITS];
int const dictTagsMatchL = ZSTD_comparePackedTags(dictMatchIndexAndTagL, dictHashAndTagL);
int const dictTagsMatchS = ZSTD_comparePackedTags(dictMatchIndexAndTagS, dictHashAndTagS);
U32 const curr = (U32)(ip-base);
U32 const matchIndexL = hashLong[h2];
U32 matchIndexS = hashSmall[h];
const BYTE* matchLong = base + matchIndexL;
const BYTE* match = base + matchIndexS;
const U32 repIndex = curr + 1 - offset_1;
const BYTE* repMatch = (repIndex < prefixLowestIndex) ?
dictBase + (repIndex - dictIndexDelta) :
base + repIndex;
hashLong[h2] = hashSmall[h] = curr; /* update hash tables */
/* check repcode */
if ((ZSTD_index_overlap_check(prefixLowestIndex, repIndex))
&& (MEM_read32(repMatch) == MEM_read32(ip+1)) ) {
const BYTE* repMatchEnd = repIndex < prefixLowestIndex ? dictEnd : iend;
mLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repMatchEnd, prefixLowest) + 4;
ip++;
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, REPCODE1_TO_OFFBASE, mLength);
goto _match_stored;
}
if ((matchIndexL >= prefixLowestIndex) && (MEM_read64(matchLong) == MEM_read64(ip))) {
/* check prefix long match */
mLength = ZSTD_count(ip+8, matchLong+8, iend) + 8;
offset = (U32)(ip-matchLong);
while (((ip>anchor) & (matchLong>prefixLowest)) && (ip[-1] == matchLong[-1])) { ip--; matchLong--; mLength++; } /* catch up */
goto _match_found;
} else if (dictTagsMatchL) {
/* check dictMatchState long match */
U32 const dictMatchIndexL = dictMatchIndexAndTagL >> ZSTD_SHORT_CACHE_TAG_BITS;
const BYTE* dictMatchL = dictBase + dictMatchIndexL;
assert(dictMatchL < dictEnd);
if (dictMatchL > dictStart && MEM_read64(dictMatchL) == MEM_read64(ip)) {
mLength = ZSTD_count_2segments(ip+8, dictMatchL+8, iend, dictEnd, prefixLowest) + 8;
offset = (U32)(curr - dictMatchIndexL - dictIndexDelta);
while (((ip>anchor) & (dictMatchL>dictStart)) && (ip[-1] == dictMatchL[-1])) { ip--; dictMatchL--; mLength++; } /* catch up */
goto _match_found;
} }
if (matchIndexS > prefixLowestIndex) {
/* short match candidate */
if (MEM_read32(match) == MEM_read32(ip)) {
goto _search_next_long;
}
} else if (dictTagsMatchS) {
/* check dictMatchState short match */
U32 const dictMatchIndexS = dictMatchIndexAndTagS >> ZSTD_SHORT_CACHE_TAG_BITS;
match = dictBase + dictMatchIndexS;
matchIndexS = dictMatchIndexS + dictIndexDelta;
if (match > dictStart && MEM_read32(match) == MEM_read32(ip)) {
goto _search_next_long;
} }
ip += ((ip-anchor) >> kSearchStrength) + 1;
#if defined(__aarch64__)
PREFETCH_L1(ip+256);
#endif
continue;
_search_next_long:
{ size_t const hl3 = ZSTD_hashPtr(ip+1, hBitsL, 8);
size_t const dictHashAndTagL3 = ZSTD_hashPtr(ip+1, dictHBitsL, 8);
U32 const matchIndexL3 = hashLong[hl3];
U32 const dictMatchIndexAndTagL3 = dictHashLong[dictHashAndTagL3 >> ZSTD_SHORT_CACHE_TAG_BITS];
int const dictTagsMatchL3 = ZSTD_comparePackedTags(dictMatchIndexAndTagL3, dictHashAndTagL3);
const BYTE* matchL3 = base + matchIndexL3;
hashLong[hl3] = curr + 1;
/* check prefix long +1 match */
if ((matchIndexL3 >= prefixLowestIndex) && (MEM_read64(matchL3) == MEM_read64(ip+1))) {
mLength = ZSTD_count(ip+9, matchL3+8, iend) + 8;
ip++;
offset = (U32)(ip-matchL3);
while (((ip>anchor) & (matchL3>prefixLowest)) && (ip[-1] == matchL3[-1])) { ip--; matchL3--; mLength++; } /* catch up */
goto _match_found;
} else if (dictTagsMatchL3) {
/* check dict long +1 match */
U32 const dictMatchIndexL3 = dictMatchIndexAndTagL3 >> ZSTD_SHORT_CACHE_TAG_BITS;
const BYTE* dictMatchL3 = dictBase + dictMatchIndexL3;
assert(dictMatchL3 < dictEnd);
if (dictMatchL3 > dictStart && MEM_read64(dictMatchL3) == MEM_read64(ip+1)) {
mLength = ZSTD_count_2segments(ip+1+8, dictMatchL3+8, iend, dictEnd, prefixLowest) + 8;
ip++;
offset = (U32)(curr + 1 - dictMatchIndexL3 - dictIndexDelta);
while (((ip>anchor) & (dictMatchL3>dictStart)) && (ip[-1] == dictMatchL3[-1])) { ip--; dictMatchL3--; mLength++; } /* catch up */
goto _match_found;
} } }
/* if no long +1 match, explore the short match we found */
if (matchIndexS < prefixLowestIndex) {
mLength = ZSTD_count_2segments(ip+4, match+4, iend, dictEnd, prefixLowest) + 4;
offset = (U32)(curr - matchIndexS);
while (((ip>anchor) & (match>dictStart)) && (ip[-1] == match[-1])) { ip--; match--; mLength++; } /* catch up */
} else {
mLength = ZSTD_count(ip+4, match+4, iend) + 4;
offset = (U32)(ip - match);
while (((ip>anchor) & (match>prefixLowest)) && (ip[-1] == match[-1])) { ip--; match--; mLength++; } /* catch up */
}
_match_found:
offset_2 = offset_1;
offset_1 = offset;
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, OFFSET_TO_OFFBASE(offset), mLength);
_match_stored:
/* match found */
ip += mLength;
anchor = ip;
if (ip <= ilimit) {
/* Complementary insertion */
/* done after iLimit test, as candidates could be > iend-8 */
{ U32 const indexToInsert = curr+2;
hashLong[ZSTD_hashPtr(base+indexToInsert, hBitsL, 8)] = indexToInsert;
hashLong[ZSTD_hashPtr(ip-2, hBitsL, 8)] = (U32)(ip-2-base);
hashSmall[ZSTD_hashPtr(base+indexToInsert, hBitsS, mls)] = indexToInsert;
hashSmall[ZSTD_hashPtr(ip-1, hBitsS, mls)] = (U32)(ip-1-base);
}
/* check immediate repcode */
while (ip <= ilimit) {
U32 const current2 = (U32)(ip-base);
U32 const repIndex2 = current2 - offset_2;
const BYTE* repMatch2 = repIndex2 < prefixLowestIndex ?
dictBase + repIndex2 - dictIndexDelta :
base + repIndex2;
if ( (ZSTD_index_overlap_check(prefixLowestIndex, repIndex2))
&& (MEM_read32(repMatch2) == MEM_read32(ip)) ) {
const BYTE* const repEnd2 = repIndex2 < prefixLowestIndex ? dictEnd : iend;
size_t const repLength2 = ZSTD_count_2segments(ip+4, repMatch2+4, iend, repEnd2, prefixLowest) + 4;
U32 tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset; /* swap offset_2 <=> offset_1 */
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, repLength2);
hashSmall[ZSTD_hashPtr(ip, hBitsS, mls)] = current2;
hashLong[ZSTD_hashPtr(ip, hBitsL, 8)] = current2;
ip += repLength2;
anchor = ip;
continue;
}
break;
}
}
} /* while (ip < ilimit) */
/* save reps for next block */
rep[0] = offset_1;
rep[1] = offset_2;
/* Return the last literals size */
return (size_t)(iend - anchor);
}
#define ZSTD_GEN_DFAST_FN(dictMode, mls) \
static size_t ZSTD_compressBlock_doubleFast_##dictMode##_##mls( \
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM], \
void const* src, size_t srcSize) \
{ \
return ZSTD_compressBlock_doubleFast_##dictMode##_generic(ms, seqStore, rep, src, srcSize, mls); \
}
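/* Example expansion: ZSTD_GEN_DFAST_FN(noDict, 4) defines
 * ZSTD_compressBlock_doubleFast_noDict_4(), a thin wrapper which calls
 * ZSTD_compressBlock_doubleFast_noDict_generic() with mls hard-coded to 4,
 * letting the compiler specialize the generic template for each match length. */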
ZSTD_GEN_DFAST_FN(noDict, 4)
ZSTD_GEN_DFAST_FN(noDict, 5)
ZSTD_GEN_DFAST_FN(noDict, 6)
ZSTD_GEN_DFAST_FN(noDict, 7)
ZSTD_GEN_DFAST_FN(dictMatchState, 4)
ZSTD_GEN_DFAST_FN(dictMatchState, 5)
ZSTD_GEN_DFAST_FN(dictMatchState, 6)
ZSTD_GEN_DFAST_FN(dictMatchState, 7)
size_t ZSTD_compressBlock_doubleFast(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
const U32 mls = ms->cParams.minMatch;
switch(mls)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_doubleFast_noDict_4(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_doubleFast_noDict_5(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_doubleFast_noDict_6(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_doubleFast_noDict_7(ms, seqStore, rep, src, srcSize);
}
}
size_t ZSTD_compressBlock_doubleFast_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
const U32 mls = ms->cParams.minMatch;
switch(mls)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_doubleFast_dictMatchState_4(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_doubleFast_dictMatchState_5(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_doubleFast_dictMatchState_6(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_doubleFast_dictMatchState_7(ms, seqStore, rep, src, srcSize);
}
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_doubleFast_extDict_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize,
U32 const mls /* template */)
{
ZSTD_compressionParameters const* cParams = &ms->cParams;
U32* const hashLong = ms->hashTable;
U32 const hBitsL = cParams->hashLog;
U32* const hashSmall = ms->chainTable;
U32 const hBitsS = cParams->chainLog;
const BYTE* const istart = (const BYTE*)src;
const BYTE* ip = istart;
const BYTE* anchor = istart;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - 8;
const BYTE* const base = ms->window.base;
const U32 endIndex = (U32)((size_t)(istart - base) + srcSize);
const U32 lowLimit = ZSTD_getLowestMatchIndex(ms, endIndex, cParams->windowLog);
const U32 dictStartIndex = lowLimit;
const U32 dictLimit = ms->window.dictLimit;
const U32 prefixStartIndex = (dictLimit > lowLimit) ? dictLimit : lowLimit;
const BYTE* const prefixStart = base + prefixStartIndex;
const BYTE* const dictBase = ms->window.dictBase;
const BYTE* const dictStart = dictBase + dictStartIndex;
const BYTE* const dictEnd = dictBase + prefixStartIndex;
U32 offset_1=rep[0], offset_2=rep[1];
DEBUGLOG(5, "ZSTD_compressBlock_doubleFast_extDict_generic (srcSize=%zu)", srcSize);
/* if extDict is invalidated due to maxDistance, switch to "regular" variant */
if (prefixStartIndex == dictStartIndex)
return ZSTD_compressBlock_doubleFast(ms, seqStore, rep, src, srcSize);
/* Search Loop */
while (ip < ilimit) { /* < instead of <=, because repcode check at (ip+1) */
const size_t hSmall = ZSTD_hashPtr(ip, hBitsS, mls);
const U32 matchIndex = hashSmall[hSmall];
const BYTE* const matchBase = matchIndex < prefixStartIndex ? dictBase : base;
const BYTE* match = matchBase + matchIndex;
const size_t hLong = ZSTD_hashPtr(ip, hBitsL, 8);
const U32 matchLongIndex = hashLong[hLong];
const BYTE* const matchLongBase = matchLongIndex < prefixStartIndex ? dictBase : base;
const BYTE* matchLong = matchLongBase + matchLongIndex;
const U32 curr = (U32)(ip-base);
const U32 repIndex = curr + 1 - offset_1; /* offset_1 expected <= curr +1 */
const BYTE* const repBase = repIndex < prefixStartIndex ? dictBase : base;
const BYTE* const repMatch = repBase + repIndex;
size_t mLength;
hashSmall[hSmall] = hashLong[hLong] = curr; /* update hash table */
if (((ZSTD_index_overlap_check(prefixStartIndex, repIndex))
& (offset_1 <= curr+1 - dictStartIndex)) /* note: we are searching at curr+1 */
&& (MEM_read32(repMatch) == MEM_read32(ip+1)) ) {
const BYTE* repMatchEnd = repIndex < prefixStartIndex ? dictEnd : iend;
mLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repMatchEnd, prefixStart) + 4;
ip++;
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, REPCODE1_TO_OFFBASE, mLength);
} else {
if ((matchLongIndex > dictStartIndex) && (MEM_read64(matchLong) == MEM_read64(ip))) {
const BYTE* const matchEnd = matchLongIndex < prefixStartIndex ? dictEnd : iend;
const BYTE* const lowMatchPtr = matchLongIndex < prefixStartIndex ? dictStart : prefixStart;
U32 offset;
mLength = ZSTD_count_2segments(ip+8, matchLong+8, iend, matchEnd, prefixStart) + 8;
offset = curr - matchLongIndex;
while (((ip>anchor) & (matchLong>lowMatchPtr)) && (ip[-1] == matchLong[-1])) { ip--; matchLong--; mLength++; } /* catch up */
offset_2 = offset_1;
offset_1 = offset;
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, OFFSET_TO_OFFBASE(offset), mLength);
} else if ((matchIndex > dictStartIndex) && (MEM_read32(match) == MEM_read32(ip))) {
size_t const h3 = ZSTD_hashPtr(ip+1, hBitsL, 8);
U32 const matchIndex3 = hashLong[h3];
const BYTE* const match3Base = matchIndex3 < prefixStartIndex ? dictBase : base;
const BYTE* match3 = match3Base + matchIndex3;
U32 offset;
hashLong[h3] = curr + 1;
if ( (matchIndex3 > dictStartIndex) && (MEM_read64(match3) == MEM_read64(ip+1)) ) {
const BYTE* const matchEnd = matchIndex3 < prefixStartIndex ? dictEnd : iend;
const BYTE* const lowMatchPtr = matchIndex3 < prefixStartIndex ? dictStart : prefixStart;
mLength = ZSTD_count_2segments(ip+9, match3+8, iend, matchEnd, prefixStart) + 8;
ip++;
offset = curr+1 - matchIndex3;
while (((ip>anchor) & (match3>lowMatchPtr)) && (ip[-1] == match3[-1])) { ip--; match3--; mLength++; } /* catch up */
} else {
const BYTE* const matchEnd = matchIndex < prefixStartIndex ? dictEnd : iend;
const BYTE* const lowMatchPtr = matchIndex < prefixStartIndex ? dictStart : prefixStart;
mLength = ZSTD_count_2segments(ip+4, match+4, iend, matchEnd, prefixStart) + 4;
offset = curr - matchIndex;
while (((ip>anchor) & (match>lowMatchPtr)) && (ip[-1] == match[-1])) { ip--; match--; mLength++; } /* catch up */
}
offset_2 = offset_1;
offset_1 = offset;
ZSTD_storeSeq(seqStore, (size_t)(ip-anchor), anchor, iend, OFFSET_TO_OFFBASE(offset), mLength);
} else {
ip += ((ip-anchor) >> kSearchStrength) + 1;
continue;
} }
/* move to next sequence start */
ip += mLength;
anchor = ip;
if (ip <= ilimit) {
/* Complementary insertion */
/* done after iLimit test, as candidates could be > iend-8 */
{ U32 const indexToInsert = curr+2;
hashLong[ZSTD_hashPtr(base+indexToInsert, hBitsL, 8)] = indexToInsert;
hashLong[ZSTD_hashPtr(ip-2, hBitsL, 8)] = (U32)(ip-2-base);
hashSmall[ZSTD_hashPtr(base+indexToInsert, hBitsS, mls)] = indexToInsert;
hashSmall[ZSTD_hashPtr(ip-1, hBitsS, mls)] = (U32)(ip-1-base);
}
/* check immediate repcode */
while (ip <= ilimit) {
U32 const current2 = (U32)(ip-base);
U32 const repIndex2 = current2 - offset_2;
const BYTE* repMatch2 = repIndex2 < prefixStartIndex ? dictBase + repIndex2 : base + repIndex2;
if ( ((ZSTD_index_overlap_check(prefixStartIndex, repIndex2))
& (offset_2 <= current2 - dictStartIndex))
&& (MEM_read32(repMatch2) == MEM_read32(ip)) ) {
const BYTE* const repEnd2 = repIndex2 < prefixStartIndex ? dictEnd : iend;
size_t const repLength2 = ZSTD_count_2segments(ip+4, repMatch2+4, iend, repEnd2, prefixStart) + 4;
U32 const tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset; /* swap offset_2 <=> offset_1 */
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, repLength2);
hashSmall[ZSTD_hashPtr(ip, hBitsS, mls)] = current2;
hashLong[ZSTD_hashPtr(ip, hBitsL, 8)] = current2;
ip += repLength2;
anchor = ip;
continue;
}
break;
} } }
/* save reps for next block */
rep[0] = offset_1;
rep[1] = offset_2;
/* Return the last literals size */
return (size_t)(iend - anchor);
}
ZSTD_GEN_DFAST_FN(extDict, 4)
ZSTD_GEN_DFAST_FN(extDict, 5)
ZSTD_GEN_DFAST_FN(extDict, 6)
ZSTD_GEN_DFAST_FN(extDict, 7)
size_t ZSTD_compressBlock_doubleFast_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
U32 const mls = ms->cParams.minMatch;
switch(mls)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_doubleFast_extDict_4(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_doubleFast_extDict_5(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_doubleFast_extDict_6(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_doubleFast_extDict_7(ms, seqStore, rep, src, srcSize);
}
}
#endif /* ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR */
/**** ended inlining compress/zstd_double_fast.c ****/
/**** start inlining compress/zstd_fast.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: zstd_fast.h ****/
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_fillHashTableForCDict(ZSTD_MatchState_t* ms,
const void* const end,
ZSTD_dictTableLoadMethod_e dtlm)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hBits = cParams->hashLog + ZSTD_SHORT_CACHE_TAG_BITS;
U32 const mls = cParams->minMatch;
const BYTE* const base = ms->window.base;
const BYTE* ip = base + ms->nextToUpdate;
const BYTE* const iend = ((const BYTE*)end) - HASH_READ_SIZE;
const U32 fastHashFillStep = 3;
/* Currently, we always use ZSTD_dtlm_full for filling CDict tables.
* Feel free to remove this assert if there's a good reason! */
assert(dtlm == ZSTD_dtlm_full);
/* Always insert every fastHashFillStep position into the hash table.
* Insert the other positions if their hash entry is empty.
*/
for ( ; ip + fastHashFillStep < iend + 2; ip += fastHashFillStep) {
U32 const curr = (U32)(ip - base);
{ size_t const hashAndTag = ZSTD_hashPtr(ip, hBits, mls);
ZSTD_writeTaggedIndex(hashTable, hashAndTag, curr); }
if (dtlm == ZSTD_dtlm_fast) continue;
/* Only load extra positions for ZSTD_dtlm_full */
{ U32 p;
for (p = 1; p < fastHashFillStep; ++p) {
size_t const hashAndTag = ZSTD_hashPtr(ip + p, hBits, mls);
if (hashTable[hashAndTag >> ZSTD_SHORT_CACHE_TAG_BITS] == 0) { /* not yet filled */
ZSTD_writeTaggedIndex(hashTable, hashAndTag, curr + p);
} } } }
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_fillHashTableForCCtx(ZSTD_MatchState_t* ms,
const void* const end,
ZSTD_dictTableLoadMethod_e dtlm)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hBits = cParams->hashLog;
U32 const mls = cParams->minMatch;
const BYTE* const base = ms->window.base;
const BYTE* ip = base + ms->nextToUpdate;
const BYTE* const iend = ((const BYTE*)end) - HASH_READ_SIZE;
const U32 fastHashFillStep = 3;
/* Currently, we always use ZSTD_dtlm_fast for filling CCtx tables.
* Feel free to remove this assert if there's a good reason! */
assert(dtlm == ZSTD_dtlm_fast);
/* Always insert every fastHashFillStep position into the hash table.
* Insert the other positions if their hash entry is empty.
*/
for ( ; ip + fastHashFillStep < iend + 2; ip += fastHashFillStep) {
U32 const curr = (U32)(ip - base);
size_t const hash0 = ZSTD_hashPtr(ip, hBits, mls);
hashTable[hash0] = curr;
if (dtlm == ZSTD_dtlm_fast) continue;
/* Only load extra positions for ZSTD_dtlm_full */
{ U32 p;
for (p = 1; p < fastHashFillStep; ++p) {
size_t const hash = ZSTD_hashPtr(ip + p, hBits, mls);
if (hashTable[hash] == 0) { /* not yet filled */
hashTable[hash] = curr + p;
} } } }
}
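/* Note the difference between the two fill routines above: the CDict variant writes
 * tagged entries (hashLog + ZSTD_SHORT_CACHE_TAG_BITS bits of hash, packed via
 * ZSTD_writeTaggedIndex), while the CCtx variant writes plain indices.
 * ZSTD_fillHashTable() below simply dispatches on the table fill purpose. */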
void ZSTD_fillHashTable(ZSTD_MatchState_t* ms,
const void* const end,
ZSTD_dictTableLoadMethod_e dtlm,
ZSTD_tableFillPurpose_e tfp)
{
if (tfp == ZSTD_tfp_forCDict) {
ZSTD_fillHashTableForCDict(ms, end, dtlm);
} else {
ZSTD_fillHashTableForCCtx(ms, end, dtlm);
}
}
typedef int (*ZSTD_match4Found) (const BYTE* currentPtr, const BYTE* matchAddress, U32 matchIdx, U32 idxLowLimit);
static int
ZSTD_match4Found_cmov(const BYTE* currentPtr, const BYTE* matchAddress, U32 matchIdx, U32 idxLowLimit)
{
/* Array of ~random data, should have low probability of matching data.
* Load from here if the index is invalid.
* Used to avoid unpredictable branches. */
static const BYTE dummy[] = {0x12,0x34,0x56,0x78};
/* matchIdx >= idxLowLimit is a (somewhat) unpredictable branch.
 * However, the expression below compiles into a conditional move.
 */
const BYTE* mvalAddr = ZSTD_selectAddr(matchIdx, idxLowLimit, matchAddress, dummy);
/* Note: this used to be written as: return test1 && test2;
 * Unfortunately, once inlined, these tests become branches,
 * and it then becomes critical that they execute in the right order (test1 then test2).
 * So the tests are written in this specific manner to enforce their ordering.
 */
if (MEM_read32(currentPtr) != MEM_read32(mvalAddr)) return 0;
/* force ordering of these tests, which matters once the function is inlined, as they become branches */
#if defined(__GNUC__)
__asm__("");
#endif
return matchIdx >= idxLowLimit;
}
static int
ZSTD_match4Found_branch(const BYTE* currentPtr, const BYTE* matchAddress, U32 matchIdx, U32 idxLowLimit)
{
/* using a branch instead of a cmov,
 * because it's faster in scenarios where matchIdx >= idxLowLimit is generally true,
 * i.e. almost all candidates are within range */
U32 mval;
if (matchIdx >= idxLowLimit) {
mval = MEM_read32(matchAddress);
} else {
mval = MEM_read32(currentPtr) ^ 1; /* guaranteed to not match. */
}
return (MEM_read32(currentPtr) == mval);
}
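/* Variant selection happens in ZSTD_compressBlock_fast(): the cmov variant is used for
 * small windows (windowLog < 19), where the "candidate in range" test is harder to
 * predict; larger windows use the branch variant. */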
/**
* If you squint hard enough (and ignore repcodes), the search operation at any
* given position is broken into 4 stages:
*
* 1. Hash (map position to hash value via input read)
* 2. Lookup (map hash val to index via hashtable read)
* 3. Load (map index to value at that position via input read)
* 4. Compare
*
* Each of these steps involves a memory read at an address which is computed
* from the previous step. This means these steps must be sequenced and their
* latencies are cumulative.
*
* Rather than do 1->2->3->4 sequentially for a single position before moving
* onto the next, this implementation interleaves these operations across the
* next few positions:
*
* R = Repcode Read & Compare
* H = Hash
* T = Table Lookup
* M = Match Read & Compare
*
* Pos | Time -->
* ----+-------------------
* N | ... M
* N+1 | ... TM
* N+2 | R H T M
* N+3 | H TM
* N+4 | R H T M
* N+5 | H ...
* N+6 | R ...
*
* This is very much analogous to the pipelining of execution in a CPU. And just
* like a CPU, we have to dump the pipeline when we find a match (i.e., take a
* branch).
*
* When this happens, we throw away our current state, and do the following prep
* to re-enter the loop:
*
* Pos | Time -->
* ----+-------------------
* N | H T
* N+1 | H
*
* This is also the work we do at the beginning to enter the loop initially.
*/
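/* In the implementation below, the pipelined positions are tracked as ip0..ip3;
 * hash0/hash1 hold the already-computed hashes and matchIdx the pending table lookup,
 * so each loop iteration overlaps the H/T/M stages of neighbouring positions. */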
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_fast_noDict_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize,
U32 const mls, int useCmov)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hlog = cParams->hashLog;
size_t const stepSize = cParams->targetLength + !(cParams->targetLength) + 1; /* min 2 */
const BYTE* const base = ms->window.base;
const BYTE* const istart = (const BYTE*)src;
const U32 endIndex = (U32)((size_t)(istart - base) + srcSize);
const U32 prefixStartIndex = ZSTD_getLowestPrefixIndex(ms, endIndex, cParams->windowLog);
const BYTE* const prefixStart = base + prefixStartIndex;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - HASH_READ_SIZE;
const BYTE* anchor = istart;
const BYTE* ip0 = istart;
const BYTE* ip1;
const BYTE* ip2;
const BYTE* ip3;
U32 current0;
U32 rep_offset1 = rep[0];
U32 rep_offset2 = rep[1];
U32 offsetSaved1 = 0, offsetSaved2 = 0;
size_t hash0; /* hash for ip0 */
size_t hash1; /* hash for ip1 */
U32 matchIdx; /* match idx for ip0 */
U32 offcode;
const BYTE* match0;
size_t mLength;
/* ip0 and ip1 are always adjacent. The targetLength skipping and
* uncompressibility acceleration is applied to every other position,
* matching the behavior of #1562. step therefore represents the gap
* between pairs of positions, from ip0 to ip2 or ip1 to ip3. */
size_t step;
const BYTE* nextStep;
const size_t kStepIncr = (1 << (kSearchStrength - 1));
const ZSTD_match4Found matchFound = useCmov ? ZSTD_match4Found_cmov : ZSTD_match4Found_branch;
DEBUGLOG(5, "ZSTD_compressBlock_fast_generic");
ip0 += (ip0 == prefixStart);
{ U32 const curr = (U32)(ip0 - base);
U32 const windowLow = ZSTD_getLowestPrefixIndex(ms, curr, cParams->windowLog);
U32 const maxRep = curr - windowLow;
if (rep_offset2 > maxRep) offsetSaved2 = rep_offset2, rep_offset2 = 0;
if (rep_offset1 > maxRep) offsetSaved1 = rep_offset1, rep_offset1 = 0;
}
/* start each op */
_start: /* Requires: ip0 */
step = stepSize;
nextStep = ip0 + kStepIncr;
/* calculate positions, ip0 - anchor == 0, so we skip step calc */
ip1 = ip0 + 1;
ip2 = ip0 + step;
ip3 = ip2 + 1;
if (ip3 >= ilimit) {
goto _cleanup;
}
hash0 = ZSTD_hashPtr(ip0, hlog, mls);
hash1 = ZSTD_hashPtr(ip1, hlog, mls);
matchIdx = hashTable[hash0];
do {
/* load repcode match for ip[2] */
const U32 rval = MEM_read32(ip2 - rep_offset1);
/* write back hash table entry */
current0 = (U32)(ip0 - base);
hashTable[hash0] = current0;
/* check repcode at ip[2] */
if ((MEM_read32(ip2) == rval) & (rep_offset1 > 0)) {
ip0 = ip2;
match0 = ip0 - rep_offset1;
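/* branchless backward extension: mLength is 1 if the preceding byte also matches,
 * 0 otherwise; the 4 already-verified bytes are added just below */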
mLength = ip0[-1] == match0[-1];
ip0 -= mLength;
match0 -= mLength;
offcode = REPCODE1_TO_OFFBASE;
mLength += 4;
/* Write next hash table entry: it's already calculated.
* This write is known to be safe because ip1 is before the
* repcode (ip2). */
hashTable[hash1] = (U32)(ip1 - base);
goto _match;
}
if (matchFound(ip0, base + matchIdx, matchIdx, prefixStartIndex)) {
/* Write next hash table entry (it's already calculated).
* This write is known to be safe because the ip1 == ip0 + 1,
* so searching will resume after ip1 */
hashTable[hash1] = (U32)(ip1 - base);
goto _offset;
}
/* lookup ip[1] */
matchIdx = hashTable[hash1];
/* hash ip[2] */
hash0 = hash1;
hash1 = ZSTD_hashPtr(ip2, hlog, mls);
/* advance to next positions */
ip0 = ip1;
ip1 = ip2;
ip2 = ip3;
/* write back hash table entry */
current0 = (U32)(ip0 - base);
hashTable[hash0] = current0;
if (matchFound(ip0, base + matchIdx, matchIdx, prefixStartIndex)) {
/* Write next hash table entry, since it's already calculated */
if (step <= 4) {
/* Avoid writing an index if it's >= position where search will resume.
* The minimum possible match has length 4, so search can resume at ip0 + 4.
*/
hashTable[hash1] = (U32)(ip1 - base);
}
goto _offset;
}
/* lookup ip[1] */
matchIdx = hashTable[hash1];
/* hash ip[2] */
hash0 = hash1;
hash1 = ZSTD_hashPtr(ip2, hlog, mls);
/* advance to next positions */
ip0 = ip1;
ip1 = ip2;
ip2 = ip0 + step;
ip3 = ip1 + step;
/* calculate step */
if (ip2 >= nextStep) {
step++;
PREFETCH_L1(ip1 + 64);
PREFETCH_L1(ip1 + 128);
nextStep += kStepIncr;
}
} while (ip3 < ilimit);
_cleanup:
/* Note that there are probably still a couple positions one could search.
* However, it seems to be a meaningful performance hit to try to search
* them. So let's not. */
/* When the repcodes are outside of the prefix, we set them to zero before the loop.
* When the offsets are still zero, we need to restore them after the block to have a correct
* repcode history. If only one offset was invalid, it is easy. The tricky case is when both
* offsets were invalid. We need to figure out which offset to refill with.
* - If both offsets are zero they are in the same order.
* - If both offsets are non-zero, we won't restore the offsets from `offsetSaved[12]`.
* - If only one is zero, we need to decide which offset to restore.
* - If rep_offset1 is non-zero, then rep_offset2 must be offsetSaved1.
* - It is impossible for rep_offset2 to be non-zero.
*
* So if rep_offset1 started invalid (offsetSaved1 != 0) and became valid (rep_offset1 != 0), then
* set rep[0] = rep_offset1 and rep[1] = offsetSaved1.
*/
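/* Worked example: both reps start out of range, so offsetSaved1/2 hold the old history
 * and rep_offset1/2 are zeroed. If the block then finds one regular match with offset X,
 * rep_offset1 == X while rep_offset2 is still 0. The line below copies offsetSaved1 into
 * offsetSaved2, so the block ends with rep[0] = X and rep[1] = old rep1, i.e. the new
 * offset pushed the previous first repcode down one slot, as expected. */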
offsetSaved2 = ((offsetSaved1 != 0) && (rep_offset1 != 0)) ? offsetSaved1 : offsetSaved2;
/* save reps for next block */
rep[0] = rep_offset1 ? rep_offset1 : offsetSaved1;
rep[1] = rep_offset2 ? rep_offset2 : offsetSaved2;
/* Return the last literals size */
return (size_t)(iend - anchor);
_offset: /* Requires: ip0, idx */
/* Compute the offset code. */
match0 = base + matchIdx;
rep_offset2 = rep_offset1;
rep_offset1 = (U32)(ip0-match0);
offcode = OFFSET_TO_OFFBASE(rep_offset1);
mLength = 4;
/* Count the backwards match length. */
while (((ip0>anchor) & (match0>prefixStart)) && (ip0[-1] == match0[-1])) {
ip0--;
match0--;
mLength++;
}
_match: /* Requires: ip0, match0, offcode */
/* Count the forward length. */
mLength += ZSTD_count(ip0 + mLength, match0 + mLength, iend);
ZSTD_storeSeq(seqStore, (size_t)(ip0 - anchor), anchor, iend, offcode, mLength);
ip0 += mLength;
anchor = ip0;
/* Fill table and check for immediate repcode. */
if (ip0 <= ilimit) {
/* Fill Table */
assert(base+current0+2 > istart); /* check base overflow */
hashTable[ZSTD_hashPtr(base+current0+2, hlog, mls)] = current0+2; /* here because current+2 could be > iend-8 */
hashTable[ZSTD_hashPtr(ip0-2, hlog, mls)] = (U32)(ip0-2-base);
if (rep_offset2 > 0) { /* rep_offset2==0 means rep_offset2 is invalidated */
while ( (ip0 <= ilimit) && (MEM_read32(ip0) == MEM_read32(ip0 - rep_offset2)) ) {
/* store sequence */
size_t const rLength = ZSTD_count(ip0+4, ip0+4-rep_offset2, iend) + 4;
{ U32 const tmpOff = rep_offset2; rep_offset2 = rep_offset1; rep_offset1 = tmpOff; } /* swap rep_offset2 <=> rep_offset1 */
hashTable[ZSTD_hashPtr(ip0, hlog, mls)] = (U32)(ip0-base);
ip0 += rLength;
ZSTD_storeSeq(seqStore, 0 /*litLen*/, anchor, iend, REPCODE1_TO_OFFBASE, rLength);
anchor = ip0;
continue; /* faster when present (confirmed on gcc-8) ... (?) */
} } }
goto _start;
}
#define ZSTD_GEN_FAST_FN(dictMode, mml, cmov) \
static size_t ZSTD_compressBlock_fast_##dictMode##_##mml##_##cmov( \
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM], \
void const* src, size_t srcSize) \
{ \
return ZSTD_compressBlock_fast_##dictMode##_generic(ms, seqStore, rep, src, srcSize, mml, cmov); \
}
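/* Example expansion: ZSTD_GEN_FAST_FN(noDict, 4, 1) defines
 * ZSTD_compressBlock_fast_noDict_4_1(), which calls the noDict generic with mml=4 and
 * the trailing argument set to 1. That trailing argument is interpreted as useCmov by
 * the noDict template, and as hasStep (currently unused) by the dictMatchState and
 * extDict templates. */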
ZSTD_GEN_FAST_FN(noDict, 4, 1)
ZSTD_GEN_FAST_FN(noDict, 5, 1)
ZSTD_GEN_FAST_FN(noDict, 6, 1)
ZSTD_GEN_FAST_FN(noDict, 7, 1)
ZSTD_GEN_FAST_FN(noDict, 4, 0)
ZSTD_GEN_FAST_FN(noDict, 5, 0)
ZSTD_GEN_FAST_FN(noDict, 6, 0)
ZSTD_GEN_FAST_FN(noDict, 7, 0)
size_t ZSTD_compressBlock_fast(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
U32 const mml = ms->cParams.minMatch;
/* use cmov when "candidate in range" branch is likely unpredictable */
int const useCmov = ms->cParams.windowLog < 19;
assert(ms->dictMatchState == NULL);
if (useCmov) {
switch(mml)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_fast_noDict_4_1(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_fast_noDict_5_1(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_fast_noDict_6_1(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_fast_noDict_7_1(ms, seqStore, rep, src, srcSize);
}
} else {
/* use a branch instead */
switch(mml)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_fast_noDict_4_0(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_fast_noDict_5_0(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_fast_noDict_6_0(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_fast_noDict_7_0(ms, seqStore, rep, src, srcSize);
}
}
}
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_fast_dictMatchState_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize, U32 const mls, U32 const hasStep)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hlog = cParams->hashLog;
/* support stepSize of 0 */
U32 const stepSize = cParams->targetLength + !(cParams->targetLength);
const BYTE* const base = ms->window.base;
const BYTE* const istart = (const BYTE*)src;
const BYTE* ip0 = istart;
const BYTE* ip1 = ip0 + stepSize; /* we assert below that stepSize >= 1 */
const BYTE* anchor = istart;
const U32 prefixStartIndex = ms->window.dictLimit;
const BYTE* const prefixStart = base + prefixStartIndex;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - HASH_READ_SIZE;
U32 offset_1=rep[0], offset_2=rep[1];
const ZSTD_MatchState_t* const dms = ms->dictMatchState;
const ZSTD_compressionParameters* const dictCParams = &dms->cParams;
const U32* const dictHashTable = dms->hashTable;
const U32 dictStartIndex = dms->window.dictLimit;
const BYTE* const dictBase = dms->window.base;
const BYTE* const dictStart = dictBase + dictStartIndex;
const BYTE* const dictEnd = dms->window.nextSrc;
const U32 dictIndexDelta = prefixStartIndex - (U32)(dictEnd - dictBase);
const U32 dictAndPrefixLength = (U32)(istart - prefixStart + dictEnd - dictStart);
const U32 dictHBits = dictCParams->hashLog + ZSTD_SHORT_CACHE_TAG_BITS;
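/* dict hashes are computed with ZSTD_SHORT_CACHE_TAG_BITS extra bits so they can be
 * compared against the packed (index, tag) entries stored in the dictionary hash table */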
/* if a dictionary is still attached, it necessarily means that
* it is within window size. So we just check it. */
const U32 maxDistance = 1U << cParams->windowLog;
const U32 endIndex = (U32)((size_t)(istart - base) + srcSize);
assert(endIndex - prefixStartIndex <= maxDistance);
(void)maxDistance; (void)endIndex; /* these variables are not used when assert() is disabled */
(void)hasStep; /* not currently specialized on whether it's accelerated */
/* ensure there will be no underflow
* when translating a dict index into a local index */
assert(prefixStartIndex >= (U32)(dictEnd - dictBase));
if (ms->prefetchCDictTables) {
size_t const hashTableBytes = (((size_t)1) << dictCParams->hashLog) * sizeof(U32);
PREFETCH_AREA(dictHashTable, hashTableBytes);
}
/* init */
DEBUGLOG(5, "ZSTD_compressBlock_fast_dictMatchState_generic");
ip0 += (dictAndPrefixLength == 0);
/* dictMatchState repCode checks don't currently handle repCode == 0
* disabling. */
assert(offset_1 <= dictAndPrefixLength);
assert(offset_2 <= dictAndPrefixLength);
/* Outer search loop */
assert(stepSize >= 1);
while (ip1 <= ilimit) { /* repcode check at (ip0 + 1) is safe because ip0 < ip1 */
size_t mLength;
size_t hash0 = ZSTD_hashPtr(ip0, hlog, mls);
size_t const dictHashAndTag0 = ZSTD_hashPtr(ip0, dictHBits, mls);
U32 dictMatchIndexAndTag = dictHashTable[dictHashAndTag0 >> ZSTD_SHORT_CACHE_TAG_BITS];
int dictTagsMatch = ZSTD_comparePackedTags(dictMatchIndexAndTag, dictHashAndTag0);
U32 matchIndex = hashTable[hash0];
U32 curr = (U32)(ip0 - base);
size_t step = stepSize;
const size_t kStepIncr = 1 << kSearchStrength;
const BYTE* nextStep = ip0 + kStepIncr;
/* Inner search loop */
while (1) {
const BYTE* match = base + matchIndex;
const U32 repIndex = curr + 1 - offset_1;
const BYTE* repMatch = (repIndex < prefixStartIndex) ?
dictBase + (repIndex - dictIndexDelta) :
base + repIndex;
const size_t hash1 = ZSTD_hashPtr(ip1, hlog, mls);
size_t const dictHashAndTag1 = ZSTD_hashPtr(ip1, dictHBits, mls);
hashTable[hash0] = curr; /* update hash table */
if ((ZSTD_index_overlap_check(prefixStartIndex, repIndex))
&& (MEM_read32(repMatch) == MEM_read32(ip0 + 1))) {
const BYTE* const repMatchEnd = repIndex < prefixStartIndex ? dictEnd : iend;
mLength = ZSTD_count_2segments(ip0 + 1 + 4, repMatch + 4, iend, repMatchEnd, prefixStart) + 4;
ip0++;
ZSTD_storeSeq(seqStore, (size_t) (ip0 - anchor), anchor, iend, REPCODE1_TO_OFFBASE, mLength);
break;
}
if (dictTagsMatch) {
/* Found a possible dict match */
const U32 dictMatchIndex = dictMatchIndexAndTag >> ZSTD_SHORT_CACHE_TAG_BITS;
const BYTE* dictMatch = dictBase + dictMatchIndex;
if (dictMatchIndex > dictStartIndex &&
MEM_read32(dictMatch) == MEM_read32(ip0)) {
/* To replicate extDict parse behavior, we only use dict matches when the normal matchIndex is invalid */
if (matchIndex <= prefixStartIndex) {
U32 const offset = (U32) (curr - dictMatchIndex - dictIndexDelta);
mLength = ZSTD_count_2segments(ip0 + 4, dictMatch + 4, iend, dictEnd, prefixStart) + 4;
while (((ip0 > anchor) & (dictMatch > dictStart))
&& (ip0[-1] == dictMatch[-1])) {
ip0--;
dictMatch--;
mLength++;
} /* catch up */
offset_2 = offset_1;
offset_1 = offset;
ZSTD_storeSeq(seqStore, (size_t) (ip0 - anchor), anchor, iend, OFFSET_TO_OFFBASE(offset), mLength);
break;
}
}
}
if (ZSTD_match4Found_cmov(ip0, match, matchIndex, prefixStartIndex)) {
/* found a regular match of size >= 4 */
U32 const offset = (U32) (ip0 - match);
mLength = ZSTD_count(ip0 + 4, match + 4, iend) + 4;
while (((ip0 > anchor) & (match > prefixStart))
&& (ip0[-1] == match[-1])) {
ip0--;
match--;
mLength++;
} /* catch up */
offset_2 = offset_1;
offset_1 = offset;
ZSTD_storeSeq(seqStore, (size_t) (ip0 - anchor), anchor, iend, OFFSET_TO_OFFBASE(offset), mLength);
break;
}
/* Prepare for next iteration */
dictMatchIndexAndTag = dictHashTable[dictHashAndTag1 >> ZSTD_SHORT_CACHE_TAG_BITS];
dictTagsMatch = ZSTD_comparePackedTags(dictMatchIndexAndTag, dictHashAndTag1);
matchIndex = hashTable[hash1];
if (ip1 >= nextStep) {
step++;
nextStep += kStepIncr;
}
ip0 = ip1;
ip1 = ip1 + step;
if (ip1 > ilimit) goto _cleanup;
curr = (U32)(ip0 - base);
hash0 = hash1;
} /* end inner search loop */
/* match found */
assert(mLength);
ip0 += mLength;
anchor = ip0;
if (ip0 <= ilimit) {
/* Fill Table */
assert(base+curr+2 > istart); /* check base overflow */
hashTable[ZSTD_hashPtr(base+curr+2, hlog, mls)] = curr+2; /* here because curr+2 could be > iend-8 */
hashTable[ZSTD_hashPtr(ip0-2, hlog, mls)] = (U32)(ip0-2-base);
/* check immediate repcode */
while (ip0 <= ilimit) {
U32 const current2 = (U32)(ip0-base);
U32 const repIndex2 = current2 - offset_2;
const BYTE* repMatch2 = repIndex2 < prefixStartIndex ?
dictBase - dictIndexDelta + repIndex2 :
base + repIndex2;
if ( (ZSTD_index_overlap_check(prefixStartIndex, repIndex2))
&& (MEM_read32(repMatch2) == MEM_read32(ip0))) {
const BYTE* const repEnd2 = repIndex2 < prefixStartIndex ? dictEnd : iend;
size_t const repLength2 = ZSTD_count_2segments(ip0+4, repMatch2+4, iend, repEnd2, prefixStart) + 4;
U32 tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset; /* swap offset_2 <=> offset_1 */
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, repLength2);
hashTable[ZSTD_hashPtr(ip0, hlog, mls)] = current2;
ip0 += repLength2;
anchor = ip0;
continue;
}
break;
}
}
/* Prepare for next iteration */
assert(ip0 == anchor);
ip1 = ip0 + stepSize;
}
_cleanup:
/* save reps for next block */
rep[0] = offset_1;
rep[1] = offset_2;
/* Return the last literals size */
return (size_t)(iend - anchor);
}
ZSTD_GEN_FAST_FN(dictMatchState, 4, 0)
ZSTD_GEN_FAST_FN(dictMatchState, 5, 0)
ZSTD_GEN_FAST_FN(dictMatchState, 6, 0)
ZSTD_GEN_FAST_FN(dictMatchState, 7, 0)
size_t ZSTD_compressBlock_fast_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
U32 const mls = ms->cParams.minMatch;
assert(ms->dictMatchState != NULL);
switch(mls)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_fast_dictMatchState_4_0(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_fast_dictMatchState_5_0(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_fast_dictMatchState_6_0(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_fast_dictMatchState_7_0(ms, seqStore, rep, src, srcSize);
}
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_fast_extDict_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize, U32 const mls, U32 const hasStep)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hlog = cParams->hashLog;
/* support stepSize of 0 */
size_t const stepSize = cParams->targetLength + !(cParams->targetLength) + 1;
const BYTE* const base = ms->window.base;
const BYTE* const dictBase = ms->window.dictBase;
const BYTE* const istart = (const BYTE*)src;
const BYTE* anchor = istart;
const U32 endIndex = (U32)((size_t)(istart - base) + srcSize);
const U32 lowLimit = ZSTD_getLowestMatchIndex(ms, endIndex, cParams->windowLog);
const U32 dictStartIndex = lowLimit;
const BYTE* const dictStart = dictBase + dictStartIndex;
const U32 dictLimit = ms->window.dictLimit;
const U32 prefixStartIndex = dictLimit < lowLimit ? lowLimit : dictLimit;
const BYTE* const prefixStart = base + prefixStartIndex;
const BYTE* const dictEnd = dictBase + prefixStartIndex;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - 8;
U32 offset_1=rep[0], offset_2=rep[1];
U32 offsetSaved1 = 0, offsetSaved2 = 0;
const BYTE* ip0 = istart;
const BYTE* ip1;
const BYTE* ip2;
const BYTE* ip3;
U32 current0;
size_t hash0; /* hash for ip0 */
size_t hash1; /* hash for ip1 */
U32 idx; /* match idx for ip0 */
const BYTE* idxBase; /* base pointer for idx */
U32 offcode;
const BYTE* match0;
size_t mLength;
const BYTE* matchEnd = 0; /* initialize to avoid warning, assert != 0 later */
size_t step;
const BYTE* nextStep;
const size_t kStepIncr = (1 << (kSearchStrength - 1));
(void)hasStep; /* not currently specialized on whether it's accelerated */
DEBUGLOG(5, "ZSTD_compressBlock_fast_extDict_generic (offset_1=%u)", offset_1);
/* switch to "regular" variant if extDict is invalidated due to maxDistance */
if (prefixStartIndex == dictStartIndex)
return ZSTD_compressBlock_fast(ms, seqStore, rep, src, srcSize);
{ U32 const curr = (U32)(ip0 - base);
U32 const maxRep = curr - dictStartIndex;
if (offset_2 >= maxRep) offsetSaved2 = offset_2, offset_2 = 0;
if (offset_1 >= maxRep) offsetSaved1 = offset_1, offset_1 = 0;
}
/* start each op */
_start: /* Requires: ip0 */
step = stepSize;
nextStep = ip0 + kStepIncr;
/* calculate positions, ip0 - anchor == 0, so we skip step calc */
ip1 = ip0 + 1;
ip2 = ip0 + step;
ip3 = ip2 + 1;
if (ip3 >= ilimit) {
goto _cleanup;
}
hash0 = ZSTD_hashPtr(ip0, hlog, mls);
hash1 = ZSTD_hashPtr(ip1, hlog, mls);
idx = hashTable[hash0];
idxBase = idx < prefixStartIndex ? dictBase : base;
do {
{ /* load repcode match for ip[2] */
U32 const current2 = (U32)(ip2 - base);
U32 const repIndex = current2 - offset_1;
const BYTE* const repBase = repIndex < prefixStartIndex ? dictBase : base;
U32 rval;
if ( ((U32)(prefixStartIndex - repIndex) >= 4) /* intentional underflow */
& (offset_1 > 0) ) {
rval = MEM_read32(repBase + repIndex);
} else {
rval = MEM_read32(ip2) ^ 1; /* guaranteed to not match. */
}
/* write back hash table entry */
current0 = (U32)(ip0 - base);
hashTable[hash0] = current0;
/* check repcode at ip[2] */
if (MEM_read32(ip2) == rval) {
ip0 = ip2;
match0 = repBase + repIndex;
matchEnd = repIndex < prefixStartIndex ? dictEnd : iend;
assert((match0 != prefixStart) & (match0 != dictStart));
mLength = ip0[-1] == match0[-1];
ip0 -= mLength;
match0 -= mLength;
offcode = REPCODE1_TO_OFFBASE;
mLength += 4;
goto _match;
} }
{ /* load match for ip[0] */
U32 const mval = idx >= dictStartIndex ?
MEM_read32(idxBase + idx) :
MEM_read32(ip0) ^ 1; /* guaranteed not to match */
/* check match at ip[0] */
if (MEM_read32(ip0) == mval) {
/* found a match! */
goto _offset;
} }
/* lookup ip[1] */
idx = hashTable[hash1];
idxBase = idx < prefixStartIndex ? dictBase : base;
/* hash ip[2] */
hash0 = hash1;
hash1 = ZSTD_hashPtr(ip2, hlog, mls);
/* advance to next positions */
ip0 = ip1;
ip1 = ip2;
ip2 = ip3;
/* write back hash table entry */
current0 = (U32)(ip0 - base);
hashTable[hash0] = current0;
{ /* load match for ip[0] */
U32 const mval = idx >= dictStartIndex ?
MEM_read32(idxBase + idx) :
MEM_read32(ip0) ^ 1; /* guaranteed not to match */
/* check match at ip[0] */
if (MEM_read32(ip0) == mval) {
/* found a match! */
goto _offset;
} }
/* lookup ip[1] */
idx = hashTable[hash1];
idxBase = idx < prefixStartIndex ? dictBase : base;
/* hash ip[2] */
hash0 = hash1;
hash1 = ZSTD_hashPtr(ip2, hlog, mls);
/* advance to next positions */
ip0 = ip1;
ip1 = ip2;
ip2 = ip0 + step;
ip3 = ip1 + step;
/* calculate step */
if (ip2 >= nextStep) {
step++;
PREFETCH_L1(ip1 + 64);
PREFETCH_L1(ip1 + 128);
nextStep += kStepIncr;
}
} while (ip3 < ilimit);
_cleanup:
/* Note that there are probably still a couple positions we could search.
* However, it seems to be a meaningful performance hit to try to search
* them. So let's not. */
/* If offset_1 started invalid (offsetSaved1 != 0) and became valid (offset_1 != 0),
* rotate saved offsets. See comment in ZSTD_compressBlock_fast_noDict for more context. */
offsetSaved2 = ((offsetSaved1 != 0) && (offset_1 != 0)) ? offsetSaved1 : offsetSaved2;
/* save reps for next block */
rep[0] = offset_1 ? offset_1 : offsetSaved1;
rep[1] = offset_2 ? offset_2 : offsetSaved2;
/* Return the last literals size */
return (size_t)(iend - anchor);
_offset: /* Requires: ip0, idx, idxBase */
/* Compute the offset code. */
{ U32 const offset = current0 - idx;
const BYTE* const lowMatchPtr = idx < prefixStartIndex ? dictStart : prefixStart;
matchEnd = idx < prefixStartIndex ? dictEnd : iend;
match0 = idxBase + idx;
offset_2 = offset_1;
offset_1 = offset;
offcode = OFFSET_TO_OFFBASE(offset);
mLength = 4;
/* Count the backwards match length. */
while (((ip0>anchor) & (match0>lowMatchPtr)) && (ip0[-1] == match0[-1])) {
ip0--;
match0--;
mLength++;
} }
_match: /* Requires: ip0, match0, offcode, matchEnd */
/* Count the forward length. */
assert(matchEnd != 0);
mLength += ZSTD_count_2segments(ip0 + mLength, match0 + mLength, iend, matchEnd, prefixStart);
ZSTD_storeSeq(seqStore, (size_t)(ip0 - anchor), anchor, iend, offcode, mLength);
ip0 += mLength;
anchor = ip0;
/* write next hash table entry */
if (ip1 < ip0) {
hashTable[hash1] = (U32)(ip1 - base);
}
/* Fill table and check for immediate repcode. */
if (ip0 <= ilimit) {
/* Fill Table */
assert(base+current0+2 > istart); /* check base overflow */
hashTable[ZSTD_hashPtr(base+current0+2, hlog, mls)] = current0+2; /* here because current+2 could be > iend-8 */
hashTable[ZSTD_hashPtr(ip0-2, hlog, mls)] = (U32)(ip0-2-base);
while (ip0 <= ilimit) {
U32 const repIndex2 = (U32)(ip0-base) - offset_2;
const BYTE* const repMatch2 = repIndex2 < prefixStartIndex ? dictBase + repIndex2 : base + repIndex2;
if ( ((ZSTD_index_overlap_check(prefixStartIndex, repIndex2)) & (offset_2 > 0))
&& (MEM_read32(repMatch2) == MEM_read32(ip0)) ) {
const BYTE* const repEnd2 = repIndex2 < prefixStartIndex ? dictEnd : iend;
size_t const repLength2 = ZSTD_count_2segments(ip0+4, repMatch2+4, iend, repEnd2, prefixStart) + 4;
{ U32 const tmpOffset = offset_2; offset_2 = offset_1; offset_1 = tmpOffset; } /* swap offset_2 <=> offset_1 */
ZSTD_storeSeq(seqStore, 0 /*litlen*/, anchor, iend, REPCODE1_TO_OFFBASE, repLength2);
hashTable[ZSTD_hashPtr(ip0, hlog, mls)] = (U32)(ip0-base);
ip0 += repLength2;
anchor = ip0;
continue;
}
break;
} }
goto _start;
}
ZSTD_GEN_FAST_FN(extDict, 4, 0)
ZSTD_GEN_FAST_FN(extDict, 5, 0)
ZSTD_GEN_FAST_FN(extDict, 6, 0)
ZSTD_GEN_FAST_FN(extDict, 7, 0)
size_t ZSTD_compressBlock_fast_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
U32 const mls = ms->cParams.minMatch;
assert(ms->dictMatchState == NULL);
switch(mls)
{
default: /* includes case 3 */
case 4 :
return ZSTD_compressBlock_fast_extDict_4_0(ms, seqStore, rep, src, srcSize);
case 5 :
return ZSTD_compressBlock_fast_extDict_5_0(ms, seqStore, rep, src, srcSize);
case 6 :
return ZSTD_compressBlock_fast_extDict_6_0(ms, seqStore, rep, src, srcSize);
case 7 :
return ZSTD_compressBlock_fast_extDict_7_0(ms, seqStore, rep, src, srcSize);
}
}
/**** ended inlining compress/zstd_fast.c ****/
/**** start inlining compress/zstd_lazy.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: zstd_lazy.h ****/
/**** skipping file: ../common/bits.h ****/
#if !defined(ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR)
#define kLazySkippingStep 8
/*-*************************************
* Binary Tree search
***************************************/
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_updateDUBT(ZSTD_MatchState_t* ms,
const BYTE* ip, const BYTE* iend,
U32 mls)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hashLog = cParams->hashLog;
U32* const bt = ms->chainTable;
U32 const btLog = cParams->chainLog - 1;
U32 const btMask = (1 << btLog) - 1;
const BYTE* const base = ms->window.base;
U32 const target = (U32)(ip - base);
U32 idx = ms->nextToUpdate;
if (idx != target)
DEBUGLOG(7, "ZSTD_updateDUBT, from %u to %u (dictLimit:%u)",
idx, target, ms->window.dictLimit);
assert(ip + 8 <= iend); /* condition for ZSTD_hashPtr */
(void)iend;
assert(idx >= ms->window.dictLimit); /* condition for valid base+idx */
for ( ; idx < target ; idx++) {
size_t const h = ZSTD_hashPtr(base + idx, hashLog, mls); /* assumption : ip + 8 <= iend */
U32 const matchIndex = hashTable[h];
U32* const nextCandidatePtr = bt + 2*(idx&btMask);
U32* const sortMarkPtr = nextCandidatePtr + 1;
DEBUGLOG(8, "ZSTD_updateDUBT: insert %u", idx);
hashTable[h] = idx; /* Update Hash Table */
*nextCandidatePtr = matchIndex; /* update BT like a chain */
*sortMarkPtr = ZSTD_DUBT_UNSORTED_MARK;
}
ms->nextToUpdate = target;
}
/** ZSTD_insertDUBT1() :
* sort one already inserted but unsorted position
* assumption: curr >= btLow == (curr - btMask)
* doesn't fail */
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_insertDUBT1(const ZSTD_MatchState_t* ms,
U32 curr, const BYTE* inputEnd,
U32 nbCompares, U32 btLow,
const ZSTD_dictMode_e dictMode)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const bt = ms->chainTable;
U32 const btLog = cParams->chainLog - 1;
U32 const btMask = (1 << btLog) - 1;
size_t commonLengthSmaller=0, commonLengthLarger=0;
const BYTE* const base = ms->window.base;
const BYTE* const dictBase = ms->window.dictBase;
const U32 dictLimit = ms->window.dictLimit;
const BYTE* const ip = (curr>=dictLimit) ? base + curr : dictBase + curr;
const BYTE* const iend = (curr>=dictLimit) ? inputEnd : dictBase + dictLimit;
const BYTE* const dictEnd = dictBase + dictLimit;
const BYTE* const prefixStart = base + dictLimit;
const BYTE* match;
U32* smallerPtr = bt + 2*(curr&btMask);
U32* largerPtr = smallerPtr + 1;
U32 matchIndex = *smallerPtr; /* this candidate is unsorted: the next sorted candidate is reached through *smallerPtr, while *largerPtr contains the previous unsorted candidate (already saved, so it can safely be overwritten) */
U32 dummy32; /* to be nullified at the end */
U32 const windowValid = ms->window.lowLimit;
U32 const maxDistance = 1U << cParams->windowLog;
U32 const windowLow = (curr - windowValid > maxDistance) ? curr - maxDistance : windowValid;
DEBUGLOG(8, "ZSTD_insertDUBT1(%u) (dictLimit=%u, lowLimit=%u)",
curr, dictLimit, windowLow);
assert(curr >= btLow);
assert(ip < iend); /* condition for ZSTD_count */
for (; nbCompares && (matchIndex > windowLow); --nbCompares) {
U32* const nextPtr = bt + 2*(matchIndex & btMask);
size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger); /* guaranteed minimum nb of common bytes */
assert(matchIndex < curr);
/* note : all candidates are now supposed sorted,
* but it's still possible to have nextPtr[1] == ZSTD_DUBT_UNSORTED_MARK
* when a real index has the same value as ZSTD_DUBT_UNSORTED_MARK */
if ( (dictMode != ZSTD_extDict)
|| (matchIndex+matchLength >= dictLimit) /* both in current segment*/
|| (curr < dictLimit) /* both in extDict */) {
const BYTE* const mBase = ( (dictMode != ZSTD_extDict)
|| (matchIndex+matchLength >= dictLimit)) ?
base : dictBase;
assert( (matchIndex+matchLength >= dictLimit) /* might be wrong if extDict is incorrectly set to 0 */
|| (curr < dictLimit) );
match = mBase + matchIndex;
matchLength += ZSTD_count(ip+matchLength, match+matchLength, iend);
} else {
match = dictBase + matchIndex;
matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iend, dictEnd, prefixStart);
if (matchIndex+matchLength >= dictLimit)
match = base + matchIndex; /* preparation for next read of match[matchLength] */
}
DEBUGLOG(8, "ZSTD_insertDUBT1: comparing %u with %u : found %u common bytes ",
curr, matchIndex, (U32)matchLength);
if (ip+matchLength == iend) { /* equal : no way to know if inf or sup */
break; /* drop, to guarantee consistency; misses a bit of compression, but other solutions could corrupt the tree */
}
if (match[matchLength] < ip[matchLength]) { /* necessarily within buffer */
/* match is smaller than current */
*smallerPtr = matchIndex; /* update smaller idx */
commonLengthSmaller = matchLength; /* all smaller will now have at least this guaranteed common length */
if (matchIndex <= btLow) { smallerPtr=&dummy32; break; } /* beyond tree size, stop searching */
DEBUGLOG(8, "ZSTD_insertDUBT1: %u (>btLow=%u) is smaller : next => %u",
matchIndex, btLow, nextPtr[1]);
smallerPtr = nextPtr+1; /* new "candidate" => larger than match, which was smaller than target */
matchIndex = nextPtr[1]; /* new matchIndex, larger than previous and closer to current */
} else {
/* match is larger than current */
*largerPtr = matchIndex;
commonLengthLarger = matchLength;
if (matchIndex <= btLow) { largerPtr=&dummy32; break; } /* beyond tree size, stop searching */
DEBUGLOG(8, "ZSTD_insertDUBT1: %u (>btLow=%u) is larger => %u",
matchIndex, btLow, nextPtr[0]);
largerPtr = nextPtr;
matchIndex = nextPtr[0];
} }
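/* terminate both sub-chains: whichever of smallerPtr/largerPtr still points into the
 * tree (rather than at dummy32) gets a null link, marking the end of that branch */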
*smallerPtr = *largerPtr = 0;
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_DUBT_findBetterDictMatch (
const ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iend,
size_t* offsetPtr,
size_t bestLength,
U32 nbCompares,
U32 const mls,
const ZSTD_dictMode_e dictMode)
{
const ZSTD_MatchState_t * const dms = ms->dictMatchState;
const ZSTD_compressionParameters* const dmsCParams = &dms->cParams;
const U32 * const dictHashTable = dms->hashTable;
U32 const hashLog = dmsCParams->hashLog;
size_t const h = ZSTD_hashPtr(ip, hashLog, mls);
U32 dictMatchIndex = dictHashTable[h];
const BYTE* const base = ms->window.base;
const BYTE* const prefixStart = base + ms->window.dictLimit;
U32 const curr = (U32)(ip-base);
const BYTE* const dictBase = dms->window.base;
const BYTE* const dictEnd = dms->window.nextSrc;
U32 const dictHighLimit = (U32)(dms->window.nextSrc - dms->window.base);
U32 const dictLowLimit = dms->window.lowLimit;
U32 const dictIndexDelta = ms->window.lowLimit - dictHighLimit;
U32* const dictBt = dms->chainTable;
U32 const btLog = dmsCParams->chainLog - 1;
U32 const btMask = (1 << btLog) - 1;
U32 const btLow = (btMask >= dictHighLimit - dictLowLimit) ? dictLowLimit : dictHighLimit - btMask;
size_t commonLengthSmaller=0, commonLengthLarger=0;
(void)dictMode;
assert(dictMode == ZSTD_dictMatchState);
for (; nbCompares && (dictMatchIndex > dictLowLimit); --nbCompares) {
U32* const nextPtr = dictBt + 2*(dictMatchIndex & btMask);
size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger); /* guaranteed minimum nb of common bytes */
const BYTE* match = dictBase + dictMatchIndex;
matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iend, dictEnd, prefixStart);
if (dictMatchIndex+matchLength >= dictHighLimit)
match = base + dictMatchIndex + dictIndexDelta; /* to prepare for next usage of match[matchLength] */
if (matchLength > bestLength) {
U32 matchIndex = dictMatchIndex + dictIndexDelta;
if ( (4*(int)(matchLength-bestLength)) > (int)(ZSTD_highbit32(curr-matchIndex+1) - ZSTD_highbit32((U32)offsetPtr[0]+1)) ) {
DEBUGLOG(9, "ZSTD_DUBT_findBetterDictMatch(%u) : found better match length %u -> %u and offsetCode %u -> %u (dictMatchIndex %u, matchIndex %u)",
curr, (U32)bestLength, (U32)matchLength, (U32)*offsetPtr, OFFSET_TO_OFFBASE(curr - matchIndex), dictMatchIndex, matchIndex);
bestLength = matchLength, *offsetPtr = OFFSET_TO_OFFBASE(curr - matchIndex);
}
if (ip+matchLength == iend) { /* reached end of input : ip[matchLength] is not valid, no way to know if it's larger or smaller than match */
break; /* drop, to guarantee consistency (miss a little bit of compression) */
}
}
if (match[matchLength] < ip[matchLength]) {
if (dictMatchIndex <= btLow) { break; } /* beyond tree size, stop the search */
commonLengthSmaller = matchLength; /* all smaller will now have at least this guaranteed common length */
dictMatchIndex = nextPtr[1]; /* new matchIndex larger than previous (closer to current) */
} else {
/* match is larger than current */
if (dictMatchIndex <= btLow) { break; } /* beyond tree size, stop the search */
commonLengthLarger = matchLength;
dictMatchIndex = nextPtr[0];
}
}
if (bestLength >= MINMATCH) {
U32 const mIndex = curr - (U32)OFFBASE_TO_OFFSET(*offsetPtr); (void)mIndex;
DEBUGLOG(8, "ZSTD_DUBT_findBetterDictMatch(%u) : found match of length %u and offsetCode %u (pos %u)",
curr, (U32)bestLength, (U32)*offsetPtr, mIndex);
}
return bestLength;
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_DUBT_findBestMatch(ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iend,
size_t* offBasePtr,
U32 const mls,
const ZSTD_dictMode_e dictMode)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hashLog = cParams->hashLog;
size_t const h = ZSTD_hashPtr(ip, hashLog, mls);
U32 matchIndex = hashTable[h];
const BYTE* const base = ms->window.base;
U32 const curr = (U32)(ip-base);
U32 const windowLow = ZSTD_getLowestMatchIndex(ms, curr, cParams->windowLog);
U32* const bt = ms->chainTable;
U32 const btLog = cParams->chainLog - 1;
U32 const btMask = (1 << btLog) - 1;
U32 const btLow = (btMask >= curr) ? 0 : curr - btMask;
U32 const unsortLimit = MAX(btLow, windowLow);
U32* nextCandidate = bt + 2*(matchIndex&btMask);
U32* unsortedMark = bt + 2*(matchIndex&btMask) + 1;
U32 nbCompares = 1U << cParams->searchLog;
U32 nbCandidates = nbCompares;
U32 previousCandidate = 0;
DEBUGLOG(7, "ZSTD_DUBT_findBestMatch (%u) ", curr);
assert(ip <= iend-8); /* required for h calculation */
assert(dictMode != ZSTD_dedicatedDictSearch);
/* reach end of unsorted candidates list */
while ( (matchIndex > unsortLimit)
&& (*unsortedMark == ZSTD_DUBT_UNSORTED_MARK)
&& (nbCandidates > 1) ) {
DEBUGLOG(8, "ZSTD_DUBT_findBestMatch: candidate %u is unsorted",
matchIndex);
*unsortedMark = previousCandidate; /* the unsortedMark becomes a reversed chain, to move up back to original position */
previousCandidate = matchIndex;
matchIndex = *nextCandidate;
nextCandidate = bt + 2*(matchIndex&btMask);
unsortedMark = bt + 2*(matchIndex&btMask) + 1;
nbCandidates --;
}
/* nullify last candidate if it's still unsorted
* simplification, detrimental to compression ratio, beneficial for speed */
if ( (matchIndex > unsortLimit)
&& (*unsortedMark==ZSTD_DUBT_UNSORTED_MARK) ) {
DEBUGLOG(7, "ZSTD_DUBT_findBestMatch: nullify last unsorted candidate %u",
matchIndex);
*nextCandidate = *unsortedMark = 0;
}
/* batch sort stacked candidates */
matchIndex = previousCandidate;
while (matchIndex) { /* will end on matchIndex == 0 */
U32* const nextCandidateIdxPtr = bt + 2*(matchIndex&btMask) + 1;
U32 const nextCandidateIdx = *nextCandidateIdxPtr;
ZSTD_insertDUBT1(ms, matchIndex, iend,
nbCandidates, unsortLimit, dictMode);
matchIndex = nextCandidateIdx;
nbCandidates++;
}
/* find longest match */
{ size_t commonLengthSmaller = 0, commonLengthLarger = 0;
const BYTE* const dictBase = ms->window.dictBase;
const U32 dictLimit = ms->window.dictLimit;
const BYTE* const dictEnd = dictBase + dictLimit;
const BYTE* const prefixStart = base + dictLimit;
U32* smallerPtr = bt + 2*(curr&btMask);
U32* largerPtr = bt + 2*(curr&btMask) + 1;
U32 matchEndIdx = curr + 8 + 1;
U32 dummy32; /* to be nullified at the end */
size_t bestLength = 0;
matchIndex = hashTable[h];
hashTable[h] = curr; /* Update Hash Table */
for (; nbCompares && (matchIndex > windowLow); --nbCompares) {
U32* const nextPtr = bt + 2*(matchIndex & btMask);
size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger); /* guaranteed minimum nb of common bytes */
const BYTE* match;
if ((dictMode != ZSTD_extDict) || (matchIndex+matchLength >= dictLimit)) {
match = base + matchIndex;
matchLength += ZSTD_count(ip+matchLength, match+matchLength, iend);
} else {
match = dictBase + matchIndex;
matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iend, dictEnd, prefixStart);
if (matchIndex+matchLength >= dictLimit)
match = base + matchIndex; /* to prepare for next usage of match[matchLength] */
}
if (matchLength > bestLength) {
if (matchLength > matchEndIdx - matchIndex)
matchEndIdx = matchIndex + (U32)matchLength;
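/* acceptance heuristic : only prefer the new match if its extra length outweighs the extra
 * cost of encoding its (typically larger) offset ; roughly, one extra byte of match length
 * is allowed to cost up to ~4 additional offset bits */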
if ( (4*(int)(matchLength-bestLength)) > (int)(ZSTD_highbit32(curr - matchIndex + 1) - ZSTD_highbit32((U32)*offBasePtr)) )
bestLength = matchLength, *offBasePtr = OFFSET_TO_OFFBASE(curr - matchIndex);
if (ip+matchLength == iend) { /* equal : no way to know if inf or sup */
if (dictMode == ZSTD_dictMatchState) {
nbCompares = 0; /* in addition to avoiding checking any
* further in this loop, make sure we
* skip checking in the dictionary. */
}
break; /* drop, to guarantee consistency (miss a little bit of compression) */
}
}
if (match[matchLength] < ip[matchLength]) {
/* match is smaller than current */
*smallerPtr = matchIndex; /* update smaller idx */
commonLengthSmaller = matchLength; /* all smaller will now have at least this guaranteed common length */
if (matchIndex <= btLow) { smallerPtr=&dummy32; break; } /* beyond tree size, stop the search */
smallerPtr = nextPtr+1; /* new "smaller" => larger of match */
matchIndex = nextPtr[1]; /* new matchIndex larger than previous (closer to current) */
} else {
/* match is larger than current */
*largerPtr = matchIndex;
commonLengthLarger = matchLength;
if (matchIndex <= btLow) { largerPtr=&dummy32; break; } /* beyond tree size, stop the search */
largerPtr = nextPtr;
matchIndex = nextPtr[0];
} }
*smallerPtr = *largerPtr = 0;
assert(nbCompares <= (1U << ZSTD_SEARCHLOG_MAX)); /* Check we haven't underflowed. */
if (dictMode == ZSTD_dictMatchState && nbCompares) {
bestLength = ZSTD_DUBT_findBetterDictMatch(
ms, ip, iend,
offBasePtr, bestLength, nbCompares,
mls, dictMode);
}
assert(matchEndIdx > curr+8); /* ensure nextToUpdate is increased */
ms->nextToUpdate = matchEndIdx - 8; /* skip repetitive patterns */
if (bestLength >= MINMATCH) {
U32 const mIndex = curr - (U32)OFFBASE_TO_OFFSET(*offBasePtr); (void)mIndex;
DEBUGLOG(8, "ZSTD_DUBT_findBestMatch(%u) : found match of length %u and offsetCode %u (pos %u)",
curr, (U32)bestLength, (U32)*offBasePtr, mIndex);
}
return bestLength;
}
}
/** ZSTD_BtFindBestMatch() : Tree updater, providing best match */
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_BtFindBestMatch( ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iLimit,
size_t* offBasePtr,
const U32 mls /* template */,
const ZSTD_dictMode_e dictMode)
{
DEBUGLOG(7, "ZSTD_BtFindBestMatch");
if (ip < ms->window.base + ms->nextToUpdate) return 0; /* skipped area */
ZSTD_updateDUBT(ms, ip, iLimit, mls);
return ZSTD_DUBT_findBestMatch(ms, ip, iLimit, offBasePtr, mls, dictMode);
}
/***********************************
* Dedicated dict search
***********************************/
void ZSTD_dedicatedDictSearch_lazy_loadDictionary(ZSTD_MatchState_t* ms, const BYTE* const ip)
{
const BYTE* const base = ms->window.base;
U32 const target = (U32)(ip - base);
U32* const hashTable = ms->hashTable;
U32* const chainTable = ms->chainTable;
U32 const chainSize = 1 << ms->cParams.chainLog;
U32 idx = ms->nextToUpdate;
U32 const minChain = chainSize < target - idx ? target - chainSize : idx;
U32 const bucketSize = 1 << ZSTD_LAZY_DDSS_BUCKET_LOG;
U32 const cacheSize = bucketSize - 1;
U32 const chainAttempts = (1 << ms->cParams.searchLog) - cacheSize;
U32 const chainLimit = chainAttempts > 255 ? 255 : chainAttempts;
/* We know the hashtable is oversized by a factor of `bucketSize`.
* We are going to temporarily pretend `bucketSize == 1`, keeping only a
* single entry. We will use the rest of the space to construct a temporary
* chaintable.
*/
U32 const hashLog = ms->cParams.hashLog - ZSTD_LAZY_DDSS_BUCKET_LOG;
U32* const tmpHashTable = hashTable;
U32* const tmpChainTable = hashTable + ((size_t)1 << hashLog);
U32 const tmpChainSize = (U32)((1 << ZSTD_LAZY_DDSS_BUCKET_LOG) - 1) << hashLog;
U32 const tmpMinChain = tmpChainSize < target ? target - tmpChainSize : idx;
U32 hashIdx;
assert(ms->cParams.chainLog <= 24);
assert(ms->cParams.hashLog > ms->cParams.chainLog);
assert(idx != 0);
assert(tmpMinChain <= minChain);
/* fill conventional hash table and conventional chain table */
for ( ; idx < target; idx++) {
U32 const h = (U32)ZSTD_hashPtr(base + idx, hashLog, ms->cParams.minMatch);
if (idx >= tmpMinChain) {
tmpChainTable[idx - tmpMinChain] = hashTable[h];
}
tmpHashTable[h] = idx;
}
/* sort chains into ddss chain table */
{
U32 chainPos = 0;
for (hashIdx = 0; hashIdx < (1U << hashLog); hashIdx++) {
U32 count;
U32 countBeyondMinChain = 0;
U32 i = tmpHashTable[hashIdx];
for (count = 0; i >= tmpMinChain && count < cacheSize; count++) {
/* skip through the chain to the first position that won't be
* in the hash cache bucket */
if (i < minChain) {
countBeyondMinChain++;
}
i = tmpChainTable[i - tmpMinChain];
}
if (count == cacheSize) {
for (count = 0; count < chainLimit;) {
if (i < minChain) {
if (!i || ++countBeyondMinChain > cacheSize) {
/* only allow pulling `cacheSize` number of entries
* into the cache or chainTable beyond `minChain`,
* to replace the entries pulled out of the
* chainTable into the cache. This lets us reach
* back further without increasing the total number
* of entries in the chainTable, guaranteeing the
* DDSS chain table will fit into the space
* allocated for the regular one. */
break;
}
}
chainTable[chainPos++] = i;
count++;
if (i < tmpMinChain) {
break;
}
i = tmpChainTable[i - tmpMinChain];
}
} else {
count = 0;
}
if (count) {
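/* pack the chain into the hash slot : upper 24 bits = start position of the chain
 * within chainTable, lower 8 bits = chain length (capped at 255 by chainLimit) */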
tmpHashTable[hashIdx] = ((chainPos - count) << 8) + count;
} else {
tmpHashTable[hashIdx] = 0;
}
}
assert(chainPos <= chainSize); /* I believe this is guaranteed... */
}
/* move chain pointers into the last entry of each hash bucket */
for (hashIdx = (1 << hashLog); hashIdx; ) {
U32 const bucketIdx = --hashIdx << ZSTD_LAZY_DDSS_BUCKET_LOG;
U32 const chainPackedPointer = tmpHashTable[hashIdx];
U32 i;
for (i = 0; i < cacheSize; i++) {
hashTable[bucketIdx + i] = 0;
}
hashTable[bucketIdx + bucketSize - 1] = chainPackedPointer;
}
/* fill the buckets of the hash table */
for (idx = ms->nextToUpdate; idx < target; idx++) {
U32 const h = (U32)ZSTD_hashPtr(base + idx, hashLog, ms->cParams.minMatch)
<< ZSTD_LAZY_DDSS_BUCKET_LOG;
U32 i;
/* Shift hash cache down 1. */
for (i = cacheSize - 1; i; i--)
hashTable[h + i] = hashTable[h + i - 1];
hashTable[h] = idx;
}
ms->nextToUpdate = target;
}
/* Returns the longest match length found in the dedicated dict search structure.
* If none are longer than the argument ml, then ml will be returned.
*/
FORCE_INLINE_TEMPLATE
size_t ZSTD_dedicatedDictSearch_lazy_search(size_t* offsetPtr, size_t ml, U32 nbAttempts,
const ZSTD_MatchState_t* const dms,
const BYTE* const ip, const BYTE* const iLimit,
const BYTE* const prefixStart, const U32 curr,
const U32 dictLimit, const size_t ddsIdx) {
const U32 ddsLowestIndex = dms->window.dictLimit;
const BYTE* const ddsBase = dms->window.base;
const BYTE* const ddsEnd = dms->window.nextSrc;
const U32 ddsSize = (U32)(ddsEnd - ddsBase);
const U32 ddsIndexDelta = dictLimit - ddsSize;
const U32 bucketSize = (1 << ZSTD_LAZY_DDSS_BUCKET_LOG);
const U32 bucketLimit = nbAttempts < bucketSize - 1 ? nbAttempts : bucketSize - 1;
U32 ddsAttempt;
U32 matchIndex;
for (ddsAttempt = 0; ddsAttempt < bucketSize - 1; ddsAttempt++) {
PREFETCH_L1(ddsBase + dms->hashTable[ddsIdx + ddsAttempt]);
}
{
U32 const chainPackedPointer = dms->hashTable[ddsIdx + bucketSize - 1];
U32 const chainIndex = chainPackedPointer >> 8;
PREFETCH_L1(&dms->chainTable[chainIndex]);
}
for (ddsAttempt = 0; ddsAttempt < bucketLimit; ddsAttempt++) {
size_t currentMl=0;
const BYTE* match;
matchIndex = dms->hashTable[ddsIdx + ddsAttempt];
match = ddsBase + matchIndex;
if (!matchIndex) {
return ml;
}
/* guaranteed by table construction */
(void)ddsLowestIndex;
assert(matchIndex >= ddsLowestIndex);
assert(match+4 <= ddsEnd);
if (MEM_read32(match) == MEM_read32(ip)) {
/* assumption : matchIndex <= dictLimit-4 (by table construction) */
currentMl = ZSTD_count_2segments(ip+4, match+4, iLimit, ddsEnd, prefixStart) + 4;
}
/* save best solution */
if (currentMl > ml) {
ml = currentMl;
*offsetPtr = OFFSET_TO_OFFBASE(curr - (matchIndex + ddsIndexDelta));
if (ip+currentMl == iLimit) {
/* best possible, avoids read overflow on next attempt */
return ml;
}
}
}
{
U32 const chainPackedPointer = dms->hashTable[ddsIdx + bucketSize - 1];
U32 chainIndex = chainPackedPointer >> 8;
U32 const chainLength = chainPackedPointer & 0xFF;
U32 const chainAttempts = nbAttempts - ddsAttempt;
U32 const chainLimit = chainAttempts > chainLength ? chainLength : chainAttempts;
U32 chainAttempt;
for (chainAttempt = 0 ; chainAttempt < chainLimit; chainAttempt++) {
PREFETCH_L1(ddsBase + dms->chainTable[chainIndex + chainAttempt]);
}
for (chainAttempt = 0 ; chainAttempt < chainLimit; chainAttempt++, chainIndex++) {
size_t currentMl=0;
const BYTE* match;
matchIndex = dms->chainTable[chainIndex];
match = ddsBase + matchIndex;
/* guaranteed by table construction */
assert(matchIndex >= ddsLowestIndex);
assert(match+4 <= ddsEnd);
if (MEM_read32(match) == MEM_read32(ip)) {
/* assumption : matchIndex <= dictLimit-4 (by table construction) */
currentMl = ZSTD_count_2segments(ip+4, match+4, iLimit, ddsEnd, prefixStart) + 4;
}
/* save best solution */
if (currentMl > ml) {
ml = currentMl;
*offsetPtr = OFFSET_TO_OFFBASE(curr - (matchIndex + ddsIndexDelta));
if (ip+currentMl == iLimit) break; /* best possible, avoids read overflow on next attempt */
}
}
}
return ml;
}
/* *********************************
* Hash Chain
***********************************/
#define NEXT_IN_CHAIN(d, mask) chainTable[(d) & (mask)]
/* Update chains up to ip (excluded)
Assumption : always within prefix (i.e. not within extDict) */
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_insertAndFindFirstIndex_internal(
ZSTD_MatchState_t* ms,
const ZSTD_compressionParameters* const cParams,
const BYTE* ip, U32 const mls, U32 const lazySkipping)
{
U32* const hashTable = ms->hashTable;
const U32 hashLog = cParams->hashLog;
U32* const chainTable = ms->chainTable;
const U32 chainMask = (1 << cParams->chainLog) - 1;
const BYTE* const base = ms->window.base;
const U32 target = (U32)(ip - base);
U32 idx = ms->nextToUpdate;
while(idx < target) { /* catch up */
size_t const h = ZSTD_hashPtr(base+idx, hashLog, mls);
NEXT_IN_CHAIN(idx, chainMask) = hashTable[h];
hashTable[h] = idx;
idx++;
/* Stop inserting every position when in the lazy skipping mode. */
if (lazySkipping)
break;
}
ms->nextToUpdate = target;
return hashTable[ZSTD_hashPtr(ip, hashLog, mls)];
}
U32 ZSTD_insertAndFindFirstIndex(ZSTD_MatchState_t* ms, const BYTE* ip) {
const ZSTD_compressionParameters* const cParams = &ms->cParams;
return ZSTD_insertAndFindFirstIndex_internal(ms, cParams, ip, ms->cParams.minMatch, /* lazySkipping*/ 0);
}
/* inlining is important to hardwire a hot branch (template emulation) */
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_HcFindBestMatch(
ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iLimit,
size_t* offsetPtr,
const U32 mls, const ZSTD_dictMode_e dictMode)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const chainTable = ms->chainTable;
const U32 chainSize = (1 << cParams->chainLog);
const U32 chainMask = chainSize-1;
const BYTE* const base = ms->window.base;
const BYTE* const dictBase = ms->window.dictBase;
const U32 dictLimit = ms->window.dictLimit;
const BYTE* const prefixStart = base + dictLimit;
const BYTE* const dictEnd = dictBase + dictLimit;
const U32 curr = (U32)(ip-base);
const U32 maxDistance = 1U << cParams->windowLog;
const U32 lowestValid = ms->window.lowLimit;
const U32 withinMaxDistance = (curr - lowestValid > maxDistance) ? curr - maxDistance : lowestValid;
const U32 isDictionary = (ms->loadedDictEnd != 0);
const U32 lowLimit = isDictionary ? lowestValid : withinMaxDistance;
const U32 minChain = curr > chainSize ? curr - chainSize : 0;
U32 nbAttempts = 1U << cParams->searchLog;
size_t ml = 4-1;   /* only matches of length >= 4 can improve on this initial value */
const ZSTD_MatchState_t* const dms = ms->dictMatchState;
const U32 ddsHashLog = dictMode == ZSTD_dedicatedDictSearch
? dms->cParams.hashLog - ZSTD_LAZY_DDSS_BUCKET_LOG : 0;
const size_t ddsIdx = dictMode == ZSTD_dedicatedDictSearch
? ZSTD_hashPtr(ip, ddsHashLog, mls) << ZSTD_LAZY_DDSS_BUCKET_LOG : 0;
U32 matchIndex;
if (dictMode == ZSTD_dedicatedDictSearch) {
const U32* entry = &dms->hashTable[ddsIdx];
PREFETCH_L1(entry);
}
/* HC4 match finder */
matchIndex = ZSTD_insertAndFindFirstIndex_internal(ms, cParams, ip, mls, ms->lazySkipping);
for ( ; (matchIndex>=lowLimit) & (nbAttempts>0) ; nbAttempts--) {
size_t currentMl=0;
if ((dictMode != ZSTD_extDict) || matchIndex >= dictLimit) {
const BYTE* const match = base + matchIndex;
assert(matchIndex >= dictLimit); /* ensures this is true if dictMode != ZSTD_extDict */
/* read 4B starting from (match + ml + 1 - sizeof(U32)) */
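/* quick filter : a candidate can only beat the current best if it matches at least ml+1
 * bytes ; first compare the 4 bytes ending at position ml before running the full ZSTD_count() */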
if (MEM_read32(match + ml - 3) == MEM_read32(ip + ml - 3)) /* potentially better */
currentMl = ZSTD_count(ip, match, iLimit);
} else {
const BYTE* const match = dictBase + matchIndex;
assert(match+4 <= dictEnd);
if (MEM_read32(match) == MEM_read32(ip)) /* assumption : matchIndex <= dictLimit-4 (by table construction) */
currentMl = ZSTD_count_2segments(ip+4, match+4, iLimit, dictEnd, prefixStart) + 4;
}
/* save best solution */
if (currentMl > ml) {
ml = currentMl;
*offsetPtr = OFFSET_TO_OFFBASE(curr - matchIndex);
if (ip+currentMl == iLimit) break; /* best possible, avoids read overflow on next attempt */
}
if (matchIndex <= minChain) break;
matchIndex = NEXT_IN_CHAIN(matchIndex, chainMask);
}
assert(nbAttempts <= (1U << ZSTD_SEARCHLOG_MAX)); /* Check we haven't underflowed. */
if (dictMode == ZSTD_dedicatedDictSearch) {
ml = ZSTD_dedicatedDictSearch_lazy_search(offsetPtr, ml, nbAttempts, dms,
ip, iLimit, prefixStart, curr, dictLimit, ddsIdx);
} else if (dictMode == ZSTD_dictMatchState) {
const U32* const dmsChainTable = dms->chainTable;
const U32 dmsChainSize = (1 << dms->cParams.chainLog);
const U32 dmsChainMask = dmsChainSize - 1;
const U32 dmsLowestIndex = dms->window.dictLimit;
const BYTE* const dmsBase = dms->window.base;
const BYTE* const dmsEnd = dms->window.nextSrc;
const U32 dmsSize = (U32)(dmsEnd - dmsBase);
const U32 dmsIndexDelta = dictLimit - dmsSize;
const U32 dmsMinChain = dmsSize > dmsChainSize ? dmsSize - dmsChainSize : 0;
matchIndex = dms->hashTable[ZSTD_hashPtr(ip, dms->cParams.hashLog, mls)];
for ( ; (matchIndex>=dmsLowestIndex) & (nbAttempts>0) ; nbAttempts--) {
size_t currentMl=0;
const BYTE* const match = dmsBase + matchIndex;
assert(match+4 <= dmsEnd);
if (MEM_read32(match) == MEM_read32(ip)) /* assumption : matchIndex <= dictLimit-4 (by table construction) */
currentMl = ZSTD_count_2segments(ip+4, match+4, iLimit, dmsEnd, prefixStart) + 4;
/* save best solution */
if (currentMl > ml) {
ml = currentMl;
assert(curr > matchIndex + dmsIndexDelta);
*offsetPtr = OFFSET_TO_OFFBASE(curr - (matchIndex + dmsIndexDelta));
if (ip+currentMl == iLimit) break; /* best possible, avoids read overflow on next attempt */
}
if (matchIndex <= dmsMinChain) break;
matchIndex = dmsChainTable[matchIndex & dmsChainMask];
}
}
return ml;
}
/* *********************************
* (SIMD) Row-based matchfinder
***********************************/
/* Constants for row-based hash */
#define ZSTD_ROW_HASH_TAG_MASK ((1u << ZSTD_ROW_HASH_TAG_BITS) - 1)
#define ZSTD_ROW_HASH_MAX_ENTRIES 64 /* absolute maximum number of entries per row, for all configurations */
#define ZSTD_ROW_HASH_CACHE_MASK (ZSTD_ROW_HASH_CACHE_SIZE - 1)
typedef U64 ZSTD_VecMask; /* Clarifies when we are interacting with a U64 representing a mask of matches */
/* ZSTD_VecMask_next():
* Starting from the LSB, returns the idx of the next non-zero bit.
* Basically counting the nb of trailing zeroes.
*/
MEM_STATIC U32 ZSTD_VecMask_next(ZSTD_VecMask val) {
return ZSTD_countTrailingZeros64(val);
}
/* ZSTD_row_nextIndex():
* Returns the next index to insert at within a tagTable row, and updates the "head"
* value to reflect the update. Essentially cycles backwards within [1, {entries per row})
*/
FORCE_INLINE_TEMPLATE U32 ZSTD_row_nextIndex(BYTE* const tagRow, U32 const rowMask) {
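/* byte 0 of each tag row stores the "head" itself, so valid insertion positions cycle
 * through [1, rowEntries) and position 0 is never handed out */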
U32 next = (*tagRow-1) & rowMask;
next += (next == 0) ? rowMask : 0; /* skip first position */
*tagRow = (BYTE)next;
return next;
}
/* ZSTD_isAligned():
* Checks that a pointer is aligned to "align" bytes which must be a power of 2.
*/
MEM_STATIC int ZSTD_isAligned(void const* ptr, size_t align) {
assert((align & (align - 1)) == 0);
return (((size_t)ptr) & (align - 1)) == 0;
}
/* ZSTD_row_prefetch():
* Performs prefetching for the hashTable and tagTable at a given row.
*/
FORCE_INLINE_TEMPLATE void ZSTD_row_prefetch(U32 const* hashTable, BYTE const* tagTable, U32 const relRow, U32 const rowLog) {
PREFETCH_L1(hashTable + relRow);
if (rowLog >= 5) {
PREFETCH_L1(hashTable + relRow + 16);
/* Note: prefetching more of the hash table does not appear to be beneficial for 128-entry rows */
}
PREFETCH_L1(tagTable + relRow);
if (rowLog == 6) {
PREFETCH_L1(tagTable + relRow + 32);
}
assert(rowLog == 4 || rowLog == 5 || rowLog == 6);
assert(ZSTD_isAligned(hashTable + relRow, 64)); /* prefetched hash row always 64-byte aligned */
assert(ZSTD_isAligned(tagTable + relRow, (size_t)1 << rowLog)); /* prefetched tagRow sits on correct multiple of bytes (32,64,128) */
}
/* ZSTD_row_fillHashCache():
* Fill up the hash cache starting at idx, prefetching up to ZSTD_ROW_HASH_CACHE_SIZE entries,
* but not beyond iLimit.
*/
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_row_fillHashCache(ZSTD_MatchState_t* ms, const BYTE* base,
U32 const rowLog, U32 const mls,
U32 idx, const BYTE* const iLimit)
{
U32 const* const hashTable = ms->hashTable;
BYTE const* const tagTable = ms->tagTable;
U32 const hashLog = ms->rowHashLog;
U32 const maxElemsToPrefetch = (base + idx) > iLimit ? 0 : (U32)(iLimit - (base + idx) + 1);
U32 const lim = idx + MIN(ZSTD_ROW_HASH_CACHE_SIZE, maxElemsToPrefetch);
for (; idx < lim; ++idx) {
U32 const hash = (U32)ZSTD_hashPtrSalted(base + idx, hashLog + ZSTD_ROW_HASH_TAG_BITS, mls, ms->hashSalt);
U32 const row = (hash >> ZSTD_ROW_HASH_TAG_BITS) << rowLog;
ZSTD_row_prefetch(hashTable, tagTable, row, rowLog);
ms->hashCache[idx & ZSTD_ROW_HASH_CACHE_MASK] = hash;
}
DEBUGLOG(6, "ZSTD_row_fillHashCache(): [%u %u %u %u %u %u %u %u]", ms->hashCache[0], ms->hashCache[1],
ms->hashCache[2], ms->hashCache[3], ms->hashCache[4],
ms->hashCache[5], ms->hashCache[6], ms->hashCache[7]);
}
/* ZSTD_row_nextCachedHash():
* Returns the hash of base + idx, and replaces the hash in the hash cache with the hash of
* base + idx + ZSTD_ROW_HASH_CACHE_SIZE. Also prefetches the appropriate rows from hashTable and tagTable.
*/
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_row_nextCachedHash(U32* cache, U32 const* hashTable,
BYTE const* tagTable, BYTE const* base,
U32 idx, U32 const hashLog,
U32 const rowLog, U32 const mls,
U64 const hashSalt)
{
U32 const newHash = (U32)ZSTD_hashPtrSalted(base+idx+ZSTD_ROW_HASH_CACHE_SIZE, hashLog + ZSTD_ROW_HASH_TAG_BITS, mls, hashSalt);
U32 const row = (newHash >> ZSTD_ROW_HASH_TAG_BITS) << rowLog;
ZSTD_row_prefetch(hashTable, tagTable, row, rowLog);
{ U32 const hash = cache[idx & ZSTD_ROW_HASH_CACHE_MASK];
cache[idx & ZSTD_ROW_HASH_CACHE_MASK] = newHash;
return hash;
}
}
/* ZSTD_row_update_internalImpl():
* Updates the hash table with positions starting from updateStartIdx until updateEndIdx.
*/
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_row_update_internalImpl(ZSTD_MatchState_t* ms,
U32 updateStartIdx, U32 const updateEndIdx,
U32 const mls, U32 const rowLog,
U32 const rowMask, U32 const useCache)
{
U32* const hashTable = ms->hashTable;
BYTE* const tagTable = ms->tagTable;
U32 const hashLog = ms->rowHashLog;
const BYTE* const base = ms->window.base;
DEBUGLOG(6, "ZSTD_row_update_internalImpl(): updateStartIdx=%u, updateEndIdx=%u", updateStartIdx, updateEndIdx);
for (; updateStartIdx < updateEndIdx; ++updateStartIdx) {
U32 const hash = useCache ? ZSTD_row_nextCachedHash(ms->hashCache, hashTable, tagTable, base, updateStartIdx, hashLog, rowLog, mls, ms->hashSalt)
: (U32)ZSTD_hashPtrSalted(base + updateStartIdx, hashLog + ZSTD_ROW_HASH_TAG_BITS, mls, ms->hashSalt);
U32 const relRow = (hash >> ZSTD_ROW_HASH_TAG_BITS) << rowLog;
U32* const row = hashTable + relRow;
BYTE* tagRow = tagTable + relRow;
U32 const pos = ZSTD_row_nextIndex(tagRow, rowMask);
assert(hash == ZSTD_hashPtrSalted(base + updateStartIdx, hashLog + ZSTD_ROW_HASH_TAG_BITS, mls, ms->hashSalt));
tagRow[pos] = hash & ZSTD_ROW_HASH_TAG_MASK;
row[pos] = updateStartIdx;
}
}
/* ZSTD_row_update_internal():
* Inserts positions from ms->nextToUpdate up to (but not including) ip into the hash table,
* and updates ms->nextToUpdate. Skips over sections of long matches when necessary.
*/
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_row_update_internal(ZSTD_MatchState_t* ms, const BYTE* ip,
U32 const mls, U32 const rowLog,
U32 const rowMask, U32 const useCache)
{
U32 idx = ms->nextToUpdate;
const BYTE* const base = ms->window.base;
const U32 target = (U32)(ip - base);
const U32 kSkipThreshold = 384;
const U32 kMaxMatchStartPositionsToUpdate = 96;
const U32 kMaxMatchEndPositionsToUpdate = 32;
if (useCache) {
/* Only skip positions when using hash cache, i.e.
* if we are loading a dict, don't skip anything.
* If we decide to skip, then we only update a set number
* of positions at the beginning and end of the match.
*/
if (UNLIKELY(target - idx > kSkipThreshold)) {
U32 const bound = idx + kMaxMatchStartPositionsToUpdate;
ZSTD_row_update_internalImpl(ms, idx, bound, mls, rowLog, rowMask, useCache);
idx = target - kMaxMatchEndPositionsToUpdate;
ZSTD_row_fillHashCache(ms, base, rowLog, mls, idx, ip+1);
}
}
assert(target >= idx);
ZSTD_row_update_internalImpl(ms, idx, target, mls, rowLog, rowMask, useCache);
ms->nextToUpdate = target;
}
/* ZSTD_row_update():
* External wrapper for ZSTD_row_update_internal(). Used for filling the hashtable during dictionary
* processing.
*/
void ZSTD_row_update(ZSTD_MatchState_t* const ms, const BYTE* ip) {
const U32 rowLog = BOUNDED(4, ms->cParams.searchLog, 6);
const U32 rowMask = (1u << rowLog) - 1;
const U32 mls = MIN(ms->cParams.minMatch, 6 /* mls caps out at 6 */);
DEBUGLOG(5, "ZSTD_row_update(), rowLog=%u", rowLog);
ZSTD_row_update_internal(ms, ip, mls, rowLog, rowMask, 0 /* don't use cache */);
}
/* Returns the width (in bits) of the groups of bits that will be set to 1 in the match mask. Since not all
* architectures have an easy movemask instruction, this helps to iterate over
* groups of bits more easily and quickly.
*/
FORCE_INLINE_TEMPLATE U32
ZSTD_row_matchMaskGroupWidth(const U32 rowEntries)
{
assert((rowEntries == 16) || (rowEntries == 32) || rowEntries == 64);
assert(rowEntries <= ZSTD_ROW_HASH_MAX_ENTRIES);
(void)rowEntries;
#if defined(ZSTD_ARCH_ARM_NEON)
/* NEON path only works for little endian */
if (!MEM_isLittleEndian()) {
return 1;
}
if (rowEntries == 16) {
return 4;
}
if (rowEntries == 32) {
return 2;
}
if (rowEntries == 64) {
return 1;
}
#endif
return 1;
}
#if defined(ZSTD_ARCH_X86_SSE2)
FORCE_INLINE_TEMPLATE ZSTD_VecMask
ZSTD_row_getSSEMask(int nbChunks, const BYTE* const src, const BYTE tag, const U32 head)
{
const __m128i comparisonMask = _mm_set1_epi8((char)tag);
int matches[4] = {0};
int i;
assert(nbChunks == 1 || nbChunks == 2 || nbChunks == 4);
for (i=0; i<nbChunks; i++) {
const __m128i chunk = _mm_loadu_si128((const __m128i*)(const void*)(src + 16*i));
const __m128i equalMask = _mm_cmpeq_epi8(chunk, comparisonMask);
matches[i] = _mm_movemask_epi8(equalMask);
}
if (nbChunks == 1) return ZSTD_rotateRight_U16((U16)matches[0], head);
if (nbChunks == 2) return ZSTD_rotateRight_U32((U32)matches[1] << 16 | (U32)matches[0], head);
assert(nbChunks == 4);
return ZSTD_rotateRight_U64((U64)matches[3] << 48 | (U64)matches[2] << 32 | (U64)matches[1] << 16 | (U64)matches[0], head);
}
#endif
#if defined(ZSTD_ARCH_ARM_NEON)
FORCE_INLINE_TEMPLATE ZSTD_VecMask
ZSTD_row_getNEONMask(const U32 rowEntries, const BYTE* const src, const BYTE tag, const U32 headGrouped)
{
assert((rowEntries == 16) || (rowEntries == 32) || rowEntries == 64);
if (rowEntries == 16) {
/* vshrn_n_u16 shifts by 4 every u16 and narrows to 8 lower bits.
* After that groups of 4 bits represent the equalMask. We lower
* all bits except the highest in these groups by doing AND with
* 0x88 = 0b10001000.
*/
const uint8x16_t chunk = vld1q_u8(src);
const uint16x8_t equalMask = vreinterpretq_u16_u8(vceqq_u8(chunk, vdupq_n_u8(tag)));
const uint8x8_t res = vshrn_n_u16(equalMask, 4);
const U64 matches = vget_lane_u64(vreinterpret_u64_u8(res), 0);
return ZSTD_rotateRight_U64(matches, headGrouped) & 0x8888888888888888ull;
} else if (rowEntries == 32) {
/* Same idea as with rowEntries == 16 but doing AND with
* 0x55 = 0b01010101.
*/
const uint16x8x2_t chunk = vld2q_u16((const uint16_t*)(const void*)src);
const uint8x16_t chunk0 = vreinterpretq_u8_u16(chunk.val[0]);
const uint8x16_t chunk1 = vreinterpretq_u8_u16(chunk.val[1]);
const uint8x16_t dup = vdupq_n_u8(tag);
const uint8x8_t t0 = vshrn_n_u16(vreinterpretq_u16_u8(vceqq_u8(chunk0, dup)), 6);
const uint8x8_t t1 = vshrn_n_u16(vreinterpretq_u16_u8(vceqq_u8(chunk1, dup)), 6);
const uint8x8_t res = vsli_n_u8(t0, t1, 4);
const U64 matches = vget_lane_u64(vreinterpret_u64_u8(res), 0) ;
return ZSTD_rotateRight_U64(matches, headGrouped) & 0x5555555555555555ull;
} else { /* rowEntries == 64 */
const uint8x16x4_t chunk = vld4q_u8(src);
const uint8x16_t dup = vdupq_n_u8(tag);
const uint8x16_t cmp0 = vceqq_u8(chunk.val[0], dup);
const uint8x16_t cmp1 = vceqq_u8(chunk.val[1], dup);
const uint8x16_t cmp2 = vceqq_u8(chunk.val[2], dup);
const uint8x16_t cmp3 = vceqq_u8(chunk.val[3], dup);
const uint8x16_t t0 = vsriq_n_u8(cmp1, cmp0, 1);
const uint8x16_t t1 = vsriq_n_u8(cmp3, cmp2, 1);
const uint8x16_t t2 = vsriq_n_u8(t1, t0, 2);
const uint8x16_t t3 = vsriq_n_u8(t2, t2, 4);
const uint8x8_t t4 = vshrn_n_u16(vreinterpretq_u16_u8(t3), 4);
const U64 matches = vget_lane_u64(vreinterpret_u64_u8(t4), 0);
return ZSTD_rotateRight_U64(matches, headGrouped);
}
}
#endif
/* Returns a ZSTD_VecMask (U64) that has the nth group (determined by
* ZSTD_row_matchMaskGroupWidth) of bits set to 1 if the newly-computed "tag"
* matches the hash at the nth position in a row of the tagTable.
* Each row is a circular buffer beginning at the value of "headGrouped". So we
* must rotate the "matches" bitfield to match up with the actual layout of the
* entries within the hashTable */
FORCE_INLINE_TEMPLATE ZSTD_VecMask
ZSTD_row_getMatchMask(const BYTE* const tagRow, const BYTE tag, const U32 headGrouped, const U32 rowEntries)
{
const BYTE* const src = tagRow;
assert((rowEntries == 16) || (rowEntries == 32) || rowEntries == 64);
assert(rowEntries <= ZSTD_ROW_HASH_MAX_ENTRIES);
assert(ZSTD_row_matchMaskGroupWidth(rowEntries) * rowEntries <= sizeof(ZSTD_VecMask) * 8);
#if defined(ZSTD_ARCH_X86_SSE2)
return ZSTD_row_getSSEMask(rowEntries / 16, src, tag, headGrouped);
#else /* SW or NEON-LE */
# if defined(ZSTD_ARCH_ARM_NEON)
/* This NEON path only works for little endian - otherwise use SWAR below */
if (MEM_isLittleEndian()) {
return ZSTD_row_getNEONMask(rowEntries, src, tag, headGrouped);
}
# endif /* ZSTD_ARCH_ARM_NEON */
/* SWAR */
{ const int chunkSize = sizeof(size_t);
const size_t shiftAmount = ((chunkSize * 8) - chunkSize);
const size_t xFF = ~((size_t)0);
const size_t x01 = xFF / 0xFF;
const size_t x80 = x01 << 7;
const size_t splatChar = tag * x01;
ZSTD_VecMask matches = 0;
int i = rowEntries - chunkSize;
assert((sizeof(size_t) == 4) || (sizeof(size_t) == 8));
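/* SWAR (SIMD-within-a-register) byte equality test, one size_t chunk at a time :
 * XOR-ing with splatChar zeroes exactly the bytes equal to the tag ; the expression below
 * then leaves bit 7 set in every non-zero byte and clear in every zero byte ;
 * multiplying by extractMagic gathers those per-byte bits into the top bits of the chunk,
 * and the final ~matches flips them so that set bits mark matching positions. */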
if (MEM_isLittleEndian()) { /* runtime check so have two loops */
const size_t extractMagic = (xFF / 0x7F) >> chunkSize;
do {
size_t chunk = MEM_readST(&src[i]);
chunk ^= splatChar;
chunk = (((chunk | x80) - x01) | chunk) & x80;
matches <<= chunkSize;
matches |= (chunk * extractMagic) >> shiftAmount;
i -= chunkSize;
} while (i >= 0);
} else { /* big endian: reverse bits during extraction */
const size_t msb = xFF ^ (xFF >> 1);
const size_t extractMagic = (msb / 0x1FF) | msb;
do {
size_t chunk = MEM_readST(&src[i]);
chunk ^= splatChar;
chunk = (((chunk | x80) - x01) | chunk) & x80;
matches <<= chunkSize;
matches |= ((chunk >> 7) * extractMagic) >> shiftAmount;
i -= chunkSize;
} while (i >= 0);
}
matches = ~matches;
if (rowEntries == 16) {
return ZSTD_rotateRight_U16((U16)matches, headGrouped);
} else if (rowEntries == 32) {
return ZSTD_rotateRight_U32((U32)matches, headGrouped);
} else {
return ZSTD_rotateRight_U64((U64)matches, headGrouped);
}
}
#endif
}
/* The high-level approach of the SIMD row based match finder is as follows:
* - Figure out where to insert the new entry:
* - Generate a hash for the current input position and split it into one byte of tag and `rowHashLog` bits of index.
* - The hash is salted by a value that changes on every context reset, so when the same table is used
* we will avoid collisions that would otherwise slow us down by introducing phantom matches.
* - The hashTable is effectively split into groups or "rows" of 15, 31, or 63 usable entries of U32, and the index determines
* which row to insert into.
* - Determine the correct position within the row to insert the entry into. Each row can
* be considered as a circular buffer with a "head" index that resides in the tagTable (overall 16, 32, or 64 bytes
* per row).
* - Use SIMD to efficiently compare the tags in the tagTable to the 1-byte tag calculated for the position and
* generate a bitfield that we can cycle through to check the collisions in the hash table.
* - Pick the longest match.
* - Insert the tag into the equivalent row and position in the tagTable.
*/
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_RowFindBestMatch(
ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iLimit,
size_t* offsetPtr,
const U32 mls, const ZSTD_dictMode_e dictMode,
const U32 rowLog)
{
U32* const hashTable = ms->hashTable;
BYTE* const tagTable = ms->tagTable;
U32* const hashCache = ms->hashCache;
const U32 hashLog = ms->rowHashLog;
const ZSTD_compressionParameters* const cParams = &ms->cParams;
const BYTE* const base = ms->window.base;
const BYTE* const dictBase = ms->window.dictBase;
const U32 dictLimit = ms->window.dictLimit;
const BYTE* const prefixStart = base + dictLimit;
const BYTE* const dictEnd = dictBase + dictLimit;
const U32 curr = (U32)(ip-base);
const U32 maxDistance = 1U << cParams->windowLog;
const U32 lowestValid = ms->window.lowLimit;
const U32 withinMaxDistance = (curr - lowestValid > maxDistance) ? curr - maxDistance : lowestValid;
const U32 isDictionary = (ms->loadedDictEnd != 0);
const U32 lowLimit = isDictionary ? lowestValid : withinMaxDistance;
const U32 rowEntries = (1U << rowLog);
const U32 rowMask = rowEntries - 1;
const U32 cappedSearchLog = MIN(cParams->searchLog, rowLog); /* nb of searches is capped at nb entries per row */
const U32 groupWidth = ZSTD_row_matchMaskGroupWidth(rowEntries);
const U64 hashSalt = ms->hashSalt;
U32 nbAttempts = 1U << cappedSearchLog;
size_t ml = 4-1;   /* only matches of length >= 4 can improve on this initial value */
U32 hash;
/* DMS/DDS variables that may be referenced later */
const ZSTD_MatchState_t* const dms = ms->dictMatchState;
/* Initialize the following variables to satisfy static analyzer */
size_t ddsIdx = 0;
U32 ddsExtraAttempts = 0; /* cctx hash tables are limited in searches, but allow extra searches into DDS */
U32 dmsTag = 0;
U32* dmsRow = NULL;
BYTE* dmsTagRow = NULL;
if (dictMode == ZSTD_dedicatedDictSearch) {
const U32 ddsHashLog = dms->cParams.hashLog - ZSTD_LAZY_DDSS_BUCKET_LOG;
{ /* Prefetch DDS hashtable entry */
ddsIdx = ZSTD_hashPtr(ip, ddsHashLog, mls) << ZSTD_LAZY_DDSS_BUCKET_LOG;
PREFETCH_L1(&dms->hashTable[ddsIdx]);
}
ddsExtraAttempts = cParams->searchLog > rowLog ? 1U << (cParams->searchLog - rowLog) : 0;
}
if (dictMode == ZSTD_dictMatchState) {
/* Prefetch DMS rows */
U32* const dmsHashTable = dms->hashTable;
BYTE* const dmsTagTable = dms->tagTable;
U32 const dmsHash = (U32)ZSTD_hashPtr(ip, dms->rowHashLog + ZSTD_ROW_HASH_TAG_BITS, mls);
U32 const dmsRelRow = (dmsHash >> ZSTD_ROW_HASH_TAG_BITS) << rowLog;
dmsTag = dmsHash & ZSTD_ROW_HASH_TAG_MASK;
dmsTagRow = (BYTE*)(dmsTagTable + dmsRelRow);
dmsRow = dmsHashTable + dmsRelRow;
ZSTD_row_prefetch(dmsHashTable, dmsTagTable, dmsRelRow, rowLog);
}
/* Update the hashTable and tagTable up to (but not including) ip */
if (!ms->lazySkipping) {
ZSTD_row_update_internal(ms, ip, mls, rowLog, rowMask, 1 /* useCache */);
hash = ZSTD_row_nextCachedHash(hashCache, hashTable, tagTable, base, curr, hashLog, rowLog, mls, hashSalt);
} else {
/* Stop inserting every position when in the lazy skipping mode.
* The hash cache is also not kept up to date in this mode.
*/
hash = (U32)ZSTD_hashPtrSalted(ip, hashLog + ZSTD_ROW_HASH_TAG_BITS, mls, hashSalt);
ms->nextToUpdate = curr;
}
ms->hashSaltEntropy += hash; /* collect salt entropy */
{ /* Get the hash for ip, compute the appropriate row */
U32 const relRow = (hash >> ZSTD_ROW_HASH_TAG_BITS) << rowLog;
U32 const tag = hash & ZSTD_ROW_HASH_TAG_MASK;
U32* const row = hashTable + relRow;
BYTE* tagRow = (BYTE*)(tagTable + relRow);
U32 const headGrouped = (*tagRow & rowMask) * groupWidth;
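/* the head byte counts entries ; scale it by the group width so it lines up with the
 * grouped bit layout returned by ZSTD_row_getMatchMask() */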
U32 matchBuffer[ZSTD_ROW_HASH_MAX_ENTRIES];
size_t numMatches = 0;
size_t currMatch = 0;
ZSTD_VecMask matches = ZSTD_row_getMatchMask(tagRow, (BYTE)tag, headGrouped, rowEntries);
/* Cycle through the matches and prefetch */
for (; (matches > 0) && (nbAttempts > 0); matches &= (matches - 1)) {
U32 const matchPos = ((headGrouped + ZSTD_VecMask_next(matches)) / groupWidth) & rowMask;
U32 const matchIndex = row[matchPos];
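/* position 0 of a row is reserved (byte 0 of the tag row stores the head), so it never
 * holds a valid candidate */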
if(matchPos == 0) continue;
assert(numMatches < rowEntries);
if (matchIndex < lowLimit)
break;
if ((dictMode != ZSTD_extDict) || matchIndex >= dictLimit) {
PREFETCH_L1(base + matchIndex);
} else {
PREFETCH_L1(dictBase + matchIndex);
}
matchBuffer[numMatches++] = matchIndex;
--nbAttempts;
}
/* Speed opt: insert current byte into hashtable too. This allows us to avoid one iteration of the loop
in ZSTD_row_update_internal() at the next search. */
{
U32 const pos = ZSTD_row_nextIndex(tagRow, rowMask);
tagRow[pos] = (BYTE)tag;
row[pos] = ms->nextToUpdate++;
}
/* Return the longest match */
for (; currMatch < numMatches; ++currMatch) {
U32 const matchIndex = matchBuffer[currMatch];
size_t currentMl=0;
assert(matchIndex < curr);
assert(matchIndex >= lowLimit);
if ((dictMode != ZSTD_extDict) || matchIndex >= dictLimit) {
const BYTE* const match = base + matchIndex;
assert(matchIndex >= dictLimit); /* ensures this is true if dictMode != ZSTD_extDict */
/* read 4B starting from (match + ml + 1 - sizeof(U32)) */
if (MEM_read32(match + ml - 3) == MEM_read32(ip + ml - 3)) /* potentially better */
currentMl = ZSTD_count(ip, match, iLimit);
} else {
const BYTE* const match = dictBase + matchIndex;
assert(match+4 <= dictEnd);
if (MEM_read32(match) == MEM_read32(ip)) /* assumption : matchIndex <= dictLimit-4 (by table construction) */
currentMl = ZSTD_count_2segments(ip+4, match+4, iLimit, dictEnd, prefixStart) + 4;
}
/* Save best solution */
if (currentMl > ml) {
ml = currentMl;
*offsetPtr = OFFSET_TO_OFFBASE(curr - matchIndex);
if (ip+currentMl == iLimit) break; /* best possible, avoids read overflow on next attempt */
}
}
}
assert(nbAttempts <= (1U << ZSTD_SEARCHLOG_MAX)); /* Check we haven't underflowed. */
if (dictMode == ZSTD_dedicatedDictSearch) {
ml = ZSTD_dedicatedDictSearch_lazy_search(offsetPtr, ml, nbAttempts + ddsExtraAttempts, dms,
ip, iLimit, prefixStart, curr, dictLimit, ddsIdx);
} else if (dictMode == ZSTD_dictMatchState) {
/* TODO: Measure and potentially add prefetching to DMS */
const U32 dmsLowestIndex = dms->window.dictLimit;
const BYTE* const dmsBase = dms->window.base;
const BYTE* const dmsEnd = dms->window.nextSrc;
const U32 dmsSize = (U32)(dmsEnd - dmsBase);
const U32 dmsIndexDelta = dictLimit - dmsSize;
{ U32 const headGrouped = (*dmsTagRow & rowMask) * groupWidth;
U32 matchBuffer[ZSTD_ROW_HASH_MAX_ENTRIES];
size_t numMatches = 0;
size_t currMatch = 0;
ZSTD_VecMask matches = ZSTD_row_getMatchMask(dmsTagRow, (BYTE)dmsTag, headGrouped, rowEntries);
for (; (matches > 0) && (nbAttempts > 0); matches &= (matches - 1)) {
U32 const matchPos = ((headGrouped + ZSTD_VecMask_next(matches)) / groupWidth) & rowMask;
U32 const matchIndex = dmsRow[matchPos];
if(matchPos == 0) continue;
if (matchIndex < dmsLowestIndex)
break;
PREFETCH_L1(dmsBase + matchIndex);
matchBuffer[numMatches++] = matchIndex;
--nbAttempts;
}
/* Return the longest match */
for (; currMatch < numMatches; ++currMatch) {
U32 const matchIndex = matchBuffer[currMatch];
size_t currentMl=0;
assert(matchIndex >= dmsLowestIndex);
assert(matchIndex < curr);
{ const BYTE* const match = dmsBase + matchIndex;
assert(match+4 <= dmsEnd);
if (MEM_read32(match) == MEM_read32(ip))
currentMl = ZSTD_count_2segments(ip+4, match+4, iLimit, dmsEnd, prefixStart) + 4;
}
if (currentMl > ml) {
ml = currentMl;
assert(curr > matchIndex + dmsIndexDelta);
*offsetPtr = OFFSET_TO_OFFBASE(curr - (matchIndex + dmsIndexDelta));
if (ip+currentMl == iLimit) break;
}
}
}
}
return ml;
}
/**
* Generate search functions templated on (dictMode, mls, rowLog).
* These functions are outlined for code size & compilation time.
* ZSTD_searchMax() dispatches to the correct implementation function.
*
* TODO: The start of the search function involves loading and calculating a
* bunch of constants from the ZSTD_MatchState_t. These computations could be
* done in an initialization function, and saved somewhere in the match state.
* Then we could pass a pointer to the saved state instead of the match state,
* and avoid duplicate computations.
*
* TODO: Move the match re-winding into searchMax. This improves compression
* ratio, and unlocks further simplifications with the next TODO.
*
* TODO: Try moving the repcode search into searchMax. After the re-winding
* and repcode search are in searchMax, there is no more logic in the match
* finder loop that requires knowledge about the dictMode. So we should be
* able to avoid force inlining it, and we can join the extDict loop with
* the single segment loop. It should go in searchMax instead of its own
* function to avoid having multiple virtual function calls per search.
*/
#define ZSTD_BT_SEARCH_FN(dictMode, mls) ZSTD_BtFindBestMatch_##dictMode##_##mls
#define ZSTD_HC_SEARCH_FN(dictMode, mls) ZSTD_HcFindBestMatch_##dictMode##_##mls
#define ZSTD_ROW_SEARCH_FN(dictMode, mls, rowLog) ZSTD_RowFindBestMatch_##dictMode##_##mls##_##rowLog
#define ZSTD_SEARCH_FN_ATTRS FORCE_NOINLINE
#define GEN_ZSTD_BT_SEARCH_FN(dictMode, mls) \
ZSTD_SEARCH_FN_ATTRS size_t ZSTD_BT_SEARCH_FN(dictMode, mls)( \
ZSTD_MatchState_t* ms, \
const BYTE* ip, const BYTE* const iLimit, \
size_t* offBasePtr) \
{ \
assert(MAX(4, MIN(6, ms->cParams.minMatch)) == mls); \
return ZSTD_BtFindBestMatch(ms, ip, iLimit, offBasePtr, mls, ZSTD_##dictMode); \
}
#define GEN_ZSTD_HC_SEARCH_FN(dictMode, mls) \
ZSTD_SEARCH_FN_ATTRS size_t ZSTD_HC_SEARCH_FN(dictMode, mls)( \
ZSTD_MatchState_t* ms, \
const BYTE* ip, const BYTE* const iLimit, \
size_t* offsetPtr) \
{ \
assert(MAX(4, MIN(6, ms->cParams.minMatch)) == mls); \
return ZSTD_HcFindBestMatch(ms, ip, iLimit, offsetPtr, mls, ZSTD_##dictMode); \
}
#define GEN_ZSTD_ROW_SEARCH_FN(dictMode, mls, rowLog) \
ZSTD_SEARCH_FN_ATTRS size_t ZSTD_ROW_SEARCH_FN(dictMode, mls, rowLog)( \
ZSTD_MatchState_t* ms, \
const BYTE* ip, const BYTE* const iLimit, \
size_t* offsetPtr) \
{ \
assert(MAX(4, MIN(6, ms->cParams.minMatch)) == mls); \
assert(MAX(4, MIN(6, ms->cParams.searchLog)) == rowLog); \
return ZSTD_RowFindBestMatch(ms, ip, iLimit, offsetPtr, mls, ZSTD_##dictMode, rowLog); \
}
#define ZSTD_FOR_EACH_ROWLOG(X, dictMode, mls) \
X(dictMode, mls, 4) \
X(dictMode, mls, 5) \
X(dictMode, mls, 6)
#define ZSTD_FOR_EACH_MLS_ROWLOG(X, dictMode) \
ZSTD_FOR_EACH_ROWLOG(X, dictMode, 4) \
ZSTD_FOR_EACH_ROWLOG(X, dictMode, 5) \
ZSTD_FOR_EACH_ROWLOG(X, dictMode, 6)
#define ZSTD_FOR_EACH_MLS(X, dictMode) \
X(dictMode, 4) \
X(dictMode, 5) \
X(dictMode, 6)
#define ZSTD_FOR_EACH_DICT_MODE(X, ...) \
X(__VA_ARGS__, noDict) \
X(__VA_ARGS__, extDict) \
X(__VA_ARGS__, dictMatchState) \
X(__VA_ARGS__, dedicatedDictSearch)
/* Generate row search fns for each combination of (dictMode, mls, rowLog) */
ZSTD_FOR_EACH_DICT_MODE(ZSTD_FOR_EACH_MLS_ROWLOG, GEN_ZSTD_ROW_SEARCH_FN)
/* Generate binary Tree search fns for each combination of (dictMode, mls) */
ZSTD_FOR_EACH_DICT_MODE(ZSTD_FOR_EACH_MLS, GEN_ZSTD_BT_SEARCH_FN)
/* Generate hash chain search fns for each combination of (dictMode, mls) */
ZSTD_FOR_EACH_DICT_MODE(ZSTD_FOR_EACH_MLS, GEN_ZSTD_HC_SEARCH_FN)
typedef enum { search_hashChain=0, search_binaryTree=1, search_rowHash=2 } searchMethod_e;
#define GEN_ZSTD_CALL_BT_SEARCH_FN(dictMode, mls) \
case mls: \
return ZSTD_BT_SEARCH_FN(dictMode, mls)(ms, ip, iend, offsetPtr);
#define GEN_ZSTD_CALL_HC_SEARCH_FN(dictMode, mls) \
case mls: \
return ZSTD_HC_SEARCH_FN(dictMode, mls)(ms, ip, iend, offsetPtr);
#define GEN_ZSTD_CALL_ROW_SEARCH_FN(dictMode, mls, rowLog) \
case rowLog: \
return ZSTD_ROW_SEARCH_FN(dictMode, mls, rowLog)(ms, ip, iend, offsetPtr);
#define ZSTD_SWITCH_MLS(X, dictMode) \
switch (mls) { \
ZSTD_FOR_EACH_MLS(X, dictMode) \
}
#define ZSTD_SWITCH_ROWLOG(dictMode, mls) \
case mls: \
switch (rowLog) { \
ZSTD_FOR_EACH_ROWLOG(GEN_ZSTD_CALL_ROW_SEARCH_FN, dictMode, mls) \
} \
ZSTD_UNREACHABLE; \
break;
#define ZSTD_SWITCH_SEARCH_METHOD(dictMode) \
switch (searchMethod) { \
case search_hashChain: \
ZSTD_SWITCH_MLS(GEN_ZSTD_CALL_HC_SEARCH_FN, dictMode) \
break; \
case search_binaryTree: \
ZSTD_SWITCH_MLS(GEN_ZSTD_CALL_BT_SEARCH_FN, dictMode) \
break; \
case search_rowHash: \
ZSTD_SWITCH_MLS(ZSTD_SWITCH_ROWLOG, dictMode) \
break; \
} \
ZSTD_UNREACHABLE;
/**
* Searches for the longest match at @p ip.
* Dispatches to the correct implementation function based on the
* (searchMethod, dictMode, mls, rowLog). We use switch statements
* here instead of using an indirect function call through a function
* pointer because after Spectre and Meltdown mitigations, indirect
* function calls can be very costly, especially in the kernel.
*
* NOTE: dictMode and searchMethod should be templated, so those switch
* statements should be optimized out. Only the mls & rowLog switches
* should be left.
*
* @param ms The match state.
* @param ip The position to search at.
* @param iend The end of the input data.
* @param[out] offsetPtr Stores the match offset into this pointer.
* @param mls The minimum search length, in the range [4, 6].
* @param rowLog The row log (if applicable), in the range [4, 6].
* @param searchMethod The search method to use (templated).
* @param dictMode The dictMode (templated).
*
* @returns The length of the longest match found, or < mls if no match is found.
* If a match is found its offset is stored in @p offsetPtr.
*/
FORCE_INLINE_TEMPLATE size_t ZSTD_searchMax(
ZSTD_MatchState_t* ms,
const BYTE* ip,
const BYTE* iend,
size_t* offsetPtr,
U32 const mls,
U32 const rowLog,
searchMethod_e const searchMethod,
ZSTD_dictMode_e const dictMode)
{
if (dictMode == ZSTD_noDict) {
ZSTD_SWITCH_SEARCH_METHOD(noDict)
} else if (dictMode == ZSTD_extDict) {
ZSTD_SWITCH_SEARCH_METHOD(extDict)
} else if (dictMode == ZSTD_dictMatchState) {
ZSTD_SWITCH_SEARCH_METHOD(dictMatchState)
} else if (dictMode == ZSTD_dedicatedDictSearch) {
ZSTD_SWITCH_SEARCH_METHOD(dedicatedDictSearch)
}
ZSTD_UNREACHABLE;
return 0;
}
/* *******************************
* Common parser - lazy strategy
*********************************/
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_lazy_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore,
U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize,
const searchMethod_e searchMethod, const U32 depth,
ZSTD_dictMode_e const dictMode)
{
const BYTE* const istart = (const BYTE*)src;
const BYTE* ip = istart;
const BYTE* anchor = istart;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = (searchMethod == search_rowHash) ? iend - 8 - ZSTD_ROW_HASH_CACHE_SIZE : iend - 8;
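/* the row matchfinder hashes up to ZSTD_ROW_HASH_CACHE_SIZE positions ahead of ip to keep
 * its hash cache warm, so it reserves extra margin at the end of the block */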
const BYTE* const base = ms->window.base;
const U32 prefixLowestIndex = ms->window.dictLimit;
const BYTE* const prefixLowest = base + prefixLowestIndex;
const U32 mls = BOUNDED(4, ms->cParams.minMatch, 6);
const U32 rowLog = BOUNDED(4, ms->cParams.searchLog, 6);
U32 offset_1 = rep[0], offset_2 = rep[1];
U32 offsetSaved1 = 0, offsetSaved2 = 0;
const int isDMS = dictMode == ZSTD_dictMatchState;
const int isDDS = dictMode == ZSTD_dedicatedDictSearch;
const int isDxS = isDMS || isDDS;
const ZSTD_MatchState_t* const dms = ms->dictMatchState;
const U32 dictLowestIndex = isDxS ? dms->window.dictLimit : 0;
const BYTE* const dictBase = isDxS ? dms->window.base : NULL;
const BYTE* const dictLowest = isDxS ? dictBase + dictLowestIndex : NULL;
const BYTE* const dictEnd = isDxS ? dms->window.nextSrc : NULL;
const U32 dictIndexDelta = isDxS ?
prefixLowestIndex - (U32)(dictEnd - dictBase) :
0;
const U32 dictAndPrefixLength = (U32)((ip - prefixLowest) + (dictEnd - dictLowest));
DEBUGLOG(5, "ZSTD_compressBlock_lazy_generic (dictMode=%u) (searchFunc=%u)", (U32)dictMode, (U32)searchMethod);
ip += (dictAndPrefixLength == 0);
if (dictMode == ZSTD_noDict) {
U32 const curr = (U32)(ip - base);
U32 const windowLow = ZSTD_getLowestPrefixIndex(ms, curr, ms->cParams.windowLog);
U32 const maxRep = curr - windowLow;
if (offset_2 > maxRep) offsetSaved2 = offset_2, offset_2 = 0;
if (offset_1 > maxRep) offsetSaved1 = offset_1, offset_1 = 0;
}
if (isDxS) {
/* dictMatchState repCode checks don't currently handle repCode == 0
* disabling. */
assert(offset_1 <= dictAndPrefixLength);
assert(offset_2 <= dictAndPrefixLength);
}
/* Reset the lazy skipping state */
ms->lazySkipping = 0;
if (searchMethod == search_rowHash) {
ZSTD_row_fillHashCache(ms, base, rowLog, mls, ms->nextToUpdate, ilimit);
}
/* Match Loop */
#if defined(__GNUC__) && defined(__x86_64__)
/* I've measured a random ~5% speed loss on levels 5 & 6 (greedy) when the
* code alignment is perturbed. To fix the instability, align the loop on a 32-byte boundary.
*/
__asm__(".p2align 5");
#endif
while (ip < ilimit) {
size_t matchLength=0;
size_t offBase = REPCODE1_TO_OFFBASE;
const BYTE* start=ip+1;
DEBUGLOG(7, "search baseline (depth 0)");
/* check repCode */
if (isDxS) {
const U32 repIndex = (U32)(ip - base) + 1 - offset_1;
const BYTE* repMatch = ((dictMode == ZSTD_dictMatchState || dictMode == ZSTD_dedicatedDictSearch)
&& repIndex < prefixLowestIndex) ?
dictBase + (repIndex - dictIndexDelta) :
base + repIndex;
if ((ZSTD_index_overlap_check(prefixLowestIndex, repIndex))
&& (MEM_read32(repMatch) == MEM_read32(ip+1)) ) {
const BYTE* repMatchEnd = repIndex < prefixLowestIndex ? dictEnd : iend;
matchLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repMatchEnd, prefixLowest) + 4;
if (depth==0) goto _storeSequence;
}
}
if ( dictMode == ZSTD_noDict
&& ((offset_1 > 0) & (MEM_read32(ip+1-offset_1) == MEM_read32(ip+1)))) {
matchLength = ZSTD_count(ip+1+4, ip+1+4-offset_1, iend) + 4;
if (depth==0) goto _storeSequence;
}
/* first search (depth 0) */
{ size_t offbaseFound = 999999999;
size_t const ml2 = ZSTD_searchMax(ms, ip, iend, &offbaseFound, mls, rowLog, searchMethod, dictMode);
if (ml2 > matchLength)
matchLength = ml2, start = ip, offBase = offbaseFound;
}
if (matchLength < 4) {
size_t const step = ((size_t)(ip-anchor) >> kSearchStrength) + 1;   /* jump faster over incompressible sections */
ip += step;
/* Enter the lazy skipping mode once we are skipping more than 8 bytes at a time.
* In this mode we stop inserting every position into our tables, and only insert
* positions that we search, which is one in step positions.
* The exact cutoff is flexible, I've just chosen a number that is reasonably high,
* so we minimize the compression ratio loss in "normal" scenarios. This mode gets
* triggered once we've gone 2KB without finding any matches.
*/
ms->lazySkipping = step > kLazySkippingStep;
continue;
}
/* let's try to find a better solution */
if (depth>=1)
while (ip<ilimit) {
DEBUGLOG(7, "search depth 1");
ip ++;
if ( (dictMode == ZSTD_noDict)
&& (offBase) && ((offset_1>0) & (MEM_read32(ip) == MEM_read32(ip - offset_1)))) {
size_t const mlRep = ZSTD_count(ip+4, ip+4-offset_1, iend) + 4;
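/* cost heuristic : weight match length against estimated offset cost ; the repcode
 * candidate pays (nearly) no offset bits, while the current best pays about
 * ZSTD_highbit32(offBase) bits, plus a small bias toward keeping the current choice */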
int const gain2 = (int)(mlRep * 3);
int const gain1 = (int)(matchLength*3 - ZSTD_highbit32((U32)offBase) + 1);
if ((mlRep >= 4) && (gain2 > gain1))
matchLength = mlRep, offBase = REPCODE1_TO_OFFBASE, start = ip;
}
if (isDxS) {
const U32 repIndex = (U32)(ip - base) - offset_1;
const BYTE* repMatch = repIndex < prefixLowestIndex ?
dictBase + (repIndex - dictIndexDelta) :
base + repIndex;
if ((ZSTD_index_overlap_check(prefixLowestIndex, repIndex))
&& (MEM_read32(repMatch) == MEM_read32(ip)) ) {
const BYTE* repMatchEnd = repIndex < prefixLowestIndex ? dictEnd : iend;
size_t const mlRep = ZSTD_count_2segments(ip+4, repMatch+4, iend, repMatchEnd, prefixLowest) + 4;
int const gain2 = (int)(mlRep * 3);
int const gain1 = (int)(matchLength*3 - ZSTD_highbit32((U32)offBase) + 1);
if ((mlRep >= 4) && (gain2 > gain1))
matchLength = mlRep, offBase = REPCODE1_TO_OFFBASE, start = ip;
}
}
{ size_t ofbCandidate=999999999;
size_t const ml2 = ZSTD_searchMax(ms, ip, iend, &ofbCandidate, mls, rowLog, searchMethod, dictMode);
int const gain2 = (int)(ml2*4 - ZSTD_highbit32((U32)ofbCandidate)); /* raw approx */
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 4);
if ((ml2 >= 4) && (gain2 > gain1)) {
matchLength = ml2, offBase = ofbCandidate, start = ip;
continue; /* search a better one */
} }
/* let's find an even better one */
if ((depth==2) && (ip<ilimit)) {
DEBUGLOG(7, "search depth 2");
ip ++;
if ( (dictMode == ZSTD_noDict)
&& (offBase) && ((offset_1>0) & (MEM_read32(ip) == MEM_read32(ip - offset_1)))) {
size_t const mlRep = ZSTD_count(ip+4, ip+4-offset_1, iend) + 4;
int const gain2 = (int)(mlRep * 4);
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 1);
if ((mlRep >= 4) && (gain2 > gain1))
matchLength = mlRep, offBase = REPCODE1_TO_OFFBASE, start = ip;
}
if (isDxS) {
const U32 repIndex = (U32)(ip - base) - offset_1;
const BYTE* repMatch = repIndex < prefixLowestIndex ?
dictBase + (repIndex - dictIndexDelta) :
base + repIndex;
if ((ZSTD_index_overlap_check(prefixLowestIndex, repIndex))
&& (MEM_read32(repMatch) == MEM_read32(ip)) ) {
const BYTE* repMatchEnd = repIndex < prefixLowestIndex ? dictEnd : iend;
size_t const mlRep = ZSTD_count_2segments(ip+4, repMatch+4, iend, repMatchEnd, prefixLowest) + 4;
int const gain2 = (int)(mlRep * 4);
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 1);
if ((mlRep >= 4) && (gain2 > gain1))
matchLength = mlRep, offBase = REPCODE1_TO_OFFBASE, start = ip;
}
}
{ size_t ofbCandidate=999999999;
size_t const ml2 = ZSTD_searchMax(ms, ip, iend, &ofbCandidate, mls, rowLog, searchMethod, dictMode);
int const gain2 = (int)(ml2*4 - ZSTD_highbit32((U32)ofbCandidate)); /* raw approx */
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 7);
if ((ml2 >= 4) && (gain2 > gain1)) {
matchLength = ml2, offBase = ofbCandidate, start = ip;
continue;
} } }
break; /* nothing found : store previous solution */
}
/* NOTE:
* Pay attention that `start[-value]` can lead to strange undefined behavior
* notably if `value` is unsigned, resulting in a large positive `-value`.
*/
/* catch up */
if (OFFBASE_IS_OFFSET(offBase)) {
if (dictMode == ZSTD_noDict) {
while ( ((start > anchor) & (start - OFFBASE_TO_OFFSET(offBase) > prefixLowest))
&& (start[-1] == (start-OFFBASE_TO_OFFSET(offBase))[-1]) ) /* only search for offset within prefix */
{ start--; matchLength++; }
}
if (isDxS) {
U32 const matchIndex = (U32)((size_t)(start-base) - OFFBASE_TO_OFFSET(offBase));
const BYTE* match = (matchIndex < prefixLowestIndex) ? dictBase + matchIndex - dictIndexDelta : base + matchIndex;
const BYTE* const mStart = (matchIndex < prefixLowestIndex) ? dictLowest : prefixLowest;
while ((start>anchor) && (match>mStart) && (start[-1] == match[-1])) { start--; match--; matchLength++; } /* catch up */
}
offset_2 = offset_1; offset_1 = (U32)OFFBASE_TO_OFFSET(offBase);
}
/* store sequence */
_storeSequence:
{ size_t const litLength = (size_t)(start - anchor);
ZSTD_storeSeq(seqStore, litLength, anchor, iend, (U32)offBase, matchLength);
anchor = ip = start + matchLength;
}
if (ms->lazySkipping) {
/* We've found a match, disable lazy skipping mode, and refill the hash cache. */
if (searchMethod == search_rowHash) {
ZSTD_row_fillHashCache(ms, base, rowLog, mls, ms->nextToUpdate, ilimit);
}
ms->lazySkipping = 0;
}
/* check immediate repcode */
if (isDxS) {
while (ip <= ilimit) {
U32 const current2 = (U32)(ip-base);
U32 const repIndex = current2 - offset_2;
const BYTE* repMatch = repIndex < prefixLowestIndex ?
dictBase - dictIndexDelta + repIndex :
base + repIndex;
if ( (ZSTD_index_overlap_check(prefixLowestIndex, repIndex))
&& (MEM_read32(repMatch) == MEM_read32(ip)) ) {
const BYTE* const repEnd2 = repIndex < prefixLowestIndex ? dictEnd : iend;
matchLength = ZSTD_count_2segments(ip+4, repMatch+4, iend, repEnd2, prefixLowest) + 4;
offBase = offset_2; offset_2 = offset_1; offset_1 = (U32)offBase; /* swap offset_2 <=> offset_1 */
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, matchLength);
ip += matchLength;
anchor = ip;
continue;
}
break;
}
}
if (dictMode == ZSTD_noDict) {
while ( ((ip <= ilimit) & (offset_2>0))
&& (MEM_read32(ip) == MEM_read32(ip - offset_2)) ) {
/* store sequence */
matchLength = ZSTD_count(ip+4, ip+4-offset_2, iend) + 4;
offBase = offset_2; offset_2 = offset_1; offset_1 = (U32)offBase; /* swap repcodes */
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, matchLength);
ip += matchLength;
anchor = ip;
continue; /* faster when present ... (?) */
} } }
/* If offset_1 started invalid (offsetSaved1 != 0) and became valid (offset_1 != 0),
* rotate saved offsets. See comment in ZSTD_compressBlock_fast_noDict for more context. */
offsetSaved2 = ((offsetSaved1 != 0) && (offset_1 != 0)) ? offsetSaved1 : offsetSaved2;
/* save reps for next block */
rep[0] = offset_1 ? offset_1 : offsetSaved1;
rep[1] = offset_2 ? offset_2 : offsetSaved2;
/* Return the last literals size */
return (size_t)(iend - anchor);
}
#endif /* build exclusions */
#ifndef ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_greedy(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 0, ZSTD_noDict);
}
size_t ZSTD_compressBlock_greedy_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 0, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_greedy_dedicatedDictSearch(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 0, ZSTD_dedicatedDictSearch);
}
size_t ZSTD_compressBlock_greedy_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 0, ZSTD_noDict);
}
size_t ZSTD_compressBlock_greedy_dictMatchState_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 0, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_greedy_dedicatedDictSearch_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 0, ZSTD_dedicatedDictSearch);
}
#endif
#ifndef ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_lazy(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 1, ZSTD_noDict);
}
size_t ZSTD_compressBlock_lazy_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 1, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_lazy_dedicatedDictSearch(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 1, ZSTD_dedicatedDictSearch);
}
size_t ZSTD_compressBlock_lazy_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 1, ZSTD_noDict);
}
size_t ZSTD_compressBlock_lazy_dictMatchState_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 1, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_lazy_dedicatedDictSearch_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 1, ZSTD_dedicatedDictSearch);
}
#endif
#ifndef ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_lazy2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 2, ZSTD_noDict);
}
size_t ZSTD_compressBlock_lazy2_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 2, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_lazy2_dedicatedDictSearch(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 2, ZSTD_dedicatedDictSearch);
}
size_t ZSTD_compressBlock_lazy2_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 2, ZSTD_noDict);
}
size_t ZSTD_compressBlock_lazy2_dictMatchState_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 2, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_lazy2_dedicatedDictSearch_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 2, ZSTD_dedicatedDictSearch);
}
#endif
#ifndef ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btlazy2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_binaryTree, 2, ZSTD_noDict);
}
size_t ZSTD_compressBlock_btlazy2_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_generic(ms, seqStore, rep, src, srcSize, search_binaryTree, 2, ZSTD_dictMatchState);
}
#endif
#if !defined(ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR)
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_compressBlock_lazy_extDict_generic(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore,
U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize,
const searchMethod_e searchMethod, const U32 depth)
{
const BYTE* const istart = (const BYTE*)src;
const BYTE* ip = istart;
const BYTE* anchor = istart;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = searchMethod == search_rowHash ? iend - 8 - ZSTD_ROW_HASH_CACHE_SIZE : iend - 8;
const BYTE* const base = ms->window.base;
const U32 dictLimit = ms->window.dictLimit;
const BYTE* const prefixStart = base + dictLimit;
const BYTE* const dictBase = ms->window.dictBase;
const BYTE* const dictEnd = dictBase + dictLimit;
const BYTE* const dictStart = dictBase + ms->window.lowLimit;
const U32 windowLog = ms->cParams.windowLog;
const U32 mls = BOUNDED(4, ms->cParams.minMatch, 6);
const U32 rowLog = BOUNDED(4, ms->cParams.searchLog, 6);
U32 offset_1 = rep[0], offset_2 = rep[1];
DEBUGLOG(5, "ZSTD_compressBlock_lazy_extDict_generic (searchFunc=%u)", (U32)searchMethod);
/* Reset the lazy skipping state */
ms->lazySkipping = 0;
/* init */
ip += (ip == prefixStart);
if (searchMethod == search_rowHash) {
ZSTD_row_fillHashCache(ms, base, rowLog, mls, ms->nextToUpdate, ilimit);
}
/* Match Loop */
#if defined(__GNUC__) && defined(__x86_64__)
/* I've measured a random 5% speed loss on levels 5 & 6 (greedy) when the
* code alignment is perturbed. To fix the instability, align the loop on a 32-byte boundary.
*/
__asm__(".p2align 5");
#endif
while (ip < ilimit) {
size_t matchLength=0;
size_t offBase = REPCODE1_TO_OFFBASE;
const BYTE* start=ip+1;
U32 curr = (U32)(ip-base);
/* check repCode */
{ const U32 windowLow = ZSTD_getLowestMatchIndex(ms, curr+1, windowLog);
const U32 repIndex = (U32)(curr+1 - offset_1);
const BYTE* const repBase = repIndex < dictLimit ? dictBase : base;
const BYTE* const repMatch = repBase + repIndex;
if ( (ZSTD_index_overlap_check(dictLimit, repIndex))
& (offset_1 <= curr+1 - windowLow) ) /* note: we are searching at curr+1 */
if (MEM_read32(ip+1) == MEM_read32(repMatch)) {
/* repcode detected, we should take it */
const BYTE* const repEnd = repIndex < dictLimit ? dictEnd : iend;
matchLength = ZSTD_count_2segments(ip+1+4, repMatch+4, iend, repEnd, prefixStart) + 4;
if (depth==0) goto _storeSequence;
} }
/* first search (depth 0) */
{ size_t ofbCandidate = 999999999;
size_t const ml2 = ZSTD_searchMax(ms, ip, iend, &ofbCandidate, mls, rowLog, searchMethod, ZSTD_extDict);
if (ml2 > matchLength)
matchLength = ml2, start = ip, offBase = ofbCandidate;
}
if (matchLength < 4) {
size_t const step = ((size_t)(ip-anchor) >> kSearchStrength);
ip += step + 1; /* jump faster over incompressible sections */
/* Enter the lazy skipping mode once we are skipping more than 8 bytes at a time.
* In this mode we stop inserting every position into our tables, and only insert
* positions that we search, which is one in `step` positions.
* The exact cutoff is flexible; I've just chosen a number that is reasonably high,
* so we minimize the compression ratio loss in "normal" scenarios. This mode gets
* triggered once we've gone 2KB without finding any matches.
*/
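/* Concretely, step == (ip - anchor) >> kSearchStrength (computed above), so skipping
* engages once the unmatched run (ip - anchor) roughly exceeds
* kLazySkippingStep << kSearchStrength bytes, i.e. the ~2KB mentioned above
* with the constants used here. */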
ms->lazySkipping = step > kLazySkippingStep;
continue;
}
/* let's try to find a better solution */
if (depth>=1)
while (ip<ilimit) {
ip ++;
curr++;
/* check repCode */
if (offBase) {
const U32 windowLow = ZSTD_getLowestMatchIndex(ms, curr, windowLog);
const U32 repIndex = (U32)(curr - offset_1);
const BYTE* const repBase = repIndex < dictLimit ? dictBase : base;
const BYTE* const repMatch = repBase + repIndex;
if ( (ZSTD_index_overlap_check(dictLimit, repIndex))
& (offset_1 <= curr - windowLow) ) /* equivalent to `curr > repIndex >= windowLow` */
if (MEM_read32(ip) == MEM_read32(repMatch)) {
/* repcode detected */
const BYTE* const repEnd = repIndex < dictLimit ? dictEnd : iend;
size_t const repLength = ZSTD_count_2segments(ip+4, repMatch+4, iend, repEnd, prefixStart) + 4;
int const gain2 = (int)(repLength * 3);
int const gain1 = (int)(matchLength*3 - ZSTD_highbit32((U32)offBase) + 1);
if ((repLength >= 4) && (gain2 > gain1))
matchLength = repLength, offBase = REPCODE1_TO_OFFBASE, start = ip;
} }
/* search match, depth 1 */
{ size_t ofbCandidate = 999999999;
size_t const ml2 = ZSTD_searchMax(ms, ip, iend, &ofbCandidate, mls, rowLog, searchMethod, ZSTD_extDict);
int const gain2 = (int)(ml2*4 - ZSTD_highbit32((U32)ofbCandidate)); /* raw approx */
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 4);
if ((ml2 >= 4) && (gain2 > gain1)) {
matchLength = ml2, offBase = ofbCandidate, start = ip;
continue; /* search a better one */
} }
/* let's find an even better one */
if ((depth==2) && (ip<ilimit)) {
ip ++;
curr++;
/* check repCode */
if (offBase) {
const U32 windowLow = ZSTD_getLowestMatchIndex(ms, curr, windowLog);
const U32 repIndex = (U32)(curr - offset_1);
const BYTE* const repBase = repIndex < dictLimit ? dictBase : base;
const BYTE* const repMatch = repBase + repIndex;
if ( (ZSTD_index_overlap_check(dictLimit, repIndex))
& (offset_1 <= curr - windowLow) ) /* equivalent to `curr > repIndex >= windowLow` */
if (MEM_read32(ip) == MEM_read32(repMatch)) {
/* repcode detected */
const BYTE* const repEnd = repIndex < dictLimit ? dictEnd : iend;
size_t const repLength = ZSTD_count_2segments(ip+4, repMatch+4, iend, repEnd, prefixStart) + 4;
int const gain2 = (int)(repLength * 4);
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 1);
if ((repLength >= 4) && (gain2 > gain1))
matchLength = repLength, offBase = REPCODE1_TO_OFFBASE, start = ip;
} }
/* search match, depth 2 */
{ size_t ofbCandidate = 999999999;
size_t const ml2 = ZSTD_searchMax(ms, ip, iend, &ofbCandidate, mls, rowLog, searchMethod, ZSTD_extDict);
int const gain2 = (int)(ml2*4 - ZSTD_highbit32((U32)ofbCandidate)); /* raw approx */
int const gain1 = (int)(matchLength*4 - ZSTD_highbit32((U32)offBase) + 7);
if ((ml2 >= 4) && (gain2 > gain1)) {
matchLength = ml2, offBase = ofbCandidate, start = ip;
continue;
} } }
break; /* nothing found : store previous solution */
}
/* catch up */
if (OFFBASE_IS_OFFSET(offBase)) {
U32 const matchIndex = (U32)((size_t)(start-base) - OFFBASE_TO_OFFSET(offBase));
const BYTE* match = (matchIndex < dictLimit) ? dictBase + matchIndex : base + matchIndex;
const BYTE* const mStart = (matchIndex < dictLimit) ? dictStart : prefixStart;
while ((start>anchor) && (match>mStart) && (start[-1] == match[-1])) { start--; match--; matchLength++; } /* catch up */
offset_2 = offset_1; offset_1 = (U32)OFFBASE_TO_OFFSET(offBase);
}
/* store sequence */
_storeSequence:
{ size_t const litLength = (size_t)(start - anchor);
ZSTD_storeSeq(seqStore, litLength, anchor, iend, (U32)offBase, matchLength);
anchor = ip = start + matchLength;
}
if (ms->lazySkipping) {
/* We've found a match, disable lazy skipping mode, and refill the hash cache. */
if (searchMethod == search_rowHash) {
ZSTD_row_fillHashCache(ms, base, rowLog, mls, ms->nextToUpdate, ilimit);
}
ms->lazySkipping = 0;
}
/* check immediate repcode */
while (ip <= ilimit) {
const U32 repCurrent = (U32)(ip-base);
const U32 windowLow = ZSTD_getLowestMatchIndex(ms, repCurrent, windowLog);
const U32 repIndex = repCurrent - offset_2;
const BYTE* const repBase = repIndex < dictLimit ? dictBase : base;
const BYTE* const repMatch = repBase + repIndex;
if ( (ZSTD_index_overlap_check(dictLimit, repIndex))
& (offset_2 <= repCurrent - windowLow) ) /* equivalent to `curr > repIndex >= windowLow` */
if (MEM_read32(ip) == MEM_read32(repMatch)) {
/* repcode detected, we should take it */
const BYTE* const repEnd = repIndex < dictLimit ? dictEnd : iend;
matchLength = ZSTD_count_2segments(ip+4, repMatch+4, iend, repEnd, prefixStart) + 4;
offBase = offset_2; offset_2 = offset_1; offset_1 = (U32)offBase; /* swap offset history */
ZSTD_storeSeq(seqStore, 0, anchor, iend, REPCODE1_TO_OFFBASE, matchLength);
ip += matchLength;
anchor = ip;
continue; /* faster when present ... (?) */
}
break;
} }
/* Save reps for next block */
rep[0] = offset_1;
rep[1] = offset_2;
/* Return the last literals size */
return (size_t)(iend - anchor);
}
#endif /* build exclusions */
#ifndef ZSTD_EXCLUDE_GREEDY_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_greedy_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 0);
}
size_t ZSTD_compressBlock_greedy_extDict_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 0);
}
#endif
#ifndef ZSTD_EXCLUDE_LAZY_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_lazy_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 1);
}
size_t ZSTD_compressBlock_lazy_extDict_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 1);
}
#endif
#ifndef ZSTD_EXCLUDE_LAZY2_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_lazy2_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_hashChain, 2);
}
size_t ZSTD_compressBlock_lazy2_extDict_row(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_rowHash, 2);
}
#endif
#ifndef ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btlazy2_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
void const* src, size_t srcSize)
{
return ZSTD_compressBlock_lazy_extDict_generic(ms, seqStore, rep, src, srcSize, search_binaryTree, 2);
}
#endif
/**** ended inlining compress/zstd_lazy.c ****/
/**** start inlining compress/zstd_ldm.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**** skipping file: zstd_ldm.h ****/
/**** skipping file: ../common/debug.h ****/
/**** skipping file: ../common/xxhash.h ****/
/**** skipping file: zstd_fast.h ****/
/**** skipping file: zstd_double_fast.h ****/
/**** start inlining zstd_ldm_geartab.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_LDM_GEARTAB_H
#define ZSTD_LDM_GEARTAB_H
/**** skipping file: ../common/compiler.h ****/
/**** skipping file: ../common/mem.h ****/
static UNUSED_ATTR const U64 ZSTD_ldm_gearTab[256] = {
0xf5b8f72c5f77775c, 0x84935f266b7ac412, 0xb647ada9ca730ccc,
0xb065bb4b114fb1de, 0x34584e7e8c3a9fd0, 0x4e97e17c6ae26b05,
0x3a03d743bc99a604, 0xcecd042422c4044f, 0x76de76c58524259e,
0x9c8528f65badeaca, 0x86563706e2097529, 0x2902475fa375d889,
0xafb32a9739a5ebe6, 0xce2714da3883e639, 0x21eaf821722e69e,
0x37b628620b628, 0x49a8d455d88caf5, 0x8556d711e6958140,
0x4f7ae74fc605c1f, 0x829f0c3468bd3a20, 0x4ffdc885c625179e,
0x8473de048a3daf1b, 0x51008822b05646b2, 0x69d75d12b2d1cc5f,
0x8c9d4a19159154bc, 0xc3cc10f4abbd4003, 0xd06ddc1cecb97391,
0xbe48e6e7ed80302e, 0x3481db31cee03547, 0xacc3f67cdaa1d210,
0x65cb771d8c7f96cc, 0x8eb27177055723dd, 0xc789950d44cd94be,
0x934feadc3700b12b, 0x5e485f11edbdf182, 0x1e2e2a46fd64767a,
0x2969ca71d82efa7c, 0x9d46e9935ebbba2e, 0xe056b67e05e6822b,
0x94d73f55739d03a0, 0xcd7010bdb69b5a03, 0x455ef9fcd79b82f4,
0x869cb54a8749c161, 0x38d1a4fa6185d225, 0xb475166f94bbe9bb,
0xa4143548720959f1, 0x7aed4780ba6b26ba, 0xd0ce264439e02312,
0x84366d746078d508, 0xa8ce973c72ed17be, 0x21c323a29a430b01,
0x9962d617e3af80ee, 0xab0ce91d9c8cf75b, 0x530e8ee6d19a4dbc,
0x2ef68c0cf53f5d72, 0xc03a681640a85506, 0x496e4e9f9c310967,
0x78580472b59b14a0, 0x273824c23b388577, 0x66bf923ad45cb553,
0x47ae1a5a2492ba86, 0x35e304569e229659, 0x4765182a46870b6f,
0x6cbab625e9099412, 0xddac9a2e598522c1, 0x7172086e666624f2,
0xdf5003ca503b7837, 0x88c0c1db78563d09, 0x58d51865acfc289d,
0x177671aec65224f1, 0xfb79d8a241e967d7, 0x2be1e101cad9a49a,
0x6625682f6e29186b, 0x399553457ac06e50, 0x35dffb4c23abb74,
0x429db2591f54aade, 0xc52802a8037d1009, 0x6acb27381f0b25f3,
0xf45e2551ee4f823b, 0x8b0ea2d99580c2f7, 0x3bed519cbcb4e1e1,
0xff452823dbb010a, 0x9d42ed614f3dd267, 0x5b9313c06257c57b,
0xa114b8008b5e1442, 0xc1fe311c11c13d4b, 0x66e8763ea34c5568,
0x8b982af1c262f05d, 0xee8876faaa75fbb7, 0x8a62a4d0d172bb2a,
0xc13d94a3b7449a97, 0x6dbbba9dc15d037c, 0xc786101f1d92e0f1,
0xd78681a907a0b79b, 0xf61aaf2962c9abb9, 0x2cfd16fcd3cb7ad9,
0x868c5b6744624d21, 0x25e650899c74ddd7, 0xba042af4a7c37463,
0x4eb1a539465a3eca, 0xbe09dbf03b05d5ca, 0x774e5a362b5472ba,
0x47a1221229d183cd, 0x504b0ca18ef5a2df, 0xdffbdfbde2456eb9,
0x46cd2b2fbee34634, 0xf2aef8fe819d98c3, 0x357f5276d4599d61,
0x24a5483879c453e3, 0x88026889192b4b9, 0x28da96671782dbec,
0x4ef37c40588e9aaa, 0x8837b90651bc9fb3, 0xc164f741d3f0e5d6,
0xbc135a0a704b70ba, 0x69cd868f7622ada, 0xbc37ba89e0b9c0ab,
0x47c14a01323552f6, 0x4f00794bacee98bb, 0x7107de7d637a69d5,
0x88af793bb6f2255e, 0xf3c6466b8799b598, 0xc288c616aa7f3b59,
0x81ca63cf42fca3fd, 0x88d85ace36a2674b, 0xd056bd3792389e7,
0xe55c396c4e9dd32d, 0xbefb504571e6c0a6, 0x96ab32115e91e8cc,
0xbf8acb18de8f38d1, 0x66dae58801672606, 0x833b6017872317fb,
0xb87c16f2d1c92864, 0xdb766a74e58b669c, 0x89659f85c61417be,
0xc8daad856011ea0c, 0x76a4b565b6fe7eae, 0xa469d085f6237312,
0xaaf0365683a3e96c, 0x4dbb746f8424f7b8, 0x638755af4e4acc1,
0x3d7807f5bde64486, 0x17be6d8f5bbb7639, 0x903f0cd44dc35dc,
0x67b672eafdf1196c, 0xa676ff93ed4c82f1, 0x521d1004c5053d9d,
0x37ba9ad09ccc9202, 0x84e54d297aacfb51, 0xa0b4b776a143445,
0x820d471e20b348e, 0x1874383cb83d46dc, 0x97edeec7a1efe11c,
0xb330e50b1bdc42aa, 0x1dd91955ce70e032, 0xa514cdb88f2939d5,
0x2791233fd90db9d3, 0x7b670a4cc50f7a9b, 0x77c07d2a05c6dfa5,
0xe3778b6646d0a6fa, 0xb39c8eda47b56749, 0x933ed448addbef28,
0xaf846af6ab7d0bf4, 0xe5af208eb666e49, 0x5e6622f73534cd6a,
0x297daeca42ef5b6e, 0x862daef3d35539a6, 0xe68722498f8e1ea9,
0x981c53093dc0d572, 0xfa09b0bfbf86fbf5, 0x30b1e96166219f15,
0x70e7d466bdc4fb83, 0x5a66736e35f2a8e9, 0xcddb59d2b7c1baef,
0xd6c7d247d26d8996, 0xea4e39eac8de1ba3, 0x539c8bb19fa3aff2,
0x9f90e4c5fd508d8, 0xa34e5956fbaf3385, 0x2e2f8e151d3ef375,
0x173691e9b83faec1, 0xb85a8d56bf016379, 0x8382381267408ae3,
0xb90f901bbdc0096d, 0x7c6ad32933bcec65, 0x76bb5e2f2c8ad595,
0x390f851a6cf46d28, 0xc3e6064da1c2da72, 0xc52a0c101cfa5389,
0xd78eaf84a3fbc530, 0x3781b9e2288b997e, 0x73c2f6dea83d05c4,
0x4228e364c5b5ed7, 0x9d7a3edf0da43911, 0x8edcfeda24686756,
0x5e7667a7b7a9b3a1, 0x4c4f389fa143791d, 0xb08bc1023da7cddc,
0x7ab4be3ae529b1cc, 0x754e6132dbe74ff9, 0x71635442a839df45,
0x2f6fb1643fbe52de, 0x961e0a42cf7a8177, 0xf3b45d83d89ef2ea,
0xee3de4cf4a6e3e9b, 0xcd6848542c3295e7, 0xe4cee1664c78662f,
0x9947548b474c68c4, 0x25d73777a5ed8b0b, 0xc915b1d636b7fc,
0x21c2ba75d9b0d2da, 0x5f6b5dcf608a64a1, 0xdcf333255ff9570c,
0x633b922418ced4ee, 0xc136dde0b004b34a, 0x58cc83b05d4b2f5a,
0x5eb424dda28e42d2, 0x62df47369739cd98, 0xb4e0b42485e4ce17,
0x16e1f0c1f9a8d1e7, 0x8ec3916707560ebf, 0x62ba6e2df2cc9db3,
0xcbf9f4ff77d83a16, 0x78d9d7d07d2bbcc4, 0xef554ce1e02c41f4,
0x8d7581127eccf94d, 0xa9b53336cb3c8a05, 0x38c42c0bf45c4f91,
0x640893cdf4488863, 0x80ec34bc575ea568, 0x39f324f5b48eaa40,
0xe9d9ed1f8eff527f, 0x9224fc058cc5a214, 0xbaba00b04cfe7741,
0x309a9f120fcf52af, 0xa558f3ec65626212, 0x424bec8b7adabe2f,
0x41622513a6aea433, 0xb88da2d5324ca798, 0xd287733b245528a4,
0x9a44697e6d68aec3, 0x7b1093be2f49bb28, 0x50bbec632e3d8aad,
0x6cd90723e1ea8283, 0x897b9e7431b02bf3, 0x219efdcb338a7047,
0x3b0311f0a27c0656, 0xdb17bf91c0db96e7, 0x8cd4fd6b4e85a5b2,
0xfab071054ba6409d, 0x40d6fe831fa9dfd9, 0xaf358debad7d791e,
0xeb8d0e25a65e3e58, 0xbbcbd3df14e08580, 0xcf751f27ecdab2b,
0x2b4da14f2613d8f4
};
#endif /* ZSTD_LDM_GEARTAB_H */
/**** ended inlining zstd_ldm_geartab.h ****/
#define LDM_BUCKET_SIZE_LOG 4
#define LDM_MIN_MATCH_LENGTH 64
#define LDM_HASH_RLOG 7
typedef struct {
U64 rolling;
U64 stopMask;
} ldmRollingHashState_t;
/** ZSTD_ldm_gear_init():
*
* Initializes the rolling hash state such that it will honor the
* settings in params. */
static void ZSTD_ldm_gear_init(ldmRollingHashState_t* state, ldmParams_t const* params)
{
unsigned maxBitsInMask = MIN(params->minMatchLength, 64);
unsigned hashRateLog = params->hashRateLog;
state->rolling = ~(U32)0;
/* The choice of the splitting criterion is subject to two conditions:
* 1. it has to trigger on average every 2^(hashRateLog) bytes;
* 2. ideally, it has to depend on a window of minMatchLength bytes.
*
* In the gear hash algorithm, bit n depends on the last n bytes;
* so in order to obtain a good quality splitting criterion it is
* preferable to use bits with high weight.
*
* To match condition 1 we use a mask with hashRateLog bits set
* and, because of the previous remark, we make sure these bits
* have the highest possible weight while still respecting
* condition 2.
*/
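/* For instance, with hashRateLog == 7 and minMatchLength >= 64 (so maxBitsInMask == 64),
* stopMask gets its 7 highest-weight bits set: (0x7F << 57) == 0xFE00000000000000,
* which triggers a split every ~128 bytes on average. */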
if (hashRateLog > 0 && hashRateLog <= maxBitsInMask) {
state->stopMask = (((U64)1 << hashRateLog) - 1) << (maxBitsInMask - hashRateLog);
} else {
/* In this degenerate case we simply honor the hash rate. */
state->stopMask = ((U64)1 << hashRateLog) - 1;
}
}
/** ZSTD_ldm_gear_reset()
* Feeds [data, data + minMatchLength) into the hash without registering any
* splits. This effectively resets the hash state. This is used when skipping
* over data, either at the beginning of a block or when skipping sections.
*/
static void ZSTD_ldm_gear_reset(ldmRollingHashState_t* state,
BYTE const* data, size_t minMatchLength)
{
U64 hash = state->rolling;
size_t n = 0;
#define GEAR_ITER_ONCE() do { \
hash = (hash << 1) + ZSTD_ldm_gearTab[data[n] & 0xff]; \
n += 1; \
} while (0)
while (n + 3 < minMatchLength) {
GEAR_ITER_ONCE();
GEAR_ITER_ONCE();
GEAR_ITER_ONCE();
GEAR_ITER_ONCE();
}
while (n < minMatchLength) {
GEAR_ITER_ONCE();
}
#undef GEAR_ITER_ONCE
}
/** ZSTD_ldm_gear_feed():
*
* Registers in the splits array all the split points found in the first
* size bytes following the data pointer. This function terminates when
* either all the data has been processed or LDM_BATCH_SIZE splits are
* present in the splits array.
*
* Precondition: The splits array must not be full.
* Returns: The number of bytes processed. */
static size_t ZSTD_ldm_gear_feed(ldmRollingHashState_t* state,
BYTE const* data, size_t size,
size_t* splits, unsigned* numSplits)
{
size_t n;
U64 hash, mask;
hash = state->rolling;
mask = state->stopMask;
n = 0;
#define GEAR_ITER_ONCE() do { \
hash = (hash << 1) + ZSTD_ldm_gearTab[data[n] & 0xff]; \
n += 1; \
if (UNLIKELY((hash & mask) == 0)) { \
splits[*numSplits] = n; \
*numSplits += 1; \
if (*numSplits == LDM_BATCH_SIZE) \
goto done; \
} \
} while (0)
while (n + 3 < size) {
GEAR_ITER_ONCE();
GEAR_ITER_ONCE();
GEAR_ITER_ONCE();
GEAR_ITER_ONCE();
}
while (n < size) {
GEAR_ITER_ONCE();
}
#undef GEAR_ITER_ONCE
done:
state->rolling = hash;
return n;
}
void ZSTD_ldm_adjustParameters(ldmParams_t* params,
const ZSTD_compressionParameters* cParams)
{
params->windowLog = cParams->windowLog;
ZSTD_STATIC_ASSERT(LDM_BUCKET_SIZE_LOG <= ZSTD_LDM_BUCKETSIZELOG_MAX);
DEBUGLOG(4, "ZSTD_ldm_adjustParameters");
if (params->hashRateLog == 0) {
if (params->hashLog > 0) {
/* if params->hashLog is set, derive hashRateLog from it */
assert(params->hashLog <= ZSTD_HASHLOG_MAX);
if (params->windowLog > params->hashLog) {
params->hashRateLog = params->windowLog - params->hashLog;
}
} else {
assert(1 <= (int)cParams->strategy && (int)cParams->strategy <= 9);
/* mapping from [fast, rate7] to [btultra2, rate4] */
params->hashRateLog = 7 - (cParams->strategy/3);
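/* e.g. ZSTD_fast (strategy 1) gives 7 - 1/3 == 7, while ZSTD_btultra2 (strategy 9) gives 7 - 9/3 == 4 */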
}
}
if (params->hashLog == 0) {
params->hashLog = BOUNDED(ZSTD_HASHLOG_MIN, params->windowLog - params->hashRateLog, ZSTD_HASHLOG_MAX);
}
if (params->minMatchLength == 0) {
params->minMatchLength = LDM_MIN_MATCH_LENGTH;
if (cParams->strategy >= ZSTD_btultra)
params->minMatchLength /= 2;
}
if (params->bucketSizeLog==0) {
assert(1 <= (int)cParams->strategy && (int)cParams->strategy <= 9);
params->bucketSizeLog = BOUNDED(LDM_BUCKET_SIZE_LOG, (U32)cParams->strategy, ZSTD_LDM_BUCKETSIZELOG_MAX);
}
params->bucketSizeLog = MIN(params->bucketSizeLog, params->hashLog);
}
size_t ZSTD_ldm_getTableSize(ldmParams_t params)
{
size_t const ldmHSize = ((size_t)1) << params.hashLog;
size_t const ldmBucketSizeLog = MIN(params.bucketSizeLog, params.hashLog);
size_t const ldmBucketSize = ((size_t)1) << (params.hashLog - ldmBucketSizeLog);
size_t const totalSize = ZSTD_cwksp_alloc_size(ldmBucketSize)
+ ZSTD_cwksp_alloc_size(ldmHSize * sizeof(ldmEntry_t));
return params.enableLdm == ZSTD_ps_enable ? totalSize : 0;
}
size_t ZSTD_ldm_getMaxNbSeq(ldmParams_t params, size_t maxChunkSize)
{
return params.enableLdm == ZSTD_ps_enable ? (maxChunkSize / params.minMatchLength) : 0;
}
/** ZSTD_ldm_getBucket() :
* Returns a pointer to the start of the bucket associated with hash. */
static ldmEntry_t* ZSTD_ldm_getBucket(
const ldmState_t* ldmState, size_t hash, U32 const bucketSizeLog)
{
return ldmState->hashTable + (hash << bucketSizeLog);
}
/** ZSTD_ldm_insertEntry() :
* Insert the entry with corresponding hash into the hash table */
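/* Note: each bucket behaves as a fixed-size ring buffer. With the default
* LDM_BUCKET_SIZE_LOG of 4, a bucket holds 16 entries; the offset below wraps
* around, so the oldest entry is overwritten once the bucket is full. */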
static void ZSTD_ldm_insertEntry(ldmState_t* ldmState,
size_t const hash, const ldmEntry_t entry,
U32 const bucketSizeLog)
{
BYTE* const pOffset = ldmState->bucketOffsets + hash;
unsigned const offset = *pOffset;
*(ZSTD_ldm_getBucket(ldmState, hash, bucketSizeLog) + offset) = entry;
*pOffset = (BYTE)((offset + 1) & ((1u << bucketSizeLog) - 1));
}
/** ZSTD_ldm_countBackwardsMatch() :
* Returns the number of bytes that match backwards before pIn and pMatch.
*
* We count only bytes where pMatch >= pBase and pIn >= pAnchor. */
static size_t ZSTD_ldm_countBackwardsMatch(
const BYTE* pIn, const BYTE* pAnchor,
const BYTE* pMatch, const BYTE* pMatchBase)
{
size_t matchLength = 0;
while (pIn > pAnchor && pMatch > pMatchBase && pIn[-1] == pMatch[-1]) {
pIn--;
pMatch--;
matchLength++;
}
return matchLength;
}
/** ZSTD_ldm_countBackwardsMatch_2segments() :
* Returns the number of bytes that match backwards from pMatch,
* even with the backwards match spanning 2 different segments.
*
* On reaching `pMatchBase`, continue counting backwards from `pExtDictEnd` */
static size_t ZSTD_ldm_countBackwardsMatch_2segments(
const BYTE* pIn, const BYTE* pAnchor,
const BYTE* pMatch, const BYTE* pMatchBase,
const BYTE* pExtDictStart, const BYTE* pExtDictEnd)
{
size_t matchLength = ZSTD_ldm_countBackwardsMatch(pIn, pAnchor, pMatch, pMatchBase);
if (pMatch - matchLength != pMatchBase || pMatchBase == pExtDictStart) {
/* If backwards match is entirely in the extDict or prefix, immediately return */
return matchLength;
}
DEBUGLOG(7, "ZSTD_ldm_countBackwardsMatch_2segments: found 2-parts backwards match (length in prefix==%zu)", matchLength);
matchLength += ZSTD_ldm_countBackwardsMatch(pIn - matchLength, pAnchor, pExtDictEnd, pExtDictStart);
DEBUGLOG(7, "final backwards match length = %zu", matchLength);
return matchLength;
}
/** ZSTD_ldm_fillFastTables() :
*
* Fills the relevant tables for the ZSTD_fast and ZSTD_dfast strategies.
* This is similar to ZSTD_loadDictionaryContent.
*
* The tables for the other strategies are filled within their
* block compressors. */
static size_t ZSTD_ldm_fillFastTables(ZSTD_MatchState_t* ms,
void const* end)
{
const BYTE* const iend = (const BYTE*)end;
switch(ms->cParams.strategy)
{
case ZSTD_fast:
ZSTD_fillHashTable(ms, iend, ZSTD_dtlm_fast, ZSTD_tfp_forCCtx);
break;
case ZSTD_dfast:
#ifndef ZSTD_EXCLUDE_DFAST_BLOCK_COMPRESSOR
ZSTD_fillDoubleHashTable(ms, iend, ZSTD_dtlm_fast, ZSTD_tfp_forCCtx);
#else
assert(0); /* shouldn't be called: cparams should've been adjusted. */
#endif
break;
case ZSTD_greedy:
case ZSTD_lazy:
case ZSTD_lazy2:
case ZSTD_btlazy2:
case ZSTD_btopt:
case ZSTD_btultra:
case ZSTD_btultra2:
break;
default:
assert(0); /* not possible : not a valid strategy id */
}
return 0;
}
void ZSTD_ldm_fillHashTable(
ldmState_t* ldmState, const BYTE* ip,
const BYTE* iend, ldmParams_t const* params)
{
U32 const minMatchLength = params->minMatchLength;
U32 const bucketSizeLog = params->bucketSizeLog;
U32 const hBits = params->hashLog - bucketSizeLog;
BYTE const* const base = ldmState->window.base;
BYTE const* const istart = ip;
ldmRollingHashState_t hashState;
size_t* const splits = ldmState->splitIndices;
unsigned numSplits;
DEBUGLOG(5, "ZSTD_ldm_fillHashTable");
ZSTD_ldm_gear_init(&hashState, params);
while (ip < iend) {
size_t hashed;
unsigned n;
numSplits = 0;
hashed = ZSTD_ldm_gear_feed(&hashState, ip, (size_t)(iend - ip), splits, &numSplits);
for (n = 0; n < numSplits; n++) {
if (ip + splits[n] >= istart + minMatchLength) {
BYTE const* const split = ip + splits[n] - minMatchLength;
U64 const xxhash = XXH64(split, minMatchLength, 0);
U32 const hash = (U32)(xxhash & (((U32)1 << hBits) - 1));
ldmEntry_t entry;
entry.offset = (U32)(split - base);
entry.checksum = (U32)(xxhash >> 32);
ZSTD_ldm_insertEntry(ldmState, hash, entry, params->bucketSizeLog);
}
}
ip += hashed;
}
}
/** ZSTD_ldm_limitTableUpdate() :
*
* Sets ms->nextToUpdate to a position closer to anchor
* if it is far away
* (after a long match, only update tables a limited amount). */
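/* For example, if anchor is 10000 bytes ahead of ms->nextToUpdate, nextToUpdate is
* moved up to curr - 512, so the next table fill starts at most 512 bytes behind anchor. */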
static void ZSTD_ldm_limitTableUpdate(ZSTD_MatchState_t* ms, const BYTE* anchor)
{
U32 const curr = (U32)(anchor - ms->window.base);
if (curr > ms->nextToUpdate + 1024) {
ms->nextToUpdate =
curr - MIN(512, curr - ms->nextToUpdate - 1024);
}
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_ldm_generateSequences_internal(
ldmState_t* ldmState, RawSeqStore_t* rawSeqStore,
ldmParams_t const* params, void const* src, size_t srcSize)
{
/* LDM parameters */
int const extDict = ZSTD_window_hasExtDict(ldmState->window);
U32 const minMatchLength = params->minMatchLength;
U32 const entsPerBucket = 1U << params->bucketSizeLog;
U32 const hBits = params->hashLog - params->bucketSizeLog;
/* Prefix and extDict parameters */
U32 const dictLimit = ldmState->window.dictLimit;
U32 const lowestIndex = extDict ? ldmState->window.lowLimit : dictLimit;
BYTE const* const base = ldmState->window.base;
BYTE const* const dictBase = extDict ? ldmState->window.dictBase : NULL;
BYTE const* const dictStart = extDict ? dictBase + lowestIndex : NULL;
BYTE const* const dictEnd = extDict ? dictBase + dictLimit : NULL;
BYTE const* const lowPrefixPtr = base + dictLimit;
/* Input bounds */
BYTE const* const istart = (BYTE const*)src;
BYTE const* const iend = istart + srcSize;
BYTE const* const ilimit = iend - HASH_READ_SIZE;
/* Input positions */
BYTE const* anchor = istart;
BYTE const* ip = istart;
/* Rolling hash state */
ldmRollingHashState_t hashState;
/* Arrays for staged-processing */
size_t* const splits = ldmState->splitIndices;
ldmMatchCandidate_t* const candidates = ldmState->matchCandidates;
unsigned numSplits;
if (srcSize < minMatchLength)
return iend - anchor;
/* Initialize the rolling hash state with the first minMatchLength bytes */
ZSTD_ldm_gear_init(&hashState, params);
ZSTD_ldm_gear_reset(&hashState, ip, minMatchLength);
ip += minMatchLength;
while (ip < ilimit) {
size_t hashed;
unsigned n;
numSplits = 0;
hashed = ZSTD_ldm_gear_feed(&hashState, ip, ilimit - ip,
splits, &numSplits);
for (n = 0; n < numSplits; n++) {
BYTE const* const split = ip + splits[n] - minMatchLength;
U64 const xxhash = XXH64(split, minMatchLength, 0);
U32 const hash = (U32)(xxhash & (((U32)1 << hBits) - 1));
candidates[n].split = split;
candidates[n].hash = hash;
candidates[n].checksum = (U32)(xxhash >> 32);
candidates[n].bucket = ZSTD_ldm_getBucket(ldmState, hash, params->bucketSizeLog);
PREFETCH_L1(candidates[n].bucket);
}
for (n = 0; n < numSplits; n++) {
size_t forwardMatchLength = 0, backwardMatchLength = 0,
bestMatchLength = 0, mLength;
U32 offset;
BYTE const* const split = candidates[n].split;
U32 const checksum = candidates[n].checksum;
U32 const hash = candidates[n].hash;
ldmEntry_t* const bucket = candidates[n].bucket;
ldmEntry_t const* cur;
ldmEntry_t const* bestEntry = NULL;
ldmEntry_t newEntry;
newEntry.offset = (U32)(split - base);
newEntry.checksum = checksum;
/* If a split point would generate a sequence overlapping with
* the previous one, we merely register it in the hash table and
* move on */
if (split < anchor) {
ZSTD_ldm_insertEntry(ldmState, hash, newEntry, params->bucketSizeLog);
continue;
}
for (cur = bucket; cur < bucket + entsPerBucket; cur++) {
size_t curForwardMatchLength, curBackwardMatchLength,
curTotalMatchLength;
if (cur->checksum != checksum || cur->offset <= lowestIndex) {
continue;
}
if (extDict) {
BYTE const* const curMatchBase =
cur->offset < dictLimit ? dictBase : base;
BYTE const* const pMatch = curMatchBase + cur->offset;
BYTE const* const matchEnd =
cur->offset < dictLimit ? dictEnd : iend;
BYTE const* const lowMatchPtr =
cur->offset < dictLimit ? dictStart : lowPrefixPtr;
curForwardMatchLength =
ZSTD_count_2segments(split, pMatch, iend, matchEnd, lowPrefixPtr);
if (curForwardMatchLength < minMatchLength) {
continue;
}
curBackwardMatchLength = ZSTD_ldm_countBackwardsMatch_2segments(
split, anchor, pMatch, lowMatchPtr, dictStart, dictEnd);
} else { /* !extDict */
BYTE const* const pMatch = base + cur->offset;
curForwardMatchLength = ZSTD_count(split, pMatch, iend);
if (curForwardMatchLength < minMatchLength) {
continue;
}
curBackwardMatchLength =
ZSTD_ldm_countBackwardsMatch(split, anchor, pMatch, lowPrefixPtr);
}
curTotalMatchLength = curForwardMatchLength + curBackwardMatchLength;
if (curTotalMatchLength > bestMatchLength) {
bestMatchLength = curTotalMatchLength;
forwardMatchLength = curForwardMatchLength;
backwardMatchLength = curBackwardMatchLength;
bestEntry = cur;
}
}
/* No match found -- insert an entry into the hash table
* and process the next candidate match */
if (bestEntry == NULL) {
ZSTD_ldm_insertEntry(ldmState, hash, newEntry, params->bucketSizeLog);
continue;
}
/* Match found */
offset = (U32)(split - base) - bestEntry->offset;
mLength = forwardMatchLength + backwardMatchLength;
{
rawSeq* const seq = rawSeqStore->seq + rawSeqStore->size;
/* Out of sequence storage */
if (rawSeqStore->size == rawSeqStore->capacity)
return ERROR(dstSize_tooSmall);
seq->litLength = (U32)(split - backwardMatchLength - anchor);
seq->matchLength = (U32)mLength;
seq->offset = offset;
rawSeqStore->size++;
}
/* Insert the current entry into the hash table --- it must be
* done after the previous block to avoid clobbering bestEntry */
ZSTD_ldm_insertEntry(ldmState, hash, newEntry, params->bucketSizeLog);
anchor = split + forwardMatchLength;
/* If we find a match that ends after the data that we've hashed
* then we have a repeating, overlapping pattern, e.g. all zeros.
* If one repetition of the pattern matches our `stopMask` then all
* repetitions will. We don't need to insert them all into our table,
* only the first one. So skip over overlapping matches.
* This is a major speed boost (20x) for compressing a single byte
* repeated, when that byte ends up in the table.
*/
if (anchor > ip + hashed) {
ZSTD_ldm_gear_reset(&hashState, anchor - minMatchLength, minMatchLength);
/* Continue the outer loop at anchor (ip + hashed == anchor). */
ip = anchor - hashed;
break;
}
}
ip += hashed;
}
return iend - anchor;
}
/*! ZSTD_ldm_reduceTable() :
* reduce table indexes by `reducerValue` */
static void ZSTD_ldm_reduceTable(ldmEntry_t* const table, U32 const size,
U32 const reducerValue)
{
U32 u;
for (u = 0; u < size; u++) {
if (table[u].offset < reducerValue) table[u].offset = 0;
else table[u].offset -= reducerValue;
}
}
size_t ZSTD_ldm_generateSequences(
ldmState_t* ldmState, RawSeqStore_t* sequences,
ldmParams_t const* params, void const* src, size_t srcSize)
{
U32 const maxDist = 1U << params->windowLog;
BYTE const* const istart = (BYTE const*)src;
BYTE const* const iend = istart + srcSize;
size_t const kMaxChunkSize = 1 << 20;
size_t const nbChunks = (srcSize / kMaxChunkSize) + ((srcSize % kMaxChunkSize) != 0);
size_t chunk;
size_t leftoverSize = 0;
assert(ZSTD_CHUNKSIZE_MAX >= kMaxChunkSize);
/* Check that ZSTD_window_update() has been called for this chunk prior
* to passing it to this function.
*/
assert(ldmState->window.nextSrc >= (BYTE const*)src + srcSize);
/* The input could be very large (in zstdmt), so it must be broken up into
* chunks to enforce the maximum distance and handle overflow correction.
*/
assert(sequences->pos <= sequences->size);
assert(sequences->size <= sequences->capacity);
for (chunk = 0; chunk < nbChunks && sequences->size < sequences->capacity; ++chunk) {
BYTE const* const chunkStart = istart + chunk * kMaxChunkSize;
size_t const remaining = (size_t)(iend - chunkStart);
BYTE const *const chunkEnd =
(remaining < kMaxChunkSize) ? iend : chunkStart + kMaxChunkSize;
size_t const chunkSize = chunkEnd - chunkStart;
size_t newLeftoverSize;
size_t const prevSize = sequences->size;
assert(chunkStart < iend);
/* 1. Perform overflow correction if necessary. */
if (ZSTD_window_needOverflowCorrection(ldmState->window, 0, maxDist, ldmState->loadedDictEnd, chunkStart, chunkEnd)) {
U32 const ldmHSize = 1U << params->hashLog;
U32 const correction = ZSTD_window_correctOverflow(
&ldmState->window, /* cycleLog */ 0, maxDist, chunkStart);
ZSTD_ldm_reduceTable(ldmState->hashTable, ldmHSize, correction);
/* invalidate dictionaries on overflow correction */
ldmState->loadedDictEnd = 0;
}
/* 2. We enforce the maximum offset allowed.
*
* kMaxChunkSize should be small enough that we don't lose too much of
* the window through early invalidation.
* TODO: * Test the chunk size.
* * Try invalidation after the sequence generation and test the
* offset against maxDist directly.
*
* NOTE: Because of dictionaries + sequence splitting we MUST make sure
* that any offset used is valid at the END of the sequence, since it may
* be split into two sequences. This condition holds when using
* ZSTD_window_enforceMaxDist(), but if we move to checking offsets
* against maxDist directly, we'll have to carefully handle that case.
*/
ZSTD_window_enforceMaxDist(&ldmState->window, chunkEnd, maxDist, &ldmState->loadedDictEnd, NULL);
/* 3. Generate the sequences for the chunk, and get newLeftoverSize. */
newLeftoverSize = ZSTD_ldm_generateSequences_internal(
ldmState, sequences, params, chunkStart, chunkSize);
if (ZSTD_isError(newLeftoverSize))
return newLeftoverSize;
/* 4. We add the leftover literals from previous iterations to the first
* newly generated sequence, or add the `newLeftoverSize` if none are
* generated.
*/
/* Prepend the leftover literals from the last call */
if (prevSize < sequences->size) {
sequences->seq[prevSize].litLength += (U32)leftoverSize;
leftoverSize = newLeftoverSize;
} else {
assert(newLeftoverSize == chunkSize);
leftoverSize += chunkSize;
}
}
return 0;
}
void
ZSTD_ldm_skipSequences(RawSeqStore_t* rawSeqStore, size_t srcSize, U32 const minMatch)
{
while (srcSize > 0 && rawSeqStore->pos < rawSeqStore->size) {
rawSeq* seq = rawSeqStore->seq + rawSeqStore->pos;
if (srcSize <= seq->litLength) {
/* Skip past srcSize literals */
seq->litLength -= (U32)srcSize;
return;
}
srcSize -= seq->litLength;
seq->litLength = 0;
if (srcSize < seq->matchLength) {
/* Skip past the first srcSize of the match */
seq->matchLength -= (U32)srcSize;
if (seq->matchLength < minMatch) {
/* The match is too short, omit it */
if (rawSeqStore->pos + 1 < rawSeqStore->size) {
seq[1].litLength += seq[0].matchLength;
}
rawSeqStore->pos++;
}
return;
}
srcSize -= seq->matchLength;
seq->matchLength = 0;
rawSeqStore->pos++;
}
}
/**
* If the sequence length is longer than remaining then the sequence is split
* between this block and the next.
*
* Returns the current sequence to handle, or if the rest of the block should
* be literals, it returns a sequence with offset == 0.
*/
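/* Illustrative example: with remaining == 100 and a stored sequence of
* litLength == 60, matchLength == 80, the returned sequence keeps its 60 literals
* but its matchLength is cut down to 40 (or its offset is zeroed if 40 < minMatch),
* and the 100 consumed bytes are skipped in the rawSeqStore for future blocks. */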
static rawSeq maybeSplitSequence(RawSeqStore_t* rawSeqStore,
U32 const remaining, U32 const minMatch)
{
rawSeq sequence = rawSeqStore->seq[rawSeqStore->pos];
assert(sequence.offset > 0);
/* Likely: No partial sequence */
if (remaining >= sequence.litLength + sequence.matchLength) {
rawSeqStore->pos++;
return sequence;
}
/* Cut the sequence short (offset == 0 ==> rest is literals). */
if (remaining <= sequence.litLength) {
sequence.offset = 0;
} else if (remaining < sequence.litLength + sequence.matchLength) {
sequence.matchLength = remaining - sequence.litLength;
if (sequence.matchLength < minMatch) {
sequence.offset = 0;
}
}
/* Skip past `remaining` bytes for the future sequences. */
ZSTD_ldm_skipSequences(rawSeqStore, remaining, minMatch);
return sequence;
}
void ZSTD_ldm_skipRawSeqStoreBytes(RawSeqStore_t* rawSeqStore, size_t nbBytes) {
U32 currPos = (U32)(rawSeqStore->posInSequence + nbBytes);
while (currPos && rawSeqStore->pos < rawSeqStore->size) {
rawSeq currSeq = rawSeqStore->seq[rawSeqStore->pos];
if (currPos >= currSeq.litLength + currSeq.matchLength) {
currPos -= currSeq.litLength + currSeq.matchLength;
rawSeqStore->pos++;
} else {
rawSeqStore->posInSequence = currPos;
break;
}
}
if (currPos == 0 || rawSeqStore->pos == rawSeqStore->size) {
rawSeqStore->posInSequence = 0;
}
}
size_t ZSTD_ldm_blockCompress(RawSeqStore_t* rawSeqStore,
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
ZSTD_ParamSwitch_e useRowMatchFinder,
void const* src, size_t srcSize)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
unsigned const minMatch = cParams->minMatch;
ZSTD_BlockCompressor_f const blockCompressor =
ZSTD_selectBlockCompressor(cParams->strategy, useRowMatchFinder, ZSTD_matchState_dictMode(ms));
/* Input bounds */
BYTE const* const istart = (BYTE const*)src;
BYTE const* const iend = istart + srcSize;
/* Input positions */
BYTE const* ip = istart;
DEBUGLOG(5, "ZSTD_ldm_blockCompress: srcSize=%zu", srcSize);
/* If using opt parser, use LDMs only as candidates rather than always accepting them */
if (cParams->strategy >= ZSTD_btopt) {
size_t lastLLSize;
ms->ldmSeqStore = rawSeqStore;
lastLLSize = blockCompressor(ms, seqStore, rep, src, srcSize);
ZSTD_ldm_skipRawSeqStoreBytes(rawSeqStore, srcSize);
return lastLLSize;
}
assert(rawSeqStore->pos <= rawSeqStore->size);
assert(rawSeqStore->size <= rawSeqStore->capacity);
/* Loop through each sequence and apply the block compressor to the literals */
while (rawSeqStore->pos < rawSeqStore->size && ip < iend) {
/* maybeSplitSequence updates rawSeqStore->pos */
rawSeq const sequence = maybeSplitSequence(rawSeqStore,
(U32)(iend - ip), minMatch);
/* End signal */
if (sequence.offset == 0)
break;
assert(ip + sequence.litLength + sequence.matchLength <= iend);
/* Fill tables for block compressor */
ZSTD_ldm_limitTableUpdate(ms, ip);
ZSTD_ldm_fillFastTables(ms, ip);
/* Run the block compressor */
DEBUGLOG(5, "pos %u : calling block compressor on segment of size %u", (unsigned)(ip-istart), sequence.litLength);
{
int i;
size_t const newLitLength =
blockCompressor(ms, seqStore, rep, ip, sequence.litLength);
ip += sequence.litLength;
/* Update the repcodes */
for (i = ZSTD_REP_NUM - 1; i > 0; i--)
rep[i] = rep[i-1];
rep[0] = sequence.offset;
/* Store the sequence */
ZSTD_storeSeq(seqStore, newLitLength, ip - newLitLength, iend,
OFFSET_TO_OFFBASE(sequence.offset),
sequence.matchLength);
ip += sequence.matchLength;
}
}
/* Fill the tables for the block compressor */
ZSTD_ldm_limitTableUpdate(ms, ip);
ZSTD_ldm_fillFastTables(ms, ip);
/* Compress the last literals */
return blockCompressor(ms, seqStore, rep, ip, iend - ip);
}
/**** ended inlining compress/zstd_ldm.c ****/
/**** start inlining compress/zstd_opt.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: hist.h ****/
/**** skipping file: zstd_opt.h ****/
#if !defined(ZSTD_EXCLUDE_BTLAZY2_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR) \
|| !defined(ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR)
#define ZSTD_LITFREQ_ADD 2 /* scaling factor for litFreq, so that frequencies adapt faster to new stats */
#define ZSTD_MAX_PRICE (1<<30)
#define ZSTD_PREDEF_THRESHOLD 8 /* if srcSize < ZSTD_PREDEF_THRESHOLD, symbols' cost is assumed static, directly determined by pre-defined distributions */
/*-*************************************
* Price functions for optimal parser
***************************************/
#if 0 /* approximation at bit level (for tests) */
# define BITCOST_ACCURACY 0
# define BITCOST_MULTIPLIER (1 << BITCOST_ACCURACY)
# define WEIGHT(stat, opt) ((void)(opt), ZSTD_bitWeight(stat))
#elif 0 /* fractional bit accuracy (for tests) */
# define BITCOST_ACCURACY 8
# define BITCOST_MULTIPLIER (1 << BITCOST_ACCURACY)
# define WEIGHT(stat,opt) ((void)(opt), ZSTD_fracWeight(stat))
#else /* opt==approx, ultra==accurate */
# define BITCOST_ACCURACY 8
# define BITCOST_MULTIPLIER (1 << BITCOST_ACCURACY)
# define WEIGHT(stat,opt) ((opt) ? ZSTD_fracWeight(stat) : ZSTD_bitWeight(stat))
#endif
/* ZSTD_bitWeight() :
* provide estimated "cost" of a stat in full bits only */
MEM_STATIC U32 ZSTD_bitWeight(U32 stat)
{
return (ZSTD_highbit32(stat+1) * BITCOST_MULTIPLIER);
}
/* ZSTD_fracWeight() :
* provide fractional-bit "cost" of a stat,
* using linear interpolation approximation */
MEM_STATIC U32 ZSTD_fracWeight(U32 rawStat)
{
U32 const stat = rawStat + 1;
U32 const hb = ZSTD_highbit32(stat);
U32 const BWeight = hb * BITCOST_MULTIPLIER;
/* Fweight was meant for "Fractional weight"
* but it's effectively a value between 1 and 2
* using fixed point arithmetic */
U32 const FWeight = (stat << BITCOST_ACCURACY) >> hb;
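/* Worked example with BITCOST_ACCURACY == 8: rawStat == 5 gives stat == 6 and hb == 2,
* so BWeight == 512, FWeight == (6 << 8) >> 2 == 384, and weight == 896,
* i.e. 3.5 in fixed point (hb == 2 plus a fractional part of 1.5). */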
U32 const weight = BWeight + FWeight;
assert(hb + BITCOST_ACCURACY < 31);
return weight;
}
#if (DEBUGLEVEL>=2)
/* debugging function,
* @return price in bytes as fractional value
* for debug messages only */
MEM_STATIC double ZSTD_fCost(int price)
{
return (double)price / (BITCOST_MULTIPLIER*8);
}
#endif
static int ZSTD_compressedLiterals(optState_t const* const optPtr)
{
return optPtr->literalCompressionMode != ZSTD_ps_disable;
}
static void ZSTD_setBasePrices(optState_t* optPtr, int optLevel)
{
if (ZSTD_compressedLiterals(optPtr))
optPtr->litSumBasePrice = WEIGHT(optPtr->litSum, optLevel);
optPtr->litLengthSumBasePrice = WEIGHT(optPtr->litLengthSum, optLevel);
optPtr->matchLengthSumBasePrice = WEIGHT(optPtr->matchLengthSum, optLevel);
optPtr->offCodeSumBasePrice = WEIGHT(optPtr->offCodeSum, optLevel);
}
static U32 sum_u32(const unsigned table[], size_t nbElts)
{
size_t n;
U32 total = 0;
for (n=0; n<nbElts; n++) {
total += table[n];
}
return total;
}
typedef enum { base_0possible=0, base_1guaranteed=1 } base_directive_e;
static U32
ZSTD_downscaleStats(unsigned* table, U32 lastEltIndex, U32 shift, base_directive_e base1)
{
U32 s, sum=0;
DEBUGLOG(5, "ZSTD_downscaleStats (nbElts=%u, shift=%u)",
(unsigned)lastEltIndex+1, (unsigned)shift );
assert(shift < 30);
for (s=0; s<lastEltIndex+1; s++) {
unsigned const base = base1 ? 1 : (table[s]>0);
unsigned const newStat = base + (table[s] >> shift);
sum += newStat;
table[s] = newStat;
}
return sum;
}
/* ZSTD_scaleStats() :
* reduce all elt frequencies in table if sum too large
* return the resulting sum of elements */
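/* For example, with logTarget == 11 and a previous sum of 10240, factor == 10240 >> 11 == 5,
* so every frequency is rescaled to 1 + (freq >> 2), since ZSTD_highbit32(5) == 2. */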
static U32 ZSTD_scaleStats(unsigned* table, U32 lastEltIndex, U32 logTarget)
{
U32 const prevsum = sum_u32(table, lastEltIndex+1);
U32 const factor = prevsum >> logTarget;
DEBUGLOG(5, "ZSTD_scaleStats (nbElts=%u, target=%u)", (unsigned)lastEltIndex+1, (unsigned)logTarget);
assert(logTarget < 30);
if (factor <= 1) return prevsum;
return ZSTD_downscaleStats(table, lastEltIndex, ZSTD_highbit32(factor), base_1guaranteed);
}
/* ZSTD_rescaleFreqs() :
* if first block (detected by optPtr->litLengthSum == 0) : init statistics
* take hints from dictionary if there is one
* and init from zero if there is none,
* using src for literals stats, and baseline stats for sequence symbols
* otherwise downscale existing stats, to be used as seed for next block.
*/
static void
ZSTD_rescaleFreqs(optState_t* const optPtr,
const BYTE* const src, size_t const srcSize,
int const optLevel)
{
int const compressedLiterals = ZSTD_compressedLiterals(optPtr);
DEBUGLOG(5, "ZSTD_rescaleFreqs (srcSize=%u)", (unsigned)srcSize);
optPtr->priceType = zop_dynamic;
if (optPtr->litLengthSum == 0) { /* no literals stats collected -> first block assumed -> init */
/* heuristic: use pre-defined stats for too small inputs */
if (srcSize <= ZSTD_PREDEF_THRESHOLD) {
DEBUGLOG(5, "srcSize <= %i : use predefined stats", ZSTD_PREDEF_THRESHOLD);
optPtr->priceType = zop_predef;
}
assert(optPtr->symbolCosts != NULL);
if (optPtr->symbolCosts->huf.repeatMode == HUF_repeat_valid) {
/* huffman stats covering the full value set : table presumed generated by dictionary */
optPtr->priceType = zop_dynamic;
if (compressedLiterals) {
/* generate literals statistics from huffman table */
unsigned lit;
assert(optPtr->litFreq != NULL);
optPtr->litSum = 0;
for (lit=0; lit<=MaxLit; lit++) {
U32 const scaleLog = 11; /* scale to 2K */
U32 const bitCost = HUF_getNbBitsFromCTable(optPtr->symbolCosts->huf.CTable, lit);
assert(bitCost <= scaleLog);
optPtr->litFreq[lit] = bitCost ? 1 << (scaleLog-bitCost) : 1 /*minimum to calculate cost*/;
optPtr->litSum += optPtr->litFreq[lit];
} }
{ unsigned ll;
FSE_CState_t llstate;
FSE_initCState(&llstate, optPtr->symbolCosts->fse.litlengthCTable);
optPtr->litLengthSum = 0;
for (ll=0; ll<=MaxLL; ll++) {
U32 const scaleLog = 10; /* scale to 1K */
U32 const bitCost = FSE_getMaxNbBits(llstate.symbolTT, ll);
assert(bitCost < scaleLog);
optPtr->litLengthFreq[ll] = bitCost ? 1 << (scaleLog-bitCost) : 1 /*minimum to calculate cost*/;
optPtr->litLengthSum += optPtr->litLengthFreq[ll];
} }
{ unsigned ml;
FSE_CState_t mlstate;
FSE_initCState(&mlstate, optPtr->symbolCosts->fse.matchlengthCTable);
optPtr->matchLengthSum = 0;
for (ml=0; ml<=MaxML; ml++) {
U32 const scaleLog = 10;
U32 const bitCost = FSE_getMaxNbBits(mlstate.symbolTT, ml);
assert(bitCost < scaleLog);
optPtr->matchLengthFreq[ml] = bitCost ? 1 << (scaleLog-bitCost) : 1 /*minimum to calculate cost*/;
optPtr->matchLengthSum += optPtr->matchLengthFreq[ml];
} }
{ unsigned of;
FSE_CState_t ofstate;
FSE_initCState(&ofstate, optPtr->symbolCosts->fse.offcodeCTable);
optPtr->offCodeSum = 0;
for (of=0; of<=MaxOff; of++) {
U32 const scaleLog = 10;
U32 const bitCost = FSE_getMaxNbBits(ofstate.symbolTT, of);
assert(bitCost < scaleLog);
optPtr->offCodeFreq[of] = bitCost ? 1 << (scaleLog-bitCost) : 1 /*minimum to calculate cost*/;
optPtr->offCodeSum += optPtr->offCodeFreq[of];
} }
} else { /* first block, no dictionary */
assert(optPtr->litFreq != NULL);
if (compressedLiterals) {
/* base initial cost of literals on direct frequency within src */
unsigned lit = MaxLit;
HIST_count_simple(optPtr->litFreq, &lit, src, srcSize); /* use raw first block to init statistics */
optPtr->litSum = ZSTD_downscaleStats(optPtr->litFreq, MaxLit, 8, base_0possible);
}
{ unsigned const baseLLfreqs[MaxLL+1] = {
4, 2, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1
};
ZSTD_memcpy(optPtr->litLengthFreq, baseLLfreqs, sizeof(baseLLfreqs));
optPtr->litLengthSum = sum_u32(baseLLfreqs, MaxLL+1);
}
{ unsigned ml;
for (ml=0; ml<=MaxML; ml++)
optPtr->matchLengthFreq[ml] = 1;
}
optPtr->matchLengthSum = MaxML+1;
{ unsigned const baseOFCfreqs[MaxOff+1] = {
6, 2, 1, 1, 2, 3, 4, 4,
4, 3, 2, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1
};
ZSTD_memcpy(optPtr->offCodeFreq, baseOFCfreqs, sizeof(baseOFCfreqs));
optPtr->offCodeSum = sum_u32(baseOFCfreqs, MaxOff+1);
}
}
} else { /* new block : scale down accumulated statistics */
if (compressedLiterals)
optPtr->litSum = ZSTD_scaleStats(optPtr->litFreq, MaxLit, 12);
optPtr->litLengthSum = ZSTD_scaleStats(optPtr->litLengthFreq, MaxLL, 11);
optPtr->matchLengthSum = ZSTD_scaleStats(optPtr->matchLengthFreq, MaxML, 11);
optPtr->offCodeSum = ZSTD_scaleStats(optPtr->offCodeFreq, MaxOff, 11);
}
ZSTD_setBasePrices(optPtr, optLevel);
}
/* ZSTD_rawLiteralsCost() :
* price of literals (only) in specified segment (which length can be 0).
* does not include price of literalLength symbol */
static U32 ZSTD_rawLiteralsCost(const BYTE* const literals, U32 const litLength,
const optState_t* const optPtr,
int optLevel)
{
DEBUGLOG(8, "ZSTD_rawLiteralsCost (%u literals)", litLength);
if (litLength == 0) return 0;
if (!ZSTD_compressedLiterals(optPtr))
return (litLength << 3) * BITCOST_MULTIPLIER; /* Uncompressed - 8 bits per literal. */
if (optPtr->priceType == zop_predef)
return (litLength*6) * BITCOST_MULTIPLIER; /* 6 bits per literal - no statistics used */
/* dynamic statistics */
{ U32 price = optPtr->litSumBasePrice * litLength;
U32 const litPriceMax = optPtr->litSumBasePrice - BITCOST_MULTIPLIER;
U32 u;
assert(optPtr->litSumBasePrice >= BITCOST_MULTIPLIER);
for (u=0; u < litLength; u++) {
U32 litPrice = WEIGHT(optPtr->litFreq[literals[u]], optLevel);
if (UNLIKELY(litPrice > litPriceMax)) litPrice = litPriceMax;
price -= litPrice;
}
return price;
}
}
/* ZSTD_litLengthPrice() :
* cost of literalLength symbol */
static U32 ZSTD_litLengthPrice(U32 const litLength, const optState_t* const optPtr, int optLevel)
{
assert(litLength <= ZSTD_BLOCKSIZE_MAX);
if (optPtr->priceType == zop_predef)
return WEIGHT(litLength, optLevel);
/* ZSTD_LLcode() can't compute litLength price for sizes >= ZSTD_BLOCKSIZE_MAX
* because it isn't representable in the zstd format.
* So instead just pretend it would cost 1 bit more than ZSTD_BLOCKSIZE_MAX - 1.
* In such a case, the block would be all literals.
*/
if (litLength == ZSTD_BLOCKSIZE_MAX)
return BITCOST_MULTIPLIER + ZSTD_litLengthPrice(ZSTD_BLOCKSIZE_MAX - 1, optPtr, optLevel);
/* dynamic statistics */
{ U32 const llCode = ZSTD_LLcode(litLength);
return (LL_bits[llCode] * BITCOST_MULTIPLIER)
+ optPtr->litLengthSumBasePrice
- WEIGHT(optPtr->litLengthFreq[llCode], optLevel);
}
}
/* ZSTD_getMatchPrice() :
* Provides the cost of the match part (offset + matchLength) of a sequence.
* Must be combined with ZSTD_fullLiteralsCost() to get the full cost of a sequence.
* @offBase : sumtype, representing an offset or a repcode, and using numeric representation of ZSTD_storeSeq()
* @optLevel: when <2, favors small offset for decompression speed (improved cache efficiency)
*/
FORCE_INLINE_TEMPLATE U32
ZSTD_getMatchPrice(U32 const offBase,
U32 const matchLength,
const optState_t* const optPtr,
int const optLevel)
{
U32 price;
U32 const offCode = ZSTD_highbit32(offBase);
U32 const mlBase = matchLength - MINMATCH;
assert(matchLength >= MINMATCH);
if (optPtr->priceType == zop_predef) /* fixed scheme, does not use statistics */
return WEIGHT(mlBase, optLevel)
+ ((16 + offCode) * BITCOST_MULTIPLIER); /* emulated offset cost */
/* dynamic statistics */
price = (offCode * BITCOST_MULTIPLIER) + (optPtr->offCodeSumBasePrice - WEIGHT(optPtr->offCodeFreq[offCode], optLevel));
if ((optLevel<2) /*static*/ && offCode >= 20)
price += (offCode-19)*2 * BITCOST_MULTIPLIER; /* handicap for long distance offsets, favor decompression speed */
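/* e.g. offCode 25 adds (25-19)*2 == 12 extra "bits" of price in this static (optLevel<2) mode */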
/* match Length */
{ U32 const mlCode = ZSTD_MLcode(mlBase);
price += (ML_bits[mlCode] * BITCOST_MULTIPLIER) + (optPtr->matchLengthSumBasePrice - WEIGHT(optPtr->matchLengthFreq[mlCode], optLevel));
}
price += BITCOST_MULTIPLIER / 5; /* heuristic : make matches a bit more costly to favor fewer sequences -> faster decompression speed */
DEBUGLOG(8, "ZSTD_getMatchPrice(ml:%u) = %u", matchLength, price);
return price;
}
/* ZSTD_updateStats() :
* assumption : literals + litLength <= iend */
static void ZSTD_updateStats(optState_t* const optPtr,
U32 litLength, const BYTE* literals,
U32 offBase, U32 matchLength)
{
/* literals */
if (ZSTD_compressedLiterals(optPtr)) {
U32 u;
for (u=0; u < litLength; u++)
optPtr->litFreq[literals[u]] += ZSTD_LITFREQ_ADD;
optPtr->litSum += litLength*ZSTD_LITFREQ_ADD;
}
/* literal Length */
{ U32 const llCode = ZSTD_LLcode(litLength);
optPtr->litLengthFreq[llCode]++;
optPtr->litLengthSum++;
}
/* offset code : follows storeSeq() numeric representation */
{ U32 const offCode = ZSTD_highbit32(offBase);
assert(offCode <= MaxOff);
optPtr->offCodeFreq[offCode]++;
optPtr->offCodeSum++;
}
/* match Length */
{ U32 const mlBase = matchLength - MINMATCH;
U32 const mlCode = ZSTD_MLcode(mlBase);
optPtr->matchLengthFreq[mlCode]++;
optPtr->matchLengthSum++;
}
}
/* ZSTD_readMINMATCH() :
* function safe only for comparisons
* assumption : memPtr must be at least 4 bytes before end of buffer */
MEM_STATIC U32 ZSTD_readMINMATCH(const void* memPtr, U32 length)
{
switch (length)
{
default :
case 4 : return MEM_read32(memPtr);
case 3 : if (MEM_isLittleEndian())
return MEM_read32(memPtr)<<8;
else
return MEM_read32(memPtr)>>8;
}
}
/* Update hashTable3 up to ip (excluded)
Assumption : always within prefix (i.e. not within extDict) */
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_insertAndFindFirstIndexHash3 (const ZSTD_MatchState_t* ms,
U32* nextToUpdate3,
const BYTE* const ip)
{
U32* const hashTable3 = ms->hashTable3;
U32 const hashLog3 = ms->hashLog3;
const BYTE* const base = ms->window.base;
U32 idx = *nextToUpdate3;
U32 const target = (U32)(ip - base);
size_t const hash3 = ZSTD_hash3Ptr(ip, hashLog3);
assert(hashLog3 > 0);
while(idx < target) {
hashTable3[ZSTD_hash3Ptr(base+idx, hashLog3)] = idx;
idx++;
}
*nextToUpdate3 = target;
return hashTable3[hash3];
}
/*-*************************************
* Binary Tree search
***************************************/
/** ZSTD_insertBt1() : add one or multiple positions to tree.
* @param ip assumed <= iend-8 .
* @param target The target of ZSTD_updateTree_internal() - we are filling to this position
* @return : nb of positions added */
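/* Layout note: the sorted binary tree lives inside ms->chainTable.
 * Each indexed position idx owns a pair of cells:
 * bt[2*(idx & btMask)] links to its "smaller" child and bt[2*(idx & btMask) + 1]
 * to its "larger" child, ordered lexicographically on the data at those positions.
 * commonLengthSmaller / commonLengthLarger record the prefix length already known to be
 * shared with every candidate on each side, so each comparison resumes from that offset. */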
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_insertBt1(
const ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iend,
U32 const target,
U32 const mls, const int extDict)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32* const hashTable = ms->hashTable;
U32 const hashLog = cParams->hashLog;
size_t const h = ZSTD_hashPtr(ip, hashLog, mls);
U32* const bt = ms->chainTable;
U32 const btLog = cParams->chainLog - 1;
U32 const btMask = (1 << btLog) - 1;
U32 matchIndex = hashTable[h];
size_t commonLengthSmaller=0, commonLengthLarger=0;
const BYTE* const base = ms->window.base;
const BYTE* const dictBase = ms->window.dictBase;
const U32 dictLimit = ms->window.dictLimit;
const BYTE* const dictEnd = dictBase + dictLimit;
const BYTE* const prefixStart = base + dictLimit;
const BYTE* match;
const U32 curr = (U32)(ip-base);
const U32 btLow = btMask >= curr ? 0 : curr - btMask;
U32* smallerPtr = bt + 2*(curr&btMask);
U32* largerPtr = smallerPtr + 1;
U32 dummy32; /* to be nullified at the end */
/* windowLow is based on target because
* we only need positions that will be in the window at the end of the tree update.
*/
U32 const windowLow = ZSTD_getLowestMatchIndex(ms, target, cParams->windowLog);
U32 matchEndIdx = curr+8+1;
size_t bestLength = 8;
U32 nbCompares = 1U << cParams->searchLog;
#ifdef ZSTD_C_PREDICT
U32 predictedSmall = *(bt + 2*((curr-1)&btMask) + 0);
U32 predictedLarge = *(bt + 2*((curr-1)&btMask) + 1);
predictedSmall += (predictedSmall>0);
predictedLarge += (predictedLarge>0);
#endif /* ZSTD_C_PREDICT */
DEBUGLOG(8, "ZSTD_insertBt1 (%u)", curr);
assert(curr <= target);
assert(ip <= iend-8); /* required for h calculation */
hashTable[h] = curr; /* Update Hash Table */
assert(windowLow > 0);
for (; nbCompares && (matchIndex >= windowLow); --nbCompares) {
U32* const nextPtr = bt + 2*(matchIndex & btMask);
size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger); /* guaranteed minimum nb of common bytes */
assert(matchIndex < curr);
#ifdef ZSTD_C_PREDICT /* note : can create issues when hlog small <= 11 */
const U32* predictPtr = bt + 2*((matchIndex-1) & btMask); /* written this way, as bt is a roll buffer */
if (matchIndex == predictedSmall) {
/* no need to check length, result known */
*smallerPtr = matchIndex;
if (matchIndex <= btLow) { smallerPtr=&dummy32; break; } /* beyond tree size, stop the search */
smallerPtr = nextPtr+1; /* new "smaller" => larger of match */
matchIndex = nextPtr[1]; /* new matchIndex larger than previous (closer to current) */
predictedSmall = predictPtr[1] + (predictPtr[1]>0);
continue;
}
if (matchIndex == predictedLarge) {
*largerPtr = matchIndex;
if (matchIndex <= btLow) { largerPtr=&dummy32; break; } /* beyond tree size, stop the search */
largerPtr = nextPtr;
matchIndex = nextPtr[0];
predictedLarge = predictPtr[0] + (predictPtr[0]>0);
continue;
}
#endif
if (!extDict || (matchIndex+matchLength >= dictLimit)) {
assert(matchIndex+matchLength >= dictLimit); /* might be wrong if actually extDict */
match = base + matchIndex;
matchLength += ZSTD_count(ip+matchLength, match+matchLength, iend);
} else {
match = dictBase + matchIndex;
matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iend, dictEnd, prefixStart);
if (matchIndex+matchLength >= dictLimit)
match = base + matchIndex; /* to prepare for next usage of match[matchLength] */
}
if (matchLength > bestLength) {
bestLength = matchLength;
if (matchLength > matchEndIdx - matchIndex)
matchEndIdx = matchIndex + (U32)matchLength;
}
if (ip+matchLength == iend) { /* equal : no way to know if inf or sup */
break; /* drop, to guarantee consistency; misses a bit of compression, but other solutions can corrupt the tree */
}
if (match[matchLength] < ip[matchLength]) { /* necessarily within buffer */
/* match is smaller than current */
*smallerPtr = matchIndex; /* update smaller idx */
commonLengthSmaller = matchLength; /* all smaller will now have at least this guaranteed common length */
if (matchIndex <= btLow) { smallerPtr=&dummy32; break; } /* beyond tree size, stop searching */
smallerPtr = nextPtr+1; /* new "candidate" => larger than match, which was smaller than target */
matchIndex = nextPtr[1]; /* new matchIndex, larger than previous and closer to current */
} else {
/* match is larger than current */
*largerPtr = matchIndex;
commonLengthLarger = matchLength;
if (matchIndex <= btLow) { largerPtr=&dummy32; break; } /* beyond tree size, stop searching */
largerPtr = nextPtr;
matchIndex = nextPtr[0];
} }
*smallerPtr = *largerPtr = 0;
{ U32 positions = 0;
if (bestLength > 384) positions = MIN(192, (U32)(bestLength - 384)); /* speed optimization */
assert(matchEndIdx > curr + 8);
return MAX(positions, matchEndIdx - (curr + 8));
}
}
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_updateTree_internal(
ZSTD_MatchState_t* ms,
const BYTE* const ip, const BYTE* const iend,
const U32 mls, const ZSTD_dictMode_e dictMode)
{
const BYTE* const base = ms->window.base;
U32 const target = (U32)(ip - base);
U32 idx = ms->nextToUpdate;
DEBUGLOG(7, "ZSTD_updateTree_internal, from %u to %u (dictMode:%u)",
idx, target, dictMode);
while(idx < target) {
U32 const forward = ZSTD_insertBt1(ms, base+idx, iend, target, mls, dictMode == ZSTD_extDict);
assert(idx < (U32)(idx + forward));
idx += forward;
}
assert((size_t)(ip - base) <= (size_t)(U32)(-1));
assert((size_t)(iend - base) <= (size_t)(U32)(-1));
ms->nextToUpdate = target;
}
void ZSTD_updateTree(ZSTD_MatchState_t* ms, const BYTE* ip, const BYTE* iend) {
ZSTD_updateTree_internal(ms, ip, iend, ms->cParams.minMatch, ZSTD_noDict);
}
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32
ZSTD_insertBtAndGetAllMatches (
ZSTD_match_t* matches, /* store result (found matches) in this table (presumed large enough) */
ZSTD_MatchState_t* ms,
U32* nextToUpdate3,
const BYTE* const ip, const BYTE* const iLimit,
const ZSTD_dictMode_e dictMode,
const U32 rep[ZSTD_REP_NUM],
const U32 ll0, /* tells if associated literal length is 0 or not. This value must be 0 or 1 */
const U32 lengthToBeat,
const U32 mls /* template */)
{
const ZSTD_compressionParameters* const cParams = &ms->cParams;
U32 const sufficient_len = MIN(cParams->targetLength, ZSTD_OPT_NUM -1);
const BYTE* const base = ms->window.base;
U32 const curr = (U32)(ip-base);
U32 const hashLog = cParams->hashLog;
U32 const minMatch = (mls==3) ? 3 : 4;
U32* const hashTable = ms->hashTable;
size_t const h = ZSTD_hashPtr(ip, hashLog, mls);
U32 matchIndex = hashTable[h];
U32* const bt = ms->chainTable;
U32 const btLog = cParams->chainLog - 1;
U32 const btMask= (1U << btLog) - 1;
size_t commonLengthSmaller=0, commonLengthLarger=0;
const BYTE* const dictBase = ms->window.dictBase;
U32 const dictLimit = ms->window.dictLimit;
const BYTE* const dictEnd = dictBase + dictLimit;
const BYTE* const prefixStart = base + dictLimit;
U32 const btLow = (btMask >= curr) ? 0 : curr - btMask;
U32 const windowLow = ZSTD_getLowestMatchIndex(ms, curr, cParams->windowLog);
U32 const matchLow = windowLow ? windowLow : 1;
U32* smallerPtr = bt + 2*(curr&btMask);
U32* largerPtr = bt + 2*(curr&btMask) + 1;
U32 matchEndIdx = curr+8+1; /* farthest referenced position of any match => detects repetitive patterns */
U32 dummy32; /* to be nullified at the end */
U32 mnum = 0;
U32 nbCompares = 1U << cParams->searchLog;
const ZSTD_MatchState_t* dms = dictMode == ZSTD_dictMatchState ? ms->dictMatchState : NULL;
const ZSTD_compressionParameters* const dmsCParams =
dictMode == ZSTD_dictMatchState ? &dms->cParams : NULL;
const BYTE* const dmsBase = dictMode == ZSTD_dictMatchState ? dms->window.base : NULL;
const BYTE* const dmsEnd = dictMode == ZSTD_dictMatchState ? dms->window.nextSrc : NULL;
U32 const dmsHighLimit = dictMode == ZSTD_dictMatchState ? (U32)(dmsEnd - dmsBase) : 0;
U32 const dmsLowLimit = dictMode == ZSTD_dictMatchState ? dms->window.lowLimit : 0;
U32 const dmsIndexDelta = dictMode == ZSTD_dictMatchState ? windowLow - dmsHighLimit : 0;
U32 const dmsHashLog = dictMode == ZSTD_dictMatchState ? dmsCParams->hashLog : hashLog;
U32 const dmsBtLog = dictMode == ZSTD_dictMatchState ? dmsCParams->chainLog - 1 : btLog;
U32 const dmsBtMask = dictMode == ZSTD_dictMatchState ? (1U << dmsBtLog) - 1 : 0;
U32 const dmsBtLow = dictMode == ZSTD_dictMatchState && dmsBtMask < dmsHighLimit - dmsLowLimit ? dmsHighLimit - dmsBtMask : dmsLowLimit;
size_t bestLength = lengthToBeat-1;
DEBUGLOG(8, "ZSTD_insertBtAndGetAllMatches: current=%u", curr);
/* check repCode */
assert(ll0 <= 1); /* necessarily 1 or 0 */
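/* Which repcodes are candidates depends on ll0 (see loop bounds below):
 * - ll0==0 : rep[0], rep[1] and rep[2] are tested as-is (repCode 0..2);
 * - ll0==1 : rep[0] is skipped, and the virtual candidate rep[0]-1 is tested instead
 *   (repCode==ZSTD_REP_NUM), matching the format's repcode convention when the
 *   literal length is 0. */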
{ U32 const lastR = ZSTD_REP_NUM + ll0;
U32 repCode;
for (repCode = ll0; repCode < lastR; repCode++) {
U32 const repOffset = (repCode==ZSTD_REP_NUM) ? (rep[0] - 1) : rep[repCode];
U32 const repIndex = curr - repOffset;
U32 repLen = 0;
assert(curr >= dictLimit);
if (repOffset-1 /* intentional overflow, discards 0 and -1 */ < curr-dictLimit) { /* equivalent to `curr > repIndex >= dictLimit` */
/* We must validate the repcode offset because when we're using a dictionary the
* valid offset range shrinks when the dictionary goes out of bounds.
*/
if ((repIndex >= windowLow) & (ZSTD_readMINMATCH(ip, minMatch) == ZSTD_readMINMATCH(ip - repOffset, minMatch))) {
repLen = (U32)ZSTD_count(ip+minMatch, ip+minMatch-repOffset, iLimit) + minMatch;
}
} else { /* repIndex < dictLimit || repIndex >= curr */
const BYTE* const repMatch = dictMode == ZSTD_dictMatchState ?
dmsBase + repIndex - dmsIndexDelta :
dictBase + repIndex;
assert(curr >= windowLow);
if ( dictMode == ZSTD_extDict
&& ( ((repOffset-1) /*intentional overflow*/ < curr - windowLow) /* equivalent to `curr > repIndex >= windowLow` */
& (ZSTD_index_overlap_check(dictLimit, repIndex)) )
&& (ZSTD_readMINMATCH(ip, minMatch) == ZSTD_readMINMATCH(repMatch, minMatch)) ) {
repLen = (U32)ZSTD_count_2segments(ip+minMatch, repMatch+minMatch, iLimit, dictEnd, prefixStart) + minMatch;
}
if (dictMode == ZSTD_dictMatchState
&& ( ((repOffset-1) /*intentional overflow*/ < curr - (dmsLowLimit + dmsIndexDelta)) /* equivalent to `curr > repIndex >= dmsLowLimit` */
& (ZSTD_index_overlap_check(dictLimit, repIndex)) )
&& (ZSTD_readMINMATCH(ip, minMatch) == ZSTD_readMINMATCH(repMatch, minMatch)) ) {
repLen = (U32)ZSTD_count_2segments(ip+minMatch, repMatch+minMatch, iLimit, dmsEnd, prefixStart) + minMatch;
} }
/* save longer solution */
if (repLen > bestLength) {
DEBUGLOG(8, "found repCode %u (ll0:%u, offset:%u) of length %u",
repCode, ll0, repOffset, repLen);
bestLength = repLen;
matches[mnum].off = REPCODE_TO_OFFBASE(repCode - ll0 + 1); /* expect value between 1 and 3 */
matches[mnum].len = (U32)repLen;
mnum++;
if ( (repLen > sufficient_len)
| (ip+repLen == iLimit) ) { /* best possible */
return mnum;
} } } }
/* HC3 match finder */
if ((mls == 3) /*static*/ && (bestLength < mls)) {
U32 const matchIndex3 = ZSTD_insertAndFindFirstIndexHash3(ms, nextToUpdate3, ip);
if ((matchIndex3 >= matchLow)
& (curr - matchIndex3 < (1<<18)) /*heuristic : longer distance likely too expensive*/ ) {
size_t mlen;
if ((dictMode == ZSTD_noDict) /*static*/ || (dictMode == ZSTD_dictMatchState) /*static*/ || (matchIndex3 >= dictLimit)) {
const BYTE* const match = base + matchIndex3;
mlen = ZSTD_count(ip, match, iLimit);
} else {
const BYTE* const match = dictBase + matchIndex3;
mlen = ZSTD_count_2segments(ip, match, iLimit, dictEnd, prefixStart);
}
/* save best solution */
if (mlen >= mls /* == 3 > bestLength */) {
DEBUGLOG(8, "found small match with hlog3, of length %u",
(U32)mlen);
bestLength = mlen;
assert(curr > matchIndex3);
assert(mnum==0); /* no prior solution */
matches[0].off = OFFSET_TO_OFFBASE(curr - matchIndex3);
matches[0].len = (U32)mlen;
mnum = 1;
if ( (mlen > sufficient_len) |
(ip+mlen == iLimit) ) { /* best possible length */
ms->nextToUpdate = curr+1; /* skip insertion */
return 1;
} } }
/* no dictMatchState lookup: dicts don't have a populated HC3 table */
} /* if (mls == 3) */
hashTable[h] = curr; /* Update Hash Table */
for (; nbCompares && (matchIndex >= matchLow); --nbCompares) {
U32* const nextPtr = bt + 2*(matchIndex & btMask);
const BYTE* match;
size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger); /* guaranteed minimum nb of common bytes */
assert(curr > matchIndex);
if ((dictMode == ZSTD_noDict) || (dictMode == ZSTD_dictMatchState) || (matchIndex+matchLength >= dictLimit)) {
assert(matchIndex+matchLength >= dictLimit); /* ensure the condition is correct when !extDict */
match = base + matchIndex;
if (matchIndex >= dictLimit) assert(memcmp(match, ip, matchLength) == 0); /* ensure early section of match is equal as expected */
matchLength += ZSTD_count(ip+matchLength, match+matchLength, iLimit);
} else {
match = dictBase + matchIndex;
assert(memcmp(match, ip, matchLength) == 0); /* ensure early section of match is equal as expected */
matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iLimit, dictEnd, prefixStart);
if (matchIndex+matchLength >= dictLimit)
match = base + matchIndex; /* prepare for match[matchLength] read */
}
if (matchLength > bestLength) {
DEBUGLOG(8, "found match of length %u at distance %u (offBase=%u)",
(U32)matchLength, curr - matchIndex, OFFSET_TO_OFFBASE(curr - matchIndex));
assert(matchEndIdx > matchIndex);
if (matchLength > matchEndIdx - matchIndex)
matchEndIdx = matchIndex + (U32)matchLength;
bestLength = matchLength;
matches[mnum].off = OFFSET_TO_OFFBASE(curr - matchIndex);
matches[mnum].len = (U32)matchLength;
mnum++;
if ( (matchLength > ZSTD_OPT_NUM)
| (ip+matchLength == iLimit) /* equal : no way to know if inf or sup */) {
if (dictMode == ZSTD_dictMatchState) nbCompares = 0; /* break should also skip searching dms */
break; /* drop, to preserve bt consistency (miss a little bit of compression) */
} }
if (match[matchLength] < ip[matchLength]) {
/* match smaller than current */
*smallerPtr = matchIndex; /* update smaller idx */
commonLengthSmaller = matchLength; /* all smaller will now have at least this guaranteed common length */
if (matchIndex <= btLow) { smallerPtr=&dummy32; break; } /* beyond tree size, stop the search */
smallerPtr = nextPtr+1; /* new candidate => larger than match, which was smaller than current */
matchIndex = nextPtr[1]; /* new matchIndex, larger than previous, closer to current */
} else {
*largerPtr = matchIndex;
commonLengthLarger = matchLength;
if (matchIndex <= btLow) { largerPtr=&dummy32; break; } /* beyond tree size, stop the search */
largerPtr = nextPtr;
matchIndex = nextPtr[0];
} }
*smallerPtr = *largerPtr = 0;
assert(nbCompares <= (1U << ZSTD_SEARCHLOG_MAX)); /* Check we haven't underflowed. */
if (dictMode == ZSTD_dictMatchState && nbCompares) {
size_t const dmsH = ZSTD_hashPtr(ip, dmsHashLog, mls);
U32 dictMatchIndex = dms->hashTable[dmsH];
const U32* const dmsBt = dms->chainTable;
commonLengthSmaller = commonLengthLarger = 0;
for (; nbCompares && (dictMatchIndex > dmsLowLimit); --nbCompares) {
const U32* const nextPtr = dmsBt + 2*(dictMatchIndex & dmsBtMask);
size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger); /* guaranteed minimum nb of common bytes */
const BYTE* match = dmsBase + dictMatchIndex;
matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iLimit, dmsEnd, prefixStart);
if (dictMatchIndex+matchLength >= dmsHighLimit)
match = base + dictMatchIndex + dmsIndexDelta; /* to prepare for next usage of match[matchLength] */
if (matchLength > bestLength) {
matchIndex = dictMatchIndex + dmsIndexDelta;
DEBUGLOG(8, "found dms match of length %u at distance %u (offBase=%u)",
(U32)matchLength, curr - matchIndex, OFFSET_TO_OFFBASE(curr - matchIndex));
if (matchLength > matchEndIdx - matchIndex)
matchEndIdx = matchIndex + (U32)matchLength;
bestLength = matchLength;
matches[mnum].off = OFFSET_TO_OFFBASE(curr - matchIndex);
matches[mnum].len = (U32)matchLength;
mnum++;
if ( (matchLength > ZSTD_OPT_NUM)
| (ip+matchLength == iLimit) /* equal : no way to know if inf or sup */) {
break; /* drop, to guarantee consistency (miss a little bit of compression) */
} }
if (dictMatchIndex <= dmsBtLow) { break; } /* beyond tree size, stop the search */
if (match[matchLength] < ip[matchLength]) {
commonLengthSmaller = matchLength; /* all smaller will now have at least this guaranteed common length */
dictMatchIndex = nextPtr[1]; /* new matchIndex larger than previous (closer to current) */
} else {
/* match is larger than current */
commonLengthLarger = matchLength;
dictMatchIndex = nextPtr[0];
} } } /* if (dictMode == ZSTD_dictMatchState) */
assert(matchEndIdx > curr+8);
ms->nextToUpdate = matchEndIdx - 8; /* skip repetitive patterns */
return mnum;
}
typedef U32 (*ZSTD_getAllMatchesFn)(
ZSTD_match_t*,
ZSTD_MatchState_t*,
U32*,
const BYTE*,
const BYTE*,
const U32 rep[ZSTD_REP_NUM],
U32 const ll0,
U32 const lengthToBeat);
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
U32 ZSTD_btGetAllMatches_internal(
ZSTD_match_t* matches,
ZSTD_MatchState_t* ms,
U32* nextToUpdate3,
const BYTE* ip,
const BYTE* const iHighLimit,
const U32 rep[ZSTD_REP_NUM],
U32 const ll0,
U32 const lengthToBeat,
const ZSTD_dictMode_e dictMode,
const U32 mls)
{
assert(BOUNDED(3, ms->cParams.minMatch, 6) == mls);
DEBUGLOG(8, "ZSTD_BtGetAllMatches(dictMode=%d, mls=%u)", (int)dictMode, mls);
if (ip < ms->window.base + ms->nextToUpdate)
return 0; /* skipped area */
ZSTD_updateTree_internal(ms, ip, iHighLimit, mls, dictMode);
return ZSTD_insertBtAndGetAllMatches(matches, ms, nextToUpdate3, ip, iHighLimit, dictMode, rep, ll0, lengthToBeat, mls);
}
#define ZSTD_BT_GET_ALL_MATCHES_FN(dictMode, mls) ZSTD_btGetAllMatches_##dictMode##_##mls
#define GEN_ZSTD_BT_GET_ALL_MATCHES_(dictMode, mls) \
static U32 ZSTD_BT_GET_ALL_MATCHES_FN(dictMode, mls)( \
ZSTD_match_t* matches, \
ZSTD_MatchState_t* ms, \
U32* nextToUpdate3, \
const BYTE* ip, \
const BYTE* const iHighLimit, \
const U32 rep[ZSTD_REP_NUM], \
U32 const ll0, \
U32 const lengthToBeat) \
{ \
return ZSTD_btGetAllMatches_internal( \
matches, ms, nextToUpdate3, ip, iHighLimit, \
rep, ll0, lengthToBeat, ZSTD_##dictMode, mls); \
}
#define GEN_ZSTD_BT_GET_ALL_MATCHES(dictMode) \
GEN_ZSTD_BT_GET_ALL_MATCHES_(dictMode, 3) \
GEN_ZSTD_BT_GET_ALL_MATCHES_(dictMode, 4) \
GEN_ZSTD_BT_GET_ALL_MATCHES_(dictMode, 5) \
GEN_ZSTD_BT_GET_ALL_MATCHES_(dictMode, 6)
GEN_ZSTD_BT_GET_ALL_MATCHES(noDict)
GEN_ZSTD_BT_GET_ALL_MATCHES(extDict)
GEN_ZSTD_BT_GET_ALL_MATCHES(dictMatchState)
#define ZSTD_BT_GET_ALL_MATCHES_ARRAY(dictMode) \
{ \
ZSTD_BT_GET_ALL_MATCHES_FN(dictMode, 3), \
ZSTD_BT_GET_ALL_MATCHES_FN(dictMode, 4), \
ZSTD_BT_GET_ALL_MATCHES_FN(dictMode, 5), \
ZSTD_BT_GET_ALL_MATCHES_FN(dictMode, 6) \
}
static ZSTD_getAllMatchesFn
ZSTD_selectBtGetAllMatches(ZSTD_MatchState_t const* ms, ZSTD_dictMode_e const dictMode)
{
ZSTD_getAllMatchesFn const getAllMatchesFns[3][4] = {
ZSTD_BT_GET_ALL_MATCHES_ARRAY(noDict),
ZSTD_BT_GET_ALL_MATCHES_ARRAY(extDict),
ZSTD_BT_GET_ALL_MATCHES_ARRAY(dictMatchState)
};
U32 const mls = BOUNDED(3, ms->cParams.minMatch, 6);
assert((U32)dictMode < 3);
assert(mls - 3 < 4);
return getAllMatchesFns[(int)dictMode][mls - 3];
}
/*************************
* LDM helper functions *
*************************/
/* Struct containing info needed to make decision about ldm inclusion */
typedef struct {
RawSeqStore_t seqStore; /* External match candidates store for this block */
U32 startPosInBlock; /* Start position of the current match candidate */
U32 endPosInBlock; /* End position of the current match candidate */
U32 offset; /* Offset of the match candidate */
} ZSTD_optLdm_t;
/* ZSTD_optLdm_skipRawSeqStoreBytes():
* Moves forward in @rawSeqStore by @nbBytes,
* which will update the fields 'pos' and 'posInSequence'.
*/
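/* Worked example (hypothetical values): if the head sequence has litLength=5,
 * matchLength=20 and posInSequence=3, then skipping nbBytes=10 stays within that
 * sequence and leaves posInSequence=13; skipping nbBytes=30 consumes the remaining
 * 22 bytes of it, advances pos by one, and leaves posInSequence=8 inside the next
 * sequence (assuming that sequence spans more than 8 bytes). */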
static void ZSTD_optLdm_skipRawSeqStoreBytes(RawSeqStore_t* rawSeqStore, size_t nbBytes)
{
U32 currPos = (U32)(rawSeqStore->posInSequence + nbBytes);
while (currPos && rawSeqStore->pos < rawSeqStore->size) {
rawSeq currSeq = rawSeqStore->seq[rawSeqStore->pos];
if (currPos >= currSeq.litLength + currSeq.matchLength) {
currPos -= currSeq.litLength + currSeq.matchLength;
rawSeqStore->pos++;
} else {
rawSeqStore->posInSequence = currPos;
break;
}
}
if (currPos == 0 || rawSeqStore->pos == rawSeqStore->size) {
rawSeqStore->posInSequence = 0;
}
}
/* ZSTD_opt_getNextMatchAndUpdateSeqStore():
* Calculates the beginning and end of the next match in the current block.
* Updates 'pos' and 'posInSequence' of the ldmSeqStore.
*/
static void
ZSTD_opt_getNextMatchAndUpdateSeqStore(ZSTD_optLdm_t* optLdm, U32 currPosInBlock,
U32 blockBytesRemaining)
{
rawSeq currSeq;
U32 currBlockEndPos;
U32 literalsBytesRemaining;
U32 matchBytesRemaining;
/* Setting match end position to MAX to ensure we never use an LDM during this block */
if (optLdm->seqStore.size == 0 || optLdm->seqStore.pos >= optLdm->seqStore.size) {
optLdm->startPosInBlock = UINT_MAX;
optLdm->endPosInBlock = UINT_MAX;
return;
}
/* Calculate appropriate bytes left in matchLength and litLength
* after adjusting based on ldmSeqStore->posInSequence */
currSeq = optLdm->seqStore.seq[optLdm->seqStore.pos];
assert(optLdm->seqStore.posInSequence <= currSeq.litLength + currSeq.matchLength);
currBlockEndPos = currPosInBlock + blockBytesRemaining;
literalsBytesRemaining = (optLdm->seqStore.posInSequence < currSeq.litLength) ?
currSeq.litLength - (U32)optLdm->seqStore.posInSequence :
0;
matchBytesRemaining = (literalsBytesRemaining == 0) ?
currSeq.matchLength - ((U32)optLdm->seqStore.posInSequence - currSeq.litLength) :
currSeq.matchLength;
/* If there are more literal bytes than bytes remaining in block, no ldm is possible */
if (literalsBytesRemaining >= blockBytesRemaining) {
optLdm->startPosInBlock = UINT_MAX;
optLdm->endPosInBlock = UINT_MAX;
ZSTD_optLdm_skipRawSeqStoreBytes(&optLdm->seqStore, blockBytesRemaining);
return;
}
/* Matches may be < minMatch by this process. In that case, we will reject them
when we are deciding whether or not to add the ldm */
optLdm->startPosInBlock = currPosInBlock + literalsBytesRemaining;
optLdm->endPosInBlock = optLdm->startPosInBlock + matchBytesRemaining;
optLdm->offset = currSeq.offset;
if (optLdm->endPosInBlock > currBlockEndPos) {
/* Match ends after the block ends, we can't use the whole match */
optLdm->endPosInBlock = currBlockEndPos;
ZSTD_optLdm_skipRawSeqStoreBytes(&optLdm->seqStore, currBlockEndPos - currPosInBlock);
} else {
/* Consume nb of bytes equal to size of sequence left */
ZSTD_optLdm_skipRawSeqStoreBytes(&optLdm->seqStore, literalsBytesRemaining + matchBytesRemaining);
}
}
/* ZSTD_optLdm_maybeAddMatch():
 * Adds the current ldm candidate into 'matches' if it's long enough,
 * based on its 'startPosInBlock' and 'endPosInBlock'.
 * Maintains the correct ordering of 'matches'.
 */
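/* Note: 'matches' is kept sorted by strictly increasing length, so the ldm candidate
 * is only appended when it is longer than the current longest entry (or when the table
 * is still empty); shorter candidates are simply dropped. */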
static void ZSTD_optLdm_maybeAddMatch(ZSTD_match_t* matches, U32* nbMatches,
const ZSTD_optLdm_t* optLdm, U32 currPosInBlock,
U32 minMatch)
{
U32 const posDiff = currPosInBlock - optLdm->startPosInBlock;
/* Note: ZSTD_match_t actually contains offBase and matchLength (before subtracting MINMATCH) */
U32 const candidateMatchLength = optLdm->endPosInBlock - optLdm->startPosInBlock - posDiff;
/* Ensure that current block position is not outside of the match */
if (currPosInBlock < optLdm->startPosInBlock
|| currPosInBlock >= optLdm->endPosInBlock
|| candidateMatchLength < minMatch) {
return;
}
if (*nbMatches == 0 || ((candidateMatchLength > matches[*nbMatches-1].len) && *nbMatches < ZSTD_OPT_NUM)) {
U32 const candidateOffBase = OFFSET_TO_OFFBASE(optLdm->offset);
DEBUGLOG(6, "ZSTD_optLdm_maybeAddMatch(): Adding ldm candidate match (offBase: %u matchLength %u) at block position=%u",
candidateOffBase, candidateMatchLength, currPosInBlock);
matches[*nbMatches].len = candidateMatchLength;
matches[*nbMatches].off = candidateOffBase;
(*nbMatches)++;
}
}
/* ZSTD_optLdm_processMatchCandidate():
* Wrapper function to update ldm seq store and call ldm functions as necessary.
*/
static void
ZSTD_optLdm_processMatchCandidate(ZSTD_optLdm_t* optLdm,
ZSTD_match_t* matches, U32* nbMatches,
U32 currPosInBlock, U32 remainingBytes,
U32 minMatch)
{
if (optLdm->seqStore.size == 0 || optLdm->seqStore.pos >= optLdm->seqStore.size) {
return;
}
if (currPosInBlock >= optLdm->endPosInBlock) {
if (currPosInBlock > optLdm->endPosInBlock) {
/* The position at which ZSTD_optLdm_processMatchCandidate() is called is not necessarily
 * at the end of a match from the ldm seq store, and will often be some bytes
 * beyond 'endPosInBlock'. As such, we need to correct for these "overshoots".
 */
U32 const posOvershoot = currPosInBlock - optLdm->endPosInBlock;
ZSTD_optLdm_skipRawSeqStoreBytes(&optLdm->seqStore, posOvershoot);
}
ZSTD_opt_getNextMatchAndUpdateSeqStore(optLdm, currPosInBlock, remainingBytes);
}
ZSTD_optLdm_maybeAddMatch(matches, nbMatches, optLdm, currPosInBlock, minMatch);
}
/*-*******************************
* Optimal parser
*********************************/
#if 0 /* debug */
static void
listStats(const U32* table, int lastEltID)
{
int const nbElts = lastEltID + 1;
int enb;
for (enb=0; enb < nbElts; enb++) {
(void)table;
/* RAWLOG(2, "%3i:%3i, ", enb, table[enb]); */
RAWLOG(2, "%4i,", table[enb]);
}
RAWLOG(2, " \n");
}
#endif
#define LIT_PRICE(_p) (int)ZSTD_rawLiteralsCost(_p, 1, optStatePtr, optLevel)
#define LL_PRICE(_l) (int)ZSTD_litLengthPrice(_l, optStatePtr, optLevel)
#define LL_INCPRICE(_l) (LL_PRICE(_l) - LL_PRICE(_l-1))
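/* Price helper macros used by the parser below (results in fractional bits):
 * LIT_PRICE(p)   : cost of the single literal byte at position p;
 * LL_PRICE(l)    : cost of encoding a literal length of l;
 * LL_INCPRICE(l) : marginal cost of growing a literal run from l-1 to l literals
 *                  (may be negative, i.e. a longer run can be cheaper, as checked by the parser below). */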
FORCE_INLINE_TEMPLATE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t
ZSTD_compressBlock_opt_generic(ZSTD_MatchState_t* ms,
SeqStore_t* seqStore,
U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize,
const int optLevel,
const ZSTD_dictMode_e dictMode)
{
optState_t* const optStatePtr = &ms->opt;
const BYTE* const istart = (const BYTE*)src;
const BYTE* ip = istart;
const BYTE* anchor = istart;
const BYTE* const iend = istart + srcSize;
const BYTE* const ilimit = iend - 8;
const BYTE* const base = ms->window.base;
const BYTE* const prefixStart = base + ms->window.dictLimit;
const ZSTD_compressionParameters* const cParams = &ms->cParams;
ZSTD_getAllMatchesFn getAllMatches = ZSTD_selectBtGetAllMatches(ms, dictMode);
U32 const sufficient_len = MIN(cParams->targetLength, ZSTD_OPT_NUM -1);
U32 const minMatch = (cParams->minMatch == 3) ? 3 : 4;
U32 nextToUpdate3 = ms->nextToUpdate;
ZSTD_optimal_t* const opt = optStatePtr->priceTable;
ZSTD_match_t* const matches = optStatePtr->matchTable;
ZSTD_optimal_t lastStretch;
ZSTD_optLdm_t optLdm;
ZSTD_memset(&lastStretch, 0, sizeof(ZSTD_optimal_t));
optLdm.seqStore = ms->ldmSeqStore ? *ms->ldmSeqStore : kNullRawSeqStore;
optLdm.endPosInBlock = optLdm.startPosInBlock = optLdm.offset = 0;
ZSTD_opt_getNextMatchAndUpdateSeqStore(&optLdm, (U32)(ip-istart), (U32)(iend-ip));
/* init */
DEBUGLOG(5, "ZSTD_compressBlock_opt_generic: current=%u, prefix=%u, nextToUpdate=%u",
(U32)(ip - base), ms->window.dictLimit, ms->nextToUpdate);
assert(optLevel <= 2);
ZSTD_rescaleFreqs(optStatePtr, (const BYTE*)src, srcSize, optLevel);
ip += (ip==prefixStart);
/* Match Loop */
while (ip < ilimit) {
U32 cur, last_pos = 0;
/* find first match */
{ U32 const litlen = (U32)(ip - anchor);
U32 const ll0 = !litlen;
U32 nbMatches = getAllMatches(matches, ms, &nextToUpdate3, ip, iend, rep, ll0, minMatch);
ZSTD_optLdm_processMatchCandidate(&optLdm, matches, &nbMatches,
(U32)(ip-istart), (U32)(iend-ip),
minMatch);
if (!nbMatches) {
DEBUGLOG(8, "no match found at cPos %u", (unsigned)(ip-istart));
ip++;
continue;
}
/* Match found: let's store this solution, and eventually find more candidates.
* During this forward pass, @opt is used to store stretches,
* defined as "a match followed by N literals".
* Note how this is different from a Sequence, which is "N literals followed by a match".
* Storing stretches allows us to store different match predecessors
* for each literal position part of a literals run. */
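/* Put differently: after the forward pass, opt[p] describes the cheapest way found
 * to reach relative position p, whose final stretch is a match of length opt[p].mlen
 * (0 meaning "literals only so far") followed by opt[p].litlen trailing literals. */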
/* initialize opt[0] */
opt[0].mlen = 0; /* there are only literals so far */
opt[0].litlen = litlen;
/* No need to include the actual price of the literals before the first match
* because it is static for the duration of the forward pass, and is included
* in every subsequent price. But, we include the literal length because
* the cost variation of litlen depends on the value of litlen.
*/
opt[0].price = LL_PRICE(litlen);
ZSTD_STATIC_ASSERT(sizeof(opt[0].rep[0]) == sizeof(rep[0]));
ZSTD_memcpy(&opt[0].rep, rep, sizeof(opt[0].rep));
/* large match -> immediate encoding */
{ U32 const maxML = matches[nbMatches-1].len;
U32 const maxOffBase = matches[nbMatches-1].off;
DEBUGLOG(6, "found %u matches of maxLength=%u and maxOffBase=%u at cPos=%u => start new series",
nbMatches, maxML, maxOffBase, (U32)(ip-prefixStart));
if (maxML > sufficient_len) {
lastStretch.litlen = 0;
lastStretch.mlen = maxML;
lastStretch.off = maxOffBase;
DEBUGLOG(6, "large match (%u>%u) => immediate encoding",
maxML, sufficient_len);
cur = 0;
last_pos = maxML;
goto _shortestPath;
} }
/* set prices for first matches starting position == 0 */
assert(opt[0].price >= 0);
{ U32 pos;
U32 matchNb;
for (pos = 1; pos < minMatch; pos++) {
opt[pos].price = ZSTD_MAX_PRICE;
opt[pos].mlen = 0;
opt[pos].litlen = litlen + pos;
}
for (matchNb = 0; matchNb < nbMatches; matchNb++) {
U32 const offBase = matches[matchNb].off;
U32 const end = matches[matchNb].len;
for ( ; pos <= end ; pos++ ) {
int const matchPrice = (int)ZSTD_getMatchPrice(offBase, pos, optStatePtr, optLevel);
int const sequencePrice = opt[0].price + matchPrice;
DEBUGLOG(7, "rPos:%u => set initial price : %.2f",
pos, ZSTD_fCost(sequencePrice));
opt[pos].mlen = pos;
opt[pos].off = offBase;
opt[pos].litlen = 0; /* end of match */
opt[pos].price = sequencePrice + LL_PRICE(0);
}
}
last_pos = pos-1;
opt[pos].price = ZSTD_MAX_PRICE;
}
}
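/* From here on, opt[pos].price (for each reachable pos) is the cheapest cost found so far
 * to encode pos bytes forward from ip, relative to the common baseline described above
 * (i.e. excluding the raw cost of the literals preceding the first match);
 * positions not yet reachable hold ZSTD_MAX_PRICE. */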
/* check further positions */
for (cur = 1; cur <= last_pos; cur++) {
const BYTE* const inr = ip + cur;
assert(cur <= ZSTD_OPT_NUM);
DEBUGLOG(7, "cPos:%i==rPos:%u", (int)(inr-istart), cur);
/* Fix current position with one literal if cheaper */
{ U32 const litlen = opt[cur-1].litlen + 1;
int const price = opt[cur-1].price
+ LIT_PRICE(ip+cur-1)
+ LL_INCPRICE(litlen);
assert(price < 1000000000); /* overflow check */
if (price <= opt[cur].price) {
ZSTD_optimal_t const prevMatch = opt[cur];
DEBUGLOG(7, "cPos:%i==rPos:%u : better price (%.2f<=%.2f) using literal (ll==%u) (hist:%u,%u,%u)",
(int)(inr-istart), cur, ZSTD_fCost(price), ZSTD_fCost(opt[cur].price), litlen,
opt[cur-1].rep[0], opt[cur-1].rep[1], opt[cur-1].rep[2]);
opt[cur] = opt[cur-1];
opt[cur].litlen = litlen;
opt[cur].price = price;
if ( (optLevel >= 1) /* additional check only for higher modes */
&& (prevMatch.litlen == 0) /* replace a match */
&& (LL_INCPRICE(1) < 0) /* ll1 is cheaper than ll0 */
&& LIKELY(ip + cur < iend)
) {
/* check next position, in case it would be cheaper */
int with1literal = prevMatch.price + LIT_PRICE(ip+cur) + LL_INCPRICE(1);
int withMoreLiterals = price + LIT_PRICE(ip+cur) + LL_INCPRICE(litlen+1);
DEBUGLOG(7, "then at next rPos %u : match+1lit %.2f vs %ulits %.2f",
cur+1, ZSTD_fCost(with1literal), litlen+1, ZSTD_fCost(withMoreLiterals));
if ( (with1literal < withMoreLiterals)
&& (with1literal < opt[cur+1].price) ) {
/* update offset history - before it disappears */
U32 const prev = cur - prevMatch.mlen;
Repcodes_t const newReps = ZSTD_newRep(opt[prev].rep, prevMatch.off, opt[prev].litlen==0);
assert(cur >= prevMatch.mlen);
DEBUGLOG(7, "==> match+1lit is cheaper (%.2f < %.2f) (hist:%u,%u,%u) !",
ZSTD_fCost(with1literal), ZSTD_fCost(withMoreLiterals),
newReps.rep[0], newReps.rep[1], newReps.rep[2] );
opt[cur+1] = prevMatch; /* mlen & offbase */
ZSTD_memcpy(opt[cur+1].rep, &newReps, sizeof(Repcodes_t));
opt[cur+1].litlen = 1;
opt[cur+1].price = with1literal;
if (last_pos < cur+1) last_pos = cur+1;
}
}
} else {
DEBUGLOG(7, "cPos:%i==rPos:%u : literal would cost more (%.2f>%.2f)",
(int)(inr-istart), cur, ZSTD_fCost(price), ZSTD_fCost(opt[cur].price));
}
}
/* Offset history is not updated during match comparison.
* Do it here, now that the match is selected and confirmed.
*/
ZSTD_STATIC_ASSERT(sizeof(opt[cur].rep) == sizeof(Repcodes_t));
assert(cur >= opt[cur].mlen);
if (opt[cur].litlen == 0) {
/* just finished a match => alter offset history */
U32 const prev = cur - opt[cur].mlen;
Repcodes_t const newReps = ZSTD_newRep(opt[prev].rep, opt[cur].off, opt[prev].litlen==0);
ZSTD_memcpy(opt[cur].rep, &newReps, sizeof(Repcodes_t));
}
/* last match must start at a minimum distance of 8 from oend */
if (inr > ilimit) continue;
if (cur == last_pos) break;
if ( (optLevel==0) /*static_test*/
&& (opt[cur+1].price <= opt[cur].price + (BITCOST_MULTIPLIER/2)) ) {
DEBUGLOG(7, "skip current position : next rPos(%u) price is cheaper", cur+1);
continue; /* skip unpromising positions; about ~+6% speed, -0.01 ratio */
}
assert(opt[cur].price >= 0);
{ U32 const ll0 = (opt[cur].litlen == 0);
int const previousPrice = opt[cur].price;
int const basePrice = previousPrice + LL_PRICE(0);
U32 nbMatches = getAllMatches(matches, ms, &nextToUpdate3, inr, iend, opt[cur].rep, ll0, minMatch);
U32 matchNb;
ZSTD_optLdm_processMatchCandidate(&optLdm, matches, &nbMatches,
(U32)(inr-istart), (U32)(iend-inr),
minMatch);
if (!nbMatches) {
DEBUGLOG(7, "rPos:%u : no match found", cur);
continue;
}
{ U32 const longestML = matches[nbMatches-1].len;
DEBUGLOG(7, "cPos:%i==rPos:%u, found %u matches, of longest ML=%u",
(int)(inr-istart), cur, nbMatches, longestML);
if ( (longestML > sufficient_len)
|| (cur + longestML >= ZSTD_OPT_NUM)
|| (ip + cur + longestML >= iend) ) {
lastStretch.mlen = longestML;
lastStretch.off = matches[nbMatches-1].off;
lastStretch.litlen = 0;
last_pos = cur + longestML;
goto _shortestPath;
} }
/* set prices using matches found at position == cur */
for (matchNb = 0; matchNb < nbMatches; matchNb++) {
U32 const offset = matches[matchNb].off;
U32 const lastML = matches[matchNb].len;
U32 const startML = (matchNb>0) ? matches[matchNb-1].len+1 : minMatch;
U32 mlen;
DEBUGLOG(7, "testing match %u => offBase=%4u, mlen=%2u, llen=%2u",
matchNb, matches[matchNb].off, lastML, opt[cur].litlen);
for (mlen = lastML; mlen >= startML; mlen--) { /* scan downward */
U32 const pos = cur + mlen;
int const price = basePrice + (int)ZSTD_getMatchPrice(offset, mlen, optStatePtr, optLevel);
if ((pos > last_pos) || (price < opt[pos].price)) {
DEBUGLOG(7, "rPos:%u (ml=%2u) => new better price (%.2f<%.2f)",
pos, mlen, ZSTD_fCost(price), ZSTD_fCost(opt[pos].price));
while (last_pos < pos) {
/* fill empty positions, for future comparisons */
last_pos++;
opt[last_pos].price = ZSTD_MAX_PRICE;
opt[last_pos].litlen = !0; /* just needs to be != 0, to mean "not an end of match" */
}
opt[pos].mlen = mlen;
opt[pos].off = offset;
opt[pos].litlen = 0;
opt[pos].price = price;
} else {
DEBUGLOG(7, "rPos:%u (ml=%2u) => new price is worse (%.2f>=%.2f)",
pos, mlen, ZSTD_fCost(price), ZSTD_fCost(opt[pos].price));
if (optLevel==0) break; /* early update abort; gets ~+10% speed for about -0.01 ratio loss */
}
} } }
opt[last_pos+1].price = ZSTD_MAX_PRICE;
} /* for (cur = 1; cur <= last_pos; cur++) */
lastStretch = opt[last_pos];
assert(cur >= lastStretch.mlen);
cur = last_pos - lastStretch.mlen;
_shortestPath: /* cur, last_pos, best_mlen, best_off have to be set */
assert(opt[0].mlen == 0);
assert(last_pos >= lastStretch.mlen);
assert(cur == last_pos - lastStretch.mlen);
if (lastStretch.mlen==0) {
/* no solution : all matches have been converted into literals */
assert(lastStretch.litlen == (ip - anchor) + last_pos);
ip += last_pos;
continue;
}
assert(lastStretch.off > 0);
/* Update offset history */
if (lastStretch.litlen == 0) {
/* finishing on a match : update offset history */
Repcodes_t const reps = ZSTD_newRep(opt[cur].rep, lastStretch.off, opt[cur].litlen==0);
ZSTD_memcpy(rep, &reps, sizeof(Repcodes_t));
} else {
ZSTD_memcpy(rep, lastStretch.rep, sizeof(Repcodes_t));
assert(cur >= lastStretch.litlen);
cur -= lastStretch.litlen;
}
/* Let's write the shortest path solution.
* It is stored in @opt in reverse order,
* starting from @storeEnd (==cur+2),
* effectively overwriting part of @opt.
* Content is changed too:
* - So far, @opt stored stretches, aka a match followed by literals
* - Now, it will store sequences, aka literals followed by a match
*/
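/* Sketch of the backward walk below: each stored stretch at stretchPos covers
 * (litlen + mlen) bytes, so stretchPos jumps back by that amount per iteration,
 * until it lands on a stretch with mlen==0, i.e. the beginning of the segment.
 * Sequences are written from storeEnd downward, then emitted in forward order. */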
{ U32 const storeEnd = cur + 2;
U32 storeStart = storeEnd;
U32 stretchPos = cur;
DEBUGLOG(6, "start reverse traversal (last_pos:%u, cur:%u)",
last_pos, cur); (void)last_pos;
assert(storeEnd < ZSTD_OPT_SIZE);
DEBUGLOG(6, "last stretch copied into pos=%u (llen=%u,mlen=%u,ofc=%u)",
storeEnd, lastStretch.litlen, lastStretch.mlen, lastStretch.off);
if (lastStretch.litlen > 0) {
/* last "sequence" is unfinished: just a bunch of literals */
opt[storeEnd].litlen = lastStretch.litlen;
opt[storeEnd].mlen = 0;
storeStart = storeEnd-1;
opt[storeStart] = lastStretch;
} else {
opt[storeEnd] = lastStretch; /* note: litlen will be fixed */
storeStart = storeEnd;
}
while (1) {
ZSTD_optimal_t nextStretch = opt[stretchPos];
opt[storeStart].litlen = nextStretch.litlen;
DEBUGLOG(6, "selected sequence (llen=%u,mlen=%u,ofc=%u)",
opt[storeStart].litlen, opt[storeStart].mlen, opt[storeStart].off);
if (nextStretch.mlen == 0) {
/* reaching beginning of segment */
break;
}
storeStart--;
opt[storeStart] = nextStretch; /* note: litlen will be fixed */
assert(nextStretch.litlen + nextStretch.mlen <= stretchPos);
stretchPos -= nextStretch.litlen + nextStretch.mlen;
}
/* save sequences */
DEBUGLOG(6, "sending selected sequences into seqStore");
{ U32 storePos;
for (storePos=storeStart; storePos <= storeEnd; storePos++) {
U32 const llen = opt[storePos].litlen;
U32 const mlen = opt[storePos].mlen;
U32 const offBase = opt[storePos].off;
U32 const advance = llen + mlen;
DEBUGLOG(6, "considering seq starting at %i, llen=%u, mlen=%u",
(int)(anchor - istart), (unsigned)llen, (unsigned)mlen);
if (mlen==0) { /* only literals => must be last "sequence", actually starting a new stream of sequences */
assert(storePos == storeEnd); /* must be last sequence */
ip = anchor + llen; /* last "sequence" is a bunch of literals => don't progress anchor */
continue; /* will finish */
}
assert(anchor + llen <= iend);
ZSTD_updateStats(optStatePtr, llen, anchor, offBase, mlen);
ZSTD_storeSeq(seqStore, llen, anchor, iend, offBase, mlen);
anchor += advance;
ip = anchor;
} }
DEBUGLOG(7, "new offset history : %u, %u, %u", rep[0], rep[1], rep[2]);
/* update all costs */
ZSTD_setBasePrices(optStatePtr, optLevel);
}
} /* while (ip < ilimit) */
/* Return the last literals size */
return (size_t)(iend - anchor);
}
#endif /* build exclusions */
#ifndef ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR
static size_t ZSTD_compressBlock_opt0(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize, const ZSTD_dictMode_e dictMode)
{
return ZSTD_compressBlock_opt_generic(ms, seqStore, rep, src, srcSize, 0 /* optLevel */, dictMode);
}
#endif
#ifndef ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR
static size_t ZSTD_compressBlock_opt2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize, const ZSTD_dictMode_e dictMode)
{
return ZSTD_compressBlock_opt_generic(ms, seqStore, rep, src, srcSize, 2 /* optLevel */, dictMode);
}
#endif
#ifndef ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btopt(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
DEBUGLOG(5, "ZSTD_compressBlock_btopt");
return ZSTD_compressBlock_opt0(ms, seqStore, rep, src, srcSize, ZSTD_noDict);
}
#endif
#ifndef ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR
/* ZSTD_initStats_ultra():
* make a first compression pass, just to seed stats with more accurate starting values.
* only works on first block, with no dictionary and no ldm.
* this function cannot error out, its narrow contract must be respected.
*/
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_initStats_ultra(ZSTD_MatchState_t* ms,
SeqStore_t* seqStore,
U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
U32 tmpRep[ZSTD_REP_NUM]; /* updated rep codes will sink here */
ZSTD_memcpy(tmpRep, rep, sizeof(tmpRep));
DEBUGLOG(4, "ZSTD_initStats_ultra (srcSize=%zu)", srcSize);
assert(ms->opt.litLengthSum == 0); /* first block */
assert(seqStore->sequences == seqStore->sequencesStart); /* no ldm */
assert(ms->window.dictLimit == ms->window.lowLimit); /* no dictionary */
assert(ms->window.dictLimit - ms->nextToUpdate <= 1); /* no prefix (note: intentional overflow, defined as 2-complement) */
ZSTD_compressBlock_opt2(ms, seqStore, tmpRep, src, srcSize, ZSTD_noDict); /* generate stats into ms->opt*/
/* invalidate first scan from history, only keep entropy stats */
ZSTD_resetSeqStore(seqStore);
ms->window.base -= srcSize;
ms->window.dictLimit += (U32)srcSize;
ms->window.lowLimit = ms->window.dictLimit;
ms->nextToUpdate = ms->window.dictLimit;
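/* Rewinding base while raising dictLimit/lowLimit by srcSize pushes every index
 * inserted during the first pass below the valid window, so the real compression pass
 * re-searches the block from scratch; only the entropy statistics accumulated in
 * ms->opt carry over. */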
}
size_t ZSTD_compressBlock_btultra(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
DEBUGLOG(5, "ZSTD_compressBlock_btultra (srcSize=%zu)", srcSize);
return ZSTD_compressBlock_opt2(ms, seqStore, rep, src, srcSize, ZSTD_noDict);
}
size_t ZSTD_compressBlock_btultra2(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
U32 const curr = (U32)((const BYTE*)src - ms->window.base);
DEBUGLOG(5, "ZSTD_compressBlock_btultra2 (srcSize=%zu)", srcSize);
/* 2-passes strategy:
* this strategy makes a first pass over first block to collect statistics
* in order to seed next round's statistics with it.
* After 1st pass, function forgets history, and starts a new block.
* Consequently, this can only work if no data has been previously loaded in tables,
* aka, no dictionary, no prefix, no ldm preprocessing.
* The compression ratio gain is generally small (~0.5% on first block),
* the cost is 2x cpu time on first block. */
assert(srcSize <= ZSTD_BLOCKSIZE_MAX);
if ( (ms->opt.litLengthSum==0) /* first block */
&& (seqStore->sequences == seqStore->sequencesStart) /* no ldm */
&& (ms->window.dictLimit == ms->window.lowLimit) /* no dictionary */
&& (curr == ms->window.dictLimit) /* start of frame, nothing already loaded nor skipped */
&& (srcSize > ZSTD_PREDEF_THRESHOLD) /* input large enough to not employ default stats */
) {
ZSTD_initStats_ultra(ms, seqStore, rep, src, srcSize);
}
return ZSTD_compressBlock_opt2(ms, seqStore, rep, src, srcSize, ZSTD_noDict);
}
#endif
#ifndef ZSTD_EXCLUDE_BTOPT_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btopt_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
return ZSTD_compressBlock_opt0(ms, seqStore, rep, src, srcSize, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_btopt_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
return ZSTD_compressBlock_opt0(ms, seqStore, rep, src, srcSize, ZSTD_extDict);
}
#endif
#ifndef ZSTD_EXCLUDE_BTULTRA_BLOCK_COMPRESSOR
size_t ZSTD_compressBlock_btultra_dictMatchState(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
return ZSTD_compressBlock_opt2(ms, seqStore, rep, src, srcSize, ZSTD_dictMatchState);
}
size_t ZSTD_compressBlock_btultra_extDict(
ZSTD_MatchState_t* ms, SeqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
const void* src, size_t srcSize)
{
return ZSTD_compressBlock_opt2(ms, seqStore, rep, src, srcSize, ZSTD_extDict);
}
#endif
/* note : no btultra2 variant for extDict nor dictMatchState,
 * because btultra2 is not meant to work with dictionaries
 * and is specific to the first block (no prefix) */
/**** ended inlining compress/zstd_opt.c ****/
#ifdef ZSTD_MULTITHREAD
/**** start inlining compress/zstdmt_compress.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* ====== Compiler specifics ====== */
#if defined(_MSC_VER)
# pragma warning(disable : 4204) /* disable: C4204: non-constant aggregate initializer */
#endif
/* ====== Dependencies ====== */
/**** skipping file: ../common/allocations.h ****/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/pool.h ****/
/**** skipping file: ../common/threading.h ****/
/**** skipping file: zstd_compress_internal.h ****/
/**** skipping file: zstd_ldm.h ****/
/**** skipping file: zstdmt_compress.h ****/
/* Guards code to support resizing the SeqPool.
* We will want to resize the SeqPool to save memory in the future.
* Until then, comment the code out since it is unused.
*/
#define ZSTD_RESIZE_SEQPOOL 0
/* ====== Debug ====== */
#if defined(DEBUGLEVEL) && (DEBUGLEVEL>=2) \
&& !defined(_MSC_VER) \
&& !defined(__MINGW32__)
# include <stdio.h>
# include <unistd.h>
# include <sys/times.h>
# define DEBUG_PRINTHEX(l,p,n) \
do { \
unsigned debug_u; \
for (debug_u=0; debug_u<(n); debug_u++) \
RAWLOG(l, "%02X ", ((const unsigned char*)(p))[debug_u]); \
RAWLOG(l, " \n"); \
} while (0)
static unsigned long long GetCurrentClockTimeMicroseconds(void)
{
static clock_t _ticksPerSecond = 0;
if (_ticksPerSecond <= 0) _ticksPerSecond = sysconf(_SC_CLK_TCK);
{ struct tms junk; clock_t newTicks = (clock_t) times(&junk);
return ((((unsigned long long)newTicks)*(1000000))/_ticksPerSecond);
} }
#define MUTEX_WAIT_TIME_DLEVEL 6
#define ZSTD_PTHREAD_MUTEX_LOCK(mutex) \
do { \
if (DEBUGLEVEL >= MUTEX_WAIT_TIME_DLEVEL) { \
unsigned long long const beforeTime = GetCurrentClockTimeMicroseconds(); \
ZSTD_pthread_mutex_lock(mutex); \
{ unsigned long long const afterTime = GetCurrentClockTimeMicroseconds(); \
unsigned long long const elapsedTime = (afterTime-beforeTime); \
if (elapsedTime > 1000) { \
/* or whatever threshold you like; I'm using 1 millisecond here */ \
DEBUGLOG(MUTEX_WAIT_TIME_DLEVEL, \
"Thread took %llu microseconds to acquire mutex %s \n", \
elapsedTime, #mutex); \
} } \
} else { \
ZSTD_pthread_mutex_lock(mutex); \
} \
} while (0)
#else
# define ZSTD_PTHREAD_MUTEX_LOCK(m) ZSTD_pthread_mutex_lock(m)
# define DEBUG_PRINTHEX(l,p,n) do { } while (0)
#endif
/* ===== Buffer Pool ===== */
/* a single Buffer Pool can be invoked from multiple threads in parallel */
typedef struct buffer_s {
void* start;
size_t capacity;
} Buffer;
static const Buffer g_nullBuffer = { NULL, 0 };
typedef struct ZSTDMT_bufferPool_s {
ZSTD_pthread_mutex_t poolMutex;
size_t bufferSize;
unsigned totalBuffers;
unsigned nbBuffers;
ZSTD_customMem cMem;
Buffer* buffers;
} ZSTDMT_bufferPool;
static void ZSTDMT_freeBufferPool(ZSTDMT_bufferPool* bufPool)
{
DEBUGLOG(3, "ZSTDMT_freeBufferPool (address:%08X)", (U32)(size_t)bufPool);
if (!bufPool) return; /* compatibility with free on NULL */
if (bufPool->buffers) {
unsigned u;
for (u=0; u<bufPool->totalBuffers; u++) {
DEBUGLOG(4, "free buffer %2u (address:%08X)", u, (U32)(size_t)bufPool->buffers[u].start);
ZSTD_customFree(bufPool->buffers[u].start, bufPool->cMem);
}
ZSTD_customFree(bufPool->buffers, bufPool->cMem);
}
ZSTD_pthread_mutex_destroy(&bufPool->poolMutex);
ZSTD_customFree(bufPool, bufPool->cMem);
}
static ZSTDMT_bufferPool* ZSTDMT_createBufferPool(unsigned maxNbBuffers, ZSTD_customMem cMem)
{
ZSTDMT_bufferPool* const bufPool =
(ZSTDMT_bufferPool*)ZSTD_customCalloc(sizeof(ZSTDMT_bufferPool), cMem);
if (bufPool==NULL) return NULL;
if (ZSTD_pthread_mutex_init(&bufPool->poolMutex, NULL)) {
ZSTD_customFree(bufPool, cMem);
return NULL;
}
bufPool->buffers = (Buffer*)ZSTD_customCalloc(maxNbBuffers * sizeof(Buffer), cMem);
if (bufPool->buffers==NULL) {
ZSTDMT_freeBufferPool(bufPool);
return NULL;
}
bufPool->bufferSize = 64 KB;
bufPool->totalBuffers = maxNbBuffers;
bufPool->nbBuffers = 0;
bufPool->cMem = cMem;
return bufPool;
}
/* only works at initialization, not during compression */
static size_t ZSTDMT_sizeof_bufferPool(ZSTDMT_bufferPool* bufPool)
{
size_t const poolSize = sizeof(*bufPool);
size_t const arraySize = bufPool->totalBuffers * sizeof(Buffer);
unsigned u;
size_t totalBufferSize = 0;
ZSTD_pthread_mutex_lock(&bufPool->poolMutex);
for (u=0; u<bufPool->totalBuffers; u++)
totalBufferSize += bufPool->buffers[u].capacity;
ZSTD_pthread_mutex_unlock(&bufPool->poolMutex);
return poolSize + arraySize + totalBufferSize;
}
/* ZSTDMT_setBufferSize() :
* all future buffers provided by this buffer pool will have _at least_ this size
* note : it's better for all buffers to have the same size,
* as they become freely interchangeable, reducing malloc/free usages and memory fragmentation */
static void ZSTDMT_setBufferSize(ZSTDMT_bufferPool* const bufPool, size_t const bSize)
{
ZSTD_pthread_mutex_lock(&bufPool->poolMutex);
DEBUGLOG(4, "ZSTDMT_setBufferSize: bSize = %u", (U32)bSize);
bufPool->bufferSize = bSize;
ZSTD_pthread_mutex_unlock(&bufPool->poolMutex);
}
static ZSTDMT_bufferPool* ZSTDMT_expandBufferPool(ZSTDMT_bufferPool* srcBufPool, unsigned maxNbBuffers)
{
if (srcBufPool==NULL) return NULL;
if (srcBufPool->totalBuffers >= maxNbBuffers) /* good enough */
return srcBufPool;
/* need a larger buffer pool */
{ ZSTD_customMem const cMem = srcBufPool->cMem;
size_t const bSize = srcBufPool->bufferSize; /* forward parameters */
ZSTDMT_bufferPool* newBufPool;
ZSTDMT_freeBufferPool(srcBufPool);
newBufPool = ZSTDMT_createBufferPool(maxNbBuffers, cMem);
if (newBufPool==NULL) return newBufPool;
ZSTDMT_setBufferSize(newBufPool, bSize);
return newBufPool;
}
}
/** ZSTDMT_getBuffer() :
* assumption : bufPool must be valid
* @return : a buffer, with start pointer and size
* note: allocation may fail, in this case, start==NULL and size==0 */
static Buffer ZSTDMT_getBuffer(ZSTDMT_bufferPool* bufPool)
{
size_t const bSize = bufPool->bufferSize;
DEBUGLOG(5, "ZSTDMT_getBuffer: bSize = %u", (U32)bufPool->bufferSize);
ZSTD_pthread_mutex_lock(&bufPool->poolMutex);
if (bufPool->nbBuffers) { /* try to use an existing buffer */
Buffer const buf = bufPool->buffers[--(bufPool->nbBuffers)];
size_t const availBufferSize = buf.capacity;
bufPool->buffers[bufPool->nbBuffers] = g_nullBuffer;
if ((availBufferSize >= bSize) & ((availBufferSize>>3) <= bSize)) {
/* large enough, but not more than ~8x the requested size */
DEBUGLOG(5, "ZSTDMT_getBuffer: provide buffer %u of size %u",
bufPool->nbBuffers, (U32)buf.capacity);
ZSTD_pthread_mutex_unlock(&bufPool->poolMutex);
return buf;
}
/* size conditions not respected : scratch this buffer, create new one */
DEBUGLOG(5, "ZSTDMT_getBuffer: existing buffer does not meet size conditions => freeing");
ZSTD_customFree(buf.start, bufPool->cMem);
}
ZSTD_pthread_mutex_unlock(&bufPool->poolMutex);
/* create new buffer */
DEBUGLOG(5, "ZSTDMT_getBuffer: create a new buffer");
{ Buffer buffer;
void* const start = ZSTD_customMalloc(bSize, bufPool->cMem);
buffer.start = start; /* note : start can be NULL if malloc fails ! */
buffer.capacity = (start==NULL) ? 0 : bSize;
if (start==NULL) {
DEBUGLOG(5, "ZSTDMT_getBuffer: buffer allocation failure !!");
} else {
DEBUGLOG(5, "ZSTDMT_getBuffer: created buffer of size %u", (U32)bSize);
}
return buffer;
}
}
#if ZSTD_RESIZE_SEQPOOL
/** ZSTDMT_resizeBuffer() :
* assumption : bufPool must be valid
* @return : a buffer that is at least the buffer pool buffer size.
* If a reallocation happens, the data in the input buffer is copied.
*/
static Buffer ZSTDMT_resizeBuffer(ZSTDMT_bufferPool* bufPool, Buffer buffer)
{
size_t const bSize = bufPool->bufferSize;
if (buffer.capacity < bSize) {
void* const start = ZSTD_customMalloc(bSize, bufPool->cMem);
Buffer newBuffer;
newBuffer.start = start;
newBuffer.capacity = start == NULL ? 0 : bSize;
if (start != NULL) {
assert(newBuffer.capacity >= buffer.capacity);
ZSTD_memcpy(newBuffer.start, buffer.start, buffer.capacity);
DEBUGLOG(5, "ZSTDMT_resizeBuffer: created buffer of size %u", (U32)bSize);
return newBuffer;
}
DEBUGLOG(5, "ZSTDMT_resizeBuffer: buffer allocation failure !!");
}
return buffer;
}
#endif
/* store buffer for later re-use, up to pool capacity */
static void ZSTDMT_releaseBuffer(ZSTDMT_bufferPool* bufPool, Buffer buf)
{
DEBUGLOG(5, "ZSTDMT_releaseBuffer");
if (buf.start == NULL) return; /* compatible with release on NULL */
ZSTD_pthread_mutex_lock(&bufPool->poolMutex);
if (bufPool->nbBuffers < bufPool->totalBuffers) {
bufPool->buffers[bufPool->nbBuffers++] = buf; /* stored for later use */
DEBUGLOG(5, "ZSTDMT_releaseBuffer: stored buffer of size %u in slot %u",
(U32)buf.capacity, (U32)(bufPool->nbBuffers-1));
ZSTD_pthread_mutex_unlock(&bufPool->poolMutex);
return;
}
ZSTD_pthread_mutex_unlock(&bufPool->poolMutex);
/* Reached bufferPool capacity (note: should not happen) */
DEBUGLOG(5, "ZSTDMT_releaseBuffer: pool capacity reached => freeing ");
ZSTD_customFree(buf.start, bufPool->cMem);
}
/* We need 2 output buffers per worker since each dstBuff must be flushed after it is released.
* The 3 additional buffers are as follows:
* 1 buffer for input loading
* 1 buffer for "next input" when submitting current one
* 1 buffer stuck in queue */
#define BUF_POOL_MAX_NB_BUFFERS(nbWorkers) (2*(nbWorkers) + 3)
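/* e.g. with nbWorkers==4, the pool is sized for at most 2*4 + 3 = 11 buffers. */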
/* After a worker releases its rawSeqStore, it is immediately ready for reuse.
* So we only need one seq buffer per worker. */
#define SEQ_POOL_MAX_NB_BUFFERS(nbWorkers) (nbWorkers)
/* ===== Seq Pool Wrapper ====== */
typedef ZSTDMT_bufferPool ZSTDMT_seqPool;
static size_t ZSTDMT_sizeof_seqPool(ZSTDMT_seqPool* seqPool)
{
return ZSTDMT_sizeof_bufferPool(seqPool);
}
static RawSeqStore_t bufferToSeq(Buffer buffer)
{
RawSeqStore_t seq = kNullRawSeqStore;
seq.seq = (rawSeq*)buffer.start;
seq.capacity = buffer.capacity / sizeof(rawSeq);
return seq;
}
static Buffer seqToBuffer(RawSeqStore_t seq)
{
Buffer buffer;
buffer.start = seq.seq;
buffer.capacity = seq.capacity * sizeof(rawSeq);
return buffer;
}
static RawSeqStore_t ZSTDMT_getSeq(ZSTDMT_seqPool* seqPool)
{
if (seqPool->bufferSize == 0) {
return kNullRawSeqStore;
}
return bufferToSeq(ZSTDMT_getBuffer(seqPool));
}
#if ZSTD_RESIZE_SEQPOOL
static RawSeqStore_t ZSTDMT_resizeSeq(ZSTDMT_seqPool* seqPool, RawSeqStore_t seq)
{
return bufferToSeq(ZSTDMT_resizeBuffer(seqPool, seqToBuffer(seq)));
}
#endif
static void ZSTDMT_releaseSeq(ZSTDMT_seqPool* seqPool, RawSeqStore_t seq)
{
ZSTDMT_releaseBuffer(seqPool, seqToBuffer(seq));
}
static void ZSTDMT_setNbSeq(ZSTDMT_seqPool* const seqPool, size_t const nbSeq)
{
ZSTDMT_setBufferSize(seqPool, nbSeq * sizeof(rawSeq));
}
static ZSTDMT_seqPool* ZSTDMT_createSeqPool(unsigned nbWorkers, ZSTD_customMem cMem)
{
ZSTDMT_seqPool* const seqPool = ZSTDMT_createBufferPool(SEQ_POOL_MAX_NB_BUFFERS(nbWorkers), cMem);
if (seqPool == NULL) return NULL;
ZSTDMT_setNbSeq(seqPool, 0);
return seqPool;
}
static void ZSTDMT_freeSeqPool(ZSTDMT_seqPool* seqPool)
{
ZSTDMT_freeBufferPool(seqPool);
}
static ZSTDMT_seqPool* ZSTDMT_expandSeqPool(ZSTDMT_seqPool* pool, U32 nbWorkers)
{
return ZSTDMT_expandBufferPool(pool, SEQ_POOL_MAX_NB_BUFFERS(nbWorkers));
}
/* ===== CCtx Pool ===== */
/* a single CCtx Pool can be invoked from multiple threads in parallel */
typedef struct {
ZSTD_pthread_mutex_t poolMutex;
int totalCCtx;
int availCCtx;
ZSTD_customMem cMem;
ZSTD_CCtx** cctxs;
} ZSTDMT_CCtxPool;
/* note : all CCtx borrowed from the pool must be returned to the pool _before_ freeing the pool */
static void ZSTDMT_freeCCtxPool(ZSTDMT_CCtxPool* pool)
{
if (!pool) return;
ZSTD_pthread_mutex_destroy(&pool->poolMutex);
if (pool->cctxs) {
int cid;
for (cid=0; cid<pool->totalCCtx; cid++)
ZSTD_freeCCtx(pool->cctxs[cid]); /* free compatible with NULL */
ZSTD_customFree(pool->cctxs, pool->cMem);
}
ZSTD_customFree(pool, pool->cMem);
}
/* ZSTDMT_createCCtxPool() :
* implies nbWorkers >= 1 , checked by caller ZSTDMT_createCCtx() */
static ZSTDMT_CCtxPool* ZSTDMT_createCCtxPool(int nbWorkers,
ZSTD_customMem cMem)
{
ZSTDMT_CCtxPool* const cctxPool =
(ZSTDMT_CCtxPool*) ZSTD_customCalloc(sizeof(ZSTDMT_CCtxPool), cMem);
assert(nbWorkers > 0);
if (!cctxPool) return NULL;
if (ZSTD_pthread_mutex_init(&cctxPool->poolMutex, NULL)) {
ZSTD_customFree(cctxPool, cMem);
return NULL;
}
cctxPool->totalCCtx = nbWorkers;
cctxPool->cctxs = (ZSTD_CCtx**)ZSTD_customCalloc(nbWorkers * sizeof(ZSTD_CCtx*), cMem);
if (!cctxPool->cctxs) {
ZSTDMT_freeCCtxPool(cctxPool);
return NULL;
}
cctxPool->cMem = cMem;
cctxPool->cctxs[0] = ZSTD_createCCtx_advanced(cMem);
if (!cctxPool->cctxs[0]) { ZSTDMT_freeCCtxPool(cctxPool); return NULL; }
cctxPool->availCCtx = 1; /* at least one cctx for single-thread mode */
DEBUGLOG(3, "cctxPool created, with %u workers", nbWorkers);
return cctxPool;
}
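/* ZSTDMT_expandCCtxPool() :
* ensure the pool can serve at least @nbWorkers contexts.
* Note : expansion is implemented by freeing the existing pool and creating a new, larger one,
* which is only valid while no cctx is currently borrowed (see note above ZSTDMT_freeCCtxPool). */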
static ZSTDMT_CCtxPool* ZSTDMT_expandCCtxPool(ZSTDMT_CCtxPool* srcPool,
int nbWorkers)
{
if (srcPool==NULL) return NULL;
if (nbWorkers <= srcPool->totalCCtx) return srcPool; /* good enough */
/* need a larger cctx pool */
{ ZSTD_customMem const cMem = srcPool->cMem;
ZSTDMT_freeCCtxPool(srcPool);
return ZSTDMT_createCCtxPool(nbWorkers, cMem);
}
}
/* only works during initialization phase, not during compression */
static size_t ZSTDMT_sizeof_CCtxPool(ZSTDMT_CCtxPool* cctxPool)
{
ZSTD_pthread_mutex_lock(&cctxPool->poolMutex);
{ unsigned const nbWorkers = cctxPool->totalCCtx;
size_t const poolSize = sizeof(*cctxPool);
size_t const arraySize = cctxPool->totalCCtx * sizeof(ZSTD_CCtx*);
size_t totalCCtxSize = 0;
unsigned u;
for (u=0; u<nbWorkers; u++) {
totalCCtxSize += ZSTD_sizeof_CCtx(cctxPool->cctxs[u]);
}
ZSTD_pthread_mutex_unlock(&cctxPool->poolMutex);
assert(nbWorkers > 0);
return poolSize + arraySize + totalCCtxSize;
}
}
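/* ZSTDMT_getCCtx() :
* borrow a ZSTD_CCtx from the pool.
* If none is available, a new one is created on the fly
* (can return NULL when allocation fails). */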
static ZSTD_CCtx* ZSTDMT_getCCtx(ZSTDMT_CCtxPool* cctxPool)
{
DEBUGLOG(5, "ZSTDMT_getCCtx");
ZSTD_pthread_mutex_lock(&cctxPool->poolMutex);
if (cctxPool->availCCtx) {
cctxPool->availCCtx--;
{ ZSTD_CCtx* const cctx = cctxPool->cctxs[cctxPool->availCCtx];
ZSTD_pthread_mutex_unlock(&cctxPool->poolMutex);
return cctx;
} }
ZSTD_pthread_mutex_unlock(&cctxPool->poolMutex);
DEBUGLOG(5, "create one more CCtx");
return ZSTD_createCCtx_advanced(cctxPool->cMem); /* note : can be NULL, when creation fails ! */
}
static void ZSTDMT_releaseCCtx(ZSTDMT_CCtxPool* pool, ZSTD_CCtx* cctx)
{
if (cctx==NULL) return; /* compatibility with release on NULL */
ZSTD_pthread_mutex_lock(&pool->poolMutex);
if (pool->availCCtx < pool->totalCCtx)
pool->cctxs[pool->availCCtx++] = cctx;
else {
/* pool overflow : should not happen, since totalCCtx==nbWorkers */
DEBUGLOG(4, "CCtx pool overflow : free cctx");
ZSTD_freeCCtx(cctx);
}
ZSTD_pthread_mutex_unlock(&pool->poolMutex);
}
/* ==== Serial State ==== */
typedef struct {
void const* start;
size_t size;
} Range;
typedef struct {
/* All variables in the struct are protected by mutex. */
ZSTD_pthread_mutex_t mutex;
ZSTD_pthread_cond_t cond;
ZSTD_CCtx_params params;
ldmState_t ldmState;
XXH64_state_t xxhState;
unsigned nextJobID;
/* Protects ldmWindow.
* Must be acquired after the main mutex when acquiring both.
*/
ZSTD_pthread_mutex_t ldmWindowMutex;
ZSTD_pthread_cond_t ldmWindowCond; /* Signaled when ldmWindow is updated */
ZSTD_window_t ldmWindow; /* A thread-safe copy of ldmState.window */
} SerialState;
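/* ZSTDMT_serialState_reset() :
* prepare the serial state for a new frame :
* resets the job counter and (when requested) the checksum state,
* sizes the seq pool and (re)allocates the LDM tables when long-distance matching is enabled.
* @return : 0 on success, 1 on allocation failure */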
static int
ZSTDMT_serialState_reset(SerialState* serialState,
ZSTDMT_seqPool* seqPool,
ZSTD_CCtx_params params,
size_t jobSize,
const void* dict, size_t const dictSize,
ZSTD_dictContentType_e dictContentType)
{
/* Adjust parameters */
if (params.ldmParams.enableLdm == ZSTD_ps_enable) {
DEBUGLOG(4, "LDM window size = %u KB", (1U << params.cParams.windowLog) >> 10);
ZSTD_ldm_adjustParameters(&params.ldmParams, &params.cParams);
assert(params.ldmParams.hashLog >= params.ldmParams.bucketSizeLog);
assert(params.ldmParams.hashRateLog < 32);
} else {
ZSTD_memset(&params.ldmParams, 0, sizeof(params.ldmParams));
}
serialState->nextJobID = 0;
if (params.fParams.checksumFlag)
XXH64_reset(&serialState->xxhState, 0);
if (params.ldmParams.enableLdm == ZSTD_ps_enable) {
ZSTD_customMem cMem = params.customMem;
unsigned const hashLog = params.ldmParams.hashLog;
size_t const hashSize = ((size_t)1 << hashLog) * sizeof(ldmEntry_t);
unsigned const bucketLog =
params.ldmParams.hashLog - params.ldmParams.bucketSizeLog;
unsigned const prevBucketLog =
serialState->params.ldmParams.hashLog -
serialState->params.ldmParams.bucketSizeLog;
size_t const numBuckets = (size_t)1 << bucketLog;
/* Size the seq pool tables */
ZSTDMT_setNbSeq(seqPool, ZSTD_ldm_getMaxNbSeq(params.ldmParams, jobSize));
/* Reset the window */
ZSTD_window_init(&serialState->ldmState.window);
/* Resize tables and output space if necessary. */
if (serialState->ldmState.hashTable == NULL || serialState->params.ldmParams.hashLog < hashLog) {
ZSTD_customFree(serialState->ldmState.hashTable, cMem);
serialState->ldmState.hashTable = (ldmEntry_t*)ZSTD_customMalloc(hashSize, cMem);
}
if (serialState->ldmState.bucketOffsets == NULL || prevBucketLog < bucketLog) {
ZSTD_customFree(serialState->ldmState.bucketOffsets, cMem);
serialState->ldmState.bucketOffsets = (BYTE*)ZSTD_customMalloc(numBuckets, cMem);
}
if (!serialState->ldmState.hashTable || !serialState->ldmState.bucketOffsets)
return 1;
/* Zero the tables */
ZSTD_memset(serialState->ldmState.hashTable, 0, hashSize);
ZSTD_memset(serialState->ldmState.bucketOffsets, 0, numBuckets);
/* Update window state and fill hash table with dict */
serialState->ldmState.loadedDictEnd = 0;
if (dictSize > 0) {
if (dictContentType == ZSTD_dct_rawContent) {
BYTE const* const dictEnd = (const BYTE*)dict + dictSize;
ZSTD_window_update(&serialState->ldmState.window, dict, dictSize, /* forceNonContiguous */ 0);
ZSTD_ldm_fillHashTable(&serialState->ldmState, (const BYTE*)dict, dictEnd, &params.ldmParams);
serialState->ldmState.loadedDictEnd = params.forceWindow ? 0 : (U32)(dictEnd - serialState->ldmState.window.base);
} else {
/* don't even load anything */
}
}
/* Initialize serialState's copy of ldmWindow. */
serialState->ldmWindow = serialState->ldmState.window;
}
serialState->params = params;
serialState->params.jobSize = (U32)jobSize;
return 0;
}
static int ZSTDMT_serialState_init(SerialState* serialState)
{
int initError = 0;
ZSTD_memset(serialState, 0, sizeof(*serialState));
initError |= ZSTD_pthread_mutex_init(&serialState->mutex, NULL);
initError |= ZSTD_pthread_cond_init(&serialState->cond, NULL);
initError |= ZSTD_pthread_mutex_init(&serialState->ldmWindowMutex, NULL);
initError |= ZSTD_pthread_cond_init(&serialState->ldmWindowCond, NULL);
return initError;
}
static void ZSTDMT_serialState_free(SerialState* serialState)
{
ZSTD_customMem cMem = serialState->params.customMem;
ZSTD_pthread_mutex_destroy(&serialState->mutex);
ZSTD_pthread_cond_destroy(&serialState->cond);
ZSTD_pthread_mutex_destroy(&serialState->ldmWindowMutex);
ZSTD_pthread_cond_destroy(&serialState->ldmWindowCond);
ZSTD_customFree(serialState->ldmState.hashTable, cMem);
ZSTD_customFree(serialState->ldmState.bucketOffsets, cMem);
}
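/* ZSTDMT_serialState_genSequences() :
* serial portion of each compression job, executed in strict jobID order :
* waits for this job's turn, then (when LDM is enabled) updates the LDM window
* and generates the raw sequences for this job, updates the frame checksum when enabled,
* and finally signals the next job. */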
static void
ZSTDMT_serialState_genSequences(SerialState* serialState,
RawSeqStore_t* seqStore,
Range src, unsigned jobID)
{
/* Wait for our turn */
ZSTD_PTHREAD_MUTEX_LOCK(&serialState->mutex);
while (serialState->nextJobID < jobID) {
DEBUGLOG(5, "wait for serialState->cond");
ZSTD_pthread_cond_wait(&serialState->cond, &serialState->mutex);
}
/* A future job may error and skip our job */
if (serialState->nextJobID == jobID) {
/* It is now our turn, do any processing necessary */
if (serialState->params.ldmParams.enableLdm == ZSTD_ps_enable) {
size_t error;
DEBUGLOG(6, "ZSTDMT_serialState_genSequences: LDM update");
assert(seqStore->seq != NULL && seqStore->pos == 0 &&
seqStore->size == 0 && seqStore->capacity > 0);
assert(src.size <= serialState->params.jobSize);
ZSTD_window_update(&serialState->ldmState.window, src.start, src.size, /* forceNonContiguous */ 0);
error = ZSTD_ldm_generateSequences(
&serialState->ldmState, seqStore,
&serialState->params.ldmParams, src.start, src.size);
/* We provide a large enough buffer to never fail. */
assert(!ZSTD_isError(error)); (void)error;
/* Update ldmWindow to match the ldmState.window and signal the main
* thread if it is waiting for a buffer.
*/
ZSTD_PTHREAD_MUTEX_LOCK(&serialState->ldmWindowMutex);
serialState->ldmWindow = serialState->ldmState.window;
ZSTD_pthread_cond_signal(&serialState->ldmWindowCond);
ZSTD_pthread_mutex_unlock(&serialState->ldmWindowMutex);
}
if (serialState->params.fParams.checksumFlag && src.size > 0)
XXH64_update(&serialState->xxhState, src.start, src.size);
}
/* Now it is the next job's turn */
serialState->nextJobID++;
ZSTD_pthread_cond_broadcast(&serialState->cond);
ZSTD_pthread_mutex_unlock(&serialState->mutex);
}
static void
ZSTDMT_serialState_applySequences(const SerialState* serialState, /* just for an assert() check */
ZSTD_CCtx* jobCCtx,
const RawSeqStore_t* seqStore)
{
if (seqStore->size > 0) {
DEBUGLOG(5, "ZSTDMT_serialState_applySequences: uploading %u external sequences", (unsigned)seqStore->size);
assert(serialState->params.ldmParams.enableLdm == ZSTD_ps_enable); (void)serialState;
assert(jobCCtx);
ZSTD_referenceExternalSequences(jobCCtx, seqStore->seq, seqStore->size);
}
}
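/* ZSTDMT_serialState_ensureFinished() :
* invoked at the end of each job :
* if this job failed before completing its serial step (cSize is an error code),
* advance nextJobID past it and clear the shared ldmWindow,
* so that later jobs are not blocked waiting for it. */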
static void ZSTDMT_serialState_ensureFinished(SerialState* serialState,
unsigned jobID, size_t cSize)
{
ZSTD_PTHREAD_MUTEX_LOCK(&serialState->mutex);
if (serialState->nextJobID <= jobID) {
assert(ZSTD_isError(cSize)); (void)cSize;
DEBUGLOG(5, "Skipping past job %u because of error", jobID);
serialState->nextJobID = jobID + 1;
ZSTD_pthread_cond_broadcast(&serialState->cond);
ZSTD_PTHREAD_MUTEX_LOCK(&serialState->ldmWindowMutex);
ZSTD_window_clear(&serialState->ldmWindow);
ZSTD_pthread_cond_signal(&serialState->ldmWindowCond);
ZSTD_pthread_mutex_unlock(&serialState->ldmWindowMutex);
}
ZSTD_pthread_mutex_unlock(&serialState->mutex);
}
/* ------------------------------------------ */
/* ===== Worker thread ===== */
/* ------------------------------------------ */
static const Range kNullRange = { NULL, 0 };
typedef struct {
size_t consumed; /* SHARED - set0 by mtctx, then modified by worker AND read by mtctx */
size_t cSize; /* SHARED - set0 by mtctx, then modified by worker AND read by mtctx, then set0 by mtctx */
ZSTD_pthread_mutex_t job_mutex; /* Thread-safe - used by mtctx and worker */
ZSTD_pthread_cond_t job_cond; /* Thread-safe - used by mtctx and worker */
ZSTDMT_CCtxPool* cctxPool; /* Thread-safe - used by mtctx and (all) workers */
ZSTDMT_bufferPool* bufPool; /* Thread-safe - used by mtctx and (all) workers */
ZSTDMT_seqPool* seqPool; /* Thread-safe - used by mtctx and (all) workers */
SerialState* serial; /* Thread-safe - used by mtctx and (all) workers */
Buffer dstBuff; /* set by worker (or mtctx), then read by worker & mtctx, then modified by mtctx => no barrier */
Range prefix; /* set by mtctx, then read by worker & mtctx => no barrier */
Range src; /* set by mtctx, then read by worker & mtctx => no barrier */
unsigned jobID; /* set by mtctx, then read by worker => no barrier */
unsigned firstJob; /* set by mtctx, then read by worker => no barrier */
unsigned lastJob; /* set by mtctx, then read by worker => no barrier */
ZSTD_CCtx_params params; /* set by mtctx, then read by worker => no barrier */
const ZSTD_CDict* cdict; /* set by mtctx, then read by worker => no barrier */
unsigned long long fullFrameSize; /* set by mtctx, then read by worker => no barrier */
size_t dstFlushed; /* used only by mtctx */
unsigned frameChecksumNeeded; /* used only by mtctx */
} ZSTDMT_jobDescription;
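/* JOB_ERROR() :
* store the error code into job->cSize (under job_mutex, so the main thread can read it)
* and jump to the cleanup label */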
#define JOB_ERROR(e) \
do { \
ZSTD_PTHREAD_MUTEX_LOCK(&job->job_mutex); \
job->cSize = e; \
ZSTD_pthread_mutex_unlock(&job->job_mutex); \
goto _endJob; \
} while (0)
/* ZSTDMT_compressionJob() is a POOL_function type */
static void ZSTDMT_compressionJob(void* jobDescription)
{
ZSTDMT_jobDescription* const job = (ZSTDMT_jobDescription*)jobDescription;
ZSTD_CCtx_params jobParams = job->params; /* do not modify job->params ! copy it, modify the copy */
ZSTD_CCtx* const cctx = ZSTDMT_getCCtx(job->cctxPool);
RawSeqStore_t rawSeqStore = ZSTDMT_getSeq(job->seqPool);
Buffer dstBuff = job->dstBuff;
size_t lastCBlockSize = 0;
DEBUGLOG(5, "ZSTDMT_compressionJob: job %u", job->jobID);
/* resources */
if (cctx==NULL) JOB_ERROR(ERROR(memory_allocation));
if (dstBuff.start == NULL) { /* streaming job : doesn't provide a dstBuffer */
dstBuff = ZSTDMT_getBuffer(job->bufPool);
if (dstBuff.start==NULL) JOB_ERROR(ERROR(memory_allocation));
job->dstBuff = dstBuff; /* this value can be read in ZSTDMT_flush, when it copies the whole job */
}
if (jobParams.ldmParams.enableLdm == ZSTD_ps_enable && rawSeqStore.seq == NULL)
JOB_ERROR(ERROR(memory_allocation));
/* Don't compute the checksum for chunks, since we compute it externally,
* but write it in the header.
*/
if (job->jobID != 0) jobParams.fParams.checksumFlag = 0;
/* Don't run LDM for the chunks, since we handle it externally */
jobParams.ldmParams.enableLdm = ZSTD_ps_disable;
/* Correct nbWorkers to 0. */
jobParams.nbWorkers = 0;
/* init */
/* Perform serial step as early as possible */
ZSTDMT_serialState_genSequences(job->serial, &rawSeqStore, job->src, job->jobID);
if (job->cdict) {
size_t const initError = ZSTD_compressBegin_advanced_internal(cctx, NULL, 0, ZSTD_dct_auto, ZSTD_dtlm_fast, job->cdict, &jobParams, job->fullFrameSize);
assert(job->firstJob); /* only allowed for first job */
if (ZSTD_isError(initError)) JOB_ERROR(initError);
} else {
U64 const pledgedSrcSize = job->firstJob ? job->fullFrameSize : job->src.size;
{ size_t const forceWindowError = ZSTD_CCtxParams_setParameter(&jobParams, ZSTD_c_forceMaxWindow, !job->firstJob);
if (ZSTD_isError(forceWindowError)) JOB_ERROR(forceWindowError);
}
if (!job->firstJob) {
size_t const err = ZSTD_CCtxParams_setParameter(&jobParams, ZSTD_c_deterministicRefPrefix, 0);
if (ZSTD_isError(err)) JOB_ERROR(err);
}
DEBUGLOG(6, "ZSTDMT_compressionJob: job %u: loading prefix of size %zu", job->jobID, job->prefix.size);
{ size_t const initError = ZSTD_compressBegin_advanced_internal(cctx,
job->prefix.start, job->prefix.size, ZSTD_dct_rawContent,
ZSTD_dtlm_fast,
NULL, /*cdict*/
&jobParams, pledgedSrcSize);
if (ZSTD_isError(initError)) JOB_ERROR(initError);
} }
/* External Sequences can only be applied after CCtx initialization */
ZSTDMT_serialState_applySequences(job->serial, cctx, &rawSeqStore);
if (!job->firstJob) { /* flush and overwrite frame header when it's not first job */
size_t const hSize = ZSTD_compressContinue_public(cctx, dstBuff.start, dstBuff.capacity, job->src.start, 0);
if (ZSTD_isError(hSize)) JOB_ERROR(hSize);
DEBUGLOG(5, "ZSTDMT_compressionJob: flush and overwrite %u bytes of frame header (not first job)", (U32)hSize);
ZSTD_invalidateRepCodes(cctx);
}
/* compress the entire job by smaller chunks, for better granularity */
{ size_t const chunkSize = 4*ZSTD_BLOCKSIZE_MAX;
int const nbChunks = (int)((job->src.size + (chunkSize-1)) / chunkSize);
const BYTE* ip = (const BYTE*) job->src.start;
BYTE* const ostart = (BYTE*)dstBuff.start;
BYTE* op = ostart;
BYTE* oend = op + dstBuff.capacity;
int chunkNb;
if (sizeof(size_t) > sizeof(int)) assert(job->src.size < ((size_t)INT_MAX) * chunkSize); /* check overflow */
DEBUGLOG(5, "ZSTDMT_compressionJob: compress %u bytes in %i blocks", (U32)job->src.size, nbChunks);
assert(job->cSize == 0);
for (chunkNb = 1; chunkNb < nbChunks; chunkNb++) {
size_t const cSize = ZSTD_compressContinue_public(cctx, op, oend-op, ip, chunkSize);
if (ZSTD_isError(cSize)) JOB_ERROR(cSize);
ip += chunkSize;
op += cSize; assert(op < oend);
/* stats */
ZSTD_PTHREAD_MUTEX_LOCK(&job->job_mutex);
job->cSize += cSize;
job->consumed = chunkSize * chunkNb;
DEBUGLOG(5, "ZSTDMT_compressionJob: compress new block : cSize==%u bytes (total: %u)",
(U32)cSize, (U32)job->cSize);
ZSTD_pthread_cond_signal(&job->job_cond); /* warns some more data is ready to be flushed */
ZSTD_pthread_mutex_unlock(&job->job_mutex);
}
/* last block */
assert(chunkSize > 0);
assert((chunkSize & (chunkSize - 1)) == 0); /* chunkSize must be power of 2 for mask==(chunkSize-1) to work */
if ((nbChunks > 0) | job->lastJob /*must output a "last block" flag*/ ) {
size_t const lastBlockSize1 = job->src.size & (chunkSize-1);
size_t const lastBlockSize = ((lastBlockSize1==0) & (job->src.size>=chunkSize)) ? chunkSize : lastBlockSize1;
size_t const cSize = (job->lastJob) ?
ZSTD_compressEnd_public(cctx, op, oend-op, ip, lastBlockSize) :
ZSTD_compressContinue_public(cctx, op, oend-op, ip, lastBlockSize);
if (ZSTD_isError(cSize)) JOB_ERROR(cSize);
lastCBlockSize = cSize;
} }
if (!job->firstJob) {
/* Double check that we don't have an ext-dict, because then our
* repcode invalidation doesn't work.
*/
assert(!ZSTD_window_hasExtDict(cctx->blockState.matchState.window));
}
ZSTD_CCtx_trace(cctx, 0);
_endJob:
ZSTDMT_serialState_ensureFinished(job->serial, job->jobID, job->cSize);
if (job->prefix.size > 0)
DEBUGLOG(5, "Finished with prefix: %zx", (size_t)job->prefix.start);
DEBUGLOG(5, "Finished with source: %zx", (size_t)job->src.start);
/* release resources */
ZSTDMT_releaseSeq(job->seqPool, rawSeqStore);
ZSTDMT_releaseCCtx(job->cctxPool, cctx);
/* report */
ZSTD_PTHREAD_MUTEX_LOCK(&job->job_mutex);
if (ZSTD_isError(job->cSize)) assert(lastCBlockSize == 0);
job->cSize += lastCBlockSize;
job->consumed = job->src.size; /* when job->consumed == job->src.size , compression job is presumed completed */
ZSTD_pthread_cond_signal(&job->job_cond);
ZSTD_pthread_mutex_unlock(&job->job_mutex);
}
/* ------------------------------------------ */
/* ===== Multi-threaded compression ===== */
/* ------------------------------------------ */
typedef struct {
Range prefix; /* read-only non-owned prefix buffer */
Buffer buffer;
size_t filled;
} InBuff_t;
typedef struct {
BYTE* buffer; /* The round input buffer. All jobs get references
* to pieces of the buffer. ZSTDMT_tryGetInputRange()
* handles handing out job input buffers, and makes
* sure it doesn't overlap with any pieces still in use.
*/
size_t capacity; /* The capacity of buffer. */
size_t pos; /* The position of the current inBuff in the round
* buffer. Updated past the end of the inBuff once
* the inBuff is sent to the worker thread.
* pos <= capacity.
*/
} RoundBuff_t;
static const RoundBuff_t kNullRoundBuff = {NULL, 0, 0};
#define RSYNC_LENGTH 32
/* Don't create chunks smaller than the zstd block size.
* This stops us from regressing compression ratio too much,
* and ensures our output fits in ZSTD_compressBound().
*
* If this is shrunk < ZSTD_BLOCKSIZELOG_MIN then
* ZSTD_COMPRESSBOUND() will need to be updated.
*/
#define RSYNC_MIN_BLOCK_LOG ZSTD_BLOCKSIZELOG_MAX
#define RSYNC_MIN_BLOCK_SIZE (1<<RSYNC_MIN_BLOCK_LOG)
typedef struct {
U64 hash;
U64 hitMask;
U64 primePower;
} RSyncState_t;
struct ZSTDMT_CCtx_s {
POOL_ctx* factory;
ZSTDMT_jobDescription* jobs;
ZSTDMT_bufferPool* bufPool;
ZSTDMT_CCtxPool* cctxPool;
ZSTDMT_seqPool* seqPool;
ZSTD_CCtx_params params;
size_t targetSectionSize;
size_t targetPrefixSize;
int jobReady; /* 1 => one job is already prepared, but the pool has no free worker. Don't create a new job. */
InBuff_t inBuff;
RoundBuff_t roundBuff;
SerialState serial;
RSyncState_t rsync;
unsigned jobIDMask;
unsigned doneJobID;
unsigned nextJobID;
unsigned frameEnded;
unsigned allJobsCompleted;
unsigned long long frameContentSize;
unsigned long long consumed;
unsigned long long produced;
ZSTD_customMem cMem;
ZSTD_CDict* cdictLocal;
const ZSTD_CDict* cdict;
unsigned providedFactory: 1;
};
static void ZSTDMT_freeJobsTable(ZSTDMT_jobDescription* jobTable, U32 nbJobs, ZSTD_customMem cMem)
{
U32 jobNb;
if (jobTable == NULL) return;
for (jobNb=0; jobNb<nbJobs; jobNb++) {
ZSTD_pthread_mutex_destroy(&jobTable[jobNb].job_mutex);
ZSTD_pthread_cond_destroy(&jobTable[jobNb].job_cond);
}
ZSTD_customFree(jobTable, cMem);
}
/* ZSTDMT_createJobsTable()
* allocate and init a job table.
* update *nbJobsPtr to next power of 2 value, as size of table */
static ZSTDMT_jobDescription* ZSTDMT_createJobsTable(U32* nbJobsPtr, ZSTD_customMem cMem)
{
U32 const nbJobsLog2 = ZSTD_highbit32(*nbJobsPtr) + 1;
U32 const nbJobs = 1 << nbJobsLog2;
U32 jobNb;
ZSTDMT_jobDescription* const jobTable = (ZSTDMT_jobDescription*)
ZSTD_customCalloc(nbJobs * sizeof(ZSTDMT_jobDescription), cMem);
int initError = 0;
if (jobTable==NULL) return NULL;
*nbJobsPtr = nbJobs;
for (jobNb=0; jobNb<nbJobs; jobNb++) {
initError |= ZSTD_pthread_mutex_init(&jobTable[jobNb].job_mutex, NULL);
initError |= ZSTD_pthread_cond_init(&jobTable[jobNb].job_cond, NULL);
}
if (initError != 0) {
ZSTDMT_freeJobsTable(jobTable, nbJobs, cMem);
return NULL;
}
return jobTable;
}
static size_t ZSTDMT_expandJobsTable (ZSTDMT_CCtx* mtctx, U32 nbWorkers) {
U32 nbJobs = nbWorkers + 2;
if (nbJobs > mtctx->jobIDMask+1) { /* need more job capacity */
ZSTDMT_freeJobsTable(mtctx->jobs, mtctx->jobIDMask+1, mtctx->cMem);
mtctx->jobIDMask = 0;
mtctx->jobs = ZSTDMT_createJobsTable(&nbJobs, mtctx->cMem);
if (mtctx->jobs==NULL) return ERROR(memory_allocation);
assert((nbJobs != 0) && ((nbJobs & (nbJobs - 1)) == 0)); /* ensure nbJobs is a power of 2 */
mtctx->jobIDMask = nbJobs - 1;
}
return 0;
}
/* ZSTDMT_CCtxParam_setNbWorkers():
* Internal use only */
static size_t ZSTDMT_CCtxParam_setNbWorkers(ZSTD_CCtx_params* params, unsigned nbWorkers)
{
return ZSTD_CCtxParams_setParameter(params, ZSTD_c_nbWorkers, (int)nbWorkers);
}
MEM_STATIC ZSTDMT_CCtx* ZSTDMT_createCCtx_advanced_internal(unsigned nbWorkers, ZSTD_customMem cMem, ZSTD_threadPool* pool)
{
ZSTDMT_CCtx* mtctx;
U32 nbJobs = nbWorkers + 2;
int initError;
DEBUGLOG(3, "ZSTDMT_createCCtx_advanced (nbWorkers = %u)", nbWorkers);
if (nbWorkers < 1) return NULL;
nbWorkers = MIN(nbWorkers , ZSTDMT_NBWORKERS_MAX);
if ((cMem.customAlloc!=NULL) ^ (cMem.customFree!=NULL))
/* invalid custom allocator */
return NULL;
mtctx = (ZSTDMT_CCtx*) ZSTD_customCalloc(sizeof(ZSTDMT_CCtx), cMem);
if (!mtctx) return NULL;
ZSTDMT_CCtxParam_setNbWorkers(&mtctx->params, nbWorkers);
mtctx->cMem = cMem;
mtctx->allJobsCompleted = 1;
if (pool != NULL) {
mtctx->factory = pool;
mtctx->providedFactory = 1;
}
else {
mtctx->factory = POOL_create_advanced(nbWorkers, 0, cMem);
mtctx->providedFactory = 0;
}
mtctx->jobs = ZSTDMT_createJobsTable(&nbJobs, cMem);
assert(nbJobs > 0); assert((nbJobs & (nbJobs - 1)) == 0); /* ensure nbJobs is a power of 2 */
mtctx->jobIDMask = nbJobs - 1;
mtctx->bufPool = ZSTDMT_createBufferPool(BUF_POOL_MAX_NB_BUFFERS(nbWorkers), cMem);
mtctx->cctxPool = ZSTDMT_createCCtxPool(nbWorkers, cMem);
mtctx->seqPool = ZSTDMT_createSeqPool(nbWorkers, cMem);
initError = ZSTDMT_serialState_init(&mtctx->serial);
mtctx->roundBuff = kNullRoundBuff;
if (!mtctx->factory | !mtctx->jobs | !mtctx->bufPool | !mtctx->cctxPool | !mtctx->seqPool | initError) {
ZSTDMT_freeCCtx(mtctx);
return NULL;
}
DEBUGLOG(3, "mt_cctx created, for %u threads", nbWorkers);
return mtctx;
}
ZSTDMT_CCtx* ZSTDMT_createCCtx_advanced(unsigned nbWorkers, ZSTD_customMem cMem, ZSTD_threadPool* pool)
{
#ifdef ZSTD_MULTITHREAD
return ZSTDMT_createCCtx_advanced_internal(nbWorkers, cMem, pool);
#else
(void)nbWorkers;
(void)cMem;
(void)pool;
return NULL;
#endif
}
/* ZSTDMT_releaseAllJobResources() :
* note : ensure all workers are killed first ! */
static void ZSTDMT_releaseAllJobResources(ZSTDMT_CCtx* mtctx)
{
unsigned jobID;
DEBUGLOG(3, "ZSTDMT_releaseAllJobResources");
for (jobID=0; jobID <= mtctx->jobIDMask; jobID++) {
/* Copy the mutex/cond out */
ZSTD_pthread_mutex_t const mutex = mtctx->jobs[jobID].job_mutex;
ZSTD_pthread_cond_t const cond = mtctx->jobs[jobID].job_cond;
DEBUGLOG(4, "job%02u: release dst address %08X", jobID, (U32)(size_t)mtctx->jobs[jobID].dstBuff.start);
ZSTDMT_releaseBuffer(mtctx->bufPool, mtctx->jobs[jobID].dstBuff);
/* Clear the job description, but keep the mutex/cond */
ZSTD_memset(&mtctx->jobs[jobID], 0, sizeof(mtctx->jobs[jobID]));
mtctx->jobs[jobID].job_mutex = mutex;
mtctx->jobs[jobID].job_cond = cond;
}
mtctx->inBuff.buffer = g_nullBuffer;
mtctx->inBuff.filled = 0;
mtctx->allJobsCompleted = 1;
}
static void ZSTDMT_waitForAllJobsCompleted(ZSTDMT_CCtx* mtctx)
{
DEBUGLOG(4, "ZSTDMT_waitForAllJobsCompleted");
while (mtctx->doneJobID < mtctx->nextJobID) {
unsigned const jobID = mtctx->doneJobID & mtctx->jobIDMask;
ZSTD_PTHREAD_MUTEX_LOCK(&mtctx->jobs[jobID].job_mutex);
while (mtctx->jobs[jobID].consumed < mtctx->jobs[jobID].src.size) {
DEBUGLOG(4, "waiting for jobCompleted signal from job %u", mtctx->doneJobID); /* we want to block when waiting for data to flush */
ZSTD_pthread_cond_wait(&mtctx->jobs[jobID].job_cond, &mtctx->jobs[jobID].job_mutex);
}
ZSTD_pthread_mutex_unlock(&mtctx->jobs[jobID].job_mutex);
mtctx->doneJobID++;
}
}
size_t ZSTDMT_freeCCtx(ZSTDMT_CCtx* mtctx)
{
if (mtctx==NULL) return 0; /* compatible with free on NULL */
if (!mtctx->providedFactory)
POOL_free(mtctx->factory); /* stop and free worker threads */
ZSTDMT_releaseAllJobResources(mtctx); /* release job resources into pools first */
ZSTDMT_freeJobsTable(mtctx->jobs, mtctx->jobIDMask+1, mtctx->cMem);
ZSTDMT_freeBufferPool(mtctx->bufPool);
ZSTDMT_freeCCtxPool(mtctx->cctxPool);
ZSTDMT_freeSeqPool(mtctx->seqPool);
ZSTDMT_serialState_free(&mtctx->serial);
ZSTD_freeCDict(mtctx->cdictLocal);
if (mtctx->roundBuff.buffer)
ZSTD_customFree(mtctx->roundBuff.buffer, mtctx->cMem);
ZSTD_customFree(mtctx, mtctx->cMem);
return 0;
}
size_t ZSTDMT_sizeof_CCtx(ZSTDMT_CCtx* mtctx)
{
if (mtctx == NULL) return 0; /* supports sizeof NULL */
return sizeof(*mtctx)
+ POOL_sizeof(mtctx->factory)
+ ZSTDMT_sizeof_bufferPool(mtctx->bufPool)
+ (mtctx->jobIDMask+1) * sizeof(ZSTDMT_jobDescription)
+ ZSTDMT_sizeof_CCtxPool(mtctx->cctxPool)
+ ZSTDMT_sizeof_seqPool(mtctx->seqPool)
+ ZSTD_sizeof_CDict(mtctx->cdictLocal)
+ mtctx->roundBuff.capacity;
}
/* ZSTDMT_resize() :
* @return : error code if fails, 0 on success */
static size_t ZSTDMT_resize(ZSTDMT_CCtx* mtctx, unsigned nbWorkers)
{
if (POOL_resize(mtctx->factory, nbWorkers)) return ERROR(memory_allocation);
FORWARD_IF_ERROR( ZSTDMT_expandJobsTable(mtctx, nbWorkers) , "");
mtctx->bufPool = ZSTDMT_expandBufferPool(mtctx->bufPool, BUF_POOL_MAX_NB_BUFFERS(nbWorkers));
if (mtctx->bufPool == NULL) return ERROR(memory_allocation);
mtctx->cctxPool = ZSTDMT_expandCCtxPool(mtctx->cctxPool, nbWorkers);
if (mtctx->cctxPool == NULL) return ERROR(memory_allocation);
mtctx->seqPool = ZSTDMT_expandSeqPool(mtctx->seqPool, nbWorkers);
if (mtctx->seqPool == NULL) return ERROR(memory_allocation);
ZSTDMT_CCtxParam_setNbWorkers(&mtctx->params, nbWorkers);
return 0;
}
/*! ZSTDMT_updateCParams_whileCompressing() :
* Updates a selected set of compression parameters, remaining compatible with currently active frame.
* New parameters will be applied to next compression job. */
void ZSTDMT_updateCParams_whileCompressing(ZSTDMT_CCtx* mtctx, const ZSTD_CCtx_params* cctxParams)
{
U32 const saved_wlog = mtctx->params.cParams.windowLog; /* Do not modify windowLog while compressing */
int const compressionLevel = cctxParams->compressionLevel;
DEBUGLOG(5, "ZSTDMT_updateCParams_whileCompressing (level:%i)",
compressionLevel);
mtctx->params.compressionLevel = compressionLevel;
{ ZSTD_compressionParameters cParams = ZSTD_getCParamsFromCCtxParams(cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, 0, ZSTD_cpm_noAttachDict);
cParams.windowLog = saved_wlog;
mtctx->params.cParams = cParams;
}
}
/* ZSTDMT_getFrameProgression():
* tells how much data has been consumed (input) and produced (output) for current frame.
* able to count progression inside worker threads.
* Note : mutex will be acquired during statistics collection inside workers. */
ZSTD_frameProgression ZSTDMT_getFrameProgression(ZSTDMT_CCtx* mtctx)
{
ZSTD_frameProgression fps;
DEBUGLOG(5, "ZSTDMT_getFrameProgression");
fps.ingested = mtctx->consumed + mtctx->inBuff.filled;
fps.consumed = mtctx->consumed;
fps.produced = fps.flushed = mtctx->produced;
fps.currentJobID = mtctx->nextJobID;
fps.nbActiveWorkers = 0;
{ unsigned jobNb;
unsigned lastJobNb = mtctx->nextJobID + mtctx->jobReady; assert(mtctx->jobReady <= 1);
DEBUGLOG(6, "ZSTDMT_getFrameProgression: jobs: from %u to <%u (jobReady:%u)",
mtctx->doneJobID, lastJobNb, mtctx->jobReady);
for (jobNb = mtctx->doneJobID ; jobNb < lastJobNb ; jobNb++) {
unsigned const wJobID = jobNb & mtctx->jobIDMask;
ZSTDMT_jobDescription* jobPtr = &mtctx->jobs[wJobID];
ZSTD_pthread_mutex_lock(&jobPtr->job_mutex);
{ size_t const cResult = jobPtr->cSize;
size_t const produced = ZSTD_isError(cResult) ? 0 : cResult;
size_t const flushed = ZSTD_isError(cResult) ? 0 : jobPtr->dstFlushed;
assert(flushed <= produced);
fps.ingested += jobPtr->src.size;
fps.consumed += jobPtr->consumed;
fps.produced += produced;
fps.flushed += flushed;
fps.nbActiveWorkers += (jobPtr->consumed < jobPtr->src.size);
}
ZSTD_pthread_mutex_unlock(&mtctx->jobs[wJobID].job_mutex);
}
}
return fps;
}
size_t ZSTDMT_toFlushNow(ZSTDMT_CCtx* mtctx)
{
size_t toFlush;
unsigned const jobID = mtctx->doneJobID;
assert(jobID <= mtctx->nextJobID);
if (jobID == mtctx->nextJobID) return 0; /* no active job => nothing to flush */
/* look into oldest non-fully-flushed job */
{ unsigned const wJobID = jobID & mtctx->jobIDMask;
ZSTDMT_jobDescription* const jobPtr = &mtctx->jobs[wJobID];
ZSTD_pthread_mutex_lock(&jobPtr->job_mutex);
{ size_t const cResult = jobPtr->cSize;
size_t const produced = ZSTD_isError(cResult) ? 0 : cResult;
size_t const flushed = ZSTD_isError(cResult) ? 0 : jobPtr->dstFlushed;
assert(flushed <= produced);
assert(jobPtr->consumed <= jobPtr->src.size);
toFlush = produced - flushed;
/* if toFlush==0, nothing is available to flush.
* However, jobID is expected to still be active:
* if jobID was already completed and fully flushed,
* ZSTDMT_flushProduced() should have already moved onto next job.
* Therefore, some input has not yet been consumed. */
if (toFlush==0) {
assert(jobPtr->consumed < jobPtr->src.size);
}
}
ZSTD_pthread_mutex_unlock(&mtctx->jobs[wJobID].job_mutex);
}
return toFlush;
}
/* ------------------------------------------ */
/* ===== Multi-threaded compression ===== */
/* ------------------------------------------ */
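/* ZSTDMT_computeTargetJobLog() :
* select a default job size (log2) for the current parameters :
* derived from windowLog, or from cycleLog when long-range matching is enabled,
* and capped at ZSTDMT_JOBLOG_MAX. */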
static unsigned ZSTDMT_computeTargetJobLog(const ZSTD_CCtx_params* params)
{
unsigned jobLog;
if (params->ldmParams.enableLdm == ZSTD_ps_enable) {
/* In Long Range Mode, the windowLog is typically oversized.
* In which case, it's preferable to determine the jobSize
* based on cycleLog instead. */
jobLog = MAX(21, ZSTD_cycleLog(params->cParams.chainLog, params->cParams.strategy) + 3);
} else {
jobLog = MAX(20, params->cParams.windowLog + 2);
}
return MIN(jobLog, (unsigned)ZSTDMT_JOBLOG_MAX);
}
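/* ZSTDMT_overlapLog_default() :
* default overlap, expressed on a log2 scale where 9 means a full window of overlap
* (see ZSTDMT_computeOverlapSize() below) : stronger strategies default to larger overlaps. */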
static int ZSTDMT_overlapLog_default(ZSTD_strategy strat)
{
switch(strat)
{
case ZSTD_btultra2:
return 9;
case ZSTD_btultra:
case ZSTD_btopt:
return 8;
case ZSTD_btlazy2:
case ZSTD_lazy2:
return 7;
case ZSTD_lazy:
case ZSTD_greedy:
case ZSTD_dfast:
case ZSTD_fast:
default:;
}
return 6;
}
static int ZSTDMT_overlapLog(int ovlog, ZSTD_strategy strat)
{
assert(0 <= ovlog && ovlog <= 9);
if (ovlog == 0) return ZSTDMT_overlapLog_default(strat);
return ovlog;
}
static size_t ZSTDMT_computeOverlapSize(const ZSTD_CCtx_params* params)
{
int const overlapRLog = 9 - ZSTDMT_overlapLog(params->overlapLog, params->cParams.strategy);
int ovLog = (overlapRLog >= 8) ? 0 : (params->cParams.windowLog - overlapRLog);
assert(0 <= overlapRLog && overlapRLog <= 8);
if (params->ldmParams.enableLdm == ZSTD_ps_enable) {
/* In Long Range Mode, the windowLog is typically oversized.
* In which case, it's preferable to determine the jobSize
* based on chainLog instead.
* Then, ovLog becomes a fraction of the jobSize, rather than windowSize */
ovLog = MIN(params->cParams.windowLog, ZSTDMT_computeTargetJobLog(params) - 2)
- overlapRLog;
}
assert(0 <= ovLog && ovLog <= ZSTD_WINDOWLOG_MAX);
DEBUGLOG(4, "overlapLog : %i", params->overlapLog);
DEBUGLOG(4, "overlap size : %i", 1 << ovLog);
return (ovLog==0) ? 0 : (size_t)1 << ovLog;
}
/* ====================================== */
/* ======= Streaming API ======= */
/* ====================================== */
size_t ZSTDMT_initCStream_internal(
ZSTDMT_CCtx* mtctx,
const void* dict, size_t dictSize, ZSTD_dictContentType_e dictContentType,
const ZSTD_CDict* cdict, ZSTD_CCtx_params params,
unsigned long long pledgedSrcSize)
{
DEBUGLOG(4, "ZSTDMT_initCStream_internal (pledgedSrcSize=%u, nbWorkers=%u, cctxPool=%u)",
(U32)pledgedSrcSize, params.nbWorkers, mtctx->cctxPool->totalCCtx);
/* params are supposed to be fully validated at this point */
assert(!ZSTD_isError(ZSTD_checkCParams(params.cParams)));
assert(!((dict) && (cdict))); /* either dict or cdict, not both */
/* init */
if (params.nbWorkers != mtctx->params.nbWorkers)
FORWARD_IF_ERROR( ZSTDMT_resize(mtctx, (unsigned)params.nbWorkers) , "");
if (params.jobSize != 0 && params.jobSize < ZSTDMT_JOBSIZE_MIN) params.jobSize = ZSTDMT_JOBSIZE_MIN;
if (params.jobSize > (size_t)ZSTDMT_JOBSIZE_MAX) params.jobSize = (size_t)ZSTDMT_JOBSIZE_MAX;
if (mtctx->allJobsCompleted == 0) { /* previous compression not correctly finished */
ZSTDMT_waitForAllJobsCompleted(mtctx);
ZSTDMT_releaseAllJobResources(mtctx);
mtctx->allJobsCompleted = 1;
}
mtctx->params = params;
mtctx->frameContentSize = pledgedSrcSize;
ZSTD_freeCDict(mtctx->cdictLocal);
if (dict) {
mtctx->cdictLocal = ZSTD_createCDict_advanced(dict, dictSize,
ZSTD_dlm_byCopy, dictContentType, /* note : a loadPrefix becomes an internal CDict */
params.cParams, mtctx->cMem);
mtctx->cdict = mtctx->cdictLocal;
if (mtctx->cdictLocal == NULL) return ERROR(memory_allocation);
} else {
mtctx->cdictLocal = NULL;
mtctx->cdict = cdict;
}
mtctx->targetPrefixSize = ZSTDMT_computeOverlapSize(&params);
DEBUGLOG(4, "overlapLog=%i => %u KB", params.overlapLog, (U32)(mtctx->targetPrefixSize>>10));
mtctx->targetSectionSize = params.jobSize;
if (mtctx->targetSectionSize == 0) {
mtctx->targetSectionSize = 1ULL << ZSTDMT_computeTargetJobLog(&params);
}
assert(mtctx->targetSectionSize <= (size_t)ZSTDMT_JOBSIZE_MAX);
if (params.rsyncable) {
/* Aim for targetSectionSize as the average job size. */
U32 const jobSizeKB = (U32)(mtctx->targetSectionSize >> 10);
U32 const rsyncBits = (assert(jobSizeKB >= 1), ZSTD_highbit32(jobSizeKB) + 10);
/* We refuse to create jobs < RSYNC_MIN_BLOCK_SIZE bytes, so make sure our
* expected job size is at least 4x larger. */
assert(rsyncBits >= RSYNC_MIN_BLOCK_LOG + 2);
DEBUGLOG(4, "rsyncLog = %u", rsyncBits);
mtctx->rsync.hash = 0;
mtctx->rsync.hitMask = (1ULL << rsyncBits) - 1;
mtctx->rsync.primePower = ZSTD_rollingHash_primePower(RSYNC_LENGTH);
}
if (mtctx->targetSectionSize < mtctx->targetPrefixSize) mtctx->targetSectionSize = mtctx->targetPrefixSize; /* job size must be >= overlap size */
DEBUGLOG(4, "Job Size : %u KB (note : set to %u)", (U32)(mtctx->targetSectionSize>>10), (U32)params.jobSize);
DEBUGLOG(4, "inBuff Size : %u KB", (U32)(mtctx->targetSectionSize>>10));
ZSTDMT_setBufferSize(mtctx->bufPool, ZSTD_compressBound(mtctx->targetSectionSize));
{
/* If ldm is enabled we need windowSize space. */
size_t const windowSize = mtctx->params.ldmParams.enableLdm == ZSTD_ps_enable ? (1U << mtctx->params.cParams.windowLog) : 0;
/* Two buffers of slack, plus extra space for the overlap
* This is the minimum slack that LDM works with. One extra because
* flush might waste up to targetSectionSize-1 bytes. Another extra
* for the overlap (if > 0), then one to fill which doesn't overlap
* with the LDM window.
*/
size_t const nbSlackBuffers = 2 + (mtctx->targetPrefixSize > 0);
size_t const slackSize = mtctx->targetSectionSize * nbSlackBuffers;
/* Compute the total size, and always have enough slack */
size_t const nbWorkers = MAX(mtctx->params.nbWorkers, 1);
size_t const sectionsSize = mtctx->targetSectionSize * nbWorkers;
size_t const capacity = MAX(windowSize, sectionsSize) + slackSize;
if (mtctx->roundBuff.capacity < capacity) {
if (mtctx->roundBuff.buffer)
ZSTD_customFree(mtctx->roundBuff.buffer, mtctx->cMem);
mtctx->roundBuff.buffer = (BYTE*)ZSTD_customMalloc(capacity, mtctx->cMem);
if (mtctx->roundBuff.buffer == NULL) {
mtctx->roundBuff.capacity = 0;
return ERROR(memory_allocation);
}
mtctx->roundBuff.capacity = capacity;
}
}
DEBUGLOG(4, "roundBuff capacity : %u KB", (U32)(mtctx->roundBuff.capacity>>10));
mtctx->roundBuff.pos = 0;
mtctx->inBuff.buffer = g_nullBuffer;
mtctx->inBuff.filled = 0;
mtctx->inBuff.prefix = kNullRange;
mtctx->doneJobID = 0;
mtctx->nextJobID = 0;
mtctx->frameEnded = 0;
mtctx->allJobsCompleted = 0;
mtctx->consumed = 0;
mtctx->produced = 0;
/* update dictionary */
ZSTD_freeCDict(mtctx->cdictLocal);
mtctx->cdictLocal = NULL;
mtctx->cdict = NULL;
if (dict) {
if (dictContentType == ZSTD_dct_rawContent) {
mtctx->inBuff.prefix.start = (const BYTE*)dict;
mtctx->inBuff.prefix.size = dictSize;
} else {
/* note : a loadPrefix becomes an internal CDict */
mtctx->cdictLocal = ZSTD_createCDict_advanced(dict, dictSize,
ZSTD_dlm_byRef, dictContentType,
params.cParams, mtctx->cMem);
mtctx->cdict = mtctx->cdictLocal;
if (mtctx->cdictLocal == NULL) return ERROR(memory_allocation);
}
} else {
mtctx->cdict = cdict;
}
if (ZSTDMT_serialState_reset(&mtctx->serial, mtctx->seqPool, params, mtctx->targetSectionSize,
dict, dictSize, dictContentType))
return ERROR(memory_allocation);
return 0;
}
/* ZSTDMT_writeLastEmptyBlock()
* Write a single empty block with an end-of-frame to finish a frame.
* Job must be created from streaming variant.
* This function is always successful if expected conditions are fulfilled.
*/
static void ZSTDMT_writeLastEmptyBlock(ZSTDMT_jobDescription* job)
{
assert(job->lastJob == 1);
assert(job->src.size == 0); /* last job is empty -> will be simplified into a last empty block */
assert(job->firstJob == 0); /* cannot be first job, as it also needs to create frame header */
assert(job->dstBuff.start == NULL); /* invoked from streaming variant only (otherwise, dstBuff might be user's output) */
job->dstBuff = ZSTDMT_getBuffer(job->bufPool);
if (job->dstBuff.start == NULL) {
job->cSize = ERROR(memory_allocation);
return;
}
assert(job->dstBuff.capacity >= ZSTD_blockHeaderSize); /* no buffer should ever be that small */
job->src = kNullRange;
job->cSize = ZSTD_writeLastEmptyBlock(job->dstBuff.start, job->dstBuff.capacity);
assert(!ZSTD_isError(job->cSize));
assert(job->consumed == 0);
}
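/* ZSTDMT_createCompressionJob() :
* package the current input buffer into a job descriptor and submit it to the thread pool.
* If no worker is currently available, the prepared job is kept pending (mtctx->jobReady==1)
* and will be posted on a later call. */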
static size_t ZSTDMT_createCompressionJob(ZSTDMT_CCtx* mtctx, size_t srcSize, ZSTD_EndDirective endOp)
{
unsigned const jobID = mtctx->nextJobID & mtctx->jobIDMask;
int const endFrame = (endOp == ZSTD_e_end);
if (mtctx->nextJobID > mtctx->doneJobID + mtctx->jobIDMask) {
DEBUGLOG(5, "ZSTDMT_createCompressionJob: will not create new job : table is full");
assert((mtctx->nextJobID & mtctx->jobIDMask) == (mtctx->doneJobID & mtctx->jobIDMask));
return 0;
}
if (!mtctx->jobReady) {
BYTE const* src = (BYTE const*)mtctx->inBuff.buffer.start;
DEBUGLOG(5, "ZSTDMT_createCompressionJob: preparing job %u to compress %u bytes with %u preload ",
mtctx->nextJobID, (U32)srcSize, (U32)mtctx->inBuff.prefix.size);
mtctx->jobs[jobID].src.start = src;
mtctx->jobs[jobID].src.size = srcSize;
assert(mtctx->inBuff.filled >= srcSize);
mtctx->jobs[jobID].prefix = mtctx->inBuff.prefix;
mtctx->jobs[jobID].consumed = 0;
mtctx->jobs[jobID].cSize = 0;
mtctx->jobs[jobID].params = mtctx->params;
mtctx->jobs[jobID].cdict = mtctx->nextJobID==0 ? mtctx->cdict : NULL;
mtctx->jobs[jobID].fullFrameSize = mtctx->frameContentSize;
mtctx->jobs[jobID].dstBuff = g_nullBuffer;
mtctx->jobs[jobID].cctxPool = mtctx->cctxPool;
mtctx->jobs[jobID].bufPool = mtctx->bufPool;
mtctx->jobs[jobID].seqPool = mtctx->seqPool;
mtctx->jobs[jobID].serial = &mtctx->serial;
mtctx->jobs[jobID].jobID = mtctx->nextJobID;
mtctx->jobs[jobID].firstJob = (mtctx->nextJobID==0);
mtctx->jobs[jobID].lastJob = endFrame;
mtctx->jobs[jobID].frameChecksumNeeded = mtctx->params.fParams.checksumFlag && endFrame && (mtctx->nextJobID>0);
mtctx->jobs[jobID].dstFlushed = 0;
/* Update the round buffer pos and clear the input buffer to be reset */
mtctx->roundBuff.pos += srcSize;
mtctx->inBuff.buffer = g_nullBuffer;
mtctx->inBuff.filled = 0;
/* Set the prefix for next job */
if (!endFrame) {
size_t const newPrefixSize = MIN(srcSize, mtctx->targetPrefixSize);
mtctx->inBuff.prefix.start = src + srcSize - newPrefixSize;
mtctx->inBuff.prefix.size = newPrefixSize;
} else { /* endFrame==1 => no need for another input buffer */
mtctx->inBuff.prefix = kNullRange;
mtctx->frameEnded = endFrame;
if (mtctx->nextJobID == 0) {
/* single job exception : checksum is already calculated directly within worker thread */
mtctx->params.fParams.checksumFlag = 0;
} }
if ( (srcSize == 0)
&& (mtctx->nextJobID>0)/*single job must also write frame header*/ ) {
DEBUGLOG(5, "ZSTDMT_createCompressionJob: creating a last empty block to end frame");
assert(endOp == ZSTD_e_end); /* only possible case : need to end the frame with an empty last block */
ZSTDMT_writeLastEmptyBlock(mtctx->jobs + jobID);
mtctx->nextJobID++;
return 0;
}
}
DEBUGLOG(5, "ZSTDMT_createCompressionJob: posting job %u : %u bytes (end:%u, jobNb == %u (mod:%u))",
mtctx->nextJobID,
(U32)mtctx->jobs[jobID].src.size,
mtctx->jobs[jobID].lastJob,
mtctx->nextJobID,
jobID);
if (POOL_tryAdd(mtctx->factory, ZSTDMT_compressionJob, &mtctx->jobs[jobID])) {
mtctx->nextJobID++;
mtctx->jobReady = 0;
} else {
DEBUGLOG(5, "ZSTDMT_createCompressionJob: no worker available for job %u", mtctx->nextJobID);
mtctx->jobReady = 1;
}
return 0;
}
/*! ZSTDMT_flushProduced() :
* flush whatever data has been produced but not yet flushed in current job.
* move to next job if current one is fully flushed.
* `output` : `pos` will be updated with amount of data flushed .
* `blockToFlush` : if >0, the function will block and wait if there is no data available to flush .
* @return : amount of data remaining within internal buffer, 0 if no more, 1 if unknown but > 0, or an error code */
static size_t ZSTDMT_flushProduced(ZSTDMT_CCtx* mtctx, ZSTD_outBuffer* output, unsigned blockToFlush, ZSTD_EndDirective end)
{
unsigned const wJobID = mtctx->doneJobID & mtctx->jobIDMask;
DEBUGLOG(5, "ZSTDMT_flushProduced (blocking:%u , job %u <= %u)",
blockToFlush, mtctx->doneJobID, mtctx->nextJobID);
assert(output->size >= output->pos);
ZSTD_PTHREAD_MUTEX_LOCK(&mtctx->jobs[wJobID].job_mutex);
if ( blockToFlush
&& (mtctx->doneJobID < mtctx->nextJobID) ) {
assert(mtctx->jobs[wJobID].dstFlushed <= mtctx->jobs[wJobID].cSize);
while (mtctx->jobs[wJobID].dstFlushed == mtctx->jobs[wJobID].cSize) { /* nothing to flush */
if (mtctx->jobs[wJobID].consumed == mtctx->jobs[wJobID].src.size) {
DEBUGLOG(5, "job %u is completely consumed (%u == %u) => don't wait for cond, there will be none",
mtctx->doneJobID, (U32)mtctx->jobs[wJobID].consumed, (U32)mtctx->jobs[wJobID].src.size);
break;
}
DEBUGLOG(5, "waiting for something to flush from job %u (currently flushed: %u bytes)",
mtctx->doneJobID, (U32)mtctx->jobs[wJobID].dstFlushed);
ZSTD_pthread_cond_wait(&mtctx->jobs[wJobID].job_cond, &mtctx->jobs[wJobID].job_mutex); /* block when nothing to flush but some to come */
} }
/* try to flush something */
{ size_t cSize = mtctx->jobs[wJobID].cSize; /* shared */
size_t const srcConsumed = mtctx->jobs[wJobID].consumed; /* shared */
size_t const srcSize = mtctx->jobs[wJobID].src.size; /* read-only, could be done after mutex lock, but no-declaration-after-statement */
ZSTD_pthread_mutex_unlock(&mtctx->jobs[wJobID].job_mutex);
if (ZSTD_isError(cSize)) {
DEBUGLOG(5, "ZSTDMT_flushProduced: job %u : compression error detected : %s",
mtctx->doneJobID, ZSTD_getErrorName(cSize));
ZSTDMT_waitForAllJobsCompleted(mtctx);
ZSTDMT_releaseAllJobResources(mtctx);
return cSize;
}
/* add frame checksum if necessary (can only happen once) */
assert(srcConsumed <= srcSize);
if ( (srcConsumed == srcSize) /* job completed -> worker no longer active */
&& mtctx->jobs[wJobID].frameChecksumNeeded ) {
U32 const checksum = (U32)XXH64_digest(&mtctx->serial.xxhState);
DEBUGLOG(4, "ZSTDMT_flushProduced: writing checksum : %08X \n", checksum);
MEM_writeLE32((char*)mtctx->jobs[wJobID].dstBuff.start + mtctx->jobs[wJobID].cSize, checksum);
cSize += 4;
mtctx->jobs[wJobID].cSize += 4; /* can write this shared value, as worker is no longer active */
mtctx->jobs[wJobID].frameChecksumNeeded = 0;
}
if (cSize > 0) { /* compression is ongoing or completed */
size_t const toFlush = MIN(cSize - mtctx->jobs[wJobID].dstFlushed, output->size - output->pos);
DEBUGLOG(5, "ZSTDMT_flushProduced: Flushing %u bytes from job %u (completion:%u/%u, generated:%u)",
(U32)toFlush, mtctx->doneJobID, (U32)srcConsumed, (U32)srcSize, (U32)cSize);
assert(mtctx->doneJobID < mtctx->nextJobID);
assert(cSize >= mtctx->jobs[wJobID].dstFlushed);
assert(mtctx->jobs[wJobID].dstBuff.start != NULL);
if (toFlush > 0) {
ZSTD_memcpy((char*)output->dst + output->pos,
(const char*)mtctx->jobs[wJobID].dstBuff.start + mtctx->jobs[wJobID].dstFlushed,
toFlush);
}
output->pos += toFlush;
mtctx->jobs[wJobID].dstFlushed += toFlush; /* can write : this value is only used by mtctx */
if ( (srcConsumed == srcSize) /* job is completed */
&& (mtctx->jobs[wJobID].dstFlushed == cSize) ) { /* output buffer fully flushed => free this job position */
DEBUGLOG(5, "Job %u completed (%u bytes), moving to next one",
mtctx->doneJobID, (U32)mtctx->jobs[wJobID].dstFlushed);
ZSTDMT_releaseBuffer(mtctx->bufPool, mtctx->jobs[wJobID].dstBuff);
DEBUGLOG(5, "dstBuffer released");
mtctx->jobs[wJobID].dstBuff = g_nullBuffer;
mtctx->jobs[wJobID].cSize = 0; /* ensure this job slot is considered "not started" in future check */
mtctx->consumed += srcSize;
mtctx->produced += cSize;
mtctx->doneJobID++;
} }
/* return value : how many bytes left in buffer ; fake it to 1 when unknown but >0 */
if (cSize > mtctx->jobs[wJobID].dstFlushed) return (cSize - mtctx->jobs[wJobID].dstFlushed);
if (srcSize > srcConsumed) return 1; /* current job not completely compressed */
}
if (mtctx->doneJobID < mtctx->nextJobID) return 1; /* some more jobs ongoing */
if (mtctx->jobReady) return 1; /* one job is ready to push, just not yet in the list */
if (mtctx->inBuff.filled > 0) return 1; /* input is not empty, and still needs to be converted into a job */
mtctx->allJobsCompleted = mtctx->frameEnded; /* all jobs are entirely flushed => if this one is last one, frame is completed */
if (end == ZSTD_e_end) return !mtctx->frameEnded; /* for ZSTD_e_end, question becomes : is frame completed ? instead of : are internal buffers fully flushed ? */
return 0; /* internal buffers fully flushed */
}
/**
* Returns the range of data used by the earliest job that is not yet complete.
* If the data of the first job is broken up into two segments, we cover both
* sections.
*/
static Range ZSTDMT_getInputDataInUse(ZSTDMT_CCtx* mtctx)
{
unsigned const firstJobID = mtctx->doneJobID;
unsigned const lastJobID = mtctx->nextJobID;
unsigned jobID;
/* no need to check during first round */
size_t roundBuffCapacity = mtctx->roundBuff.capacity;
size_t nbJobs1stRoundMin = roundBuffCapacity / mtctx->targetSectionSize;
if (lastJobID < nbJobs1stRoundMin) return kNullRange;
for (jobID = firstJobID; jobID < lastJobID; ++jobID) {
unsigned const wJobID = jobID & mtctx->jobIDMask;
size_t consumed;
ZSTD_PTHREAD_MUTEX_LOCK(&mtctx->jobs[wJobID].job_mutex);
consumed = mtctx->jobs[wJobID].consumed;
ZSTD_pthread_mutex_unlock(&mtctx->jobs[wJobID].job_mutex);
if (consumed < mtctx->jobs[wJobID].src.size) {
Range range = mtctx->jobs[wJobID].prefix;
if (range.size == 0) {
/* Empty prefix */
range = mtctx->jobs[wJobID].src;
}
/* Job source in multiple segments not supported yet */
assert(range.start <= mtctx->jobs[wJobID].src.start);
return range;
}
}
return kNullRange;
}
/**
* Returns non-zero iff buffer and range overlap.
*/
static int ZSTDMT_isOverlapped(Buffer buffer, Range range)
{
BYTE const* const bufferStart = (BYTE const*)buffer.start;
BYTE const* const rangeStart = (BYTE const*)range.start;
if (rangeStart == NULL || bufferStart == NULL)
return 0;
{
BYTE const* const bufferEnd = bufferStart + buffer.capacity;
BYTE const* const rangeEnd = rangeStart + range.size;
/* Empty ranges cannot overlap */
if (bufferStart == bufferEnd || rangeStart == rangeEnd)
return 0;
return bufferStart < rangeEnd && rangeStart < bufferEnd;
}
}
static int ZSTDMT_doesOverlapWindow(Buffer buffer, ZSTD_window_t window)
{
Range extDict;
Range prefix;
DEBUGLOG(5, "ZSTDMT_doesOverlapWindow");
extDict.start = window.dictBase + window.lowLimit;
extDict.size = window.dictLimit - window.lowLimit;
prefix.start = window.base + window.dictLimit;
prefix.size = window.nextSrc - (window.base + window.dictLimit);
DEBUGLOG(5, "extDict [0x%zx, 0x%zx)",
(size_t)extDict.start,
(size_t)extDict.start + extDict.size);
DEBUGLOG(5, "prefix [0x%zx, 0x%zx)",
(size_t)prefix.start,
(size_t)prefix.start + prefix.size);
return ZSTDMT_isOverlapped(buffer, extDict)
|| ZSTDMT_isOverlapped(buffer, prefix);
}
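/* ZSTDMT_waitForLdmComplete() :
* when long-distance matching is enabled, block until the serial LDM window
* no longer overlaps `buffer`, so this region of the round buffer can be safely reused. */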
static void ZSTDMT_waitForLdmComplete(ZSTDMT_CCtx* mtctx, Buffer buffer)
{
if (mtctx->params.ldmParams.enableLdm == ZSTD_ps_enable) {
ZSTD_pthread_mutex_t* mutex = &mtctx->serial.ldmWindowMutex;
DEBUGLOG(5, "ZSTDMT_waitForLdmComplete");
DEBUGLOG(5, "source [0x%zx, 0x%zx)",
(size_t)buffer.start,
(size_t)buffer.start + buffer.capacity);
ZSTD_PTHREAD_MUTEX_LOCK(mutex);
while (ZSTDMT_doesOverlapWindow(buffer, mtctx->serial.ldmWindow)) {
DEBUGLOG(5, "Waiting for LDM to finish...");
ZSTD_pthread_cond_wait(&mtctx->serial.ldmWindowCond, mutex);
}
DEBUGLOG(6, "Done waiting for LDM to finish");
ZSTD_pthread_mutex_unlock(mutex);
}
}
/**
* Attempts to set the inBuff to the next section to fill.
* If any part of the new section is still in use we give up.
* Returns non-zero if the input range was successfully acquired.
*/
static int ZSTDMT_tryGetInputRange(ZSTDMT_CCtx* mtctx)
{
Range const inUse = ZSTDMT_getInputDataInUse(mtctx);
size_t const spaceLeft = mtctx->roundBuff.capacity - mtctx->roundBuff.pos;
size_t const spaceNeeded = mtctx->targetSectionSize;
Buffer buffer;
DEBUGLOG(5, "ZSTDMT_tryGetInputRange");
assert(mtctx->inBuff.buffer.start == NULL);
assert(mtctx->roundBuff.capacity >= spaceNeeded);
if (spaceLeft < spaceNeeded) {
/* ZSTD_invalidateRepCodes() doesn't work for extDict variants.
* Simply copy the prefix to the beginning in that case.
*/
BYTE* const start = (BYTE*)mtctx->roundBuff.buffer;
size_t const prefixSize = mtctx->inBuff.prefix.size;
buffer.start = start;
buffer.capacity = prefixSize;
if (ZSTDMT_isOverlapped(buffer, inUse)) {
DEBUGLOG(5, "Waiting for buffer...");
return 0;
}
ZSTDMT_waitForLdmComplete(mtctx, buffer);
ZSTD_memmove(start, mtctx->inBuff.prefix.start, prefixSize);
mtctx->inBuff.prefix.start = start;
mtctx->roundBuff.pos = prefixSize;
}
buffer.start = mtctx->roundBuff.buffer + mtctx->roundBuff.pos;
buffer.capacity = spaceNeeded;
if (ZSTDMT_isOverlapped(buffer, inUse)) {
DEBUGLOG(5, "Waiting for buffer...");
return 0;
}
assert(!ZSTDMT_isOverlapped(buffer, mtctx->inBuff.prefix));
ZSTDMT_waitForLdmComplete(mtctx, buffer);
DEBUGLOG(5, "Using prefix range [%zx, %zx)",
(size_t)mtctx->inBuff.prefix.start,
(size_t)mtctx->inBuff.prefix.start + mtctx->inBuff.prefix.size);
DEBUGLOG(5, "Using source range [%zx, %zx)",
(size_t)buffer.start,
(size_t)buffer.start + buffer.capacity);
mtctx->inBuff.buffer = buffer;
mtctx->inBuff.filled = 0;
assert(mtctx->roundBuff.pos + buffer.capacity <= mtctx->roundBuff.capacity);
return 1;
}
typedef struct {
size_t toLoad; /* The number of bytes to load from the input. */
int flush; /* Boolean declaring if we must flush because we found a synchronization point. */
} SyncPoint;
/**
* Searches through the input for a synchronization point. If one is found, we
* will instruct the caller to flush, and return the number of bytes to load.
* Otherwise, we will load as many bytes as possible and instruct the caller
* to continue as normal.
*/
static SyncPoint
findSynchronizationPoint(ZSTDMT_CCtx const* mtctx, ZSTD_inBuffer const input)
{
BYTE const* const istart = (BYTE const*)input.src + input.pos;
U64 const primePower = mtctx->rsync.primePower;
U64 const hitMask = mtctx->rsync.hitMask;
SyncPoint syncPoint;
U64 hash;
BYTE const* prev;
size_t pos;
syncPoint.toLoad = MIN(input.size - input.pos, mtctx->targetSectionSize - mtctx->inBuff.filled);
syncPoint.flush = 0;
if (!mtctx->params.rsyncable)
/* Rsync is disabled. */
return syncPoint;
if (mtctx->inBuff.filled + input.size - input.pos < RSYNC_MIN_BLOCK_SIZE)
/* We don't emit synchronization points if it would produce too small blocks.
* We don't have enough input to find a synchronization point, so don't look.
*/
return syncPoint;
if (mtctx->inBuff.filled + syncPoint.toLoad < RSYNC_LENGTH)
/* Not enough to compute the hash.
* We will miss any synchronization points in this RSYNC_LENGTH byte
* window. However, since it depends only on the internal buffers, if the
* state is already synchronized, we will remain synchronized.
* Additionally, the probability that we miss a synchronization point is
* low: RSYNC_LENGTH / targetSectionSize.
*/
return syncPoint;
/* Initialize the loop variables. */
if (mtctx->inBuff.filled < RSYNC_MIN_BLOCK_SIZE) {
/* We don't need to scan the first RSYNC_MIN_BLOCK_SIZE positions
* because they can't possibly be a sync point. So we can start
* part way through the input buffer.
*/
pos = RSYNC_MIN_BLOCK_SIZE - mtctx->inBuff.filled;
if (pos >= RSYNC_LENGTH) {
prev = istart + pos - RSYNC_LENGTH;
hash = ZSTD_rollingHash_compute(prev, RSYNC_LENGTH);
} else {
assert(mtctx->inBuff.filled >= RSYNC_LENGTH);
prev = (BYTE const*)mtctx->inBuff.buffer.start + mtctx->inBuff.filled - RSYNC_LENGTH;
hash = ZSTD_rollingHash_compute(prev + pos, (RSYNC_LENGTH - pos));
hash = ZSTD_rollingHash_append(hash, istart, pos);
}
} else {
/* We have enough bytes buffered to initialize the hash,
* and have processed enough bytes to find a sync point.
* Start scanning at the beginning of the input.
*/
assert(mtctx->inBuff.filled >= RSYNC_MIN_BLOCK_SIZE);
assert(RSYNC_MIN_BLOCK_SIZE >= RSYNC_LENGTH);
pos = 0;
prev = (BYTE const*)mtctx->inBuff.buffer.start + mtctx->inBuff.filled - RSYNC_LENGTH;
hash = ZSTD_rollingHash_compute(prev, RSYNC_LENGTH);
if ((hash & hitMask) == hitMask) {
/* We're already at a sync point so don't load any more until
* we're able to flush this sync point.
* This likely happened because the job table was full so we
* couldn't add our job.
*/
syncPoint.toLoad = 0;
syncPoint.flush = 1;
return syncPoint;
}
}
/* Starting with the hash of the previous RSYNC_LENGTH bytes, roll
* through the input. If we hit a synchronization point, then cut the
* job off, and tell the compressor to flush the job. Otherwise, load
* all the bytes and continue as normal.
* If we go too long without a synchronization point (targetSectionSize)
* then a block will be emitted anyways, but this is okay, since if we
* are already synchronized we will remain synchronized.
*/
assert(pos < RSYNC_LENGTH || ZSTD_rollingHash_compute(istart + pos - RSYNC_LENGTH, RSYNC_LENGTH) == hash);
for (; pos < syncPoint.toLoad; ++pos) {
BYTE const toRemove = pos < RSYNC_LENGTH ? prev[pos] : istart[pos - RSYNC_LENGTH];
/* This assert is very expensive, and Debian compiles with asserts enabled.
* So disable it for now. We can get similar coverage by checking it at the
* beginning & end of the loop.
* assert(pos < RSYNC_LENGTH || ZSTD_rollingHash_compute(istart + pos - RSYNC_LENGTH, RSYNC_LENGTH) == hash);
*/
hash = ZSTD_rollingHash_rotate(hash, toRemove, istart[pos], primePower);
assert(mtctx->inBuff.filled + pos >= RSYNC_MIN_BLOCK_SIZE);
if ((hash & hitMask) == hitMask) {
syncPoint.toLoad = pos + 1;
syncPoint.flush = 1;
++pos; /* for assert */
break;
}
}
assert(pos < RSYNC_LENGTH || ZSTD_rollingHash_compute(istart + pos - RSYNC_LENGTH, RSYNC_LENGTH) == hash);
return syncPoint;
}
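/* ZSTDMT_nextInputSizeHint() :
* @return : preferred amount of input for the next call,
* i.e. what remains to fill the current input section (a full section if it is empty). */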
size_t ZSTDMT_nextInputSizeHint(const ZSTDMT_CCtx* mtctx)
{
size_t hintInSize = mtctx->targetSectionSize - mtctx->inBuff.filled;
if (hintInSize==0) hintInSize = mtctx->targetSectionSize;
return hintInSize;
}
/** ZSTDMT_compressStream_generic() :
* internal use only - exposed to be invoked from zstd_compress.c
* assumption : output and input are valid (pos <= size)
* @return : minimum amount of data remaining to flush, 0 if none */
size_t ZSTDMT_compressStream_generic(ZSTDMT_CCtx* mtctx,
ZSTD_outBuffer* output,
ZSTD_inBuffer* input,
ZSTD_EndDirective endOp)
{
unsigned forwardInputProgress = 0;
DEBUGLOG(5, "ZSTDMT_compressStream_generic (endOp=%u, srcSize=%u)",
(U32)endOp, (U32)(input->size - input->pos));
assert(output->pos <= output->size);
assert(input->pos <= input->size);
if ((mtctx->frameEnded) && (endOp==ZSTD_e_continue)) {
/* current frame being ended. Only flush/end are allowed */
return ERROR(stage_wrong);
}
/* fill input buffer */
if ( (!mtctx->jobReady)
&& (input->size > input->pos) ) { /* support NULL input */
if (mtctx->inBuff.buffer.start == NULL) {
assert(mtctx->inBuff.filled == 0); /* Can't fill an empty buffer */
if (!ZSTDMT_tryGetInputRange(mtctx)) {
/* It is only possible for this operation to fail if there are
* still compression jobs ongoing.
*/
DEBUGLOG(5, "ZSTDMT_tryGetInputRange failed");
assert(mtctx->doneJobID != mtctx->nextJobID);
} else
DEBUGLOG(5, "ZSTDMT_tryGetInputRange completed successfully : mtctx->inBuff.buffer.start = %p", mtctx->inBuff.buffer.start);
}
if (mtctx->inBuff.buffer.start != NULL) {
SyncPoint const syncPoint = findSynchronizationPoint(mtctx, *input);
if (syncPoint.flush && endOp == ZSTD_e_continue) {
endOp = ZSTD_e_flush;
}
assert(mtctx->inBuff.buffer.capacity >= mtctx->targetSectionSize);
DEBUGLOG(5, "ZSTDMT_compressStream_generic: adding %u bytes on top of %u to buffer of size %u",
(U32)syncPoint.toLoad, (U32)mtctx->inBuff.filled, (U32)mtctx->targetSectionSize);
ZSTD_memcpy((char*)mtctx->inBuff.buffer.start + mtctx->inBuff.filled, (const char*)input->src + input->pos, syncPoint.toLoad);
input->pos += syncPoint.toLoad;
mtctx->inBuff.filled += syncPoint.toLoad;
forwardInputProgress = syncPoint.toLoad>0;
}
}
if ((input->pos < input->size) && (endOp == ZSTD_e_end)) {
/* Can't end yet because the input is not fully consumed.
* We are in one of these cases:
* - mtctx->inBuff is NULL & empty: we couldn't get an input buffer so don't create a new job.
* - We filled the input buffer: flush this job but don't end the frame.
* - We hit a synchronization point: flush this job but don't end the frame.
*/
assert(mtctx->inBuff.filled == 0 || mtctx->inBuff.filled == mtctx->targetSectionSize || mtctx->params.rsyncable);
endOp = ZSTD_e_flush;
}
if ( (mtctx->jobReady)
|| (mtctx->inBuff.filled >= mtctx->targetSectionSize) /* filled enough : let's compress */
|| ((endOp != ZSTD_e_continue) && (mtctx->inBuff.filled > 0)) /* something to flush : let's go */
|| ((endOp == ZSTD_e_end) && (!mtctx->frameEnded)) ) { /* must finish the frame with a zero-size block */
size_t const jobSize = mtctx->inBuff.filled;
assert(mtctx->inBuff.filled <= mtctx->targetSectionSize);
FORWARD_IF_ERROR( ZSTDMT_createCompressionJob(mtctx, jobSize, endOp) , "");
}
/* check for potential compressed data ready to be flushed */
{ size_t const remainingToFlush = ZSTDMT_flushProduced(mtctx, output, !forwardInputProgress, endOp); /* block if there was no forward input progress */
if (input->pos < input->size) return MAX(remainingToFlush, 1); /* input not consumed : do not end flush yet */
DEBUGLOG(5, "end of ZSTDMT_compressStream_generic: remainingToFlush = %u", (U32)remainingToFlush);
return remainingToFlush;
}
}
/**** ended inlining compress/zstdmt_compress.c ****/
#endif
/**** start inlining decompress/huf_decompress.c ****/
/* ******************************************************************
* huff0 huffman decoder,
* part of Finite State Entropy library
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* You can contact the author at :
* - FSE+HUF source repository : https://github.com/Cyan4973/FiniteStateEntropy
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
****************************************************************** */
/* **************************************************************
* Dependencies
****************************************************************/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/compiler.h ****/
/**** skipping file: ../common/bitstream.h ****/
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** skipping file: ../common/error_private.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: ../common/bits.h ****/
/* **************************************************************
* Constants
****************************************************************/
#define HUF_DECODER_FAST_TABLELOG 11
/* **************************************************************
* Macros
****************************************************************/
#ifdef HUF_DISABLE_FAST_DECODE
# define HUF_ENABLE_FAST_DECODE 0
#else
# define HUF_ENABLE_FAST_DECODE 1
#endif
/* These two optional macros each force the use of one of the two Huffman
* decompression implementations. They cannot both be defined at the same
* time.
*/
#if defined(HUF_FORCE_DECOMPRESS_X1) && \
defined(HUF_FORCE_DECOMPRESS_X2)
#error "Cannot force the use of the X1 and X2 decoders at the same time!"
#endif
/* When DYNAMIC_BMI2 is enabled, fast decoders are only called when bmi2 is
* supported at runtime, so we can add the BMI2 target attribute.
* When it is disabled, we will still get BMI2 if it is enabled statically.
*/
#if DYNAMIC_BMI2
# define HUF_FAST_BMI2_ATTRS BMI2_TARGET_ATTRIBUTE
#else
# define HUF_FAST_BMI2_ATTRS
#endif
#ifdef __cplusplus
# define HUF_EXTERN_C extern "C"
#else
# define HUF_EXTERN_C
#endif
#define HUF_ASM_DECL HUF_EXTERN_C
#if DYNAMIC_BMI2
# define HUF_NEED_BMI2_FUNCTION 1
#else
# define HUF_NEED_BMI2_FUNCTION 0
#endif
/* **************************************************************
* Error Management
****************************************************************/
#define HUF_isError ERR_isError
/* **************************************************************
* Byte alignment for workSpace management
****************************************************************/
#define HUF_ALIGN(x, a) HUF_ALIGN_MASK((x), (a) - 1)
#define HUF_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
/* **************************************************************
* BMI2 Variant Wrappers
****************************************************************/
typedef size_t (*HUF_DecompressUsingDTableFn)(void *dst, size_t dstSize,
const void *cSrc,
size_t cSrcSize,
const HUF_DTable *DTable);
#if DYNAMIC_BMI2
#define HUF_DGEN(fn) \
\
static size_t fn##_default( \
void* dst, size_t dstSize, \
const void* cSrc, size_t cSrcSize, \
const HUF_DTable* DTable) \
{ \
return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable); \
} \
\
static BMI2_TARGET_ATTRIBUTE size_t fn##_bmi2( \
void* dst, size_t dstSize, \
const void* cSrc, size_t cSrcSize, \
const HUF_DTable* DTable) \
{ \
return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable); \
} \
\
static size_t fn(void* dst, size_t dstSize, void const* cSrc, \
size_t cSrcSize, HUF_DTable const* DTable, int flags) \
{ \
if (flags & HUF_flags_bmi2) { \
return fn##_bmi2(dst, dstSize, cSrc, cSrcSize, DTable); \
} \
return fn##_default(dst, dstSize, cSrc, cSrcSize, DTable); \
}
#else
#define HUF_DGEN(fn) \
static size_t fn(void* dst, size_t dstSize, void const* cSrc, \
size_t cSrcSize, HUF_DTable const* DTable, int flags) \
{ \
(void)flags; \
return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable); \
}
#endif
/*-***************************/
/* generic DTableDesc */
/*-***************************/
typedef struct { BYTE maxTableLog; BYTE tableType; BYTE tableLog; BYTE reserved; } DTableDesc;
static DTableDesc HUF_getDTableDesc(const HUF_DTable* table)
{
DTableDesc dtd;
ZSTD_memcpy(&dtd, table, sizeof(dtd));
return dtd;
}
static size_t HUF_initFastDStream(BYTE const* ip) {
BYTE const lastByte = ip[7];
size_t const bitsConsumed = lastByte ? 8 - ZSTD_highbit32(lastByte) : 0;
size_t const value = MEM_readLEST(ip) | 1;
assert(bitsConsumed <= 8);
assert(sizeof(size_t) == 8);
return value << bitsConsumed;
}
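/* Worked example (illustrative): if the last byte of a stream is 0x20, its
 * highest set bit is bit 5 (the end-of-stream marker), so bitsConsumed = 8 - 5
 * = 3: the initial shift discards the two zero padding bits plus the marker.
 * The `| 1` plants a sentinel at bit 0, so that after any number of left
 * shifts, ZSTD_countTrailingZeros64() of the container equals the total number
 * of bits consumed so far.
 */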
/**
* The input/output arguments to the Huffman fast decoding loop:
*
* ip [in/out] - The input pointers, must be updated to reflect what is consumed.
* op [in/out] - The output pointers, must be updated to reflect what is written.
* bits [in/out] - The bitstream containers, must be updated to reflect the current state.
* dt [in] - The decoding table.
* ilowest [in] - The beginning of the valid range of the input. Decoders may read
* down to this pointer. It may be below iend[0].
* oend [in] - The end of the output stream. op[3] must not cross oend.
* iend [in] - The end of each input stream. ip[i] may cross iend[i],
* as long as it is above ilowest, but that indicates corruption.
*/
typedef struct {
BYTE const* ip[4];
BYTE* op[4];
U64 bits[4];
void const* dt;
BYTE const* ilowest;
BYTE* oend;
BYTE const* iend[4];
} HUF_DecompressFastArgs;
typedef void (*HUF_DecompressFastLoopFn)(HUF_DecompressFastArgs*);
/**
* Initializes args for the fast decoding loop.
* @returns 1 on success
* 0 if the fallback implementation should be used.
* Or an error code on failure.
*/
static size_t HUF_DecompressFastArgs_init(HUF_DecompressFastArgs* args, void* dst, size_t dstSize, void const* src, size_t srcSize, const HUF_DTable* DTable)
{
void const* dt = DTable + 1;
U32 const dtLog = HUF_getDTableDesc(DTable).tableLog;
const BYTE* const istart = (const BYTE*)src;
BYTE* const oend = ZSTD_maybeNullPtrAdd((BYTE*)dst, dstSize);
/* The fast decoding loop assumes 64-bit little-endian.
* This condition is false on x32.
*/
if (!MEM_isLittleEndian() || MEM_32bits())
return 0;
/* Avoid nullptr addition */
if (dstSize == 0)
return 0;
assert(dst != NULL);
/* strict minimum : jump table + 1 byte per stream */
if (srcSize < 10)
return ERROR(corruption_detected);
/* Must have at least 8 bytes per stream because we don't handle initializing smaller bit containers.
* If table log is not correct at this point, fallback to the old decoder.
* On small inputs we don't have enough data to trigger the fast loop, so use the old decoder.
*/
if (dtLog != HUF_DECODER_FAST_TABLELOG)
return 0;
/* Read the jump table. */
{
size_t const length1 = MEM_readLE16(istart);
size_t const length2 = MEM_readLE16(istart+2);
size_t const length3 = MEM_readLE16(istart+4);
size_t const length4 = srcSize - (length1 + length2 + length3 + 6);
args->iend[0] = istart + 6; /* jumpTable */
args->iend[1] = args->iend[0] + length1;
args->iend[2] = args->iend[1] + length2;
args->iend[3] = args->iend[2] + length3;
/* HUF_initFastDStream() requires this, and such a small input
* won't benefit from the ASM loop anyway.
*/
if (length1 < 8 || length2 < 8 || length3 < 8 || length4 < 8)
return 0;
if (length4 > srcSize) return ERROR(corruption_detected); /* overflow */
}
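/* Layout recap: a 4-stream compressed payload is
 * [ 6-byte jump table | stream1 | stream2 | stream3 | stream4 ],
 * where the jump table stores length1..3 as little-endian 16-bit values and
 * length4 is implied. For example (illustrative numbers), with srcSize = 100
 * and length1/2/3 = 20/25/30, length4 = 100 - (20+25+30+6) = 19.
 */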
/* ip[] contains the position that is currently loaded into bits[]. */
args->ip[0] = args->iend[1] - sizeof(U64);
args->ip[1] = args->iend[2] - sizeof(U64);
args->ip[2] = args->iend[3] - sizeof(U64);
args->ip[3] = (BYTE const*)src + srcSize - sizeof(U64);
/* op[] contains the output pointers. */
args->op[0] = (BYTE*)dst;
args->op[1] = args->op[0] + (dstSize+3)/4;
args->op[2] = args->op[1] + (dstSize+3)/4;
args->op[3] = args->op[2] + (dstSize+3)/4;
/* No point in calling the ASM loop for tiny outputs. */
if (args->op[3] >= oend)
return 0;
/* bits[] is the bit container.
* It is read from the MSB down to the LSB.
* It is shifted left as it is read, and zeros are
* shifted in. After the lowest valid bit a 1 is
* set, so that CountTrailingZeros(bits[]) can be used
* to count how many bits we've consumed.
*/
args->bits[0] = HUF_initFastDStream(args->ip[0]);
args->bits[1] = HUF_initFastDStream(args->ip[1]);
args->bits[2] = HUF_initFastDStream(args->ip[2]);
args->bits[3] = HUF_initFastDStream(args->ip[3]);
/* The decoders must be sure to never read beyond ilowest.
* This is lower than iend[0], but allowing decoders to read
* down to ilowest can allow an extra iteration or two in the
* fast loop.
*/
args->ilowest = istart;
args->oend = oend;
args->dt = dt;
return 1;
}
static size_t HUF_initRemainingDStream(BIT_DStream_t* bit, HUF_DecompressFastArgs const* args, int stream, BYTE* segmentEnd)
{
/* Validate that we haven't overwritten. */
if (args->op[stream] > segmentEnd)
return ERROR(corruption_detected);
/* Validate that we haven't read beyond iend[].
* Note that ip[] may be < iend[] because the MSB is
* the next bit to read, and we may have consumed 100%
* of the stream, so down to iend[i] - 8 is valid.
*/
if (args->ip[stream] < args->iend[stream] - 8)
return ERROR(corruption_detected);
/* Construct the BIT_DStream_t. */
assert(sizeof(size_t) == 8);
bit->bitContainer = MEM_readLEST(args->ip[stream]);
bit->bitsConsumed = ZSTD_countTrailingZeros64(args->bits[stream]);
bit->start = (const char*)args->ilowest;
bit->limitPtr = bit->start + sizeof(size_t);
bit->ptr = (const char*)args->ip[stream];
return 0;
}
/* Calls X(N) for each stream 0, 1, 2, 3. */
#define HUF_4X_FOR_EACH_STREAM(X) \
do { \
X(0); \
X(1); \
X(2); \
X(3); \
} while (0)
/* Calls X(N, var) for each stream 0, 1, 2, 3. */
#define HUF_4X_FOR_EACH_STREAM_WITH_VAR(X, var) \
do { \
X(0, (var)); \
X(1, (var)); \
X(2, (var)); \
X(3, (var)); \
} while (0)
#ifndef HUF_FORCE_DECOMPRESS_X2
/*-***************************/
/* single-symbol decoding */
/*-***************************/
typedef struct { BYTE nbBits; BYTE byte; } HUF_DEltX1; /* single-symbol decoding */
/**
* Packs 4 HUF_DEltX1 structs into a U64. This is used to lay down 4 entries at
* a time.
*/
static U64 HUF_DEltX1_set4(BYTE symbol, BYTE nbBits) {
U64 D4;
if (MEM_isLittleEndian()) {
D4 = (U64)((symbol << 8) + nbBits);
} else {
D4 = (U64)(symbol + (nbBits << 8));
}
assert(D4 < (1U << 16));
D4 *= 0x0001000100010001ULL;
return D4;
}
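/* Example (illustrative): symbol = 0x41, nbBits = 3 on a little-endian machine
 * gives D4 = 0x4103 * 0x0001000100010001 = 0x4103410341034103. Each 16-bit
 * lane stored to memory yields bytes { 0x03, 0x41 }, i.e. { nbBits, byte },
 * matching the HUF_DEltX1 layout.
 */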
/**
* Increases the tableLog to targetTableLog and rescales the stats.
* If tableLog > targetTableLog this is a no-op.
* @returns New tableLog
*/
static U32 HUF_rescaleStats(BYTE* huffWeight, U32* rankVal, U32 nbSymbols, U32 tableLog, U32 targetTableLog)
{
if (tableLog > targetTableLog)
return tableLog;
if (tableLog < targetTableLog) {
U32 const scale = targetTableLog - tableLog;
U32 s;
/* Increase the weight for all non-zero probability symbols by scale. */
for (s = 0; s < nbSymbols; ++s) {
huffWeight[s] += (BYTE)((huffWeight[s] == 0) ? 0 : scale);
}
/* Update rankVal to reflect the new weights.
* All weights except 0 get moved to weight + scale.
* Weights [1, scale] are empty.
*/
for (s = targetTableLog; s > scale; --s) {
rankVal[s] = rankVal[s - scale];
}
for (s = scale; s > 0; --s) {
rankVal[s] = 0;
}
}
return targetTableLog;
}
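/* Example (illustrative): tableLog = 6, targetTableLog = 11 gives scale = 5.
 * Every non-zero weight w becomes w + 5 (1 -> 6, ..., 6 -> 11), rankVal[6..11]
 * take the old values of rankVal[1..6], and rankVal[1..5] become 0.
 */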
typedef struct {
U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1];
U32 rankStart[HUF_TABLELOG_ABSOLUTEMAX + 1];
U32 statsWksp[HUF_READ_STATS_WORKSPACE_SIZE_U32];
BYTE symbols[HUF_SYMBOLVALUE_MAX + 1];
BYTE huffWeight[HUF_SYMBOLVALUE_MAX + 1];
} HUF_ReadDTableX1_Workspace;
size_t HUF_readDTableX1_wksp(HUF_DTable* DTable, const void* src, size_t srcSize, void* workSpace, size_t wkspSize, int flags)
{
U32 tableLog = 0;
U32 nbSymbols = 0;
size_t iSize;
void* const dtPtr = DTable + 1;
HUF_DEltX1* const dt = (HUF_DEltX1*)dtPtr;
HUF_ReadDTableX1_Workspace* wksp = (HUF_ReadDTableX1_Workspace*)workSpace;
DEBUG_STATIC_ASSERT(HUF_DECOMPRESS_WORKSPACE_SIZE >= sizeof(*wksp));
if (sizeof(*wksp) > wkspSize) return ERROR(tableLog_tooLarge);
DEBUG_STATIC_ASSERT(sizeof(DTableDesc) == sizeof(HUF_DTable));
/* ZSTD_memset(huffWeight, 0, sizeof(huffWeight)); */ /* is not necessary, even though some analyzers complain ... */
iSize = HUF_readStats_wksp(wksp->huffWeight, HUF_SYMBOLVALUE_MAX + 1, wksp->rankVal, &nbSymbols, &tableLog, src, srcSize, wksp->statsWksp, sizeof(wksp->statsWksp), flags);
if (HUF_isError(iSize)) return iSize;
/* Table header */
{ DTableDesc dtd = HUF_getDTableDesc(DTable);
U32 const maxTableLog = dtd.maxTableLog + 1;
U32 const targetTableLog = MIN(maxTableLog, HUF_DECODER_FAST_TABLELOG);
tableLog = HUF_rescaleStats(wksp->huffWeight, wksp->rankVal, nbSymbols, tableLog, targetTableLog);
if (tableLog > (U32)(dtd.maxTableLog+1)) return ERROR(tableLog_tooLarge); /* DTable too small, Huffman tree cannot fit in */
dtd.tableType = 0;
dtd.tableLog = (BYTE)tableLog;
ZSTD_memcpy(DTable, &dtd, sizeof(dtd));
}
/* Compute symbols and rankStart given rankVal:
*
* rankVal already contains the number of values of each weight.
*
* symbols contains the symbols ordered by weight. First are the rankVal[0]
* weight 0 symbols, followed by the rankVal[1] weight 1 symbols, and so on.
* symbols[0] is filled (but unused) to avoid a branch.
*
* rankStart contains the offset where each rank belongs in the DTable.
* rankStart[0] is not filled because there are no entries in the table for
* weight 0.
*/
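/* Small illustration (hypothetical weights): with 4 symbols of weights
 * {2, 1, 2, 0}, rankVal = {1, 1, 2, ...} and rankStart = {0, 1, 2, ...}, so
 * symbols ends up as {3, 1, 0, 2}: the weight-0 symbol first, then the
 * weight-1 symbol, then the weight-2 symbols in increasing symbol order.
 */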
{ int n;
U32 nextRankStart = 0;
int const unroll = 4;
int const nLimit = (int)nbSymbols - unroll + 1;
for (n=0; n<(int)tableLog+1; n++) {
U32 const curr = nextRankStart;
nextRankStart += wksp->rankVal[n];
wksp->rankStart[n] = curr;
}
for (n=0; n < nLimit; n += unroll) {
int u;
for (u=0; u < unroll; ++u) {
size_t const w = wksp->huffWeight[n+u];
wksp->symbols[wksp->rankStart[w]++] = (BYTE)(n+u);
}
}
for (; n < (int)nbSymbols; ++n) {
size_t const w = wksp->huffWeight[n];
wksp->symbols[wksp->rankStart[w]++] = (BYTE)n;
}
}
/* fill DTable
* We fill all entries of each weight in order.
* That way length is a constant for each iteration of the outer loop.
* We can switch based on the length to a different inner loop which is
* optimized for that particular case.
*/
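/* For weight w, each symbol occupies length = 2^(w-1) consecutive DTable
 * entries and decodes with nbBits = tableLog + 1 - w bits. E.g. with
 * tableLog = 11: w = 1 -> 1 entry of 11 bits, w = 4 -> 8 entries of 8 bits,
 * w = 11 -> 1024 entries of 1 bit.
 */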
{ U32 w;
int symbol = wksp->rankVal[0];
int rankStart = 0;
for (w=1; w<tableLog+1; ++w) {
int const symbolCount = wksp->rankVal[w];
int const length = (1 << w) >> 1;
int uStart = rankStart;
BYTE const nbBits = (BYTE)(tableLog + 1 - w);
int s;
int u;
switch (length) {
case 1:
for (s=0; s<symbolCount; ++s) {
HUF_DEltX1 D;
D.byte = wksp->symbols[symbol + s];
D.nbBits = nbBits;
dt[uStart] = D;
uStart += 1;
}
break;
case 2:
for (s=0; s<symbolCount; ++s) {
HUF_DEltX1 D;
D.byte = wksp->symbols[symbol + s];
D.nbBits = nbBits;
dt[uStart+0] = D;
dt[uStart+1] = D;
uStart += 2;
}
break;
case 4:
for (s=0; s<symbolCount; ++s) {
U64 const D4 = HUF_DEltX1_set4(wksp->symbols[symbol + s], nbBits);
MEM_write64(dt + uStart, D4);
uStart += 4;
}
break;
case 8:
for (s=0; s<symbolCount; ++s) {
U64 const D4 = HUF_DEltX1_set4(wksp->symbols[symbol + s], nbBits);
MEM_write64(dt + uStart, D4);
MEM_write64(dt + uStart + 4, D4);
uStart += 8;
}
break;
default:
for (s=0; s<symbolCount; ++s) {
U64 const D4 = HUF_DEltX1_set4(wksp->symbols[symbol + s], nbBits);
for (u=0; u < length; u += 16) {
MEM_write64(dt + uStart + u + 0, D4);
MEM_write64(dt + uStart + u + 4, D4);
MEM_write64(dt + uStart + u + 8, D4);
MEM_write64(dt + uStart + u + 12, D4);
}
assert(u == length);
uStart += length;
}
break;
}
symbol += symbolCount;
rankStart += symbolCount * length;
}
}
return iSize;
}
FORCE_INLINE_TEMPLATE BYTE
HUF_decodeSymbolX1(BIT_DStream_t* Dstream, const HUF_DEltX1* dt, const U32 dtLog)
{
size_t const val = BIT_lookBitsFast(Dstream, dtLog); /* note : dtLog >= 1 */
BYTE const c = dt[val].byte;
BIT_skipBits(Dstream, dt[val].nbBits);
return c;
}
#define HUF_DECODE_SYMBOLX1_0(ptr, DStreamPtr) \
do { *ptr++ = HUF_decodeSymbolX1(DStreamPtr, dt, dtLog); } while (0)
#define HUF_DECODE_SYMBOLX1_1(ptr, DStreamPtr) \
do { \
if (MEM_64bits() || (HUF_TABLELOG_MAX<=12)) \
HUF_DECODE_SYMBOLX1_0(ptr, DStreamPtr); \
} while (0)
#define HUF_DECODE_SYMBOLX1_2(ptr, DStreamPtr) \
do { \
if (MEM_64bits()) \
HUF_DECODE_SYMBOLX1_0(ptr, DStreamPtr); \
} while (0)
HINT_INLINE size_t
HUF_decodeStreamX1(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const HUF_DEltX1* const dt, const U32 dtLog)
{
BYTE* const pStart = p;
/* up to 4 symbols at a time */
if ((pEnd - p) > 3) {
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p < pEnd-3)) {
HUF_DECODE_SYMBOLX1_2(p, bitDPtr);
HUF_DECODE_SYMBOLX1_1(p, bitDPtr);
HUF_DECODE_SYMBOLX1_2(p, bitDPtr);
HUF_DECODE_SYMBOLX1_0(p, bitDPtr);
}
} else {
BIT_reloadDStream(bitDPtr);
}
/* [0-3] symbols remaining */
if (MEM_32bits())
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p < pEnd))
HUF_DECODE_SYMBOLX1_0(p, bitDPtr);
/* no more data to retrieve from bitstream, no need to reload */
while (p < pEnd)
HUF_DECODE_SYMBOLX1_0(p, bitDPtr);
return (size_t)(pEnd-pStart);
}
FORCE_INLINE_TEMPLATE size_t
HUF_decompress1X1_usingDTable_internal_body(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
{
BYTE* op = (BYTE*)dst;
BYTE* const oend = ZSTD_maybeNullPtrAdd(op, dstSize);
const void* dtPtr = DTable + 1;
const HUF_DEltX1* const dt = (const HUF_DEltX1*)dtPtr;
BIT_DStream_t bitD;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
U32 const dtLog = dtd.tableLog;
CHECK_F( BIT_initDStream(&bitD, cSrc, cSrcSize) );
HUF_decodeStreamX1(op, &bitD, oend, dt, dtLog);
if (!BIT_endOfDStream(&bitD)) return ERROR(corruption_detected);
return dstSize;
}
/* HUF_decompress4X1_usingDTable_internal_body():
* Conditions :
* @dstSize >= 6
*/
FORCE_INLINE_TEMPLATE size_t
HUF_decompress4X1_usingDTable_internal_body(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
{
/* Check */
if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */
if (dstSize < 6) return ERROR(corruption_detected); /* stream 4-split doesn't work */
{ const BYTE* const istart = (const BYTE*) cSrc;
BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ostart + dstSize;
BYTE* const olimit = oend - 3;
const void* const dtPtr = DTable + 1;
const HUF_DEltX1* const dt = (const HUF_DEltX1*)dtPtr;
/* Init */
BIT_DStream_t bitD1;
BIT_DStream_t bitD2;
BIT_DStream_t bitD3;
BIT_DStream_t bitD4;
size_t const length1 = MEM_readLE16(istart);
size_t const length2 = MEM_readLE16(istart+2);
size_t const length3 = MEM_readLE16(istart+4);
size_t const length4 = cSrcSize - (length1 + length2 + length3 + 6);
const BYTE* const istart1 = istart + 6; /* jumpTable */
const BYTE* const istart2 = istart1 + length1;
const BYTE* const istart3 = istart2 + length2;
const BYTE* const istart4 = istart3 + length3;
const size_t segmentSize = (dstSize+3) / 4;
BYTE* const opStart2 = ostart + segmentSize;
BYTE* const opStart3 = opStart2 + segmentSize;
BYTE* const opStart4 = opStart3 + segmentSize;
BYTE* op1 = ostart;
BYTE* op2 = opStart2;
BYTE* op3 = opStart3;
BYTE* op4 = opStart4;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
U32 const dtLog = dtd.tableLog;
U32 endSignal = 1;
if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */
if (opStart4 > oend) return ERROR(corruption_detected); /* overflow */
assert(dstSize >= 6); /* validated above */
CHECK_F( BIT_initDStream(&bitD1, istart1, length1) );
CHECK_F( BIT_initDStream(&bitD2, istart2, length2) );
CHECK_F( BIT_initDStream(&bitD3, istart3, length3) );
CHECK_F( BIT_initDStream(&bitD4, istart4, length4) );
/* up to 16 symbols per loop (4 symbols per stream) in 64-bit mode */
if ((size_t)(oend - op4) >= sizeof(size_t)) {
for ( ; (endSignal) & (op4 < olimit) ; ) {
HUF_DECODE_SYMBOLX1_2(op1, &bitD1);
HUF_DECODE_SYMBOLX1_2(op2, &bitD2);
HUF_DECODE_SYMBOLX1_2(op3, &bitD3);
HUF_DECODE_SYMBOLX1_2(op4, &bitD4);
HUF_DECODE_SYMBOLX1_1(op1, &bitD1);
HUF_DECODE_SYMBOLX1_1(op2, &bitD2);
HUF_DECODE_SYMBOLX1_1(op3, &bitD3);
HUF_DECODE_SYMBOLX1_1(op4, &bitD4);
HUF_DECODE_SYMBOLX1_2(op1, &bitD1);
HUF_DECODE_SYMBOLX1_2(op2, &bitD2);
HUF_DECODE_SYMBOLX1_2(op3, &bitD3);
HUF_DECODE_SYMBOLX1_2(op4, &bitD4);
HUF_DECODE_SYMBOLX1_0(op1, &bitD1);
HUF_DECODE_SYMBOLX1_0(op2, &bitD2);
HUF_DECODE_SYMBOLX1_0(op3, &bitD3);
HUF_DECODE_SYMBOLX1_0(op4, &bitD4);
endSignal &= BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished;
endSignal &= BIT_reloadDStreamFast(&bitD2) == BIT_DStream_unfinished;
endSignal &= BIT_reloadDStreamFast(&bitD3) == BIT_DStream_unfinished;
endSignal &= BIT_reloadDStreamFast(&bitD4) == BIT_DStream_unfinished;
}
}
/* check corruption */
/* note : should not be necessary : op# advance in lock step, and we control op4.
* but curiously, binary generated by gcc 7.2 & 7.3 with -mbmi2 runs faster when >=1 test is present */
if (op1 > opStart2) return ERROR(corruption_detected);
if (op2 > opStart3) return ERROR(corruption_detected);
if (op3 > opStart4) return ERROR(corruption_detected);
/* note : op4 supposed already verified within main loop */
/* finish bitStreams one by one */
HUF_decodeStreamX1(op1, &bitD1, opStart2, dt, dtLog);
HUF_decodeStreamX1(op2, &bitD2, opStart3, dt, dtLog);
HUF_decodeStreamX1(op3, &bitD3, opStart4, dt, dtLog);
HUF_decodeStreamX1(op4, &bitD4, oend, dt, dtLog);
/* check */
{ U32 const endCheck = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
if (!endCheck) return ERROR(corruption_detected); }
/* decoded size */
return dstSize;
}
}
#if HUF_NEED_BMI2_FUNCTION
static BMI2_TARGET_ATTRIBUTE
size_t HUF_decompress4X1_usingDTable_internal_bmi2(void* dst, size_t dstSize, void const* cSrc,
size_t cSrcSize, HUF_DTable const* DTable) {
return HUF_decompress4X1_usingDTable_internal_body(dst, dstSize, cSrc, cSrcSize, DTable);
}
#endif
static
size_t HUF_decompress4X1_usingDTable_internal_default(void* dst, size_t dstSize, void const* cSrc,
size_t cSrcSize, HUF_DTable const* DTable) {
return HUF_decompress4X1_usingDTable_internal_body(dst, dstSize, cSrc, cSrcSize, DTable);
}
#if ZSTD_ENABLE_ASM_X86_64_BMI2
HUF_ASM_DECL void HUF_decompress4X1_usingDTable_internal_fast_asm_loop(HUF_DecompressFastArgs* args) ZSTDLIB_HIDDEN;
#endif
static HUF_FAST_BMI2_ATTRS
void HUF_decompress4X1_usingDTable_internal_fast_c_loop(HUF_DecompressFastArgs* args)
{
U64 bits[4];
BYTE const* ip[4];
BYTE* op[4];
U16 const* const dtable = (U16 const*)args->dt;
BYTE* const oend = args->oend;
BYTE const* const ilowest = args->ilowest;
/* Copy the arguments to local variables */
ZSTD_memcpy(&bits, &args->bits, sizeof(bits));
ZSTD_memcpy((void*)(&ip), &args->ip, sizeof(ip));
ZSTD_memcpy(&op, &args->op, sizeof(op));
assert(MEM_isLittleEndian());
assert(!MEM_32bits());
for (;;) {
BYTE* olimit;
int stream;
/* Assert loop preconditions */
#ifndef NDEBUG
for (stream = 0; stream < 4; ++stream) {
assert(op[stream] <= (stream == 3 ? oend : op[stream + 1]));
assert(ip[stream] >= ilowest);
}
#endif
/* Compute olimit */
{
/* Each iteration produces 5 output symbols per stream */
size_t const oiters = (size_t)(oend - op[3]) / 5;
/* Each iteration consumes up to 11 bits * 5 = 55 bits < 7 bytes
* per stream.
*/
size_t const iiters = (size_t)(ip[0] - ilowest) / 7;
/* We can safely run iters iterations before running bounds checks */
size_t const iters = MIN(oiters, iiters);
size_t const symbols = iters * 5;
/* We can simply check that op[3] < olimit, instead of checking all
* of our bounds, since we can't hit the other bounds until we've run
* iters iterations, which only happens when op[3] == olimit.
*/
olimit = op[3] + symbols;
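/* Illustrative arithmetic: with 1000 output bytes left for stream 3 and
 * 700 input bytes above ilowest for stream 0, oiters = 200 and iiters = 100,
 * so iters = 100 and olimit = op[3] + 500 symbols.
 */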
/* Exit fast decoding loop once we reach the end. */
if (op[3] == olimit)
break;
/* Exit the decoding loop if any input pointer has crossed the
* previous one. This indicates corruption, and a precondition
* to our loop is that ip[i] >= ip[0].
*/
for (stream = 1; stream < 4; ++stream) {
if (ip[stream] < ip[stream - 1])
goto _out;
}
}
#ifndef NDEBUG
for (stream = 1; stream < 4; ++stream) {
assert(ip[stream] >= ip[stream - 1]);
}
#endif
#define HUF_4X1_DECODE_SYMBOL(_stream, _symbol) \
do { \
int const index = (int)(bits[(_stream)] >> 53); \
int const entry = (int)dtable[index]; \
bits[(_stream)] <<= (entry & 0x3F); \
op[(_stream)][(_symbol)] = (BYTE)((entry >> 8) & 0xFF); \
} while (0)
#define HUF_4X1_RELOAD_STREAM(_stream) \
do { \
int const ctz = ZSTD_countTrailingZeros64(bits[(_stream)]); \
int const nbBits = ctz & 7; \
int const nbBytes = ctz >> 3; \
op[(_stream)] += 5; \
ip[(_stream)] -= nbBytes; \
bits[(_stream)] = MEM_read64(ip[(_stream)]) | 1; \
bits[(_stream)] <<= nbBits; \
} while (0)
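/* Reading the macros above: dtable entries are HUF_DEltX1 viewed as U16, so on
 * little-endian (entry & 0x3F) is the code length and ((entry >> 8) & 0xFF) the
 * decoded byte. The index is the top HUF_DECODER_FAST_TABLELOG (11) bits of the
 * 64-bit container, hence the shift by 53. On reload, the count of trailing
 * zeros locates the sentinel bit: (ctz >> 3) whole bytes and (ctz & 7) leftover
 * bits have been consumed since the last reload.
 */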
/* Manually unroll the loop because compilers don't consistently
* unroll the inner loops, which destroys performance.
*/
do {
/* Decode 5 symbols in each of the 4 streams */
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X1_DECODE_SYMBOL, 0);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X1_DECODE_SYMBOL, 1);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X1_DECODE_SYMBOL, 2);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X1_DECODE_SYMBOL, 3);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X1_DECODE_SYMBOL, 4);
/* Reload each of the 4 bitstreams */
HUF_4X_FOR_EACH_STREAM(HUF_4X1_RELOAD_STREAM);
} while (op[3] < olimit);
#undef HUF_4X1_DECODE_SYMBOL
#undef HUF_4X1_RELOAD_STREAM
}
_out:
/* Save the final values of each of the state variables back to args. */
ZSTD_memcpy(&args->bits, &bits, sizeof(bits));
ZSTD_memcpy((void*)(&args->ip), &ip, sizeof(ip));
ZSTD_memcpy(&args->op, &op, sizeof(op));
}
/**
* @returns @p dstSize on success (>= 6)
* 0 if the fallback implementation should be used
* An error if an error occurred
*/
static HUF_FAST_BMI2_ATTRS
size_t
HUF_decompress4X1_usingDTable_internal_fast(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable,
HUF_DecompressFastLoopFn loopFn)
{
void const* dt = DTable + 1;
BYTE const* const ilowest = (BYTE const*)cSrc;
BYTE* const oend = ZSTD_maybeNullPtrAdd((BYTE*)dst, dstSize);
HUF_DecompressFastArgs args;
{ size_t const ret = HUF_DecompressFastArgs_init(&args, dst, dstSize, cSrc, cSrcSize, DTable);
FORWARD_IF_ERROR(ret, "Failed to init fast loop args");
if (ret == 0)
return 0;
}
assert(args.ip[0] >= args.ilowest);
loopFn(&args);
/* Our loop guarantees that ip[] >= ilowest and that we haven't
* overwritten any op[].
*/
assert(args.ip[0] >= ilowest);
assert(args.ip[1] >= ilowest);
assert(args.ip[2] >= ilowest);
assert(args.ip[3] >= ilowest);
assert(args.op[3] <= oend);
assert(ilowest == args.ilowest);
assert(ilowest + 6 == args.iend[0]);
(void)ilowest;
/* finish bit streams one by one. */
{ size_t const segmentSize = (dstSize+3) / 4;
BYTE* segmentEnd = (BYTE*)dst;
int i;
for (i = 0; i < 4; ++i) {
BIT_DStream_t bit;
if (segmentSize <= (size_t)(oend - segmentEnd))
segmentEnd += segmentSize;
else
segmentEnd = oend;
FORWARD_IF_ERROR(HUF_initRemainingDStream(&bit, &args, i, segmentEnd), "corruption");
/* Decompress and validate that we've produced exactly the expected length. */
args.op[i] += HUF_decodeStreamX1(args.op[i], &bit, segmentEnd, (HUF_DEltX1 const*)dt, HUF_DECODER_FAST_TABLELOG);
if (args.op[i] != segmentEnd) return ERROR(corruption_detected);
}
}
/* decoded size */
assert(dstSize != 0);
return dstSize;
}
HUF_DGEN(HUF_decompress1X1_usingDTable_internal)
static size_t HUF_decompress4X1_usingDTable_internal(void* dst, size_t dstSize, void const* cSrc,
size_t cSrcSize, HUF_DTable const* DTable, int flags)
{
HUF_DecompressUsingDTableFn fallbackFn = HUF_decompress4X1_usingDTable_internal_default;
HUF_DecompressFastLoopFn loopFn = HUF_decompress4X1_usingDTable_internal_fast_c_loop;
#if DYNAMIC_BMI2
if (flags & HUF_flags_bmi2) {
fallbackFn = HUF_decompress4X1_usingDTable_internal_bmi2;
# if ZSTD_ENABLE_ASM_X86_64_BMI2
if (!(flags & HUF_flags_disableAsm)) {
loopFn = HUF_decompress4X1_usingDTable_internal_fast_asm_loop;
}
# endif
} else {
return fallbackFn(dst, dstSize, cSrc, cSrcSize, DTable);
}
#endif
#if ZSTD_ENABLE_ASM_X86_64_BMI2 && defined(__BMI2__)
if (!(flags & HUF_flags_disableAsm)) {
loopFn = HUF_decompress4X1_usingDTable_internal_fast_asm_loop;
}
#endif
if (HUF_ENABLE_FAST_DECODE && !(flags & HUF_flags_disableFast)) {
size_t const ret = HUF_decompress4X1_usingDTable_internal_fast(dst, dstSize, cSrc, cSrcSize, DTable, loopFn);
if (ret != 0)
return ret;
}
return fallbackFn(dst, dstSize, cSrc, cSrcSize, DTable);
}
static size_t HUF_decompress4X1_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
void* workSpace, size_t wkspSize, int flags)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t const hSize = HUF_readDTableX1_wksp(dctx, cSrc, cSrcSize, workSpace, wkspSize, flags);
if (HUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress4X1_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx, flags);
}
#endif /* HUF_FORCE_DECOMPRESS_X2 */
#ifndef HUF_FORCE_DECOMPRESS_X1
/* *************************/
/* double-symbols decoding */
/* *************************/
typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUF_DEltX2; /* double-symbols decoding */
typedef struct { BYTE symbol; } sortedSymbol_t;
typedef U32 rankValCol_t[HUF_TABLELOG_MAX + 1];
typedef rankValCol_t rankVal_t[HUF_TABLELOG_MAX];
/**
* Constructs a HUF_DEltX2 in a U32.
*/
static U32 HUF_buildDEltX2U32(U32 symbol, U32 nbBits, U32 baseSeq, int level)
{
U32 seq;
DEBUG_STATIC_ASSERT(offsetof(HUF_DEltX2, sequence) == 0);
DEBUG_STATIC_ASSERT(offsetof(HUF_DEltX2, nbBits) == 2);
DEBUG_STATIC_ASSERT(offsetof(HUF_DEltX2, length) == 3);
DEBUG_STATIC_ASSERT(sizeof(HUF_DEltX2) == sizeof(U32));
if (MEM_isLittleEndian()) {
seq = level == 1 ? symbol : (baseSeq + (symbol << 8));
return seq + (nbBits << 16) + ((U32)level << 24);
} else {
seq = level == 1 ? (symbol << 8) : ((baseSeq << 8) + symbol);
return (seq << 16) + (nbBits << 8) + (U32)level;
}
}
/**
* Constructs a HUF_DEltX2.
*/
static HUF_DEltX2 HUF_buildDEltX2(U32 symbol, U32 nbBits, U32 baseSeq, int level)
{
HUF_DEltX2 DElt;
U32 const val = HUF_buildDEltX2U32(symbol, nbBits, baseSeq, level);
DEBUG_STATIC_ASSERT(sizeof(DElt) == sizeof(val));
ZSTD_memcpy(&DElt, &val, sizeof(val));
return DElt;
}
/**
* Constructs 2 HUF_DEltX2s and packs them into a U64.
*/
static U64 HUF_buildDEltX2U64(U32 symbol, U32 nbBits, U16 baseSeq, int level)
{
U32 DElt = HUF_buildDEltX2U32(symbol, nbBits, baseSeq, level);
return (U64)DElt + ((U64)DElt << 32);
}
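/* Example (illustrative, little-endian, level 2): baseSeq = 'a', symbol = 'b',
 * nbBits = 9 gives the U32 0x02096261, i.e. bytes { 'a', 'b', 9, 2 } in memory:
 * the sequence field writes "ab", 9 bits are consumed, and 2 output bytes are
 * produced. The U64 variant simply repeats this value in both halves.
 */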
/**
* Fills the DTable rank with all the symbols from [begin, end) that are each
* nbBits long.
*
* @param DTableRank The start of the rank in the DTable.
* @param begin The first symbol to fill (inclusive).
* @param end The last symbol to fill (exclusive).
* @param nbBits Each symbol is nbBits long.
* @param tableLog The table log.
* @param baseSeq If level == 1 { 0 } else { the first level symbol }
* @param level The level in the table. Must be 1 or 2.
*/
static void HUF_fillDTableX2ForWeight(
HUF_DEltX2* DTableRank,
sortedSymbol_t const* begin, sortedSymbol_t const* end,
U32 nbBits, U32 tableLog,
U16 baseSeq, int const level)
{
U32 const length = 1U << ((tableLog - nbBits) & 0x1F /* quiet static-analyzer */);
const sortedSymbol_t* ptr;
assert(level >= 1 && level <= 2);
switch (length) {
case 1:
for (ptr = begin; ptr != end; ++ptr) {
HUF_DEltX2 const DElt = HUF_buildDEltX2(ptr->symbol, nbBits, baseSeq, level);
*DTableRank++ = DElt;
}
break;
case 2:
for (ptr = begin; ptr != end; ++ptr) {
HUF_DEltX2 const DElt = HUF_buildDEltX2(ptr->symbol, nbBits, baseSeq, level);
DTableRank[0] = DElt;
DTableRank[1] = DElt;
DTableRank += 2;
}
break;
case 4:
for (ptr = begin; ptr != end; ++ptr) {
U64 const DEltX2 = HUF_buildDEltX2U64(ptr->symbol, nbBits, baseSeq, level);
ZSTD_memcpy(DTableRank + 0, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 2, &DEltX2, sizeof(DEltX2));
DTableRank += 4;
}
break;
case 8:
for (ptr = begin; ptr != end; ++ptr) {
U64 const DEltX2 = HUF_buildDEltX2U64(ptr->symbol, nbBits, baseSeq, level);
ZSTD_memcpy(DTableRank + 0, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 2, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 4, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 6, &DEltX2, sizeof(DEltX2));
DTableRank += 8;
}
break;
default:
for (ptr = begin; ptr != end; ++ptr) {
U64 const DEltX2 = HUF_buildDEltX2U64(ptr->symbol, nbBits, baseSeq, level);
HUF_DEltX2* const DTableRankEnd = DTableRank + length;
for (; DTableRank != DTableRankEnd; DTableRank += 8) {
ZSTD_memcpy(DTableRank + 0, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 2, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 4, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTableRank + 6, &DEltX2, sizeof(DEltX2));
}
}
break;
}
}
/* HUF_fillDTableX2Level2() :
* `rankValOrigin` must be a table of at least (HUF_TABLELOG_MAX + 1) U32 */
static void HUF_fillDTableX2Level2(HUF_DEltX2* DTable, U32 targetLog, const U32 consumedBits,
const U32* rankVal, const int minWeight, const int maxWeight1,
const sortedSymbol_t* sortedSymbols, U32 const* rankStart,
U32 nbBitsBaseline, U16 baseSeq)
{
/* Fill skipped values (all positions up to rankVal[minWeight]).
* These positions only get a single symbol because the combined weight
* is too large.
*/
if (minWeight>1) {
U32 const length = 1U << ((targetLog - consumedBits) & 0x1F /* quiet static-analyzer */);
U64 const DEltX2 = HUF_buildDEltX2U64(baseSeq, consumedBits, /* baseSeq */ 0, /* level */ 1);
int const skipSize = rankVal[minWeight];
assert(length > 1);
assert((U32)skipSize < length);
switch (length) {
case 2:
assert(skipSize == 1);
ZSTD_memcpy(DTable, &DEltX2, sizeof(DEltX2));
break;
case 4:
assert(skipSize <= 4);
ZSTD_memcpy(DTable + 0, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTable + 2, &DEltX2, sizeof(DEltX2));
break;
default:
{
int i;
for (i = 0; i < skipSize; i += 8) {
ZSTD_memcpy(DTable + i + 0, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTable + i + 2, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTable + i + 4, &DEltX2, sizeof(DEltX2));
ZSTD_memcpy(DTable + i + 6, &DEltX2, sizeof(DEltX2));
}
}
}
}
/* Fill each of the second level symbols by weight. */
{
int w;
for (w = minWeight; w < maxWeight1; ++w) {
int const begin = rankStart[w];
int const end = rankStart[w+1];
U32 const nbBits = nbBitsBaseline - w;
U32 const totalBits = nbBits + consumedBits;
HUF_fillDTableX2ForWeight(
DTable + rankVal[w],
sortedSymbols + begin, sortedSymbols + end,
totalBits, targetLog,
baseSeq, /* level */ 2);
}
}
}
static void HUF_fillDTableX2(HUF_DEltX2* DTable, const U32 targetLog,
const sortedSymbol_t* sortedList,
const U32* rankStart, rankValCol_t* rankValOrigin, const U32 maxWeight,
const U32 nbBitsBaseline)
{
U32* const rankVal = rankValOrigin[0];
const int scaleLog = nbBitsBaseline - targetLog; /* note : targetLog >= srcLog, hence scaleLog <= 1 */
const U32 minBits = nbBitsBaseline - maxWeight;
int w;
int const wEnd = (int)maxWeight + 1;
/* Fill DTable in order of weight. */
for (w = 1; w < wEnd; ++w) {
int const begin = (int)rankStart[w];
int const end = (int)rankStart[w+1];
U32 const nbBits = nbBitsBaseline - w;
if (targetLog-nbBits >= minBits) {
/* Enough room for a second symbol. */
int start = rankVal[w];
U32 const length = 1U << ((targetLog - nbBits) & 0x1F /* quiet static-analyzer */);
int minWeight = nbBits + scaleLog;
int s;
if (minWeight < 1) minWeight = 1;
/* Fill the DTable for every symbol of weight w.
* These symbols get at least 1 second symbol.
*/
for (s = begin; s != end; ++s) {
HUF_fillDTableX2Level2(
DTable + start, targetLog, nbBits,
rankValOrigin[nbBits], minWeight, wEnd,
sortedList, rankStart,
nbBitsBaseline, sortedList[s].symbol);
start += length;
}
} else {
/* Only a single symbol. */
HUF_fillDTableX2ForWeight(
DTable + rankVal[w],
sortedList + begin, sortedList + end,
nbBits, targetLog,
/* baseSeq */ 0, /* level */ 1);
}
}
}
typedef struct {
rankValCol_t rankVal[HUF_TABLELOG_MAX];
U32 rankStats[HUF_TABLELOG_MAX + 1];
U32 rankStart0[HUF_TABLELOG_MAX + 3];
sortedSymbol_t sortedSymbol[HUF_SYMBOLVALUE_MAX + 1];
BYTE weightList[HUF_SYMBOLVALUE_MAX + 1];
U32 calleeWksp[HUF_READ_STATS_WORKSPACE_SIZE_U32];
} HUF_ReadDTableX2_Workspace;
size_t HUF_readDTableX2_wksp(HUF_DTable* DTable,
const void* src, size_t srcSize,
void* workSpace, size_t wkspSize, int flags)
{
U32 tableLog, maxW, nbSymbols;
DTableDesc dtd = HUF_getDTableDesc(DTable);
U32 maxTableLog = dtd.maxTableLog;
size_t iSize;
void* dtPtr = DTable+1; /* force compiler to avoid strict-aliasing */
HUF_DEltX2* const dt = (HUF_DEltX2*)dtPtr;
U32 *rankStart;
HUF_ReadDTableX2_Workspace* const wksp = (HUF_ReadDTableX2_Workspace*)workSpace;
if (sizeof(*wksp) > wkspSize) return ERROR(GENERIC);
rankStart = wksp->rankStart0 + 1;
ZSTD_memset(wksp->rankStats, 0, sizeof(wksp->rankStats));
ZSTD_memset(wksp->rankStart0, 0, sizeof(wksp->rankStart0));
DEBUG_STATIC_ASSERT(sizeof(HUF_DEltX2) == sizeof(HUF_DTable)); /* if compiler fails here, assertion is wrong */
if (maxTableLog > HUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
/* ZSTD_memset(weightList, 0, sizeof(weightList)); */ /* is not necessary, even though some analyzers complain ... */
iSize = HUF_readStats_wksp(wksp->weightList, HUF_SYMBOLVALUE_MAX + 1, wksp->rankStats, &nbSymbols, &tableLog, src, srcSize, wksp->calleeWksp, sizeof(wksp->calleeWksp), flags);
if (HUF_isError(iSize)) return iSize;
/* check result */
if (tableLog > maxTableLog) return ERROR(tableLog_tooLarge); /* DTable can't fit code depth */
if (tableLog <= HUF_DECODER_FAST_TABLELOG && maxTableLog > HUF_DECODER_FAST_TABLELOG) maxTableLog = HUF_DECODER_FAST_TABLELOG;
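/* Note (explanatory, best-effort): the cap above keeps maxTableLog at
 * HUF_DECODER_FAST_TABLELOG whenever the tree fits, so the X2 table is built
 * at the size expected by the fast decoding loops; the fallback decoders
 * accept any table log.
 */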
/* find maxWeight */
for (maxW = tableLog; wksp->rankStats[maxW]==0; maxW--) {} /* necessarily finds a solution before 0 */
/* Get start index of each weight */
{ U32 w, nextRankStart = 0;
for (w=1; w<maxW+1; w++) {
U32 curr = nextRankStart;
nextRankStart += wksp->rankStats[w];
rankStart[w] = curr;
}
rankStart[0] = nextRankStart; /* put all 0w symbols at the end of the sorted list */
rankStart[maxW+1] = nextRankStart;
}
/* sort symbols by weight */
{ U32 s;
for (s=0; s<nbSymbols; s++) {
U32 const w = wksp->weightList[s];
U32 const r = rankStart[w]++;
wksp->sortedSymbol[r].symbol = (BYTE)s;
}
rankStart[0] = 0; /* forget 0w symbols; this is beginning of weight(1) */
}
/* Build rankVal */
{ U32* const rankVal0 = wksp->rankVal[0];
{ int const rescale = (maxTableLog-tableLog) - 1; /* tableLog <= maxTableLog */
U32 nextRankVal = 0;
U32 w;
for (w=1; w<maxW+1; w++) {
U32 curr = nextRankVal;
nextRankVal += wksp->rankStats[w] << (w+rescale);
rankVal0[w] = curr;
} }
{ U32 const minBits = tableLog+1 - maxW;
U32 consumed;
for (consumed = minBits; consumed < maxTableLog - minBits + 1; consumed++) {
U32* const rankValPtr = wksp->rankVal[consumed];
U32 w;
for (w = 1; w < maxW+1; w++) {
rankValPtr[w] = rankVal0[w] >> consumed;
} } } }
HUF_fillDTableX2(dt, maxTableLog,
wksp->sortedSymbol,
wksp->rankStart0, wksp->rankVal, maxW,
tableLog+1);
dtd.tableLog = (BYTE)maxTableLog;
dtd.tableType = 1;
ZSTD_memcpy(DTable, &dtd, sizeof(dtd));
return iSize;
}
FORCE_INLINE_TEMPLATE U32
HUF_decodeSymbolX2(void* op, BIT_DStream_t* DStream, const HUF_DEltX2* dt, const U32 dtLog)
{
size_t const val = BIT_lookBitsFast(DStream, dtLog); /* note : dtLog >= 1 */
ZSTD_memcpy(op, &dt[val].sequence, 2);
BIT_skipBits(DStream, dt[val].nbBits);
return dt[val].length;
}
FORCE_INLINE_TEMPLATE U32
HUF_decodeLastSymbolX2(void* op, BIT_DStream_t* DStream, const HUF_DEltX2* dt, const U32 dtLog)
{
size_t const val = BIT_lookBitsFast(DStream, dtLog); /* note : dtLog >= 1 */
ZSTD_memcpy(op, &dt[val].sequence, 1);
if (dt[val].length==1) {
BIT_skipBits(DStream, dt[val].nbBits);
} else {
if (DStream->bitsConsumed < (sizeof(DStream->bitContainer)*8)) {
BIT_skipBits(DStream, dt[val].nbBits);
if (DStream->bitsConsumed > (sizeof(DStream->bitContainer)*8))
/* ugly hack; works only because it's the last symbol. Note : can't easily extract nbBits from just this symbol */
DStream->bitsConsumed = (sizeof(DStream->bitContainer)*8);
}
}
return 1;
}
#define HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \
do { ptr += HUF_decodeSymbolX2(ptr, DStreamPtr, dt, dtLog); } while (0)
#define HUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \
do { \
if (MEM_64bits() || (HUF_TABLELOG_MAX<=12)) \
ptr += HUF_decodeSymbolX2(ptr, DStreamPtr, dt, dtLog); \
} while (0)
#define HUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \
do { \
if (MEM_64bits()) \
ptr += HUF_decodeSymbolX2(ptr, DStreamPtr, dt, dtLog); \
} while (0)
HINT_INLINE size_t
HUF_decodeStreamX2(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd,
const HUF_DEltX2* const dt, const U32 dtLog)
{
BYTE* const pStart = p;
/* up to 8 symbols at a time */
if ((size_t)(pEnd - p) >= sizeof(bitDPtr->bitContainer)) {
if (dtLog <= 11 && MEM_64bits()) {
/* up to 10 symbols at a time */
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p < pEnd-9)) {
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
}
} else {
/* up to 8 symbols at a time */
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p < pEnd-(sizeof(bitDPtr->bitContainer)-1))) {
HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
HUF_DECODE_SYMBOLX2_1(p, bitDPtr);
HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
}
}
} else {
BIT_reloadDStream(bitDPtr);
}
/* closer to end : up to 2 symbols at a time */
if ((size_t)(pEnd - p) >= 2) {
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p <= pEnd-2))
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
while (p <= pEnd-2)
HUF_DECODE_SYMBOLX2_0(p, bitDPtr); /* no need to reload : reached the end of DStream */
}
if (p < pEnd)
p += HUF_decodeLastSymbolX2(p, bitDPtr, dt, dtLog);
return p-pStart;
}
FORCE_INLINE_TEMPLATE size_t
HUF_decompress1X2_usingDTable_internal_body(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
{
BIT_DStream_t bitD;
/* Init */
CHECK_F( BIT_initDStream(&bitD, cSrc, cSrcSize) );
/* decode */
{ BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ZSTD_maybeNullPtrAdd(ostart, dstSize);
const void* const dtPtr = DTable+1; /* force compiler to not use strict-aliasing */
const HUF_DEltX2* const dt = (const HUF_DEltX2*)dtPtr;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
HUF_decodeStreamX2(ostart, &bitD, oend, dt, dtd.tableLog);
}
/* check */
if (!BIT_endOfDStream(&bitD)) return ERROR(corruption_detected);
/* decoded size */
return dstSize;
}
/* HUF_decompress4X2_usingDTable_internal_body():
* Conditions:
* @dstSize >= 6
*/
FORCE_INLINE_TEMPLATE size_t
HUF_decompress4X2_usingDTable_internal_body(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
{
if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */
if (dstSize < 6) return ERROR(corruption_detected); /* stream 4-split doesn't work */
{ const BYTE* const istart = (const BYTE*) cSrc;
BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ostart + dstSize;
BYTE* const olimit = oend - (sizeof(size_t)-1);
const void* const dtPtr = DTable+1;
const HUF_DEltX2* const dt = (const HUF_DEltX2*)dtPtr;
/* Init */
BIT_DStream_t bitD1;
BIT_DStream_t bitD2;
BIT_DStream_t bitD3;
BIT_DStream_t bitD4;
size_t const length1 = MEM_readLE16(istart);
size_t const length2 = MEM_readLE16(istart+2);
size_t const length3 = MEM_readLE16(istart+4);
size_t const length4 = cSrcSize - (length1 + length2 + length3 + 6);
const BYTE* const istart1 = istart + 6; /* jumpTable */
const BYTE* const istart2 = istart1 + length1;
const BYTE* const istart3 = istart2 + length2;
const BYTE* const istart4 = istart3 + length3;
size_t const segmentSize = (dstSize+3) / 4;
BYTE* const opStart2 = ostart + segmentSize;
BYTE* const opStart3 = opStart2 + segmentSize;
BYTE* const opStart4 = opStart3 + segmentSize;
BYTE* op1 = ostart;
BYTE* op2 = opStart2;
BYTE* op3 = opStart3;
BYTE* op4 = opStart4;
U32 endSignal = 1;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
U32 const dtLog = dtd.tableLog;
if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */
if (opStart4 > oend) return ERROR(corruption_detected); /* overflow */
assert(dstSize >= 6 /* validated above */);
CHECK_F( BIT_initDStream(&bitD1, istart1, length1) );
CHECK_F( BIT_initDStream(&bitD2, istart2, length2) );
CHECK_F( BIT_initDStream(&bitD3, istart3, length3) );
CHECK_F( BIT_initDStream(&bitD4, istart4, length4) );
/* 16-32 symbols per loop (4-8 symbols per stream) */
if ((size_t)(oend - op4) >= sizeof(size_t)) {
for ( ; (endSignal) & (op4 < olimit); ) {
#if defined(__clang__) && (defined(__x86_64__) || defined(__i386__))
HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
HUF_DECODE_SYMBOLX2_1(op1, &bitD1);
HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
HUF_DECODE_SYMBOLX2_0(op1, &bitD1);
HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
HUF_DECODE_SYMBOLX2_1(op2, &bitD2);
HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
HUF_DECODE_SYMBOLX2_0(op2, &bitD2);
endSignal &= BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished;
endSignal &= BIT_reloadDStreamFast(&bitD2) == BIT_DStream_unfinished;
HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
HUF_DECODE_SYMBOLX2_1(op3, &bitD3);
HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
HUF_DECODE_SYMBOLX2_0(op3, &bitD3);
HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
HUF_DECODE_SYMBOLX2_1(op4, &bitD4);
HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
HUF_DECODE_SYMBOLX2_0(op4, &bitD4);
endSignal &= BIT_reloadDStreamFast(&bitD3) == BIT_DStream_unfinished;
endSignal &= BIT_reloadDStreamFast(&bitD4) == BIT_DStream_unfinished;
#else
HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
HUF_DECODE_SYMBOLX2_1(op1, &bitD1);
HUF_DECODE_SYMBOLX2_1(op2, &bitD2);
HUF_DECODE_SYMBOLX2_1(op3, &bitD3);
HUF_DECODE_SYMBOLX2_1(op4, &bitD4);
HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
HUF_DECODE_SYMBOLX2_0(op1, &bitD1);
HUF_DECODE_SYMBOLX2_0(op2, &bitD2);
HUF_DECODE_SYMBOLX2_0(op3, &bitD3);
HUF_DECODE_SYMBOLX2_0(op4, &bitD4);
endSignal = (U32)LIKELY((U32)
(BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished)
& (BIT_reloadDStreamFast(&bitD2) == BIT_DStream_unfinished)
& (BIT_reloadDStreamFast(&bitD3) == BIT_DStream_unfinished)
& (BIT_reloadDStreamFast(&bitD4) == BIT_DStream_unfinished));
#endif
}
}
/* check corruption */
if (op1 > opStart2) return ERROR(corruption_detected);
if (op2 > opStart3) return ERROR(corruption_detected);
if (op3 > opStart4) return ERROR(corruption_detected);
/* note : op4 already verified within main loop */
/* finish bitStreams one by one */
HUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog);
HUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog);
HUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog);
HUF_decodeStreamX2(op4, &bitD4, oend, dt, dtLog);
/* check */
{ U32 const endCheck = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
if (!endCheck) return ERROR(corruption_detected); }
/* decoded size */
return dstSize;
}
}
#if HUF_NEED_BMI2_FUNCTION
static BMI2_TARGET_ATTRIBUTE
size_t HUF_decompress4X2_usingDTable_internal_bmi2(void* dst, size_t dstSize, void const* cSrc,
size_t cSrcSize, HUF_DTable const* DTable) {
return HUF_decompress4X2_usingDTable_internal_body(dst, dstSize, cSrc, cSrcSize, DTable);
}
#endif
static
size_t HUF_decompress4X2_usingDTable_internal_default(void* dst, size_t dstSize, void const* cSrc,
size_t cSrcSize, HUF_DTable const* DTable) {
return HUF_decompress4X2_usingDTable_internal_body(dst, dstSize, cSrc, cSrcSize, DTable);
}
#if ZSTD_ENABLE_ASM_X86_64_BMI2
HUF_ASM_DECL void HUF_decompress4X2_usingDTable_internal_fast_asm_loop(HUF_DecompressFastArgs* args) ZSTDLIB_HIDDEN;
#endif
static HUF_FAST_BMI2_ATTRS
void HUF_decompress4X2_usingDTable_internal_fast_c_loop(HUF_DecompressFastArgs* args)
{
U64 bits[4];
BYTE const* ip[4];
BYTE* op[4];
BYTE* oend[4];
HUF_DEltX2 const* const dtable = (HUF_DEltX2 const*)args->dt;
BYTE const* const ilowest = args->ilowest;
/* Copy the arguments to local registers. */
ZSTD_memcpy(&bits, &args->bits, sizeof(bits));
ZSTD_memcpy((void*)(&ip), &args->ip, sizeof(ip));
ZSTD_memcpy(&op, &args->op, sizeof(op));
oend[0] = op[1];
oend[1] = op[2];
oend[2] = op[3];
oend[3] = args->oend;
assert(MEM_isLittleEndian());
assert(!MEM_32bits());
for (;;) {
BYTE* olimit;
int stream;
/* Assert loop preconditions */
#ifndef NDEBUG
for (stream = 0; stream < 4; ++stream) {
assert(op[stream] <= oend[stream]);
assert(ip[stream] >= ilowest);
}
#endif
/* Compute olimit */
{
/* Each loop does 5 table lookups for each of the 4 streams.
* Each table lookup consumes up to 11 bits of input, and produces
* up to 2 bytes of output.
*/
/* We can consume up to 7 bytes of input per iteration per stream.
* We also know that each input pointer is >= ip[0]. So we can run
* iters loops before running out of input.
*/
size_t iters = (size_t)(ip[0] - ilowest) / 7;
/* Each iteration can produce up to 10 bytes of output per stream.
* Each output stream may advance at a different rate. So take the
* minimum number of safe iterations among all the output streams.
*/
for (stream = 0; stream < 4; ++stream) {
size_t const oiters = (size_t)(oend[stream] - op[stream]) / 10;
iters = MIN(iters, oiters);
}
/* Each iteration produces at least 5 output symbols. So until
* op[3] crosses olimit, we know we haven't executed iters
* iterations yet. This saves us maintaining an iters counter,
* at the expense of computing the remaining # of iterations
* more frequently.
*/
olimit = op[3] + (iters * 5);
/* Exit the fast decoding loop once we reach the end. */
if (op[3] == olimit)
break;
/* Exit the decoding loop if any input pointer has crossed the
* previous one. This indicates corruption, and a precondition
* to our loop is that ip[i] >= ip[0].
*/
for (stream = 1; stream < 4; ++stream) {
if (ip[stream] < ip[stream - 1])
goto _out;
}
}
#ifndef NDEBUG
for (stream = 1; stream < 4; ++stream) {
assert(ip[stream] >= ip[stream - 1]);
}
#endif
#define HUF_4X2_DECODE_SYMBOL(_stream, _decode3) \
do { \
if ((_decode3) || (_stream) != 3) { \
int const index = (int)(bits[(_stream)] >> 53); \
HUF_DEltX2 const entry = dtable[index]; \
MEM_write16(op[(_stream)], entry.sequence); \
bits[(_stream)] <<= (entry.nbBits) & 0x3F; \
op[(_stream)] += (entry.length); \
} \
} while (0)
#define HUF_4X2_RELOAD_STREAM(_stream) \
do { \
HUF_4X2_DECODE_SYMBOL(3, 1); \
{ \
int const ctz = ZSTD_countTrailingZeros64(bits[(_stream)]); \
int const nbBits = ctz & 7; \
int const nbBytes = ctz >> 3; \
ip[(_stream)] -= nbBytes; \
bits[(_stream)] = MEM_read64(ip[(_stream)]) | 1; \
bits[(_stream)] <<= nbBits; \
} \
} while (0)
/* Manually unroll the loop because compilers don't consistently
* unroll the inner loops, which destroys performance.
*/
do {
/* Decode 5 symbols from each of the first 3 streams.
* The final stream will be decoded during the reload phase
* to reduce register pressure.
*/
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X2_DECODE_SYMBOL, 0);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X2_DECODE_SYMBOL, 0);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X2_DECODE_SYMBOL, 0);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X2_DECODE_SYMBOL, 0);
HUF_4X_FOR_EACH_STREAM_WITH_VAR(HUF_4X2_DECODE_SYMBOL, 0);
/* Decode one symbol from the final stream */
HUF_4X2_DECODE_SYMBOL(3, 1);
/* Decode 4 symbols from the final stream & reload bitstreams.
* The final stream is reloaded last, meaning that all 5 symbols
* are decoded from the final stream before it is reloaded.
*/
HUF_4X_FOR_EACH_STREAM(HUF_4X2_RELOAD_STREAM);
} while (op[3] < olimit);
}
#undef HUF_4X2_DECODE_SYMBOL
#undef HUF_4X2_RELOAD_STREAM
_out:
/* Save the final values of each of the state variables back to args. */
ZSTD_memcpy(&args->bits, &bits, sizeof(bits));
ZSTD_memcpy((void*)(&args->ip), &ip, sizeof(ip));
ZSTD_memcpy(&args->op, &op, sizeof(op));
}
static HUF_FAST_BMI2_ATTRS size_t
HUF_decompress4X2_usingDTable_internal_fast(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable,
HUF_DecompressFastLoopFn loopFn) {
void const* dt = DTable + 1;
const BYTE* const ilowest = (const BYTE*)cSrc;
BYTE* const oend = ZSTD_maybeNullPtrAdd((BYTE*)dst, dstSize);
HUF_DecompressFastArgs args;
{
size_t const ret = HUF_DecompressFastArgs_init(&args, dst, dstSize, cSrc, cSrcSize, DTable);
FORWARD_IF_ERROR(ret, "Failed to init asm args");
if (ret == 0)
return 0;
}
assert(args.ip[0] >= args.ilowest);
loopFn(&args);
/* Our loop guarantees that ip[] >= ilowest and that we haven't overwritten any op[]. */
assert(args.ip[0] >= ilowest);
assert(args.ip[1] >= ilowest);
assert(args.ip[2] >= ilowest);
assert(args.ip[3] >= ilowest);
assert(args.op[3] <= oend);
assert(ilowest == args.ilowest);
assert(ilowest + 6 == args.iend[0]);
(void)ilowest;
/* finish bitStreams one by one */
{
size_t const segmentSize = (dstSize+3) / 4;
BYTE* segmentEnd = (BYTE*)dst;
int i;
for (i = 0; i < 4; ++i) {
BIT_DStream_t bit;
if (segmentSize <= (size_t)(oend - segmentEnd))
segmentEnd += segmentSize;
else
segmentEnd = oend;
FORWARD_IF_ERROR(HUF_initRemainingDStream(&bit, &args, i, segmentEnd), "corruption");
args.op[i] += HUF_decodeStreamX2(args.op[i], &bit, segmentEnd, (HUF_DEltX2 const*)dt, HUF_DECODER_FAST_TABLELOG);
if (args.op[i] != segmentEnd)
return ERROR(corruption_detected);
}
}
/* decoded size */
return dstSize;
}
static size_t HUF_decompress4X2_usingDTable_internal(void* dst, size_t dstSize, void const* cSrc,
size_t cSrcSize, HUF_DTable const* DTable, int flags)
{
HUF_DecompressUsingDTableFn fallbackFn = HUF_decompress4X2_usingDTable_internal_default;
HUF_DecompressFastLoopFn loopFn = HUF_decompress4X2_usingDTable_internal_fast_c_loop;
#if DYNAMIC_BMI2
if (flags & HUF_flags_bmi2) {
fallbackFn = HUF_decompress4X2_usingDTable_internal_bmi2;
# if ZSTD_ENABLE_ASM_X86_64_BMI2
if (!(flags & HUF_flags_disableAsm)) {
loopFn = HUF_decompress4X2_usingDTable_internal_fast_asm_loop;
}
# endif
} else {
return fallbackFn(dst, dstSize, cSrc, cSrcSize, DTable);
}
#endif
#if ZSTD_ENABLE_ASM_X86_64_BMI2 && defined(__BMI2__)
if (!(flags & HUF_flags_disableAsm)) {
loopFn = HUF_decompress4X2_usingDTable_internal_fast_asm_loop;
}
#endif
if (HUF_ENABLE_FAST_DECODE && !(flags & HUF_flags_disableFast)) {
size_t const ret = HUF_decompress4X2_usingDTable_internal_fast(dst, dstSize, cSrc, cSrcSize, DTable, loopFn);
if (ret != 0)
return ret;
}
return fallbackFn(dst, dstSize, cSrc, cSrcSize, DTable);
}
HUF_DGEN(HUF_decompress1X2_usingDTable_internal)
size_t HUF_decompress1X2_DCtx_wksp(HUF_DTable* DCtx, void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
void* workSpace, size_t wkspSize, int flags)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t const hSize = HUF_readDTableX2_wksp(DCtx, cSrc, cSrcSize,
workSpace, wkspSize, flags);
if (HUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress1X2_usingDTable_internal(dst, dstSize, ip, cSrcSize, DCtx, flags);
}
static size_t HUF_decompress4X2_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
void* workSpace, size_t wkspSize, int flags)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t hSize = HUF_readDTableX2_wksp(dctx, cSrc, cSrcSize,
workSpace, wkspSize, flags);
if (HUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress4X2_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx, flags);
}
#endif /* HUF_FORCE_DECOMPRESS_X1 */
/* ***********************************/
/* Universal decompression selectors */
/* ***********************************/
#if !defined(HUF_FORCE_DECOMPRESS_X1) && !defined(HUF_FORCE_DECOMPRESS_X2)
typedef struct { U32 tableTime; U32 decode256Time; } algo_time_t;
static const algo_time_t algoTime[16 /* Quantization */][2 /* single, double */] =
{
/* single, double */
{{0,0}, {1,1}}, /* Q==0 : impossible */
{{0,0}, {1,1}}, /* Q==1 : impossible */
{{ 150,216}, { 381,119}}, /* Q == 2 : 12-18% */
{{ 170,205}, { 514,112}}, /* Q == 3 : 18-25% */
{{ 177,199}, { 539,110}}, /* Q == 4 : 25-32% */
{{ 197,194}, { 644,107}}, /* Q == 5 : 32-38% */
{{ 221,192}, { 735,107}}, /* Q == 6 : 38-44% */
{{ 256,189}, { 881,106}}, /* Q == 7 : 44-50% */
{{ 359,188}, {1167,109}}, /* Q == 8 : 50-56% */
{{ 582,187}, {1570,114}}, /* Q == 9 : 56-62% */
{{ 688,187}, {1712,122}}, /* Q ==10 : 62-69% */
{{ 825,186}, {1965,136}}, /* Q ==11 : 69-75% */
{{ 976,185}, {2131,150}}, /* Q ==12 : 75-81% */
{{1180,186}, {2070,175}}, /* Q ==13 : 81-87% */
{{1377,185}, {1731,202}}, /* Q ==14 : 87-93% */
{{1412,185}, {1695,202}}, /* Q ==15 : 93-99% */
};
#endif
/** HUF_selectDecoder() :
* Tells which decoder is likely to decode faster,
* based on a set of pre-computed metrics.
* @return : 0==HUF_decompress4X1, 1==HUF_decompress4X2 .
* Assumption : 0 < dstSize <= 128 KB */
U32 HUF_selectDecoder (size_t dstSize, size_t cSrcSize)
{
assert(dstSize > 0);
assert(dstSize <= 128*1024);
#if defined(HUF_FORCE_DECOMPRESS_X1)
(void)dstSize;
(void)cSrcSize;
return 0;
#elif defined(HUF_FORCE_DECOMPRESS_X2)
(void)dstSize;
(void)cSrcSize;
return 1;
#else
/* decoder timing evaluation */
{ U32 const Q = (cSrcSize >= dstSize) ? 15 : (U32)(cSrcSize * 16 / dstSize); /* Q < 16 */
U32 const D256 = (U32)(dstSize >> 8);
U32 const DTime0 = algoTime[Q][0].tableTime + (algoTime[Q][0].decode256Time * D256);
U32 DTime1 = algoTime[Q][1].tableTime + (algoTime[Q][1].decode256Time * D256);
DTime1 += DTime1 >> 5; /* small advantage to algorithm using less memory, to reduce cache eviction */
return DTime1 < DTime0;
}
#endif
}
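/* Illustrative worked example of the selection heuristic above (not part of the
 * library). Assume dstSize = 64 KB and cSrcSize = 40 KB :
 *   Q      = 40960 * 16 / 65536 = 10            (compression ratio bucket)
 *   D256   = 65536 >> 8 = 256                   (nb of 256-byte units to decode)
 *   DTime0 = 688 + 187 * 256  = 48560           (estimated cost of the X1 decoder)
 *   DTime1 = 1712 + 122 * 256 = 32944           (estimated cost of the X2 decoder)
 *   DTime1 += DTime1 >> 5     -> 33973          (small penalty for the larger table)
 *   DTime1 < DTime0, so the double-symbols decoder (X2) is selected.
 * The constants are taken directly from algoTime[10][0] and algoTime[10][1]. */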
size_t HUF_decompress1X_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
void* workSpace, size_t wkspSize, int flags)
{
/* validation checks */
if (dstSize == 0) return ERROR(dstSize_tooSmall);
if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */
if (cSrcSize == dstSize) { ZSTD_memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */
if (cSrcSize == 1) { ZSTD_memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */
{ U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
#if defined(HUF_FORCE_DECOMPRESS_X1)
(void)algoNb;
assert(algoNb == 0);
return HUF_decompress1X1_DCtx_wksp(dctx, dst, dstSize, cSrc,
cSrcSize, workSpace, wkspSize, flags);
#elif defined(HUF_FORCE_DECOMPRESS_X2)
(void)algoNb;
assert(algoNb == 1);
return HUF_decompress1X2_DCtx_wksp(dctx, dst, dstSize, cSrc,
cSrcSize, workSpace, wkspSize, flags);
#else
return algoNb ? HUF_decompress1X2_DCtx_wksp(dctx, dst, dstSize, cSrc,
cSrcSize, workSpace, wkspSize, flags):
HUF_decompress1X1_DCtx_wksp(dctx, dst, dstSize, cSrc,
cSrcSize, workSpace, wkspSize, flags);
#endif
}
}
size_t HUF_decompress1X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int flags)
{
DTableDesc const dtd = HUF_getDTableDesc(DTable);
#if defined(HUF_FORCE_DECOMPRESS_X1)
(void)dtd;
assert(dtd.tableType == 0);
return HUF_decompress1X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags);
#elif defined(HUF_FORCE_DECOMPRESS_X2)
(void)dtd;
assert(dtd.tableType == 1);
return HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags);
#else
return dtd.tableType ? HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags) :
HUF_decompress1X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags);
#endif
}
#ifndef HUF_FORCE_DECOMPRESS_X2
size_t HUF_decompress1X1_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int flags)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t const hSize = HUF_readDTableX1_wksp(dctx, cSrc, cSrcSize, workSpace, wkspSize, flags);
if (HUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress1X1_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx, flags);
}
#endif
size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int flags)
{
DTableDesc const dtd = HUF_getDTableDesc(DTable);
#if defined(HUF_FORCE_DECOMPRESS_X1)
(void)dtd;
assert(dtd.tableType == 0);
return HUF_decompress4X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags);
#elif defined(HUF_FORCE_DECOMPRESS_X2)
(void)dtd;
assert(dtd.tableType == 1);
return HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags);
#else
return dtd.tableType ? HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags) :
HUF_decompress4X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, flags);
#endif
}
size_t HUF_decompress4X_hufOnly_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int flags)
{
/* validation checks */
if (dstSize == 0) return ERROR(dstSize_tooSmall);
if (cSrcSize == 0) return ERROR(corruption_detected);
{ U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
#if defined(HUF_FORCE_DECOMPRESS_X1)
(void)algoNb;
assert(algoNb == 0);
return HUF_decompress4X1_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, flags);
#elif defined(HUF_FORCE_DECOMPRESS_X2)
(void)algoNb;
assert(algoNb == 1);
return HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, flags);
#else
return algoNb ? HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, flags) :
HUF_decompress4X1_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, flags);
#endif
}
}
/**** ended inlining decompress/huf_decompress.c ****/
/**** start inlining decompress/zstd_ddict.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* zstd_ddict.c :
* concentrates all logic that needs to know the internals of ZSTD_DDict object */
/*-*******************************************************
* Dependencies
*********************************************************/
/**** skipping file: ../common/allocations.h ****/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/cpu.h ****/
/**** skipping file: ../common/mem.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** start inlining zstd_decompress_internal.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* zstd_decompress_internal:
* objects and definitions shared within lib/decompress modules */
#ifndef ZSTD_DECOMPRESS_INTERNAL_H
#define ZSTD_DECOMPRESS_INTERNAL_H
/*-*******************************************************
* Dependencies
*********************************************************/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/*-*******************************************************
* Constants
*********************************************************/
static UNUSED_ATTR const U32 LL_base[MaxLL+1] = {
0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15,
16, 18, 20, 22, 24, 28, 32, 40,
48, 64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000,
0x2000, 0x4000, 0x8000, 0x10000 };
static UNUSED_ATTR const U32 OF_base[MaxOff+1] = {
0, 1, 1, 5, 0xD, 0x1D, 0x3D, 0x7D,
0xFD, 0x1FD, 0x3FD, 0x7FD, 0xFFD, 0x1FFD, 0x3FFD, 0x7FFD,
0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD, 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD,
0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD, 0x1FFFFFFD, 0x3FFFFFFD, 0x7FFFFFFD };
static UNUSED_ATTR const U8 OF_bits[MaxOff+1] = {
0, 1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31 };
static UNUSED_ATTR const U32 ML_base[MaxML+1] = {
3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26,
27, 28, 29, 30, 31, 32, 33, 34,
35, 37, 39, 41, 43, 47, 51, 59,
67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803,
0x1003, 0x2003, 0x4003, 0x8003, 0x10003 };
/*-*******************************************************
* Decompression types
*********************************************************/
typedef struct {
U32 fastMode;
U32 tableLog;
} ZSTD_seqSymbol_header;
typedef struct {
U16 nextState;
BYTE nbAdditionalBits;
BYTE nbBits;
U32 baseValue;
} ZSTD_seqSymbol;
#define SEQSYMBOL_TABLE_SIZE(log) (1 + (1 << (log)))
#define ZSTD_BUILD_FSE_TABLE_WKSP_SIZE (sizeof(S16) * (MaxSeq + 1) + (1u << MaxFSELog) + sizeof(U64))
#define ZSTD_BUILD_FSE_TABLE_WKSP_SIZE_U32 ((ZSTD_BUILD_FSE_TABLE_WKSP_SIZE + sizeof(U32) - 1) / sizeof(U32))
#define ZSTD_HUFFDTABLE_CAPACITY_LOG 12
typedef struct {
ZSTD_seqSymbol LLTable[SEQSYMBOL_TABLE_SIZE(LLFSELog)]; /* Note : Space reserved for FSE Tables */
ZSTD_seqSymbol OFTable[SEQSYMBOL_TABLE_SIZE(OffFSELog)]; /* is also used as temporary workspace while building hufTable during DDict creation */
ZSTD_seqSymbol MLTable[SEQSYMBOL_TABLE_SIZE(MLFSELog)]; /* and therefore must be at least HUF_DECOMPRESS_WORKSPACE_SIZE large */
HUF_DTable hufTable[HUF_DTABLE_SIZE(ZSTD_HUFFDTABLE_CAPACITY_LOG)]; /* can accommodate HUF_decompress4X */
U32 rep[ZSTD_REP_NUM];
U32 workspace[ZSTD_BUILD_FSE_TABLE_WKSP_SIZE_U32];
} ZSTD_entropyDTables_t;
typedef enum { ZSTDds_getFrameHeaderSize, ZSTDds_decodeFrameHeader,
ZSTDds_decodeBlockHeader, ZSTDds_decompressBlock,
ZSTDds_decompressLastBlock, ZSTDds_checkChecksum,
ZSTDds_decodeSkippableHeader, ZSTDds_skipFrame } ZSTD_dStage;
typedef enum { zdss_init=0, zdss_loadHeader,
zdss_read, zdss_load, zdss_flush } ZSTD_dStreamStage;
typedef enum {
ZSTD_use_indefinitely = -1, /* Use the dictionary indefinitely */
ZSTD_dont_use = 0, /* Do not use the dictionary (if one exists free it) */
ZSTD_use_once = 1 /* Use the dictionary once and set to ZSTD_dont_use */
} ZSTD_dictUses_e;
/* Hashset for storing references to multiple ZSTD_DDict within ZSTD_DCtx */
typedef struct {
const ZSTD_DDict** ddictPtrTable;
size_t ddictPtrTableSize;
size_t ddictPtrCount;
} ZSTD_DDictHashSet;
#ifndef ZSTD_DECODER_INTERNAL_BUFFER
# define ZSTD_DECODER_INTERNAL_BUFFER (1 << 16)
#endif
#define ZSTD_LBMIN 64
#define ZSTD_LBMAX (128 << 10)
/* extra buffer, compensates when dst is not large enough to store litBuffer */
#define ZSTD_LITBUFFEREXTRASIZE BOUNDED(ZSTD_LBMIN, ZSTD_DECODER_INTERNAL_BUFFER, ZSTD_LBMAX)
typedef enum {
ZSTD_not_in_dst = 0, /* Stored entirely within litExtraBuffer */
ZSTD_in_dst = 1, /* Stored entirely within dst (in memory after current output write) */
ZSTD_split = 2 /* Split between litExtraBuffer and dst */
} ZSTD_litLocation_e;
struct ZSTD_DCtx_s
{
const ZSTD_seqSymbol* LLTptr;
const ZSTD_seqSymbol* MLTptr;
const ZSTD_seqSymbol* OFTptr;
const HUF_DTable* HUFptr;
ZSTD_entropyDTables_t entropy;
U32 workspace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32]; /* space needed when building huffman tables */
const void* previousDstEnd; /* detect continuity */
const void* prefixStart; /* start of current segment */
const void* virtualStart; /* virtual start of previous segment if it was just before current one */
const void* dictEnd; /* end of previous segment */
size_t expected;
ZSTD_FrameHeader fParams;
U64 processedCSize;
U64 decodedSize;
blockType_e bType; /* used in ZSTD_decompressContinue(), store blockType between block header decoding and block decompression stages */
ZSTD_dStage stage;
U32 litEntropy;
U32 fseEntropy;
XXH64_state_t xxhState;
size_t headerSize;
ZSTD_format_e format;
ZSTD_forceIgnoreChecksum_e forceIgnoreChecksum; /* User specified: if == 1, will ignore checksums in compressed frame. Default == 0 */
U32 validateChecksum; /* if == 1, will validate checksum. Is == 1 if (fParams.checksumFlag == 1) and (forceIgnoreChecksum == 0). */
const BYTE* litPtr;
ZSTD_customMem customMem;
size_t litSize;
size_t rleSize;
size_t staticSize;
int isFrameDecompression;
#if DYNAMIC_BMI2
int bmi2; /* == 1 if the CPU supports BMI2 and 0 otherwise. CPU support is determined dynamically once per context lifetime. */
#endif
/* dictionary */
ZSTD_DDict* ddictLocal;
const ZSTD_DDict* ddict; /* set by ZSTD_initDStream_usingDDict(), or ZSTD_DCtx_refDDict() */
U32 dictID;
int ddictIsCold; /* if == 1 : dictionary is "new" for working context, and presumed "cold" (not in cpu cache) */
ZSTD_dictUses_e dictUses;
ZSTD_DDictHashSet* ddictSet; /* Hash set for multiple ddicts */
ZSTD_refMultipleDDicts_e refMultipleDDicts; /* User specified: if == 1, will allow references to multiple DDicts. Default == 0 (disabled) */
int disableHufAsm;
int maxBlockSizeParam;
/* streaming */
ZSTD_dStreamStage streamStage;
char* inBuff;
size_t inBuffSize;
size_t inPos;
size_t maxWindowSize;
char* outBuff;
size_t outBuffSize;
size_t outStart;
size_t outEnd;
size_t lhSize;
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
void* legacyContext;
U32 previousLegacyVersion;
U32 legacyVersion;
#endif
U32 hostageByte;
int noForwardProgress;
ZSTD_bufferMode_e outBufferMode;
ZSTD_outBuffer expectedOutBuffer;
/* workspace */
BYTE* litBuffer;
const BYTE* litBufferEnd;
ZSTD_litLocation_e litBufferLocation;
BYTE litExtraBuffer[ZSTD_LITBUFFEREXTRASIZE + WILDCOPY_OVERLENGTH]; /* literal buffer can be split between storage within dst and within this scratch buffer */
BYTE headerBuffer[ZSTD_FRAMEHEADERSIZE_MAX];
size_t oversizedDuration;
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
void const* dictContentBeginForFuzzing;
void const* dictContentEndForFuzzing;
#endif
/* Tracing */
#if ZSTD_TRACE
ZSTD_TraceCtx traceCtx;
#endif
}; /* typedef'd to ZSTD_DCtx within "zstd.h" */
MEM_STATIC int ZSTD_DCtx_get_bmi2(const struct ZSTD_DCtx_s *dctx) {
#if DYNAMIC_BMI2
return dctx->bmi2;
#else
(void)dctx;
return 0;
#endif
}
/*-*******************************************************
* Shared internal functions
*********************************************************/
/*! ZSTD_loadDEntropy() :
* dict : must point at beginning of a valid zstd dictionary.
* @return : size of dictionary header (size of magic number + dict ID + entropy tables) */
size_t ZSTD_loadDEntropy(ZSTD_entropyDTables_t* entropy,
const void* const dict, size_t const dictSize);
/*! ZSTD_checkContinuity() :
* check if next `dst` follows previous position, where decompression ended.
* If yes, do nothing (continue on current segment).
* If not, classify previous segment as "external dictionary", and start a new segment.
* This function cannot fail. */
void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst, size_t dstSize);
#endif /* ZSTD_DECOMPRESS_INTERNAL_H */
/**** ended inlining zstd_decompress_internal.h ****/
/**** start inlining zstd_ddict.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_DDICT_H
#define ZSTD_DDICT_H
/*-*******************************************************
* Dependencies
*********************************************************/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../zstd.h ****/
/*-*******************************************************
* Interface
*********************************************************/
/* note: several prototypes are already published in `zstd.h` :
* ZSTD_createDDict()
* ZSTD_createDDict_byReference()
* ZSTD_createDDict_advanced()
* ZSTD_freeDDict()
* ZSTD_initStaticDDict()
* ZSTD_sizeof_DDict()
* ZSTD_estimateDDictSize()
* ZSTD_getDictID_fromDict()
*/
const void* ZSTD_DDict_dictContent(const ZSTD_DDict* ddict);
size_t ZSTD_DDict_dictSize(const ZSTD_DDict* ddict);
void ZSTD_copyDDictParameters(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict);
#endif /* ZSTD_DDICT_H */
/**** ended inlining zstd_ddict.h ****/
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
#error Using excluded file: ../legacy/zstd_legacy.h (re-amalgamate source to fix)
#endif
/*-*******************************************************
* Types
*********************************************************/
struct ZSTD_DDict_s {
void* dictBuffer;
const void* dictContent;
size_t dictSize;
ZSTD_entropyDTables_t entropy;
U32 dictID;
U32 entropyPresent;
ZSTD_customMem cMem;
}; /* typedef'd to ZSTD_DDict within "zstd.h" */
const void* ZSTD_DDict_dictContent(const ZSTD_DDict* ddict)
{
assert(ddict != NULL);
return ddict->dictContent;
}
size_t ZSTD_DDict_dictSize(const ZSTD_DDict* ddict)
{
assert(ddict != NULL);
return ddict->dictSize;
}
void ZSTD_copyDDictParameters(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict)
{
DEBUGLOG(4, "ZSTD_copyDDictParameters");
assert(dctx != NULL);
assert(ddict != NULL);
dctx->dictID = ddict->dictID;
dctx->prefixStart = ddict->dictContent;
dctx->virtualStart = ddict->dictContent;
dctx->dictEnd = (const BYTE*)ddict->dictContent + ddict->dictSize;
dctx->previousDstEnd = dctx->dictEnd;
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
dctx->dictContentBeginForFuzzing = dctx->prefixStart;
dctx->dictContentEndForFuzzing = dctx->previousDstEnd;
#endif
if (ddict->entropyPresent) {
dctx->litEntropy = 1;
dctx->fseEntropy = 1;
dctx->LLTptr = ddict->entropy.LLTable;
dctx->MLTptr = ddict->entropy.MLTable;
dctx->OFTptr = ddict->entropy.OFTable;
dctx->HUFptr = ddict->entropy.hufTable;
dctx->entropy.rep[0] = ddict->entropy.rep[0];
dctx->entropy.rep[1] = ddict->entropy.rep[1];
dctx->entropy.rep[2] = ddict->entropy.rep[2];
} else {
dctx->litEntropy = 0;
dctx->fseEntropy = 0;
}
}
static size_t
ZSTD_loadEntropy_intoDDict(ZSTD_DDict* ddict,
ZSTD_dictContentType_e dictContentType)
{
ddict->dictID = 0;
ddict->entropyPresent = 0;
if (dictContentType == ZSTD_dct_rawContent) return 0;
if (ddict->dictSize < 8) {
if (dictContentType == ZSTD_dct_fullDict)
return ERROR(dictionary_corrupted); /* only accept specified dictionaries */
return 0; /* pure content mode */
}
{ U32 const magic = MEM_readLE32(ddict->dictContent);
if (magic != ZSTD_MAGIC_DICTIONARY) {
if (dictContentType == ZSTD_dct_fullDict)
return ERROR(dictionary_corrupted); /* only accept specified dictionaries */
return 0; /* pure content mode */
}
}
ddict->dictID = MEM_readLE32((const char*)ddict->dictContent + ZSTD_FRAMEIDSIZE);
/* load entropy tables */
RETURN_ERROR_IF(ZSTD_isError(ZSTD_loadDEntropy(
&ddict->entropy, ddict->dictContent, ddict->dictSize)),
dictionary_corrupted, "");
ddict->entropyPresent = 1;
return 0;
}
static size_t ZSTD_initDDict_internal(ZSTD_DDict* ddict,
const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType)
{
if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dict) || (!dictSize)) {
ddict->dictBuffer = NULL;
ddict->dictContent = dict;
if (!dict) dictSize = 0;
} else {
void* const internalBuffer = ZSTD_customMalloc(dictSize, ddict->cMem);
ddict->dictBuffer = internalBuffer;
ddict->dictContent = internalBuffer;
if (!internalBuffer) return ERROR(memory_allocation);
ZSTD_memcpy(internalBuffer, dict, dictSize);
}
ddict->dictSize = dictSize;
ddict->entropy.hufTable[0] = (HUF_DTable)((ZSTD_HUFFDTABLE_CAPACITY_LOG)*0x1000001); /* cover both little and big endian */
/* parse dictionary content */
FORWARD_IF_ERROR( ZSTD_loadEntropy_intoDDict(ddict, dictContentType) , "");
return 0;
}
ZSTD_DDict* ZSTD_createDDict_advanced(const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType,
ZSTD_customMem customMem)
{
if ((!customMem.customAlloc) ^ (!customMem.customFree)) return NULL;
{ ZSTD_DDict* const ddict = (ZSTD_DDict*) ZSTD_customMalloc(sizeof(ZSTD_DDict), customMem);
if (ddict == NULL) return NULL;
ddict->cMem = customMem;
{ size_t const initResult = ZSTD_initDDict_internal(ddict,
dict, dictSize,
dictLoadMethod, dictContentType);
if (ZSTD_isError(initResult)) {
ZSTD_freeDDict(ddict);
return NULL;
} }
return ddict;
}
}
/*! ZSTD_createDDict() :
* Create a digested dictionary, to start decompression without startup delay.
* `dict` content is copied inside DDict.
* Consequently, `dict` can be released after `ZSTD_DDict` creation */
ZSTD_DDict* ZSTD_createDDict(const void* dict, size_t dictSize)
{
ZSTD_customMem const allocator = { NULL, NULL, NULL };
return ZSTD_createDDict_advanced(dict, dictSize, ZSTD_dlm_byCopy, ZSTD_dct_auto, allocator);
}
/*! ZSTD_createDDict_byReference() :
* Create a digested dictionary, to start decompression without startup delay.
* Dictionary content is simply referenced, it will be accessed during decompression.
* Warning : dictBuffer must outlive DDict (DDict must be freed before dictBuffer) */
ZSTD_DDict* ZSTD_createDDict_byReference(const void* dictBuffer, size_t dictSize)
{
ZSTD_customMem const allocator = { NULL, NULL, NULL };
return ZSTD_createDDict_advanced(dictBuffer, dictSize, ZSTD_dlm_byRef, ZSTD_dct_auto, allocator);
}
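/* Illustrative usage sketch (not part of the library; `dictBuf` and `dictLen` are
 * hypothetical caller-side names). It shows the lifetime difference between the
 * two constructors above :
 *
 *   ZSTD_DDict* const dCopy = ZSTD_createDDict(dictBuf, dictLen);
 *   // dictBuf may be released right away : its content was copied into dCopy.
 *
 *   ZSTD_DDict* const dRef = ZSTD_createDDict_byReference(dictBuf, dictLen);
 *   // dictBuf must stay valid for as long as dRef is in use.
 *
 *   ZSTD_freeDDict(dRef);    // free the referencing DDict first,
 *   ZSTD_freeDDict(dCopy);   // then dictBuf may be released by the caller.
 */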
const ZSTD_DDict* ZSTD_initStaticDDict(
void* sBuffer, size_t sBufferSize,
const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType)
{
size_t const neededSpace = sizeof(ZSTD_DDict)
+ (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
ZSTD_DDict* const ddict = (ZSTD_DDict*)sBuffer;
assert(sBuffer != NULL);
assert(dict != NULL);
if ((size_t)sBuffer & 7) return NULL; /* 8-aligned */
if (sBufferSize < neededSpace) return NULL;
if (dictLoadMethod == ZSTD_dlm_byCopy) {
ZSTD_memcpy(ddict+1, dict, dictSize); /* local copy */
dict = ddict+1;
}
if (ZSTD_isError( ZSTD_initDDict_internal(ddict,
dict, dictSize,
ZSTD_dlm_byRef, dictContentType) ))
return NULL;
return ddict;
}
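/* Illustrative sketch of static allocation (not part of the library; `dictBuf`,
 * `dictLen` and the use of malloc() are assumptions about the caller) :
 *
 *   size_t const need = ZSTD_estimateDDictSize(dictLen, ZSTD_dlm_byCopy);
 *   void* const wksp = malloc(need);             // must be 8-byte aligned
 *   const ZSTD_DDict* const ddict = ZSTD_initStaticDDict(
 *           wksp, need, dictBuf, dictLen, ZSTD_dlm_byCopy, ZSTD_dct_auto);
 *   // ddict == NULL signals a too-small or misaligned workspace.
 *   // There is no matching "free" function : the DDict lives inside wksp,
 *   // so the caller simply releases wksp when done.
 */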
size_t ZSTD_freeDDict(ZSTD_DDict* ddict)
{
if (ddict==NULL) return 0; /* support free on NULL */
{ ZSTD_customMem const cMem = ddict->cMem;
ZSTD_customFree(ddict->dictBuffer, cMem);
ZSTD_customFree(ddict, cMem);
return 0;
}
}
/*! ZSTD_estimateDDictSize() :
* Estimate amount of memory that will be needed to create a dictionary for decompression.
* Note : dictionaries created by reference, using ZSTD_dlm_byRef, are smaller */
size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod)
{
return sizeof(ZSTD_DDict) + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
}
size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict)
{
if (ddict==NULL) return 0; /* support sizeof on NULL */
return sizeof(*ddict) + (ddict->dictBuffer ? ddict->dictSize : 0) ;
}
/*! ZSTD_getDictID_fromDDict() :
* Provides the dictID of the dictionary loaded into `ddict`.
* If @return == 0, the dictionary is not conformant to Zstandard specification, or empty.
* Non-conformant dictionaries can still be loaded, but as content-only dictionaries. */
unsigned ZSTD_getDictID_fromDDict(const ZSTD_DDict* ddict)
{
if (ddict==NULL) return 0;
return ddict->dictID;
}
/**** ended inlining decompress/zstd_ddict.c ****/
/**** start inlining decompress/zstd_decompress.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* ***************************************************************
* Tuning parameters
*****************************************************************/
/*!
* HEAPMODE :
* Select how default decompression function ZSTD_decompress() allocates its context,
* on stack (0), or into heap (1, default; requires malloc()).
* Note that functions with explicit context such as ZSTD_decompressDCtx() are unaffected.
*/
#ifndef ZSTD_HEAPMODE
# define ZSTD_HEAPMODE 1
#endif
/*!
* LEGACY_SUPPORT :
* if set to 1+, ZSTD_decompress() can decode older formats (v0.1+)
*/
#ifndef ZSTD_LEGACY_SUPPORT
# define ZSTD_LEGACY_SUPPORT 0
#endif
/*!
* MAXWINDOWSIZE_DEFAULT :
* maximum window size accepted by DStream __by default__.
* Frames requiring more memory will be rejected.
* It's possible to set a different limit using ZSTD_DCtx_setMaxWindowSize().
*/
#ifndef ZSTD_MAXWINDOWSIZE_DEFAULT
# define ZSTD_MAXWINDOWSIZE_DEFAULT (((U32)1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT) + 1)
#endif
/*!
* NO_FORWARD_PROGRESS_MAX :
* maximum allowed nb of calls to ZSTD_decompressStream()
* without any forward progress
* (defined as: no byte read from input, and no byte flushed to output)
* before triggering an error.
*/
#ifndef ZSTD_NO_FORWARD_PROGRESS_MAX
# define ZSTD_NO_FORWARD_PROGRESS_MAX 16
#endif
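/* Note : the tuning macros above are all #ifndef-guarded, so they can be overridden
 * at build time, e.g. (illustrative compiler invocation; the file name is hypothetical) :
 *   cc -DZSTD_HEAPMODE=0 -DZSTD_NO_FORWARD_PROGRESS_MAX=32 -c zstd_amalgamated.c
 */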
/*-*******************************************************
* Dependencies
*********************************************************/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/allocations.h ****/
/**** skipping file: ../common/error_private.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/bits.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** skipping file: ../common/xxhash.h ****/
/**** skipping file: zstd_decompress_internal.h ****/
/**** skipping file: zstd_ddict.h ****/
/**** start inlining zstd_decompress_block.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_DEC_BLOCK_H
#define ZSTD_DEC_BLOCK_H
/*-*******************************************************
* Dependencies
*********************************************************/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../zstd.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: zstd_decompress_internal.h ****/
/* === Prototypes === */
/* note: prototypes already published within `zstd.h` :
* ZSTD_decompressBlock()
*/
/* note: prototypes already published within `zstd_internal.h` :
* ZSTD_getcBlockSize()
* ZSTD_decodeSeqHeaders()
*/
/* Streaming state is used to inform allocation of the literal buffer */
typedef enum {
not_streaming = 0,
is_streaming = 1
} streaming_operation;
/* ZSTD_decompressBlock_internal() :
* decompress block, starting at `src`,
* into destination buffer `dst`.
* @return : decompressed block size,
* or an error code (which can be tested using ZSTD_isError())
*/
size_t ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize, const streaming_operation streaming);
/* ZSTD_buildFSETable() :
* generate FSE decoding table for one symbol (ll, ml or off)
* this function must be called with valid parameters only
* (dt is large enough, normalizedCounter distribution total is a power of 2, max is within range, etc.)
* in which case it cannot fail.
* The workspace must be 4-byte aligned and at least ZSTD_BUILD_FSE_TABLE_WKSP_SIZE bytes, which is
* defined in zstd_decompress_internal.h.
* Internal use only.
*/
void ZSTD_buildFSETable(ZSTD_seqSymbol* dt,
const short* normalizedCounter, unsigned maxSymbolValue,
const U32* baseValue, const U8* nbAdditionalBits,
unsigned tableLog, void* wksp, size_t wkspSize,
int bmi2);
/* Internal definition of ZSTD_decompressBlock() to avoid deprecation warnings. */
size_t ZSTD_decompressBlock_deprecated(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize);
#endif /* ZSTD_DEC_BLOCK_H */
/**** ended inlining zstd_decompress_block.h ****/
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
#error Using excluded file: ../legacy/zstd_legacy.h (re-amalgamate source to fix)
#endif
/*************************************
* Multiple DDicts Hashset internals *
*************************************/
#define DDICT_HASHSET_MAX_LOAD_FACTOR_COUNT_MULT 4
#define DDICT_HASHSET_MAX_LOAD_FACTOR_SIZE_MULT 3 /* These two constants represent SIZE_MULT/COUNT_MULT load factor without using a float.
* Currently, that means a 0.75 load factor.
* So, if count * COUNT_MULT / size * SIZE_MULT != 0, then we've exceeded
* the load factor of the ddict hash set.
*/
#define DDICT_HASHSET_TABLE_BASE_SIZE 64
#define DDICT_HASHSET_RESIZE_FACTOR 2
/* Hash function to determine starting position of dict insertion within the table
* Returns an index within [0, hashSet->ddictPtrTableSize)
*/
static size_t ZSTD_DDictHashSet_getIndex(const ZSTD_DDictHashSet* hashSet, U32 dictID) {
const U64 hash = XXH64(&dictID, sizeof(U32), 0);
/* DDict ptr table size is a power of 2, use size - 1 as mask to get index within [0, hashSet->ddictPtrTableSize) */
return hash & (hashSet->ddictPtrTableSize - 1);
}
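/* Illustrative example of the masking above (not part of the library) :
 * with ddictPtrTableSize == 64 (a power of 2), the mask is 63 == 0x3F, so a hash
 * value ending in ...0x15 yields starting index 0x15 == 21, always within [0, 64). */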
/* Adds DDict to a hashset without resizing it.
* If inserting a DDict with a dictID that already exists in the set, replaces the one in the set.
* Returns 0 if successful, or a zstd error code if something went wrong.
*/
static size_t ZSTD_DDictHashSet_emplaceDDict(ZSTD_DDictHashSet* hashSet, const ZSTD_DDict* ddict) {
const U32 dictID = ZSTD_getDictID_fromDDict(ddict);
size_t idx = ZSTD_DDictHashSet_getIndex(hashSet, dictID);
const size_t idxRangeMask = hashSet->ddictPtrTableSize - 1;
RETURN_ERROR_IF(hashSet->ddictPtrCount == hashSet->ddictPtrTableSize, GENERIC, "Hash set is full!");
DEBUGLOG(4, "Hashed index: for dictID: %u is %zu", dictID, idx);
while (hashSet->ddictPtrTable[idx] != NULL) {
/* Replace existing ddict if inserting ddict with same dictID */
if (ZSTD_getDictID_fromDDict(hashSet->ddictPtrTable[idx]) == dictID) {
DEBUGLOG(4, "DictID already exists, replacing rather than adding");
hashSet->ddictPtrTable[idx] = ddict;
return 0;
}
idx &= idxRangeMask;
idx++;
}
DEBUGLOG(4, "Final idx after probing for dictID %u is: %zu", dictID, idx);
hashSet->ddictPtrTable[idx] = ddict;
hashSet->ddictPtrCount++;
return 0;
}
/* Expands hash table by factor of DDICT_HASHSET_RESIZE_FACTOR and
* rehashes all values, allocates new table, frees old table.
* Returns 0 on success, otherwise a zstd error code.
*/
static size_t ZSTD_DDictHashSet_expand(ZSTD_DDictHashSet* hashSet, ZSTD_customMem customMem) {
size_t newTableSize = hashSet->ddictPtrTableSize * DDICT_HASHSET_RESIZE_FACTOR;
const ZSTD_DDict** newTable = (const ZSTD_DDict**)ZSTD_customCalloc(sizeof(ZSTD_DDict*) * newTableSize, customMem);
const ZSTD_DDict** oldTable = hashSet->ddictPtrTable;
size_t oldTableSize = hashSet->ddictPtrTableSize;
size_t i;
DEBUGLOG(4, "Expanding DDict hash table! Old size: %zu new size: %zu", oldTableSize, newTableSize);
RETURN_ERROR_IF(!newTable, memory_allocation, "Expanded hashset allocation failed!");
hashSet->ddictPtrTable = newTable;
hashSet->ddictPtrTableSize = newTableSize;
hashSet->ddictPtrCount = 0;
for (i = 0; i < oldTableSize; ++i) {
if (oldTable[i] != NULL) {
FORWARD_IF_ERROR(ZSTD_DDictHashSet_emplaceDDict(hashSet, oldTable[i]), "");
}
}
ZSTD_customFree((void*)oldTable, customMem);
DEBUGLOG(4, "Finished re-hash");
return 0;
}
/* Fetches a DDict with the given dictID
* Returns the ZSTD_DDict* with the requested dictID. If it doesn't exist, then returns NULL.
*/
static const ZSTD_DDict* ZSTD_DDictHashSet_getDDict(ZSTD_DDictHashSet* hashSet, U32 dictID) {
size_t idx = ZSTD_DDictHashSet_getIndex(hashSet, dictID);
const size_t idxRangeMask = hashSet->ddictPtrTableSize - 1;
DEBUGLOG(4, "Hashed index: for dictID: %u is %zu", dictID, idx);
for (;;) {
size_t currDictID = ZSTD_getDictID_fromDDict(hashSet->ddictPtrTable[idx]);
if (currDictID == dictID || currDictID == 0) {
/* currDictID == 0 implies a NULL ddict entry */
break;
} else {
idx &= idxRangeMask; /* Goes to start of table when we reach the end */
idx++;
}
}
DEBUGLOG(4, "Final idx after probing for dictID %u is: %zu", dictID, idx);
return hashSet->ddictPtrTable[idx];
}
/* Allocates space for and returns a ddict hash set
* The hash set's ZSTD_DDict* table has all values automatically set to NULL to begin with.
* Returns NULL if allocation failed.
*/
static ZSTD_DDictHashSet* ZSTD_createDDictHashSet(ZSTD_customMem customMem) {
ZSTD_DDictHashSet* ret = (ZSTD_DDictHashSet*)ZSTD_customMalloc(sizeof(ZSTD_DDictHashSet), customMem);
DEBUGLOG(4, "Allocating new hash set");
if (!ret)
return NULL;
ret->ddictPtrTable = (const ZSTD_DDict**)ZSTD_customCalloc(DDICT_HASHSET_TABLE_BASE_SIZE * sizeof(ZSTD_DDict*), customMem);
if (!ret->ddictPtrTable) {
ZSTD_customFree(ret, customMem);
return NULL;
}
ret->ddictPtrTableSize = DDICT_HASHSET_TABLE_BASE_SIZE;
ret->ddictPtrCount = 0;
return ret;
}
/* Frees the table of ZSTD_DDict* within a hashset, then frees the hashset itself.
* Note: The ZSTD_DDict* within the table are NOT freed.
*/
static void ZSTD_freeDDictHashSet(ZSTD_DDictHashSet* hashSet, ZSTD_customMem customMem) {
DEBUGLOG(4, "Freeing ddict hash set");
if (hashSet && hashSet->ddictPtrTable) {
ZSTD_customFree((void*)hashSet->ddictPtrTable, customMem);
}
if (hashSet) {
ZSTD_customFree(hashSet, customMem);
}
}
/* Public function: Adds a DDict into the ZSTD_DDictHashSet, possibly triggering a resize of the hash set.
* Returns 0 on success, or a ZSTD error.
*/
static size_t ZSTD_DDictHashSet_addDDict(ZSTD_DDictHashSet* hashSet, const ZSTD_DDict* ddict, ZSTD_customMem customMem) {
DEBUGLOG(4, "Adding dict ID: %u to hashset with - Count: %zu Tablesize: %zu", ZSTD_getDictID_fromDDict(ddict), hashSet->ddictPtrCount, hashSet->ddictPtrTableSize);
if (hashSet->ddictPtrCount * DDICT_HASHSET_MAX_LOAD_FACTOR_COUNT_MULT / hashSet->ddictPtrTableSize * DDICT_HASHSET_MAX_LOAD_FACTOR_SIZE_MULT != 0) {
FORWARD_IF_ERROR(ZSTD_DDictHashSet_expand(hashSet, customMem), "");
}
FORWARD_IF_ERROR(ZSTD_DDictHashSet_emplaceDDict(hashSet, ddict), "");
return 0;
}
/*-*************************************************************
* Context management
***************************************************************/
size_t ZSTD_sizeof_DCtx (const ZSTD_DCtx* dctx)
{
if (dctx==NULL) return 0; /* support sizeof NULL */
return sizeof(*dctx)
+ ZSTD_sizeof_DDict(dctx->ddictLocal)
+ dctx->inBuffSize + dctx->outBuffSize;
}
size_t ZSTD_estimateDCtxSize(void) { return sizeof(ZSTD_DCtx); }
static size_t ZSTD_startingInputLength(ZSTD_format_e format)
{
size_t const startingInputLength = ZSTD_FRAMEHEADERSIZE_PREFIX(format);
/* only supports formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless */
assert( (format == ZSTD_f_zstd1) || (format == ZSTD_f_zstd1_magicless) );
return startingInputLength;
}
static void ZSTD_DCtx_resetParameters(ZSTD_DCtx* dctx)
{
assert(dctx->streamStage == zdss_init);
dctx->format = ZSTD_f_zstd1;
dctx->maxWindowSize = ZSTD_MAXWINDOWSIZE_DEFAULT;
dctx->outBufferMode = ZSTD_bm_buffered;
dctx->forceIgnoreChecksum = ZSTD_d_validateChecksum;
dctx->refMultipleDDicts = ZSTD_rmd_refSingleDDict;
dctx->disableHufAsm = 0;
dctx->maxBlockSizeParam = 0;
}
static void ZSTD_initDCtx_internal(ZSTD_DCtx* dctx)
{
dctx->staticSize = 0;
dctx->ddict = NULL;
dctx->ddictLocal = NULL;
dctx->dictEnd = NULL;
dctx->ddictIsCold = 0;
dctx->dictUses = ZSTD_dont_use;
dctx->inBuff = NULL;
dctx->inBuffSize = 0;
dctx->outBuffSize = 0;
dctx->streamStage = zdss_init;
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
dctx->legacyContext = NULL;
dctx->previousLegacyVersion = 0;
#endif
dctx->noForwardProgress = 0;
dctx->oversizedDuration = 0;
dctx->isFrameDecompression = 1;
#if DYNAMIC_BMI2
dctx->bmi2 = ZSTD_cpuSupportsBmi2();
#endif
dctx->ddictSet = NULL;
ZSTD_DCtx_resetParameters(dctx);
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
dctx->dictContentEndForFuzzing = NULL;
#endif
}
ZSTD_DCtx* ZSTD_initStaticDCtx(void *workspace, size_t workspaceSize)
{
ZSTD_DCtx* const dctx = (ZSTD_DCtx*) workspace;
if ((size_t)workspace & 7) return NULL; /* 8-aligned */
if (workspaceSize < sizeof(ZSTD_DCtx)) return NULL; /* minimum size */
ZSTD_initDCtx_internal(dctx);
dctx->staticSize = workspaceSize;
dctx->inBuff = (char*)(dctx+1);
return dctx;
}
static ZSTD_DCtx* ZSTD_createDCtx_internal(ZSTD_customMem customMem) {
if ((!customMem.customAlloc) ^ (!customMem.customFree)) return NULL;
{ ZSTD_DCtx* const dctx = (ZSTD_DCtx*)ZSTD_customMalloc(sizeof(*dctx), customMem);
if (!dctx) return NULL;
dctx->customMem = customMem;
ZSTD_initDCtx_internal(dctx);
return dctx;
}
}
ZSTD_DCtx* ZSTD_createDCtx_advanced(ZSTD_customMem customMem)
{
return ZSTD_createDCtx_internal(customMem);
}
ZSTD_DCtx* ZSTD_createDCtx(void)
{
DEBUGLOG(3, "ZSTD_createDCtx");
return ZSTD_createDCtx_internal(ZSTD_defaultCMem);
}
static void ZSTD_clearDict(ZSTD_DCtx* dctx)
{
ZSTD_freeDDict(dctx->ddictLocal);
dctx->ddictLocal = NULL;
dctx->ddict = NULL;
dctx->dictUses = ZSTD_dont_use;
}
size_t ZSTD_freeDCtx(ZSTD_DCtx* dctx)
{
if (dctx==NULL) return 0; /* support free on NULL */
RETURN_ERROR_IF(dctx->staticSize, memory_allocation, "not compatible with static DCtx");
{ ZSTD_customMem const cMem = dctx->customMem;
ZSTD_clearDict(dctx);
ZSTD_customFree(dctx->inBuff, cMem);
dctx->inBuff = NULL;
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
if (dctx->legacyContext)
ZSTD_freeLegacyStreamContext(dctx->legacyContext, dctx->previousLegacyVersion);
#endif
if (dctx->ddictSet) {
ZSTD_freeDDictHashSet(dctx->ddictSet, cMem);
dctx->ddictSet = NULL;
}
ZSTD_customFree(dctx, cMem);
return 0;
}
}
/* no longer useful */
void ZSTD_copyDCtx(ZSTD_DCtx* dstDCtx, const ZSTD_DCtx* srcDCtx)
{
size_t const toCopy = (size_t)((char*)(&dstDCtx->inBuff) - (char*)dstDCtx);
ZSTD_memcpy(dstDCtx, srcDCtx, toCopy); /* no need to copy workspace */
}
/* Given a dctx with a digested frame params, re-selects the correct ZSTD_DDict based on
* the requested dict ID from the frame. If there exists a reference to the correct ZSTD_DDict, then
* accordingly sets the ddict to be used to decompress the frame.
*
* If no DDict is found, then no action is taken, and the ZSTD_DCtx::ddict remains as-is.
*
* ZSTD_d_refMultipleDDicts must be enabled for this function to be called.
*/
static void ZSTD_DCtx_selectFrameDDict(ZSTD_DCtx* dctx) {
assert(dctx->refMultipleDDicts && dctx->ddictSet);
DEBUGLOG(4, "Adjusting DDict based on requested dict ID from frame");
if (dctx->ddict) {
const ZSTD_DDict* frameDDict = ZSTD_DDictHashSet_getDDict(dctx->ddictSet, dctx->fParams.dictID);
if (frameDDict) {
DEBUGLOG(4, "DDict found!");
ZSTD_clearDict(dctx);
dctx->dictID = dctx->fParams.dictID;
dctx->ddict = frameDDict;
dctx->dictUses = ZSTD_use_indefinitely;
}
}
}
/*-*************************************************************
* Frame header decoding
***************************************************************/
/*! ZSTD_isFrame() :
* Tells if the content of `buffer` starts with a valid Frame Identifier.
* Note : Frame Identifier is 4 bytes. If `size < 4`, @return will always be 0.
* Note 2 : Legacy Frame Identifiers are considered valid only if Legacy Support is enabled.
* Note 3 : Skippable Frame Identifiers are considered valid. */
unsigned ZSTD_isFrame(const void* buffer, size_t size)
{
if (size < ZSTD_FRAMEIDSIZE) return 0;
{ U32 const magic = MEM_readLE32(buffer);
if (magic == ZSTD_MAGICNUMBER) return 1;
if ((magic & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) return 1;
}
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
if (ZSTD_isLegacy(buffer, size)) return 1;
#endif
return 0;
}
/*! ZSTD_isSkippableFrame() :
* Tells if the content of `buffer` starts with a valid Frame Identifier for a skippable frame.
* Note : Frame Identifier is 4 bytes. If `size < 4`, @return will always be 0.
*/
unsigned ZSTD_isSkippableFrame(const void* buffer, size_t size)
{
if (size < ZSTD_FRAMEIDSIZE) return 0;
{ U32 const magic = MEM_readLE32(buffer);
if ((magic & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) return 1;
}
return 0;
}
/** ZSTD_frameHeaderSize_internal() :
* srcSize must be large enough to reach header size fields.
* note : only works for formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless.
* @return : size of the Frame Header
* or an error code, which can be tested with ZSTD_isError() */
static size_t ZSTD_frameHeaderSize_internal(const void* src, size_t srcSize, ZSTD_format_e format)
{
size_t const minInputSize = ZSTD_startingInputLength(format);
RETURN_ERROR_IF(srcSize < minInputSize, srcSize_wrong, "");
{ BYTE const fhd = ((const BYTE*)src)[minInputSize-1];
U32 const dictID= fhd & 3;
U32 const singleSegment = (fhd >> 5) & 1;
U32 const fcsId = fhd >> 6;
return minInputSize + !singleSegment
+ ZSTD_did_fieldSize[dictID] + ZSTD_fcs_fieldSize[fcsId]
+ (singleSegment && !fcsId);
}
}
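/* Illustrative worked example (not part of the library), for format ZSTD_f_zstd1,
 * i.e. minInputSize == 5 (4-byte magic number + 1-byte frame header descriptor).
 * Suppose the descriptor byte is 0xC2 :
 *   dictID        = 0xC2 & 3        = 2   -> 2-byte Dictionary_ID field
 *   singleSegment = (0xC2 >> 5) & 1 = 0   -> a Window_Descriptor byte follows
 *   fcsId         = 0xC2 >> 6       = 3   -> 8-byte Frame_Content_Size field
 * Frame Header size = 5 + 1 + 2 + 8 + 0 = 16 bytes. */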
/** ZSTD_frameHeaderSize() :
* srcSize must be >= ZSTD_FRAMEHEADERSIZE_PREFIX(ZSTD_f_zstd1).
* @return : size of the Frame Header,
* or an error code (if srcSize is too small) */
size_t ZSTD_frameHeaderSize(const void* src, size_t srcSize)
{
return ZSTD_frameHeaderSize_internal(src, srcSize, ZSTD_f_zstd1);
}
/** ZSTD_getFrameHeader_advanced() :
* decode Frame Header, or require larger `srcSize`.
* note : only works for formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless
* @return : 0, `zfhPtr` is correctly filled,
* >0, `srcSize` is too small, value is wanted `srcSize` amount,
*           or an error code, which can be tested using ZSTD_isError() */
size_t ZSTD_getFrameHeader_advanced(ZSTD_FrameHeader* zfhPtr, const void* src, size_t srcSize, ZSTD_format_e format)
{
const BYTE* ip = (const BYTE*)src;
size_t const minInputSize = ZSTD_startingInputLength(format);
DEBUGLOG(5, "ZSTD_getFrameHeader_advanced: minInputSize = %zu, srcSize = %zu", minInputSize, srcSize);
if (srcSize > 0) {
/* note : technically could be considered an assert(), since it's an invalid entry */
RETURN_ERROR_IF(src==NULL, GENERIC, "invalid parameter : src==NULL, but srcSize>0");
}
if (srcSize < minInputSize) {
if (srcSize > 0 && format != ZSTD_f_zstd1_magicless) {
/* when receiving less than @minInputSize bytes,
* check that these bytes at least correspond to a supported magic number
* in order to error out early if they don't.
**/
size_t const toCopy = MIN(4, srcSize);
unsigned char hbuf[4]; MEM_writeLE32(hbuf, ZSTD_MAGICNUMBER);
assert(src != NULL);
ZSTD_memcpy(hbuf, src, toCopy);
if ( MEM_readLE32(hbuf) != ZSTD_MAGICNUMBER ) {
/* not a zstd frame : let's check if it's a skippable frame */
MEM_writeLE32(hbuf, ZSTD_MAGIC_SKIPPABLE_START);
ZSTD_memcpy(hbuf, src, toCopy);
if ((MEM_readLE32(hbuf) & ZSTD_MAGIC_SKIPPABLE_MASK) != ZSTD_MAGIC_SKIPPABLE_START) {
RETURN_ERROR(prefix_unknown,
"first bytes don't correspond to any supported magic number");
} } }
return minInputSize;
}
ZSTD_memset(zfhPtr, 0, sizeof(*zfhPtr)); /* not strictly necessary, but static analyzers may not understand that zfhPtr will be read only if return value is zero, since they are 2 different signals */
if ( (format != ZSTD_f_zstd1_magicless)
&& (MEM_readLE32(src) != ZSTD_MAGICNUMBER) ) {
if ((MEM_readLE32(src) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
/* skippable frame */
if (srcSize < ZSTD_SKIPPABLEHEADERSIZE)
return ZSTD_SKIPPABLEHEADERSIZE; /* magic number + frame length */
ZSTD_memset(zfhPtr, 0, sizeof(*zfhPtr));
zfhPtr->frameType = ZSTD_skippableFrame;
zfhPtr->dictID = MEM_readLE32(src) - ZSTD_MAGIC_SKIPPABLE_START;
zfhPtr->headerSize = ZSTD_SKIPPABLEHEADERSIZE;
zfhPtr->frameContentSize = MEM_readLE32((const char *)src + ZSTD_FRAMEIDSIZE);
return 0;
}
RETURN_ERROR(prefix_unknown, "");
}
/* ensure there is enough `srcSize` to fully read/decode frame header */
{ size_t const fhsize = ZSTD_frameHeaderSize_internal(src, srcSize, format);
if (srcSize < fhsize) return fhsize;
zfhPtr->headerSize = (U32)fhsize;
}
{ BYTE const fhdByte = ip[minInputSize-1];
size_t pos = minInputSize;
U32 const dictIDSizeCode = fhdByte&3;
U32 const checksumFlag = (fhdByte>>2)&1;
U32 const singleSegment = (fhdByte>>5)&1;
U32 const fcsID = fhdByte>>6;
U64 windowSize = 0;
U32 dictID = 0;
U64 frameContentSize = ZSTD_CONTENTSIZE_UNKNOWN;
RETURN_ERROR_IF((fhdByte & 0x08) != 0, frameParameter_unsupported,
"reserved bits, must be zero");
if (!singleSegment) {
BYTE const wlByte = ip[pos++];
U32 const windowLog = (wlByte >> 3) + ZSTD_WINDOWLOG_ABSOLUTEMIN;
RETURN_ERROR_IF(windowLog > ZSTD_WINDOWLOG_MAX, frameParameter_windowTooLarge, "");
windowSize = (1ULL << windowLog);
windowSize += (windowSize >> 3) * (wlByte&7);
}
switch(dictIDSizeCode)
{
default:
assert(0); /* impossible */
ZSTD_FALLTHROUGH;
case 0 : break;
case 1 : dictID = ip[pos]; pos++; break;
case 2 : dictID = MEM_readLE16(ip+pos); pos+=2; break;
case 3 : dictID = MEM_readLE32(ip+pos); pos+=4; break;
}
switch(fcsID)
{
default:
assert(0); /* impossible */
ZSTD_FALLTHROUGH;
case 0 : if (singleSegment) frameContentSize = ip[pos]; break;
case 1 : frameContentSize = MEM_readLE16(ip+pos)+256; break;
case 2 : frameContentSize = MEM_readLE32(ip+pos); break;
case 3 : frameContentSize = MEM_readLE64(ip+pos); break;
}
if (singleSegment) windowSize = frameContentSize;
zfhPtr->frameType = ZSTD_frame;
zfhPtr->frameContentSize = frameContentSize;
zfhPtr->windowSize = windowSize;
zfhPtr->blockSizeMax = (unsigned) MIN(windowSize, ZSTD_BLOCKSIZE_MAX);
zfhPtr->dictID = dictID;
zfhPtr->checksumFlag = checksumFlag;
}
return 0;
}
/** ZSTD_getFrameHeader() :
* decode Frame Header, or require larger `srcSize`.
* note : this function does not consume input, it only reads it.
* @return : 0, `zfhPtr` is correctly filled,
* >0, `srcSize` is too small, value is wanted `srcSize` amount,
* or an error code, which can be tested using ZSTD_isError() */
size_t ZSTD_getFrameHeader(ZSTD_FrameHeader* zfhPtr, const void* src, size_t srcSize)
{
return ZSTD_getFrameHeader_advanced(zfhPtr, src, srcSize, ZSTD_f_zstd1);
}
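/* Illustrative sketch of the return convention above (not part of the library;
 * `buf` and `filled` are hypothetical caller-side names) :
 *
 *   ZSTD_FrameHeader zfh;
 *   size_t const r = ZSTD_getFrameHeader(&zfh, buf, filled);
 *   if (ZSTD_isError(r)) { ... }   // invalid or unsupported header
 *   else if (r > 0) { ... }        // need (r - filled) more input bytes, then retry
 *   else { ... }                   // zfh is filled : frameContentSize, windowSize, etc.
 */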
/** ZSTD_getFrameContentSize() :
* compatible with legacy mode
* @return : decompressed size of the single frame pointed to by `src` if known, otherwise
* - ZSTD_CONTENTSIZE_UNKNOWN if the size cannot be determined
* - ZSTD_CONTENTSIZE_ERROR if an error occurred (e.g. invalid magic number, srcSize too small) */
unsigned long long ZSTD_getFrameContentSize(const void *src, size_t srcSize)
{
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
if (ZSTD_isLegacy(src, srcSize)) {
unsigned long long const ret = ZSTD_getDecompressedSize_legacy(src, srcSize);
return ret == 0 ? ZSTD_CONTENTSIZE_UNKNOWN : ret;
}
#endif
{ ZSTD_FrameHeader zfh;
if (ZSTD_getFrameHeader(&zfh, src, srcSize) != 0)
return ZSTD_CONTENTSIZE_ERROR;
if (zfh.frameType == ZSTD_skippableFrame) {
return 0;
} else {
return zfh.frameContentSize;
} }
}
static size_t readSkippableFrameSize(void const* src, size_t srcSize)
{
size_t const skippableHeaderSize = ZSTD_SKIPPABLEHEADERSIZE;
U32 sizeU32;
RETURN_ERROR_IF(srcSize < ZSTD_SKIPPABLEHEADERSIZE, srcSize_wrong, "");
sizeU32 = MEM_readLE32((BYTE const*)src + ZSTD_FRAMEIDSIZE);
RETURN_ERROR_IF((U32)(sizeU32 + ZSTD_SKIPPABLEHEADERSIZE) < sizeU32,
frameParameter_unsupported, "");
{ size_t const skippableSize = skippableHeaderSize + sizeU32;
RETURN_ERROR_IF(skippableSize > srcSize, srcSize_wrong, "");
return skippableSize;
}
}
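/* Illustrative note (not part of the library) : the wrap-around check above rejects,
 * e.g., sizeU32 == 0xFFFFFFFC, because 0xFFFFFFFC + 8 wraps to 4 as a U32,
 * which is indeed < 0xFFFFFFFC, so frameParameter_unsupported is returned. */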
/*! ZSTD_readSkippableFrame() :
* Retrieves content of a skippable frame, and writes it to dst buffer.
*
* The parameter magicVariant will receive the magicVariant that was supplied when the frame was written,
* i.e. magicNumber - ZSTD_MAGIC_SKIPPABLE_START. This can be NULL if the caller is not interested
* in the magicVariant.
*
* Returns an error if destination buffer is not large enough, or if this is not a valid skippable frame.
*
* @return : number of bytes written or a ZSTD error.
*/
size_t ZSTD_readSkippableFrame(void* dst, size_t dstCapacity,
unsigned* magicVariant, /* optional, can be NULL */
const void* src, size_t srcSize)
{
RETURN_ERROR_IF(srcSize < ZSTD_SKIPPABLEHEADERSIZE, srcSize_wrong, "");
{ U32 const magicNumber = MEM_readLE32(src);
size_t skippableFrameSize = readSkippableFrameSize(src, srcSize);
size_t skippableContentSize = skippableFrameSize - ZSTD_SKIPPABLEHEADERSIZE;
/* check input validity */
RETURN_ERROR_IF(!ZSTD_isSkippableFrame(src, srcSize), frameParameter_unsupported, "");
RETURN_ERROR_IF(skippableFrameSize < ZSTD_SKIPPABLEHEADERSIZE || skippableFrameSize > srcSize, srcSize_wrong, "");
RETURN_ERROR_IF(skippableContentSize > dstCapacity, dstSize_tooSmall, "");
/* deliver payload */
if (skippableContentSize > 0 && dst != NULL)
ZSTD_memcpy(dst, (const BYTE *)src + ZSTD_SKIPPABLEHEADERSIZE, skippableContentSize);
if (magicVariant != NULL)
*magicVariant = magicNumber - ZSTD_MAGIC_SKIPPABLE_START;
return skippableContentSize;
}
}
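/* Illustrative usage sketch (not part of the library; `src`, `srcSize`, `dst` and
 * `dstCapacity` are hypothetical caller-side buffers) :
 *
 *   if (ZSTD_isSkippableFrame(src, srcSize)) {
 *       unsigned variant;
 *       size_t const written = ZSTD_readSkippableFrame(dst, dstCapacity,
 *                                                      &variant, src, srcSize);
 *       if (!ZSTD_isError(written)) {
 *           // `written` bytes of user payload were copied into dst, and the frame
 *           // magic number was ZSTD_MAGIC_SKIPPABLE_START + variant.
 *       }
 *   }
 */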
/** ZSTD_findDecompressedSize() :
* `srcSize` must be the exact length of some number of ZSTD compressed and/or
* skippable frames
* note: compatible with legacy mode
* @return : decompressed size of the frames contained */
unsigned long long ZSTD_findDecompressedSize(const void* src, size_t srcSize)
{
unsigned long long totalDstSize = 0;
while (srcSize >= ZSTD_startingInputLength(ZSTD_f_zstd1)) {
U32 const magicNumber = MEM_readLE32(src);
if ((magicNumber & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
size_t const skippableSize = readSkippableFrameSize(src, srcSize);
if (ZSTD_isError(skippableSize)) return ZSTD_CONTENTSIZE_ERROR;
assert(skippableSize <= srcSize);
src = (const BYTE *)src + skippableSize;
srcSize -= skippableSize;
continue;
}
{ unsigned long long const fcs = ZSTD_getFrameContentSize(src, srcSize);
if (fcs >= ZSTD_CONTENTSIZE_ERROR) return fcs;
if (totalDstSize + fcs < totalDstSize)
return ZSTD_CONTENTSIZE_ERROR; /* check for overflow */
totalDstSize += fcs;
}
/* skip to next frame */
{ size_t const frameSrcSize = ZSTD_findFrameCompressedSize(src, srcSize);
if (ZSTD_isError(frameSrcSize)) return ZSTD_CONTENTSIZE_ERROR;
assert(frameSrcSize <= srcSize);
src = (const BYTE *)src + frameSrcSize;
srcSize -= frameSrcSize;
}
} /* while (srcSize >= ZSTD_startingInputLength(ZSTD_f_zstd1)) */
if (srcSize) return ZSTD_CONTENTSIZE_ERROR;
return totalDstSize;
}
/** ZSTD_getDecompressedSize() :
* compatible with legacy mode
* @return : decompressed size if known, 0 otherwise
note : 0 can mean any of the following :
- frame content is empty
- decompressed size field is not present in frame header
- frame header unknown / not supported
- frame header not complete (`srcSize` too small) */
unsigned long long ZSTD_getDecompressedSize(const void* src, size_t srcSize)
{
unsigned long long const ret = ZSTD_getFrameContentSize(src, srcSize);
ZSTD_STATIC_ASSERT(ZSTD_CONTENTSIZE_ERROR < ZSTD_CONTENTSIZE_UNKNOWN);
return (ret >= ZSTD_CONTENTSIZE_ERROR) ? 0 : ret;
}
/** ZSTD_decodeFrameHeader() :
* `headerSize` must be the size provided by ZSTD_frameHeaderSize().
* If multiple DDict references are enabled, also will choose the correct DDict to use.
* @return : 0 if success, or an error code, which can be tested using ZSTD_isError() */
static size_t ZSTD_decodeFrameHeader(ZSTD_DCtx* dctx, const void* src, size_t headerSize)
{
size_t const result = ZSTD_getFrameHeader_advanced(&(dctx->fParams), src, headerSize, dctx->format);
if (ZSTD_isError(result)) return result; /* invalid header */
RETURN_ERROR_IF(result>0, srcSize_wrong, "headerSize too small");
/* Reference DDict requested by frame if dctx references multiple ddicts */
if (dctx->refMultipleDDicts == ZSTD_rmd_refMultipleDDicts && dctx->ddictSet) {
ZSTD_DCtx_selectFrameDDict(dctx);
}
#ifndef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
/* Skip the dictID check in fuzzing mode, because it makes the search
* harder.
*/
RETURN_ERROR_IF(dctx->fParams.dictID && (dctx->dictID != dctx->fParams.dictID),
dictionary_wrong, "");
#endif
dctx->validateChecksum = (dctx->fParams.checksumFlag && !dctx->forceIgnoreChecksum) ? 1 : 0;
if (dctx->validateChecksum) XXH64_reset(&dctx->xxhState, 0);
dctx->processedCSize += headerSize;
return 0;
}
static ZSTD_frameSizeInfo ZSTD_errorFrameSizeInfo(size_t ret)
{
ZSTD_frameSizeInfo frameSizeInfo;
frameSizeInfo.compressedSize = ret;
frameSizeInfo.decompressedBound = ZSTD_CONTENTSIZE_ERROR;
return frameSizeInfo;
}
static ZSTD_frameSizeInfo ZSTD_findFrameSizeInfo(const void* src, size_t srcSize, ZSTD_format_e format)
{
ZSTD_frameSizeInfo frameSizeInfo;
ZSTD_memset(&frameSizeInfo, 0, sizeof(ZSTD_frameSizeInfo));
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
if (format == ZSTD_f_zstd1 && ZSTD_isLegacy(src, srcSize))
return ZSTD_findFrameSizeInfoLegacy(src, srcSize);
#endif
if (format == ZSTD_f_zstd1 && (srcSize >= ZSTD_SKIPPABLEHEADERSIZE)
&& (MEM_readLE32(src) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
frameSizeInfo.compressedSize = readSkippableFrameSize(src, srcSize);
assert(ZSTD_isError(frameSizeInfo.compressedSize) ||
frameSizeInfo.compressedSize <= srcSize);
return frameSizeInfo;
} else {
const BYTE* ip = (const BYTE*)src;
const BYTE* const ipstart = ip;
size_t remainingSize = srcSize;
size_t nbBlocks = 0;
ZSTD_FrameHeader zfh;
/* Extract Frame Header */
{ size_t const ret = ZSTD_getFrameHeader_advanced(&zfh, src, srcSize, format);
if (ZSTD_isError(ret))
return ZSTD_errorFrameSizeInfo(ret);
if (ret > 0)
return ZSTD_errorFrameSizeInfo(ERROR(srcSize_wrong));
}
ip += zfh.headerSize;
remainingSize -= zfh.headerSize;
/* Iterate over each block */
while (1) {
blockProperties_t blockProperties;
size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties);
if (ZSTD_isError(cBlockSize))
return ZSTD_errorFrameSizeInfo(cBlockSize);
if (ZSTD_blockHeaderSize + cBlockSize > remainingSize)
return ZSTD_errorFrameSizeInfo(ERROR(srcSize_wrong));
ip += ZSTD_blockHeaderSize + cBlockSize;
remainingSize -= ZSTD_blockHeaderSize + cBlockSize;
nbBlocks++;
if (blockProperties.lastBlock) break;
}
/* Final frame content checksum */
if (zfh.checksumFlag) {
if (remainingSize < 4)
return ZSTD_errorFrameSizeInfo(ERROR(srcSize_wrong));
ip += 4;
}
frameSizeInfo.nbBlocks = nbBlocks;
frameSizeInfo.compressedSize = (size_t)(ip - ipstart);
frameSizeInfo.decompressedBound = (zfh.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN)
? zfh.frameContentSize
: (unsigned long long)nbBlocks * zfh.blockSizeMax;
return frameSizeInfo;
}
}
static size_t ZSTD_findFrameCompressedSize_advanced(const void *src, size_t srcSize, ZSTD_format_e format) {
ZSTD_frameSizeInfo const frameSizeInfo = ZSTD_findFrameSizeInfo(src, srcSize, format);
return frameSizeInfo.compressedSize;
}
/** ZSTD_findFrameCompressedSize() :
* See docs in zstd.h
* Note: compatible with legacy mode */
size_t ZSTD_findFrameCompressedSize(const void *src, size_t srcSize)
{
return ZSTD_findFrameCompressedSize_advanced(src, srcSize, ZSTD_f_zstd1);
}
/** ZSTD_decompressBound() :
* compatible with legacy mode
* `src` must point to the start of a ZSTD frame or a skippable frame
* `srcSize` must be at least as large as the frame contained
* @return : the maximum decompressed size of the compressed source
*/
unsigned long long ZSTD_decompressBound(const void* src, size_t srcSize)
{
unsigned long long bound = 0;
/* Iterate over each frame */
while (srcSize > 0) {
ZSTD_frameSizeInfo const frameSizeInfo = ZSTD_findFrameSizeInfo(src, srcSize, ZSTD_f_zstd1);
size_t const compressedSize = frameSizeInfo.compressedSize;
unsigned long long const decompressedBound = frameSizeInfo.decompressedBound;
if (ZSTD_isError(compressedSize) || decompressedBound == ZSTD_CONTENTSIZE_ERROR)
return ZSTD_CONTENTSIZE_ERROR;
assert(srcSize >= compressedSize);
src = (const BYTE*)src + compressedSize;
srcSize -= compressedSize;
bound += decompressedBound;
}
return bound;
}
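/* Usage sketch (illustrative) : sizing a destination buffer with ZSTD_decompressBound()
* when the frame headers do not announce an exact content size. The names buf, bufSize
* and dst are hypothetical. Note that the bound can be much larger than the real
* regenerated size, since unknown-size frames are bounded by nbBlocks * blockSizeMax.
*
*   unsigned long long const bound = ZSTD_decompressBound(buf, bufSize);
*   if (bound == ZSTD_CONTENTSIZE_ERROR) return ERROR(corruption_detected);
*   dst = malloc((size_t)bound);
*   { size_t const dSize = ZSTD_decompress(dst, (size_t)bound, buf, bufSize);
*     if (ZSTD_isError(dSize)) return dSize;
*   }
*/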
size_t ZSTD_decompressionMargin(void const* src, size_t srcSize)
{
size_t margin = 0;
unsigned maxBlockSize = 0;
/* Iterate over each frame */
while (srcSize > 0) {
ZSTD_frameSizeInfo const frameSizeInfo = ZSTD_findFrameSizeInfo(src, srcSize, ZSTD_f_zstd1);
size_t const compressedSize = frameSizeInfo.compressedSize;
unsigned long long const decompressedBound = frameSizeInfo.decompressedBound;
ZSTD_FrameHeader zfh;
FORWARD_IF_ERROR(ZSTD_getFrameHeader(&zfh, src, srcSize), "");
if (ZSTD_isError(compressedSize) || decompressedBound == ZSTD_CONTENTSIZE_ERROR)
return ERROR(corruption_detected);
if (zfh.frameType == ZSTD_frame) {
/* Add the frame header to our margin */
margin += zfh.headerSize;
/* Add the checksum to our margin */
margin += zfh.checksumFlag ? 4 : 0;
/* Add 3 bytes per block */
margin += 3 * frameSizeInfo.nbBlocks;
/* Compute the max block size */
maxBlockSize = MAX(maxBlockSize, zfh.blockSizeMax);
} else {
assert(zfh.frameType == ZSTD_skippableFrame);
/* Add the entire skippable frame size to our margin. */
margin += compressedSize;
}
assert(srcSize >= compressedSize);
src = (const BYTE*)src + compressedSize;
srcSize -= compressedSize;
}
/* Add the max block size back to the margin. */
margin += maxBlockSize;
return margin;
}
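/* Usage sketch (illustrative, under assumptions) : the in-place layout this margin is
* meant for, assuming the usual arrangement where the compressed input occupies the
* last cSrcSize bytes of the output buffer. cSrc, cSrcSize, originalSize and buf are
* hypothetical names; the exact buffer arithmetic should be checked against zstd.h.
*
*   size_t const margin  = ZSTD_decompressionMargin(cSrc, cSrcSize);
*   size_t const bufSize = originalSize + margin;
*   memcpy(buf + bufSize - cSrcSize, cSrc, cSrcSize);
*   { size_t const dSize = ZSTD_decompressDCtx(dctx, buf, originalSize,
*                                              buf + bufSize - cSrcSize, cSrcSize);
*     if (ZSTD_isError(dSize)) return dSize;
*   }
*/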
/*-*************************************************************
* Frame decoding
***************************************************************/
/** ZSTD_insertBlock() :
* insert `src` block into `dctx` history. Useful to track uncompressed blocks. */
size_t ZSTD_insertBlock(ZSTD_DCtx* dctx, const void* blockStart, size_t blockSize)
{
DEBUGLOG(5, "ZSTD_insertBlock: %u bytes", (unsigned)blockSize);
ZSTD_checkContinuity(dctx, blockStart, blockSize);
dctx->previousDstEnd = (const char*)blockStart + blockSize;
return blockSize;
}
static size_t ZSTD_copyRawBlock(void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
DEBUGLOG(5, "ZSTD_copyRawBlock");
RETURN_ERROR_IF(srcSize > dstCapacity, dstSize_tooSmall, "");
if (dst == NULL) {
if (srcSize == 0) return 0;
RETURN_ERROR(dstBuffer_null, "");
}
ZSTD_memmove(dst, src, srcSize);
return srcSize;
}
static size_t ZSTD_setRleBlock(void* dst, size_t dstCapacity,
BYTE b,
size_t regenSize)
{
RETURN_ERROR_IF(regenSize > dstCapacity, dstSize_tooSmall, "");
if (dst == NULL) {
if (regenSize == 0) return 0;
RETURN_ERROR(dstBuffer_null, "");
}
ZSTD_memset(dst, b, regenSize);
return regenSize;
}
static void ZSTD_DCtx_trace_end(ZSTD_DCtx const* dctx, U64 uncompressedSize, U64 compressedSize, int streaming)
{
#if ZSTD_TRACE
if (dctx->traceCtx && ZSTD_trace_decompress_end != NULL) {
ZSTD_Trace trace;
ZSTD_memset(&trace, 0, sizeof(trace));
trace.version = ZSTD_VERSION_NUMBER;
trace.streaming = streaming;
if (dctx->ddict) {
trace.dictionaryID = ZSTD_getDictID_fromDDict(dctx->ddict);
trace.dictionarySize = ZSTD_DDict_dictSize(dctx->ddict);
trace.dictionaryIsCold = dctx->ddictIsCold;
}
trace.uncompressedSize = (size_t)uncompressedSize;
trace.compressedSize = (size_t)compressedSize;
trace.dctx = dctx;
ZSTD_trace_decompress_end(dctx->traceCtx, &trace);
}
#else
(void)dctx;
(void)uncompressedSize;
(void)compressedSize;
(void)streaming;
#endif
}
/*! ZSTD_decompressFrame() :
* @dctx must be properly initialized
* will update *srcPtr and *srcSizePtr,
* to make *srcPtr progress by one frame. */
static size_t ZSTD_decompressFrame(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void** srcPtr, size_t *srcSizePtr)
{
const BYTE* const istart = (const BYTE*)(*srcPtr);
const BYTE* ip = istart;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = dstCapacity != 0 ? ostart + dstCapacity : ostart;
BYTE* op = ostart;
size_t remainingSrcSize = *srcSizePtr;
DEBUGLOG(4, "ZSTD_decompressFrame (srcSize:%i)", (int)*srcSizePtr);
/* check */
RETURN_ERROR_IF(
remainingSrcSize < ZSTD_FRAMEHEADERSIZE_MIN(dctx->format)+ZSTD_blockHeaderSize,
srcSize_wrong, "");
/* Frame Header */
{ size_t const frameHeaderSize = ZSTD_frameHeaderSize_internal(
ip, ZSTD_FRAMEHEADERSIZE_PREFIX(dctx->format), dctx->format);
if (ZSTD_isError(frameHeaderSize)) return frameHeaderSize;
RETURN_ERROR_IF(remainingSrcSize < frameHeaderSize+ZSTD_blockHeaderSize,
srcSize_wrong, "");
FORWARD_IF_ERROR( ZSTD_decodeFrameHeader(dctx, ip, frameHeaderSize) , "");
ip += frameHeaderSize; remainingSrcSize -= frameHeaderSize;
}
/* Shrink the blockSizeMax if enabled */
if (dctx->maxBlockSizeParam != 0)
dctx->fParams.blockSizeMax = MIN(dctx->fParams.blockSizeMax, (unsigned)dctx->maxBlockSizeParam);
/* Loop on each block */
while (1) {
BYTE* oBlockEnd = oend;
size_t decodedSize;
blockProperties_t blockProperties;
size_t cBlockSize;
ZSTD_memset(&blockProperties, 0, sizeof(blockProperties)); /* silence a spurious "maybe uninitialized" warning */
cBlockSize = ZSTD_getcBlockSize(ip, remainingSrcSize, &blockProperties);
if (ZSTD_isError(cBlockSize)) return cBlockSize;
ip += ZSTD_blockHeaderSize;
remainingSrcSize -= ZSTD_blockHeaderSize;
RETURN_ERROR_IF(cBlockSize > remainingSrcSize, srcSize_wrong, "");
if (ip >= op && ip < oBlockEnd) {
/* We are decompressing in-place. Limit the output pointer so that we
* don't overwrite the block that we are currently reading. This will
* fail decompression if the input & output pointers aren't spaced
* far enough apart.
*
* This is important to set, even when the pointers are far enough
* apart, because ZSTD_decompressBlock_internal() can decide to store
* literals in the output buffer, after the block it is decompressing.
* Since we don't want anything to overwrite our input, we have to tell
* ZSTD_decompressBlock_internal to never write past ip.
*
* See ZSTD_allocateLiteralsBuffer() for reference.
*/
oBlockEnd = op + (ip - op);
}
switch(blockProperties.blockType)
{
case bt_compressed:
assert(dctx->isFrameDecompression == 1);
decodedSize = ZSTD_decompressBlock_internal(dctx, op, (size_t)(oBlockEnd-op), ip, cBlockSize, not_streaming);
break;
case bt_raw :
/* Use oend instead of oBlockEnd because this function is safe to overlap. It uses memmove. */
decodedSize = ZSTD_copyRawBlock(op, (size_t)(oend-op), ip, cBlockSize);
break;
case bt_rle :
decodedSize = ZSTD_setRleBlock(op, (size_t)(oBlockEnd-op), *ip, blockProperties.origSize);
break;
case bt_reserved :
default:
RETURN_ERROR(corruption_detected, "invalid block type");
}
FORWARD_IF_ERROR(decodedSize, "Block decompression failure");
DEBUGLOG(5, "Decompressed block of dSize = %u", (unsigned)decodedSize);
if (dctx->validateChecksum) {
XXH64_update(&dctx->xxhState, op, decodedSize);
}
if (decodedSize) /* support dst = NULL,0 */ {
op += decodedSize;
}
assert(ip != NULL);
ip += cBlockSize;
remainingSrcSize -= cBlockSize;
if (blockProperties.lastBlock) break;
}
if (dctx->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN) {
RETURN_ERROR_IF((U64)(op-ostart) != dctx->fParams.frameContentSize,
corruption_detected, "");
}
if (dctx->fParams.checksumFlag) { /* Frame content checksum verification */
RETURN_ERROR_IF(remainingSrcSize<4, checksum_wrong, "");
if (!dctx->forceIgnoreChecksum) {
U32 const checkCalc = (U32)XXH64_digest(&dctx->xxhState);
U32 checkRead;
checkRead = MEM_readLE32(ip);
RETURN_ERROR_IF(checkRead != checkCalc, checksum_wrong, "");
}
ip += 4;
remainingSrcSize -= 4;
}
ZSTD_DCtx_trace_end(dctx, (U64)(op-ostart), (U64)(ip-istart), /* streaming */ 0);
/* Allow caller to get size read */
DEBUGLOG(4, "ZSTD_decompressFrame: decompressed frame of size %i, consuming %i bytes of input", (int)(op-ostart), (int)(ip - (const BYTE*)*srcPtr));
*srcPtr = ip;
*srcSizePtr = remainingSrcSize;
return (size_t)(op-ostart);
}
static
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_decompressMultiFrame(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const void* dict, size_t dictSize,
const ZSTD_DDict* ddict)
{
void* const dststart = dst;
int moreThan1Frame = 0;
DEBUGLOG(5, "ZSTD_decompressMultiFrame");
assert(dict==NULL || ddict==NULL); /* either dict or ddict set, not both */
if (ddict) {
dict = ZSTD_DDict_dictContent(ddict);
dictSize = ZSTD_DDict_dictSize(ddict);
}
while (srcSize >= ZSTD_startingInputLength(dctx->format)) {
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
if (dctx->format == ZSTD_f_zstd1 && ZSTD_isLegacy(src, srcSize)) {
size_t decodedSize;
size_t const frameSize = ZSTD_findFrameCompressedSizeLegacy(src, srcSize);
if (ZSTD_isError(frameSize)) return frameSize;
RETURN_ERROR_IF(dctx->staticSize, memory_allocation,
"legacy support is not compatible with static dctx");
decodedSize = ZSTD_decompressLegacy(dst, dstCapacity, src, frameSize, dict, dictSize);
if (ZSTD_isError(decodedSize)) return decodedSize;
{
unsigned long long const expectedSize = ZSTD_getFrameContentSize(src, srcSize);
RETURN_ERROR_IF(expectedSize == ZSTD_CONTENTSIZE_ERROR, corruption_detected, "Corrupted frame header!");
if (expectedSize != ZSTD_CONTENTSIZE_UNKNOWN) {
RETURN_ERROR_IF(expectedSize != decodedSize, corruption_detected,
"Frame header size does not match decoded size!");
}
}
assert(decodedSize <= dstCapacity);
dst = (BYTE*)dst + decodedSize;
dstCapacity -= decodedSize;
src = (const BYTE*)src + frameSize;
srcSize -= frameSize;
continue;
}
#endif
if (dctx->format == ZSTD_f_zstd1 && srcSize >= 4) {
U32 const magicNumber = MEM_readLE32(src);
DEBUGLOG(5, "reading magic number %08X", (unsigned)magicNumber);
if ((magicNumber & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
/* skippable frame detected : skip it */
size_t const skippableSize = readSkippableFrameSize(src, srcSize);
FORWARD_IF_ERROR(skippableSize, "invalid skippable frame");
assert(skippableSize <= srcSize);
src = (const BYTE *)src + skippableSize;
srcSize -= skippableSize;
continue; /* check next frame */
} }
if (ddict) {
/* we were called from ZSTD_decompress_usingDDict */
FORWARD_IF_ERROR(ZSTD_decompressBegin_usingDDict(dctx, ddict), "");
} else {
/* this will initialize correctly with no dict if dict == NULL, so
* use this in all cases but ddict */
FORWARD_IF_ERROR(ZSTD_decompressBegin_usingDict(dctx, dict, dictSize), "");
}
ZSTD_checkContinuity(dctx, dst, dstCapacity);
{ const size_t res = ZSTD_decompressFrame(dctx, dst, dstCapacity,
&src, &srcSize);
RETURN_ERROR_IF(
(ZSTD_getErrorCode(res) == ZSTD_error_prefix_unknown)
&& (moreThan1Frame==1),
srcSize_wrong,
"At least one frame successfully completed, "
"but following bytes are garbage: "
"it's more likely to be a srcSize error, "
"specifying more input bytes than size of frame(s). "
"Note: one could be unlucky, it might be a corruption error instead, "
"happening right at the place where we expect zstd magic bytes. "
"But this is _much_ less likely than a srcSize field error.");
if (ZSTD_isError(res)) return res;
assert(res <= dstCapacity);
if (res != 0)
dst = (BYTE*)dst + res;
dstCapacity -= res;
}
moreThan1Frame = 1;
} /* while (srcSize >= ZSTD_frameHeaderSize_prefix) */
RETURN_ERROR_IF(srcSize, srcSize_wrong, "input not entirely consumed");
return (size_t)((BYTE*)dst - (BYTE*)dststart);
}
size_t ZSTD_decompress_usingDict(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const void* dict, size_t dictSize)
{
return ZSTD_decompressMultiFrame(dctx, dst, dstCapacity, src, srcSize, dict, dictSize, NULL);
}
static ZSTD_DDict const* ZSTD_getDDict(ZSTD_DCtx* dctx)
{
switch (dctx->dictUses) {
default:
assert(0 /* Impossible */);
ZSTD_FALLTHROUGH;
case ZSTD_dont_use:
ZSTD_clearDict(dctx);
return NULL;
case ZSTD_use_indefinitely:
return dctx->ddict;
case ZSTD_use_once:
dctx->dictUses = ZSTD_dont_use;
return dctx->ddict;
}
}
size_t ZSTD_decompressDCtx(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
return ZSTD_decompress_usingDDict(dctx, dst, dstCapacity, src, srcSize, ZSTD_getDDict(dctx));
}
size_t ZSTD_decompress(void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
#if defined(ZSTD_HEAPMODE) && (ZSTD_HEAPMODE>=1)
size_t regenSize;
ZSTD_DCtx* const dctx = ZSTD_createDCtx_internal(ZSTD_defaultCMem);
RETURN_ERROR_IF(dctx==NULL, memory_allocation, "NULL pointer!");
regenSize = ZSTD_decompressDCtx(dctx, dst, dstCapacity, src, srcSize);
ZSTD_freeDCtx(dctx);
return regenSize;
#else /* stack mode */
ZSTD_DCtx dctx;
ZSTD_initDCtx_internal(&dctx);
return ZSTD_decompressDCtx(&dctx, dst, dstCapacity, src, srcSize);
#endif
}
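/* Usage sketch (illustrative) : one-shot decompression when the caller already knows a
* sufficient dstCapacity (for instance from ZSTD_getFrameContentSize()). dst, dstCapacity,
* src and srcSize are caller-provided; the error handling shown is only a placeholder.
*
*   size_t const dSize = ZSTD_decompress(dst, dstCapacity, src, srcSize);
*   if (ZSTD_isError(dSize)) {
*       fprintf(stderr, "decompression failed: %s\n", ZSTD_getErrorName(dSize));
*   }
*/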
/*-**************************************
* Advanced Streaming Decompression API
* Bufferless and synchronous
****************************************/
size_t ZSTD_nextSrcSizeToDecompress(ZSTD_DCtx* dctx) { return dctx->expected; }
/**
* Similar to ZSTD_nextSrcSizeToDecompress(), but when a block input can be streamed, we
* allow taking a partial block as the input. Currently only raw uncompressed blocks can
* be streamed.
*
* For blocks that can be streamed, this allows us to reduce the latency until we produce
* output, and avoid copying the input.
*
* @param inputSize - The total amount of input that the caller currently has.
*/
static size_t ZSTD_nextSrcSizeToDecompressWithInputSize(ZSTD_DCtx* dctx, size_t inputSize) {
if (!(dctx->stage == ZSTDds_decompressBlock || dctx->stage == ZSTDds_decompressLastBlock))
return dctx->expected;
if (dctx->bType != bt_raw)
return dctx->expected;
return BOUNDED(1, inputSize, dctx->expected);
}
ZSTD_nextInputType_e ZSTD_nextInputType(ZSTD_DCtx* dctx) {
switch(dctx->stage)
{
default: /* should not happen */
assert(0);
ZSTD_FALLTHROUGH;
case ZSTDds_getFrameHeaderSize:
ZSTD_FALLTHROUGH;
case ZSTDds_decodeFrameHeader:
return ZSTDnit_frameHeader;
case ZSTDds_decodeBlockHeader:
return ZSTDnit_blockHeader;
case ZSTDds_decompressBlock:
return ZSTDnit_block;
case ZSTDds_decompressLastBlock:
return ZSTDnit_lastBlock;
case ZSTDds_checkChecksum:
return ZSTDnit_checksum;
case ZSTDds_decodeSkippableHeader:
ZSTD_FALLTHROUGH;
case ZSTDds_skipFrame:
return ZSTDnit_skippableFrame;
}
}
static int ZSTD_isSkipFrame(ZSTD_DCtx* dctx) { return dctx->stage == ZSTDds_skipFrame; }
/** ZSTD_decompressContinue() :
* srcSize : must be the exact nb of bytes expected (see ZSTD_nextSrcSizeToDecompress())
* @return : nb of bytes generated into `dst` (necessarily <= `dstCapacity`)
* or an error code, which can be tested using ZSTD_isError() */
size_t ZSTD_decompressContinue(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
DEBUGLOG(5, "ZSTD_decompressContinue (srcSize:%u)", (unsigned)srcSize);
/* Sanity check */
RETURN_ERROR_IF(srcSize != ZSTD_nextSrcSizeToDecompressWithInputSize(dctx, srcSize), srcSize_wrong, "not allowed");
ZSTD_checkContinuity(dctx, dst, dstCapacity);
dctx->processedCSize += srcSize;
switch (dctx->stage)
{
case ZSTDds_getFrameHeaderSize :
assert(src != NULL);
if (dctx->format == ZSTD_f_zstd1) { /* allows header */
assert(srcSize >= ZSTD_FRAMEIDSIZE); /* to read skippable magic number */
if ((MEM_readLE32(src) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) { /* skippable frame */
ZSTD_memcpy(dctx->headerBuffer, src, srcSize);
dctx->expected = ZSTD_SKIPPABLEHEADERSIZE - srcSize; /* remaining to load to get full skippable frame header */
dctx->stage = ZSTDds_decodeSkippableHeader;
return 0;
} }
dctx->headerSize = ZSTD_frameHeaderSize_internal(src, srcSize, dctx->format);
if (ZSTD_isError(dctx->headerSize)) return dctx->headerSize;
ZSTD_memcpy(dctx->headerBuffer, src, srcSize);
dctx->expected = dctx->headerSize - srcSize;
dctx->stage = ZSTDds_decodeFrameHeader;
return 0;
case ZSTDds_decodeFrameHeader:
assert(src != NULL);
ZSTD_memcpy(dctx->headerBuffer + (dctx->headerSize - srcSize), src, srcSize);
FORWARD_IF_ERROR(ZSTD_decodeFrameHeader(dctx, dctx->headerBuffer, dctx->headerSize), "");
dctx->expected = ZSTD_blockHeaderSize;
dctx->stage = ZSTDds_decodeBlockHeader;
return 0;
case ZSTDds_decodeBlockHeader:
{ blockProperties_t bp;
size_t const cBlockSize = ZSTD_getcBlockSize(src, ZSTD_blockHeaderSize, &bp);
if (ZSTD_isError(cBlockSize)) return cBlockSize;
RETURN_ERROR_IF(cBlockSize > dctx->fParams.blockSizeMax, corruption_detected, "Block Size Exceeds Maximum");
dctx->expected = cBlockSize;
dctx->bType = bp.blockType;
dctx->rleSize = bp.origSize;
if (cBlockSize) {
dctx->stage = bp.lastBlock ? ZSTDds_decompressLastBlock : ZSTDds_decompressBlock;
return 0;
}
/* empty block */
if (bp.lastBlock) {
if (dctx->fParams.checksumFlag) {
dctx->expected = 4;
dctx->stage = ZSTDds_checkChecksum;
} else {
dctx->expected = 0; /* end of frame */
dctx->stage = ZSTDds_getFrameHeaderSize;
}
} else {
dctx->expected = ZSTD_blockHeaderSize; /* jump to next header */
dctx->stage = ZSTDds_decodeBlockHeader;
}
return 0;
}
case ZSTDds_decompressLastBlock:
case ZSTDds_decompressBlock:
DEBUGLOG(5, "ZSTD_decompressContinue: case ZSTDds_decompressBlock");
{ size_t rSize;
switch(dctx->bType)
{
case bt_compressed:
DEBUGLOG(5, "ZSTD_decompressContinue: case bt_compressed");
assert(dctx->isFrameDecompression == 1);
rSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, is_streaming);
dctx->expected = 0; /* Streaming not supported */
break;
case bt_raw :
assert(srcSize <= dctx->expected);
rSize = ZSTD_copyRawBlock(dst, dstCapacity, src, srcSize);
FORWARD_IF_ERROR(rSize, "ZSTD_copyRawBlock failed");
assert(rSize == srcSize);
dctx->expected -= rSize;
break;
case bt_rle :
rSize = ZSTD_setRleBlock(dst, dstCapacity, *(const BYTE*)src, dctx->rleSize);
dctx->expected = 0; /* Streaming not supported */
break;
case bt_reserved : /* should never happen */
default:
RETURN_ERROR(corruption_detected, "invalid block type");
}
FORWARD_IF_ERROR(rSize, "");
RETURN_ERROR_IF(rSize > dctx->fParams.blockSizeMax, corruption_detected, "Decompressed Block Size Exceeds Maximum");
DEBUGLOG(5, "ZSTD_decompressContinue: decoded size from block : %u", (unsigned)rSize);
dctx->decodedSize += rSize;
if (dctx->validateChecksum) XXH64_update(&dctx->xxhState, dst, rSize);
dctx->previousDstEnd = (char*)dst + rSize;
/* Stay on the same stage until we are finished streaming the block. */
if (dctx->expected > 0) {
return rSize;
}
if (dctx->stage == ZSTDds_decompressLastBlock) { /* end of frame */
DEBUGLOG(4, "ZSTD_decompressContinue: decoded size from frame : %u", (unsigned)dctx->decodedSize);
RETURN_ERROR_IF(
dctx->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN
&& dctx->decodedSize != dctx->fParams.frameContentSize,
corruption_detected, "");
if (dctx->fParams.checksumFlag) { /* another round for frame checksum */
dctx->expected = 4;
dctx->stage = ZSTDds_checkChecksum;
} else {
ZSTD_DCtx_trace_end(dctx, dctx->decodedSize, dctx->processedCSize, /* streaming */ 1);
dctx->expected = 0; /* ends here */
dctx->stage = ZSTDds_getFrameHeaderSize;
}
} else {
dctx->stage = ZSTDds_decodeBlockHeader;
dctx->expected = ZSTD_blockHeaderSize;
}
return rSize;
}
case ZSTDds_checkChecksum:
assert(srcSize == 4); /* guaranteed by dctx->expected */
{
if (dctx->validateChecksum) {
U32 const h32 = (U32)XXH64_digest(&dctx->xxhState);
U32 const check32 = MEM_readLE32(src);
DEBUGLOG(4, "ZSTD_decompressContinue: checksum : calculated %08X :: %08X read", (unsigned)h32, (unsigned)check32);
RETURN_ERROR_IF(check32 != h32, checksum_wrong, "");
}
ZSTD_DCtx_trace_end(dctx, dctx->decodedSize, dctx->processedCSize, /* streaming */ 1);
dctx->expected = 0;
dctx->stage = ZSTDds_getFrameHeaderSize;
return 0;
}
case ZSTDds_decodeSkippableHeader:
assert(src != NULL);
assert(srcSize <= ZSTD_SKIPPABLEHEADERSIZE);
assert(dctx->format != ZSTD_f_zstd1_magicless);
ZSTD_memcpy(dctx->headerBuffer + (ZSTD_SKIPPABLEHEADERSIZE - srcSize), src, srcSize); /* complete skippable header */
dctx->expected = MEM_readLE32(dctx->headerBuffer + ZSTD_FRAMEIDSIZE); /* note : dctx->expected can grow seriously large, beyond local buffer size */
dctx->stage = ZSTDds_skipFrame;
return 0;
case ZSTDds_skipFrame:
dctx->expected = 0;
dctx->stage = ZSTDds_getFrameHeaderSize;
return 0;
default:
assert(0); /* impossible */
RETURN_ERROR(GENERIC, "impossible to reach"); /* some compilers require default to do something */
}
}
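/* Usage sketch (illustrative) : the buffer-less loop this state machine serves.
* The caller always feeds exactly the number of bytes announced by
* ZSTD_nextSrcSizeToDecompress(). ip, iend, op and oend are hypothetical cursors over
* caller-owned buffers; real code must also check that (oend - op) remains sufficient.
*
*   { size_t const err = ZSTD_decompressBegin(dctx);
*     if (ZSTD_isError(err)) return err;
*   }
*   while (1) {
*       size_t const toRead = ZSTD_nextSrcSizeToDecompress(dctx);
*       size_t written;
*       if (toRead == 0) break;
*       written = ZSTD_decompressContinue(dctx, op, (size_t)(oend-op), ip, toRead);
*       if (ZSTD_isError(written)) return written;
*       ip += toRead;
*       op += written;
*   }
*/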
static size_t ZSTD_refDictContent(ZSTD_DCtx* dctx, const void* dict, size_t dictSize)
{
dctx->dictEnd = dctx->previousDstEnd;
dctx->virtualStart = (const char*)dict - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->prefixStart));
dctx->prefixStart = dict;
dctx->previousDstEnd = (const char*)dict + dictSize;
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
dctx->dictContentBeginForFuzzing = dctx->prefixStart;
dctx->dictContentEndForFuzzing = dctx->previousDstEnd;
#endif
return 0;
}
/*! ZSTD_loadDEntropy() :
* dict : must point at beginning of a valid zstd dictionary.
* @return : size of entropy tables read */
size_t
ZSTD_loadDEntropy(ZSTD_entropyDTables_t* entropy,
const void* const dict, size_t const dictSize)
{
const BYTE* dictPtr = (const BYTE*)dict;
const BYTE* const dictEnd = dictPtr + dictSize;
RETURN_ERROR_IF(dictSize <= 8, dictionary_corrupted, "dict is too small");
assert(MEM_readLE32(dict) == ZSTD_MAGIC_DICTIONARY); /* dict must be valid */
dictPtr += 8; /* skip header = magic + dictID */
ZSTD_STATIC_ASSERT(offsetof(ZSTD_entropyDTables_t, OFTable) == offsetof(ZSTD_entropyDTables_t, LLTable) + sizeof(entropy->LLTable));
ZSTD_STATIC_ASSERT(offsetof(ZSTD_entropyDTables_t, MLTable) == offsetof(ZSTD_entropyDTables_t, OFTable) + sizeof(entropy->OFTable));
ZSTD_STATIC_ASSERT(sizeof(entropy->LLTable) + sizeof(entropy->OFTable) + sizeof(entropy->MLTable) >= HUF_DECOMPRESS_WORKSPACE_SIZE);
{ void* const workspace = &entropy->LLTable; /* use fse tables as temporary workspace; implies fse tables are grouped together */
size_t const workspaceSize = sizeof(entropy->LLTable) + sizeof(entropy->OFTable) + sizeof(entropy->MLTable);
#ifdef HUF_FORCE_DECOMPRESS_X1
/* in minimal huffman, we always use X1 variants */
size_t const hSize = HUF_readDTableX1_wksp(entropy->hufTable,
dictPtr, dictEnd - dictPtr,
workspace, workspaceSize, /* flags */ 0);
#else
size_t const hSize = HUF_readDTableX2_wksp(entropy->hufTable,
dictPtr, (size_t)(dictEnd - dictPtr),
workspace, workspaceSize, /* flags */ 0);
#endif
RETURN_ERROR_IF(HUF_isError(hSize), dictionary_corrupted, "");
dictPtr += hSize;
}
{ short offcodeNCount[MaxOff+1];
unsigned offcodeMaxValue = MaxOff, offcodeLog;
size_t const offcodeHeaderSize = FSE_readNCount(offcodeNCount, &offcodeMaxValue, &offcodeLog, dictPtr, (size_t)(dictEnd-dictPtr));
RETURN_ERROR_IF(FSE_isError(offcodeHeaderSize), dictionary_corrupted, "");
RETURN_ERROR_IF(offcodeMaxValue > MaxOff, dictionary_corrupted, "");
RETURN_ERROR_IF(offcodeLog > OffFSELog, dictionary_corrupted, "");
ZSTD_buildFSETable( entropy->OFTable,
offcodeNCount, offcodeMaxValue,
OF_base, OF_bits,
offcodeLog,
entropy->workspace, sizeof(entropy->workspace),
/* bmi2 */0);
dictPtr += offcodeHeaderSize;
}
{ short matchlengthNCount[MaxML+1];
unsigned matchlengthMaxValue = MaxML, matchlengthLog;
size_t const matchlengthHeaderSize = FSE_readNCount(matchlengthNCount, &matchlengthMaxValue, &matchlengthLog, dictPtr, (size_t)(dictEnd-dictPtr));
RETURN_ERROR_IF(FSE_isError(matchlengthHeaderSize), dictionary_corrupted, "");
RETURN_ERROR_IF(matchlengthMaxValue > MaxML, dictionary_corrupted, "");
RETURN_ERROR_IF(matchlengthLog > MLFSELog, dictionary_corrupted, "");
ZSTD_buildFSETable( entropy->MLTable,
matchlengthNCount, matchlengthMaxValue,
ML_base, ML_bits,
matchlengthLog,
entropy->workspace, sizeof(entropy->workspace),
/* bmi2 */ 0);
dictPtr += matchlengthHeaderSize;
}
{ short litlengthNCount[MaxLL+1];
unsigned litlengthMaxValue = MaxLL, litlengthLog;
size_t const litlengthHeaderSize = FSE_readNCount(litlengthNCount, &litlengthMaxValue, &litlengthLog, dictPtr, (size_t)(dictEnd-dictPtr));
RETURN_ERROR_IF(FSE_isError(litlengthHeaderSize), dictionary_corrupted, "");
RETURN_ERROR_IF(litlengthMaxValue > MaxLL, dictionary_corrupted, "");
RETURN_ERROR_IF(litlengthLog > LLFSELog, dictionary_corrupted, "");
ZSTD_buildFSETable( entropy->LLTable,
litlengthNCount, litlengthMaxValue,
LL_base, LL_bits,
litlengthLog,
entropy->workspace, sizeof(entropy->workspace),
/* bmi2 */ 0);
dictPtr += litlengthHeaderSize;
}
RETURN_ERROR_IF(dictPtr+12 > dictEnd, dictionary_corrupted, "");
{ int i;
size_t const dictContentSize = (size_t)(dictEnd - (dictPtr+12));
for (i=0; i<3; i++) {
U32 const rep = MEM_readLE32(dictPtr); dictPtr += 4;
RETURN_ERROR_IF(rep==0 || rep > dictContentSize,
dictionary_corrupted, "");
entropy->rep[i] = rep;
} }
return (size_t)(dictPtr - (const BYTE*)dict);
}
static size_t ZSTD_decompress_insertDictionary(ZSTD_DCtx* dctx, const void* dict, size_t dictSize)
{
if (dictSize < 8) return ZSTD_refDictContent(dctx, dict, dictSize);
{ U32 const magic = MEM_readLE32(dict);
if (magic != ZSTD_MAGIC_DICTIONARY) {
return ZSTD_refDictContent(dctx, dict, dictSize); /* pure content mode */
} }
dctx->dictID = MEM_readLE32((const char*)dict + ZSTD_FRAMEIDSIZE);
/* load entropy tables */
{ size_t const eSize = ZSTD_loadDEntropy(&dctx->entropy, dict, dictSize);
RETURN_ERROR_IF(ZSTD_isError(eSize), dictionary_corrupted, "");
dict = (const char*)dict + eSize;
dictSize -= eSize;
}
dctx->litEntropy = dctx->fseEntropy = 1;
/* reference dictionary content */
return ZSTD_refDictContent(dctx, dict, dictSize);
}
size_t ZSTD_decompressBegin(ZSTD_DCtx* dctx)
{
assert(dctx != NULL);
#if ZSTD_TRACE
dctx->traceCtx = (ZSTD_trace_decompress_begin != NULL) ? ZSTD_trace_decompress_begin(dctx) : 0;
#endif
dctx->expected = ZSTD_startingInputLength(dctx->format); /* dctx->format must be properly set */
dctx->stage = ZSTDds_getFrameHeaderSize;
dctx->processedCSize = 0;
dctx->decodedSize = 0;
dctx->previousDstEnd = NULL;
dctx->prefixStart = NULL;
dctx->virtualStart = NULL;
dctx->dictEnd = NULL;
dctx->entropy.hufTable[0] = (HUF_DTable)((ZSTD_HUFFDTABLE_CAPACITY_LOG)*0x1000001); /* cover both little and big endian */
dctx->litEntropy = dctx->fseEntropy = 0;
dctx->dictID = 0;
dctx->bType = bt_reserved;
dctx->isFrameDecompression = 1;
ZSTD_STATIC_ASSERT(sizeof(dctx->entropy.rep) == sizeof(repStartValue));
ZSTD_memcpy(dctx->entropy.rep, repStartValue, sizeof(repStartValue)); /* initial repcodes */
dctx->LLTptr = dctx->entropy.LLTable;
dctx->MLTptr = dctx->entropy.MLTable;
dctx->OFTptr = dctx->entropy.OFTable;
dctx->HUFptr = dctx->entropy.hufTable;
return 0;
}
size_t ZSTD_decompressBegin_usingDict(ZSTD_DCtx* dctx, const void* dict, size_t dictSize)
{
FORWARD_IF_ERROR( ZSTD_decompressBegin(dctx) , "");
if (dict && dictSize)
RETURN_ERROR_IF(
ZSTD_isError(ZSTD_decompress_insertDictionary(dctx, dict, dictSize)),
dictionary_corrupted, "");
return 0;
}
/* ====== ZSTD_DDict ====== */
size_t ZSTD_decompressBegin_usingDDict(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict)
{
DEBUGLOG(4, "ZSTD_decompressBegin_usingDDict");
assert(dctx != NULL);
if (ddict) {
const char* const dictStart = (const char*)ZSTD_DDict_dictContent(ddict);
size_t const dictSize = ZSTD_DDict_dictSize(ddict);
const void* const dictEnd = dictStart + dictSize;
dctx->ddictIsCold = (dctx->dictEnd != dictEnd);
DEBUGLOG(4, "DDict is %s",
dctx->ddictIsCold ? "~cold~" : "hot!");
}
FORWARD_IF_ERROR( ZSTD_decompressBegin(dctx) , "");
if (ddict) { /* NULL ddict is equivalent to no dictionary */
ZSTD_copyDDictParameters(dctx, ddict);
}
return 0;
}
/*! ZSTD_getDictID_fromDict() :
* Provides the dictID stored within dictionary.
* if @return == 0, the dictionary is not conformant with the Zstandard specification.
* It can still be loaded, but as a content-only dictionary. */
unsigned ZSTD_getDictID_fromDict(const void* dict, size_t dictSize)
{
if (dictSize < 8) return 0;
if (MEM_readLE32(dict) != ZSTD_MAGIC_DICTIONARY) return 0;
return MEM_readLE32((const char*)dict + ZSTD_FRAMEIDSIZE);
}
/*! ZSTD_getDictID_fromFrame() :
* Provides the dictID required to decompress frame stored within `src`.
* If @return == 0, the dictID could not be decoded.
* This could be for one of the following reasons :
* - The frame does not require a dictionary (most common case).
* - The frame was built with dictID intentionally removed.
* The needed dictionary is then a hidden piece of information.
* Note : this use case also happens when using a non-conformant dictionary.
* - `srcSize` is too small, and as a result, the frame header could not be decoded.
* Note : possible if `srcSize < ZSTD_FRAMEHEADERSIZE_MAX`.
* - This is not a Zstandard frame.
* When identifying the exact failure cause, it's possible to use
* ZSTD_getFrameHeader(), which will provide a more precise error code. */
unsigned ZSTD_getDictID_fromFrame(const void* src, size_t srcSize)
{
ZSTD_FrameHeader zfp = { 0, 0, 0, ZSTD_frame, 0, 0, 0, 0, 0 };
size_t const hError = ZSTD_getFrameHeader(&zfp, src, srcSize);
if (ZSTD_isError(hError)) return 0;
return zfp.dictID;
}
/*! ZSTD_decompress_usingDDict() :
* Decompression using a pre-digested Dictionary
* Use dictionary without significant overhead. */
size_t ZSTD_decompress_usingDDict(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize,
const ZSTD_DDict* ddict)
{
/* pass content and size in case legacy frames are encountered */
return ZSTD_decompressMultiFrame(dctx, dst, dstCapacity, src, srcSize,
NULL, 0,
ddict);
}
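/* Usage sketch (illustrative) : creating a ZSTD_DDict once and reusing it across many
* frames compressed with the same dictionary. dictBuf and dictSize are hypothetical;
* the dictID comparison is one possible way to confirm the dictionary matches the frame.
*
*   ZSTD_DDict* const ddict = ZSTD_createDDict(dictBuf, dictSize);
*   if (ZSTD_getDictID_fromFrame(src, srcSize) == ZSTD_getDictID_fromDDict(ddict)) {
*       size_t const dSize = ZSTD_decompress_usingDDict(dctx, dst, dstCapacity,
*                                                       src, srcSize, ddict);
*       if (ZSTD_isError(dSize)) return dSize;
*   }
*   ZSTD_freeDDict(ddict);
*/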
/*=====================================
* Streaming decompression
*====================================*/
ZSTD_DStream* ZSTD_createDStream(void)
{
DEBUGLOG(3, "ZSTD_createDStream");
return ZSTD_createDCtx_internal(ZSTD_defaultCMem);
}
ZSTD_DStream* ZSTD_initStaticDStream(void *workspace, size_t workspaceSize)
{
return ZSTD_initStaticDCtx(workspace, workspaceSize);
}
ZSTD_DStream* ZSTD_createDStream_advanced(ZSTD_customMem customMem)
{
return ZSTD_createDCtx_internal(customMem);
}
size_t ZSTD_freeDStream(ZSTD_DStream* zds)
{
return ZSTD_freeDCtx(zds);
}
/* *** Initialization *** */
size_t ZSTD_DStreamInSize(void) { return ZSTD_BLOCKSIZE_MAX + ZSTD_blockHeaderSize; }
size_t ZSTD_DStreamOutSize(void) { return ZSTD_BLOCKSIZE_MAX; }
size_t ZSTD_DCtx_loadDictionary_advanced(ZSTD_DCtx* dctx,
const void* dict, size_t dictSize,
ZSTD_dictLoadMethod_e dictLoadMethod,
ZSTD_dictContentType_e dictContentType)
{
RETURN_ERROR_IF(dctx->streamStage != zdss_init, stage_wrong, "");
ZSTD_clearDict(dctx);
if (dict && dictSize != 0) {
dctx->ddictLocal = ZSTD_createDDict_advanced(dict, dictSize, dictLoadMethod, dictContentType, dctx->customMem);
RETURN_ERROR_IF(dctx->ddictLocal == NULL, memory_allocation, "NULL pointer!");
dctx->ddict = dctx->ddictLocal;
dctx->dictUses = ZSTD_use_indefinitely;
}
return 0;
}
size_t ZSTD_DCtx_loadDictionary_byReference(ZSTD_DCtx* dctx, const void* dict, size_t dictSize)
{
return ZSTD_DCtx_loadDictionary_advanced(dctx, dict, dictSize, ZSTD_dlm_byRef, ZSTD_dct_auto);
}
size_t ZSTD_DCtx_loadDictionary(ZSTD_DCtx* dctx, const void* dict, size_t dictSize)
{
return ZSTD_DCtx_loadDictionary_advanced(dctx, dict, dictSize, ZSTD_dlm_byCopy, ZSTD_dct_auto);
}
size_t ZSTD_DCtx_refPrefix_advanced(ZSTD_DCtx* dctx, const void* prefix, size_t prefixSize, ZSTD_dictContentType_e dictContentType)
{
FORWARD_IF_ERROR(ZSTD_DCtx_loadDictionary_advanced(dctx, prefix, prefixSize, ZSTD_dlm_byRef, dictContentType), "");
dctx->dictUses = ZSTD_use_once;
return 0;
}
size_t ZSTD_DCtx_refPrefix(ZSTD_DCtx* dctx, const void* prefix, size_t prefixSize)
{
return ZSTD_DCtx_refPrefix_advanced(dctx, prefix, prefixSize, ZSTD_dct_rawContent);
}
/* ZSTD_initDStream_usingDict() :
* return : expected size, aka ZSTD_startingInputLength().
* this function cannot fail */
size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize)
{
DEBUGLOG(4, "ZSTD_initDStream_usingDict");
FORWARD_IF_ERROR( ZSTD_DCtx_reset(zds, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_DCtx_loadDictionary(zds, dict, dictSize) , "");
return ZSTD_startingInputLength(zds->format);
}
/* note : this variant can't fail */
size_t ZSTD_initDStream(ZSTD_DStream* zds)
{
DEBUGLOG(4, "ZSTD_initDStream");
FORWARD_IF_ERROR(ZSTD_DCtx_reset(zds, ZSTD_reset_session_only), "");
FORWARD_IF_ERROR(ZSTD_DCtx_refDDict(zds, NULL), "");
return ZSTD_startingInputLength(zds->format);
}
/* ZSTD_initDStream_usingDDict() :
* ddict will just be referenced, and must outlive decompression session
* this function cannot fail */
size_t ZSTD_initDStream_usingDDict(ZSTD_DStream* dctx, const ZSTD_DDict* ddict)
{
DEBUGLOG(4, "ZSTD_initDStream_usingDDict");
FORWARD_IF_ERROR( ZSTD_DCtx_reset(dctx, ZSTD_reset_session_only) , "");
FORWARD_IF_ERROR( ZSTD_DCtx_refDDict(dctx, ddict) , "");
return ZSTD_startingInputLength(dctx->format);
}
/* ZSTD_resetDStream() :
* return : expected size, aka ZSTD_startingInputLength().
* this function cannot fail */
size_t ZSTD_resetDStream(ZSTD_DStream* dctx)
{
DEBUGLOG(4, "ZSTD_resetDStream");
FORWARD_IF_ERROR(ZSTD_DCtx_reset(dctx, ZSTD_reset_session_only), "");
return ZSTD_startingInputLength(dctx->format);
}
size_t ZSTD_DCtx_refDDict(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict)
{
RETURN_ERROR_IF(dctx->streamStage != zdss_init, stage_wrong, "");
ZSTD_clearDict(dctx);
if (ddict) {
dctx->ddict = ddict;
dctx->dictUses = ZSTD_use_indefinitely;
if (dctx->refMultipleDDicts == ZSTD_rmd_refMultipleDDicts) {
if (dctx->ddictSet == NULL) {
dctx->ddictSet = ZSTD_createDDictHashSet(dctx->customMem);
if (!dctx->ddictSet) {
RETURN_ERROR(memory_allocation, "Failed to allocate memory for hash set!");
}
}
assert(!dctx->staticSize); /* Impossible: ddictSet cannot have been allocated if static dctx */
FORWARD_IF_ERROR(ZSTD_DDictHashSet_addDDict(dctx->ddictSet, ddict, dctx->customMem), "");
}
}
return 0;
}
/* ZSTD_DCtx_setMaxWindowSize() :
* note : no direct equivalence in ZSTD_DCtx_setParameter,
* since this version sets windowSize, and the other sets windowLog */
size_t ZSTD_DCtx_setMaxWindowSize(ZSTD_DCtx* dctx, size_t maxWindowSize)
{
ZSTD_bounds const bounds = ZSTD_dParam_getBounds(ZSTD_d_windowLogMax);
size_t const min = (size_t)1 << bounds.lowerBound;
size_t const max = (size_t)1 << bounds.upperBound;
RETURN_ERROR_IF(dctx->streamStage != zdss_init, stage_wrong, "");
RETURN_ERROR_IF(maxWindowSize < min, parameter_outOfBound, "");
RETURN_ERROR_IF(maxWindowSize > max, parameter_outOfBound, "");
dctx->maxWindowSize = maxWindowSize;
return 0;
}
size_t ZSTD_DCtx_setFormat(ZSTD_DCtx* dctx, ZSTD_format_e format)
{
return ZSTD_DCtx_setParameter(dctx, ZSTD_d_format, (int)format);
}
ZSTD_bounds ZSTD_dParam_getBounds(ZSTD_dParameter dParam)
{
ZSTD_bounds bounds = { 0, 0, 0 };
switch(dParam) {
case ZSTD_d_windowLogMax:
bounds.lowerBound = ZSTD_WINDOWLOG_ABSOLUTEMIN;
bounds.upperBound = ZSTD_WINDOWLOG_MAX;
return bounds;
case ZSTD_d_format:
bounds.lowerBound = (int)ZSTD_f_zstd1;
bounds.upperBound = (int)ZSTD_f_zstd1_magicless;
ZSTD_STATIC_ASSERT(ZSTD_f_zstd1 < ZSTD_f_zstd1_magicless);
return bounds;
case ZSTD_d_stableOutBuffer:
bounds.lowerBound = (int)ZSTD_bm_buffered;
bounds.upperBound = (int)ZSTD_bm_stable;
return bounds;
case ZSTD_d_forceIgnoreChecksum:
bounds.lowerBound = (int)ZSTD_d_validateChecksum;
bounds.upperBound = (int)ZSTD_d_ignoreChecksum;
return bounds;
case ZSTD_d_refMultipleDDicts:
bounds.lowerBound = (int)ZSTD_rmd_refSingleDDict;
bounds.upperBound = (int)ZSTD_rmd_refMultipleDDicts;
return bounds;
case ZSTD_d_disableHuffmanAssembly:
bounds.lowerBound = 0;
bounds.upperBound = 1;
return bounds;
case ZSTD_d_maxBlockSize:
bounds.lowerBound = ZSTD_BLOCKSIZE_MAX_MIN;
bounds.upperBound = ZSTD_BLOCKSIZE_MAX;
return bounds;
default:;
}
bounds.error = ERROR(parameter_unsupported);
return bounds;
}
/* ZSTD_dParam_withinBounds:
* @return 1 if value is within dParam bounds,
* 0 otherwise */
static int ZSTD_dParam_withinBounds(ZSTD_dParameter dParam, int value)
{
ZSTD_bounds const bounds = ZSTD_dParam_getBounds(dParam);
if (ZSTD_isError(bounds.error)) return 0;
if (value < bounds.lowerBound) return 0;
if (value > bounds.upperBound) return 0;
return 1;
}
#define CHECK_DBOUNDS(p,v) { \
RETURN_ERROR_IF(!ZSTD_dParam_withinBounds(p, v), parameter_outOfBound, ""); \
}
size_t ZSTD_DCtx_getParameter(ZSTD_DCtx* dctx, ZSTD_dParameter param, int* value)
{
switch (param) {
case ZSTD_d_windowLogMax:
*value = (int)ZSTD_highbit32((U32)dctx->maxWindowSize);
return 0;
case ZSTD_d_format:
*value = (int)dctx->format;
return 0;
case ZSTD_d_stableOutBuffer:
*value = (int)dctx->outBufferMode;
return 0;
case ZSTD_d_forceIgnoreChecksum:
*value = (int)dctx->forceIgnoreChecksum;
return 0;
case ZSTD_d_refMultipleDDicts:
*value = (int)dctx->refMultipleDDicts;
return 0;
case ZSTD_d_disableHuffmanAssembly:
*value = (int)dctx->disableHufAsm;
return 0;
case ZSTD_d_maxBlockSize:
*value = dctx->maxBlockSizeParam;
return 0;
default:;
}
RETURN_ERROR(parameter_unsupported, "");
}
size_t ZSTD_DCtx_setParameter(ZSTD_DCtx* dctx, ZSTD_dParameter dParam, int value)
{
RETURN_ERROR_IF(dctx->streamStage != zdss_init, stage_wrong, "");
switch(dParam) {
case ZSTD_d_windowLogMax:
if (value == 0) value = ZSTD_WINDOWLOG_LIMIT_DEFAULT;
CHECK_DBOUNDS(ZSTD_d_windowLogMax, value);
dctx->maxWindowSize = ((size_t)1) << value;
return 0;
case ZSTD_d_format:
CHECK_DBOUNDS(ZSTD_d_format, value);
dctx->format = (ZSTD_format_e)value;
return 0;
case ZSTD_d_stableOutBuffer:
CHECK_DBOUNDS(ZSTD_d_stableOutBuffer, value);
dctx->outBufferMode = (ZSTD_bufferMode_e)value;
return 0;
case ZSTD_d_forceIgnoreChecksum:
CHECK_DBOUNDS(ZSTD_d_forceIgnoreChecksum, value);
dctx->forceIgnoreChecksum = (ZSTD_forceIgnoreChecksum_e)value;
return 0;
case ZSTD_d_refMultipleDDicts:
CHECK_DBOUNDS(ZSTD_d_refMultipleDDicts, value);
if (dctx->staticSize != 0) {
RETURN_ERROR(parameter_unsupported, "Static dctx does not support multiple DDicts!");
}
dctx->refMultipleDDicts = (ZSTD_refMultipleDDicts_e)value;
return 0;
case ZSTD_d_disableHuffmanAssembly:
CHECK_DBOUNDS(ZSTD_d_disableHuffmanAssembly, value);
dctx->disableHufAsm = value != 0;
return 0;
case ZSTD_d_maxBlockSize:
if (value != 0) CHECK_DBOUNDS(ZSTD_d_maxBlockSize, value);
dctx->maxBlockSizeParam = value;
return 0;
default:;
}
RETURN_ERROR(parameter_unsupported, "");
}
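/* Usage sketch (illustrative) : typical advanced-parameter setup. The values shown
* (a 27-bit window limit, magicless format) are arbitrary examples. Parameters must be
* set while the stream is still in zdss_init, i.e. before any decompression call, or
* right after ZSTD_DCtx_reset(dctx, ZSTD_reset_session_only).
*
*   { size_t err = ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 27);
*     if (!ZSTD_isError(err))
*         err = ZSTD_DCtx_setParameter(dctx, ZSTD_d_format, (int)ZSTD_f_zstd1_magicless);
*     if (ZSTD_isError(err)) return err;
*   }
*/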
size_t ZSTD_DCtx_reset(ZSTD_DCtx* dctx, ZSTD_ResetDirective reset)
{
if ( (reset == ZSTD_reset_session_only)
|| (reset == ZSTD_reset_session_and_parameters) ) {
dctx->streamStage = zdss_init;
dctx->noForwardProgress = 0;
dctx->isFrameDecompression = 1;
}
if ( (reset == ZSTD_reset_parameters)
|| (reset == ZSTD_reset_session_and_parameters) ) {
RETURN_ERROR_IF(dctx->streamStage != zdss_init, stage_wrong, "");
ZSTD_clearDict(dctx);
ZSTD_DCtx_resetParameters(dctx);
}
return 0;
}
size_t ZSTD_sizeof_DStream(const ZSTD_DStream* dctx)
{
return ZSTD_sizeof_DCtx(dctx);
}
static size_t ZSTD_decodingBufferSize_internal(unsigned long long windowSize, unsigned long long frameContentSize, size_t blockSizeMax)
{
size_t const blockSize = MIN((size_t)MIN(windowSize, ZSTD_BLOCKSIZE_MAX), blockSizeMax);
/* We need blockSize + WILDCOPY_OVERLENGTH worth of buffer so that if a block
* ends at windowSize + WILDCOPY_OVERLENGTH + 1 bytes, we can start writing
* the block at the beginning of the output buffer, and maintain a full window.
*
* We need another blockSize worth of buffer so that we can store split
* literals at the end of the block without overwriting the extDict window.
*/
unsigned long long const neededRBSize = windowSize + (blockSize * 2) + (WILDCOPY_OVERLENGTH * 2);
unsigned long long const neededSize = MIN(frameContentSize, neededRBSize);
size_t const minRBSize = (size_t) neededSize;
RETURN_ERROR_IF((unsigned long long)minRBSize != neededSize,
frameParameter_windowTooLarge, "");
return minRBSize;
}
size_t ZSTD_decodingBufferSize_min(unsigned long long windowSize, unsigned long long frameContentSize)
{
return ZSTD_decodingBufferSize_internal(windowSize, frameContentSize, ZSTD_BLOCKSIZE_MAX);
}
size_t ZSTD_estimateDStreamSize(size_t windowSize)
{
size_t const blockSize = MIN(windowSize, ZSTD_BLOCKSIZE_MAX);
size_t const inBuffSize = blockSize; /* no block can be larger */
size_t const outBuffSize = ZSTD_decodingBufferSize_min(windowSize, ZSTD_CONTENTSIZE_UNKNOWN);
return ZSTD_estimateDCtxSize() + inBuffSize + outBuffSize;
}
size_t ZSTD_estimateDStreamSize_fromFrame(const void* src, size_t srcSize)
{
U32 const windowSizeMax = 1U << ZSTD_WINDOWLOG_MAX; /* note : should be user-selectable, but requires an additional parameter (or a dctx) */
ZSTD_FrameHeader zfh;
size_t const err = ZSTD_getFrameHeader(&zfh, src, srcSize);
if (ZSTD_isError(err)) return err;
RETURN_ERROR_IF(err>0, srcSize_wrong, "");
RETURN_ERROR_IF(zfh.windowSize > windowSizeMax,
frameParameter_windowTooLarge, "");
return ZSTD_estimateDStreamSize((size_t)zfh.windowSize);
}
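/* Usage sketch (illustrative) : sizing a statically-allocated DStream from a frame
* prefix. src/srcSize must cover at least the frame header; the workspace handling
* shown here is only one possible arrangement.
*
*   size_t const needed = ZSTD_estimateDStreamSize_fromFrame(src, srcSize);
*   if (ZSTD_isError(needed)) return needed;
*   { void* const workspace = malloc(needed);
*     ZSTD_DStream* const zds = ZSTD_initStaticDStream(workspace, needed);
*     if (zds == NULL) { free(workspace); return ERROR(memory_allocation); }
*     ...
*   }
*/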
/* ***** Decompression ***** */
static int ZSTD_DCtx_isOverflow(ZSTD_DStream* zds, size_t const neededInBuffSize, size_t const neededOutBuffSize)
{
return (zds->inBuffSize + zds->outBuffSize) >= (neededInBuffSize + neededOutBuffSize) * ZSTD_WORKSPACETOOLARGE_FACTOR;
}
static void ZSTD_DCtx_updateOversizedDuration(ZSTD_DStream* zds, size_t const neededInBuffSize, size_t const neededOutBuffSize)
{
if (ZSTD_DCtx_isOverflow(zds, neededInBuffSize, neededOutBuffSize))
zds->oversizedDuration++;
else
zds->oversizedDuration = 0;
}
static int ZSTD_DCtx_isOversizedTooLong(ZSTD_DStream* zds)
{
return zds->oversizedDuration >= ZSTD_WORKSPACETOOLARGE_MAXDURATION;
}
/* Checks that the output buffer hasn't changed if ZSTD_obm_stable is used. */
static size_t ZSTD_checkOutBuffer(ZSTD_DStream const* zds, ZSTD_outBuffer const* output)
{
ZSTD_outBuffer const expect = zds->expectedOutBuffer;
/* No requirement when ZSTD_obm_stable is not enabled. */
if (zds->outBufferMode != ZSTD_bm_stable)
return 0;
/* Any buffer is allowed in zdss_init; after that, the same buffer must be provided
* on every call until the context is reset.
*/
if (zds->streamStage == zdss_init)
return 0;
/* The buffer must match our expectation exactly. */
if (expect.dst == output->dst && expect.pos == output->pos && expect.size == output->size)
return 0;
RETURN_ERROR(dstBuffer_wrong, "ZSTD_d_stableOutBuffer enabled but output differs!");
}
/* Calls ZSTD_decompressContinue() with the right parameters for ZSTD_decompressStream()
* and updates the stage and the output buffer state. This call is extracted so it can be
* used both when reading directly from the ZSTD_inBuffer, and in buffered input mode.
* NOTE: You must break after calling this function since the streamStage is modified.
*/
static size_t ZSTD_decompressContinueStream(
ZSTD_DStream* zds, char** op, char* oend,
void const* src, size_t srcSize) {
int const isSkipFrame = ZSTD_isSkipFrame(zds);
if (zds->outBufferMode == ZSTD_bm_buffered) {
size_t const dstSize = isSkipFrame ? 0 : zds->outBuffSize - zds->outStart;
size_t const decodedSize = ZSTD_decompressContinue(zds,
zds->outBuff + zds->outStart, dstSize, src, srcSize);
FORWARD_IF_ERROR(decodedSize, "");
if (!decodedSize && !isSkipFrame) {
zds->streamStage = zdss_read;
} else {
zds->outEnd = zds->outStart + decodedSize;
zds->streamStage = zdss_flush;
}
} else {
/* Write directly into the output buffer */
size_t const dstSize = isSkipFrame ? 0 : (size_t)(oend - *op);
size_t const decodedSize = ZSTD_decompressContinue(zds, *op, dstSize, src, srcSize);
FORWARD_IF_ERROR(decodedSize, "");
*op += decodedSize;
/* Flushing is not needed. */
zds->streamStage = zdss_read;
assert(*op <= oend);
assert(zds->outBufferMode == ZSTD_bm_stable);
}
return 0;
}
size_t ZSTD_decompressStream(ZSTD_DStream* zds, ZSTD_outBuffer* output, ZSTD_inBuffer* input)
{
const char* const src = (const char*)input->src;
const char* const istart = input->pos != 0 ? src + input->pos : src;
const char* const iend = input->size != 0 ? src + input->size : src;
const char* ip = istart;
char* const dst = (char*)output->dst;
char* const ostart = output->pos != 0 ? dst + output->pos : dst;
char* const oend = output->size != 0 ? dst + output->size : dst;
char* op = ostart;
U32 someMoreWork = 1;
DEBUGLOG(5, "ZSTD_decompressStream");
assert(zds != NULL);
RETURN_ERROR_IF(
input->pos > input->size,
srcSize_wrong,
"forbidden. in: pos: %u vs size: %u",
(U32)input->pos, (U32)input->size);
RETURN_ERROR_IF(
output->pos > output->size,
dstSize_tooSmall,
"forbidden. out: pos: %u vs size: %u",
(U32)output->pos, (U32)output->size);
DEBUGLOG(5, "input size : %u", (U32)(input->size - input->pos));
FORWARD_IF_ERROR(ZSTD_checkOutBuffer(zds, output), "");
while (someMoreWork) {
switch(zds->streamStage)
{
case zdss_init :
DEBUGLOG(5, "stage zdss_init => transparent reset ");
zds->streamStage = zdss_loadHeader;
zds->lhSize = zds->inPos = zds->outStart = zds->outEnd = 0;
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
zds->legacyVersion = 0;
#endif
zds->hostageByte = 0;
zds->expectedOutBuffer = *output;
ZSTD_FALLTHROUGH;
case zdss_loadHeader :
DEBUGLOG(5, "stage zdss_loadHeader (srcSize : %u)", (U32)(iend - ip));
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
if (zds->legacyVersion) {
RETURN_ERROR_IF(zds->staticSize, memory_allocation,
"legacy support is incompatible with static dctx");
{ size_t const hint = ZSTD_decompressLegacyStream(zds->legacyContext, zds->legacyVersion, output, input);
if (hint==0) zds->streamStage = zdss_init;
return hint;
} }
#endif
{ size_t const hSize = ZSTD_getFrameHeader_advanced(&zds->fParams, zds->headerBuffer, zds->lhSize, zds->format);
if (zds->refMultipleDDicts && zds->ddictSet) {
ZSTD_DCtx_selectFrameDDict(zds);
}
if (ZSTD_isError(hSize)) {
#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
U32 const legacyVersion = ZSTD_isLegacy(istart, iend-istart);
if (legacyVersion) {
ZSTD_DDict const* const ddict = ZSTD_getDDict(zds);
const void* const dict = ddict ? ZSTD_DDict_dictContent(ddict) : NULL;
size_t const dictSize = ddict ? ZSTD_DDict_dictSize(ddict) : 0;
DEBUGLOG(5, "ZSTD_decompressStream: detected legacy version v0.%u", legacyVersion);
RETURN_ERROR_IF(zds->staticSize, memory_allocation,
"legacy support is incompatible with static dctx");
FORWARD_IF_ERROR(ZSTD_initLegacyStream(&zds->legacyContext,
zds->previousLegacyVersion, legacyVersion,
dict, dictSize), "");
zds->legacyVersion = zds->previousLegacyVersion = legacyVersion;
{ size_t const hint = ZSTD_decompressLegacyStream(zds->legacyContext, legacyVersion, output, input);
if (hint==0) zds->streamStage = zdss_init; /* or stay in stage zdss_loadHeader */
return hint;
} }
#endif
return hSize; /* error */
}
if (hSize != 0) { /* need more input */
size_t const toLoad = hSize - zds->lhSize; /* if hSize!=0, hSize > zds->lhSize */
size_t const remainingInput = (size_t)(iend-ip);
assert(iend >= ip);
if (toLoad > remainingInput) { /* not enough input to load full header */
if (remainingInput > 0) {
ZSTD_memcpy(zds->headerBuffer + zds->lhSize, ip, remainingInput);
zds->lhSize += remainingInput;
}
input->pos = input->size;
/* check first few bytes */
FORWARD_IF_ERROR(
ZSTD_getFrameHeader_advanced(&zds->fParams, zds->headerBuffer, zds->lhSize, zds->format),
"First few bytes detected incorrect" );
/* return hint input size */
return (MAX((size_t)ZSTD_FRAMEHEADERSIZE_MIN(zds->format), hSize) - zds->lhSize) + ZSTD_blockHeaderSize; /* remaining header bytes + next block header */
}
assert(ip != NULL);
ZSTD_memcpy(zds->headerBuffer + zds->lhSize, ip, toLoad); zds->lhSize = hSize; ip += toLoad;
break;
} }
/* check for single-pass mode opportunity */
if (zds->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN
&& zds->fParams.frameType != ZSTD_skippableFrame
&& (U64)(size_t)(oend-op) >= zds->fParams.frameContentSize) {
size_t const cSize = ZSTD_findFrameCompressedSize_advanced(istart, (size_t)(iend-istart), zds->format);
if (cSize <= (size_t)(iend-istart)) {
/* shortcut : using single-pass mode */
size_t const decompressedSize = ZSTD_decompress_usingDDict(zds, op, (size_t)(oend-op), istart, cSize, ZSTD_getDDict(zds));
if (ZSTD_isError(decompressedSize)) return decompressedSize;
DEBUGLOG(4, "shortcut to single-pass ZSTD_decompress_usingDDict()");
assert(istart != NULL);
ip = istart + cSize;
op = op ? op + decompressedSize : op; /* can occur if frameContentSize = 0 (empty frame) */
zds->expected = 0;
zds->streamStage = zdss_init;
someMoreWork = 0;
break;
} }
/* Check output buffer is large enough for ZSTD_odm_stable. */
if (zds->outBufferMode == ZSTD_bm_stable
&& zds->fParams.frameType != ZSTD_skippableFrame
&& zds->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN
&& (U64)(size_t)(oend-op) < zds->fParams.frameContentSize) {
RETURN_ERROR(dstSize_tooSmall, "ZSTD_obm_stable passed but ZSTD_outBuffer is too small");
}
/* Consume header (see ZSTDds_decodeFrameHeader) */
DEBUGLOG(4, "Consume header");
FORWARD_IF_ERROR(ZSTD_decompressBegin_usingDDict(zds, ZSTD_getDDict(zds)), "");
if (zds->format == ZSTD_f_zstd1
&& (MEM_readLE32(zds->headerBuffer) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) { /* skippable frame */
zds->expected = MEM_readLE32(zds->headerBuffer + ZSTD_FRAMEIDSIZE);
zds->stage = ZSTDds_skipFrame;
} else {
FORWARD_IF_ERROR(ZSTD_decodeFrameHeader(zds, zds->headerBuffer, zds->lhSize), "");
zds->expected = ZSTD_blockHeaderSize;
zds->stage = ZSTDds_decodeBlockHeader;
}
/* control buffer memory usage */
DEBUGLOG(4, "Control max memory usage (%u KB <= max %u KB)",
(U32)(zds->fParams.windowSize >>10),
(U32)(zds->maxWindowSize >> 10) );
zds->fParams.windowSize = MAX(zds->fParams.windowSize, 1U << ZSTD_WINDOWLOG_ABSOLUTEMIN);
RETURN_ERROR_IF(zds->fParams.windowSize > zds->maxWindowSize,
frameParameter_windowTooLarge, "");
if (zds->maxBlockSizeParam != 0)
zds->fParams.blockSizeMax = MIN(zds->fParams.blockSizeMax, (unsigned)zds->maxBlockSizeParam);
/* Adapt buffer sizes to frame header instructions */
{ size_t const neededInBuffSize = MAX(zds->fParams.blockSizeMax, 4 /* frame checksum */);
size_t const neededOutBuffSize = zds->outBufferMode == ZSTD_bm_buffered
? ZSTD_decodingBufferSize_internal(zds->fParams.windowSize, zds->fParams.frameContentSize, zds->fParams.blockSizeMax)
: 0;
ZSTD_DCtx_updateOversizedDuration(zds, neededInBuffSize, neededOutBuffSize);
{ int const tooSmall = (zds->inBuffSize < neededInBuffSize) || (zds->outBuffSize < neededOutBuffSize);
int const tooLarge = ZSTD_DCtx_isOversizedTooLong(zds);
if (tooSmall || tooLarge) {
size_t const bufferSize = neededInBuffSize + neededOutBuffSize;
DEBUGLOG(4, "inBuff : from %u to %u",
(U32)zds->inBuffSize, (U32)neededInBuffSize);
DEBUGLOG(4, "outBuff : from %u to %u",
(U32)zds->outBuffSize, (U32)neededOutBuffSize);
if (zds->staticSize) { /* static DCtx */
DEBUGLOG(4, "staticSize : %u", (U32)zds->staticSize);
assert(zds->staticSize >= sizeof(ZSTD_DCtx)); /* controlled at init */
RETURN_ERROR_IF(
bufferSize > zds->staticSize - sizeof(ZSTD_DCtx),
memory_allocation, "");
} else {
ZSTD_customFree(zds->inBuff, zds->customMem);
zds->inBuffSize = 0;
zds->outBuffSize = 0;
zds->inBuff = (char*)ZSTD_customMalloc(bufferSize, zds->customMem);
RETURN_ERROR_IF(zds->inBuff == NULL, memory_allocation, "");
}
zds->inBuffSize = neededInBuffSize;
zds->outBuff = zds->inBuff + zds->inBuffSize;
zds->outBuffSize = neededOutBuffSize;
} } }
zds->streamStage = zdss_read;
ZSTD_FALLTHROUGH;
case zdss_read:
DEBUGLOG(5, "stage zdss_read");
{ size_t const neededInSize = ZSTD_nextSrcSizeToDecompressWithInputSize(zds, (size_t)(iend - ip));
DEBUGLOG(5, "neededInSize = %u", (U32)neededInSize);
if (neededInSize==0) { /* end of frame */
zds->streamStage = zdss_init;
someMoreWork = 0;
break;
}
if ((size_t)(iend-ip) >= neededInSize) { /* decode directly from src */
FORWARD_IF_ERROR(ZSTD_decompressContinueStream(zds, &op, oend, ip, neededInSize), "");
assert(ip != NULL);
ip += neededInSize;
/* Function modifies the stage so we must break */
break;
} }
if (ip==iend) { someMoreWork = 0; break; } /* no more input */
zds->streamStage = zdss_load;
ZSTD_FALLTHROUGH;
case zdss_load:
{ size_t const neededInSize = ZSTD_nextSrcSizeToDecompress(zds);
size_t const toLoad = neededInSize - zds->inPos;
int const isSkipFrame = ZSTD_isSkipFrame(zds);
size_t loadedSize;
/* At this point we shouldn't be decompressing a block that we can stream. */
assert(neededInSize == ZSTD_nextSrcSizeToDecompressWithInputSize(zds, (size_t)(iend - ip)));
if (isSkipFrame) {
loadedSize = MIN(toLoad, (size_t)(iend-ip));
} else {
RETURN_ERROR_IF(toLoad > zds->inBuffSize - zds->inPos,
corruption_detected,
"should never happen");
loadedSize = ZSTD_limitCopy(zds->inBuff + zds->inPos, toLoad, ip, (size_t)(iend-ip));
}
if (loadedSize != 0) {
/* ip may be NULL */
ip += loadedSize;
zds->inPos += loadedSize;
}
if (loadedSize < toLoad) { someMoreWork = 0; break; } /* not enough input, wait for more */
/* decode loaded input */
zds->inPos = 0; /* input is consumed */
FORWARD_IF_ERROR(ZSTD_decompressContinueStream(zds, &op, oend, zds->inBuff, neededInSize), "");
/* Function modifies the stage so we must break */
break;
}
case zdss_flush:
{
size_t const toFlushSize = zds->outEnd - zds->outStart;
size_t const flushedSize = ZSTD_limitCopy(op, (size_t)(oend-op), zds->outBuff + zds->outStart, toFlushSize);
op = op ? op + flushedSize : op;
zds->outStart += flushedSize;
if (flushedSize == toFlushSize) { /* flush completed */
zds->streamStage = zdss_read;
if ( (zds->outBuffSize < zds->fParams.frameContentSize)
&& (zds->outStart + zds->fParams.blockSizeMax > zds->outBuffSize) ) {
DEBUGLOG(5, "restart filling outBuff from beginning (left:%i, needed:%u)",
(int)(zds->outBuffSize - zds->outStart),
(U32)zds->fParams.blockSizeMax);
zds->outStart = zds->outEnd = 0;
}
break;
} }
/* cannot complete flush */
someMoreWork = 0;
break;
default:
assert(0); /* impossible */
RETURN_ERROR(GENERIC, "impossible to reach"); /* some compilers require default to do something */
} }
/* result */
input->pos = (size_t)(ip - (const char*)(input->src));
output->pos = (size_t)(op - (char*)(output->dst));
/* Update the expected output buffer for ZSTD_obm_stable. */
zds->expectedOutBuffer = *output;
if ((ip==istart) && (op==ostart)) { /* no forward progress */
zds->noForwardProgress ++;
if (zds->noForwardProgress >= ZSTD_NO_FORWARD_PROGRESS_MAX) {
RETURN_ERROR_IF(op==oend, noForwardProgress_destFull, "");
RETURN_ERROR_IF(ip==iend, noForwardProgress_inputEmpty, "");
assert(0);
}
} else {
zds->noForwardProgress = 0;
}
{ size_t nextSrcSizeHint = ZSTD_nextSrcSizeToDecompress(zds);
if (!nextSrcSizeHint) { /* frame fully decoded */
if (zds->outEnd == zds->outStart) { /* output fully flushed */
if (zds->hostageByte) {
if (input->pos >= input->size) {
/* can't release hostage (not present) */
zds->streamStage = zdss_read;
return 1;
}
input->pos++; /* release hostage */
} /* zds->hostageByte */
return 0;
} /* zds->outEnd == zds->outStart */
if (!zds->hostageByte) { /* output not fully flushed; keep last byte as hostage; will be released when all output is flushed */
input->pos--; /* note : pos > 0, otherwise, impossible to finish reading last block */
zds->hostageByte=1;
}
return 1;
} /* nextSrcSizeHint==0 */
nextSrcSizeHint += ZSTD_blockHeaderSize * (ZSTD_nextInputType(zds) == ZSTDnit_block); /* preload header of next block */
assert(zds->inPos <= nextSrcSizeHint);
nextSrcSizeHint -= zds->inPos; /* part already loaded*/
return nextSrcSizeHint;
}
}
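/* Usage sketch (illustrative) : the typical ZSTD_decompressStream() loop, refilling
* inBuff from a source and flushing outBuff to a sink as it fills. readFromSource,
* writeToSink and the two buffers are hypothetical; their capacities come from
* ZSTD_DStreamInSize() and ZSTD_DStreamOutSize().
*
*   size_t const inCap  = ZSTD_DStreamInSize();
*   size_t const outCap = ZSTD_DStreamOutSize();
*   size_t readSize;
*   while ((readSize = readFromSource(inBuff, inCap)) != 0) {
*       ZSTD_inBuffer input = { inBuff, readSize, 0 };
*       while (input.pos < input.size) {
*           ZSTD_outBuffer output = { outBuff, outCap, 0 };
*           size_t const ret = ZSTD_decompressStream(zds, &output, &input);
*           if (ZSTD_isError(ret)) return ret;
*           writeToSink(outBuff, output.pos);
*       }
*   }
*/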
size_t ZSTD_decompressStream_simpleArgs (
ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity, size_t* dstPos,
const void* src, size_t srcSize, size_t* srcPos)
{
ZSTD_outBuffer output;
ZSTD_inBuffer input;
output.dst = dst;
output.size = dstCapacity;
output.pos = *dstPos;
input.src = src;
input.size = srcSize;
input.pos = *srcPos;
{ size_t const cErr = ZSTD_decompressStream(dctx, &output, &input);
*dstPos = output.pos;
*srcPos = input.pos;
return cErr;
}
}
/**** ended inlining decompress/zstd_decompress.c ****/
/**** start inlining decompress/zstd_decompress_block.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* zstd_decompress_block :
* this module takes care of decompressing _compressed_ block */
/*-*******************************************************
* Dependencies
*********************************************************/
/**** skipping file: ../common/zstd_deps.h ****/
/**** skipping file: ../common/compiler.h ****/
/**** skipping file: ../common/cpu.h ****/
/**** skipping file: ../common/mem.h ****/
#define FSE_STATIC_LINKING_ONLY
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: zstd_decompress_internal.h ****/
/**** skipping file: zstd_ddict.h ****/
/**** skipping file: zstd_decompress_block.h ****/
/**** skipping file: ../common/bits.h ****/
/*_*******************************************************
* Macros
**********************************************************/
/* These two optional macros each force the use of one of the two
* ZSTD_decompressSequences implementations. They cannot both be defined
* at the same time.
*/
#if defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
#error "Cannot force the use of the short and the long ZSTD_decompressSequences variants!"
#endif
/*_*******************************************************
* Memory operations
**********************************************************/
static void ZSTD_copy4(void* dst, const void* src) { ZSTD_memcpy(dst, src, 4); }
/*-*************************************************************
* Block decoding
***************************************************************/
static size_t ZSTD_blockSizeMax(ZSTD_DCtx const* dctx)
{
size_t const blockSizeMax = dctx->isFrameDecompression ? dctx->fParams.blockSizeMax : ZSTD_BLOCKSIZE_MAX;
assert(blockSizeMax <= ZSTD_BLOCKSIZE_MAX);
return blockSizeMax;
}
/*! ZSTD_getcBlockSize() :
* Provides the size of compressed block from block header `src` */
size_t ZSTD_getcBlockSize(const void* src, size_t srcSize,
blockProperties_t* bpPtr)
{
RETURN_ERROR_IF(srcSize < ZSTD_blockHeaderSize, srcSize_wrong, "");
{ U32 const cBlockHeader = MEM_readLE24(src);
U32 const cSize = cBlockHeader >> 3;
bpPtr->lastBlock = cBlockHeader & 1;
bpPtr->blockType = (blockType_e)((cBlockHeader >> 1) & 3);
bpPtr->origSize = cSize; /* only useful for RLE */
if (bpPtr->blockType == bt_rle) return 1;
RETURN_ERROR_IF(bpPtr->blockType == bt_reserved, corruption_detected, "");
return cSize;
}
}
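/* Worked example (illustrative) : a block header made of the 3 bytes
 * { 0x25, 0x02, 0x00 } reads as cBlockHeader = 0x000225 :
 * lastBlock = 0x225 & 1 = 1, blockType = (0x225 >> 1) & 3 = 2 (bt_compressed),
 * and cSize = 0x225 >> 3 = 68, so ZSTD_getcBlockSize() returns 68. */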
/* Allocate buffer for literals, either overlapping current dst, or split between dst and litExtraBuffer, or stored entirely within litExtraBuffer */
static void ZSTD_allocateLiteralsBuffer(ZSTD_DCtx* dctx, void* const dst, const size_t dstCapacity, const size_t litSize,
const streaming_operation streaming, const size_t expectedWriteSize, const unsigned splitImmediately)
{
size_t const blockSizeMax = ZSTD_blockSizeMax(dctx);
assert(litSize <= blockSizeMax);
assert(dctx->isFrameDecompression || streaming == not_streaming);
assert(expectedWriteSize <= blockSizeMax);
if (streaming == not_streaming && dstCapacity > blockSizeMax + WILDCOPY_OVERLENGTH + litSize + WILDCOPY_OVERLENGTH) {
/* If we aren't streaming, we can just put the literals after the output
* of the current block. We don't need to worry about overwriting the
* extDict of our window, because it doesn't exist.
* So if we have space after the end of the block, just put it there.
*/
dctx->litBuffer = (BYTE*)dst + blockSizeMax + WILDCOPY_OVERLENGTH;
dctx->litBufferEnd = dctx->litBuffer + litSize;
dctx->litBufferLocation = ZSTD_in_dst;
} else if (litSize <= ZSTD_LITBUFFEREXTRASIZE) {
/* Literals fit entirely within the extra buffer, put them there to avoid
* having to split the literals.
*/
dctx->litBuffer = dctx->litExtraBuffer;
dctx->litBufferEnd = dctx->litBuffer + litSize;
dctx->litBufferLocation = ZSTD_not_in_dst;
} else {
assert(blockSizeMax > ZSTD_LITBUFFEREXTRASIZE);
/* Literals must be split between the output block and the extra lit
* buffer. We fill the extra lit buffer with the tail of the literals,
* and put the rest of the literals at the end of the block, with
* WILDCOPY_OVERLENGTH of buffer room to allow for overreads.
* This MUST not write more than our maxBlockSize beyond dst, because in
* streaming mode, that could overwrite part of our extDict window.
*/
if (splitImmediately) {
/* won't fit in litExtraBuffer, so it will be split between end of dst and extra buffer */
dctx->litBuffer = (BYTE*)dst + expectedWriteSize - litSize + ZSTD_LITBUFFEREXTRASIZE - WILDCOPY_OVERLENGTH;
dctx->litBufferEnd = dctx->litBuffer + litSize - ZSTD_LITBUFFEREXTRASIZE;
} else {
/* initially this will be stored entirely in dst during huffman decoding, it will partially be shifted to litExtraBuffer after */
dctx->litBuffer = (BYTE*)dst + expectedWriteSize - litSize;
dctx->litBufferEnd = (BYTE*)dst + expectedWriteSize;
}
dctx->litBufferLocation = ZSTD_split;
assert(dctx->litBufferEnd <= (BYTE*)dst + expectedWriteSize);
}
}
/*! ZSTD_decodeLiteralsBlock() :
* Where it is possible to do so without being stomped by the output during decompression, the literals block will be stored
* in the dstBuffer. If there is room to do so, it will be stored in full in the excess dst space after where the current
* block will be output. Otherwise it will be stored at the end of the current dst blockspace, with a small portion being
* stored in dctx->litExtraBuffer to help keep it "ahead" of the current output write.
*
* @return : nb of bytes read from src (< srcSize )
* note : symbol not declared but exposed for fullbench */
static size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
const void* src, size_t srcSize, /* note : srcSize < BLOCKSIZE */
void* dst, size_t dstCapacity, const streaming_operation streaming)
{
DEBUGLOG(5, "ZSTD_decodeLiteralsBlock");
RETURN_ERROR_IF(srcSize < MIN_CBLOCK_SIZE, corruption_detected, "");
{ const BYTE* const istart = (const BYTE*) src;
SymbolEncodingType_e const litEncType = (SymbolEncodingType_e)(istart[0] & 3);
size_t const blockSizeMax = ZSTD_blockSizeMax(dctx);
switch(litEncType)
{
case set_repeat:
DEBUGLOG(5, "set_repeat flag : re-using stats from previous compressed literals block");
RETURN_ERROR_IF(dctx->litEntropy==0, dictionary_corrupted, "");
ZSTD_FALLTHROUGH;
case set_compressed:
RETURN_ERROR_IF(srcSize < 5, corruption_detected, "srcSize >= MIN_CBLOCK_SIZE == 2; here we need up to 5 for case 3");
{ size_t lhSize, litSize, litCSize;
U32 singleStream=0;
U32 const lhlCode = (istart[0] >> 2) & 3;
U32 const lhc = MEM_readLE32(istart);
size_t hufSuccess;
size_t expectedWriteSize = MIN(blockSizeMax, dstCapacity);
int const flags = 0
| (ZSTD_DCtx_get_bmi2(dctx) ? HUF_flags_bmi2 : 0)
| (dctx->disableHufAsm ? HUF_flags_disableAsm : 0);
switch(lhlCode)
{
case 0: case 1: default: /* note : default is impossible, since lhlCode is within [0..3] */
/* 2 - 2 - 10 - 10 */
singleStream = !lhlCode;
lhSize = 3;
litSize = (lhc >> 4) & 0x3FF;
litCSize = (lhc >> 14) & 0x3FF;
break;
case 2:
/* 2 - 2 - 14 - 14 */
lhSize = 4;
litSize = (lhc >> 4) & 0x3FFF;
litCSize = lhc >> 18;
break;
case 3:
/* 2 - 2 - 18 - 18 */
lhSize = 5;
litSize = (lhc >> 4) & 0x3FFFF;
litCSize = (lhc >> 22) + ((size_t)istart[4] << 10);
break;
}
RETURN_ERROR_IF(litSize > 0 && dst == NULL, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(litSize > blockSizeMax, corruption_detected, "");
if (!singleStream)
RETURN_ERROR_IF(litSize < MIN_LITERALS_FOR_4_STREAMS, literals_headerWrong,
"Not enough literals (%zu) for the 4-streams mode (min %u)",
litSize, MIN_LITERALS_FOR_4_STREAMS);
RETURN_ERROR_IF(litCSize + lhSize > srcSize, corruption_detected, "");
RETURN_ERROR_IF(expectedWriteSize < litSize , dstSize_tooSmall, "");
ZSTD_allocateLiteralsBuffer(dctx, dst, dstCapacity, litSize, streaming, expectedWriteSize, 0);
/* prefetch huffman table if cold */
if (dctx->ddictIsCold && (litSize > 768 /* heuristic */)) {
PREFETCH_AREA(dctx->HUFptr, sizeof(dctx->entropy.hufTable));
}
if (litEncType==set_repeat) {
if (singleStream) {
hufSuccess = HUF_decompress1X_usingDTable(
dctx->litBuffer, litSize, istart+lhSize, litCSize,
dctx->HUFptr, flags);
} else {
assert(litSize >= MIN_LITERALS_FOR_4_STREAMS);
hufSuccess = HUF_decompress4X_usingDTable(
dctx->litBuffer, litSize, istart+lhSize, litCSize,
dctx->HUFptr, flags);
}
} else {
if (singleStream) {
#if defined(HUF_FORCE_DECOMPRESS_X2)
hufSuccess = HUF_decompress1X_DCtx_wksp(
dctx->entropy.hufTable, dctx->litBuffer, litSize,
istart+lhSize, litCSize, dctx->workspace,
sizeof(dctx->workspace), flags);
#else
hufSuccess = HUF_decompress1X1_DCtx_wksp(
dctx->entropy.hufTable, dctx->litBuffer, litSize,
istart+lhSize, litCSize, dctx->workspace,
sizeof(dctx->workspace), flags);
#endif
} else {
hufSuccess = HUF_decompress4X_hufOnly_wksp(
dctx->entropy.hufTable, dctx->litBuffer, litSize,
istart+lhSize, litCSize, dctx->workspace,
sizeof(dctx->workspace), flags);
}
}
if (dctx->litBufferLocation == ZSTD_split)
{
assert(litSize > ZSTD_LITBUFFEREXTRASIZE);
ZSTD_memcpy(dctx->litExtraBuffer, dctx->litBufferEnd - ZSTD_LITBUFFEREXTRASIZE, ZSTD_LITBUFFEREXTRASIZE);
ZSTD_memmove(dctx->litBuffer + ZSTD_LITBUFFEREXTRASIZE - WILDCOPY_OVERLENGTH, dctx->litBuffer, litSize - ZSTD_LITBUFFEREXTRASIZE);
dctx->litBuffer += ZSTD_LITBUFFEREXTRASIZE - WILDCOPY_OVERLENGTH;
dctx->litBufferEnd -= WILDCOPY_OVERLENGTH;
assert(dctx->litBufferEnd <= (BYTE*)dst + blockSizeMax);
}
RETURN_ERROR_IF(HUF_isError(hufSuccess), corruption_detected, "");
dctx->litPtr = dctx->litBuffer;
dctx->litSize = litSize;
dctx->litEntropy = 1;
if (litEncType==set_compressed) dctx->HUFptr = dctx->entropy.hufTable;
return litCSize + lhSize;
}
case set_basic:
{ size_t litSize, lhSize;
U32 const lhlCode = ((istart[0]) >> 2) & 3;
size_t expectedWriteSize = MIN(blockSizeMax, dstCapacity);
switch(lhlCode)
{
case 0: case 2: default: /* note : default is impossible, since lhlCode is within [0..3] */
lhSize = 1;
litSize = istart[0] >> 3;
break;
case 1:
lhSize = 2;
litSize = MEM_readLE16(istart) >> 4;
break;
case 3:
lhSize = 3;
RETURN_ERROR_IF(srcSize<3, corruption_detected, "srcSize >= MIN_CBLOCK_SIZE == 2; here we need lhSize = 3");
litSize = MEM_readLE24(istart) >> 4;
break;
}
RETURN_ERROR_IF(litSize > 0 && dst == NULL, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(litSize > blockSizeMax, corruption_detected, "");
RETURN_ERROR_IF(expectedWriteSize < litSize, dstSize_tooSmall, "");
ZSTD_allocateLiteralsBuffer(dctx, dst, dstCapacity, litSize, streaming, expectedWriteSize, 1);
if (lhSize+litSize+WILDCOPY_OVERLENGTH > srcSize) { /* risk reading beyond src buffer with wildcopy */
RETURN_ERROR_IF(litSize+lhSize > srcSize, corruption_detected, "");
if (dctx->litBufferLocation == ZSTD_split)
{
ZSTD_memcpy(dctx->litBuffer, istart + lhSize, litSize - ZSTD_LITBUFFEREXTRASIZE);
ZSTD_memcpy(dctx->litExtraBuffer, istart + lhSize + litSize - ZSTD_LITBUFFEREXTRASIZE, ZSTD_LITBUFFEREXTRASIZE);
}
else
{
ZSTD_memcpy(dctx->litBuffer, istart + lhSize, litSize);
}
dctx->litPtr = dctx->litBuffer;
dctx->litSize = litSize;
return lhSize+litSize;
}
/* direct reference into compressed stream */
dctx->litPtr = istart+lhSize;
dctx->litSize = litSize;
dctx->litBufferEnd = dctx->litPtr + litSize;
dctx->litBufferLocation = ZSTD_not_in_dst;
return lhSize+litSize;
}
case set_rle:
{ U32 const lhlCode = ((istart[0]) >> 2) & 3;
size_t litSize, lhSize;
size_t expectedWriteSize = MIN(blockSizeMax, dstCapacity);
switch(lhlCode)
{
case 0: case 2: default: /* note : default is impossible, since lhlCode is within [0..3] */
lhSize = 1;
litSize = istart[0] >> 3;
break;
case 1:
lhSize = 2;
RETURN_ERROR_IF(srcSize<3, corruption_detected, "srcSize >= MIN_CBLOCK_SIZE == 2; here we need lhSize+1 = 3");
litSize = MEM_readLE16(istart) >> 4;
break;
case 3:
lhSize = 3;
RETURN_ERROR_IF(srcSize<4, corruption_detected, "srcSize >= MIN_CBLOCK_SIZE == 2; here we need lhSize+1 = 4");
litSize = MEM_readLE24(istart) >> 4;
break;
}
RETURN_ERROR_IF(litSize > 0 && dst == NULL, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(litSize > blockSizeMax, corruption_detected, "");
RETURN_ERROR_IF(expectedWriteSize < litSize, dstSize_tooSmall, "");
ZSTD_allocateLiteralsBuffer(dctx, dst, dstCapacity, litSize, streaming, expectedWriteSize, 1);
if (dctx->litBufferLocation == ZSTD_split)
{
ZSTD_memset(dctx->litBuffer, istart[lhSize], litSize - ZSTD_LITBUFFEREXTRASIZE);
ZSTD_memset(dctx->litExtraBuffer, istart[lhSize], ZSTD_LITBUFFEREXTRASIZE);
}
else
{
ZSTD_memset(dctx->litBuffer, istart[lhSize], litSize);
}
dctx->litPtr = dctx->litBuffer;
dctx->litSize = litSize;
return lhSize+1;
}
default:
RETURN_ERROR(corruption_detected, "impossible");
}
}
}
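/* Worked example (illustrative) : with istart[0] = 0x28, the literals section
 * header decodes as litEncType = 0x28 & 3 = 0 (set_basic) and
 * lhlCode = (0x28 >> 2) & 3 = 2, hence lhSize = 1 and litSize = 0x28 >> 3 = 5 :
 * five raw literal bytes follow the 1-byte header. */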
/* Hidden declaration for fullbench */
size_t ZSTD_decodeLiteralsBlock_wrapper(ZSTD_DCtx* dctx,
const void* src, size_t srcSize,
void* dst, size_t dstCapacity);
size_t ZSTD_decodeLiteralsBlock_wrapper(ZSTD_DCtx* dctx,
const void* src, size_t srcSize,
void* dst, size_t dstCapacity)
{
dctx->isFrameDecompression = 0;
return ZSTD_decodeLiteralsBlock(dctx, src, srcSize, dst, dstCapacity, not_streaming);
}
/* Default FSE distribution tables.
* These are pre-calculated FSE decoding tables using the default distributions as defined in the specification :
* https://github.com/facebook/zstd/blob/release/doc/zstd_compression_format.md#default-distributions
* They were generated programmatically with the following method :
* - start from the default distributions, present in /lib/common/zstd_internal.h
* - generate the tables normally, using ZSTD_buildFSETable()
* - print out the content of the tables
* - prettify the output, report it below, and test with the fuzzer to ensure it's correct */
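/* Illustrative sketch (disabled) of the regeneration method described above.
 * It assumes the default-distribution and workspace symbols exported by
 * zstd_internal.h / zstd_decompress_internal.h (LL_defaultNorm, LL_base, LL_bits,
 * ZSTD_BUILD_FSE_TABLE_WKSP_SIZE_U32); the function name is hypothetical. */
#if 0
static void example_regen_LL_defaultDTable(void)
{
    ZSTD_seqSymbol dt[(1<<LL_DEFAULTNORMLOG)+1];
    U32 wksp[ZSTD_BUILD_FSE_TABLE_WKSP_SIZE_U32];
    ZSTD_buildFSETable(dt, LL_defaultNorm, MaxLL,
                       LL_base, LL_bits,
                       LL_DEFAULTNORMLOG,
                       wksp, sizeof(wksp), /* bmi2 */ 0);
    /* dt[] should now match LL_defaultDTable below (printed & prettified) */
}
#endif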
/* Default FSE distribution table for Literal Lengths */
static const ZSTD_seqSymbol LL_defaultDTable[(1<<LL_DEFAULTNORMLOG)+1] = {
{ 1, 1, 1, LL_DEFAULTNORMLOG}, /* header : fastMode, tableLog */
/* nextState, nbAddBits, nbBits, baseVal */
{ 0, 0, 4, 0}, { 16, 0, 4, 0},
{ 32, 0, 5, 1}, { 0, 0, 5, 3},
{ 0, 0, 5, 4}, { 0, 0, 5, 6},
{ 0, 0, 5, 7}, { 0, 0, 5, 9},
{ 0, 0, 5, 10}, { 0, 0, 5, 12},
{ 0, 0, 6, 14}, { 0, 1, 5, 16},
{ 0, 1, 5, 20}, { 0, 1, 5, 22},
{ 0, 2, 5, 28}, { 0, 3, 5, 32},
{ 0, 4, 5, 48}, { 32, 6, 5, 64},
{ 0, 7, 5, 128}, { 0, 8, 6, 256},
{ 0, 10, 6, 1024}, { 0, 12, 6, 4096},
{ 32, 0, 4, 0}, { 0, 0, 4, 1},
{ 0, 0, 5, 2}, { 32, 0, 5, 4},
{ 0, 0, 5, 5}, { 32, 0, 5, 7},
{ 0, 0, 5, 8}, { 32, 0, 5, 10},
{ 0, 0, 5, 11}, { 0, 0, 6, 13},
{ 32, 1, 5, 16}, { 0, 1, 5, 18},
{ 32, 1, 5, 22}, { 0, 2, 5, 24},
{ 32, 3, 5, 32}, { 0, 3, 5, 40},
{ 0, 6, 4, 64}, { 16, 6, 4, 64},
{ 32, 7, 5, 128}, { 0, 9, 6, 512},
{ 0, 11, 6, 2048}, { 48, 0, 4, 0},
{ 16, 0, 4, 1}, { 32, 0, 5, 2},
{ 32, 0, 5, 3}, { 32, 0, 5, 5},
{ 32, 0, 5, 6}, { 32, 0, 5, 8},
{ 32, 0, 5, 9}, { 32, 0, 5, 11},
{ 32, 0, 5, 12}, { 0, 0, 6, 15},
{ 32, 1, 5, 18}, { 32, 1, 5, 20},
{ 32, 2, 5, 24}, { 32, 2, 5, 28},
{ 32, 3, 5, 40}, { 32, 4, 5, 48},
{ 0, 16, 6,65536}, { 0, 15, 6,32768},
{ 0, 14, 6,16384}, { 0, 13, 6, 8192},
}; /* LL_defaultDTable */
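/* How to read one row (illustrative) : the entry { 0, 7, 5, 128 } above means
 * baseValue = 128 with 7 additional bits, i.e. literal lengths 128..255;
 * after reading those bits, 5 more bits are read and added to nextState 0
 * to select the next FSE state. */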
/* Default FSE distribution table for Offset Codes */
static const ZSTD_seqSymbol OF_defaultDTable[(1<<OF_DEFAULTNORMLOG)+1] = {
{ 1, 1, 1, OF_DEFAULTNORMLOG}, /* header : fastMode, tableLog */
/* nextState, nbAddBits, nbBits, baseVal */
{ 0, 0, 5, 0}, { 0, 6, 4, 61},
{ 0, 9, 5, 509}, { 0, 15, 5,32765},
{ 0, 21, 5,2097149}, { 0, 3, 5, 5},
{ 0, 7, 4, 125}, { 0, 12, 5, 4093},
{ 0, 18, 5,262141}, { 0, 23, 5,8388605},
{ 0, 5, 5, 29}, { 0, 8, 4, 253},
{ 0, 14, 5,16381}, { 0, 20, 5,1048573},
{ 0, 2, 5, 1}, { 16, 7, 4, 125},
{ 0, 11, 5, 2045}, { 0, 17, 5,131069},
{ 0, 22, 5,4194301}, { 0, 4, 5, 13},
{ 16, 8, 4, 253}, { 0, 13, 5, 8189},
{ 0, 19, 5,524285}, { 0, 1, 5, 1},
{ 16, 6, 4, 61}, { 0, 10, 5, 1021},
{ 0, 16, 5,65533}, { 0, 28, 5,268435453},
{ 0, 27, 5,134217725}, { 0, 26, 5,67108861},
{ 0, 25, 5,33554429}, { 0, 24, 5,16777213},
}; /* OF_defaultDTable */
/* Default FSE distribution table for Match Lengths */
static const ZSTD_seqSymbol ML_defaultDTable[(1<<ML_DEFAULTNORMLOG)+1] = {
{ 1, 1, 1, ML_DEFAULTNORMLOG}, /* header : fastMode, tableLog */
/* nextState, nbAddBits, nbBits, baseVal */
{ 0, 0, 6, 3}, { 0, 0, 4, 4},
{ 32, 0, 5, 5}, { 0, 0, 5, 6},
{ 0, 0, 5, 8}, { 0, 0, 5, 9},
{ 0, 0, 5, 11}, { 0, 0, 6, 13},
{ 0, 0, 6, 16}, { 0, 0, 6, 19},
{ 0, 0, 6, 22}, { 0, 0, 6, 25},
{ 0, 0, 6, 28}, { 0, 0, 6, 31},
{ 0, 0, 6, 34}, { 0, 1, 6, 37},
{ 0, 1, 6, 41}, { 0, 2, 6, 47},
{ 0, 3, 6, 59}, { 0, 4, 6, 83},
{ 0, 7, 6, 131}, { 0, 9, 6, 515},
{ 16, 0, 4, 4}, { 0, 0, 4, 5},
{ 32, 0, 5, 6}, { 0, 0, 5, 7},
{ 32, 0, 5, 9}, { 0, 0, 5, 10},
{ 0, 0, 6, 12}, { 0, 0, 6, 15},
{ 0, 0, 6, 18}, { 0, 0, 6, 21},
{ 0, 0, 6, 24}, { 0, 0, 6, 27},
{ 0, 0, 6, 30}, { 0, 0, 6, 33},
{ 0, 1, 6, 35}, { 0, 1, 6, 39},
{ 0, 2, 6, 43}, { 0, 3, 6, 51},
{ 0, 4, 6, 67}, { 0, 5, 6, 99},
{ 0, 8, 6, 259}, { 32, 0, 4, 4},
{ 48, 0, 4, 4}, { 16, 0, 4, 5},
{ 32, 0, 5, 7}, { 32, 0, 5, 8},
{ 32, 0, 5, 10}, { 32, 0, 5, 11},
{ 0, 0, 6, 14}, { 0, 0, 6, 17},
{ 0, 0, 6, 20}, { 0, 0, 6, 23},
{ 0, 0, 6, 26}, { 0, 0, 6, 29},
{ 0, 0, 6, 32}, { 0, 16, 6,65539},
{ 0, 15, 6,32771}, { 0, 14, 6,16387},
{ 0, 13, 6, 8195}, { 0, 12, 6, 4099},
{ 0, 11, 6, 2051}, { 0, 10, 6, 1027},
}; /* ML_defaultDTable */
static void ZSTD_buildSeqTable_rle(ZSTD_seqSymbol* dt, U32 baseValue, U8 nbAddBits)
{
void* ptr = dt;
ZSTD_seqSymbol_header* const DTableH = (ZSTD_seqSymbol_header*)ptr;
ZSTD_seqSymbol* const cell = dt + 1;
DTableH->tableLog = 0;
DTableH->fastMode = 0;
cell->nbBits = 0;
cell->nextState = 0;
assert(nbAddBits < 255);
cell->nbAdditionalBits = nbAddBits;
cell->baseValue = baseValue;
}
/* ZSTD_buildFSETable() :
* generate FSE decoding table for one symbol (ll, ml or off)
* cannot fail if input is valid =>
* all inputs are presumed validated at this stage */
FORCE_INLINE_TEMPLATE
void ZSTD_buildFSETable_body(ZSTD_seqSymbol* dt,
const short* normalizedCounter, unsigned maxSymbolValue,
const U32* baseValue, const U8* nbAdditionalBits,
unsigned tableLog, void* wksp, size_t wkspSize)
{
ZSTD_seqSymbol* const tableDecode = dt+1;
U32 const maxSV1 = maxSymbolValue + 1;
U32 const tableSize = 1 << tableLog;
U16* symbolNext = (U16*)wksp;
BYTE* spread = (BYTE*)(symbolNext + MaxSeq + 1);
U32 highThreshold = tableSize - 1;
/* Sanity Checks */
assert(maxSymbolValue <= MaxSeq);
assert(tableLog <= MaxFSELog);
assert(wkspSize >= ZSTD_BUILD_FSE_TABLE_WKSP_SIZE);
(void)wkspSize;
/* Init, lay down lowprob symbols */
{ ZSTD_seqSymbol_header DTableH;
DTableH.tableLog = tableLog;
DTableH.fastMode = 1;
{ S16 const largeLimit= (S16)(1 << (tableLog-1));
U32 s;
for (s=0; s<maxSV1; s++) {
if (normalizedCounter[s]==-1) {
tableDecode[highThreshold--].baseValue = s;
symbolNext[s] = 1;
} else {
if (normalizedCounter[s] >= largeLimit) DTableH.fastMode=0;
assert(normalizedCounter[s]>=0);
symbolNext[s] = (U16)normalizedCounter[s];
} } }
ZSTD_memcpy(dt, &DTableH, sizeof(DTableH));
}
/* Spread symbols */
assert(tableSize <= 512);
/* Specialized symbol spreading for the case when there are
* no low probability (-1 count) symbols. When compressing
* small blocks we avoid low probability symbols to hit this
* case, since header decoding speed matters more.
*/
if (highThreshold == tableSize - 1) {
size_t const tableMask = tableSize-1;
size_t const step = FSE_TABLESTEP(tableSize);
/* First lay down the symbols in order.
* We use a uint64_t to lay down 8 bytes at a time. This reduces branch
* misses since small blocks generally have small table logs, so nearly
* all symbols have counts <= 8. We ensure we have 8 bytes at the end of
* our buffer to handle the over-write.
*/
{
U64 const add = 0x0101010101010101ull;
size_t pos = 0;
U64 sv = 0;
U32 s;
for (s=0; s<maxSV1; ++s, sv += add) {
int i;
int const n = normalizedCounter[s];
MEM_write64(spread + pos, sv);
for (i = 8; i < n; i += 8) {
MEM_write64(spread + pos + i, sv);
}
assert(n>=0);
pos += (size_t)n;
}
}
/* Now we spread those positions across the table.
* The benefit of doing it in two stages is that we avoid the
* variable size inner loop, which caused lots of branch misses.
* Now we can run through all the positions without any branch misses.
* We unroll the loop twice, since that is what empirically worked best.
*/
{
size_t position = 0;
size_t s;
size_t const unroll = 2;
assert(tableSize % unroll == 0); /* FSE_MIN_TABLELOG is 5 */
for (s = 0; s < (size_t)tableSize; s += unroll) {
size_t u;
for (u = 0; u < unroll; ++u) {
size_t const uPosition = (position + (u * step)) & tableMask;
tableDecode[uPosition].baseValue = spread[s + u];
}
position = (position + (unroll * step)) & tableMask;
}
assert(position == 0);
}
} else {
U32 const tableMask = tableSize-1;
U32 const step = FSE_TABLESTEP(tableSize);
U32 s, position = 0;
for (s=0; s<maxSV1; s++) {
int i;
int const n = normalizedCounter[s];
for (i=0; i<n; i++) {
tableDecode[position].baseValue = s;
position = (position + step) & tableMask;
while (UNLIKELY(position > highThreshold)) position = (position + step) & tableMask; /* lowprob area */
} }
assert(position == 0); /* position must reach all cells once, otherwise normalizedCounter is incorrect */
}
/* Build Decoding table */
{
U32 u;
for (u=0; u<tableSize; u++) {
U32 const symbol = tableDecode[u].baseValue;
U32 const nextState = symbolNext[symbol]++;
tableDecode[u].nbBits = (BYTE) (tableLog - ZSTD_highbit32(nextState) );
tableDecode[u].nextState = (U16) ( (nextState << tableDecode[u].nbBits) - tableSize);
assert(nbAdditionalBits[symbol] < 255);
tableDecode[u].nbAdditionalBits = nbAdditionalBits[symbol];
tableDecode[u].baseValue = baseValue[symbol];
}
}
}
/* Avoids the FORCE_INLINE of the _body() function. */
static void ZSTD_buildFSETable_body_default(ZSTD_seqSymbol* dt,
const short* normalizedCounter, unsigned maxSymbolValue,
const U32* baseValue, const U8* nbAdditionalBits,
unsigned tableLog, void* wksp, size_t wkspSize)
{
ZSTD_buildFSETable_body(dt, normalizedCounter, maxSymbolValue,
baseValue, nbAdditionalBits, tableLog, wksp, wkspSize);
}
#if DYNAMIC_BMI2
BMI2_TARGET_ATTRIBUTE static void ZSTD_buildFSETable_body_bmi2(ZSTD_seqSymbol* dt,
const short* normalizedCounter, unsigned maxSymbolValue,
const U32* baseValue, const U8* nbAdditionalBits,
unsigned tableLog, void* wksp, size_t wkspSize)
{
ZSTD_buildFSETable_body(dt, normalizedCounter, maxSymbolValue,
baseValue, nbAdditionalBits, tableLog, wksp, wkspSize);
}
#endif
void ZSTD_buildFSETable(ZSTD_seqSymbol* dt,
const short* normalizedCounter, unsigned maxSymbolValue,
const U32* baseValue, const U8* nbAdditionalBits,
unsigned tableLog, void* wksp, size_t wkspSize, int bmi2)
{
#if DYNAMIC_BMI2
if (bmi2) {
ZSTD_buildFSETable_body_bmi2(dt, normalizedCounter, maxSymbolValue,
baseValue, nbAdditionalBits, tableLog, wksp, wkspSize);
return;
}
#endif
(void)bmi2;
ZSTD_buildFSETable_body_default(dt, normalizedCounter, maxSymbolValue,
baseValue, nbAdditionalBits, tableLog, wksp, wkspSize);
}
/*! ZSTD_buildSeqTable() :
* @return : nb bytes read from src,
* or an error code if it fails */
static size_t ZSTD_buildSeqTable(ZSTD_seqSymbol* DTableSpace, const ZSTD_seqSymbol** DTablePtr,
SymbolEncodingType_e type, unsigned max, U32 maxLog,
const void* src, size_t srcSize,
const U32* baseValue, const U8* nbAdditionalBits,
const ZSTD_seqSymbol* defaultTable, U32 flagRepeatTable,
int ddictIsCold, int nbSeq, U32* wksp, size_t wkspSize,
int bmi2)
{
switch(type)
{
case set_rle :
RETURN_ERROR_IF(!srcSize, srcSize_wrong, "");
RETURN_ERROR_IF((*(const BYTE*)src) > max, corruption_detected, "");
{ U32 const symbol = *(const BYTE*)src;
U32 const baseline = baseValue[symbol];
U8 const nbBits = nbAdditionalBits[symbol];
ZSTD_buildSeqTable_rle(DTableSpace, baseline, nbBits);
}
*DTablePtr = DTableSpace;
return 1;
case set_basic :
*DTablePtr = defaultTable;
return 0;
case set_repeat:
RETURN_ERROR_IF(!flagRepeatTable, corruption_detected, "");
/* prefetch FSE table if used */
if (ddictIsCold && (nbSeq > 24 /* heuristic */)) {
const void* const pStart = *DTablePtr;
size_t const pSize = sizeof(ZSTD_seqSymbol) * (SEQSYMBOL_TABLE_SIZE(maxLog));
PREFETCH_AREA(pStart, pSize);
}
return 0;
case set_compressed :
{ unsigned tableLog;
S16 norm[MaxSeq+1];
size_t const headerSize = FSE_readNCount(norm, &max, &tableLog, src, srcSize);
RETURN_ERROR_IF(FSE_isError(headerSize), corruption_detected, "");
RETURN_ERROR_IF(tableLog > maxLog, corruption_detected, "");
ZSTD_buildFSETable(DTableSpace, norm, max, baseValue, nbAdditionalBits, tableLog, wksp, wkspSize, bmi2);
*DTablePtr = DTableSpace;
return headerSize;
}
default :
assert(0);
RETURN_ERROR(GENERIC, "impossible");
}
}
size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr,
const void* src, size_t srcSize)
{
const BYTE* const istart = (const BYTE*)src;
const BYTE* const iend = istart + srcSize;
const BYTE* ip = istart;
int nbSeq;
DEBUGLOG(5, "ZSTD_decodeSeqHeaders");
/* check */
RETURN_ERROR_IF(srcSize < MIN_SEQUENCES_SIZE, srcSize_wrong, "");
/* SeqHead */
nbSeq = *ip++;
if (nbSeq > 0x7F) {
if (nbSeq == 0xFF) {
RETURN_ERROR_IF(ip+2 > iend, srcSize_wrong, "");
nbSeq = MEM_readLE16(ip) + LONGNBSEQ;
ip+=2;
} else {
RETURN_ERROR_IF(ip >= iend, srcSize_wrong, "");
nbSeq = ((nbSeq-0x80)<<8) + *ip++;
}
}
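/* Worked examples (illustrative) of the Number_of_Sequences encoding just parsed :
 * byte0 = 0x40                      -> nbSeq = 64
 * byte0 = 0x81, byte1 = 0x10        -> nbSeq = ((0x81-0x80)<<8) + 0x10 = 272
 * byte0 = 0xFF, next LE16 = 0x0102  -> nbSeq = 0x0102 + LONGNBSEQ = 32770 */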
*nbSeqPtr = nbSeq;
if (nbSeq == 0) {
/* No sequence : section ends immediately */
RETURN_ERROR_IF(ip != iend, corruption_detected,
"extraneous data present in the Sequences section");
return (size_t)(ip - istart);
}
/* FSE table descriptors */
RETURN_ERROR_IF(ip+1 > iend, srcSize_wrong, ""); /* minimum possible size: 1 byte for symbol encoding types */
RETURN_ERROR_IF(*ip & 3, corruption_detected, ""); /* The last field, Reserved, must be all-zeroes. */
{ SymbolEncodingType_e const LLtype = (SymbolEncodingType_e)(*ip >> 6);
SymbolEncodingType_e const OFtype = (SymbolEncodingType_e)((*ip >> 4) & 3);
SymbolEncodingType_e const MLtype = (SymbolEncodingType_e)((*ip >> 2) & 3);
ip++;
/* Build DTables */
{ size_t const llhSize = ZSTD_buildSeqTable(dctx->entropy.LLTable, &dctx->LLTptr,
LLtype, MaxLL, LLFSELog,
ip, iend-ip,
LL_base, LL_bits,
LL_defaultDTable, dctx->fseEntropy,
dctx->ddictIsCold, nbSeq,
dctx->workspace, sizeof(dctx->workspace),
ZSTD_DCtx_get_bmi2(dctx));
RETURN_ERROR_IF(ZSTD_isError(llhSize), corruption_detected, "ZSTD_buildSeqTable failed");
ip += llhSize;
}
{ size_t const ofhSize = ZSTD_buildSeqTable(dctx->entropy.OFTable, &dctx->OFTptr,
OFtype, MaxOff, OffFSELog,
ip, iend-ip,
OF_base, OF_bits,
OF_defaultDTable, dctx->fseEntropy,
dctx->ddictIsCold, nbSeq,
dctx->workspace, sizeof(dctx->workspace),
ZSTD_DCtx_get_bmi2(dctx));
RETURN_ERROR_IF(ZSTD_isError(ofhSize), corruption_detected, "ZSTD_buildSeqTable failed");
ip += ofhSize;
}
{ size_t const mlhSize = ZSTD_buildSeqTable(dctx->entropy.MLTable, &dctx->MLTptr,
MLtype, MaxML, MLFSELog,
ip, iend-ip,
ML_base, ML_bits,
ML_defaultDTable, dctx->fseEntropy,
dctx->ddictIsCold, nbSeq,
dctx->workspace, sizeof(dctx->workspace),
ZSTD_DCtx_get_bmi2(dctx));
RETURN_ERROR_IF(ZSTD_isError(mlhSize), corruption_detected, "ZSTD_buildSeqTable failed");
ip += mlhSize;
}
}
return ip-istart;
}
typedef struct {
size_t litLength;
size_t matchLength;
size_t offset;
} seq_t;
typedef struct {
size_t state;
const ZSTD_seqSymbol* table;
} ZSTD_fseState;
typedef struct {
BIT_DStream_t DStream;
ZSTD_fseState stateLL;
ZSTD_fseState stateOffb;
ZSTD_fseState stateML;
size_t prevOffset[ZSTD_REP_NUM];
} seqState_t;
/*! ZSTD_overlapCopy8() :
* Copies 8 bytes from ip to op and updates op and ip where ip <= op.
* If the offset is < 8 then the offset is spread to at least 8 bytes.
*
* Precondition: *ip <= *op
* Postcondition: *op - *ip >= 8
*/
HINT_INLINE void ZSTD_overlapCopy8(BYTE** op, BYTE const** ip, size_t offset) {
assert(*ip <= *op);
if (offset < 8) {
/* close range match, overlap */
static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 }; /* added */
static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 }; /* subtracted */
int const sub2 = dec64table[offset];
(*op)[0] = (*ip)[0];
(*op)[1] = (*ip)[1];
(*op)[2] = (*ip)[2];
(*op)[3] = (*ip)[3];
*ip += dec32table[offset];
ZSTD_copy4(*op+4, *ip);
*ip -= sub2;
} else {
ZSTD_copy8(*op, *ip);
}
*ip += 8;
*op += 8;
assert(*op - *ip >= 8);
}
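/* Worked example (illustrative) : with offset==2 and the 2-byte pattern "ab"
 * just before op, the four byte-copies write "abab", ip then advances by
 * dec32table[2]==2 so ZSTD_copy4 duplicates it to "abababab", and the final
 * ip adjustments (-dec64table[2], then +8) leave *op - *ip == 8, so subsequent
 * wildcopies can proceed without overlap. */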
/*! ZSTD_safecopy() :
* Specialized version of memcpy() that is allowed to READ up to WILDCOPY_OVERLENGTH past the input buffer
* and write up to 16 bytes past oend_w (op >= oend_w is allowed).
* This function is only called in the uncommon case where the sequence is near the end of the block. It
* should be fast for a single long sequence, but can be slow for several short sequences.
*
* @param ovtype controls the overlap detection
* - ZSTD_no_overlap: The source and destination are guaranteed to be at least WILDCOPY_VECLEN bytes apart.
* - ZSTD_overlap_src_before_dst: The src and dst may overlap and may be any distance apart.
* The src buffer must be before the dst buffer.
*/
static void ZSTD_safecopy(BYTE* op, const BYTE* const oend_w, BYTE const* ip, ptrdiff_t length, ZSTD_overlap_e ovtype) {
ptrdiff_t const diff = op - ip;
BYTE* const oend = op + length;
assert((ovtype == ZSTD_no_overlap && (diff <= -8 || diff >= 8 || op >= oend_w)) ||
(ovtype == ZSTD_overlap_src_before_dst && diff >= 0));
if (length < 8) {
/* Handle short lengths. */
while (op < oend) *op++ = *ip++;
return;
}
if (ovtype == ZSTD_overlap_src_before_dst) {
/* Copy 8 bytes and ensure the offset >= 8 when there can be overlap. */
assert(length >= 8);
ZSTD_overlapCopy8(&op, &ip, diff);
length -= 8;
assert(op - ip >= 8);
assert(op <= oend);
}
if (oend <= oend_w) {
/* No risk of overwrite. */
ZSTD_wildcopy(op, ip, length, ovtype);
return;
}
if (op <= oend_w) {
/* Wildcopy until we get close to the end. */
assert(oend > oend_w);
ZSTD_wildcopy(op, ip, oend_w - op, ovtype);
ip += oend_w - op;
op += oend_w - op;
}
/* Handle the leftovers. */
while (op < oend) *op++ = *ip++;
}
/* ZSTD_safecopyDstBeforeSrc():
* This version allows overlap with dst before src, and also handles the non-overlapping case with dst after src.
* It is kept separate from the more common ZSTD_safecopy case to avoid impacting its performance */
static void ZSTD_safecopyDstBeforeSrc(BYTE* op, const BYTE* ip, ptrdiff_t length) {
ptrdiff_t const diff = op - ip;
BYTE* const oend = op + length;
if (length < 8 || diff > -8) {
/* Handle short lengths, close overlaps, and dst not before src. */
while (op < oend) *op++ = *ip++;
return;
}
if (op <= oend - WILDCOPY_OVERLENGTH && diff < -WILDCOPY_VECLEN) {
ZSTD_wildcopy(op, ip, oend - WILDCOPY_OVERLENGTH - op, ZSTD_no_overlap);
ip += oend - WILDCOPY_OVERLENGTH - op;
op += oend - WILDCOPY_OVERLENGTH - op;
}
/* Handle the leftovers. */
while (op < oend) *op++ = *ip++;
}
/* ZSTD_execSequenceEnd():
* This version handles cases that are near the end of the output buffer. It requires
* more careful checks to make sure there is no overflow. By separating out these hard
* and unlikely cases, we can speed up the common cases.
*
* NOTE: This function needs to be fast for a single long sequence, but doesn't need
* to be optimized for many small sequences, since those fall into ZSTD_execSequence().
*/
FORCE_NOINLINE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_execSequenceEnd(BYTE* op,
BYTE* const oend, seq_t sequence,
const BYTE** litPtr, const BYTE* const litLimit,
const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
{
BYTE* const oLitEnd = op + sequence.litLength;
size_t const sequenceLength = sequence.litLength + sequence.matchLength;
const BYTE* const iLitEnd = *litPtr + sequence.litLength;
const BYTE* match = oLitEnd - sequence.offset;
BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
/* bounds checks : careful of address space overflow in 32-bit mode */
RETURN_ERROR_IF(sequenceLength > (size_t)(oend - op), dstSize_tooSmall, "last match must fit within dstBuffer");
RETURN_ERROR_IF(sequence.litLength > (size_t)(litLimit - *litPtr), corruption_detected, "try to read beyond literal buffer");
assert(op < op + sequenceLength);
assert(oLitEnd < op + sequenceLength);
/* copy literals */
ZSTD_safecopy(op, oend_w, *litPtr, sequence.litLength, ZSTD_no_overlap);
op = oLitEnd;
*litPtr = iLitEnd;
/* copy Match */
if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
/* offset beyond prefix */
RETURN_ERROR_IF(sequence.offset > (size_t)(oLitEnd - virtualStart), corruption_detected, "");
match = dictEnd - (prefixStart - match);
if (match + sequence.matchLength <= dictEnd) {
ZSTD_memmove(oLitEnd, match, sequence.matchLength);
return sequenceLength;
}
/* span extDict & currentPrefixSegment */
{ size_t const length1 = dictEnd - match;
ZSTD_memmove(oLitEnd, match, length1);
op = oLitEnd + length1;
sequence.matchLength -= length1;
match = prefixStart;
}
}
ZSTD_safecopy(op, oend_w, match, sequence.matchLength, ZSTD_overlap_src_before_dst);
return sequenceLength;
}
/* ZSTD_execSequenceEndSplitLitBuffer():
* This version is intended for cases where the litBuffer is still split. It is kept separate to avoid a performance impact on the common case.
*/
FORCE_NOINLINE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_execSequenceEndSplitLitBuffer(BYTE* op,
BYTE* const oend, const BYTE* const oend_w, seq_t sequence,
const BYTE** litPtr, const BYTE* const litLimit,
const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
{
BYTE* const oLitEnd = op + sequence.litLength;
size_t const sequenceLength = sequence.litLength + sequence.matchLength;
const BYTE* const iLitEnd = *litPtr + sequence.litLength;
const BYTE* match = oLitEnd - sequence.offset;
/* bounds checks : careful of address space overflow in 32-bit mode */
RETURN_ERROR_IF(sequenceLength > (size_t)(oend - op), dstSize_tooSmall, "last match must fit within dstBuffer");
RETURN_ERROR_IF(sequence.litLength > (size_t)(litLimit - *litPtr), corruption_detected, "try to read beyond literal buffer");
assert(op < op + sequenceLength);
assert(oLitEnd < op + sequenceLength);
/* copy literals */
RETURN_ERROR_IF(op > *litPtr && op < *litPtr + sequence.litLength, dstSize_tooSmall, "output should not catch up to and overwrite literal buffer");
ZSTD_safecopyDstBeforeSrc(op, *litPtr, sequence.litLength);
op = oLitEnd;
*litPtr = iLitEnd;
/* copy Match */
if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
/* offset beyond prefix */
RETURN_ERROR_IF(sequence.offset > (size_t)(oLitEnd - virtualStart), corruption_detected, "");
match = dictEnd - (prefixStart - match);
if (match + sequence.matchLength <= dictEnd) {
ZSTD_memmove(oLitEnd, match, sequence.matchLength);
return sequenceLength;
}
/* span extDict & currentPrefixSegment */
{ size_t const length1 = dictEnd - match;
ZSTD_memmove(oLitEnd, match, length1);
op = oLitEnd + length1;
sequence.matchLength -= length1;
match = prefixStart;
}
}
ZSTD_safecopy(op, oend_w, match, sequence.matchLength, ZSTD_overlap_src_before_dst);
return sequenceLength;
}
HINT_INLINE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_execSequence(BYTE* op,
BYTE* const oend, seq_t sequence,
const BYTE** litPtr, const BYTE* const litLimit,
const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
{
BYTE* const oLitEnd = op + sequence.litLength;
size_t const sequenceLength = sequence.litLength + sequence.matchLength;
BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */
BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH; /* risk : address space underflow on oend=NULL */
const BYTE* const iLitEnd = *litPtr + sequence.litLength;
const BYTE* match = oLitEnd - sequence.offset;
assert(op != NULL /* Precondition */);
assert(oend_w < oend /* No underflow */);
#if defined(__aarch64__)
/* prefetch sequence starting from match that will be used for copy later */
PREFETCH_L1(match);
#endif
/* Handle edge cases in a slow path:
* - Read beyond end of literals
* - Match end is within WILDCOPY_OVERLENGTH of oend
* - 32-bit mode and the match length overflows
*/
if (UNLIKELY(
iLitEnd > litLimit ||
oMatchEnd > oend_w ||
(MEM_32bits() && (size_t)(oend - op) < sequenceLength + WILDCOPY_OVERLENGTH)))
return ZSTD_execSequenceEnd(op, oend, sequence, litPtr, litLimit, prefixStart, virtualStart, dictEnd);
/* Assumptions (everything else goes into ZSTD_execSequenceEnd()) */
assert(op <= oLitEnd /* No overflow */);
assert(oLitEnd < oMatchEnd /* Non-zero match & no overflow */);
assert(oMatchEnd <= oend /* No underflow */);
assert(iLitEnd <= litLimit /* Literal length is in bounds */);
assert(oLitEnd <= oend_w /* Can wildcopy literals */);
assert(oMatchEnd <= oend_w /* Can wildcopy matches */);
/* Copy Literals:
* Split out litLength <= 16 since it is nearly always true. +1.6% on gcc-9.
* We likely don't need the full 32-byte wildcopy.
*/
assert(WILDCOPY_OVERLENGTH >= 16);
ZSTD_copy16(op, (*litPtr));
if (UNLIKELY(sequence.litLength > 16)) {
ZSTD_wildcopy(op + 16, (*litPtr) + 16, sequence.litLength - 16, ZSTD_no_overlap);
}
op = oLitEnd;
*litPtr = iLitEnd; /* update for next sequence */
/* Copy Match */
if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
/* offset beyond prefix -> go into extDict */
RETURN_ERROR_IF(UNLIKELY(sequence.offset > (size_t)(oLitEnd - virtualStart)), corruption_detected, "");
match = dictEnd + (match - prefixStart);
if (match + sequence.matchLength <= dictEnd) {
ZSTD_memmove(oLitEnd, match, sequence.matchLength);
return sequenceLength;
}
/* span extDict & currentPrefixSegment */
{ size_t const length1 = dictEnd - match;
ZSTD_memmove(oLitEnd, match, length1);
op = oLitEnd + length1;
sequence.matchLength -= length1;
match = prefixStart;
}
}
/* Match within prefix of 1 or more bytes */
assert(op <= oMatchEnd);
assert(oMatchEnd <= oend_w);
assert(match >= prefixStart);
assert(sequence.matchLength >= 1);
/* Nearly all offsets are >= WILDCOPY_VECLEN bytes, which means we can use wildcopy
* without overlap checking.
*/
if (LIKELY(sequence.offset >= WILDCOPY_VECLEN)) {
/* We bet on a full wildcopy for matches, since we expect matches to be
* longer than literals (in general). In silesia, ~10% of matches are longer
* than 16 bytes.
*/
ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength, ZSTD_no_overlap);
return sequenceLength;
}
assert(sequence.offset < WILDCOPY_VECLEN);
/* Copy 8 bytes and spread the offset to be >= 8. */
ZSTD_overlapCopy8(&op, &match, sequence.offset);
/* If the match length is > 8 bytes, then continue with the wildcopy. */
if (sequence.matchLength > 8) {
assert(op < oMatchEnd);
ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength - 8, ZSTD_overlap_src_before_dst);
}
return sequenceLength;
}
HINT_INLINE
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
size_t ZSTD_execSequenceSplitLitBuffer(BYTE* op,
BYTE* const oend, const BYTE* const oend_w, seq_t sequence,
const BYTE** litPtr, const BYTE* const litLimit,
const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
{
BYTE* const oLitEnd = op + sequence.litLength;
size_t const sequenceLength = sequence.litLength + sequence.matchLength;
BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */
const BYTE* const iLitEnd = *litPtr + sequence.litLength;
const BYTE* match = oLitEnd - sequence.offset;
assert(op != NULL /* Precondition */);
assert(oend_w < oend /* No underflow */);
/* Handle edge cases in a slow path:
* - Read beyond end of literals
* - Match end is within WILDCOPY_OVERLENGTH of oend
* - 32-bit mode and the match length overflows
*/
if (UNLIKELY(
iLitEnd > litLimit ||
oMatchEnd > oend_w ||
(MEM_32bits() && (size_t)(oend - op) < sequenceLength + WILDCOPY_OVERLENGTH)))
return ZSTD_execSequenceEndSplitLitBuffer(op, oend, oend_w, sequence, litPtr, litLimit, prefixStart, virtualStart, dictEnd);
/* Assumptions (everything else goes into ZSTD_execSequenceEndSplitLitBuffer()) */
assert(op <= oLitEnd /* No overflow */);
assert(oLitEnd < oMatchEnd /* Non-zero match & no overflow */);
assert(oMatchEnd <= oend /* No underflow */);
assert(iLitEnd <= litLimit /* Literal length is in bounds */);
assert(oLitEnd <= oend_w /* Can wildcopy literals */);
assert(oMatchEnd <= oend_w /* Can wildcopy matches */);
/* Copy Literals:
* Split out litLength <= 16 since it is nearly always true. +1.6% on gcc-9.
* We likely don't need the full 32-byte wildcopy.
*/
assert(WILDCOPY_OVERLENGTH >= 16);
ZSTD_copy16(op, (*litPtr));
if (UNLIKELY(sequence.litLength > 16)) {
ZSTD_wildcopy(op+16, (*litPtr)+16, sequence.litLength-16, ZSTD_no_overlap);
}
op = oLitEnd;
*litPtr = iLitEnd; /* update for next sequence */
/* Copy Match */
if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
/* offset beyond prefix -> go into extDict */
RETURN_ERROR_IF(UNLIKELY(sequence.offset > (size_t)(oLitEnd - virtualStart)), corruption_detected, "");
match = dictEnd + (match - prefixStart);
if (match + sequence.matchLength <= dictEnd) {
ZSTD_memmove(oLitEnd, match, sequence.matchLength);
return sequenceLength;
}
/* span extDict & currentPrefixSegment */
{ size_t const length1 = dictEnd - match;
ZSTD_memmove(oLitEnd, match, length1);
op = oLitEnd + length1;
sequence.matchLength -= length1;
match = prefixStart;
} }
/* Match within prefix of 1 or more bytes */
assert(op <= oMatchEnd);
assert(oMatchEnd <= oend_w);
assert(match >= prefixStart);
assert(sequence.matchLength >= 1);
/* Nearly all offsets are >= WILDCOPY_VECLEN bytes, which means we can use wildcopy
* without overlap checking.
*/
if (LIKELY(sequence.offset >= WILDCOPY_VECLEN)) {
/* We bet on a full wildcopy for matches, since we expect matches to be
* longer than literals (in general). In silesia, ~10% of matches are longer
* than 16 bytes.
*/
ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength, ZSTD_no_overlap);
return sequenceLength;
}
assert(sequence.offset < WILDCOPY_VECLEN);
/* Copy 8 bytes and spread the offset to be >= 8. */
ZSTD_overlapCopy8(&op, &match, sequence.offset);
/* If the match length is > 8 bytes, then continue with the wildcopy. */
if (sequence.matchLength > 8) {
assert(op < oMatchEnd);
ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8, ZSTD_overlap_src_before_dst);
}
return sequenceLength;
}
static void
ZSTD_initFseState(ZSTD_fseState* DStatePtr, BIT_DStream_t* bitD, const ZSTD_seqSymbol* dt)
{
const void* ptr = dt;
const ZSTD_seqSymbol_header* const DTableH = (const ZSTD_seqSymbol_header*)ptr;
DStatePtr->state = BIT_readBits(bitD, DTableH->tableLog);
DEBUGLOG(6, "ZSTD_initFseState : val=%u using %u bits",
(U32)DStatePtr->state, DTableH->tableLog);
BIT_reloadDStream(bitD);
DStatePtr->table = dt + 1;
}
FORCE_INLINE_TEMPLATE void
ZSTD_updateFseStateWithDInfo(ZSTD_fseState* DStatePtr, BIT_DStream_t* bitD, U16 nextState, U32 nbBits)
{
size_t const lowBits = BIT_readBits(bitD, nbBits);
DStatePtr->state = nextState + lowBits;
}
/* We need to add at most (ZSTD_WINDOWLOG_MAX_32 - 1) bits to read the maximum
* offset bits. But we can only read at most STREAM_ACCUMULATOR_MIN_32
* bits before reloading. This value is the maximum number of bits we read
* after reloading when we are decoding long offsets.
*/
#define LONG_OFFSETS_MAX_EXTRA_BITS_32 \
(ZSTD_WINDOWLOG_MAX_32 > STREAM_ACCUMULATOR_MIN_32 \
? ZSTD_WINDOWLOG_MAX_32 - STREAM_ACCUMULATOR_MIN_32 \
: 0)
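/* With the current constants (ZSTD_WINDOWLOG_MAX_32 == 30,
 * STREAM_ACCUMULATOR_MIN_32 == 25), this evaluates to 5, which the
 * static asserts in ZSTD_decodeSequence() rely upon. */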
typedef enum { ZSTD_lo_isRegularOffset, ZSTD_lo_isLongOffset=1 } ZSTD_longOffset_e;
/**
* ZSTD_decodeSequence():
* @p longOffsets : tells the decoder to reload more bits while decoding large offsets;
* only used in 32-bit mode
* @return : Sequence (litL + matchL + offset)
*/
FORCE_INLINE_TEMPLATE seq_t
ZSTD_decodeSequence(seqState_t* seqState, const ZSTD_longOffset_e longOffsets, const int isLastSeq)
{
seq_t seq;
/*
* ZSTD_seqSymbol is a 64-bit wide structure.
* It can be loaded in one operation
* and its fields extracted by simply shifting or bit-extracting on aarch64.
* GCC doesn't recognize this and generates unnecessary ldr/ldrb/ldrh
* operations that cause a performance drop. This can be avoided by using this
* ZSTD_memcpy hack.
*/
#if defined(__aarch64__) && (defined(__GNUC__) && !defined(__clang__))
ZSTD_seqSymbol llDInfoS, mlDInfoS, ofDInfoS;
ZSTD_seqSymbol* const llDInfo = &llDInfoS;
ZSTD_seqSymbol* const mlDInfo = &mlDInfoS;
ZSTD_seqSymbol* const ofDInfo = &ofDInfoS;
ZSTD_memcpy(llDInfo, seqState->stateLL.table + seqState->stateLL.state, sizeof(ZSTD_seqSymbol));
ZSTD_memcpy(mlDInfo, seqState->stateML.table + seqState->stateML.state, sizeof(ZSTD_seqSymbol));
ZSTD_memcpy(ofDInfo, seqState->stateOffb.table + seqState->stateOffb.state, sizeof(ZSTD_seqSymbol));
#else
const ZSTD_seqSymbol* const llDInfo = seqState->stateLL.table + seqState->stateLL.state;
const ZSTD_seqSymbol* const mlDInfo = seqState->stateML.table + seqState->stateML.state;
const ZSTD_seqSymbol* const ofDInfo = seqState->stateOffb.table + seqState->stateOffb.state;
#endif
seq.matchLength = mlDInfo->baseValue;
seq.litLength = llDInfo->baseValue;
{ U32 const ofBase = ofDInfo->baseValue;
BYTE const llBits = llDInfo->nbAdditionalBits;
BYTE const mlBits = mlDInfo->nbAdditionalBits;
BYTE const ofBits = ofDInfo->nbAdditionalBits;
BYTE const totalBits = llBits+mlBits+ofBits;
U16 const llNext = llDInfo->nextState;
U16 const mlNext = mlDInfo->nextState;
U16 const ofNext = ofDInfo->nextState;
U32 const llnbBits = llDInfo->nbBits;
U32 const mlnbBits = mlDInfo->nbBits;
U32 const ofnbBits = ofDInfo->nbBits;
assert(llBits <= MaxLLBits);
assert(mlBits <= MaxMLBits);
assert(ofBits <= MaxOff);
/*
* Since gcc has better branch and block analyzers, likeliness hints are
* sometimes only valuable for clang, where they give around 3-4% of
* performance.
*/
/* sequence */
{ size_t offset;
if (ofBits > 1) {
ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
ZSTD_STATIC_ASSERT(STREAM_ACCUMULATOR_MIN_32 > LONG_OFFSETS_MAX_EXTRA_BITS_32);
ZSTD_STATIC_ASSERT(STREAM_ACCUMULATOR_MIN_32 - LONG_OFFSETS_MAX_EXTRA_BITS_32 >= MaxMLBits);
if (MEM_32bits() && longOffsets && (ofBits >= STREAM_ACCUMULATOR_MIN_32)) {
/* Always read extra bits, this keeps the logic simple,
* avoids branches, and avoids accidentally reading 0 bits.
*/
U32 const extraBits = LONG_OFFSETS_MAX_EXTRA_BITS_32;
offset = ofBase + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
BIT_reloadDStream(&seqState->DStream);
offset += BIT_readBitsFast(&seqState->DStream, extraBits);
} else {
offset = ofBase + BIT_readBitsFast(&seqState->DStream, ofBits/*>0*/); /* <= (ZSTD_WINDOWLOG_MAX-1) bits */
if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
}
seqState->prevOffset[2] = seqState->prevOffset[1];
seqState->prevOffset[1] = seqState->prevOffset[0];
seqState->prevOffset[0] = offset;
} else {
U32 const ll0 = (llDInfo->baseValue == 0);
if (LIKELY((ofBits == 0))) {
offset = seqState->prevOffset[ll0];
seqState->prevOffset[1] = seqState->prevOffset[!ll0];
seqState->prevOffset[0] = offset;
} else {
offset = ofBase + ll0 + BIT_readBitsFast(&seqState->DStream, 1);
{ size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
temp -= !temp; /* 0 is not valid: input corrupted => force offset to -1 => corruption detected at execSequence */
if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
seqState->prevOffset[1] = seqState->prevOffset[0];
seqState->prevOffset[0] = offset = temp;
} } }
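/* Note (illustrative summary) : the ofBits <= 1 branch above implements the
 * repeat-offset rules of the format : Offset_Value 1/2/3 select repeated
 * offsets 1/2/3 when litLength > 0, while litLength == 0 shifts the mapping
 * to rep2 / rep3 / (rep1 - 1); in all cases the selected offset ends up
 * in prevOffset[0]. */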
seq.offset = offset;
}
if (mlBits > 0)
seq.matchLength += BIT_readBitsFast(&seqState->DStream, mlBits/*>0*/);
if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32))
BIT_reloadDStream(&seqState->DStream);
if (MEM_64bits() && UNLIKELY(totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog)))
BIT_reloadDStream(&seqState->DStream);
/* Ensure there are enough bits to read the rest of data in 64-bit mode. */
ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
if (llBits > 0)
seq.litLength += BIT_readBitsFast(&seqState->DStream, llBits/*>0*/);
if (MEM_32bits())
BIT_reloadDStream(&seqState->DStream);
DEBUGLOG(6, "seq: litL=%u, matchL=%u, offset=%u",
(U32)seq.litLength, (U32)seq.matchLength, (U32)seq.offset);
if (!isLastSeq) {
/* don't update FSE state for last Sequence */
ZSTD_updateFseStateWithDInfo(&seqState->stateLL, &seqState->DStream, llNext, llnbBits); /* <= 9 bits */
ZSTD_updateFseStateWithDInfo(&seqState->stateML, &seqState->DStream, mlNext, mlnbBits); /* <= 9 bits */
if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream); /* <= 18 bits */
ZSTD_updateFseStateWithDInfo(&seqState->stateOffb, &seqState->DStream, ofNext, ofnbBits); /* <= 8 bits */
BIT_reloadDStream(&seqState->DStream);
}
}
return seq;
}
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
#if DEBUGLEVEL >= 1
static int ZSTD_dictionaryIsActive(ZSTD_DCtx const* dctx, BYTE const* prefixStart, BYTE const* oLitEnd)
{
size_t const windowSize = dctx->fParams.windowSize;
/* No dictionary used. */
if (dctx->dictContentEndForFuzzing == NULL) return 0;
/* Dictionary is our prefix. */
if (prefixStart == dctx->dictContentBeginForFuzzing) return 1;
/* Dictionary is not our ext-dict. */
if (dctx->dictEnd != dctx->dictContentEndForFuzzing) return 0;
/* Dictionary is not within our window size. */
if ((size_t)(oLitEnd - prefixStart) >= windowSize) return 0;
/* Dictionary is active. */
return 1;
}
#endif
static void ZSTD_assertValidSequence(
ZSTD_DCtx const* dctx,
BYTE const* op, BYTE const* oend,
seq_t const seq,
BYTE const* prefixStart, BYTE const* virtualStart)
{
#if DEBUGLEVEL >= 1
if (dctx->isFrameDecompression) {
size_t const windowSize = dctx->fParams.windowSize;
size_t const sequenceSize = seq.litLength + seq.matchLength;
BYTE const* const oLitEnd = op + seq.litLength;
DEBUGLOG(6, "Checking sequence: litL=%u matchL=%u offset=%u",
(U32)seq.litLength, (U32)seq.matchLength, (U32)seq.offset);
assert(op <= oend);
assert((size_t)(oend - op) >= sequenceSize);
assert(sequenceSize <= ZSTD_blockSizeMax(dctx));
if (ZSTD_dictionaryIsActive(dctx, prefixStart, oLitEnd)) {
size_t const dictSize = (size_t)((char const*)dctx->dictContentEndForFuzzing - (char const*)dctx->dictContentBeginForFuzzing);
/* Offset must be within the dictionary. */
assert(seq.offset <= (size_t)(oLitEnd - virtualStart));
assert(seq.offset <= windowSize + dictSize);
} else {
/* Offset must be within our window. */
assert(seq.offset <= windowSize);
}
}
#else
(void)dctx, (void)op, (void)oend, (void)seq, (void)prefixStart, (void)virtualStart;
#endif
}
#endif
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
FORCE_INLINE_TEMPLATE size_t
DONT_VECTORIZE
ZSTD_decompressSequences_bodySplitLitBuffer( ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
const BYTE* ip = (const BYTE*)seqStart;
const BYTE* const iend = ip + seqSize;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ZSTD_maybeNullPtrAdd(ostart, maxDstSize);
BYTE* op = ostart;
const BYTE* litPtr = dctx->litPtr;
const BYTE* litBufferEnd = dctx->litBufferEnd;
const BYTE* const prefixStart = (const BYTE*) (dctx->prefixStart);
const BYTE* const vBase = (const BYTE*) (dctx->virtualStart);
const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
DEBUGLOG(5, "ZSTD_decompressSequences_bodySplitLitBuffer (%i seqs)", nbSeq);
/* Literals are split between internal buffer & output buffer */
if (nbSeq) {
seqState_t seqState;
dctx->fseEntropy = 1;
{ U32 i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
RETURN_ERROR_IF(
ERR_isError(BIT_initDStream(&seqState.DStream, ip, iend-ip)),
corruption_detected, "");
ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
assert(dst != NULL);
ZSTD_STATIC_ASSERT(
BIT_DStream_unfinished < BIT_DStream_completed &&
BIT_DStream_endOfBuffer < BIT_DStream_completed &&
BIT_DStream_completed < BIT_DStream_overflow);
/* decompress without overrunning litPtr begins */
{ seq_t sequence = {0,0,0}; /* some static analyzers believe that @sequence is not initialized (it necessarily is, since the for(;;) loop has at least one iteration) */
/* Align the decompression loop to 32 + 16 bytes.
*
* zstd compiled with gcc-9 on an Intel i9-9900k shows 10% decompression
* speed swings based on the alignment of the decompression loop. This
* performance swing is caused by parts of the decompression loop falling
* out of the DSB. The entire decompression loop should fit in the DSB,
* when it can't we get much worse performance. You can measure if you've
* hit the good case or the bad case with this perf command for some
* compressed file test.zst:
*
* perf stat -e cycles -e instructions -e idq.all_dsb_cycles_any_uops \
* -e idq.all_mite_cycles_any_uops -- ./zstd -tq test.zst
*
* If you see most cycles served out of the MITE you've hit the bad case.
* If you see most cycles served out of the DSB you've hit the good case.
* If it is pretty even then you may be in an okay case.
*
* This issue has been reproduced on the following CPUs:
* - Kabylake: Macbook Pro (15-inch, 2019) 2.4 GHz Intel Core i9
* Use Instruments->Counters to get DSB/MITE cycles.
* I never got performance swings, but I was able to
* go from the good case of mostly DSB to half of the
* cycles served from MITE.
* - Coffeelake: Intel i9-9900k
* - Coffeelake: Intel i7-9700k
*
* I haven't been able to reproduce the instability or DSB misses on any
* of the following CPUS:
* - Haswell
* - Broadwell: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
* - Skylake
*
* Alignment is done for each of the three major decompression loops:
* - ZSTD_decompressSequences_bodySplitLitBuffer - presplit section of the literal buffer
* - ZSTD_decompressSequences_bodySplitLitBuffer - postsplit section of the literal buffer
* - ZSTD_decompressSequences_body
* Alignment choices are made to minimize large swings on bad cases and influence on performance
* from changes external to this code, rather than to overoptimize on the current commit.
*
* If you are seeing performance instability, this script can help test.
* It tests on 4 commits in zstd where I saw performance change.
*
* https://gist.github.com/terrelln/9889fc06a423fd5ca6e99351564473f4
*/
#if defined(__GNUC__) && defined(__x86_64__)
__asm__(".p2align 6");
# if __GNUC__ >= 7
/* good for gcc-7, gcc-9, and gcc-11 */
__asm__("nop");
__asm__(".p2align 5");
__asm__("nop");
__asm__(".p2align 4");
# if __GNUC__ == 8 || __GNUC__ == 10
/* good for gcc-8 and gcc-10 */
__asm__("nop");
__asm__(".p2align 3");
# endif
# endif
#endif
/* Handle the initial state where litBuffer is currently split between dst and litExtraBuffer */
for ( ; nbSeq; nbSeq--) {
sequence = ZSTD_decodeSequence(&seqState, isLongOffset, nbSeq==1);
if (litPtr + sequence.litLength > dctx->litBufferEnd) break;
{ size_t const oneSeqSize = ZSTD_execSequenceSplitLitBuffer(op, oend, litPtr + sequence.litLength - WILDCOPY_OVERLENGTH, sequence, &litPtr, litBufferEnd, prefixStart, vBase, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequence, prefixStart, vBase);
#endif
if (UNLIKELY(ZSTD_isError(oneSeqSize)))
return oneSeqSize;
DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
op += oneSeqSize;
} }
DEBUGLOG(6, "reached: (litPtr + sequence.litLength > dctx->litBufferEnd)");
/* If there are more sequences, they will need to read literals from litExtraBuffer; copy over the remainder from dst and update litPtr and litEnd */
if (nbSeq > 0) {
const size_t leftoverLit = dctx->litBufferEnd - litPtr;
DEBUGLOG(6, "There are %i sequences left, and %zu/%zu literals left in buffer", nbSeq, leftoverLit, sequence.litLength);
if (leftoverLit) {
RETURN_ERROR_IF(leftoverLit > (size_t)(oend - op), dstSize_tooSmall, "remaining lit must fit within dstBuffer");
ZSTD_safecopyDstBeforeSrc(op, litPtr, leftoverLit);
sequence.litLength -= leftoverLit;
op += leftoverLit;
}
litPtr = dctx->litExtraBuffer;
litBufferEnd = dctx->litExtraBuffer + ZSTD_LITBUFFEREXTRASIZE;
dctx->litBufferLocation = ZSTD_not_in_dst;
{ size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litBufferEnd, prefixStart, vBase, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequence, prefixStart, vBase);
#endif
if (UNLIKELY(ZSTD_isError(oneSeqSize)))
return oneSeqSize;
DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
op += oneSeqSize;
}
nbSeq--;
}
}
if (nbSeq > 0) {
/* there is remaining lit from extra buffer */
#if defined(__GNUC__) && defined(__x86_64__)
__asm__(".p2align 6");
__asm__("nop");
# if __GNUC__ != 7
/* worse for gcc-7, better for gcc-8, gcc-9, gcc-10, and clang */
__asm__(".p2align 4");
__asm__("nop");
__asm__(".p2align 3");
# elif __GNUC__ >= 11
__asm__(".p2align 3");
# else
__asm__(".p2align 5");
__asm__("nop");
__asm__(".p2align 3");
# endif
#endif
for ( ; nbSeq ; nbSeq--) {
seq_t const sequence = ZSTD_decodeSequence(&seqState, isLongOffset, nbSeq==1);
size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litBufferEnd, prefixStart, vBase, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequence, prefixStart, vBase);
#endif
if (UNLIKELY(ZSTD_isError(oneSeqSize)))
return oneSeqSize;
DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
op += oneSeqSize;
}
}
/* check if reached exact end */
DEBUGLOG(5, "ZSTD_decompressSequences_bodySplitLitBuffer: after decode loop, remaining nbSeq : %i", nbSeq);
RETURN_ERROR_IF(nbSeq, corruption_detected, "");
DEBUGLOG(5, "bitStream : start=%p, ptr=%p, bitsConsumed=%u", seqState.DStream.start, seqState.DStream.ptr, seqState.DStream.bitsConsumed);
RETURN_ERROR_IF(!BIT_endOfDStream(&seqState.DStream), corruption_detected, "");
/* save reps for next block */
{ U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
}
/* last literal segment */
if (dctx->litBufferLocation == ZSTD_split) {
/* split hasn't been reached yet, first get dst then copy litExtraBuffer */
size_t const lastLLSize = (size_t)(litBufferEnd - litPtr);
DEBUGLOG(6, "copy last literals from segment : %u", (U32)lastLLSize);
RETURN_ERROR_IF(lastLLSize > (size_t)(oend - op), dstSize_tooSmall, "");
if (op != NULL) {
ZSTD_memmove(op, litPtr, lastLLSize);
op += lastLLSize;
}
litPtr = dctx->litExtraBuffer;
litBufferEnd = dctx->litExtraBuffer + ZSTD_LITBUFFEREXTRASIZE;
dctx->litBufferLocation = ZSTD_not_in_dst;
}
/* copy last literals from internal buffer */
{ size_t const lastLLSize = (size_t)(litBufferEnd - litPtr);
DEBUGLOG(6, "copy last literals from internal buffer : %u", (U32)lastLLSize);
RETURN_ERROR_IF(lastLLSize > (size_t)(oend-op), dstSize_tooSmall, "");
if (op != NULL) {
ZSTD_memcpy(op, litPtr, lastLLSize);
op += lastLLSize;
} }
DEBUGLOG(6, "decoded block of size %u bytes", (U32)(op - ostart));
return (size_t)(op - ostart);
}
FORCE_INLINE_TEMPLATE size_t
DONT_VECTORIZE
ZSTD_decompressSequences_body(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
const BYTE* ip = (const BYTE*)seqStart;
const BYTE* const iend = ip + seqSize;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = dctx->litBufferLocation == ZSTD_not_in_dst ? ZSTD_maybeNullPtrAdd(ostart, maxDstSize) : dctx->litBuffer;
BYTE* op = ostart;
const BYTE* litPtr = dctx->litPtr;
const BYTE* const litEnd = litPtr + dctx->litSize;
const BYTE* const prefixStart = (const BYTE*)(dctx->prefixStart);
const BYTE* const vBase = (const BYTE*)(dctx->virtualStart);
const BYTE* const dictEnd = (const BYTE*)(dctx->dictEnd);
DEBUGLOG(5, "ZSTD_decompressSequences_body: nbSeq = %d", nbSeq);
/* Regen sequences */
if (nbSeq) {
seqState_t seqState;
dctx->fseEntropy = 1;
{ U32 i; for (i = 0; i < ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
RETURN_ERROR_IF(
ERR_isError(BIT_initDStream(&seqState.DStream, ip, iend - ip)),
corruption_detected, "");
ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
assert(dst != NULL);
#if defined(__GNUC__) && defined(__x86_64__)
__asm__(".p2align 6");
__asm__("nop");
# if __GNUC__ >= 7
__asm__(".p2align 5");
__asm__("nop");
__asm__(".p2align 3");
# else
__asm__(".p2align 4");
__asm__("nop");
__asm__(".p2align 3");
# endif
#endif
for ( ; nbSeq ; nbSeq--) {
seq_t const sequence = ZSTD_decodeSequence(&seqState, isLongOffset, nbSeq==1);
size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litEnd, prefixStart, vBase, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequence, prefixStart, vBase);
#endif
if (UNLIKELY(ZSTD_isError(oneSeqSize)))
return oneSeqSize;
DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
op += oneSeqSize;
}
/* check if reached exact end */
assert(nbSeq == 0);
RETURN_ERROR_IF(!BIT_endOfDStream(&seqState.DStream), corruption_detected, "");
/* save reps for next block */
{ U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
}
/* last literal segment */
{ size_t const lastLLSize = (size_t)(litEnd - litPtr);
DEBUGLOG(6, "copy last literals : %u", (U32)lastLLSize);
RETURN_ERROR_IF(lastLLSize > (size_t)(oend-op), dstSize_tooSmall, "");
if (op != NULL) {
ZSTD_memcpy(op, litPtr, lastLLSize);
op += lastLLSize;
} }
DEBUGLOG(6, "decoded block of size %u bytes", (U32)(op - ostart));
return (size_t)(op - ostart);
}
static size_t
ZSTD_decompressSequences_default(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
return ZSTD_decompressSequences_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
static size_t
ZSTD_decompressSequencesSplitLitBuffer_default(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
return ZSTD_decompressSequences_bodySplitLitBuffer(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG */
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
FORCE_INLINE_TEMPLATE
size_t ZSTD_prefetchMatch(size_t prefetchPos, seq_t const sequence,
const BYTE* const prefixStart, const BYTE* const dictEnd)
{
prefetchPos += sequence.litLength;
{ const BYTE* const matchBase = (sequence.offset > prefetchPos) ? dictEnd : prefixStart;
/* note : this operation can overflow when seq.offset is really too large, which can only happen when input is corrupted.
* No consequence though : memory address is only used for prefetching, not for dereferencing */
const BYTE* const match = ZSTD_wrappedPtrSub(ZSTD_wrappedPtrAdd(matchBase, prefetchPos), sequence.offset);
PREFETCH_L1(match); PREFETCH_L1(match+CACHELINE_SIZE); /* note : it's safe to invoke PREFETCH() on any memory address, including invalid ones */
}
return prefetchPos + sequence.matchLength;
}
/* This decoding function employs prefetching
* to reduce latency impact of cache misses.
* It's generally employed when the block contains a significant portion of long-distance matches
* or when coupled with a "cold" dictionary */
FORCE_INLINE_TEMPLATE size_t
ZSTD_decompressSequencesLong_body(
ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
const BYTE* ip = (const BYTE*)seqStart;
const BYTE* const iend = ip + seqSize;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = dctx->litBufferLocation == ZSTD_in_dst ? dctx->litBuffer : ZSTD_maybeNullPtrAdd(ostart, maxDstSize);
BYTE* op = ostart;
const BYTE* litPtr = dctx->litPtr;
const BYTE* litBufferEnd = dctx->litBufferEnd;
const BYTE* const prefixStart = (const BYTE*) (dctx->prefixStart);
const BYTE* const dictStart = (const BYTE*) (dctx->virtualStart);
const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
/* Regen sequences */
if (nbSeq) {
#define STORED_SEQS 8
#define STORED_SEQS_MASK (STORED_SEQS-1)
#define ADVANCED_SEQS STORED_SEQS
seq_t sequences[STORED_SEQS];
int const seqAdvance = MIN(nbSeq, ADVANCED_SEQS);
seqState_t seqState;
int seqNb;
size_t prefetchPos = (size_t)(op-prefixStart); /* track position relative to prefixStart */
dctx->fseEntropy = 1;
{ int i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
assert(dst != NULL);
assert(iend >= ip);
RETURN_ERROR_IF(
ERR_isError(BIT_initDStream(&seqState.DStream, ip, iend-ip)),
corruption_detected, "");
ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
/* prepare in advance */
for (seqNb=0; seqNb<seqAdvance; seqNb++) {
seq_t const sequence = ZSTD_decodeSequence(&seqState, isLongOffset, seqNb == nbSeq-1);
prefetchPos = ZSTD_prefetchMatch(prefetchPos, sequence, prefixStart, dictEnd);
sequences[seqNb] = sequence;
}
/* decompress without stomping litBuffer */
for (; seqNb < nbSeq; seqNb++) {
seq_t sequence = ZSTD_decodeSequence(&seqState, isLongOffset, seqNb == nbSeq-1);
if (dctx->litBufferLocation == ZSTD_split && litPtr + sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK].litLength > dctx->litBufferEnd) {
/* lit buffer is reaching split point, empty out the first buffer and transition to litExtraBuffer */
const size_t leftoverLit = dctx->litBufferEnd - litPtr;
if (leftoverLit)
{
RETURN_ERROR_IF(leftoverLit > (size_t)(oend - op), dstSize_tooSmall, "remaining lit must fit within dstBuffer");
ZSTD_safecopyDstBeforeSrc(op, litPtr, leftoverLit);
sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK].litLength -= leftoverLit;
op += leftoverLit;
}
litPtr = dctx->litExtraBuffer;
litBufferEnd = dctx->litExtraBuffer + ZSTD_LITBUFFEREXTRASIZE;
dctx->litBufferLocation = ZSTD_not_in_dst;
{ size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK], &litPtr, litBufferEnd, prefixStart, dictStart, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK], prefixStart, dictStart);
#endif
if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
prefetchPos = ZSTD_prefetchMatch(prefetchPos, sequence, prefixStart, dictEnd);
sequences[seqNb & STORED_SEQS_MASK] = sequence;
op += oneSeqSize;
} }
else
{
/* lit buffer is either wholly contained in first or second split, or not split at all*/
size_t const oneSeqSize = dctx->litBufferLocation == ZSTD_split ?
ZSTD_execSequenceSplitLitBuffer(op, oend, litPtr + sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK].litLength - WILDCOPY_OVERLENGTH, sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK], &litPtr, litBufferEnd, prefixStart, dictStart, dictEnd) :
ZSTD_execSequence(op, oend, sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK], &litPtr, litBufferEnd, prefixStart, dictStart, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequences[(seqNb - ADVANCED_SEQS) & STORED_SEQS_MASK], prefixStart, dictStart);
#endif
if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
prefetchPos = ZSTD_prefetchMatch(prefetchPos, sequence, prefixStart, dictEnd);
sequences[seqNb & STORED_SEQS_MASK] = sequence;
op += oneSeqSize;
}
}
RETURN_ERROR_IF(!BIT_endOfDStream(&seqState.DStream), corruption_detected, "");
/* finish queue */
seqNb -= seqAdvance;
for ( ; seqNb<nbSeq ; seqNb++) {
seq_t *sequence = &(sequences[seqNb&STORED_SEQS_MASK]);
if (dctx->litBufferLocation == ZSTD_split && litPtr + sequence->litLength > dctx->litBufferEnd) {
const size_t leftoverLit = dctx->litBufferEnd - litPtr;
if (leftoverLit) {
RETURN_ERROR_IF(leftoverLit > (size_t)(oend - op), dstSize_tooSmall, "remaining lit must fit within dstBuffer");
ZSTD_safecopyDstBeforeSrc(op, litPtr, leftoverLit);
sequence->litLength -= leftoverLit;
op += leftoverLit;
}
litPtr = dctx->litExtraBuffer;
litBufferEnd = dctx->litExtraBuffer + ZSTD_LITBUFFEREXTRASIZE;
dctx->litBufferLocation = ZSTD_not_in_dst;
{ size_t const oneSeqSize = ZSTD_execSequence(op, oend, *sequence, &litPtr, litBufferEnd, prefixStart, dictStart, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequences[seqNb&STORED_SEQS_MASK], prefixStart, dictStart);
#endif
if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
op += oneSeqSize;
}
}
else
{
size_t const oneSeqSize = dctx->litBufferLocation == ZSTD_split ?
ZSTD_execSequenceSplitLitBuffer(op, oend, litPtr + sequence->litLength - WILDCOPY_OVERLENGTH, *sequence, &litPtr, litBufferEnd, prefixStart, dictStart, dictEnd) :
ZSTD_execSequence(op, oend, *sequence, &litPtr, litBufferEnd, prefixStart, dictStart, dictEnd);
#if defined(FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION) && defined(FUZZING_ASSERT_VALID_SEQUENCE)
assert(!ZSTD_isError(oneSeqSize));
ZSTD_assertValidSequence(dctx, op, oend, sequences[seqNb&STORED_SEQS_MASK], prefixStart, dictStart);
#endif
if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
op += oneSeqSize;
}
}
/* save reps for next block */
{ U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
}
/* last literal segment */
if (dctx->litBufferLocation == ZSTD_split) { /* first deplete literal buffer in dst, then copy litExtraBuffer */
size_t const lastLLSize = litBufferEnd - litPtr;
RETURN_ERROR_IF(lastLLSize > (size_t)(oend - op), dstSize_tooSmall, "");
if (op != NULL) {
ZSTD_memmove(op, litPtr, lastLLSize);
op += lastLLSize;
}
litPtr = dctx->litExtraBuffer;
litBufferEnd = dctx->litExtraBuffer + ZSTD_LITBUFFEREXTRASIZE;
}
{ size_t const lastLLSize = litBufferEnd - litPtr;
RETURN_ERROR_IF(lastLLSize > (size_t)(oend-op), dstSize_tooSmall, "");
if (op != NULL) {
ZSTD_memmove(op, litPtr, lastLLSize);
op += lastLLSize;
}
}
return (size_t)(op - ostart);
}
static size_t
ZSTD_decompressSequencesLong_default(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
return ZSTD_decompressSequencesLong_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT */
#if DYNAMIC_BMI2
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
static BMI2_TARGET_ATTRIBUTE size_t
DONT_VECTORIZE
ZSTD_decompressSequences_bmi2(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
return ZSTD_decompressSequences_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
static BMI2_TARGET_ATTRIBUTE size_t
DONT_VECTORIZE
ZSTD_decompressSequencesSplitLitBuffer_bmi2(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
return ZSTD_decompressSequences_bodySplitLitBuffer(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG */
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
static BMI2_TARGET_ATTRIBUTE size_t
ZSTD_decompressSequencesLong_bmi2(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
return ZSTD_decompressSequencesLong_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT */
#endif /* DYNAMIC_BMI2 */
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
static size_t
ZSTD_decompressSequences(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
DEBUGLOG(5, "ZSTD_decompressSequences");
#if DYNAMIC_BMI2
if (ZSTD_DCtx_get_bmi2(dctx)) {
return ZSTD_decompressSequences_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif
return ZSTD_decompressSequences_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
static size_t
ZSTD_decompressSequencesSplitLitBuffer(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
DEBUGLOG(5, "ZSTD_decompressSequencesSplitLitBuffer");
#if DYNAMIC_BMI2
if (ZSTD_DCtx_get_bmi2(dctx)) {
return ZSTD_decompressSequencesSplitLitBuffer_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif
return ZSTD_decompressSequencesSplitLitBuffer_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG */
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
/* ZSTD_decompressSequencesLong() :
* decompression function triggered when a minimum share of offsets is considered "long",
* aka out of cache.
* note : "long" definition seems overloaded here, sometimes meaning "wider than bitstream register", and sometimes meaning "farther than memory cache distance".
* This function will try to mitigate main memory latency through the use of prefetching */
static size_t
ZSTD_decompressSequencesLong(ZSTD_DCtx* dctx,
void* dst, size_t maxDstSize,
const void* seqStart, size_t seqSize, int nbSeq,
const ZSTD_longOffset_e isLongOffset)
{
DEBUGLOG(5, "ZSTD_decompressSequencesLong");
#if DYNAMIC_BMI2
if (ZSTD_DCtx_get_bmi2(dctx)) {
return ZSTD_decompressSequencesLong_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif
return ZSTD_decompressSequencesLong_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
}
#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT */
/**
* @returns The total size of the history referenceable by zstd, including
* both the prefix and the extDict. At @p op any offset larger than this
* is invalid.
*/
static size_t ZSTD_totalHistorySize(BYTE* op, BYTE const* virtualStart)
{
return (size_t)(op - virtualStart);
}
typedef struct {
unsigned longOffsetShare;
unsigned maxNbAdditionalBits;
} ZSTD_OffsetInfo;
/* ZSTD_getOffsetInfo() :
* condition : offTable must be valid
* @return : "share" of long offsets (arbitrarily defined as > (1<<23))
* compared to maximum possible of (1<<OffFSELog),
* as well as the maximum number of additional bits required.
*/
static ZSTD_OffsetInfo
ZSTD_getOffsetInfo(const ZSTD_seqSymbol* offTable, int nbSeq)
{
ZSTD_OffsetInfo info = {0, 0};
/* If nbSeq == 0, then the offTable is uninitialized, but we have
* no sequences, so both values should be 0.
*/
if (nbSeq != 0) {
const void* ptr = offTable;
U32 const tableLog = ((const ZSTD_seqSymbol_header*)ptr)[0].tableLog;
const ZSTD_seqSymbol* table = offTable + 1;
U32 const max = 1 << tableLog;
U32 u;
DEBUGLOG(5, "ZSTD_getLongOffsetsShare: (tableLog=%u)", tableLog);
assert(max <= (1 << OffFSELog)); /* max not too large */
for (u=0; u<max; u++) {
info.maxNbAdditionalBits = MAX(info.maxNbAdditionalBits, table[u].nbAdditionalBits);
if (table[u].nbAdditionalBits > 22) info.longOffsetShare += 1;
}
assert(tableLog <= OffFSELog);
info.longOffsetShare <<= (OffFSELog - tableLog); /* scale to OffFSELog */
}
return info;
}
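/* Worked example of the scaling above (illustrative note, assuming OffFSELog == 8,
 * its value in zstd_internal.h): if the offset table uses tableLog == 6, each of its
 * 64 cells stands for 1 << (8 - 6) == 4 cells of a full 256-cell table, so
 * longOffsetShare is shifted left by 2 before being compared against the minShare
 * thresholds used in ZSTD_decompressBlock_internal(). */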
/**
* @returns The maximum offset we can decode in one read of our bitstream, without
* reloading more bits in the middle of the offset bits read. Any offsets larger
* than this must use the long offset decoder.
*/
static size_t ZSTD_maxShortOffset(void)
{
if (MEM_64bits()) {
/* We can decode any offset without reloading bits.
* This might change if the max window size grows.
*/
ZSTD_STATIC_ASSERT(ZSTD_WINDOWLOG_MAX <= 31);
return (size_t)-1;
} else {
/* The maximum offBase is (1 << (STREAM_ACCUMULATOR_MIN + 1)) - 1.
* This offBase would require STREAM_ACCUMULATOR_MIN extra bits.
* Then we have to subtract ZSTD_REP_NUM to get the maximum possible offset.
*/
size_t const maxOffbase = ((size_t)1 << (STREAM_ACCUMULATOR_MIN + 1)) - 1;
size_t const maxOffset = maxOffbase - ZSTD_REP_NUM;
assert(ZSTD_highbit32((U32)maxOffbase) == STREAM_ACCUMULATOR_MIN);
return maxOffset;
}
}
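/* Worked numbers for the 32-bit branch above (illustrative note; assumes
 * STREAM_ACCUMULATOR_MIN == 25, its usual value on 32-bit builds, and ZSTD_REP_NUM == 3):
 *   maxOffbase = (1 << 26) - 1 = 67,108,863
 *   maxOffset  = 67,108,863 - 3 = 67,108,860
 * Any (valid) offset beyond this value requires the long-offset decoding path. */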
size_t
ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize, const streaming_operation streaming)
{ /* blockType == blockCompressed */
const BYTE* ip = (const BYTE*)src;
DEBUGLOG(5, "ZSTD_decompressBlock_internal (cSize : %u)", (unsigned)srcSize);
/* Note : the wording of the specification
* allows a compressed block to be sized exactly ZSTD_blockSizeMax(dctx).
* This generally does not happen, as it makes little sense,
* since an uncompressed block would have the same size at no decompression cost.
* Also note that decoders from reference libzstd versions before v1.5.4
* would treat this edge case as an error.
* As a consequence, avoid generating compressed blocks of size ZSTD_blockSizeMax(dctx)
* for broader compatibility with the deployed ecosystem of zstd decoders */
RETURN_ERROR_IF(srcSize > ZSTD_blockSizeMax(dctx), srcSize_wrong, "");
/* Decode literals section */
{ size_t const litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize, dst, dstCapacity, streaming);
DEBUGLOG(5, "ZSTD_decodeLiteralsBlock : cSize=%u, nbLiterals=%zu", (U32)litCSize, dctx->litSize);
if (ZSTD_isError(litCSize)) return litCSize;
ip += litCSize;
srcSize -= litCSize;
}
/* Build Decoding Tables */
{
/* Compute the maximum block size, which must also work when !frame and fParams are unset.
* Additionally, take the min with dstCapacity to ensure that the totalHistorySize fits in a size_t.
*/
size_t const blockSizeMax = MIN(dstCapacity, ZSTD_blockSizeMax(dctx));
size_t const totalHistorySize = ZSTD_totalHistorySize(ZSTD_maybeNullPtrAdd((BYTE*)dst, blockSizeMax), (BYTE const*)dctx->virtualStart);
/* isLongOffset must be true if there are long offsets.
* Offsets are long if they are larger than ZSTD_maxShortOffset().
* We don't expect that to be the case in 64-bit mode.
*
* We check here to see if our history is large enough to allow long offsets.
* If it isn't, then we can't possibly have (valid) long offsets. If the offset
* is invalid, then it is okay to read it incorrectly.
*
* If isLongOffset is true, then we will later check our decoding table to see
* if it is even possible to generate long offsets.
*/
ZSTD_longOffset_e isLongOffset = (ZSTD_longOffset_e)(MEM_32bits() && (totalHistorySize > ZSTD_maxShortOffset()));
/* These macros control at build-time which decompressor implementation
* we use. If neither is defined, we do some inspection and dispatch at
* runtime.
*/
#if !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
!defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
int usePrefetchDecoder = dctx->ddictIsCold;
#else
/* Set to 1 to avoid computing offset info if we don't need to.
* Otherwise this value is ignored.
*/
int usePrefetchDecoder = 1;
#endif
int nbSeq;
size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, srcSize);
if (ZSTD_isError(seqHSize)) return seqHSize;
ip += seqHSize;
srcSize -= seqHSize;
RETURN_ERROR_IF((dst == NULL || dstCapacity == 0) && nbSeq > 0, dstSize_tooSmall, "NULL not handled");
RETURN_ERROR_IF(MEM_64bits() && sizeof(size_t) == sizeof(void*) && (size_t)(-1) - (size_t)dst < (size_t)(1 << 20), dstSize_tooSmall,
"invalid dst");
/* If we could potentially have long offsets, or we might want to use the prefetch decoder,
* compute information about the share of long offsets, and the maximum nbAdditionalBits.
* NOTE: could probably use a larger nbSeq limit
*/
if (isLongOffset || (!usePrefetchDecoder && (totalHistorySize > (1u << 24)) && (nbSeq > 8))) {
ZSTD_OffsetInfo const info = ZSTD_getOffsetInfo(dctx->OFTptr, nbSeq);
if (isLongOffset && info.maxNbAdditionalBits <= STREAM_ACCUMULATOR_MIN) {
/* If isLongOffset, but the maximum number of additional bits that we see in our table is small
* enough, then we know it is impossible to have too long an offset in this block, so we can
* use the regular offset decoder.
*/
isLongOffset = ZSTD_lo_isRegularOffset;
}
if (!usePrefetchDecoder) {
U32 const minShare = MEM_64bits() ? 7 : 20; /* heuristic values, correspond to 2.73% and 7.81% */
usePrefetchDecoder = (info.longOffsetShare >= minShare);
}
}
dctx->ddictIsCold = 0;
#if !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
!defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
if (usePrefetchDecoder) {
#else
(void)usePrefetchDecoder;
{
#endif
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
return ZSTD_decompressSequencesLong(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
#endif
}
#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
/* else */
if (dctx->litBufferLocation == ZSTD_split)
return ZSTD_decompressSequencesSplitLitBuffer(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
else
return ZSTD_decompressSequences(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
#endif
}
}
ZSTD_ALLOW_POINTER_OVERFLOW_ATTR
void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst, size_t dstSize)
{
if (dst != dctx->previousDstEnd && dstSize > 0) { /* not contiguous */
dctx->dictEnd = dctx->previousDstEnd;
dctx->virtualStart = (const char*)dst - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->prefixStart));
dctx->prefixStart = dst;
dctx->previousDstEnd = dst;
}
}
size_t ZSTD_decompressBlock_deprecated(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
size_t dSize;
dctx->isFrameDecompression = 0;
ZSTD_checkContinuity(dctx, dst, dstCapacity);
dSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, not_streaming);
FORWARD_IF_ERROR(dSize, "");
dctx->previousDstEnd = (char*)dst + dSize;
return dSize;
}
/* NOTE: Must just wrap ZSTD_decompressBlock_deprecated() */
size_t ZSTD_decompressBlock(ZSTD_DCtx* dctx,
void* dst, size_t dstCapacity,
const void* src, size_t srcSize)
{
return ZSTD_decompressBlock_deprecated(dctx, dst, dstCapacity, src, srcSize);
}
/**** ended inlining decompress/zstd_decompress_block.c ****/
/**** start inlining dictBuilder/cover.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/* *****************************************************************************
* Constructs a dictionary using a heuristic based on the following paper:
*
* Liao, Petri, Moffat, Wirth
* Effective Construction of Relative Lempel-Ziv Dictionaries
* Published in WWW 2016.
*
* Adapted from code originally written by @ot (Giuseppe Ottaviano).
******************************************************************************/
/*-*************************************
* Dependencies
***************************************/
/* qsort_r is an extension. */
#if defined(__linux) || defined(__linux__) || defined(linux) || defined(__gnu_linux__) || \
defined(__CYGWIN__) || defined(__MSYS__)
#if !defined(_GNU_SOURCE) && !defined(__ANDROID__) /* NDK doesn't ship qsort_r(). */
#define _GNU_SOURCE
#endif
#endif
#include <stdio.h> /* fprintf */
#include <stdlib.h> /* malloc, free, qsort_r */
#include <string.h> /* memset */
#include <time.h> /* clock */
#ifndef ZDICT_STATIC_LINKING_ONLY
# define ZDICT_STATIC_LINKING_ONLY
#endif
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/pool.h ****/
/**** skipping file: ../common/threading.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: ../common/bits.h ****/
/**** start inlining ../zdict.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZSTD_ZDICT_H
#define ZSTD_ZDICT_H
/*====== Dependencies ======*/
#include <stddef.h> /* size_t */
#if defined (__cplusplus)
extern "C" {
#endif
/* ===== ZDICTLIB_API : control library symbols visibility ===== */
#ifndef ZDICTLIB_VISIBLE
/* Backwards compatibility with old macro name */
# ifdef ZDICTLIB_VISIBILITY
# define ZDICTLIB_VISIBLE ZDICTLIB_VISIBILITY
# elif defined(__GNUC__) && (__GNUC__ >= 4) && !defined(__MINGW32__)
# define ZDICTLIB_VISIBLE __attribute__ ((visibility ("default")))
# else
# define ZDICTLIB_VISIBLE
# endif
#endif
#ifndef ZDICTLIB_HIDDEN
# if defined(__GNUC__) && (__GNUC__ >= 4) && !defined(__MINGW32__)
# define ZDICTLIB_HIDDEN __attribute__ ((visibility ("hidden")))
# else
# define ZDICTLIB_HIDDEN
# endif
#endif
#if defined(ZSTD_DLL_EXPORT) && (ZSTD_DLL_EXPORT==1)
# define ZDICTLIB_API __declspec(dllexport) ZDICTLIB_VISIBLE
#elif defined(ZSTD_DLL_IMPORT) && (ZSTD_DLL_IMPORT==1)
# define ZDICTLIB_API __declspec(dllimport) ZDICTLIB_VISIBLE /* It isn't required but allows to generate better code, saving a function pointer load from the IAT and an indirect jump.*/
#else
# define ZDICTLIB_API ZDICTLIB_VISIBLE
#endif
/*******************************************************************************
* Zstd dictionary builder
*
* FAQ
* ===
* Why should I use a dictionary?
* ------------------------------
*
* Zstd can use dictionaries to improve compression ratio of small data.
* Traditionally small files don't compress well because there is very little
* repetition in a single sample, since it is small. But, if you are compressing
* many similar files, like a bunch of JSON records that share the same
* structure, you can train a dictionary ahead of time on some samples of
* these files. Then, zstd can use the dictionary to find repetitions that are
* present across samples. This can vastly improve compression ratio.
*
* When is a dictionary useful?
* ----------------------------
*
* Dictionaries are useful when compressing many small files that are similar.
* The larger a file is, the less benefit a dictionary will have. Generally,
* we don't expect dictionary compression to be effective past 100KB. And the
* smaller a file is, the more we would expect the dictionary to help.
*
* How do I use a dictionary?
* --------------------------
*
* Simply pass the dictionary to the zstd compressor with
* `ZSTD_CCtx_loadDictionary()`. The same dictionary must then be passed to
* the decompressor, using `ZSTD_DCtx_loadDictionary()`. There are other
* more advanced functions that allow selecting some options, see zstd.h for
* complete documentation.
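*
* A minimal sketch of that flow (illustrative only; the buffer names are
* placeholders and error checking is omitted):
*
*   ZSTD_CCtx* const cctx = ZSTD_createCCtx();
*   ZSTD_CCtx_loadDictionary(cctx, dictBuffer, dictSize);
*   size_t const cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
*
*   ZSTD_DCtx* const dctx = ZSTD_createDCtx();
*   ZSTD_DCtx_loadDictionary(dctx, dictBuffer, dictSize);
*   size_t const rSize = ZSTD_decompressDCtx(dctx, out, outCapacity, dst, cSize);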
*
* What is a zstd dictionary?
* --------------------------
*
* A zstd dictionary has two pieces: Its header, and its content. The header
* contains a magic number, the dictionary ID, and entropy tables. These
* entropy tables allow zstd to save on header costs in the compressed file,
* which really matters for small data. The content is just bytes, which are
* repeated content that is common across many samples.
*
* What is a raw content dictionary?
* ---------------------------------
*
* A raw content dictionary is just bytes. It doesn't have a zstd dictionary
* header, a dictionary ID, or entropy tables. Any buffer is a valid raw
* content dictionary.
*
* How do I train a dictionary?
* ----------------------------
*
* Gather samples from your use case. These samples should be similar to each
* other. If you have several use cases, you could try to train one dictionary
* per use case.
*
* Pass those samples to `ZDICT_trainFromBuffer()` and that will train your
* dictionary. There are a few advanced versions of this function, but this
* is a great starting point. If you want to further tune your dictionary
* you could try `ZDICT_optimizeTrainFromBuffer_cover()`. If that is too slow
* you can try `ZDICT_optimizeTrainFromBuffer_fastCover()`.
*
* If the dictionary training function fails, that is likely because you
* either passed too few samples, or a dictionary would not be effective
* for your data. Look at the messages that the dictionary trainer printed;
* if it doesn't say "too few samples", then a dictionary would not be effective.
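*
* A minimal training sketch (illustrative only; samplesBuffer holds the samples
* concatenated back to back, samplesSizes holds their individual sizes, and
* error handling is reduced to a single check):
*
*   size_t const dictSize = ZDICT_trainFromBuffer(dictBuffer, dictCapacity,
*                                                 samplesBuffer, samplesSizes, nbSamples);
*   if (ZDICT_isError(dictSize)) return dictSize;   (or fall back to dictionary-less compression)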
*
* How large should my dictionary be?
* ----------------------------------
*
* A reasonable dictionary size, the `dictBufferCapacity`, is about 100KB.
* The zstd CLI defaults to a 110KB dictionary. You likely don't need a
* dictionary larger than that. But, most use cases can get away with a
* smaller dictionary. The advanced dictionary builders can automatically
* shrink the dictionary for you, and select the smallest size that doesn't
* hurt compression ratio too much. See the `shrinkDict` parameter.
* A smaller dictionary can save memory, and potentially speed up
* compression.
*
* How many samples should I provide to the dictionary builder?
* ------------------------------------------------------------
*
* We generally recommend passing ~100x the size of the dictionary
* in samples. A few thousand should suffice. Having too few samples
* can hurt the dictionary's effectiveness. Having more samples will
* only improve the dictionary's effectiveness. But having too many
* samples can slow down the dictionary builder.
*
* How do I determine if a dictionary will be effective?
* -----------------------------------------------------
*
* Simply train a dictionary and try it out. You can use zstd's built in
* benchmarking tool to test the dictionary effectiveness.
*
* # Benchmark levels 1-3 without a dictionary
* zstd -b1e3 -r /path/to/my/files
* # Benchmark levels 1-3 with a dictionary
* zstd -b1e3 -r /path/to/my/files -D /path/to/my/dictionary
*
* When should I retrain a dictionary?
* -----------------------------------
*
* You should retrain a dictionary when its effectiveness drops. Dictionary
* effectiveness drops as the data you are compressing changes. Generally, we do
* expect dictionaries to "decay" over time, as your data changes, but the rate
* at which they decay depends on your use case. Internally, we regularly
* retrain dictionaries, and if the new dictionary performs significantly
* better than the old dictionary, we will ship the new dictionary.
*
* I have a raw content dictionary, how do I turn it into a zstd dictionary?
* -------------------------------------------------------------------------
*
* If you have a raw content dictionary, e.g. by manually constructing it, or
* using a third-party dictionary builder, you can turn it into a zstd
* dictionary by using `ZDICT_finalizeDictionary()`. You'll also have to
* provide some samples of the data. It will add the zstd header to the
* raw content, which contains a dictionary ID and entropy tables, which
* will improve compression ratio, and allow zstd to write the dictionary ID
* into the frame, if you so choose.
*
* Do I have to use zstd's dictionary builder?
* -------------------------------------------
*
* No! You can construct dictionary content however you please, it is just
* bytes. It will always be valid as a raw content dictionary. If you want
* a zstd dictionary, which can improve compression ratio, use
* `ZDICT_finalizeDictionary()`.
*
* What is the attack surface of a zstd dictionary?
* ------------------------------------------------
*
* Zstd is heavily fuzz tested, including loading fuzzed dictionaries, so
* zstd should never crash, or access out-of-bounds memory no matter what
* the dictionary is. However, if an attacker can control the dictionary
* during decompression, they can cause zstd to generate arbitrary bytes,
* just like if they controlled the compressed data.
*
******************************************************************************/
/*! ZDICT_trainFromBuffer():
* Train a dictionary from an array of samples.
* Redirect towards ZDICT_optimizeTrainFromBuffer_fastCover() single-threaded, with d=8, steps=4,
* f=20, and accel=1.
* Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
* supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
* The resulting dictionary will be saved into `dictBuffer`.
* @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
* or an error code, which can be tested with ZDICT_isError().
* Note: Dictionary training will fail if there are not enough samples to construct a
* dictionary, or if most of the samples are too small (< 8 bytes being the lower limit).
* If dictionary training fails, you should use zstd without a dictionary, as the dictionary
* would've been ineffective anyways. If you believe your samples would benefit from a dictionary
* please open an issue with details, and we can look into it.
* Note: ZDICT_trainFromBuffer()'s memory usage is about 6 MB.
* Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
* It's possible to select smaller or larger size, just by specifying `dictBufferCapacity`.
* In general, it's recommended to provide a few thousand samples, though this can vary a lot.
* It's recommended that the total size of all samples be about 100x the target size of the dictionary.
*/
ZDICTLIB_API size_t ZDICT_trainFromBuffer(void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer,
const size_t* samplesSizes, unsigned nbSamples);
typedef struct {
int compressionLevel; /**< optimize for a specific zstd compression level; 0 means default */
unsigned notificationLevel; /**< Write log to stderr; 0 = none (default); 1 = errors; 2 = progression; 3 = details; 4 = debug; */
unsigned dictID; /**< force dictID value; 0 means auto mode (32-bits random value)
* NOTE: The zstd format reserves some dictionary IDs for future use.
* You may use them in private settings, but be warned that they
* may be used by zstd in a public dictionary registry in the future.
* These dictionary IDs are:
* - low range : <= 32767
* - high range : >= (2^31)
*/
} ZDICT_params_t;
/*! ZDICT_finalizeDictionary():
* Given a custom content as a basis for dictionary, and a set of samples,
* finalize dictionary by adding headers and statistics according to the zstd
* dictionary format.
*
* Samples must be stored concatenated in a flat buffer `samplesBuffer`,
* supplied with an array of sizes `samplesSizes`, providing the size of each
* sample in order. The samples are used to construct the statistics, so they
* should be representative of what you will compress with this dictionary.
*
* The compression level can be set in `parameters`. You should pass the
* compression level you expect to use in production. The statistics for each
* compression level differ, so tuning the dictionary for the compression level
* can help quite a bit.
*
* You can set an explicit dictionary ID in `parameters`, or allow us to pick
* a random dictionary ID for you, but we can't guarantee no collisions.
*
* The dstDictBuffer and the dictContent may overlap, and the content will be
* appended to the end of the header. If the header + the content doesn't fit in
* maxDictSize, the beginning of the content is truncated to make room, since the
* most profitable content is presumed to be at the end of the dictionary,
* as that is the cheapest to reference.
*
* `maxDictSize` must be >= max(dictContentSize, ZDICT_DICTSIZE_MIN).
*
* @return: size of dictionary stored into `dstDictBuffer` (<= `maxDictSize`),
* or an error code, which can be tested by ZDICT_isError().
* Note: ZDICT_finalizeDictionary() will push notifications into stderr if
* instructed to, using notificationLevel>0.
* NOTE: This function currently may fail in several edge cases including:
* * Not enough samples
* * Samples are uncompressible
* * Samples are all exactly the same
*/
ZDICTLIB_API size_t ZDICT_finalizeDictionary(void* dstDictBuffer, size_t maxDictSize,
const void* dictContent, size_t dictContentSize,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
ZDICT_params_t parameters);
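/* A usage sketch for ZDICT_finalizeDictionary() (illustrative only; the buffer
 * names are placeholders and error checking is omitted):
 *
 *   ZDICT_params_t params;
 *   memset(&params, 0, sizeof(params));          (zeroed fields select the defaults)
 *   params.compressionLevel = 3;                 (match the level used in production)
 *   size_t const dictSize = ZDICT_finalizeDictionary(dstDict, dstCapacity,
 *                               rawContent, rawContentSize,
 *                               samplesBuffer, samplesSizes, nbSamples, params);
 *   if (ZDICT_isError(dictSize)) { (handle the error) }
 */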
/*====== Helper functions ======*/
ZDICTLIB_API unsigned ZDICT_getDictID(const void* dictBuffer, size_t dictSize); /**< extracts dictID; @return zero if error (not a valid dictionary) */
ZDICTLIB_API size_t ZDICT_getDictHeaderSize(const void* dictBuffer, size_t dictSize); /* returns dict header size; returns a ZSTD error code on failure */
ZDICTLIB_API unsigned ZDICT_isError(size_t errorCode);
ZDICTLIB_API const char* ZDICT_getErrorName(size_t errorCode);
#if defined (__cplusplus)
}
#endif
#endif /* ZSTD_ZDICT_H */
#if defined(ZDICT_STATIC_LINKING_ONLY) && !defined(ZSTD_ZDICT_H_STATIC)
#define ZSTD_ZDICT_H_STATIC
#if defined (__cplusplus)
extern "C" {
#endif
/* This can be overridden externally to hide static symbols. */
#ifndef ZDICTLIB_STATIC_API
# if defined(ZSTD_DLL_EXPORT) && (ZSTD_DLL_EXPORT==1)
# define ZDICTLIB_STATIC_API __declspec(dllexport) ZDICTLIB_VISIBLE
# elif defined(ZSTD_DLL_IMPORT) && (ZSTD_DLL_IMPORT==1)
# define ZDICTLIB_STATIC_API __declspec(dllimport) ZDICTLIB_VISIBLE
# else
# define ZDICTLIB_STATIC_API ZDICTLIB_VISIBLE
# endif
#endif
/* ====================================================================================
* The definitions in this section are considered experimental.
* They should never be used with a dynamic library, as they may change in the future.
* They are provided for advanced usages.
* Use them only in association with static linking.
* ==================================================================================== */
#define ZDICT_DICTSIZE_MIN 256
/* Deprecated: Remove in v1.6.0 */
#define ZDICT_CONTENTSIZE_MIN 128
/*! ZDICT_cover_params_t:
* k and d are the only required parameters.
* For others, value 0 means default.
*/
typedef struct {
unsigned k; /* Segment size : constraint: 0 < k : Reasonable range [16, 2048+] */
unsigned d; /* dmer size : constraint: 0 < d <= k : Reasonable range [6, 16] */
unsigned steps; /* Number of steps : Only used for optimization : 0 means default (40) : Higher means more parameters checked */
unsigned nbThreads; /* Number of threads : constraint: 0 < nbThreads : 1 means single-threaded : Only used for optimization : Ignored if ZSTD_MULTITHREAD is not defined */
double splitPoint; /* Percentage of samples used for training: Only used for optimization : the first nbSamples * splitPoint samples will be used for training, the last nbSamples * (1 - splitPoint) samples will be used for testing, 0 means default (1.0), 1.0 when all samples are used for both training and testing */
unsigned shrinkDict; /* Train dictionaries to shrink in size starting from the minimum size and selects the smallest dictionary that is shrinkDictMaxRegression% worse than the largest dictionary. 0 means no shrinking and 1 means shrinking */
unsigned shrinkDictMaxRegression; /* Sets shrinkDictMaxRegression so that a smaller dictionary can be at worst shrinkDictMaxRegression% worse than the max dict size dictionary. */
ZDICT_params_t zParams;
} ZDICT_cover_params_t;
typedef struct {
unsigned k; /* Segment size : constraint: 0 < k : Reasonable range [16, 2048+] */
unsigned d; /* dmer size : constraint: 0 < d <= k : Reasonable range [6, 16] */
unsigned f; /* log of size of frequency array : constraint: 0 < f <= 31 : 0 means default (20) */
unsigned steps; /* Number of steps : Only used for optimization : 0 means default (40) : Higher means more parameters checked */
unsigned nbThreads; /* Number of threads : constraint: 0 < nbThreads : 1 means single-threaded : Only used for optimization : Ignored if ZSTD_MULTITHREAD is not defined */
double splitPoint; /* Percentage of samples used for training: Only used for optimization : the first nbSamples * splitPoint samples will be used for training, the last nbSamples * (1 - splitPoint) samples will be used for testing, 0 means default (0.75), 1.0 when all samples are used for both training and testing */
unsigned accel; /* Acceleration level: constraint: 0 < accel <= 10, higher means faster and less accurate, 0 means default(1) */
unsigned shrinkDict; /* Train dictionaries to shrink in size starting from the minimum size and selects the smallest dictionary that is shrinkDictMaxRegression% worse than the largest dictionary. 0 means no shrinking and 1 means shrinking */
unsigned shrinkDictMaxRegression; /* Sets shrinkDictMaxRegression so that a smaller dictionary can be at worst shrinkDictMaxRegression% worse than the max dict size dictionary. */
ZDICT_params_t zParams;
} ZDICT_fastCover_params_t;
/*! ZDICT_trainFromBuffer_cover():
* Train a dictionary from an array of samples using the COVER algorithm.
* Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
* supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
* The resulting dictionary will be saved into `dictBuffer`.
* @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
* or an error code, which can be tested with ZDICT_isError().
* See ZDICT_trainFromBuffer() for details on failure modes.
* Note: ZDICT_trainFromBuffer_cover() requires about 9 bytes of memory for each input byte.
* Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
* It's possible to select smaller or larger size, just by specifying `dictBufferCapacity`.
* In general, it's recommended to provide a few thousand samples, though this can vary a lot.
* It's recommended that the total size of all samples be about 100x the target size of the dictionary.
*/
ZDICTLIB_STATIC_API size_t ZDICT_trainFromBuffer_cover(
void *dictBuffer, size_t dictBufferCapacity,
const void *samplesBuffer, const size_t *samplesSizes, unsigned nbSamples,
ZDICT_cover_params_t parameters);
/*! ZDICT_optimizeTrainFromBuffer_cover():
* The same requirements as above hold for all the parameters except `parameters`.
* This function tries many parameter combinations and picks the best parameters.
* `*parameters` is filled with the best parameters found,
* dictionary constructed with those parameters is stored in `dictBuffer`.
*
* All of the parameters d, k, steps are optional.
* If d is non-zero then we don't check multiple values of d, otherwise we check d = {6, 8}.
* If steps is zero, it defaults to its default value (40).
* If k is non-zero then we don't check multiple values of k, otherwise we check k values in [50, 2000].
*
* @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
* or an error code, which can be tested with ZDICT_isError().
* On success `*parameters` contains the parameters selected.
* See ZDICT_trainFromBuffer() for details on failure modes.
* Note: ZDICT_optimizeTrainFromBuffer_cover() requires about 8 bytes of memory for each input byte, plus an additional 5 bytes of memory per input byte for each thread.
*/
ZDICTLIB_STATIC_API size_t ZDICT_optimizeTrainFromBuffer_cover(
void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
ZDICT_cover_params_t* parameters);
/*! ZDICT_trainFromBuffer_fastCover():
* Train a dictionary from an array of samples using a modified version of COVER algorithm.
* Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
* supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
* d and k are required.
* All other parameters are optional, will use default values if not provided
* The resulting dictionary will be saved into `dictBuffer`.
* @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
* or an error code, which can be tested with ZDICT_isError().
* See ZDICT_trainFromBuffer() for details on failure modes.
* Note: ZDICT_trainFromBuffer_fastCover() requires 6 * 2^f bytes of memory.
* Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
* It's possible to select smaller or larger size, just by specifying `dictBufferCapacity`.
* In general, it's recommended to provide a few thousand samples, though this can vary a lot.
* It's recommended that the total size of all samples be about 100x the target size of the dictionary.
*/
ZDICTLIB_STATIC_API size_t ZDICT_trainFromBuffer_fastCover(void *dictBuffer,
size_t dictBufferCapacity, const void *samplesBuffer,
const size_t *samplesSizes, unsigned nbSamples,
ZDICT_fastCover_params_t parameters);
/*! ZDICT_optimizeTrainFromBuffer_fastCover():
* The same requirements as above hold for all the parameters except `parameters`.
* This function tries many parameter combinations (specifically, k and d combinations)
* and picks the best parameters. `*parameters` is filled with the best parameters found,
* dictionary constructed with those parameters is stored in `dictBuffer`.
* All of the parameters d, k, steps, f, and accel are optional.
* If d is non-zero then we don't check multiple values of d, otherwise we check d = {6, 8}.
* If steps is zero, it defaults to its default value (40).
* If k is non-zero then we don't check multiple values of k, otherwise we check k values in [50, 2000].
* If f is zero, default value of 20 is used.
* If accel is zero, default value of 1 is used.
*
* @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
* or an error code, which can be tested with ZDICT_isError().
* On success `*parameters` contains the parameters selected.
* See ZDICT_trainFromBuffer() for details on failure modes.
* Note: ZDICT_optimizeTrainFromBuffer_fastCover() requires about 6 * 2^f bytes of memory for each thread.
*/
ZDICTLIB_STATIC_API size_t ZDICT_optimizeTrainFromBuffer_fastCover(void* dictBuffer,
size_t dictBufferCapacity, const void* samplesBuffer,
const size_t* samplesSizes, unsigned nbSamples,
ZDICT_fastCover_params_t* parameters);
typedef struct {
unsigned selectivityLevel; /* 0 means default; larger => select more => larger dictionary */
ZDICT_params_t zParams;
} ZDICT_legacy_params_t;
/*! ZDICT_trainFromBuffer_legacy():
* Train a dictionary from an array of samples.
* Samples must be stored concatenated in a single flat buffer `samplesBuffer`,
* supplied with an array of sizes `samplesSizes`, providing the size of each sample, in order.
* The resulting dictionary will be saved into `dictBuffer`.
* `parameters` is optional and can be provided with values set to 0 to mean "default".
* @return: size of dictionary stored into `dictBuffer` (<= `dictBufferCapacity`)
* or an error code, which can be tested with ZDICT_isError().
* See ZDICT_trainFromBuffer() for details on failure modes.
* Tips: In general, a reasonable dictionary has a size of ~ 100 KB.
* It's possible to select smaller or larger size, just by specifying `dictBufferCapacity`.
* In general, it's recommended to provide a few thousand samples, though this can vary a lot.
* It's recommended that the total size of all samples be about 100x the target size of the dictionary.
* Note: ZDICT_trainFromBuffer_legacy() will send notifications into stderr if instructed to, using notificationLevel>0.
*/
ZDICTLIB_STATIC_API size_t ZDICT_trainFromBuffer_legacy(
void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
ZDICT_legacy_params_t parameters);
/* Deprecation warnings */
/* It is generally possible to disable deprecation warnings from compiler,
for example with -Wno-deprecated-declarations for gcc
or _CRT_SECURE_NO_WARNINGS in Visual.
Otherwise, it's also possible to manually define ZDICT_DISABLE_DEPRECATE_WARNINGS */
#ifdef ZDICT_DISABLE_DEPRECATE_WARNINGS
# define ZDICT_DEPRECATED(message) /* disable deprecation warnings */
#else
# define ZDICT_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
# if defined (__cplusplus) && (__cplusplus >= 201402) /* C++14 or greater */
# define ZDICT_DEPRECATED(message) [[deprecated(message)]]
# elif defined(__clang__) || (ZDICT_GCC_VERSION >= 405)
# define ZDICT_DEPRECATED(message) __attribute__((deprecated(message)))
# elif (ZDICT_GCC_VERSION >= 301)
# define ZDICT_DEPRECATED(message) __attribute__((deprecated))
# elif defined(_MSC_VER)
# define ZDICT_DEPRECATED(message) __declspec(deprecated(message))
# else
# pragma message("WARNING: You need to implement ZDICT_DEPRECATED for this compiler")
# define ZDICT_DEPRECATED(message)
# endif
#endif /* ZDICT_DISABLE_DEPRECATE_WARNINGS */
ZDICT_DEPRECATED("use ZDICT_finalizeDictionary() instead")
ZDICTLIB_STATIC_API
size_t ZDICT_addEntropyTablesFromBuffer(void* dictBuffer, size_t dictContentSize, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples);
#if defined (__cplusplus)
}
#endif
#endif /* ZSTD_ZDICT_H_STATIC */
/**** ended inlining ../zdict.h ****/
/**** start inlining cover.h ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
#ifndef ZDICT_STATIC_LINKING_ONLY
# define ZDICT_STATIC_LINKING_ONLY
#endif
/**** skipping file: ../common/threading.h ****/
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../zdict.h ****/
/**
* COVER_best_t is used for two purposes:
* 1. Synchronizing threads.
* 2. Saving the best parameters and dictionary.
*
* All of the methods except COVER_best_init() are thread safe if zstd is
* compiled with multithreaded support.
*/
typedef struct COVER_best_s {
ZSTD_pthread_mutex_t mutex;
ZSTD_pthread_cond_t cond;
size_t liveJobs;
void *dict;
size_t dictSize;
ZDICT_cover_params_t parameters;
size_t compressedSize;
} COVER_best_t;
/**
* A segment is a range in the source as well as the score of the segment.
*/
typedef struct {
U32 begin;
U32 end;
U32 score;
} COVER_segment_t;
/**
* Number of epochs and size of each epoch.
*/
typedef struct {
U32 num;
U32 size;
} COVER_epoch_info_t;
/**
* Struct used for the dictionary selection function.
*/
typedef struct COVER_dictSelection {
BYTE* dictContent;
size_t dictSize;
size_t totalCompressedSize;
} COVER_dictSelection_t;
/**
* Computes the number of epochs and the size of each epoch.
* We will make sure that each epoch gets at least 10 * k bytes.
*
* The COVER algorithms divide the data up into epochs of equal size and
* select one segment from each epoch.
*
* @param maxDictSize The maximum allowed dictionary size.
* @param nbDmers The number of dmers we are training on.
* @param k The parameter k (segment size).
* @param passes The target number of passes over the dmer corpus.
* More passes means a better dictionary.
*/
COVER_epoch_info_t COVER_computeEpochs(U32 maxDictSize, U32 nbDmers,
U32 k, U32 passes);
/**
* Warns the user when their corpus is too small.
*/
void COVER_warnOnSmallCorpus(size_t maxDictSize, size_t nbDmers, int displayLevel);
/**
* Checks total compressed size of a dictionary
*/
size_t COVER_checkTotalCompressedSize(const ZDICT_cover_params_t parameters,
const size_t *samplesSizes, const BYTE *samples,
size_t *offsets,
size_t nbTrainSamples, size_t nbSamples,
BYTE *const dict, size_t dictBufferCapacity);
/**
* Returns the sum of the sample sizes.
*/
size_t COVER_sum(const size_t *samplesSizes, unsigned nbSamples);
/**
* Initialize the `COVER_best_t`.
*/
void COVER_best_init(COVER_best_t *best);
/**
* Wait until liveJobs == 0.
*/
void COVER_best_wait(COVER_best_t *best);
/**
* Call COVER_best_wait() and then destroy the COVER_best_t.
*/
void COVER_best_destroy(COVER_best_t *best);
/**
* Called when a thread is about to be launched.
* Increments liveJobs.
*/
void COVER_best_start(COVER_best_t *best);
/**
* Called when a thread finishes executing, both on error or success.
* Decrements liveJobs and signals any waiting threads if liveJobs == 0.
* If this dictionary is the best so far save it and its parameters.
*/
void COVER_best_finish(COVER_best_t *best, ZDICT_cover_params_t parameters,
COVER_dictSelection_t selection);
/**
* Error function for COVER_selectDict function. Checks if the return
* value is an error.
*/
unsigned COVER_dictSelectionIsError(COVER_dictSelection_t selection);
/**
* Error function for COVER_selectDict function. Returns a struct where
* return.totalCompressedSize is a ZSTD error.
*/
COVER_dictSelection_t COVER_dictSelectionError(size_t error);
/**
* Always call this after COVER_selectDict() to free the memory used by the
* newly created dictionary.
*/
void COVER_dictSelectionFree(COVER_dictSelection_t selection);
/**
* Called to finalize the dictionary and select one based on whether or not
* the shrink-dict flag was enabled. If enabled the dictionary used is the
* smallest dictionary within a specified regression of the compressed size
* from the largest dictionary.
*/
COVER_dictSelection_t COVER_selectDict(BYTE* customDictContent, size_t dictBufferCapacity,
size_t dictContentSize, const BYTE* samplesBuffer, const size_t* samplesSizes, unsigned nbFinalizeSamples,
size_t nbCheckSamples, size_t nbSamples, ZDICT_cover_params_t params, size_t* offsets, size_t totalCompressedSize);
/**** ended inlining cover.h ****/
/*-*************************************
* Constants
***************************************/
/**
* There are 32bit indexes used to ref samples, so limit samples size to 4GB
* on 64bit builds.
* For 32bit builds we choose 1 GB.
* Most 32bit platforms have 2GB user-mode addressable space and we allocate a large
* contiguous buffer, so 1GB is already a high limit.
*/
#define COVER_MAX_SAMPLES_SIZE (sizeof(size_t) == 8 ? ((unsigned)-1) : ((unsigned)1 GB))
#define COVER_DEFAULT_SPLITPOINT 1.0
/*-*************************************
* Console display
***************************************/
#ifndef LOCALDISPLAYLEVEL
static int g_displayLevel = 0;
#endif
#undef DISPLAY
#define DISPLAY(...) \
{ \
fprintf(stderr, __VA_ARGS__); \
fflush(stderr); \
}
#undef LOCALDISPLAYLEVEL
#define LOCALDISPLAYLEVEL(displayLevel, l, ...) \
if (displayLevel >= l) { \
DISPLAY(__VA_ARGS__); \
} /* 0 : no display; 1: errors; 2: default; 3: details; 4: debug */
#undef DISPLAYLEVEL
#define DISPLAYLEVEL(l, ...) LOCALDISPLAYLEVEL(g_displayLevel, l, __VA_ARGS__)
#ifndef LOCALDISPLAYUPDATE
static const clock_t g_refreshRate = CLOCKS_PER_SEC * 15 / 100;
static clock_t g_time = 0;
#endif
#undef LOCALDISPLAYUPDATE
#define LOCALDISPLAYUPDATE(displayLevel, l, ...) \
if (displayLevel >= l) { \
if ((clock() - g_time > g_refreshRate) || (displayLevel >= 4)) { \
g_time = clock(); \
DISPLAY(__VA_ARGS__); \
} \
}
#undef DISPLAYUPDATE
#define DISPLAYUPDATE(l, ...) LOCALDISPLAYUPDATE(g_displayLevel, l, __VA_ARGS__)
/*-*************************************
* Hash table
***************************************
* A small specialized hash map for storing activeDmers.
* The map does not resize, so if it becomes full it will loop forever.
* Thus, the map must be large enough to store every value.
* The map implements linear probing and keeps its load less than 0.5.
*/
#define MAP_EMPTY_VALUE ((U32)-1)
typedef struct COVER_map_pair_t_s {
U32 key;
U32 value;
} COVER_map_pair_t;
typedef struct COVER_map_s {
COVER_map_pair_t *data;
U32 sizeLog;
U32 size;
U32 sizeMask;
} COVER_map_t;
/**
* Clear the map.
*/
static void COVER_map_clear(COVER_map_t *map) {
memset(map->data, MAP_EMPTY_VALUE, map->size * sizeof(COVER_map_pair_t));
}
/**
* Initializes a map of the given size.
* Returns 1 on success and 0 on failure.
* The map must be destroyed with COVER_map_destroy().
* The map is only guaranteed to be large enough to hold size elements.
*/
static int COVER_map_init(COVER_map_t *map, U32 size) {
map->sizeLog = ZSTD_highbit32(size) + 2;
map->size = (U32)1 << map->sizeLog;
map->sizeMask = map->size - 1;
map->data = (COVER_map_pair_t *)malloc(map->size * sizeof(COVER_map_pair_t));
if (!map->data) {
map->sizeLog = 0;
map->size = 0;
return 0;
}
COVER_map_clear(map);
return 1;
}
/**
* Internal hash function
*/
static const U32 COVER_prime4bytes = 2654435761U;
static U32 COVER_map_hash(COVER_map_t *map, U32 key) {
return (key * COVER_prime4bytes) >> (32 - map->sizeLog);
}
/**
* Helper function that returns the index that a key should be placed into.
*/
static U32 COVER_map_index(COVER_map_t *map, U32 key) {
const U32 hash = COVER_map_hash(map, key);
U32 i;
for (i = hash;; i = (i + 1) & map->sizeMask) {
COVER_map_pair_t *pos = &map->data[i];
if (pos->value == MAP_EMPTY_VALUE) {
return i;
}
if (pos->key == key) {
return i;
}
}
}
/**
* Returns the pointer to the value for key.
* If key is not in the map, it is inserted and the value is set to 0.
* The map must not be full.
*/
static U32 *COVER_map_at(COVER_map_t *map, U32 key) {
COVER_map_pair_t *pos = &map->data[COVER_map_index(map, key)];
if (pos->value == MAP_EMPTY_VALUE) {
pos->key = key;
pos->value = 0;
}
return &pos->value;
}
/**
* Deletes key from the map if present.
*/
static void COVER_map_remove(COVER_map_t *map, U32 key) {
U32 i = COVER_map_index(map, key);
COVER_map_pair_t *del = &map->data[i];
U32 shift = 1;
if (del->value == MAP_EMPTY_VALUE) {
return;
}
for (i = (i + 1) & map->sizeMask;; i = (i + 1) & map->sizeMask) {
COVER_map_pair_t *const pos = &map->data[i];
/* If the position is empty we are done */
if (pos->value == MAP_EMPTY_VALUE) {
del->value = MAP_EMPTY_VALUE;
return;
}
/* If pos can be moved to del do so */
if (((i - COVER_map_hash(map, pos->key)) & map->sizeMask) >= shift) {
del->key = pos->key;
del->value = pos->value;
del = pos;
shift = 1;
} else {
++shift;
}
}
}
/**
* Destroys a map that was initialized with COVER_map_init().
*/
static void COVER_map_destroy(COVER_map_t *map) {
if (map->data) {
free(map->data);
}
map->data = NULL;
map->size = 0;
}
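/* Illustrative usage sketch of the COVER_map API above (not part of the
 * original source; the function name is hypothetical). It assumes the number
 * of live keys stays well below the size passed to COVER_map_init(), since
 * the table never resizes. */
#if 0
static void COVER_map_example(void) {
  COVER_map_t map;
  if (!COVER_map_init(&map, 16)) { return; }  /* allocates at least 16 slots */
  *COVER_map_at(&map, 42) += 1;               /* key 42 inserted with value 0, then incremented */
  *COVER_map_at(&map, 42) += 1;               /* map now holds 42 -> 2 */
  COVER_map_remove(&map, 42);                 /* backward-shift deletion keeps probing valid */
  COVER_map_destroy(&map);
}
#endif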
/*-*************************************
* Context
***************************************/
typedef struct {
const BYTE *samples;
size_t *offsets;
const size_t *samplesSizes;
size_t nbSamples;
size_t nbTrainSamples;
size_t nbTestSamples;
U32 *suffix;
size_t suffixSize;
U32 *freqs;
U32 *dmerAt;
unsigned d;
} COVER_ctx_t;
#if !defined(_GNU_SOURCE) && !defined(__APPLE__) && !defined(_MSC_VER)
/* C90 only offers qsort(), which requires a global context. */
static COVER_ctx_t *g_coverCtx = NULL;
#endif
/*-*************************************
* Helper functions
***************************************/
/**
* Returns the sum of the sample sizes.
*/
size_t COVER_sum(const size_t *samplesSizes, unsigned nbSamples) {
size_t sum = 0;
unsigned i;
for (i = 0; i < nbSamples; ++i) {
sum += samplesSizes[i];
}
return sum;
}
/**
* Returns -1 if the dmer at lp is less than the dmer at rp.
* Returns 0 if the dmers at lp and rp are equal.
* Returns 1 if the dmer at lp is greater than the dmer at rp.
*/
static int COVER_cmp(COVER_ctx_t *ctx, const void *lp, const void *rp) {
U32 const lhs = *(U32 const *)lp;
U32 const rhs = *(U32 const *)rp;
return memcmp(ctx->samples + lhs, ctx->samples + rhs, ctx->d);
}
/**
* Faster version for d <= 8.
*/
static int COVER_cmp8(COVER_ctx_t *ctx, const void *lp, const void *rp) {
U64 const mask = (ctx->d == 8) ? (U64)-1 : (((U64)1 << (8 * ctx->d)) - 1);
U64 const lhs = MEM_readLE64(ctx->samples + *(U32 const *)lp) & mask;
U64 const rhs = MEM_readLE64(ctx->samples + *(U32 const *)rp) & mask;
if (lhs < rhs) {
return -1;
}
return (lhs > rhs);
}
/**
* Same as COVER_cmp() except ties are broken by pointer value
*/
#if (defined(_WIN32) && defined(_MSC_VER)) || defined(__APPLE__)
static int WIN_CDECL COVER_strict_cmp(void* g_coverCtx, const void* lp, const void* rp) {
#elif defined(_GNU_SOURCE)
static int COVER_strict_cmp(const void *lp, const void *rp, void *g_coverCtx) {
#else /* C90 fallback.*/
static int COVER_strict_cmp(const void *lp, const void *rp) {
#endif
int result = COVER_cmp((COVER_ctx_t*)g_coverCtx, lp, rp);
if (result == 0) {
result = lp < rp ? -1 : 1;
}
return result;
}
/**
* Faster version for d <= 8.
*/
#if (defined(_WIN32) && defined(_MSC_VER)) || defined(__APPLE__)
static int WIN_CDECL COVER_strict_cmp8(void* g_coverCtx, const void* lp, const void* rp) {
#elif defined(_GNU_SOURCE)
static int COVER_strict_cmp8(const void *lp, const void *rp, void *g_coverCtx) {
#else /* C90 fallback.*/
static int COVER_strict_cmp8(const void *lp, const void *rp) {
#endif
int result = COVER_cmp8((COVER_ctx_t*)g_coverCtx, lp, rp);
if (result == 0) {
result = lp < rp ? -1 : 1;
}
return result;
}
/**
* Abstract away divergence of qsort_r() parameters.
* Hopefully once C11 becomes the norm, we will be able
* to clean this up.
*/
static void stableSort(COVER_ctx_t *ctx) {
#if defined(__APPLE__)
qsort_r(ctx->suffix, ctx->suffixSize, sizeof(U32),
ctx,
(ctx->d <= 8 ? &COVER_strict_cmp8 : &COVER_strict_cmp));
#elif defined(_GNU_SOURCE)
qsort_r(ctx->suffix, ctx->suffixSize, sizeof(U32),
(ctx->d <= 8 ? &COVER_strict_cmp8 : &COVER_strict_cmp),
ctx);
#elif defined(_WIN32) && defined(_MSC_VER)
qsort_s(ctx->suffix, ctx->suffixSize, sizeof(U32),
(ctx->d <= 8 ? &COVER_strict_cmp8 : &COVER_strict_cmp),
ctx);
#elif defined(__OpenBSD__)
g_coverCtx = ctx;
mergesort(ctx->suffix, ctx->suffixSize, sizeof(U32),
(ctx->d <= 8 ? &COVER_strict_cmp8 : &COVER_strict_cmp));
#else /* C90 fallback.*/
g_coverCtx = ctx;
/* TODO(cavalcanti): implement a reentrant qsort() fallback for when qsort_r() is not available. */
qsort(ctx->suffix, ctx->suffixSize, sizeof(U32),
(ctx->d <= 8 ? &COVER_strict_cmp8 : &COVER_strict_cmp));
#endif
}
/**
* Returns the first pointer in [first, last) whose element does not compare
* less than value. If no such element exists it returns last.
*/
static const size_t *COVER_lower_bound(const size_t* first, const size_t* last,
size_t value) {
size_t count = (size_t)(last - first);
assert(last >= first);
while (count != 0) {
size_t step = count / 2;
const size_t *ptr = first;
ptr += step;
if (*ptr < value) {
first = ++ptr;
count -= step + 1;
} else {
count = step;
}
}
return first;
}
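/* Worked illustration of COVER_lower_bound() (not from the original source):
 * in the sorted array {10, 20, 30}, searching for 15 returns a pointer to 20
 * (the first element not less than 15), searching for 20 returns a pointer to
 * 20 itself, and searching for 35 returns `last`. */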
/**
* Generic groupBy function.
* Groups an array sorted by cmp into groups with equivalent values.
* Calls grp for each group.
*/
static void
COVER_groupBy(const void *data, size_t count, size_t size, COVER_ctx_t *ctx,
int (*cmp)(COVER_ctx_t *, const void *, const void *),
void (*grp)(COVER_ctx_t *, const void *, const void *)) {
const BYTE *ptr = (const BYTE *)data;
size_t num = 0;
while (num < count) {
const BYTE *grpEnd = ptr + size;
++num;
while (num < count && cmp(ctx, ptr, grpEnd) == 0) {
grpEnd += size;
++num;
}
grp(ctx, ptr, grpEnd);
ptr = grpEnd;
}
}
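/* Illustrative sketch of COVER_groupBy() on a plain sorted U32 array (not part
 * of the original source; the helper names are hypothetical). The ctx argument
 * is only passed through, so NULL is fine when the callbacks ignore it. */
#if 0
static int example_cmpU32(COVER_ctx_t *ctx, const void *lp, const void *rp) {
  U32 const l = *(const U32 *)lp;
  U32 const r = *(const U32 *)rp;
  (void)ctx;
  return l < r ? -1 : (l > r ? 1 : 0);
}
static void example_grpU32(COVER_ctx_t *ctx, const void *grp, const void *grpEnd) {
  (void)ctx;
  DISPLAY("group of %u equal values\n",
          (unsigned)((const U32 *)grpEnd - (const U32 *)grp));
}
static void COVER_groupBy_example(void) {
  U32 sorted[6] = {1, 1, 2, 3, 3, 3};  /* input must already be sorted */
  COVER_groupBy(sorted, 6, sizeof(U32), NULL, &example_cmpU32, &example_grpU32);
  /* prints groups of sizes 2, 1 and 3 */
}
#endif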
/*-*************************************
* Cover functions
***************************************/
/**
* Called on each group of positions with the same dmer.
* Counts the frequency of each dmer and saves it in the suffix array.
* Fills `ctx->dmerAt`.
*/
static void COVER_group(COVER_ctx_t *ctx, const void *group,
const void *groupEnd) {
/* The group consists of all the positions with the same first d bytes. */
const U32 *grpPtr = (const U32 *)group;
const U32 *grpEnd = (const U32 *)groupEnd;
/* The dmerId is how we will reference this dmer.
* This allows us to map the whole dmer space to a much smaller space, the
* size of the suffix array.
*/
const U32 dmerId = (U32)(grpPtr - ctx->suffix);
/* Count the number of samples this dmer shows up in */
U32 freq = 0;
/* Details */
const size_t *curOffsetPtr = ctx->offsets;
const size_t *offsetsEnd = ctx->offsets + ctx->nbSamples;
/* Once *grpPtr >= curSampleEnd this occurrence of the dmer is in a
* different sample than the last.
*/
size_t curSampleEnd = ctx->offsets[0];
for (; grpPtr != grpEnd; ++grpPtr) {
/* Save the dmerId for this position so we can get back to it. */
ctx->dmerAt[*grpPtr] = dmerId;
/* Dictionaries only help for the first reference to the dmer.
* After that zstd can reference the match from the previous reference.
* So only count each dmer once for each sample it is in.
*/
if (*grpPtr < curSampleEnd) {
continue;
}
freq += 1;
/* Binary search to find the end of the sample *grpPtr is in.
* In the common case that grpPtr + 1 == grpEnd we can skip the binary
* search because the loop is over.
*/
if (grpPtr + 1 != grpEnd) {
const size_t *sampleEndPtr =
COVER_lower_bound(curOffsetPtr, offsetsEnd, *grpPtr);
curSampleEnd = *sampleEndPtr;
curOffsetPtr = sampleEndPtr + 1;
}
}
/* At this point we are never going to look at this segment of the suffix
* array again. We take advantage of this fact to save memory.
* We store the frequency of the dmer in the first position of the group,
* which is dmerId.
*/
ctx->suffix[dmerId] = freq;
}
/**
* Selects the best segment in an epoch.
* Segments are scored according to the function:
*
* Let F(d) be the frequency of dmer d.
* Let S_i be the dmer at position i of segment S which has length k.
*
* Score(S) = F(S_1) + F(S_2) + ... + F(S_{k-d+1})
*
* Once the dmer d is in the dictionary we set F(d) = 0.
*/
static COVER_segment_t COVER_selectSegment(const COVER_ctx_t *ctx, U32 *freqs,
COVER_map_t *activeDmers, U32 begin,
U32 end,
ZDICT_cover_params_t parameters) {
/* Constants */
const U32 k = parameters.k;
const U32 d = parameters.d;
const U32 dmersInK = k - d + 1;
/* Try each segment (activeSegment) and save the best (bestSegment) */
COVER_segment_t bestSegment = {0, 0, 0};
COVER_segment_t activeSegment;
/* Reset the activeDmers in the segment */
COVER_map_clear(activeDmers);
/* The activeSegment starts at the beginning of the epoch. */
activeSegment.begin = begin;
activeSegment.end = begin;
activeSegment.score = 0;
/* Slide the activeSegment through the whole epoch.
* Save the best segment in bestSegment.
*/
while (activeSegment.end < end) {
/* The dmerId for the dmer at the next position */
U32 newDmer = ctx->dmerAt[activeSegment.end];
/* The entry in activeDmers for this dmerId */
U32 *newDmerOcc = COVER_map_at(activeDmers, newDmer);
/* If the dmer isn't already present in the segment add its score. */
if (*newDmerOcc == 0) {
/* The paper suggests using the L-0.5 norm, but experiments show that it
* doesn't help.
*/
activeSegment.score += freqs[newDmer];
}
/* Add the dmer to the segment */
activeSegment.end += 1;
*newDmerOcc += 1;
/* If the window is now too large, drop the first position */
if (activeSegment.end - activeSegment.begin == dmersInK + 1) {
U32 delDmer = ctx->dmerAt[activeSegment.begin];
U32 *delDmerOcc = COVER_map_at(activeDmers, delDmer);
activeSegment.begin += 1;
*delDmerOcc -= 1;
/* If this is the last occurrence of the dmer, subtract its score */
if (*delDmerOcc == 0) {
COVER_map_remove(activeDmers, delDmer);
activeSegment.score -= freqs[delDmer];
}
}
/* If this segment is the best so far save it */
if (activeSegment.score > bestSegment.score) {
bestSegment = activeSegment;
}
}
{
/* Trim off the zero frequency head and tail from the segment. */
U32 newBegin = bestSegment.end;
U32 newEnd = bestSegment.begin;
U32 pos;
for (pos = bestSegment.begin; pos != bestSegment.end; ++pos) {
U32 freq = freqs[ctx->dmerAt[pos]];
if (freq != 0) {
newBegin = MIN(newBegin, pos);
newEnd = pos + 1;
}
}
bestSegment.begin = newBegin;
bestSegment.end = newEnd;
}
{
/* Zero out the frequency of each dmer covered by the chosen segment. */
U32 pos;
for (pos = bestSegment.begin; pos != bestSegment.end; ++pos) {
freqs[ctx->dmerAt[pos]] = 0;
}
}
return bestSegment;
}
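/* Tiny worked example of the scoring rule documented above (illustrative
 * only): with d = 2 and k = 4 a segment covers k - d + 1 = 3 dmers; if their
 * frequencies are F = {5, 0, 7}, the segment scores 5 + 0 + 7 = 12. Once the
 * segment is selected those frequencies are zeroed, so later segments cannot
 * count the same dmers again. */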
/**
* Check the validity of the parameters.
* Returns non-zero if the parameters are valid and 0 otherwise.
*/
static int COVER_checkParameters(ZDICT_cover_params_t parameters,
size_t maxDictSize) {
/* k and d are required parameters */
if (parameters.d == 0 || parameters.k == 0) {
return 0;
}
/* k <= maxDictSize */
if (parameters.k > maxDictSize) {
return 0;
}
/* d <= k */
if (parameters.d > parameters.k) {
return 0;
}
/* 0 < splitPoint <= 1 */
if (parameters.splitPoint <= 0 || parameters.splitPoint > 1){
return 0;
}
return 1;
}
/**
* Clean up a context initialized with `COVER_ctx_init()`.
*/
static void COVER_ctx_destroy(COVER_ctx_t *ctx) {
if (!ctx) {
return;
}
if (ctx->suffix) {
free(ctx->suffix);
ctx->suffix = NULL;
}
if (ctx->freqs) {
free(ctx->freqs);
ctx->freqs = NULL;
}
if (ctx->dmerAt) {
free(ctx->dmerAt);
ctx->dmerAt = NULL;
}
if (ctx->offsets) {
free(ctx->offsets);
ctx->offsets = NULL;
}
}
/**
* Prepare a context for dictionary building.
* The context is only dependent on the parameter `d` and can be used multiple
* times.
* Returns 0 on success, or an error code on failure.
* The context must be destroyed with `COVER_ctx_destroy()`.
*/
static size_t COVER_ctx_init(COVER_ctx_t *ctx, const void *samplesBuffer,
const size_t *samplesSizes, unsigned nbSamples,
unsigned d, double splitPoint)
{
const BYTE *const samples = (const BYTE *)samplesBuffer;
const size_t totalSamplesSize = COVER_sum(samplesSizes, nbSamples);
/* Split samples into testing and training sets */
const unsigned nbTrainSamples = splitPoint < 1.0 ? (unsigned)((double)nbSamples * splitPoint) : nbSamples;
const unsigned nbTestSamples = splitPoint < 1.0 ? nbSamples - nbTrainSamples : nbSamples;
const size_t trainingSamplesSize = splitPoint < 1.0 ? COVER_sum(samplesSizes, nbTrainSamples) : totalSamplesSize;
const size_t testSamplesSize = splitPoint < 1.0 ? COVER_sum(samplesSizes + nbTrainSamples, nbTestSamples) : totalSamplesSize;
/* Checks */
if (totalSamplesSize < MAX(d, sizeof(U64)) ||
totalSamplesSize >= (size_t)COVER_MAX_SAMPLES_SIZE) {
DISPLAYLEVEL(1, "Total samples size is too large (%u MB), maximum size is %u MB\n",
(unsigned)(totalSamplesSize>>20), (COVER_MAX_SAMPLES_SIZE >> 20));
return ERROR(srcSize_wrong);
}
/* Check if there are at least 5 training samples */
if (nbTrainSamples < 5) {
DISPLAYLEVEL(1, "Total number of training samples is %u and is invalid.", nbTrainSamples);
return ERROR(srcSize_wrong);
}
/* Check that there is at least one testing sample */
if (nbTestSamples < 1) {
DISPLAYLEVEL(1, "Total number of testing samples is %u and is invalid (at least 1 required).\n", nbTestSamples);
return ERROR(srcSize_wrong);
}
/* Zero the context */
memset(ctx, 0, sizeof(*ctx));
DISPLAYLEVEL(2, "Training on %u samples of total size %u\n", nbTrainSamples,
(unsigned)trainingSamplesSize);
DISPLAYLEVEL(2, "Testing on %u samples of total size %u\n", nbTestSamples,
(unsigned)testSamplesSize);
ctx->samples = samples;
ctx->samplesSizes = samplesSizes;
ctx->nbSamples = nbSamples;
ctx->nbTrainSamples = nbTrainSamples;
ctx->nbTestSamples = nbTestSamples;
/* Partial suffix array */
ctx->suffixSize = trainingSamplesSize - MAX(d, sizeof(U64)) + 1;
ctx->suffix = (U32 *)malloc(ctx->suffixSize * sizeof(U32));
/* Maps index to the dmerID */
ctx->dmerAt = (U32 *)malloc(ctx->suffixSize * sizeof(U32));
/* The offsets of each file */
ctx->offsets = (size_t *)malloc((nbSamples + 1) * sizeof(size_t));
if (!ctx->suffix || !ctx->dmerAt || !ctx->offsets) {
DISPLAYLEVEL(1, "Failed to allocate scratch buffers\n");
COVER_ctx_destroy(ctx);
return ERROR(memory_allocation);
}
ctx->freqs = NULL;
ctx->d = d;
/* Fill offsets from the samplesSizes */
{
U32 i;
ctx->offsets[0] = 0;
for (i = 1; i <= nbSamples; ++i) {
ctx->offsets[i] = ctx->offsets[i - 1] + samplesSizes[i - 1];
}
}
DISPLAYLEVEL(2, "Constructing partial suffix array\n");
{
/* suffix is a partial suffix array.
* It only sorts suffixes by their first parameters.d bytes.
* The sort is stable, so each dmer group is sorted by position in input.
*/
U32 i;
for (i = 0; i < ctx->suffixSize; ++i) {
ctx->suffix[i] = i;
}
stableSort(ctx);
}
DISPLAYLEVEL(2, "Computing frequencies\n");
/* For each dmer group (group of positions with the same first d bytes):
* 1. For each position we set dmerAt[position] = dmerID. The dmerID is
* (groupBeginPtr - suffix). This allows us to go from position to
* dmerID so we can look up values in freq.
* 2. We calculate how many samples the dmer occurs in and save it in
* freqs[dmerId].
*/
COVER_groupBy(ctx->suffix, ctx->suffixSize, sizeof(U32), ctx,
(ctx->d <= 8 ? &COVER_cmp8 : &COVER_cmp), &COVER_group);
ctx->freqs = ctx->suffix;
ctx->suffix = NULL;
return 0;
}
void COVER_warnOnSmallCorpus(size_t maxDictSize, size_t nbDmers, int displayLevel)
{
const double ratio = (double)nbDmers / (double)maxDictSize;
if (ratio >= 10) {
return;
}
LOCALDISPLAYLEVEL(displayLevel, 1,
"WARNING: The maximum dictionary size %u is too large "
"compared to the source size %u! "
"size(source)/size(dictionary) = %f, but it should be >= "
"10! This may lead to a subpar dictionary! We recommend "
"training on sources at least 10x, and preferably 100x "
"the size of the dictionary! \n", (U32)maxDictSize,
(U32)nbDmers, ratio);
}
COVER_epoch_info_t COVER_computeEpochs(U32 maxDictSize,
U32 nbDmers, U32 k, U32 passes)
{
const U32 minEpochSize = k * 10;
COVER_epoch_info_t epochs;
epochs.num = MAX(1, maxDictSize / k / passes);
epochs.size = nbDmers / epochs.num;
if (epochs.size >= minEpochSize) {
assert(epochs.size * epochs.num <= nbDmers);
return epochs;
}
epochs.size = MIN(minEpochSize, nbDmers);
epochs.num = nbDmers / epochs.size;
assert(epochs.size * epochs.num <= nbDmers);
return epochs;
}
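/* Illustrative numbers for COVER_computeEpochs() (not from the original
 * source): with maxDictSize = 100000, k = 1000 and passes = 4, epochs.num =
 * MAX(1, 100000 / 1000 / 4) = 25; with nbDmers = 1000000, epochs.size =
 * 1000000 / 25 = 40000, which is above the k * 10 = 10000 minimum, so the
 * first branch returns directly. */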
/**
* Given the prepared context build the dictionary.
*/
static size_t COVER_buildDictionary(const COVER_ctx_t *ctx, U32 *freqs,
COVER_map_t *activeDmers, void *dictBuffer,
size_t dictBufferCapacity,
ZDICT_cover_params_t parameters) {
BYTE *const dict = (BYTE *)dictBuffer;
size_t tail = dictBufferCapacity;
/* Divide the data into epochs. We will select one segment from each epoch. */
const COVER_epoch_info_t epochs = COVER_computeEpochs(
(U32)dictBufferCapacity, (U32)ctx->suffixSize, parameters.k, 4);
const size_t maxZeroScoreRun = MAX(10, MIN(100, epochs.num >> 3));
size_t zeroScoreRun = 0;
size_t epoch;
DISPLAYLEVEL(2, "Breaking content into %u epochs of size %u\n",
(U32)epochs.num, (U32)epochs.size);
/* Loop through the epochs until there are no more segments or the dictionary
* is full.
*/
for (epoch = 0; tail > 0; epoch = (epoch + 1) % epochs.num) {
const U32 epochBegin = (U32)(epoch * epochs.size);
const U32 epochEnd = epochBegin + epochs.size;
size_t segmentSize;
/* Select a segment */
COVER_segment_t segment = COVER_selectSegment(
ctx, freqs, activeDmers, epochBegin, epochEnd, parameters);
/* If the segment covers no dmers, then we are out of content.
* There may be new content in other epochs, so continue for some time.
*/
if (segment.score == 0) {
if (++zeroScoreRun >= maxZeroScoreRun) {
break;
}
continue;
}
zeroScoreRun = 0;
/* Trim the segment if necessary, and if it is too small then we are done */
segmentSize = MIN(segment.end - segment.begin + parameters.d - 1, tail);
if (segmentSize < parameters.d) {
break;
}
/* We fill the dictionary from the back to allow the best segments to be
* referenced with the smallest offsets.
*/
tail -= segmentSize;
memcpy(dict + tail, ctx->samples + segment.begin, segmentSize);
DISPLAYUPDATE(
2, "\r%u%% ",
(unsigned)(((dictBufferCapacity - tail) * 100) / dictBufferCapacity));
}
DISPLAYLEVEL(2, "\r%79s\r", "");
return tail;
}
ZDICTLIB_STATIC_API size_t ZDICT_trainFromBuffer_cover(
void *dictBuffer, size_t dictBufferCapacity,
const void *samplesBuffer, const size_t *samplesSizes, unsigned nbSamples,
ZDICT_cover_params_t parameters)
{
BYTE* const dict = (BYTE*)dictBuffer;
COVER_ctx_t ctx;
COVER_map_t activeDmers;
parameters.splitPoint = 1.0;
/* Initialize global data */
g_displayLevel = (int)parameters.zParams.notificationLevel;
/* Checks */
if (!COVER_checkParameters(parameters, dictBufferCapacity)) {
DISPLAYLEVEL(1, "Cover parameters incorrect\n");
return ERROR(parameter_outOfBound);
}
if (nbSamples == 0) {
DISPLAYLEVEL(1, "Cover must have at least one input file\n");
return ERROR(srcSize_wrong);
}
if (dictBufferCapacity < ZDICT_DICTSIZE_MIN) {
DISPLAYLEVEL(1, "dictBufferCapacity must be at least %u\n",
ZDICT_DICTSIZE_MIN);
return ERROR(dstSize_tooSmall);
}
/* Initialize context and activeDmers */
{
size_t const initVal = COVER_ctx_init(&ctx, samplesBuffer, samplesSizes, nbSamples,
parameters.d, parameters.splitPoint);
if (ZSTD_isError(initVal)) {
return initVal;
}
}
COVER_warnOnSmallCorpus(dictBufferCapacity, ctx.suffixSize, g_displayLevel);
if (!COVER_map_init(&activeDmers, parameters.k - parameters.d + 1)) {
DISPLAYLEVEL(1, "Failed to allocate dmer map: out of memory\n");
COVER_ctx_destroy(&ctx);
return ERROR(memory_allocation);
}
DISPLAYLEVEL(2, "Building dictionary\n");
{
const size_t tail =
COVER_buildDictionary(&ctx, ctx.freqs, &activeDmers, dictBuffer,
dictBufferCapacity, parameters);
const size_t dictionarySize = ZDICT_finalizeDictionary(
dict, dictBufferCapacity, dict + tail, dictBufferCapacity - tail,
samplesBuffer, samplesSizes, nbSamples, parameters.zParams);
if (!ZSTD_isError(dictionarySize)) {
DISPLAYLEVEL(2, "Constructed dictionary of size %u\n",
(unsigned)dictionarySize);
}
COVER_ctx_destroy(&ctx);
COVER_map_destroy(&activeDmers);
return dictionarySize;
}
}
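/* Illustrative caller sketch for ZDICT_trainFromBuffer_cover() (not part of
 * the original source; the wrapper name and parameter values are assumptions
 * for demonstration). d and k must be set explicitly here, or tuned with the
 * optimizing variant further below. */
#if 0
static size_t example_train_cover(void *dictBuffer, size_t dictCapacity,
                                  const void *samples, const size_t *sampleSizes,
                                  unsigned nbSamples) {
  ZDICT_cover_params_t params;
  memset(&params, 0, sizeof(params));
  params.d = 8;                          /* dmer size; 8 is a common choice */
  params.k = 200;                        /* segment size; must satisfy d <= k */
  params.zParams.compressionLevel = 3;
  return ZDICT_trainFromBuffer_cover(dictBuffer, dictCapacity,
                                     samples, sampleSizes, nbSamples, params);
}
#endif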
size_t COVER_checkTotalCompressedSize(const ZDICT_cover_params_t parameters,
const size_t *samplesSizes, const BYTE *samples,
size_t *offsets,
size_t nbTrainSamples, size_t nbSamples,
BYTE *const dict, size_t dictBufferCapacity) {
size_t totalCompressedSize = ERROR(GENERIC);
/* Pointers */
ZSTD_CCtx *cctx;
ZSTD_CDict *cdict;
void *dst;
/* Local variables */
size_t dstCapacity;
size_t i;
/* Allocate dst with enough space to compress the maximum sized sample */
{
size_t maxSampleSize = 0;
i = parameters.splitPoint < 1.0 ? nbTrainSamples : 0;
for (; i < nbSamples; ++i) {
maxSampleSize = MAX(samplesSizes[i], maxSampleSize);
}
dstCapacity = ZSTD_compressBound(maxSampleSize);
dst = malloc(dstCapacity);
}
/* Create the cctx and cdict */
cctx = ZSTD_createCCtx();
cdict = ZSTD_createCDict(dict, dictBufferCapacity,
parameters.zParams.compressionLevel);
if (!dst || !cctx || !cdict) {
goto _compressCleanup;
}
/* Compress each sample and sum their sizes (or error) */
totalCompressedSize = dictBufferCapacity;
i = parameters.splitPoint < 1.0 ? nbTrainSamples : 0;
for (; i < nbSamples; ++i) {
const size_t size = ZSTD_compress_usingCDict(
cctx, dst, dstCapacity, samples + offsets[i],
samplesSizes[i], cdict);
if (ZSTD_isError(size)) {
totalCompressedSize = size;
goto _compressCleanup;
}
totalCompressedSize += size;
}
_compressCleanup:
ZSTD_freeCCtx(cctx);
ZSTD_freeCDict(cdict);
if (dst) {
free(dst);
}
return totalCompressedSize;
}
/**
* Initialize the `COVER_best_t`.
*/
void COVER_best_init(COVER_best_t *best) {
if (best==NULL) return; /* compatible with init on NULL */
(void)ZSTD_pthread_mutex_init(&best->mutex, NULL);
(void)ZSTD_pthread_cond_init(&best->cond, NULL);
best->liveJobs = 0;
best->dict = NULL;
best->dictSize = 0;
best->compressedSize = (size_t)-1;
memset(&best->parameters, 0, sizeof(best->parameters));
}
/**
* Wait until liveJobs == 0.
*/
void COVER_best_wait(COVER_best_t *best) {
if (!best) {
return;
}
ZSTD_pthread_mutex_lock(&best->mutex);
while (best->liveJobs != 0) {
ZSTD_pthread_cond_wait(&best->cond, &best->mutex);
}
ZSTD_pthread_mutex_unlock(&best->mutex);
}
/**
* Call COVER_best_wait() and then destroy the COVER_best_t.
*/
void COVER_best_destroy(COVER_best_t *best) {
if (!best) {
return;
}
COVER_best_wait(best);
if (best->dict) {
free(best->dict);
}
ZSTD_pthread_mutex_destroy(&best->mutex);
ZSTD_pthread_cond_destroy(&best->cond);
}
/**
* Called when a thread is about to be launched.
* Increments liveJobs.
*/
void COVER_best_start(COVER_best_t *best) {
if (!best) {
return;
}
ZSTD_pthread_mutex_lock(&best->mutex);
++best->liveJobs;
ZSTD_pthread_mutex_unlock(&best->mutex);
}
/**
* Called when a thread finishes executing, on both error and success.
* Decrements liveJobs and signals any waiting threads if liveJobs == 0.
* If this dictionary is the best so far, save it and its parameters.
*/
void COVER_best_finish(COVER_best_t* best,
ZDICT_cover_params_t parameters,
COVER_dictSelection_t selection)
{
void* dict = selection.dictContent;
size_t compressedSize = selection.totalCompressedSize;
size_t dictSize = selection.dictSize;
if (!best) {
return;
}
{
size_t liveJobs;
ZSTD_pthread_mutex_lock(&best->mutex);
--best->liveJobs;
liveJobs = best->liveJobs;
/* If the new dictionary is better */
if (compressedSize < best->compressedSize) {
/* Allocate space if necessary */
if (!best->dict || best->dictSize < dictSize) {
if (best->dict) {
free(best->dict);
}
best->dict = malloc(dictSize);
if (!best->dict) {
best->compressedSize = ERROR(GENERIC);
best->dictSize = 0;
ZSTD_pthread_cond_signal(&best->cond);
ZSTD_pthread_mutex_unlock(&best->mutex);
return;
}
}
/* Save the dictionary, parameters, and size */
if (dict) {
memcpy(best->dict, dict, dictSize);
best->dictSize = dictSize;
best->parameters = parameters;
best->compressedSize = compressedSize;
}
}
if (liveJobs == 0) {
ZSTD_pthread_cond_broadcast(&best->cond);
}
ZSTD_pthread_mutex_unlock(&best->mutex);
}
}
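/* Illustrative lifecycle of COVER_best_t (not part of the original source;
 * the wrapper name is hypothetical): every job is bracketed by a start/finish
 * pair, and the coordinator waits for liveJobs to drain before reading the
 * winning dictionary. */
#if 0
static void example_best_lifecycle(COVER_best_t *best,
                                   ZDICT_cover_params_t params,
                                   COVER_dictSelection_t selection) {
  COVER_best_init(best);
  COVER_best_start(best);                      /* before launching the job */
  COVER_best_finish(best, params, selection);  /* when the job completes */
  COVER_best_wait(best);                       /* blocks until liveJobs == 0 */
  COVER_best_destroy(best);
}
#endif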
static COVER_dictSelection_t setDictSelection(BYTE* buf, size_t s, size_t csz)
{
COVER_dictSelection_t ds;
ds.dictContent = buf;
ds.dictSize = s;
ds.totalCompressedSize = csz;
return ds;
}
COVER_dictSelection_t COVER_dictSelectionError(size_t error) {
return setDictSelection(NULL, 0, error);
}
unsigned COVER_dictSelectionIsError(COVER_dictSelection_t selection) {
return (ZSTD_isError(selection.totalCompressedSize) || !selection.dictContent);
}
void COVER_dictSelectionFree(COVER_dictSelection_t selection){
free(selection.dictContent);
}
COVER_dictSelection_t COVER_selectDict(BYTE* customDictContent, size_t dictBufferCapacity,
size_t dictContentSize, const BYTE* samplesBuffer, const size_t* samplesSizes, unsigned nbFinalizeSamples,
size_t nbCheckSamples, size_t nbSamples, ZDICT_cover_params_t params, size_t* offsets, size_t totalCompressedSize) {
size_t largestDict = 0;
size_t largestCompressed = 0;
BYTE* customDictContentEnd = customDictContent + dictContentSize;
BYTE* largestDictbuffer = (BYTE*)malloc(dictBufferCapacity);
BYTE* candidateDictBuffer = (BYTE*)malloc(dictBufferCapacity);
double regressionTolerance = ((double)params.shrinkDictMaxRegression / 100.0) + 1.00;
if (!largestDictbuffer || !candidateDictBuffer) {
free(largestDictbuffer);
free(candidateDictBuffer);
return COVER_dictSelectionError(dictContentSize);
}
/* Initial dictionary size and compressed size */
memcpy(largestDictbuffer, customDictContent, dictContentSize);
dictContentSize = ZDICT_finalizeDictionary(
largestDictbuffer, dictBufferCapacity, customDictContent, dictContentSize,
samplesBuffer, samplesSizes, nbFinalizeSamples, params.zParams);
if (ZDICT_isError(dictContentSize)) {
free(largestDictbuffer);
free(candidateDictBuffer);
return COVER_dictSelectionError(dictContentSize);
}
totalCompressedSize = COVER_checkTotalCompressedSize(params, samplesSizes,
samplesBuffer, offsets,
nbCheckSamples, nbSamples,
largestDictbuffer, dictContentSize);
if (ZSTD_isError(totalCompressedSize)) {
free(largestDictbuffer);
free(candidateDictBuffer);
return COVER_dictSelectionError(totalCompressedSize);
}
if (params.shrinkDict == 0) {
free(candidateDictBuffer);
return setDictSelection(largestDictbuffer, dictContentSize, totalCompressedSize);
}
largestDict = dictContentSize;
largestCompressed = totalCompressedSize;
dictContentSize = ZDICT_DICTSIZE_MIN;
/* Largest dict is initially at least ZDICT_DICTSIZE_MIN */
while (dictContentSize < largestDict) {
memcpy(candidateDictBuffer, largestDictbuffer, largestDict);
dictContentSize = ZDICT_finalizeDictionary(
candidateDictBuffer, dictBufferCapacity, customDictContentEnd - dictContentSize, dictContentSize,
samplesBuffer, samplesSizes, nbFinalizeSamples, params.zParams);
if (ZDICT_isError(dictContentSize)) {
free(largestDictbuffer);
free(candidateDictBuffer);
return COVER_dictSelectionError(dictContentSize);
}
totalCompressedSize = COVER_checkTotalCompressedSize(params, samplesSizes,
samplesBuffer, offsets,
nbCheckSamples, nbSamples,
candidateDictBuffer, dictContentSize);
if (ZSTD_isError(totalCompressedSize)) {
free(largestDictbuffer);
free(candidateDictBuffer);
return COVER_dictSelectionError(totalCompressedSize);
}
if ((double)totalCompressedSize <= (double)largestCompressed * regressionTolerance) {
free(largestDictbuffer);
return setDictSelection( candidateDictBuffer, dictContentSize, totalCompressedSize );
}
dictContentSize *= 2;
}
dictContentSize = largestDict;
totalCompressedSize = largestCompressed;
free(candidateDictBuffer);
return setDictSelection( largestDictbuffer, dictContentSize, totalCompressedSize );
}
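/* Worked illustration of the shrink-dict search above (not from the original
 * source): with shrinkDictMaxRegression = 5 the tolerance is 1.05, so candidate
 * dictionaries are rebuilt starting from ZDICT_DICTSIZE_MIN and doubling; the
 * first candidate whose total compressed size is within 5% of the full-size
 * dictionary's result is returned, and if none qualifies the full-size
 * dictionary is kept. */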
/**
* Parameters for COVER_tryParameters().
*/
typedef struct COVER_tryParameters_data_s {
const COVER_ctx_t *ctx;
COVER_best_t *best;
size_t dictBufferCapacity;
ZDICT_cover_params_t parameters;
} COVER_tryParameters_data_t;
/**
* Tries a set of parameters and updates the COVER_best_t with the results.
* This function is thread safe if zstd is compiled with multithreaded support.
* It takes its parameters as an *OWNING* opaque pointer to support threading.
*/
static void COVER_tryParameters(void *opaque)
{
/* Save parameters as local variables */
COVER_tryParameters_data_t *const data = (COVER_tryParameters_data_t*)opaque;
const COVER_ctx_t *const ctx = data->ctx;
const ZDICT_cover_params_t parameters = data->parameters;
size_t dictBufferCapacity = data->dictBufferCapacity;
size_t totalCompressedSize = ERROR(GENERIC);
/* Allocate space for hash table, dict, and freqs */
COVER_map_t activeDmers;
BYTE* const dict = (BYTE*)malloc(dictBufferCapacity);
COVER_dictSelection_t selection = COVER_dictSelectionError(ERROR(GENERIC));
U32* const freqs = (U32*)malloc(ctx->suffixSize * sizeof(U32));
if (!COVER_map_init(&activeDmers, parameters.k - parameters.d + 1)) {
DISPLAYLEVEL(1, "Failed to allocate dmer map: out of memory\n");
goto _cleanup;
}
if (!dict || !freqs) {
DISPLAYLEVEL(1, "Failed to allocate buffers: out of memory\n");
goto _cleanup;
}
/* Copy the frequencies because we need to modify them */
memcpy(freqs, ctx->freqs, ctx->suffixSize * sizeof(U32));
/* Build the dictionary */
{
const size_t tail = COVER_buildDictionary(ctx, freqs, &activeDmers, dict,
dictBufferCapacity, parameters);
selection = COVER_selectDict(dict + tail, dictBufferCapacity, dictBufferCapacity - tail,
ctx->samples, ctx->samplesSizes, (unsigned)ctx->nbTrainSamples, ctx->nbTrainSamples, ctx->nbSamples, parameters, ctx->offsets,
totalCompressedSize);
if (COVER_dictSelectionIsError(selection)) {
DISPLAYLEVEL(1, "Failed to select dictionary\n");
goto _cleanup;
}
}
_cleanup:
free(dict);
COVER_best_finish(data->best, parameters, selection);
free(data);
COVER_map_destroy(&activeDmers);
COVER_dictSelectionFree(selection);
free(freqs);
}
ZDICTLIB_STATIC_API size_t ZDICT_optimizeTrainFromBuffer_cover(
void* dictBuffer, size_t dictBufferCapacity, const void* samplesBuffer,
const size_t* samplesSizes, unsigned nbSamples,
ZDICT_cover_params_t* parameters)
{
/* constants */
const unsigned nbThreads = parameters->nbThreads;
const double splitPoint =
parameters->splitPoint <= 0.0 ? COVER_DEFAULT_SPLITPOINT : parameters->splitPoint;
const unsigned kMinD = parameters->d == 0 ? 6 : parameters->d;
const unsigned kMaxD = parameters->d == 0 ? 8 : parameters->d;
const unsigned kMinK = parameters->k == 0 ? 50 : parameters->k;
const unsigned kMaxK = parameters->k == 0 ? 2000 : parameters->k;
const unsigned kSteps = parameters->steps == 0 ? 40 : parameters->steps;
const unsigned kStepSize = MAX((kMaxK - kMinK) / kSteps, 1);
const unsigned kIterations =
(1 + (kMaxD - kMinD) / 2) * (1 + (kMaxK - kMinK) / kStepSize);
const unsigned shrinkDict = 0;
/* Local variables */
const int displayLevel = parameters->zParams.notificationLevel;
unsigned iteration = 1;
unsigned d;
unsigned k;
COVER_best_t best;
POOL_ctx *pool = NULL;
int warned = 0;
/* Checks */
if (splitPoint <= 0 || splitPoint > 1) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Incorrect parameters\n");
return ERROR(parameter_outOfBound);
}
if (kMinK < kMaxD || kMaxK < kMinK) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Incorrect parameters\n");
return ERROR(parameter_outOfBound);
}
if (nbSamples == 0) {
DISPLAYLEVEL(1, "Cover must have at least one input file\n");
return ERROR(srcSize_wrong);
}
if (dictBufferCapacity < ZDICT_DICTSIZE_MIN) {
DISPLAYLEVEL(1, "dictBufferCapacity must be at least %u\n",
ZDICT_DICTSIZE_MIN);
return ERROR(dstSize_tooSmall);
}
if (nbThreads > 1) {
pool = POOL_create(nbThreads, 1);
if (!pool) {
return ERROR(memory_allocation);
}
}
/* Initialization */
COVER_best_init(&best);
/* Turn down global display level to clean up display at level 2 and below */
g_displayLevel = displayLevel == 0 ? 0 : displayLevel - 1;
/* Loop through d first because each new value needs a new context */
LOCALDISPLAYLEVEL(displayLevel, 2, "Trying %u different sets of parameters\n",
kIterations);
for (d = kMinD; d <= kMaxD; d += 2) {
/* Initialize the context for this value of d */
COVER_ctx_t ctx;
LOCALDISPLAYLEVEL(displayLevel, 3, "d=%u\n", d);
{
const size_t initVal = COVER_ctx_init(&ctx, samplesBuffer, samplesSizes, nbSamples, d, splitPoint);
if (ZSTD_isError(initVal)) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Failed to initialize context\n");
COVER_best_destroy(&best);
POOL_free(pool);
return initVal;
}
}
if (!warned) {
COVER_warnOnSmallCorpus(dictBufferCapacity, ctx.suffixSize, displayLevel);
warned = 1;
}
/* Loop through k reusing the same context */
for (k = kMinK; k <= kMaxK; k += kStepSize) {
/* Prepare the arguments */
COVER_tryParameters_data_t *data = (COVER_tryParameters_data_t *)malloc(
sizeof(COVER_tryParameters_data_t));
LOCALDISPLAYLEVEL(displayLevel, 3, "k=%u\n", k);
if (!data) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Failed to allocate parameters\n");
COVER_best_destroy(&best);
COVER_ctx_destroy(&ctx);
POOL_free(pool);
return ERROR(memory_allocation);
}
data->ctx = &ctx;
data->best = &best;
data->dictBufferCapacity = dictBufferCapacity;
data->parameters = *parameters;
data->parameters.k = k;
data->parameters.d = d;
data->parameters.splitPoint = splitPoint;
data->parameters.steps = kSteps;
data->parameters.shrinkDict = shrinkDict;
data->parameters.zParams.notificationLevel = g_displayLevel;
/* Check the parameters */
if (!COVER_checkParameters(data->parameters, dictBufferCapacity)) {
DISPLAYLEVEL(1, "Cover parameters incorrect\n");
free(data);
continue;
}
/* Call the function and pass ownership of data to it */
COVER_best_start(&best);
if (pool) {
POOL_add(pool, &COVER_tryParameters, data);
} else {
COVER_tryParameters(data);
}
/* Print status */
LOCALDISPLAYUPDATE(displayLevel, 2, "\r%u%% ",
(unsigned)((iteration * 100) / kIterations));
++iteration;
}
COVER_best_wait(&best);
COVER_ctx_destroy(&ctx);
}
LOCALDISPLAYLEVEL(displayLevel, 2, "\r%79s\r", "");
/* Fill the output buffer and parameters with output of the best parameters */
{
const size_t dictSize = best.dictSize;
if (ZSTD_isError(best.compressedSize)) {
const size_t compressedSize = best.compressedSize;
COVER_best_destroy(&best);
POOL_free(pool);
return compressedSize;
}
*parameters = best.parameters;
memcpy(dictBuffer, best.dict, dictSize);
COVER_best_destroy(&best);
POOL_free(pool);
return dictSize;
}
}
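/* Illustrative caller sketch for the parameter-optimizing variant (not part of
 * the original source; the wrapper name is hypothetical). Leaving d, k and
 * steps at 0 lets the trainer sweep its default grid, and the winning values
 * are written back into params. */
#if 0
static size_t example_optimize_cover(void *dictBuffer, size_t dictCapacity,
                                     const void *samples, const size_t *sampleSizes,
                                     unsigned nbSamples) {
  ZDICT_cover_params_t params;
  memset(&params, 0, sizeof(params));  /* d = k = steps = 0 => search the default grid */
  params.nbThreads = 2;                /* assumes zstd was built with multithreading */
  return ZDICT_optimizeTrainFromBuffer_cover(dictBuffer, dictCapacity,
                                             samples, sampleSizes, nbSamples, &params);
}
#endif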
/**** ended inlining dictBuilder/cover.c ****/
/**** start inlining dictBuilder/divsufsort.c ****/
/*
* divsufsort.c for libdivsufsort-lite
* Copyright (c) 2003-2008 Yuta Mori All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/*- Compiler specifics -*/
#ifdef __clang__
#pragma clang diagnostic ignored "-Wshorten-64-to-32"
#endif
#if defined(_MSC_VER)
# pragma warning(disable : 4244)
# pragma warning(disable : 4127) /* C4127 : Condition expression is constant */
#endif
/*- Dependencies -*/
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
/**** start inlining divsufsort.h ****/
/*
* divsufsort.h for libdivsufsort-lite
* Copyright (c) 2003-2008 Yuta Mori All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _DIVSUFSORT_H
#define _DIVSUFSORT_H 1
/*- Prototypes -*/
/**
* Constructs the suffix array of a given string.
* @param T [0..n-1] The input string.
* @param SA [0..n-1] The output array of suffixes.
* @param n The length of the given string.
* @param openMP enables OpenMP optimization.
* @return 0 if no error occurred, -1 or -2 otherwise.
*/
int
divsufsort(const unsigned char *T, int *SA, int n, int openMP);
/**
* Constructs the Burrows-Wheeler transformed string of a given string.
* @param T [0..n-1] The input string.
* @param U [0..n-1] The output string. (can be T)
* @param A [0..n-1] The temporary array. (can be NULL)
* @param n The length of the given string.
* @param num_indexes The length of secondary indexes array. (can be NULL)
* @param indexes The secondary indexes array. (can be NULL)
* @param openMP enables OpenMP optimization.
* @return The primary index if no error occurred, -1 or -2 otherwise.
*/
int
divbwt(const unsigned char *T, unsigned char *U, int *A, int n, unsigned char * num_indexes, int * indexes, int openMP);
#endif /* _DIVSUFSORT_H */
/**** ended inlining divsufsort.h ****/
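/* Illustrative sketch of the divsufsort() prototype above (not part of the
 * original source; the function name is hypothetical): builds the suffix
 * array of a short string. */
#if 0
static void example_divsufsort(void) {
  const unsigned char T[] = "banana";
  int SA[6];
  if (divsufsort(T, SA, 6, 0) == 0) {
    /* SA now lists the starting positions of the suffixes of "banana" in
     * lexicographic order: 5 ("a"), 3 ("ana"), 1 ("anana"), 0 ("banana"),
     * 4 ("na"), 2 ("nana"). */
  }
}
#endif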
/*- Constants -*/
#if defined(INLINE)
# undef INLINE
#endif
#if !defined(INLINE)
# define INLINE __inline
#endif
#if defined(ALPHABET_SIZE) && (ALPHABET_SIZE < 1)
# undef ALPHABET_SIZE
#endif
#if !defined(ALPHABET_SIZE)
# define ALPHABET_SIZE (256)
#endif
#define BUCKET_A_SIZE (ALPHABET_SIZE)
#define BUCKET_B_SIZE (ALPHABET_SIZE * ALPHABET_SIZE)
#if defined(SS_INSERTIONSORT_THRESHOLD)
# if SS_INSERTIONSORT_THRESHOLD < 1
# undef SS_INSERTIONSORT_THRESHOLD
# define SS_INSERTIONSORT_THRESHOLD (1)
# endif
#else
# define SS_INSERTIONSORT_THRESHOLD (8)
#endif
#if defined(SS_BLOCKSIZE)
# if SS_BLOCKSIZE < 0
# undef SS_BLOCKSIZE
# define SS_BLOCKSIZE (0)
# elif 32768 <= SS_BLOCKSIZE
# undef SS_BLOCKSIZE
# define SS_BLOCKSIZE (32767)
# endif
#else
# define SS_BLOCKSIZE (1024)
#endif
/* minstacksize = log(SS_BLOCKSIZE) / log(3) * 2 */
#if SS_BLOCKSIZE == 0
# define SS_MISORT_STACKSIZE (96)
#elif SS_BLOCKSIZE <= 4096
# define SS_MISORT_STACKSIZE (16)
#else
# define SS_MISORT_STACKSIZE (24)
#endif
#define SS_SMERGE_STACKSIZE (32)
#define TR_INSERTIONSORT_THRESHOLD (8)
#define TR_STACKSIZE (64)
/*- Macros -*/
#ifndef SWAP
# define SWAP(_a, _b) do { t = (_a); (_a) = (_b); (_b) = t; } while(0)
#endif /* SWAP */
#ifndef MIN
# define MIN(_a, _b) (((_a) < (_b)) ? (_a) : (_b))
#endif /* MIN */
#ifndef MAX
# define MAX(_a, _b) (((_a) > (_b)) ? (_a) : (_b))
#endif /* MAX */
#define STACK_PUSH(_a, _b, _c, _d)\
do {\
assert(ssize < STACK_SIZE);\
stack[ssize].a = (_a), stack[ssize].b = (_b),\
stack[ssize].c = (_c), stack[ssize++].d = (_d);\
} while(0)
#define STACK_PUSH5(_a, _b, _c, _d, _e)\
do {\
assert(ssize < STACK_SIZE);\
stack[ssize].a = (_a), stack[ssize].b = (_b),\
stack[ssize].c = (_c), stack[ssize].d = (_d), stack[ssize++].e = (_e);\
} while(0)
#define STACK_POP(_a, _b, _c, _d)\
do {\
assert(0 <= ssize);\
if(ssize == 0) { return; }\
(_a) = stack[--ssize].a, (_b) = stack[ssize].b,\
(_c) = stack[ssize].c, (_d) = stack[ssize].d;\
} while(0)
#define STACK_POP5(_a, _b, _c, _d, _e)\
do {\
assert(0 <= ssize);\
if(ssize == 0) { return; }\
(_a) = stack[--ssize].a, (_b) = stack[ssize].b,\
(_c) = stack[ssize].c, (_d) = stack[ssize].d, (_e) = stack[ssize].e;\
} while(0)
#define BUCKET_A(_c0) bucket_A[(_c0)]
#if ALPHABET_SIZE == 256
#define BUCKET_B(_c0, _c1) (bucket_B[((_c1) << 8) | (_c0)])
#define BUCKET_BSTAR(_c0, _c1) (bucket_B[((_c0) << 8) | (_c1)])
#else
#define BUCKET_B(_c0, _c1) (bucket_B[(_c1) * ALPHABET_SIZE + (_c0)])
#define BUCKET_BSTAR(_c0, _c1) (bucket_B[(_c0) * ALPHABET_SIZE + (_c1)])
#endif
/*- Private Functions -*/
static const int lg_table[256]= {
-1,0,1,1,2,2,2,2,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,
5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,
6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,
7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7
};
#if (SS_BLOCKSIZE == 0) || (SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE)
static INLINE
int
ss_ilg(int n) {
#if SS_BLOCKSIZE == 0
return (n & 0xffff0000) ?
((n & 0xff000000) ?
24 + lg_table[(n >> 24) & 0xff] :
16 + lg_table[(n >> 16) & 0xff]) :
((n & 0x0000ff00) ?
8 + lg_table[(n >> 8) & 0xff] :
0 + lg_table[(n >> 0) & 0xff]);
#elif SS_BLOCKSIZE < 256
return lg_table[n];
#else
return (n & 0xff00) ?
8 + lg_table[(n >> 8) & 0xff] :
0 + lg_table[(n >> 0) & 0xff];
#endif
}
#endif /* (SS_BLOCKSIZE == 0) || (SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE) */
#if SS_BLOCKSIZE != 0
static const int sqq_table[256] = {
0, 16, 22, 27, 32, 35, 39, 42, 45, 48, 50, 53, 55, 57, 59, 61,
64, 65, 67, 69, 71, 73, 75, 76, 78, 80, 81, 83, 84, 86, 87, 89,
90, 91, 93, 94, 96, 97, 98, 99, 101, 102, 103, 104, 106, 107, 108, 109,
110, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126,
128, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142,
143, 144, 144, 145, 146, 147, 148, 149, 150, 150, 151, 152, 153, 154, 155, 155,
156, 157, 158, 159, 160, 160, 161, 162, 163, 163, 164, 165, 166, 167, 167, 168,
169, 170, 170, 171, 172, 173, 173, 174, 175, 176, 176, 177, 178, 178, 179, 180,
181, 181, 182, 183, 183, 184, 185, 185, 186, 187, 187, 188, 189, 189, 190, 191,
192, 192, 193, 193, 194, 195, 195, 196, 197, 197, 198, 199, 199, 200, 201, 201,
202, 203, 203, 204, 204, 205, 206, 206, 207, 208, 208, 209, 209, 210, 211, 211,
212, 212, 213, 214, 214, 215, 215, 216, 217, 217, 218, 218, 219, 219, 220, 221,
221, 222, 222, 223, 224, 224, 225, 225, 226, 226, 227, 227, 228, 229, 229, 230,
230, 231, 231, 232, 232, 233, 234, 234, 235, 235, 236, 236, 237, 237, 238, 238,
239, 240, 240, 241, 241, 242, 242, 243, 243, 244, 244, 245, 245, 246, 246, 247,
247, 248, 248, 249, 249, 250, 250, 251, 251, 252, 252, 253, 253, 254, 254, 255
};
static INLINE
int
ss_isqrt(int x) {
int y, e;
if(x >= (SS_BLOCKSIZE * SS_BLOCKSIZE)) { return SS_BLOCKSIZE; }
e = (x & 0xffff0000) ?
((x & 0xff000000) ?
24 + lg_table[(x >> 24) & 0xff] :
16 + lg_table[(x >> 16) & 0xff]) :
((x & 0x0000ff00) ?
8 + lg_table[(x >> 8) & 0xff] :
0 + lg_table[(x >> 0) & 0xff]);
if(e >= 16) {
y = sqq_table[x >> ((e - 6) - (e & 1))] << ((e >> 1) - 7);
if(e >= 24) { y = (y + 1 + x / y) >> 1; }
y = (y + 1 + x / y) >> 1;
} else if(e >= 8) {
y = (sqq_table[x >> ((e - 6) - (e & 1))] >> (7 - (e >> 1))) + 1;
} else {
return sqq_table[x] >> 4;
}
return (x < (y * y)) ? y - 1 : y;
}
#endif /* SS_BLOCKSIZE != 0 */
/*---------------------------------------------------------------------------*/
/* Compares two suffixes. */
static INLINE
int
ss_compare(const unsigned char *T,
const int *p1, const int *p2,
int depth) {
const unsigned char *U1, *U2, *U1n, *U2n;
for(U1 = T + depth + *p1,
U2 = T + depth + *p2,
U1n = T + *(p1 + 1) + 2,
U2n = T + *(p2 + 1) + 2;
(U1 < U1n) && (U2 < U2n) && (*U1 == *U2);
++U1, ++U2) {
}
return U1 < U1n ?
(U2 < U2n ? *U1 - *U2 : 1) :
(U2 < U2n ? -1 : 0);
}
/*---------------------------------------------------------------------------*/
#if (SS_BLOCKSIZE != 1) && (SS_INSERTIONSORT_THRESHOLD != 1)
/* Insertionsort for small size groups */
static
void
ss_insertionsort(const unsigned char *T, const int *PA,
int *first, int *last, int depth) {
int *i, *j;
int t;
int r;
for(i = last - 2; first <= i; --i) {
for(t = *i, j = i + 1; 0 < (r = ss_compare(T, PA + t, PA + *j, depth));) {
do { *(j - 1) = *j; } while((++j < last) && (*j < 0));
if(last <= j) { break; }
}
if(r == 0) { *j = ~*j; }
*(j - 1) = t;
}
}
#endif /* (SS_BLOCKSIZE != 1) && (SS_INSERTIONSORT_THRESHOLD != 1) */
/*---------------------------------------------------------------------------*/
#if (SS_BLOCKSIZE == 0) || (SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE)
static INLINE
void
ss_fixdown(const unsigned char *Td, const int *PA,
int *SA, int i, int size) {
int j, k;
int v;
int c, d, e;
for(v = SA[i], c = Td[PA[v]]; (j = 2 * i + 1) < size; SA[i] = SA[k], i = k) {
d = Td[PA[SA[k = j++]]];
if(d < (e = Td[PA[SA[j]]])) { k = j; d = e; }
if(d <= c) { break; }
}
SA[i] = v;
}
/* Simple top-down heapsort. */
static
void
ss_heapsort(const unsigned char *Td, const int *PA, int *SA, int size) {
int i, m;
int t;
m = size;
if((size % 2) == 0) {
m--;
if(Td[PA[SA[m / 2]]] < Td[PA[SA[m]]]) { SWAP(SA[m], SA[m / 2]); }
}
for(i = m / 2 - 1; 0 <= i; --i) { ss_fixdown(Td, PA, SA, i, m); }
if((size % 2) == 0) { SWAP(SA[0], SA[m]); ss_fixdown(Td, PA, SA, 0, m); }
for(i = m - 1; 0 < i; --i) {
t = SA[0], SA[0] = SA[i];
ss_fixdown(Td, PA, SA, 0, i);
SA[i] = t;
}
}
/*---------------------------------------------------------------------------*/
/* Returns the median of three elements. */
static INLINE
int *
ss_median3(const unsigned char *Td, const int *PA,
int *v1, int *v2, int *v3) {
int *t;
if(Td[PA[*v1]] > Td[PA[*v2]]) { SWAP(v1, v2); }
if(Td[PA[*v2]] > Td[PA[*v3]]) {
if(Td[PA[*v1]] > Td[PA[*v3]]) { return v1; }
else { return v3; }
}
return v2;
}
/* Returns the median of five elements. */
static INLINE
int *
ss_median5(const unsigned char *Td, const int *PA,
int *v1, int *v2, int *v3, int *v4, int *v5) {
int *t;
if(Td[PA[*v2]] > Td[PA[*v3]]) { SWAP(v2, v3); }
if(Td[PA[*v4]] > Td[PA[*v5]]) { SWAP(v4, v5); }
if(Td[PA[*v2]] > Td[PA[*v4]]) { SWAP(v2, v4); SWAP(v3, v5); }
if(Td[PA[*v1]] > Td[PA[*v3]]) { SWAP(v1, v3); }
if(Td[PA[*v1]] > Td[PA[*v4]]) { SWAP(v1, v4); SWAP(v3, v5); }
if(Td[PA[*v3]] > Td[PA[*v4]]) { return v4; }
return v3;
}
/* Returns the pivot element. */
static INLINE
int *
ss_pivot(const unsigned char *Td, const int *PA, int *first, int *last) {
int *middle;
int t;
t = last - first;
middle = first + t / 2;
if(t <= 512) {
if(t <= 32) {
return ss_median3(Td, PA, first, middle, last - 1);
} else {
t >>= 2;
return ss_median5(Td, PA, first, first + t, middle, last - 1 - t, last - 1);
}
}
t >>= 3;
first = ss_median3(Td, PA, first, first + t, first + (t << 1));
middle = ss_median3(Td, PA, middle - t, middle, middle + t);
last = ss_median3(Td, PA, last - 1 - (t << 1), last - 1 - t, last - 1);
return ss_median3(Td, PA, first, middle, last);
}
/*---------------------------------------------------------------------------*/
/* Binary partition for substrings. */
static INLINE
int *
ss_partition(const int *PA,
int *first, int *last, int depth) {
int *a, *b;
int t;
for(a = first - 1, b = last;;) {
for(; (++a < b) && ((PA[*a] + depth) >= (PA[*a + 1] + 1));) { *a = ~*a; }
for(; (a < --b) && ((PA[*b] + depth) < (PA[*b + 1] + 1));) { }
if(b <= a) { break; }
t = ~*b;
*b = *a;
*a = t;
}
if(first < a) { *first = ~*first; }
return a;
}
/* Multikey introsort for medium size groups. */
static
void
ss_mintrosort(const unsigned char *T, const int *PA,
int *first, int *last,
int depth) {
#define STACK_SIZE SS_MISORT_STACKSIZE
struct { int *a, *b, c; int d; } stack[STACK_SIZE];
const unsigned char *Td;
int *a, *b, *c, *d, *e, *f;
int s, t;
int ssize;
int limit;
int v, x = 0;
for(ssize = 0, limit = ss_ilg(last - first);;) {
if((last - first) <= SS_INSERTIONSORT_THRESHOLD) {
#if 1 < SS_INSERTIONSORT_THRESHOLD
if(1 < (last - first)) { ss_insertionsort(T, PA, first, last, depth); }
#endif
STACK_POP(first, last, depth, limit);
continue;
}
Td = T + depth;
if(limit-- == 0) { ss_heapsort(Td, PA, first, last - first); }
if(limit < 0) {
for(a = first + 1, v = Td[PA[*first]]; a < last; ++a) {
if((x = Td[PA[*a]]) != v) {
if(1 < (a - first)) { break; }
v = x;
first = a;
}
}
if(Td[PA[*first] - 1] < v) {
first = ss_partition(PA, first, a, depth);
}
if((a - first) <= (last - a)) {
if(1 < (a - first)) {
STACK_PUSH(a, last, depth, -1);
last = a, depth += 1, limit = ss_ilg(a - first);
} else {
first = a, limit = -1;
}
} else {
if(1 < (last - a)) {
STACK_PUSH(first, a, depth + 1, ss_ilg(a - first));
first = a, limit = -1;
} else {
last = a, depth += 1, limit = ss_ilg(a - first);
}
}
continue;
}
/* choose pivot */
a = ss_pivot(Td, PA, first, last);
v = Td[PA[*a]];
SWAP(*first, *a);
/* partition */
for(b = first; (++b < last) && ((x = Td[PA[*b]]) == v);) { }
if(((a = b) < last) && (x < v)) {
for(; (++b < last) && ((x = Td[PA[*b]]) <= v);) {
if(x == v) { SWAP(*b, *a); ++a; }
}
}
for(c = last; (b < --c) && ((x = Td[PA[*c]]) == v);) { }
if((b < (d = c)) && (x > v)) {
for(; (b < --c) && ((x = Td[PA[*c]]) >= v);) {
if(x == v) { SWAP(*c, *d); --d; }
}
}
for(; b < c;) {
SWAP(*b, *c);
for(; (++b < c) && ((x = Td[PA[*b]]) <= v);) {
if(x == v) { SWAP(*b, *a); ++a; }
}
for(; (b < --c) && ((x = Td[PA[*c]]) >= v);) {
if(x == v) { SWAP(*c, *d); --d; }
}
}
if(a <= d) {
c = b - 1;
if((s = a - first) > (t = b - a)) { s = t; }
for(e = first, f = b - s; 0 < s; --s, ++e, ++f) { SWAP(*e, *f); }
if((s = d - c) > (t = last - d - 1)) { s = t; }
for(e = b, f = last - s; 0 < s; --s, ++e, ++f) { SWAP(*e, *f); }
a = first + (b - a), c = last - (d - c);
b = (v <= Td[PA[*a] - 1]) ? a : ss_partition(PA, a, c, depth);
if((a - first) <= (last - c)) {
if((last - c) <= (c - b)) {
STACK_PUSH(b, c, depth + 1, ss_ilg(c - b));
STACK_PUSH(c, last, depth, limit);
last = a;
} else if((a - first) <= (c - b)) {
STACK_PUSH(c, last, depth, limit);
STACK_PUSH(b, c, depth + 1, ss_ilg(c - b));
last = a;
} else {
STACK_PUSH(c, last, depth, limit);
STACK_PUSH(first, a, depth, limit);
first = b, last = c, depth += 1, limit = ss_ilg(c - b);
}
} else {
if((a - first) <= (c - b)) {
STACK_PUSH(b, c, depth + 1, ss_ilg(c - b));
STACK_PUSH(first, a, depth, limit);
first = c;
} else if((last - c) <= (c - b)) {
STACK_PUSH(first, a, depth, limit);
STACK_PUSH(b, c, depth + 1, ss_ilg(c - b));
first = c;
} else {
STACK_PUSH(first, a, depth, limit);
STACK_PUSH(c, last, depth, limit);
first = b, last = c, depth += 1, limit = ss_ilg(c - b);
}
}
} else {
limit += 1;
if(Td[PA[*first] - 1] < v) {
first = ss_partition(PA, first, last, depth);
limit = ss_ilg(last - first);
}
depth += 1;
}
}
#undef STACK_SIZE
}
#endif /* (SS_BLOCKSIZE == 0) || (SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE) */
/*---------------------------------------------------------------------------*/
#if SS_BLOCKSIZE != 0
static INLINE
void
ss_blockswap(int *a, int *b, int n) {
int t;
for(; 0 < n; --n, ++a, ++b) {
t = *a, *a = *b, *b = t;
}
}
static INLINE
void
ss_rotate(int *first, int *middle, int *last) {
int *a, *b, t;
int l, r;
l = middle - first, r = last - middle;
for(; (0 < l) && (0 < r);) {
if(l == r) { ss_blockswap(first, middle, l); break; }
if(l < r) {
a = last - 1, b = middle - 1;
t = *a;
do {
*a-- = *b, *b-- = *a;
if(b < first) {
*a = t;
last = a;
if((r -= l + 1) <= l) { break; }
a -= 1, b = middle - 1;
t = *a;
}
} while(1);
} else {
a = first, b = middle;
t = *a;
do {
*a++ = *b, *b++ = *a;
if(last <= b) {
*a = t;
first = a + 1;
if((l -= r + 1) <= r) { break; }
a += 1, b = middle;
t = *a;
}
} while(1);
}
}
}
/*---------------------------------------------------------------------------*/
static
void
ss_inplacemerge(const unsigned char *T, const int *PA,
int *first, int *middle, int *last,
int depth) {
const int *p;
int *a, *b;
int len, half;
int q, r;
int x;
for(;;) {
if(*(last - 1) < 0) { x = 1; p = PA + ~*(last - 1); }
else { x = 0; p = PA + *(last - 1); }
for(a = first, len = middle - first, half = len >> 1, r = -1;
0 < len;
len = half, half >>= 1) {
b = a + half;
q = ss_compare(T, PA + ((0 <= *b) ? *b : ~*b), p, depth);
if(q < 0) {
a = b + 1;
half -= (len & 1) ^ 1;
} else {
r = q;
}
}
if(a < middle) {
if(r == 0) { *a = ~*a; }
ss_rotate(a, middle, last);
last -= middle - a;
middle = a;
if(first == middle) { break; }
}
--last;
if(x != 0) { while(*--last < 0) { } }
if(middle == last) { break; }
}
}
/*---------------------------------------------------------------------------*/
/* Merge-forward with internal buffer. */
static
void
ss_mergeforward(const unsigned char *T, const int *PA,
int *first, int *middle, int *last,
int *buf, int depth) {
int *a, *b, *c, *bufend;
int t;
int r;
bufend = buf + (middle - first) - 1;
ss_blockswap(buf, first, middle - first);
for(t = *(a = first), b = buf, c = middle;;) {
r = ss_compare(T, PA + *b, PA + *c, depth);
if(r < 0) {
do {
*a++ = *b;
if(bufend <= b) { *bufend = t; return; }
*b++ = *a;
} while(*b < 0);
} else if(r > 0) {
do {
*a++ = *c, *c++ = *a;
if(last <= c) {
while(b < bufend) { *a++ = *b, *b++ = *a; }
*a = *b, *b = t;
return;
}
} while(*c < 0);
} else {
*c = ~*c;
do {
*a++ = *b;
if(bufend <= b) { *bufend = t; return; }
*b++ = *a;
} while(*b < 0);
do {
*a++ = *c, *c++ = *a;
if(last <= c) {
while(b < bufend) { *a++ = *b, *b++ = *a; }
*a = *b, *b = t;
return;
}
} while(*c < 0);
}
}
}
/* Merge-backward with internal buffer. */
static
void
ss_mergebackward(const unsigned char *T, const int *PA,
int *first, int *middle, int *last,
int *buf, int depth) {
const int *p1, *p2;
int *a, *b, *c, *bufend;
int t;
int r;
int x;
bufend = buf + (last - middle) - 1;
ss_blockswap(buf, middle, last - middle);
x = 0;
if(*bufend < 0) { p1 = PA + ~*bufend; x |= 1; }
else { p1 = PA + *bufend; }
if(*(middle - 1) < 0) { p2 = PA + ~*(middle - 1); x |= 2; }
else { p2 = PA + *(middle - 1); }
for(t = *(a = last - 1), b = bufend, c = middle - 1;;) {
r = ss_compare(T, p1, p2, depth);
if(0 < r) {
if(x & 1) { do { *a-- = *b, *b-- = *a; } while(*b < 0); x ^= 1; }
*a-- = *b;
if(b <= buf) { *buf = t; break; }
*b-- = *a;
if(*b < 0) { p1 = PA + ~*b; x |= 1; }
else { p1 = PA + *b; }
} else if(r < 0) {
if(x & 2) { do { *a-- = *c, *c-- = *a; } while(*c < 0); x ^= 2; }
*a-- = *c, *c-- = *a;
if(c < first) {
while(buf < b) { *a-- = *b, *b-- = *a; }
*a = *b, *b = t;
break;
}
if(*c < 0) { p2 = PA + ~*c; x |= 2; }
else { p2 = PA + *c; }
} else {
if(x & 1) { do { *a-- = *b, *b-- = *a; } while(*b < 0); x ^= 1; }
*a-- = ~*b;
if(b <= buf) { *buf = t; break; }
*b-- = *a;
if(x & 2) { do { *a-- = *c, *c-- = *a; } while(*c < 0); x ^= 2; }
*a-- = *c, *c-- = *a;
if(c < first) {
while(buf < b) { *a-- = *b, *b-- = *a; }
*a = *b, *b = t;
break;
}
if(*b < 0) { p1 = PA + ~*b; x |= 1; }
else { p1 = PA + *b; }
if(*c < 0) { p2 = PA + ~*c; x |= 2; }
else { p2 = PA + *c; }
}
}
}
/* D&C based merge. */
static
void
ss_swapmerge(const unsigned char *T, const int *PA,
int *first, int *middle, int *last,
int *buf, int bufsize, int depth) {
#define STACK_SIZE SS_SMERGE_STACKSIZE
#define GETIDX(a) ((0 <= (a)) ? (a) : (~(a)))
#define MERGE_CHECK(a, b, c)\
do {\
if(((c) & 1) ||\
(((c) & 2) && (ss_compare(T, PA + GETIDX(*((a) - 1)), PA + *(a), depth) == 0))) {\
*(a) = ~*(a);\
}\
if(((c) & 4) && ((ss_compare(T, PA + GETIDX(*((b) - 1)), PA + *(b), depth) == 0))) {\
*(b) = ~*(b);\
}\
} while(0)
struct { int *a, *b, *c; int d; } stack[STACK_SIZE];
int *l, *r, *lm, *rm;
int m, len, half;
int ssize;
int check, next;
for(check = 0, ssize = 0;;) {
if((last - middle) <= bufsize) {
if((first < middle) && (middle < last)) {
ss_mergebackward(T, PA, first, middle, last, buf, depth);
}
MERGE_CHECK(first, last, check);
STACK_POP(first, middle, last, check);
continue;
}
if((middle - first) <= bufsize) {
if(first < middle) {
ss_mergeforward(T, PA, first, middle, last, buf, depth);
}
MERGE_CHECK(first, last, check);
STACK_POP(first, middle, last, check);
continue;
}
for(m = 0, len = MIN(middle - first, last - middle), half = len >> 1;
0 < len;
len = half, half >>= 1) {
if(ss_compare(T, PA + GETIDX(*(middle + m + half)),
PA + GETIDX(*(middle - m - half - 1)), depth) < 0) {
m += half + 1;
half -= (len & 1) ^ 1;
}
}
if(0 < m) {
lm = middle - m, rm = middle + m;
ss_blockswap(lm, middle, m);
l = r = middle, next = 0;
if(rm < last) {
if(*rm < 0) {
*rm = ~*rm;
if(first < lm) { for(; *--l < 0;) { } next |= 4; }
next |= 1;
} else if(first < lm) {
for(; *r < 0; ++r) { }
next |= 2;
}
}
if((l - first) <= (last - r)) {
STACK_PUSH(r, rm, last, (next & 3) | (check & 4));
middle = lm, last = l, check = (check & 3) | (next & 4);
} else {
if((next & 2) && (r == middle)) { next ^= 6; }
STACK_PUSH(first, lm, l, (check & 3) | (next & 4));
first = r, middle = rm, check = (next & 3) | (check & 4);
}
} else {
if(ss_compare(T, PA + GETIDX(*(middle - 1)), PA + *middle, depth) == 0) {
*middle = ~*middle;
}
MERGE_CHECK(first, last, check);
STACK_POP(first, middle, last, check);
}
}
#undef STACK_SIZE
}
#endif /* SS_BLOCKSIZE != 0 */
/*---------------------------------------------------------------------------*/
/* Substring sort */
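/* With SS_BLOCKSIZE != 0, the range is cut into blocks of SS_BLOCKSIZE
   elements; each block is sorted with ss_mintrosort (or ss_insertionsort),
   and sorted blocks are merged pairwise in a binary-counter pattern using
   ss_swapmerge. If the external buffer is too small, the tail of the range
   itself (of size ss_isqrt(last - first), capped at SS_BLOCKSIZE) is
   borrowed as merge buffer and folded back in at the end with
   ss_inplacemerge. */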
static
void
sssort(const unsigned char *T, const int *PA,
int *first, int *last,
int *buf, int bufsize,
int depth, int n, int lastsuffix) {
int *a;
#if SS_BLOCKSIZE != 0
int *b, *middle, *curbuf;
int j, k, curbufsize, limit;
#endif
int i;
if(lastsuffix != 0) { ++first; }
#if SS_BLOCKSIZE == 0
ss_mintrosort(T, PA, first, last, depth);
#else
if((bufsize < SS_BLOCKSIZE) &&
(bufsize < (last - first)) &&
(bufsize < (limit = ss_isqrt(last - first)))) {
if(SS_BLOCKSIZE < limit) { limit = SS_BLOCKSIZE; }
buf = middle = last - limit, bufsize = limit;
} else {
middle = last, limit = 0;
}
for(a = first, i = 0; SS_BLOCKSIZE < (middle - a); a += SS_BLOCKSIZE, ++i) {
#if SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE
ss_mintrosort(T, PA, a, a + SS_BLOCKSIZE, depth);
#elif 1 < SS_BLOCKSIZE
ss_insertionsort(T, PA, a, a + SS_BLOCKSIZE, depth);
#endif
curbufsize = last - (a + SS_BLOCKSIZE);
curbuf = a + SS_BLOCKSIZE;
if(curbufsize <= bufsize) { curbufsize = bufsize, curbuf = buf; }
for(b = a, k = SS_BLOCKSIZE, j = i; j & 1; b -= k, k <<= 1, j >>= 1) {
ss_swapmerge(T, PA, b - k, b, b + k, curbuf, curbufsize, depth);
}
}
#if SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE
ss_mintrosort(T, PA, a, middle, depth);
#elif 1 < SS_BLOCKSIZE
ss_insertionsort(T, PA, a, middle, depth);
#endif
for(k = SS_BLOCKSIZE; i != 0; k <<= 1, i >>= 1) {
if(i & 1) {
ss_swapmerge(T, PA, a - k, a, middle, buf, bufsize, depth);
a -= k;
}
}
if(limit != 0) {
#if SS_INSERTIONSORT_THRESHOLD < SS_BLOCKSIZE
ss_mintrosort(T, PA, middle, last, depth);
#elif 1 < SS_BLOCKSIZE
ss_insertionsort(T, PA, middle, last, depth);
#endif
ss_inplacemerge(T, PA, first, middle, last, depth);
}
#endif
if(lastsuffix != 0) {
/* Insert last type B* suffix. */
int PAi[2]; PAi[0] = PA[*(first - 1)], PAi[1] = n - 2;
for(a = first, i = *(first - 1);
(a < last) && ((*a < 0) || (0 < ss_compare(T, &(PAi[0]), PA + *a, depth)));
++a) {
*(a - 1) = *a;
}
*(a - 1) = i;
}
}
/*---------------------------------------------------------------------------*/
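/* tr_ilg(n): floor(log2(n)), computed from the 8-bit lg_table by branching on
   which byte of n holds the highest set bit. */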
static INLINE
int
tr_ilg(int n) {
return (n & 0xffff0000) ?
((n & 0xff000000) ?
24 + lg_table[(n >> 24) & 0xff] :
16 + lg_table[(n >> 16) & 0xff]) :
((n & 0x0000ff00) ?
8 + lg_table[(n >> 8) & 0xff] :
0 + lg_table[(n >> 0) & 0xff]);
}
/*---------------------------------------------------------------------------*/
/* Simple insertionsort for small size groups. */
static
void
tr_insertionsort(const int *ISAd, int *first, int *last) {
int *a, *b;
int t, r;
for(a = first + 1; a < last; ++a) {
for(t = *a, b = a - 1; 0 > (r = ISAd[t] - ISAd[*b]);) {
do { *(b + 1) = *b; } while((first <= --b) && (*b < 0));
if(b < first) { break; }
}
if(r == 0) { *b = ~*b; }
*(b + 1) = t;
}
}
/*---------------------------------------------------------------------------*/
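/* tr_fixdown: sift SA[i] down an implicit binary max-heap of `size` elements,
   ordered by the rank ISAd[SA[.]]. */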
static INLINE
void
tr_fixdown(const int *ISAd, int *SA, int i, int size) {
int j, k;
int v;
int c, d, e;
for(v = SA[i], c = ISAd[v]; (j = 2 * i + 1) < size; SA[i] = SA[k], i = k) {
d = ISAd[SA[k = j++]];
if(d < (e = ISAd[SA[j]])) { k = j; d = e; }
if(d <= c) { break; }
}
SA[i] = v;
}
/* Simple top-down heapsort. */
static
void
tr_heapsort(const int *ISAd, int *SA, int size) {
int i, m;
int t;
m = size;
if((size % 2) == 0) {
m--;
if(ISAd[SA[m / 2]] < ISAd[SA[m]]) { SWAP(SA[m], SA[m / 2]); }
}
for(i = m / 2 - 1; 0 <= i; --i) { tr_fixdown(ISAd, SA, i, m); }
if((size % 2) == 0) { SWAP(SA[0], SA[m]); tr_fixdown(ISAd, SA, 0, m); }
for(i = m - 1; 0 < i; --i) {
t = SA[0], SA[0] = SA[i];
tr_fixdown(ISAd, SA, 0, i);
SA[i] = t;
}
}
/*---------------------------------------------------------------------------*/
/* Returns the median of three elements. */
static INLINE
int *
tr_median3(const int *ISAd, int *v1, int *v2, int *v3) {
int *t;
if(ISAd[*v1] > ISAd[*v2]) { SWAP(v1, v2); }
if(ISAd[*v2] > ISAd[*v3]) {
if(ISAd[*v1] > ISAd[*v3]) { return v1; }
else { return v3; }
}
return v2;
}
/* Returns the median of five elements. */
static INLINE
int *
tr_median5(const int *ISAd,
int *v1, int *v2, int *v3, int *v4, int *v5) {
int *t;
if(ISAd[*v2] > ISAd[*v3]) { SWAP(v2, v3); }
if(ISAd[*v4] > ISAd[*v5]) { SWAP(v4, v5); }
if(ISAd[*v2] > ISAd[*v4]) { SWAP(v2, v4); SWAP(v3, v5); }
if(ISAd[*v1] > ISAd[*v3]) { SWAP(v1, v3); }
if(ISAd[*v1] > ISAd[*v4]) { SWAP(v1, v4); SWAP(v3, v5); }
if(ISAd[*v3] > ISAd[*v4]) { return v4; }
return v3;
}
/* Returns the pivot element. */
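/* Pivot selection: median of 3 for ranges of up to 32 elements, pseudo-median
   of 5 for up to 512 elements, and a median of three medians-of-3 (9 samples)
   for larger ranges. */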
static INLINE
int *
tr_pivot(const int *ISAd, int *first, int *last) {
int *middle;
int t;
t = last - first;
middle = first + t / 2;
if(t <= 512) {
if(t <= 32) {
return tr_median3(ISAd, first, middle, last - 1);
} else {
t >>= 2;
return tr_median5(ISAd, first, first + t, middle, last - 1 - t, last - 1);
}
}
t >>= 3;
first = tr_median3(ISAd, first, first + t, first + (t << 1));
middle = tr_median3(ISAd, middle - t, middle, middle + t);
last = tr_median3(ISAd, last - 1 - (t << 1), last - 1 - t, last - 1);
return tr_median3(ISAd, first, middle, last);
}
/*---------------------------------------------------------------------------*/
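/* trbudget limits the total work spent in tr_introsort: trbudget_check()
   grants `size` units against `remain`, refilling `remain` from `incval` up
   to `chance` times; once the chances are used up it accumulates the refused
   size in `count` and returns 0, telling the caller to give up on that group
   for the current pass. */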
typedef struct _trbudget_t trbudget_t;
struct _trbudget_t {
int chance;
int remain;
int incval;
int count;
};
static INLINE
void
trbudget_init(trbudget_t *budget, int chance, int incval) {
budget->chance = chance;
budget->remain = budget->incval = incval;
}
static INLINE
int
trbudget_check(trbudget_t *budget, int size) {
if(size <= budget->remain) { budget->remain -= size; return 1; }
if(budget->chance == 0) { budget->count += size; return 0; }
budget->remain += budget->incval - size;
budget->chance -= 1;
return 1;
}
/*---------------------------------------------------------------------------*/
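/* Ternary partition (Bentley-McIlroy style) of [first, last) by rank value v:
   elements whose ISAd rank equals v are gathered at both ends during the scan
   and swapped back to the middle, so that on return [*pa, *pb) holds exactly
   the elements with rank v, preceded by smaller ranks and followed by larger
   ones. */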
static INLINE
void
tr_partition(const int *ISAd,
int *first, int *middle, int *last,
int **pa, int **pb, int v) {
int *a, *b, *c, *d, *e, *f;
int t, s;
int x = 0;
for(b = middle - 1; (++b < last) && ((x = ISAd[*b]) == v);) { }
if(((a = b) < last) && (x < v)) {
for(; (++b < last) && ((x = ISAd[*b]) <= v);) {
if(x == v) { SWAP(*b, *a); ++a; }
}
}
for(c = last; (b < --c) && ((x = ISAd[*c]) == v);) { }
if((b < (d = c)) && (x > v)) {
for(; (b < --c) && ((x = ISAd[*c]) >= v);) {
if(x == v) { SWAP(*c, *d); --d; }
}
}
for(; b < c;) {
SWAP(*b, *c);
for(; (++b < c) && ((x = ISAd[*b]) <= v);) {
if(x == v) { SWAP(*b, *a); ++a; }
}
for(; (b < --c) && ((x = ISAd[*c]) >= v);) {
if(x == v) { SWAP(*c, *d); --d; }
}
}
if(a <= d) {
c = b - 1;
if((s = a - first) > (t = b - a)) { s = t; }
for(e = first, f = b - s; 0 < s; --s, ++e, ++f) { SWAP(*e, *f); }
if((s = d - c) > (t = last - d - 1)) { s = t; }
for(e = b, f = last - s; 0 < s; --s, ++e, ++f) { SWAP(*e, *f); }
first += (b - a), last -= (d - c);
}
*pa = first, *pb = last;
}
static
void
tr_copy(int *ISA, const int *SA,
int *first, int *a, int *b, int *last,
int depth) {
/* sort suffixes of middle partition
by using sorted order of suffixes of left and right partition. */
int *c, *d, *e;
int s, v;
v = b - SA - 1;
for(c = first, d = a - 1; c <= d; ++c) {
if((0 <= (s = *c - depth)) && (ISA[s] == v)) {
*++d = s;
ISA[s] = d - SA;
}
}
for(c = last - 1, e = d + 1, d = b; e < d; --c) {
if((0 <= (s = *c - depth)) && (ISA[s] == v)) {
*--d = s;
ISA[s] = d - SA;
}
}
}
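/* Like tr_copy, but also compresses ranks: a new rank is assigned only when
   the rank at the deeper depth changes between consecutive copied entries, so
   runs of equal deeper ranks keep a single representative rank. */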
static
void
tr_partialcopy(int *ISA, const int *SA,
int *first, int *a, int *b, int *last,
int depth) {
int *c, *d, *e;
int s, v;
int rank, lastrank, newrank = -1;
v = b - SA - 1;
lastrank = -1;
for(c = first, d = a - 1; c <= d; ++c) {
if((0 <= (s = *c - depth)) && (ISA[s] == v)) {
*++d = s;
rank = ISA[s + depth];
if(lastrank != rank) { lastrank = rank; newrank = d - SA; }
ISA[s] = newrank;
}
}
lastrank = -1;
for(e = d; first <= e; --e) {
rank = ISA[*e];
if(lastrank != rank) { lastrank = rank; newrank = e - SA; }
if(newrank != rank) { ISA[*e] = newrank; }
}
lastrank = -1;
for(c = last - 1, e = d + 1, d = b; e < d; --c) {
if((0 <= (s = *c - depth)) && (ISA[s] == v)) {
*--d = s;
rank = ISA[s + depth];
if(lastrank != rank) { lastrank = rank; newrank = d - SA; }
ISA[s] = newrank;
}
}
}
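/* Introsort over ISAd ranks, driven by an explicit stack. The `limit` field
   encodes the state of an entry: a non-negative value is the remaining depth
   budget before falling back to heapsort, -1 requests a tandem-repeat
   partition, -2 a tandem-repeat copy, and -3 marks a partition that is sorted
   at the current depth; its ranks are updated and sorting continues at a
   greater depth where needed. */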
static
void
tr_introsort(int *ISA, const int *ISAd,
int *SA, int *first, int *last,
trbudget_t *budget) {
#define STACK_SIZE TR_STACKSIZE
struct { const int *a; int *b, *c; int d, e; }stack[STACK_SIZE];
int *a, *b, *c;
int t;
int v, x = 0;
int incr = ISAd - ISA;
int limit, next;
int ssize, trlink = -1;
for(ssize = 0, limit = tr_ilg(last - first);;) {
if(limit < 0) {
if(limit == -1) {
/* tandem repeat partition */
tr_partition(ISAd - incr, first, first, last, &a, &b, last - SA - 1);
/* update ranks */
if(a < last) {
for(c = first, v = a - SA - 1; c < a; ++c) { ISA[*c] = v; }
}
if(b < last) {
for(c = a, v = b - SA - 1; c < b; ++c) { ISA[*c] = v; }
}
/* push */
if(1 < (b - a)) {
STACK_PUSH5(NULL, a, b, 0, 0);
STACK_PUSH5(ISAd - incr, first, last, -2, trlink);
trlink = ssize - 2;
}
if((a - first) <= (last - b)) {
if(1 < (a - first)) {
STACK_PUSH5(ISAd, b, last, tr_ilg(last - b), trlink);
last = a, limit = tr_ilg(a - first);
} else if(1 < (last - b)) {
first = b, limit = tr_ilg(last - b);
} else {
STACK_POP5(ISAd, first, last, limit, trlink);
}
} else {
if(1 < (last - b)) {
STACK_PUSH5(ISAd, first, a, tr_ilg(a - first), trlink);
first = b, limit = tr_ilg(last - b);
} else if(1 < (a - first)) {
last = a, limit = tr_ilg(a - first);
} else {
STACK_POP5(ISAd, first, last, limit, trlink);
}
}
} else if(limit == -2) {
/* tandem repeat copy */
a = stack[--ssize].b, b = stack[ssize].c;
if(stack[ssize].d == 0) {
tr_copy(ISA, SA, first, a, b, last, ISAd - ISA);
} else {
if(0 <= trlink) { stack[trlink].d = -1; }
tr_partialcopy(ISA, SA, first, a, b, last, ISAd - ISA);
}
STACK_POP5(ISAd, first, last, limit, trlink);
} else {
/* sorted partition */
if(0 <= *first) {
a = first;
do { ISA[*a] = a - SA; } while((++a < last) && (0 <= *a));
first = a;
}
if(first < last) {
a = first; do { *a = ~*a; } while(*++a < 0);
next = (ISA[*a] != ISAd[*a]) ? tr_ilg(a - first + 1) : -1;
if(++a < last) { for(b = first, v = a - SA - 1; b < a; ++b) { ISA[*b] = v; } }
/* push */
if(trbudget_check(budget, a - first)) {
if((a - first) <= (last - a)) {
STACK_PUSH5(ISAd, a, last, -3, trlink);
ISAd += incr, last = a, limit = next;
} else {
if(1 < (last - a)) {
STACK_PUSH5(ISAd + incr, first, a, next, trlink);
first = a, limit = -3;
} else {
ISAd += incr, last = a, limit = next;
}
}
} else {
if(0 <= trlink) { stack[trlink].d = -1; }
if(1 < (last - a)) {
first = a, limit = -3;
} else {
STACK_POP5(ISAd, first, last, limit, trlink);
}
}
} else {
STACK_POP5(ISAd, first, last, limit, trlink);
}
}
continue;
}
if((last - first) <= TR_INSERTIONSORT_THRESHOLD) {
tr_insertionsort(ISAd, first, last);
limit = -3;
continue;
}
if(limit-- == 0) {
tr_heapsort(ISAd, first, last - first);
for(a = last - 1; first < a; a = b) {
for(x = ISAd[*a], b = a - 1; (first <= b) && (ISAd[*b] == x); --b) { *b = ~*b; }
}
limit = -3;
continue;
}
/* choose pivot */
a = tr_pivot(ISAd, first, last);
SWAP(*first, *a);
v = ISAd[*first];
/* partition */
tr_partition(ISAd, first, first + 1, last, &a, &b, v);
if((last - first) != (b - a)) {
next = (ISA[*a] != v) ? tr_ilg(b - a) : -1;
/* update ranks */
for(c = first, v = a - SA - 1; c < a; ++c) { ISA[*c] = v; }
if(b < last) { for(c = a, v = b - SA - 1; c < b; ++c) { ISA[*c] = v; } }
/* push */
if((1 < (b - a)) && (trbudget_check(budget, b - a))) {
if((a - first) <= (last - b)) {
if((last - b) <= (b - a)) {
if(1 < (a - first)) {
STACK_PUSH5(ISAd + incr, a, b, next, trlink);
STACK_PUSH5(ISAd, b, last, limit, trlink);
last = a;
} else if(1 < (last - b)) {
STACK_PUSH5(ISAd + incr, a, b, next, trlink);
first = b;
} else {
ISAd += incr, first = a, last = b, limit = next;
}
} else if((a - first) <= (b - a)) {
if(1 < (a - first)) {
STACK_PUSH5(ISAd, b, last, limit, trlink);
STACK_PUSH5(ISAd + incr, a, b, next, trlink);
last = a;
} else {
STACK_PUSH5(ISAd, b, last, limit, trlink);
ISAd += incr, first = a, last = b, limit = next;
}
} else {
STACK_PUSH5(ISAd, b, last, limit, trlink);
STACK_PUSH5(ISAd, first, a, limit, trlink);
ISAd += incr, first = a, last = b, limit = next;
}
} else {
if((a - first) <= (b - a)) {
if(1 < (last - b)) {
STACK_PUSH5(ISAd + incr, a, b, next, trlink);
STACK_PUSH5(ISAd, first, a, limit, trlink);
first = b;
} else if(1 < (a - first)) {
STACK_PUSH5(ISAd + incr, a, b, next, trlink);
last = a;
} else {
ISAd += incr, first = a, last = b, limit = next;
}
} else if((last - b) <= (b - a)) {
if(1 < (last - b)) {
STACK_PUSH5(ISAd, first, a, limit, trlink);
STACK_PUSH5(ISAd + incr, a, b, next, trlink);
first = b;
} else {
STACK_PUSH5(ISAd, first, a, limit, trlink);
ISAd += incr, first = a, last = b, limit = next;
}
} else {
STACK_PUSH5(ISAd, first, a, limit, trlink);
STACK_PUSH5(ISAd, b, last, limit, trlink);
ISAd += incr, first = a, last = b, limit = next;
}
}
} else {
if((1 < (b - a)) && (0 <= trlink)) { stack[trlink].d = -1; }
if((a - first) <= (last - b)) {
if(1 < (a - first)) {
STACK_PUSH5(ISAd, b, last, limit, trlink);
last = a;
} else if(1 < (last - b)) {
first = b;
} else {
STACK_POP5(ISAd, first, last, limit, trlink);
}
} else {
if(1 < (last - b)) {
STACK_PUSH5(ISAd, first, a, limit, trlink);
first = b;
} else if(1 < (a - first)) {
last = a;
} else {
STACK_POP5(ISAd, first, last, limit, trlink);
}
}
}
} else {
if(trbudget_check(budget, last - first)) {
limit = tr_ilg(last - first), ISAd += incr;
} else {
if(0 <= trlink) { stack[trlink].d = -1; }
STACK_POP5(ISAd, first, last, limit, trlink);
}
}
}
#undef STACK_SIZE
}
/*---------------------------------------------------------------------------*/
/* Tandem repeat sort */
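/* trsort implements the prefix-doubling stage: starting at `depth`, the
   comparison offset doubles on every pass (ISAd += ISAd - ISA) and each still
   unsorted group is refined with tr_introsort under a work budget. Negative
   SA entries accumulate the lengths of runs that are already fully sorted so
   later passes can skip them; the loop ends once a pass finishes with no
   unsorted work remaining. */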
static
void
trsort(int *ISA, int *SA, int n, int depth) {
int *ISAd;
int *first, *last;
trbudget_t budget;
int t, skip, unsorted;
trbudget_init(&budget, tr_ilg(n) * 2 / 3, n);
/* trbudget_init(&budget, tr_ilg(n) * 3 / 4, n); */
for(ISAd = ISA + depth; -n < *SA; ISAd += ISAd - ISA) {
first = SA;
skip = 0;
unsorted = 0;
do {
if((t = *first) < 0) { first -= t; skip += t; }
else {
if(skip != 0) { *(first + skip) = skip; skip = 0; }
last = SA + ISA[t] + 1;
if(1 < (last - first)) {
budget.count = 0;
tr_introsort(ISA, ISAd, SA, first, last, &budget);
if(budget.count != 0) { unsorted += budget.count; }
else { skip = first - last; }
} else if((last - first) == 1) {
skip = -1;
}
first = last;
}
} while(first < (SA + n));
if(skip != 0) { *(first + skip) = skip; }
if(unsorted == 0) { break; }
}
}
/*---------------------------------------------------------------------------*/
/* Sorts suffixes of type B*. */
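/* Overview: (1) scan T from right to left to classify suffixes and count
   bucket sizes, storing the start positions of type B* suffixes at the end of
   SA; (2) bucket-sort the B* suffixes by their first two characters and sort
   each bucket's substrings with sssort; (3) rank them and complete the order
   with trsort (prefix doubling); (4) scatter the sorted B* suffixes back into
   their final buckets. Returns the number m of type B* suffixes. */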
static
int
sort_typeBstar(const unsigned char *T, int *SA,
int *bucket_A, int *bucket_B,
int n, int openMP) {
int *PAb, *ISAb, *buf;
#ifdef LIBBSC_OPENMP
int *curbuf;
int l;
#endif
int i, j, k, t, m, bufsize;
int c0, c1;
#ifdef LIBBSC_OPENMP
int d0, d1;
#endif
(void)openMP;
/* Initialize bucket arrays. */
for(i = 0; i < BUCKET_A_SIZE; ++i) { bucket_A[i] = 0; }
for(i = 0; i < BUCKET_B_SIZE; ++i) { bucket_B[i] = 0; }
/* Count the number of occurrences of the first one or two characters of each
type A, B and B* suffix. Moreover, store the beginning position of all
type B* suffixes into the array SA. */
for(i = n - 1, m = n, c0 = T[n - 1]; 0 <= i;) {
/* type A suffix. */
do { ++BUCKET_A(c1 = c0); } while((0 <= --i) && ((c0 = T[i]) >= c1));
if(0 <= i) {
/* type B* suffix. */
++BUCKET_BSTAR(c0, c1);
SA[--m] = i;
/* type B suffix. */
for(--i, c1 = c0; (0 <= i) && ((c0 = T[i]) <= c1); --i, c1 = c0) {
++BUCKET_B(c0, c1);
}
}
}
m = n - m;
/*
note:
A type B* suffix is lexicographically smaller than a type B suffix that
begins with the same first two characters.
*/
/* Calculate the index of start/end point of each bucket. */
for(c0 = 0, i = 0, j = 0; c0 < ALPHABET_SIZE; ++c0) {
t = i + BUCKET_A(c0);
BUCKET_A(c0) = i + j; /* start point */
i = t + BUCKET_B(c0, c0);
for(c1 = c0 + 1; c1 < ALPHABET_SIZE; ++c1) {
j += BUCKET_BSTAR(c0, c1);
BUCKET_BSTAR(c0, c1) = j; /* end point */
i += BUCKET_B(c0, c1);
}
}
if(0 < m) {
/* Sort the type B* suffixes by their first two characters. */
PAb = SA + n - m; ISAb = SA + m;
for(i = m - 2; 0 <= i; --i) {
t = PAb[i], c0 = T[t], c1 = T[t + 1];
SA[--BUCKET_BSTAR(c0, c1)] = i;
}
t = PAb[m - 1], c0 = T[t], c1 = T[t + 1];
SA[--BUCKET_BSTAR(c0, c1)] = m - 1;
/* Sort the type B* substrings using sssort. */
#ifdef LIBBSC_OPENMP
if (openMP)
{
buf = SA + m;
c0 = ALPHABET_SIZE - 2, c1 = ALPHABET_SIZE - 1, j = m;
#pragma omp parallel default(shared) private(bufsize, curbuf, k, l, d0, d1)
{
bufsize = (n - (2 * m)) / omp_get_num_threads();
curbuf = buf + omp_get_thread_num() * bufsize;
k = 0;
for(;;) {
#pragma omp critical(sssort_lock)
{
if(0 < (l = j)) {
d0 = c0, d1 = c1;
do {
k = BUCKET_BSTAR(d0, d1);
if(--d1 <= d0) {
d1 = ALPHABET_SIZE - 1;
if(--d0 < 0) { break; }
}
} while(((l - k) <= 1) && (0 < (l = k)));
c0 = d0, c1 = d1, j = k;
}
}
if(l == 0) { break; }
sssort(T, PAb, SA + k, SA + l,
curbuf, bufsize, 2, n, *(SA + k) == (m - 1));
}
}
}
else
{
buf = SA + m, bufsize = n - (2 * m);
for(c0 = ALPHABET_SIZE - 2, j = m; 0 < j; --c0) {
for(c1 = ALPHABET_SIZE - 1; c0 < c1; j = i, --c1) {
i = BUCKET_BSTAR(c0, c1);
if(1 < (j - i)) {
sssort(T, PAb, SA + i, SA + j,
buf, bufsize, 2, n, *(SA + i) == (m - 1));
}
}
}
}
#else
buf = SA + m, bufsize = n - (2 * m);
for(c0 = ALPHABET_SIZE - 2, j = m; 0 < j; --c0) {
for(c1 = ALPHABET_SIZE - 1; c0 < c1; j = i, --c1) {
i = BUCKET_BSTAR(c0, c1);
if(1 < (j - i)) {
sssort(T, PAb, SA + i, SA + j,
buf, bufsize, 2, n, *(SA + i) == (m - 1));
}
}
}
#endif
/* Compute ranks of type B* substrings. */
for(i = m - 1; 0 <= i; --i) {
if(0 <= SA[i]) {
j = i;
do { ISAb[SA[i]] = i; } while((0 <= --i) && (0 <= SA[i]));
SA[i + 1] = i - j;
if(i <= 0) { break; }
}
j = i;
do { ISAb[SA[i] = ~SA[i]] = j; } while(SA[--i] < 0);
ISAb[SA[i]] = j;
}
/* Construct the inverse suffix array of type B* suffixes using trsort. */
trsort(ISAb, SA, m, 1);
/* Set the sorted order of type B* suffixes. */
for(i = n - 1, j = m, c0 = T[n - 1]; 0 <= i;) {
for(--i, c1 = c0; (0 <= i) && ((c0 = T[i]) >= c1); --i, c1 = c0) { }
if(0 <= i) {
t = i;
for(--i, c1 = c0; (0 <= i) && ((c0 = T[i]) <= c1); --i, c1 = c0) { }
SA[ISAb[--j]] = ((t == 0) || (1 < (t - i))) ? t : ~t;
}
}
/* Calculate the index of start/end point of each bucket. */
BUCKET_B(ALPHABET_SIZE - 1, ALPHABET_SIZE - 1) = n; /* end point */
for(c0 = ALPHABET_SIZE - 2, k = m - 1; 0 <= c0; --c0) {
i = BUCKET_A(c0 + 1) - 1;
for(c1 = ALPHABET_SIZE - 1; c0 < c1; --c1) {
t = i - BUCKET_B(c0, c1);
BUCKET_B(c0, c1) = i; /* end point */
/* Move all type B* suffixes to the correct position. */
for(i = t, j = BUCKET_BSTAR(c0, c1);
j <= k;
--i, --k) { SA[i] = SA[k]; }
}
BUCKET_BSTAR(c0, c0 + 1) = i - BUCKET_B(c0, c0) + 1; /* start point */
BUCKET_B(c0, c0) = i; /* end point */
}
}
return m;
}
/* Constructs the suffix array by using the sorted order of type B* suffixes. */
static
void
construct_SA(const unsigned char *T, int *SA,
int *bucket_A, int *bucket_B,
int n, int m) {
int *i, *j, *k;
int s;
int c0, c1, c2;
if(0 < m) {
/* Construct the sorted order of type B suffixes by using
the sorted order of type B* suffixes. */
for(c1 = ALPHABET_SIZE - 2; 0 <= c1; --c1) {
/* Scan the suffix array from right to left. */
for(i = SA + BUCKET_BSTAR(c1, c1 + 1),
j = SA + BUCKET_A(c1 + 1) - 1, k = NULL, c2 = -1;
i <= j;
--j) {
if(0 < (s = *j)) {
assert(T[s] == c1);
assert(((s + 1) < n) && (T[s] <= T[s + 1]));
assert(T[s - 1] <= T[s]);
*j = ~s;
c0 = T[--s];
if((0 < s) && (T[s - 1] > c0)) { s = ~s; }
if(c0 != c2) {
if(0 <= c2) { BUCKET_B(c2, c1) = k - SA; }
k = SA + BUCKET_B(c2 = c0, c1);
}
assert(k < j); assert(k != NULL);
*k-- = s;
} else {
assert(((s == 0) && (T[s] == c1)) || (s < 0));
*j = ~s;
}
}
}
}
/* Construct the suffix array by using
the sorted order of type B suffixes. */
k = SA + BUCKET_A(c2 = T[n - 1]);
*k++ = (T[n - 2] < c2) ? ~(n - 1) : (n - 1);
/* Scan the suffix array from left to right. */
for(i = SA, j = SA + n; i < j; ++i) {
if(0 < (s = *i)) {
assert(T[s - 1] >= T[s]);
c0 = T[--s];
if((s == 0) || (T[s - 1] < c0)) { s = ~s; }
if(c0 != c2) {
BUCKET_A(c2) = k - SA;
k = SA + BUCKET_A(c2 = c0);
}
assert(i < k);
*k++ = s;
} else {
assert(s < 0);
*i = ~s;
}
}
}
/* Constructs the Burrows-Wheeler transformed string directly
by using the sorted order of type B* suffixes. */
static
int
construct_BWT(const unsigned char *T, int *SA,
int *bucket_A, int *bucket_B,
int n, int m) {
int *i, *j, *k, *orig;
int s;
int c0, c1, c2;
if(0 < m) {
/* Construct the sorted order of type B suffixes by using
the sorted order of type B* suffixes. */
for(c1 = ALPHABET_SIZE - 2; 0 <= c1; --c1) {
/* Scan the suffix array from right to left. */
for(i = SA + BUCKET_BSTAR(c1, c1 + 1),
j = SA + BUCKET_A(c1 + 1) - 1, k = NULL, c2 = -1;
i <= j;
--j) {
if(0 < (s = *j)) {
assert(T[s] == c1);
assert(((s + 1) < n) && (T[s] <= T[s + 1]));
assert(T[s - 1] <= T[s]);
c0 = T[--s];
*j = ~((int)c0);
if((0 < s) && (T[s - 1] > c0)) { s = ~s; }
if(c0 != c2) {
if(0 <= c2) { BUCKET_B(c2, c1) = k - SA; }
k = SA + BUCKET_B(c2 = c0, c1);
}
assert(k < j); assert(k != NULL);
*k-- = s;
} else if(s != 0) {
*j = ~s;
#ifndef NDEBUG
} else {
assert(T[s] == c1);
#endif
}
}
}
}
/* Construct the BWTed string by using
the sorted order of type B suffixes. */
k = SA + BUCKET_A(c2 = T[n - 1]);
*k++ = (T[n - 2] < c2) ? ~((int)T[n - 2]) : (n - 1);
/* Scan the suffix array from left to right. */
for(i = SA, j = SA + n, orig = SA; i < j; ++i) {
if(0 < (s = *i)) {
assert(T[s - 1] >= T[s]);
c0 = T[--s];
*i = c0;
if((0 < s) && (T[s - 1] < c0)) { s = ~((int)T[s - 1]); }
if(c0 != c2) {
BUCKET_A(c2) = k - SA;
k = SA + BUCKET_A(c2 = c0);
}
assert(i < k);
*k++ = s;
} else if(s != 0) {
*i = ~s;
} else {
orig = i;
}
}
return orig - SA;
}
/* Constructs the Burrows-Wheeler transformed string directly
by using the sorted order of type B* suffixes, additionally recording the
output positions of regularly sampled suffixes into `indexes`. */
static
int
construct_BWT_indexes(const unsigned char *T, int *SA,
int *bucket_A, int *bucket_B,
int n, int m,
unsigned char * num_indexes, int * indexes) {
int *i, *j, *k, *orig;
int s;
int c0, c1, c2;
int mod = n / 8;
{
mod |= mod >> 1; mod |= mod >> 2;
mod |= mod >> 4; mod |= mod >> 8;
mod |= mod >> 16; mod >>= 1;
*num_indexes = (unsigned char)((n - 1) / (mod + 1));
}
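/* mod + 1 is now the largest power of two <= n/8, so an index entry is
   recorded for every suffix position that is a multiple of mod + 1
   (the (s & mod) == 0 tests below). */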
if(0 < m) {
/* Construct the sorted order of type B suffixes by using
the sorted order of type B* suffixes. */
for(c1 = ALPHABET_SIZE - 2; 0 <= c1; --c1) {
/* Scan the suffix array from right to left. */
for(i = SA + BUCKET_BSTAR(c1, c1 + 1),
j = SA + BUCKET_A(c1 + 1) - 1, k = NULL, c2 = -1;
i <= j;
--j) {
if(0 < (s = *j)) {
assert(T[s] == c1);
assert(((s + 1) < n) && (T[s] <= T[s + 1]));
assert(T[s - 1] <= T[s]);
if ((s & mod) == 0) indexes[s / (mod + 1) - 1] = j - SA;
c0 = T[--s];
*j = ~((int)c0);
if((0 < s) && (T[s - 1] > c0)) { s = ~s; }
if(c0 != c2) {
if(0 <= c2) { BUCKET_B(c2, c1) = k - SA; }
k = SA + BUCKET_B(c2 = c0, c1);
}
assert(k < j); assert(k != NULL);
*k-- = s;
} else if(s != 0) {
*j = ~s;
#ifndef NDEBUG
} else {
assert(T[s] == c1);
#endif
}
}
}
}
/* Construct the BWTed string by using
the sorted order of type B suffixes. */
k = SA + BUCKET_A(c2 = T[n - 1]);
if (T[n - 2] < c2) {
if (((n - 1) & mod) == 0) indexes[(n - 1) / (mod + 1) - 1] = k - SA;
*k++ = ~((int)T[n - 2]);
}
else {
*k++ = n - 1;
}
/* Scan the suffix array from left to right. */
for(i = SA, j = SA + n, orig = SA; i < j; ++i) {
if(0 < (s = *i)) {
assert(T[s - 1] >= T[s]);
if ((s & mod) == 0) indexes[s / (mod + 1) - 1] = i - SA;
c0 = T[--s];
*i = c0;
if(c0 != c2) {
BUCKET_A(c2) = k - SA;
k = SA + BUCKET_A(c2 = c0);
}
assert(i < k);
if((0 < s) && (T[s - 1] < c0)) {
if ((s & mod) == 0) indexes[s / (mod + 1) - 1] = k - SA;
*k++ = ~((int)T[s - 1]);
} else
*k++ = s;
} else if(s != 0) {
*i = ~s;
} else {
orig = i;
}
}
return orig - SA;
}
/*---------------------------------------------------------------------------*/
/*- Function -*/
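/* divsufsort(): computes the suffix array of T[0..n-1] into SA.
   Returns 0 on success, -1 on invalid arguments, -2 on allocation failure. */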
int
divsufsort(const unsigned char *T, int *SA, int n, int openMP) {
int *bucket_A, *bucket_B;
int m;
int err = 0;
/* Check arguments. */
if((T == NULL) || (SA == NULL) || (n < 0)) { return -1; }
else if(n == 0) { return 0; }
else if(n == 1) { SA[0] = 0; return 0; }
else if(n == 2) { m = (T[0] < T[1]); SA[m ^ 1] = 0, SA[m] = 1; return 0; }
bucket_A = (int *)malloc(BUCKET_A_SIZE * sizeof(int));
bucket_B = (int *)malloc(BUCKET_B_SIZE * sizeof(int));
/* Suffixsort. */
if((bucket_A != NULL) && (bucket_B != NULL)) {
m = sort_typeBstar(T, SA, bucket_A, bucket_B, n, openMP);
construct_SA(T, SA, bucket_A, bucket_B, n, m);
} else {
err = -2;
}
free(bucket_B);
free(bucket_A);
return err;
}
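/* divbwt(): computes the Burrows-Wheeler transform of T[0..n-1] into U, using
   A (or an internally allocated array of n+1 ints when A is NULL) as working
   space. Returns the primary index on success, or a negative error code
   (-1: invalid arguments, -2: allocation failure). */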
int
divbwt(const unsigned char *T, unsigned char *U, int *A, int n, unsigned char * num_indexes, int * indexes, int openMP) {
int *B;
int *bucket_A, *bucket_B;
int m, pidx, i;
/* Check arguments. */
if((T == NULL) || (U == NULL) || (n < 0)) { return -1; }
else if(n <= 1) { if(n == 1) { U[0] = T[0]; } return n; }
if((B = A) == NULL) { B = (int *)malloc((size_t)(n + 1) * sizeof(int)); }
bucket_A = (int *)malloc(BUCKET_A_SIZE * sizeof(int));
bucket_B = (int *)malloc(BUCKET_B_SIZE * sizeof(int));
/* Burrows-Wheeler Transform. */
if((B != NULL) && (bucket_A != NULL) && (bucket_B != NULL)) {
m = sort_typeBstar(T, B, bucket_A, bucket_B, n, openMP);
if (num_indexes == NULL || indexes == NULL) {
pidx = construct_BWT(T, B, bucket_A, bucket_B, n, m);
} else {
pidx = construct_BWT_indexes(T, B, bucket_A, bucket_B, n, m, num_indexes, indexes);
}
/* Copy to output string. */
U[0] = T[n - 1];
for(i = 0; i < pidx; ++i) { U[i + 1] = (unsigned char)B[i]; }
for(i += 1; i < n; ++i) { U[i] = (unsigned char)B[i]; }
pidx += 1;
} else {
pidx = -2;
}
free(bucket_B);
free(bucket_A);
if(A == NULL) { free(B); }
return pidx;
}
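/*
 * Illustrative usage sketch for divbwt() (not part of the library; the buffer
 * names and sizes below are hypothetical):
 *
 *   unsigned char text[] = "abracadabra";
 *   int n = (int)(sizeof(text) - 1);      // exclude the trailing NUL
 *   unsigned char bwt[sizeof(text)];
 *   int work[sizeof(text)];               // needs at least n + 1 ints
 *   int pidx = divbwt(text, bwt, work, n, NULL, NULL, 0);
 *   // pidx >= 0 : primary index needed to invert the transform
 *   // pidx <  0 : error
 */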
/**** ended inlining dictBuilder/divsufsort.c ****/
/**** start inlining dictBuilder/fastcover.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-*************************************
* Dependencies
***************************************/
#include <stdio.h> /* fprintf */
#include <stdlib.h> /* malloc, free, qsort */
#include <string.h> /* memset */
#include <time.h> /* clock */
#ifndef ZDICT_STATIC_LINKING_ONLY
# define ZDICT_STATIC_LINKING_ONLY
#endif
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/pool.h ****/
/**** skipping file: ../common/threading.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: ../compress/zstd_compress_internal.h ****/
/**** skipping file: ../zdict.h ****/
/**** skipping file: cover.h ****/
/*-*************************************
* Constants
***************************************/
/**
* 32-bit indexes are used to reference samples, so the total samples size is limited to 4GB
* on 64-bit builds.
* For 32bit builds we choose 1 GB.
* Most 32bit platforms have 2GB user-mode addressable space and we allocate a large
* contiguous buffer, so 1GB is already a high limit.
*/
#define FASTCOVER_MAX_SAMPLES_SIZE (sizeof(size_t) == 8 ? ((unsigned)-1) : ((unsigned)1 GB))
#define FASTCOVER_MAX_F 31
#define FASTCOVER_MAX_ACCEL 10
#define FASTCOVER_DEFAULT_SPLITPOINT 0.75
#define DEFAULT_F 20
#define DEFAULT_ACCEL 1
/*-*************************************
* Console display
***************************************/
#ifndef LOCALDISPLAYLEVEL
static int g_displayLevel = 0;
#endif
#undef DISPLAY
#define DISPLAY(...) \
{ \
fprintf(stderr, __VA_ARGS__); \
fflush(stderr); \
}
#undef LOCALDISPLAYLEVEL
#define LOCALDISPLAYLEVEL(displayLevel, l, ...) \
if (displayLevel >= l) { \
DISPLAY(__VA_ARGS__); \
} /* 0 : no display; 1: errors; 2: default; 3: details; 4: debug */
#undef DISPLAYLEVEL
#define DISPLAYLEVEL(l, ...) LOCALDISPLAYLEVEL(g_displayLevel, l, __VA_ARGS__)
#ifndef LOCALDISPLAYUPDATE
static const clock_t g_refreshRate = CLOCKS_PER_SEC * 15 / 100;
static clock_t g_time = 0;
#endif
#undef LOCALDISPLAYUPDATE
#define LOCALDISPLAYUPDATE(displayLevel, l, ...) \
if (displayLevel >= l) { \
if ((clock() - g_time > g_refreshRate) || (displayLevel >= 4)) { \
g_time = clock(); \
DISPLAY(__VA_ARGS__); \
} \
}
#undef DISPLAYUPDATE
#define DISPLAYUPDATE(l, ...) LOCALDISPLAYUPDATE(g_displayLevel, l, __VA_ARGS__)
/*-*************************************
* Hash Functions
***************************************/
/**
* Hash the d-byte value pointed to by p and mod 2^f into the frequency vector
*/
static size_t FASTCOVER_hashPtrToIndex(const void* p, U32 f, unsigned d) {
if (d == 6) {
return ZSTD_hash6Ptr(p, f);
}
return ZSTD_hash8Ptr(p, f);
}
/*-*************************************
* Acceleration
***************************************/
typedef struct {
unsigned finalize; /* Percentage of training samples used for ZDICT_finalizeDictionary */
unsigned skip; /* Number of dmer skipped between each dmer counted in computeFrequency */
} FASTCOVER_accel_t;
static const FASTCOVER_accel_t FASTCOVER_defaultAccelParameters[FASTCOVER_MAX_ACCEL+1] = {
{ 100, 0 }, /* accel = 0, should not happen because accel = 0 defaults to accel = 1 */
{ 100, 0 }, /* accel = 1 */
{ 50, 1 }, /* accel = 2 */
{ 34, 2 }, /* accel = 3 */
{ 25, 3 }, /* accel = 4 */
{ 20, 4 }, /* accel = 5 */
{ 17, 5 }, /* accel = 6 */
{ 14, 6 }, /* accel = 7 */
{ 13, 7 }, /* accel = 8 */
{ 11, 8 }, /* accel = 9 */
{ 10, 9 }, /* accel = 10 */
};
/*-*************************************
* Context
***************************************/
typedef struct {
const BYTE *samples;
size_t *offsets;
const size_t *samplesSizes;
size_t nbSamples;
size_t nbTrainSamples;
size_t nbTestSamples;
size_t nbDmers;
U32 *freqs;
unsigned d;
unsigned f;
FASTCOVER_accel_t accelParams;
} FASTCOVER_ctx_t;
/*-*************************************
* Helper functions
***************************************/
/**
* Selects the best segment in an epoch.
* Segments are scored according to the following function:
*
* Let F(d) be the frequency of all dmers with hash value d.
* Let S_i be the hash value of the dmer at position i of segment S, which has length k.
*
* Score(S) = F(S_1) + F(S_2) + ... + F(S_{k-d+1})
*
* Once the dmer with hash value d is in the dictionary we set F(d) = 0.
*/
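/* Example: with k = 64 and d = 8 a segment spans k - d + 1 = 57 overlapping
   dmers, and its score is the sum of freqs[] over the distinct hash values of
   those dmers (duplicates within the segment are counted once, tracked via
   segmentFreqs). */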
static COVER_segment_t FASTCOVER_selectSegment(const FASTCOVER_ctx_t *ctx,
U32 *freqs, U32 begin, U32 end,
ZDICT_cover_params_t parameters,
U16* segmentFreqs) {
/* Constants */
const U32 k = parameters.k;
const U32 d = parameters.d;
const U32 f = ctx->f;
const U32 dmersInK = k - d + 1;
/* Try each segment (activeSegment) and save the best (bestSegment) */
COVER_segment_t bestSegment = {0, 0, 0};
COVER_segment_t activeSegment;
/* Reset the activeDmers in the segment */
/* The activeSegment starts at the beginning of the epoch. */
activeSegment.begin = begin;
activeSegment.end = begin;
activeSegment.score = 0;
/* Slide the activeSegment through the whole epoch.
* Save the best segment in bestSegment.
*/
while (activeSegment.end < end) {
/* Get hash value of current dmer */
const size_t idx = FASTCOVER_hashPtrToIndex(ctx->samples + activeSegment.end, f, d);
/* Add frequency of this index to score if this is the first occurrence of index in active segment */
if (segmentFreqs[idx] == 0) {
activeSegment.score += freqs[idx];
}
/* Increment end of segment and segmentFreqs*/
activeSegment.end += 1;
segmentFreqs[idx] += 1;
/* If the window is now too large, drop the first position */
if (activeSegment.end - activeSegment.begin == dmersInK + 1) {
/* Get hash value of the dmer to be eliminated from active segment */
const size_t delIndex = FASTCOVER_hashPtrToIndex(ctx->samples + activeSegment.begin, f, d);
segmentFreqs[delIndex] -= 1;
/* Subtract frequency of this index from score if this is the last occurrence of this index in active segment */
if (segmentFreqs[delIndex] == 0) {
activeSegment.score -= freqs[delIndex];
}
/* Increment start of segment */
activeSegment.begin += 1;
}
/* If this segment is the best so far save it */
if (activeSegment.score > bestSegment.score) {
bestSegment = activeSegment;
}
}
/* Zero out rest of segmentFreqs array */
while (activeSegment.begin < end) {
const size_t delIndex = FASTCOVER_hashPtrToIndex(ctx->samples + activeSegment.begin, f, d);
segmentFreqs[delIndex] -= 1;
activeSegment.begin += 1;
}
{
/* Zero the frequency of hash value of each dmer covered by the chosen segment. */
U32 pos;
for (pos = bestSegment.begin; pos != bestSegment.end; ++pos) {
const size_t i = FASTCOVER_hashPtrToIndex(ctx->samples + pos, f, d);
freqs[i] = 0;
}
}
return bestSegment;
}
static int FASTCOVER_checkParameters(ZDICT_cover_params_t parameters,
size_t maxDictSize, unsigned f,
unsigned accel) {
/* k, d, and f are required parameters */
if (parameters.d == 0 || parameters.k == 0) {
return 0;
}
/* d has to be 6 or 8 */
if (parameters.d != 6 && parameters.d != 8) {
return 0;
}
/* k <= maxDictSize */
if (parameters.k > maxDictSize) {
return 0;
}
/* d <= k */
if (parameters.d > parameters.k) {
return 0;
}
/* 0 < f <= FASTCOVER_MAX_F*/
if (f > FASTCOVER_MAX_F || f == 0) {
return 0;
}
/* 0 < splitPoint <= 1 */
if (parameters.splitPoint <= 0 || parameters.splitPoint > 1) {
return 0;
}
/* 0 < accel <= 10 */
if (accel > 10 || accel == 0) {
return 0;
}
return 1;
}
/**
* Clean up a context initialized with `FASTCOVER_ctx_init()`.
*/
static void
FASTCOVER_ctx_destroy(FASTCOVER_ctx_t* ctx)
{
if (!ctx) return;
free(ctx->freqs);
ctx->freqs = NULL;
free(ctx->offsets);
ctx->offsets = NULL;
}
/**
* Calculate the frequency of the hash value of each dmer in ctx->samples
*/
static void
FASTCOVER_computeFrequency(U32* freqs, const FASTCOVER_ctx_t* ctx)
{
const unsigned f = ctx->f;
const unsigned d = ctx->d;
const unsigned skip = ctx->accelParams.skip;
const unsigned readLength = MAX(d, 8);
size_t i;
assert(ctx->nbTrainSamples >= 5);
assert(ctx->nbTrainSamples <= ctx->nbSamples);
for (i = 0; i < ctx->nbTrainSamples; i++) {
size_t start = ctx->offsets[i]; /* start of current dmer */
size_t const currSampleEnd = ctx->offsets[i+1];
while (start + readLength <= currSampleEnd) {
const size_t dmerIndex = FASTCOVER_hashPtrToIndex(ctx->samples + start, f, d);
freqs[dmerIndex]++;
start = start + skip + 1;
}
}
}
/**
* Prepare a context for dictionary building.
* The context is only dependent on the parameter `d` and can be used multiple
* times.
* Returns 0 on success, or an error code on failure.
* The context must be destroyed with `FASTCOVER_ctx_destroy()`.
*/
static size_t
FASTCOVER_ctx_init(FASTCOVER_ctx_t* ctx,
const void* samplesBuffer,
const size_t* samplesSizes, unsigned nbSamples,
unsigned d, double splitPoint, unsigned f,
FASTCOVER_accel_t accelParams)
{
const BYTE* const samples = (const BYTE*)samplesBuffer;
const size_t totalSamplesSize = COVER_sum(samplesSizes, nbSamples);
/* Split samples into testing and training sets */
const unsigned nbTrainSamples = splitPoint < 1.0 ? (unsigned)((double)nbSamples * splitPoint) : nbSamples;
const unsigned nbTestSamples = splitPoint < 1.0 ? nbSamples - nbTrainSamples : nbSamples;
const size_t trainingSamplesSize = splitPoint < 1.0 ? COVER_sum(samplesSizes, nbTrainSamples) : totalSamplesSize;
const size_t testSamplesSize = splitPoint < 1.0 ? COVER_sum(samplesSizes + nbTrainSamples, nbTestSamples) : totalSamplesSize;
/* Checks */
if (totalSamplesSize < MAX(d, sizeof(U64)) ||
totalSamplesSize >= (size_t)FASTCOVER_MAX_SAMPLES_SIZE) {
DISPLAYLEVEL(1, "Total samples size is too large (%u MB), maximum size is %u MB\n",
(unsigned)(totalSamplesSize >> 20), (FASTCOVER_MAX_SAMPLES_SIZE >> 20));
return ERROR(srcSize_wrong);
}
/* Check if there are at least 5 training samples */
if (nbTrainSamples < 5) {
DISPLAYLEVEL(1, "Total number of training samples is %u and is invalid\n", nbTrainSamples);
return ERROR(srcSize_wrong);
}
/* Check if there is at least one testing sample */
if (nbTestSamples < 1) {
DISPLAYLEVEL(1, "Total number of testing samples is %u and is invalid.\n", nbTestSamples);
return ERROR(srcSize_wrong);
}
/* Zero the context */
memset(ctx, 0, sizeof(*ctx));
DISPLAYLEVEL(2, "Training on %u samples of total size %u\n", nbTrainSamples,
(unsigned)trainingSamplesSize);
DISPLAYLEVEL(2, "Testing on %u samples of total size %u\n", nbTestSamples,
(unsigned)testSamplesSize);
ctx->samples = samples;
ctx->samplesSizes = samplesSizes;
ctx->nbSamples = nbSamples;
ctx->nbTrainSamples = nbTrainSamples;
ctx->nbTestSamples = nbTestSamples;
ctx->nbDmers = trainingSamplesSize - MAX(d, sizeof(U64)) + 1;
ctx->d = d;
ctx->f = f;
ctx->accelParams = accelParams;
/* The offsets of each file */
ctx->offsets = (size_t*)calloc((nbSamples + 1), sizeof(size_t));
if (ctx->offsets == NULL) {
DISPLAYLEVEL(1, "Failed to allocate scratch buffers \n");
FASTCOVER_ctx_destroy(ctx);
return ERROR(memory_allocation);
}
/* Fill offsets from the samplesSizes */
{ U32 i;
ctx->offsets[0] = 0;
assert(nbSamples >= 5);
for (i = 1; i <= nbSamples; ++i) {
ctx->offsets[i] = ctx->offsets[i - 1] + samplesSizes[i - 1];
}
}
/* Initialize frequency array of size 2^f */
ctx->freqs = (U32*)calloc(((U64)1 << f), sizeof(U32));
if (ctx->freqs == NULL) {
DISPLAYLEVEL(1, "Failed to allocate frequency table \n");
FASTCOVER_ctx_destroy(ctx);
return ERROR(memory_allocation);
}
DISPLAYLEVEL(2, "Computing frequencies\n");
FASTCOVER_computeFrequency(ctx->freqs, ctx);
return 0;
}
/**
* Given the prepared context build the dictionary.
*/
static size_t
FASTCOVER_buildDictionary(const FASTCOVER_ctx_t* ctx,
U32* freqs,
void* dictBuffer, size_t dictBufferCapacity,
ZDICT_cover_params_t parameters,
U16* segmentFreqs)
{
BYTE *const dict = (BYTE *)dictBuffer;
size_t tail = dictBufferCapacity;
/* Divide the data into epochs. We will select one segment from each epoch. */
const COVER_epoch_info_t epochs = COVER_computeEpochs(
(U32)dictBufferCapacity, (U32)ctx->nbDmers, parameters.k, 1);
const size_t maxZeroScoreRun = 10;
size_t zeroScoreRun = 0;
size_t epoch;
DISPLAYLEVEL(2, "Breaking content into %u epochs of size %u\n",
(U32)epochs.num, (U32)epochs.size);
/* Loop through the epochs until there are no more segments or the dictionary
* is full.
*/
for (epoch = 0; tail > 0; epoch = (epoch + 1) % epochs.num) {
const U32 epochBegin = (U32)(epoch * epochs.size);
const U32 epochEnd = epochBegin + epochs.size;
size_t segmentSize;
/* Select a segment */
COVER_segment_t segment = FASTCOVER_selectSegment(
ctx, freqs, epochBegin, epochEnd, parameters, segmentFreqs);
/* If the segment covers no dmers, then we are out of content.
* There may be new content in other epochs, so continue for some time.
*/
if (segment.score == 0) {
if (++zeroScoreRun >= maxZeroScoreRun) {
break;
}
continue;
}
zeroScoreRun = 0;
/* Trim the segment if necessary and if it is too small then we are done */
segmentSize = MIN(segment.end - segment.begin + parameters.d - 1, tail);
if (segmentSize < parameters.d) {
break;
}
/* We fill the dictionary from the back to allow the best segments to be
* referenced with the smallest offsets.
*/
tail -= segmentSize;
memcpy(dict + tail, ctx->samples + segment.begin, segmentSize);
DISPLAYUPDATE(
2, "\r%u%% ",
(unsigned)(((dictBufferCapacity - tail) * 100) / dictBufferCapacity));
}
DISPLAYLEVEL(2, "\r%79s\r", "");
return tail;
}
/**
* Parameters for FASTCOVER_tryParameters().
*/
typedef struct FASTCOVER_tryParameters_data_s {
const FASTCOVER_ctx_t* ctx;
COVER_best_t* best;
size_t dictBufferCapacity;
ZDICT_cover_params_t parameters;
} FASTCOVER_tryParameters_data_t;
/**
* Tries a set of parameters and updates the COVER_best_t with the results.
* This function is thread safe if zstd is compiled with multithreaded support.
* It takes its parameters as an *OWNING* opaque pointer to support threading.
*/
static void FASTCOVER_tryParameters(void* opaque)
{
/* Save parameters as local variables */
FASTCOVER_tryParameters_data_t *const data = (FASTCOVER_tryParameters_data_t*)opaque;
const FASTCOVER_ctx_t *const ctx = data->ctx;
const ZDICT_cover_params_t parameters = data->parameters;
size_t dictBufferCapacity = data->dictBufferCapacity;
size_t totalCompressedSize = ERROR(GENERIC);
/* Initialize array to keep track of frequency of dmer within activeSegment */
U16* segmentFreqs = (U16*)calloc(((U64)1 << ctx->f), sizeof(U16));
/* Allocate space for hash table, dict, and freqs */
BYTE *const dict = (BYTE*)malloc(dictBufferCapacity);
COVER_dictSelection_t selection = COVER_dictSelectionError(ERROR(GENERIC));
U32* freqs = (U32*) malloc(((U64)1 << ctx->f) * sizeof(U32));
if (!segmentFreqs || !dict || !freqs) {
DISPLAYLEVEL(1, "Failed to allocate buffers: out of memory\n");
goto _cleanup;
}
/* Copy the frequencies because we need to modify them */
memcpy(freqs, ctx->freqs, ((U64)1 << ctx->f) * sizeof(U32));
/* Build the dictionary */
{ const size_t tail = FASTCOVER_buildDictionary(ctx, freqs, dict, dictBufferCapacity,
parameters, segmentFreqs);
const unsigned nbFinalizeSamples = (unsigned)(ctx->nbTrainSamples * ctx->accelParams.finalize / 100);
selection = COVER_selectDict(dict + tail, dictBufferCapacity, dictBufferCapacity - tail,
ctx->samples, ctx->samplesSizes, nbFinalizeSamples, ctx->nbTrainSamples, ctx->nbSamples, parameters, ctx->offsets,
totalCompressedSize);
if (COVER_dictSelectionIsError(selection)) {
DISPLAYLEVEL(1, "Failed to select dictionary\n");
goto _cleanup;
}
}
_cleanup:
free(dict);
COVER_best_finish(data->best, parameters, selection);
free(data);
free(segmentFreqs);
COVER_dictSelectionFree(selection);
free(freqs);
}
static void
FASTCOVER_convertToCoverParams(ZDICT_fastCover_params_t fastCoverParams,
ZDICT_cover_params_t* coverParams)
{
coverParams->k = fastCoverParams.k;
coverParams->d = fastCoverParams.d;
coverParams->steps = fastCoverParams.steps;
coverParams->nbThreads = fastCoverParams.nbThreads;
coverParams->splitPoint = fastCoverParams.splitPoint;
coverParams->zParams = fastCoverParams.zParams;
coverParams->shrinkDict = fastCoverParams.shrinkDict;
}
static void
FASTCOVER_convertToFastCoverParams(ZDICT_cover_params_t coverParams,
ZDICT_fastCover_params_t* fastCoverParams,
unsigned f, unsigned accel)
{
fastCoverParams->k = coverParams.k;
fastCoverParams->d = coverParams.d;
fastCoverParams->steps = coverParams.steps;
fastCoverParams->nbThreads = coverParams.nbThreads;
fastCoverParams->splitPoint = coverParams.splitPoint;
fastCoverParams->f = f;
fastCoverParams->accel = accel;
fastCoverParams->zParams = coverParams.zParams;
fastCoverParams->shrinkDict = coverParams.shrinkDict;
}
ZDICTLIB_STATIC_API size_t
ZDICT_trainFromBuffer_fastCover(void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer,
const size_t* samplesSizes, unsigned nbSamples,
ZDICT_fastCover_params_t parameters)
{
BYTE* const dict = (BYTE*)dictBuffer;
FASTCOVER_ctx_t ctx;
ZDICT_cover_params_t coverParams;
FASTCOVER_accel_t accelParams;
/* Initialize global data */
g_displayLevel = (int)parameters.zParams.notificationLevel;
/* Force splitPoint to 1.0 (no train/test split here); assign default f and accel if not provided */
parameters.splitPoint = 1.0;
parameters.f = parameters.f == 0 ? DEFAULT_F : parameters.f;
parameters.accel = parameters.accel == 0 ? DEFAULT_ACCEL : parameters.accel;
/* Convert to cover parameter */
memset(&coverParams, 0 , sizeof(coverParams));
FASTCOVER_convertToCoverParams(parameters, &coverParams);
/* Checks */
if (!FASTCOVER_checkParameters(coverParams, dictBufferCapacity, parameters.f,
parameters.accel)) {
DISPLAYLEVEL(1, "FASTCOVER parameters incorrect\n");
return ERROR(parameter_outOfBound);
}
if (nbSamples == 0) {
DISPLAYLEVEL(1, "FASTCOVER must have at least one input file\n");
return ERROR(srcSize_wrong);
}
if (dictBufferCapacity < ZDICT_DICTSIZE_MIN) {
DISPLAYLEVEL(1, "dictBufferCapacity must be at least %u\n",
ZDICT_DICTSIZE_MIN);
return ERROR(dstSize_tooSmall);
}
/* Assign the corresponding FASTCOVER_accel_t to accelParams */
accelParams = FASTCOVER_defaultAccelParameters[parameters.accel];
/* Initialize context */
{
size_t const initVal = FASTCOVER_ctx_init(&ctx, samplesBuffer, samplesSizes, nbSamples,
coverParams.d, parameters.splitPoint, parameters.f,
accelParams);
if (ZSTD_isError(initVal)) {
DISPLAYLEVEL(1, "Failed to initialize context\n");
return initVal;
}
}
COVER_warnOnSmallCorpus(dictBufferCapacity, ctx.nbDmers, g_displayLevel);
/* Build the dictionary */
DISPLAYLEVEL(2, "Building dictionary\n");
{
/* Initialize array to keep track of frequency of dmer within activeSegment */
U16* segmentFreqs = (U16 *)calloc(((U64)1 << parameters.f), sizeof(U16));
const size_t tail = FASTCOVER_buildDictionary(&ctx, ctx.freqs, dictBuffer,
dictBufferCapacity, coverParams, segmentFreqs);
const unsigned nbFinalizeSamples = (unsigned)(ctx.nbTrainSamples * ctx.accelParams.finalize / 100);
const size_t dictionarySize = ZDICT_finalizeDictionary(
dict, dictBufferCapacity, dict + tail, dictBufferCapacity - tail,
samplesBuffer, samplesSizes, nbFinalizeSamples, coverParams.zParams);
if (!ZSTD_isError(dictionarySize)) {
DISPLAYLEVEL(2, "Constructed dictionary of size %u\n",
(unsigned)dictionarySize);
}
FASTCOVER_ctx_destroy(&ctx);
free(segmentFreqs);
return dictionarySize;
}
}
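/*
 * Illustrative usage sketch (the buffer names and parameter values below are
 * hypothetical, not recommendations):
 *
 *   ZDICT_fastCover_params_t params;
 *   memset(&params, 0, sizeof(params));
 *   params.k = 200; params.d = 8;        // required cover parameters
 *   params.f = 20;  params.accel = 1;    // 0 would select the defaults
 *   size_t const dictSize = ZDICT_trainFromBuffer_fastCover(
 *           dictBuffer, dictCapacity,
 *           samplesBuffer, samplesSizes, nbSamples, params);
 *   if (ZDICT_isError(dictSize)) { ...handle error... }
 */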
ZDICTLIB_STATIC_API size_t
ZDICT_optimizeTrainFromBuffer_fastCover(
void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer,
const size_t* samplesSizes, unsigned nbSamples,
ZDICT_fastCover_params_t* parameters)
{
ZDICT_cover_params_t coverParams;
FASTCOVER_accel_t accelParams;
/* constants */
const unsigned nbThreads = parameters->nbThreads;
const double splitPoint =
parameters->splitPoint <= 0.0 ? FASTCOVER_DEFAULT_SPLITPOINT : parameters->splitPoint;
const unsigned kMinD = parameters->d == 0 ? 6 : parameters->d;
const unsigned kMaxD = parameters->d == 0 ? 8 : parameters->d;
const unsigned kMinK = parameters->k == 0 ? 50 : parameters->k;
const unsigned kMaxK = parameters->k == 0 ? 2000 : parameters->k;
const unsigned kSteps = parameters->steps == 0 ? 40 : parameters->steps;
const unsigned kStepSize = MAX((kMaxK - kMinK) / kSteps, 1);
const unsigned kIterations =
(1 + (kMaxD - kMinD) / 2) * (1 + (kMaxK - kMinK) / kStepSize);
const unsigned f = parameters->f == 0 ? DEFAULT_F : parameters->f;
const unsigned accel = parameters->accel == 0 ? DEFAULT_ACCEL : parameters->accel;
const unsigned shrinkDict = 0;
/* Local variables */
const int displayLevel = (int)parameters->zParams.notificationLevel;
unsigned iteration = 1;
unsigned d;
unsigned k;
COVER_best_t best;
POOL_ctx *pool = NULL;
int warned = 0;
/* Checks */
if (splitPoint <= 0 || splitPoint > 1) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Incorrect splitPoint\n");
return ERROR(parameter_outOfBound);
}
if (accel == 0 || accel > FASTCOVER_MAX_ACCEL) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Incorrect accel\n");
return ERROR(parameter_outOfBound);
}
if (kMinK < kMaxD || kMaxK < kMinK) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Incorrect k\n");
return ERROR(parameter_outOfBound);
}
if (nbSamples == 0) {
LOCALDISPLAYLEVEL(displayLevel, 1, "FASTCOVER must have at least one input file\n");
return ERROR(srcSize_wrong);
}
if (dictBufferCapacity < ZDICT_DICTSIZE_MIN) {
LOCALDISPLAYLEVEL(displayLevel, 1, "dictBufferCapacity must be at least %u\n",
ZDICT_DICTSIZE_MIN);
return ERROR(dstSize_tooSmall);
}
if (nbThreads > 1) {
pool = POOL_create(nbThreads, 1);
if (!pool) {
return ERROR(memory_allocation);
}
}
/* Initialization */
COVER_best_init(&best);
memset(&coverParams, 0 , sizeof(coverParams));
FASTCOVER_convertToCoverParams(*parameters, &coverParams);
accelParams = FASTCOVER_defaultAccelParameters[accel];
/* Turn down global display level to clean up display at level 2 and below */
g_displayLevel = displayLevel == 0 ? 0 : displayLevel - 1;
/* Loop through d first because each new value needs a new context */
LOCALDISPLAYLEVEL(displayLevel, 2, "Trying %u different sets of parameters\n",
kIterations);
for (d = kMinD; d <= kMaxD; d += 2) {
/* Initialize the context for this value of d */
FASTCOVER_ctx_t ctx;
LOCALDISPLAYLEVEL(displayLevel, 3, "d=%u\n", d);
{
size_t const initVal = FASTCOVER_ctx_init(&ctx, samplesBuffer, samplesSizes, nbSamples, d, splitPoint, f, accelParams);
if (ZSTD_isError(initVal)) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Failed to initialize context\n");
COVER_best_destroy(&best);
POOL_free(pool);
return initVal;
}
}
if (!warned) {
COVER_warnOnSmallCorpus(dictBufferCapacity, ctx.nbDmers, displayLevel);
warned = 1;
}
/* Loop through k reusing the same context */
for (k = kMinK; k <= kMaxK; k += kStepSize) {
/* Prepare the arguments */
FASTCOVER_tryParameters_data_t *data = (FASTCOVER_tryParameters_data_t *)malloc(
sizeof(FASTCOVER_tryParameters_data_t));
LOCALDISPLAYLEVEL(displayLevel, 3, "k=%u\n", k);
if (!data) {
LOCALDISPLAYLEVEL(displayLevel, 1, "Failed to allocate parameters\n");
COVER_best_destroy(&best);
FASTCOVER_ctx_destroy(&ctx);
POOL_free(pool);
return ERROR(memory_allocation);
}
data->ctx = &ctx;
data->best = &best;
data->dictBufferCapacity = dictBufferCapacity;
data->parameters = coverParams;
data->parameters.k = k;
data->parameters.d = d;
data->parameters.splitPoint = splitPoint;
data->parameters.steps = kSteps;
data->parameters.shrinkDict = shrinkDict;
data->parameters.zParams.notificationLevel = (unsigned)g_displayLevel;
/* Check the parameters */
if (!FASTCOVER_checkParameters(data->parameters, dictBufferCapacity,
data->ctx->f, accel)) {
DISPLAYLEVEL(1, "FASTCOVER parameters incorrect\n");
free(data);
continue;
}
/* Call the function and pass ownership of data to it */
COVER_best_start(&best);
if (pool) {
POOL_add(pool, &FASTCOVER_tryParameters, data);
} else {
FASTCOVER_tryParameters(data);
}
/* Print status */
LOCALDISPLAYUPDATE(displayLevel, 2, "\r%u%% ",
(unsigned)((iteration * 100) / kIterations));
++iteration;
}
COVER_best_wait(&best);
FASTCOVER_ctx_destroy(&ctx);
}
LOCALDISPLAYLEVEL(displayLevel, 2, "\r%79s\r", "");
/* Fill the output buffer and parameters with output of the best parameters */
{
const size_t dictSize = best.dictSize;
if (ZSTD_isError(best.compressedSize)) {
const size_t compressedSize = best.compressedSize;
COVER_best_destroy(&best);
POOL_free(pool);
return compressedSize;
}
FASTCOVER_convertToFastCoverParams(best.parameters, parameters, f, accel);
memcpy(dictBuffer, best.dict, dictSize);
COVER_best_destroy(&best);
POOL_free(pool);
return dictSize;
}
}
/**** ended inlining dictBuilder/fastcover.c ****/
/**** start inlining dictBuilder/zdict.c ****/
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
* All rights reserved.
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/
/*-**************************************
* Tuning parameters
****************************************/
#define MINRATIO 4 /* minimum nb of occurrences to be selected in dictionary */
#define ZDICT_MAX_SAMPLES_SIZE (2000U << 20)
#define ZDICT_MIN_SAMPLES_SIZE (ZDICT_CONTENTSIZE_MIN * MINRATIO)
/*-**************************************
* Compiler Options
****************************************/
/* Unix Large Files support (>4GB) */
#define _FILE_OFFSET_BITS 64
#if (defined(__sun__) && (!defined(__LP64__))) /* Sun Solaris 32-bits requires specific definitions */
# ifndef _LARGEFILE_SOURCE
# define _LARGEFILE_SOURCE
# endif
#elif ! defined(__LP64__) /* No point defining Large file for 64 bit */
# ifndef _LARGEFILE64_SOURCE
# define _LARGEFILE64_SOURCE
# endif
#endif
/*-*************************************
* Dependencies
***************************************/
#include <stdlib.h> /* malloc, free */
#include <string.h> /* memset */
#include <stdio.h> /* fprintf, fopen, ftello64 */
#include <time.h> /* clock */
#ifndef ZDICT_STATIC_LINKING_ONLY
# define ZDICT_STATIC_LINKING_ONLY
#endif
/**** skipping file: ../common/mem.h ****/
/**** skipping file: ../common/fse.h ****/
/**** skipping file: ../common/huf.h ****/
/**** skipping file: ../common/zstd_internal.h ****/
/**** skipping file: ../common/xxhash.h ****/
/**** skipping file: ../compress/zstd_compress_internal.h ****/
/**** skipping file: ../zdict.h ****/
/**** skipping file: divsufsort.h ****/
/**** skipping file: ../common/bits.h ****/
/*-*************************************
* Constants
***************************************/
#define KB *(1 <<10)
#define MB *(1 <<20)
#define GB *(1U<<30)
#define DICTLISTSIZE_DEFAULT 10000
#define NOISELENGTH 32
static const U32 g_selectivity_default = 9;
/*-*************************************
* Console display
***************************************/
#undef DISPLAY
#define DISPLAY(...) do { fprintf(stderr, __VA_ARGS__); fflush( stderr ); } while (0)
#undef DISPLAYLEVEL
#define DISPLAYLEVEL(l, ...) do { if (notificationLevel>=l) { DISPLAY(__VA_ARGS__); } } while (0) /* 0 : no display; 1: errors; 2: default; 3: details; 4: debug */
static clock_t ZDICT_clockSpan(clock_t nPrevious) { return clock() - nPrevious; }
static void ZDICT_printHex(const void* ptr, size_t length)
{
const BYTE* const b = (const BYTE*)ptr;
size_t u;
for (u=0; u<length; u++) {
BYTE c = b[u];
if (c<32 || c>126) c = '.'; /* non-printable char */
DISPLAY("%c", c);
}
}
/*-********************************************************
* Helper functions
**********************************************************/
unsigned ZDICT_isError(size_t errorCode) { return ERR_isError(errorCode); }
const char* ZDICT_getErrorName(size_t errorCode) { return ERR_getErrorName(errorCode); }
unsigned ZDICT_getDictID(const void* dictBuffer, size_t dictSize)
{
if (dictSize < 8) return 0;
if (MEM_readLE32(dictBuffer) != ZSTD_MAGIC_DICTIONARY) return 0;
return MEM_readLE32((const char*)dictBuffer + 4);
}
size_t ZDICT_getDictHeaderSize(const void* dictBuffer, size_t dictSize)
{
size_t headerSize;
if (dictSize <= 8 || MEM_readLE32(dictBuffer) != ZSTD_MAGIC_DICTIONARY) return ERROR(dictionary_corrupted);
{ ZSTD_compressedBlockState_t* bs = (ZSTD_compressedBlockState_t*)malloc(sizeof(ZSTD_compressedBlockState_t));
U32* wksp = (U32*)malloc(HUF_WORKSPACE_SIZE);
if (!bs || !wksp) {
headerSize = ERROR(memory_allocation);
} else {
ZSTD_reset_compressedBlockState(bs);
headerSize = ZSTD_loadCEntropy(bs, wksp, dictBuffer, dictSize);
}
free(bs);
free(wksp);
}
return headerSize;
}
/*-********************************************************
* Dictionary training functions
**********************************************************/
/*! ZDICT_count() :
Count the number of common bytes between 2 pointers.
Note : this function presumes the end of the buffer is followed by a noisy guard band.
*/
static size_t ZDICT_count(const void* pIn, const void* pMatch)
{
const char* const pStart = (const char*)pIn;
for (;;) {
size_t const diff = MEM_readST(pMatch) ^ MEM_readST(pIn);
if (!diff) {
pIn = (const char*)pIn+sizeof(size_t);
pMatch = (const char*)pMatch+sizeof(size_t);
continue;
}
pIn = (const char*)pIn+ZSTD_NbCommonBytes(diff);
return (size_t)((const char*)pIn - pStart);
}
}
typedef struct {
U32 pos;
U32 length;
U32 savings;
} dictItem;
static void ZDICT_initDictItem(dictItem* d)
{
d->pos = 1;
d->length = 0;
d->savings = (U32)(-1);
}
#define LLIMIT 64 /* heuristic determined experimentally */
#define MINMATCHLENGTH 7 /* heuristic determined experimentally */
static dictItem ZDICT_analyzePos(
BYTE* doneMarks,
const int* suffix, U32 start,
const void* buffer, U32 minRatio, U32 notificationLevel)
{
U32 lengthList[LLIMIT] = {0};
U32 cumulLength[LLIMIT] = {0};
U32 savings[LLIMIT] = {0};
const BYTE* b = (const BYTE*)buffer;
size_t maxLength = LLIMIT;
size_t pos = (size_t)suffix[start];
U32 end = start;
dictItem solution;
/* init */
memset(&solution, 0, sizeof(solution));
doneMarks[pos] = 1;
/* trivial repetition cases */
if ( (MEM_read16(b+pos+0) == MEM_read16(b+pos+2))
||(MEM_read16(b+pos+1) == MEM_read16(b+pos+3))
||(MEM_read16(b+pos+2) == MEM_read16(b+pos+4)) ) {
/* skip and mark segment */
U16 const pattern16 = MEM_read16(b+pos+4);
U32 u, patternEnd = 6;
while (MEM_read16(b+pos+patternEnd) == pattern16) patternEnd+=2 ;
if (b[pos+patternEnd] == b[pos+patternEnd-1]) patternEnd++;
for (u=1; u<patternEnd; u++)
doneMarks[pos+u] = 1;
return solution;
}
/* look forward */
{ size_t length;
do {
end++;
length = ZDICT_count(b + pos, b + suffix[end]);
} while (length >= MINMATCHLENGTH);
}
/* look backward */
{ size_t length;
do {
length = ZDICT_count(b + pos, b + *(suffix+start-1));
if (length >=MINMATCHLENGTH) start--;
} while(length >= MINMATCHLENGTH);
}
/* exit if the minimum nb of repetitions was not found */
if (end-start < minRatio) {
U32 idx;
for(idx=start; idx<end; idx++)
doneMarks[suffix[idx]] = 1;
return solution;
}
{ int i;
U32 mml;
U32 refinedStart = start;
U32 refinedEnd = end;
DISPLAYLEVEL(4, "\n");
DISPLAYLEVEL(4, "found %3u matches of length >= %i at pos %7u ", (unsigned)(end-start), MINMATCHLENGTH, (unsigned)pos);
DISPLAYLEVEL(4, "\n");
for (mml = MINMATCHLENGTH ; ; mml++) {
BYTE currentChar = 0;
U32 currentCount = 0;
U32 currentID = refinedStart;
U32 id;
U32 selectedCount = 0;
U32 selectedID = currentID;
for (id =refinedStart; id < refinedEnd; id++) {
if (b[suffix[id] + mml] != currentChar) {
if (currentCount > selectedCount) {
selectedCount = currentCount;
selectedID = currentID;
}
currentID = id;
currentChar = b[ suffix[id] + mml];
currentCount = 0;
}
currentCount ++;
}
if (currentCount > selectedCount) { /* for last */
selectedCount = currentCount;
selectedID = currentID;
}
if (selectedCount < minRatio)
break;
refinedStart = selectedID;
refinedEnd = refinedStart + selectedCount;
}
/* evaluate gain based on new dict */
start = refinedStart;
pos = suffix[refinedStart];
end = start;
memset(lengthList, 0, sizeof(lengthList));
/* look forward */
{ size_t length;
do {
end++;
length = ZDICT_count(b + pos, b + suffix[end]);
if (length >= LLIMIT) length = LLIMIT-1;
lengthList[length]++;
} while (length >=MINMATCHLENGTH);
}
/* look backward */
{ size_t length = MINMATCHLENGTH;
while ((length >= MINMATCHLENGTH) & (start > 0)) {
length = ZDICT_count(b + pos, b + suffix[start - 1]);
if (length >= LLIMIT) length = LLIMIT - 1;
lengthList[length]++;
if (length >= MINMATCHLENGTH) start--;
}
}
/* largest useful length */
memset(cumulLength, 0, sizeof(cumulLength));
cumulLength[maxLength-1] = lengthList[maxLength-1];
for (i=(int)(maxLength-2); i>=0; i--)
cumulLength[i] = cumulLength[i+1] + lengthList[i];
for (i=LLIMIT-1; i>=MINMATCHLENGTH; i--) if (cumulLength[i]>=minRatio) break;
maxLength = i;
/* reduce maxLength when the end of the match runs into repetitive data */
{ U32 l = (U32)maxLength;
BYTE const c = b[pos + maxLength-1];
while (b[pos+l-2]==c) l--;
maxLength = l;
}
if (maxLength < MINMATCHLENGTH) return solution; /* skip : no long-enough solution */
/* calculate savings */
savings[5] = 0;
for (i=MINMATCHLENGTH; i<=(int)maxLength; i++)
savings[i] = savings[i-1] + (lengthList[i] * (i-3));
DISPLAYLEVEL(4, "Selected dict at position %u, of length %u : saves %u (ratio: %.2f) \n",
(unsigned)pos, (unsigned)maxLength, (unsigned)savings[maxLength], (double)savings[maxLength] / (double)maxLength);
solution.pos = (U32)pos;
solution.length = (U32)maxLength;
solution.savings = savings[maxLength];
/* mark positions done */
{ U32 id;
for (id=start; id<end; id++) {
U32 p, pEnd, length;
U32 const testedPos = (U32)suffix[id];
if (testedPos == pos)
length = solution.length;
else {
length = (U32)ZDICT_count(b+pos, b+testedPos);
if (length > solution.length) length = solution.length;
}
pEnd = (U32)(testedPos + length);
for (p=testedPos; p<pEnd; p++)
doneMarks[p] = 1;
} } }
return solution;
}
static int isIncluded(const void* in, const void* container, size_t length)
{
const char* const ip = (const char*) in;
const char* const into = (const char*) container;
size_t u;
for (u=0; u<length; u++) { /* works because end of buffer is a noisy guard band */
if (ip[u] != into[u]) break;
}
return u==length;
}
/*! ZDICT_tryMerge() :
check if dictItem can be merged, do it if possible
@return : id of destination elt, 0 if not merged
*/
static U32 ZDICT_tryMerge(dictItem* table, dictItem elt, U32 eltNbToSkip, const void* buffer)
{
const U32 tableSize = table->pos;
const U32 eltEnd = elt.pos + elt.length;
const char* const buf = (const char*) buffer;
/* tail overlap */
U32 u; for (u=1; u<tableSize; u++) {
if (u==eltNbToSkip) continue;
if ((table[u].pos > elt.pos) && (table[u].pos <= eltEnd)) { /* overlap, existing > new */
/* append */
U32 const addedLength = table[u].pos - elt.pos;
table[u].length += addedLength;
table[u].pos = elt.pos;
table[u].savings += elt.savings * addedLength / elt.length; /* rough approx */
table[u].savings += elt.length / 8; /* rough approx bonus */
elt = table[u];
/* sort : improve rank */
while ((u>1) && (table[u-1].savings < elt.savings))
table[u] = table[u-1], u--;
table[u] = elt;
return u;
} }
/* front overlap */
for (u=1; u<tableSize; u++) {
if (u==eltNbToSkip) continue;
if ((table[u].pos + table[u].length >= elt.pos) && (table[u].pos < elt.pos)) { /* overlap, existing < new */
/* append */
int const addedLength = (int)eltEnd - (int)(table[u].pos + table[u].length);
table[u].savings += elt.length / 8; /* rough approx bonus */
if (addedLength > 0) { /* otherwise, elt fully included into existing */
table[u].length += addedLength;
table[u].savings += elt.savings * addedLength / elt.length; /* rough approx */
}
/* sort : improve rank */
elt = table[u];
while ((u>1) && (table[u-1].savings < elt.savings))
table[u] = table[u-1], u--;
table[u] = elt;
return u;
}
if (MEM_read64(buf + table[u].pos) == MEM_read64(buf + elt.pos + 1)) {
if (isIncluded(buf + table[u].pos, buf + elt.pos + 1, table[u].length)) {
size_t const addedLength = MAX( (int)elt.length - (int)table[u].length , 1 );
table[u].pos = elt.pos;
table[u].savings += (U32)(elt.savings * addedLength / elt.length);
table[u].length = MIN(elt.length, table[u].length + 1);
return u;
}
}
}
return 0;
}
static void ZDICT_removeDictItem(dictItem* table, U32 id)
{
/* convention : table[0].pos stores nb of elts */
U32 const max = table[0].pos;
U32 u;
if (!id) return; /* protection, should never happen */
for (u=id; u<max-1; u++)
table[u] = table[u+1];
table->pos--;
}
static void ZDICT_insertDictItem(dictItem* table, U32 maxSize, dictItem elt, const void* buffer)
{
/* merge if possible */
U32 mergeId = ZDICT_tryMerge(table, elt, 0, buffer);
if (mergeId) {
U32 newMerge = 1;
while (newMerge) {
newMerge = ZDICT_tryMerge(table, table[mergeId], mergeId, buffer);
if (newMerge) ZDICT_removeDictItem(table, mergeId);
mergeId = newMerge;
}
return;
}
/* insert */
{ U32 current;
U32 nextElt = table->pos;
if (nextElt >= maxSize) nextElt = maxSize-1;
current = nextElt-1;
while (table[current].savings < elt.savings) {
table[current+1] = table[current];
current--;
}
table[current+1] = elt;
table->pos = nextElt+1;
}
}
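/* note : ZDICT_insertDictItem() first tries to merge the candidate with an
 * overlapping entry; when a merge happens, it keeps re-merging the updated entry
 * until no further merge is possible. Otherwise the candidate is inserted at its
 * rank, keeping entries sorted by decreasing savings.
 * Hypothetical example : if table[1] covers positions 100..119 and the candidate
 * covers 110..139, table[1] is extended to cover 100..139 instead of adding
 * a new entry. */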
static U32 ZDICT_dictSize(const dictItem* dictList)
{
U32 u, dictSize = 0;
for (u=1; u<dictList[0].pos; u++)
dictSize += dictList[u].length;
return dictSize;
}
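/* note : ZDICT_trainBuffer_legacy() builds the list of candidate dictionary segments.
 * It sorts all suffixes of the (guard-banded) samples buffer with divsufsort(),
 * then walks the buffer position by position, calling ZDICT_analyzePos() on every
 * position not already covered, and inserts each non-empty solution into dictList
 * through ZDICT_insertDictItem(). dictList remains sorted by estimated savings. */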
static size_t ZDICT_trainBuffer_legacy(dictItem* dictList, U32 dictListSize,
const void* const buffer, size_t bufferSize, /* buffer must end with noisy guard band */
const size_t* fileSizes, unsigned nbFiles,
unsigned minRatio, U32 notificationLevel)
{
int* const suffix0 = (int*)malloc((bufferSize+2)*sizeof(*suffix0));
int* const suffix = suffix0+1;
U32* reverseSuffix = (U32*)malloc((bufferSize)*sizeof(*reverseSuffix));
BYTE* doneMarks = (BYTE*)malloc((bufferSize+16)*sizeof(*doneMarks)); /* +16 for overflow security */
U32* filePos = (U32*)malloc(nbFiles * sizeof(*filePos));
size_t result = 0;
clock_t displayClock = 0;
clock_t const refreshRate = CLOCKS_PER_SEC * 3 / 10;
# undef DISPLAYUPDATE
# define DISPLAYUPDATE(l, ...) \
do { \
if (notificationLevel>=l) { \
if (ZDICT_clockSpan(displayClock) > refreshRate) { \
displayClock = clock(); \
DISPLAY(__VA_ARGS__); \
} \
if (notificationLevel>=4) fflush(stderr); \
} \
} while (0)
/* init */
DISPLAYLEVEL(2, "\r%70s\r", ""); /* clean display line */
if (!suffix0 || !reverseSuffix || !doneMarks || !filePos) {
result = ERROR(memory_allocation);
goto _cleanup;
}
if (minRatio < MINRATIO) minRatio = MINRATIO;
memset(doneMarks, 0, bufferSize+16);
/* limit sample set size (divsufsort limitation)*/
if (bufferSize > ZDICT_MAX_SAMPLES_SIZE) DISPLAYLEVEL(3, "sample set too large : reduced to %u MB ...\n", (unsigned)(ZDICT_MAX_SAMPLES_SIZE>>20));
while (bufferSize > ZDICT_MAX_SAMPLES_SIZE) bufferSize -= fileSizes[--nbFiles];
/* sort */
DISPLAYLEVEL(2, "sorting %u files of total size %u MB ...\n", nbFiles, (unsigned)(bufferSize>>20));
{ int const divSufSortResult = divsufsort((const unsigned char*)buffer, suffix, (int)bufferSize, 0);
if (divSufSortResult != 0) { result = ERROR(GENERIC); goto _cleanup; }
}
suffix[bufferSize] = (int)bufferSize; /* leads into noise */
suffix0[0] = (int)bufferSize; /* leads into noise */
/* build reverse suffix sort */
{ size_t pos;
for (pos=0; pos < bufferSize; pos++)
reverseSuffix[suffix[pos]] = (U32)pos;
/* note filePos tracks borders between samples.
It's not used at this stage, but planned to become useful in a later update */
filePos[0] = 0;
for (pos=1; pos<nbFiles; pos++)
filePos[pos] = (U32)(filePos[pos-1] + fileSizes[pos-1]);
}
DISPLAYLEVEL(2, "finding patterns ... \n");
DISPLAYLEVEL(3, "minimum ratio : %u \n", minRatio);
{ U32 cursor; for (cursor=0; cursor < bufferSize; ) {
dictItem solution;
if (doneMarks[cursor]) { cursor++; continue; }
solution = ZDICT_analyzePos(doneMarks, suffix, reverseSuffix[cursor], buffer, minRatio, notificationLevel);
if (solution.length==0) { cursor++; continue; }
ZDICT_insertDictItem(dictList, dictListSize, solution, buffer);
cursor += solution.length;
DISPLAYUPDATE(2, "\r%4.2f %% \r", (double)cursor / (double)bufferSize * 100.0);
} }
_cleanup:
free(suffix0);
free(reverseSuffix);
free(doneMarks);
free(filePos);
return result;
}
static void ZDICT_fillNoise(void* buffer, size_t length)
{
unsigned const prime1 = 2654435761U;
unsigned const prime2 = 2246822519U;
unsigned acc = prime1;
size_t p=0;
for (p=0; p<length; p++) {
acc *= prime2;
((unsigned char*)buffer)[p] = (unsigned char)(acc >> 21);
}
}
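/* note : ZDICT_fillNoise() is a small deterministic pseudo-random generator
 * (multiplicative accumulator over two large primes, keeping bits from acc >> 21).
 * It is used by ZDICT_trainFromBuffer_legacy() below to append a NOISELENGTH-byte
 * guard band after the samples, so that ZDICT_count(), which does not check bounds,
 * stops on the noise instead of matching far past the end of the samples. */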
typedef struct
{
ZSTD_CDict* dict; /* dictionary */
ZSTD_CCtx* zc; /* working context */
void* workPlace; /* must be ZSTD_BLOCKSIZE_MAX allocated */
} EStats_ress_t;
#define MAXREPOFFSET 1024
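/* note : ZDICT_countEStats() compresses one sample (capped to a single block) with
 * the candidate dictionary, then walks the resulting sequence store to accumulate
 * statistics : literal byte counts, offset / match-length / literal-length code
 * counts, and a histogram of the first two match offsets (repcode candidates). */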
static void ZDICT_countEStats(EStats_ress_t esr, const ZSTD_parameters* params,
unsigned* countLit, unsigned* offsetcodeCount, unsigned* matchlengthCount, unsigned* litlengthCount, U32* repOffsets,
const void* src, size_t srcSize,
U32 notificationLevel)
{
size_t const blockSizeMax = MIN (ZSTD_BLOCKSIZE_MAX, 1 << params->cParams.windowLog);
size_t cSize;
if (srcSize > blockSizeMax) srcSize = blockSizeMax; /* protection vs large samples */
{ size_t const errorCode = ZSTD_compressBegin_usingCDict_deprecated(esr.zc, esr.dict);
if (ZSTD_isError(errorCode)) { DISPLAYLEVEL(1, "warning : ZSTD_compressBegin_usingCDict failed \n"); return; }
}
cSize = ZSTD_compressBlock_deprecated(esr.zc, esr.workPlace, ZSTD_BLOCKSIZE_MAX, src, srcSize);
if (ZSTD_isError(cSize)) { DISPLAYLEVEL(3, "warning : could not compress sample size %u \n", (unsigned)srcSize); return; }
if (cSize) { /* cSize==0 means the block is not compressible */
const SeqStore_t* const seqStorePtr = ZSTD_getSeqStore(esr.zc);
/* literals stats */
{ const BYTE* bytePtr;
for(bytePtr = seqStorePtr->litStart; bytePtr < seqStorePtr->lit; bytePtr++)
countLit[*bytePtr]++;
}
/* seqStats */
{ U32 const nbSeq = (U32)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
ZSTD_seqToCodes(seqStorePtr);
{ const BYTE* codePtr = seqStorePtr->ofCode;
U32 u;
for (u=0; u<nbSeq; u++) offsetcodeCount[codePtr[u]]++;
}
{ const BYTE* codePtr = seqStorePtr->mlCode;
U32 u;
for (u=0; u<nbSeq; u++) matchlengthCount[codePtr[u]]++;
}
{ const BYTE* codePtr = seqStorePtr->llCode;
U32 u;
for (u=0; u<nbSeq; u++) litlengthCount[codePtr[u]]++;
}
if (nbSeq >= 2) { /* rep offsets */
const SeqDef* const seq = seqStorePtr->sequencesStart;
U32 offset1 = seq[0].offBase - ZSTD_REP_NUM;
U32 offset2 = seq[1].offBase - ZSTD_REP_NUM;
if (offset1 >= MAXREPOFFSET) offset1 = 0;
if (offset2 >= MAXREPOFFSET) offset2 = 0;
repOffsets[offset1] += 3;
repOffsets[offset2] += 1;
} } }
}
static size_t ZDICT_totalSampleSize(const size_t* fileSizes, unsigned nbFiles)
{
size_t total=0;
unsigned u;
for (u=0; u<nbFiles; u++) total += fileSizes[u];
return total;
}
typedef struct { U32 offset; U32 count; } offsetCount_t;
static void ZDICT_insertSortCount(offsetCount_t table[ZSTD_REP_NUM+1], U32 val, U32 count)
{
U32 u;
table[ZSTD_REP_NUM].offset = val;
table[ZSTD_REP_NUM].count = count;
for (u=ZSTD_REP_NUM; u>0; u--) {
offsetCount_t tmp;
if (table[u-1].count >= table[u].count) break;
tmp = table[u-1];
table[u-1] = table[u];
table[u] = tmp;
}
}
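/* note : ZDICT_insertSortCount() maintains, in table[0..ZSTD_REP_NUM-1], the
 * ZSTD_REP_NUM offsets observed with the highest counts : the new (offset, count)
 * pair is written into the spare slot table[ZSTD_REP_NUM], then bubbled up while
 * its count exceeds the entry above it. */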
/* ZDICT_flatLit() :
* rewrite `countLit` to contain a mostly flat but still compressible distribution of literals.
* necessary to avoid generating a non-compressible distribution that HUF_writeCTable() cannot encode.
*/
static void ZDICT_flatLit(unsigned* countLit)
{
int u;
for (u=1; u<256; u++) countLit[u] = 2;
countLit[0] = 4;
countLit[253] = 1;
countLit[254] = 1;
}
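/* note : the distribution produced by ZDICT_flatLit() is mostly flat (count 2 for
 * almost every symbol) but not perfectly uniform (counts 4 and 1 for a few symbols),
 * so the rebuilt Huffman table uses maxNbBits == 9 (see the assert below) and stays
 * encodable by HUF_writeCTable(), unlike a perfectly flat 8-bit table. */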
#define OFFCODE_MAX 30 /* only applicable to first block */
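/* note : ZDICT_analyzeEntropy() produces the entropy section of the dictionary :
 * it gathers statistics over all samples with ZDICT_countEStats(), builds a Huffman
 * table for literals and FSE normalized counts for offset, match-length and
 * literal-length codes, writes those tables into dstBuffer, and finishes with the
 * three 4-byte repcode start values.
 * @return : number of bytes written into dstBuffer, or an error code. */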
static size_t ZDICT_analyzeEntropy(void* dstBuffer, size_t maxDstSize,
int compressionLevel,
const void* srcBuffer, const size_t* fileSizes, unsigned nbFiles,
const void* dictBuffer, size_t dictBufferSize,
unsigned notificationLevel)
{
unsigned countLit[256];
HUF_CREATE_STATIC_CTABLE(hufTable, 255);
unsigned offcodeCount[OFFCODE_MAX+1];
short offcodeNCount[OFFCODE_MAX+1];
U32 offcodeMax = ZSTD_highbit32((U32)(dictBufferSize + 128 KB));
unsigned matchLengthCount[MaxML+1];
short matchLengthNCount[MaxML+1];
unsigned litLengthCount[MaxLL+1];
short litLengthNCount[MaxLL+1];
U32 repOffset[MAXREPOFFSET];
offsetCount_t bestRepOffset[ZSTD_REP_NUM+1];
EStats_ress_t esr = { NULL, NULL, NULL };
ZSTD_parameters params;
U32 u, huffLog = 11, Offlog = OffFSELog, mlLog = MLFSELog, llLog = LLFSELog, total;
size_t pos = 0, errorCode;
size_t eSize = 0;
size_t const totalSrcSize = ZDICT_totalSampleSize(fileSizes, nbFiles);
size_t const averageSampleSize = totalSrcSize / (nbFiles + !nbFiles);
BYTE* dstPtr = (BYTE*)dstBuffer;
U32 wksp[HUF_CTABLE_WORKSPACE_SIZE_U32];
/* init */
DEBUGLOG(4, "ZDICT_analyzeEntropy");
if (offcodeMax>OFFCODE_MAX) { eSize = ERROR(dictionaryCreation_failed); goto _cleanup; } /* too large dictionary */
for (u=0; u<256; u++) countLit[u] = 1; /* any character must be described */
for (u=0; u<=offcodeMax; u++) offcodeCount[u] = 1;
for (u=0; u<=MaxML; u++) matchLengthCount[u] = 1;
for (u=0; u<=MaxLL; u++) litLengthCount[u] = 1;
memset(repOffset, 0, sizeof(repOffset));
repOffset[1] = repOffset[4] = repOffset[8] = 1;
memset(bestRepOffset, 0, sizeof(bestRepOffset));
if (compressionLevel==0) compressionLevel = ZSTD_CLEVEL_DEFAULT;
params = ZSTD_getParams(compressionLevel, averageSampleSize, dictBufferSize);
esr.dict = ZSTD_createCDict_advanced(dictBuffer, dictBufferSize, ZSTD_dlm_byRef, ZSTD_dct_rawContent, params.cParams, ZSTD_defaultCMem);
esr.zc = ZSTD_createCCtx();
esr.workPlace = malloc(ZSTD_BLOCKSIZE_MAX);
if (!esr.dict || !esr.zc || !esr.workPlace) {
eSize = ERROR(memory_allocation);
DISPLAYLEVEL(1, "Not enough memory \n");
goto _cleanup;
}
/* collect stats on all samples */
for (u=0; u<nbFiles; u++) {
ZDICT_countEStats(esr, &params,
countLit, offcodeCount, matchLengthCount, litLengthCount, repOffset,
(const char*)srcBuffer + pos, fileSizes[u],
notificationLevel);
pos += fileSizes[u];
}
if (notificationLevel >= 4) {
/* writeStats */
DISPLAYLEVEL(4, "Offset Code Frequencies : \n");
for (u=0; u<=offcodeMax; u++) {
DISPLAYLEVEL(4, "%2u :%7u \n", u, offcodeCount[u]);
} }
/* analyze, build stats, starting with literals */
{ size_t maxNbBits = HUF_buildCTable_wksp(hufTable, countLit, 255, huffLog, wksp, sizeof(wksp));
if (HUF_isError(maxNbBits)) {
eSize = maxNbBits;
DISPLAYLEVEL(1, " HUF_buildCTable error \n");
goto _cleanup;
}
if (maxNbBits==8) { /* not compressible : will fail on HUF_writeCTable() */
DISPLAYLEVEL(2, "warning : pathological dataset : literals are not compressible : samples are noisy or too regular \n");
ZDICT_flatLit(countLit); /* replace distribution by a fake "mostly flat but still compressible" distribution, that HUF_writeCTable() can encode */
maxNbBits = HUF_buildCTable_wksp(hufTable, countLit, 255, huffLog, wksp, sizeof(wksp));
assert(maxNbBits==9);
}
huffLog = (U32)maxNbBits;
}
/* looking for most common first offsets */
{ U32 offset;
for (offset=1; offset<MAXREPOFFSET; offset++)
ZDICT_insertSortCount(bestRepOffset, offset, repOffset[offset]);
}
/* note : the result of this phase should be used to better appreciate the impact on statistics */
total=0; for (u=0; u<=offcodeMax; u++) total+=offcodeCount[u];
errorCode = FSE_normalizeCount(offcodeNCount, Offlog, offcodeCount, total, offcodeMax, /* useLowProbCount */ 1);
if (FSE_isError(errorCode)) {
eSize = errorCode;
DISPLAYLEVEL(1, "FSE_normalizeCount error with offcodeCount \n");
goto _cleanup;
}
Offlog = (U32)errorCode;
total=0; for (u=0; u<=MaxML; u++) total+=matchLengthCount[u];
errorCode = FSE_normalizeCount(matchLengthNCount, mlLog, matchLengthCount, total, MaxML, /* useLowProbCount */ 1);
if (FSE_isError(errorCode)) {
eSize = errorCode;
DISPLAYLEVEL(1, "FSE_normalizeCount error with matchLengthCount \n");
goto _cleanup;
}
mlLog = (U32)errorCode;
total=0; for (u=0; u<=MaxLL; u++) total+=litLengthCount[u];
errorCode = FSE_normalizeCount(litLengthNCount, llLog, litLengthCount, total, MaxLL, /* useLowProbCount */ 1);
if (FSE_isError(errorCode)) {
eSize = errorCode;
DISPLAYLEVEL(1, "FSE_normalizeCount error with litLengthCount \n");
goto _cleanup;
}
llLog = (U32)errorCode;
/* write result to buffer */
{ size_t const hhSize = HUF_writeCTable_wksp(dstPtr, maxDstSize, hufTable, 255, huffLog, wksp, sizeof(wksp));
if (HUF_isError(hhSize)) {
eSize = hhSize;
DISPLAYLEVEL(1, "HUF_writeCTable error \n");
goto _cleanup;
}
dstPtr += hhSize;
maxDstSize -= hhSize;
eSize += hhSize;
}
{ size_t const ohSize = FSE_writeNCount(dstPtr, maxDstSize, offcodeNCount, OFFCODE_MAX, Offlog);
if (FSE_isError(ohSize)) {
eSize = ohSize;
DISPLAYLEVEL(1, "FSE_writeNCount error with offcodeNCount \n");
goto _cleanup;
}
dstPtr += ohSize;
maxDstSize -= ohSize;
eSize += ohSize;
}
{ size_t const mhSize = FSE_writeNCount(dstPtr, maxDstSize, matchLengthNCount, MaxML, mlLog);
if (FSE_isError(mhSize)) {
eSize = mhSize;
DISPLAYLEVEL(1, "FSE_writeNCount error with matchLengthNCount \n");
goto _cleanup;
}
dstPtr += mhSize;
maxDstSize -= mhSize;
eSize += mhSize;
}
{ size_t const lhSize = FSE_writeNCount(dstPtr, maxDstSize, litLengthNCount, MaxLL, llLog);
if (FSE_isError(lhSize)) {
eSize = lhSize;
DISPLAYLEVEL(1, "FSE_writeNCount error with litlengthNCount \n");
goto _cleanup;
}
dstPtr += lhSize;
maxDstSize -= lhSize;
eSize += lhSize;
}
if (maxDstSize<12) {
eSize = ERROR(dstSize_tooSmall);
DISPLAYLEVEL(1, "not enough space to write RepOffsets \n");
goto _cleanup;
}
# if 0
MEM_writeLE32(dstPtr+0, bestRepOffset[0].offset);
MEM_writeLE32(dstPtr+4, bestRepOffset[1].offset);
MEM_writeLE32(dstPtr+8, bestRepOffset[2].offset);
#else
/* at this stage, we don't use the result of "most common first offset",
* as the impact of statistics is not properly evaluated */
MEM_writeLE32(dstPtr+0, repStartValue[0]);
MEM_writeLE32(dstPtr+4, repStartValue[1]);
MEM_writeLE32(dstPtr+8, repStartValue[2]);
#endif
eSize += 12;
_cleanup:
ZSTD_freeCDict(esr.dict);
ZSTD_freeCCtx(esr.zc);
free(esr.workPlace);
return eSize;
}
/**
* @returns the maximum repcode value
*/
static U32 ZDICT_maxRep(U32 const reps[ZSTD_REP_NUM])
{
U32 maxRep = reps[0];
int r;
for (r = 1; r < ZSTD_REP_NUM; ++r)
maxRep = MAX(maxRep, reps[r]);
return maxRep;
}
size_t ZDICT_finalizeDictionary(void* dictBuffer, size_t dictBufferCapacity,
const void* customDictContent, size_t dictContentSize,
const void* samplesBuffer, const size_t* samplesSizes,
unsigned nbSamples, ZDICT_params_t params)
{
size_t hSize;
#define HBUFFSIZE 256 /* should prove large enough for all entropy headers */
BYTE header[HBUFFSIZE];
int const compressionLevel = (params.compressionLevel == 0) ? ZSTD_CLEVEL_DEFAULT : params.compressionLevel;
U32 const notificationLevel = params.notificationLevel;
/* The final dictionary content must be at least as large as the largest repcode */
size_t const minContentSize = (size_t)ZDICT_maxRep(repStartValue);
size_t paddingSize;
/* check conditions */
DEBUGLOG(4, "ZDICT_finalizeDictionary");
if (dictBufferCapacity < dictContentSize) return ERROR(dstSize_tooSmall);
if (dictBufferCapacity < ZDICT_DICTSIZE_MIN) return ERROR(dstSize_tooSmall);
/* dictionary header */
MEM_writeLE32(header, ZSTD_MAGIC_DICTIONARY);
{ U64 const randomID = XXH64(customDictContent, dictContentSize, 0);
U32 const compliantID = (randomID % ((1U<<31)-32768)) + 32768;
U32 const dictID = params.dictID ? params.dictID : compliantID;
MEM_writeLE32(header+4, dictID);
}
hSize = 8;
/* entropy tables */
DISPLAYLEVEL(2, "\r%70s\r", ""); /* clean display line */
DISPLAYLEVEL(2, "statistics ... \n");
{ size_t const eSize = ZDICT_analyzeEntropy(header+hSize, HBUFFSIZE-hSize,
compressionLevel,
samplesBuffer, samplesSizes, nbSamples,
customDictContent, dictContentSize,
notificationLevel);
if (ZDICT_isError(eSize)) return eSize;
hSize += eSize;
}
/* Shrink the content size if it doesn't fit in the buffer */
if (hSize + dictContentSize > dictBufferCapacity) {
dictContentSize = dictBufferCapacity - hSize;
}
/* Pad the dictionary content with zeros if it is too small */
if (dictContentSize < minContentSize) {
RETURN_ERROR_IF(hSize + minContentSize > dictBufferCapacity, dstSize_tooSmall,
"dictBufferCapacity too small to fit max repcode");
paddingSize = minContentSize - dictContentSize;
} else {
paddingSize = 0;
}
{
size_t const dictSize = hSize + paddingSize + dictContentSize;
/* The dictionary consists of the header, optional padding, and the content.
* The padding comes before the content because the "best" position in the
* dictionary is the last byte.
*/
BYTE* const outDictHeader = (BYTE*)dictBuffer;
BYTE* const outDictPadding = outDictHeader + hSize;
BYTE* const outDictContent = outDictPadding + paddingSize;
assert(dictSize <= dictBufferCapacity);
assert(outDictContent + dictContentSize == (BYTE*)dictBuffer + dictSize);
/* First copy the customDictContent into its final location.
* `customDictContent` and `dictBuffer` may overlap, so we must
* do this before any other writes into the output buffer.
* Then copy the header & padding into the output buffer.
*/
memmove(outDictContent, customDictContent, dictContentSize);
memcpy(outDictHeader, header, hSize);
memset(outDictPadding, 0, paddingSize);
return dictSize;
}
}
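/* Illustrative usage sketch for ZDICT_finalizeDictionary()
 * (hypothetical caller-side buffers `dict`, `content`, `samples`, `sampleSizes`) :
 *
 *     ZDICT_params_t zp;
 *     memset(&zp, 0, sizeof(zp));
 *     zp.compressionLevel = 3;
 *     { size_t const dSize = ZDICT_finalizeDictionary(dict, dictCapacity,
 *                                                     content, contentSize,
 *                                                     samples, sampleSizes, nbSamples, zp);
 *       if (ZDICT_isError(dSize)) { report ZDICT_getErrorName(dSize) } }
 *
 * On success, `dict` holds [ 8-byte header ][ entropy tables ][ optional zero padding ][ content ],
 * with the content placed last, since the most valuable bytes are the ones closest
 * to the end of the dictionary. */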
static size_t ZDICT_addEntropyTablesFromBuffer_advanced(
void* dictBuffer, size_t dictContentSize, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
ZDICT_params_t params)
{
int const compressionLevel = (params.compressionLevel == 0) ? ZSTD_CLEVEL_DEFAULT : params.compressionLevel;
U32 const notificationLevel = params.notificationLevel;
size_t hSize = 8;
/* calculate entropy tables */
DISPLAYLEVEL(2, "\r%70s\r", ""); /* clean display line */
DISPLAYLEVEL(2, "statistics ... \n");
{ size_t const eSize = ZDICT_analyzeEntropy((char*)dictBuffer+hSize, dictBufferCapacity-hSize,
compressionLevel,
samplesBuffer, samplesSizes, nbSamples,
(char*)dictBuffer + dictBufferCapacity - dictContentSize, dictContentSize,
notificationLevel);
if (ZDICT_isError(eSize)) return eSize;
hSize += eSize;
}
/* add dictionary header (after entropy tables) */
MEM_writeLE32(dictBuffer, ZSTD_MAGIC_DICTIONARY);
{ U64 const randomID = XXH64((char*)dictBuffer + dictBufferCapacity - dictContentSize, dictContentSize, 0);
U32 const compliantID = (randomID % ((1U<<31)-32768)) + 32768;
U32 const dictID = params.dictID ? params.dictID : compliantID;
MEM_writeLE32((char*)dictBuffer+4, dictID);
}
if (hSize + dictContentSize < dictBufferCapacity)
memmove((char*)dictBuffer + hSize, (char*)dictBuffer + dictBufferCapacity - dictContentSize, dictContentSize);
return MIN(dictBufferCapacity, hSize+dictContentSize);
}
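/* note : ZDICT_addEntropyTablesFromBuffer_advanced() expects the raw dictionary
 * content to already sit at the end of dictBuffer (at offset
 * dictBufferCapacity - dictContentSize). It writes the entropy tables starting at
 * offset 8, then the 8-byte header at the start, and finally moves the content down
 * so that it immediately follows the header + tables (when it fits).
 * @return : total dictionary size, or an error code. */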
/*! ZDICT_trainFromBuffer_unsafe_legacy() :
* Warning : `samplesBuffer` must be followed by noisy guard band !!!
* @return : size of dictionary, or an error code which can be tested with ZDICT_isError()
*/
static size_t ZDICT_trainFromBuffer_unsafe_legacy(
void* dictBuffer, size_t maxDictSize,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
ZDICT_legacy_params_t params)
{
U32 const dictListSize = MAX(MAX(DICTLISTSIZE_DEFAULT, nbSamples), (U32)(maxDictSize/16));
dictItem* const dictList = (dictItem*)malloc(dictListSize * sizeof(*dictList));
unsigned const selectivity = params.selectivityLevel == 0 ? g_selectivity_default : params.selectivityLevel;
unsigned const minRep = (selectivity > 30) ? MINRATIO : nbSamples >> selectivity;
size_t const targetDictSize = maxDictSize;
size_t const samplesBuffSize = ZDICT_totalSampleSize(samplesSizes, nbSamples);
size_t dictSize = 0;
U32 const notificationLevel = params.zParams.notificationLevel;
/* checks */
if (!dictList) return ERROR(memory_allocation);
if (maxDictSize < ZDICT_DICTSIZE_MIN) { free(dictList); return ERROR(dstSize_tooSmall); } /* requested dictionary size is too small */
if (samplesBuffSize < ZDICT_MIN_SAMPLES_SIZE) { free(dictList); return ERROR(dictionaryCreation_failed); } /* not enough source to create dictionary */
/* init */
ZDICT_initDictItem(dictList);
/* build dictionary */
ZDICT_trainBuffer_legacy(dictList, dictListSize,
samplesBuffer, samplesBuffSize,
samplesSizes, nbSamples,
minRep, notificationLevel);
/* display best matches */
if (params.zParams.notificationLevel>= 3) {
unsigned const nb = MIN(25, dictList[0].pos);
unsigned const dictContentSize = ZDICT_dictSize(dictList);
unsigned u;
DISPLAYLEVEL(3, "\n %u segments found, of total size %u \n", (unsigned)dictList[0].pos-1, dictContentSize);
DISPLAYLEVEL(3, "list %u best segments \n", nb-1);
for (u=1; u<nb; u++) {
unsigned const pos = dictList[u].pos;
unsigned const length = dictList[u].length;
U32 const printedLength = MIN(40, length);
if ((pos > samplesBuffSize) || ((pos + length) > samplesBuffSize)) {
free(dictList);
return ERROR(GENERIC); /* should never happen */
}
DISPLAYLEVEL(3, "%3u:%3u bytes at pos %8u, savings %7u bytes |",
u, length, pos, (unsigned)dictList[u].savings);
ZDICT_printHex((const char*)samplesBuffer+pos, printedLength);
DISPLAYLEVEL(3, "| \n");
} }
/* create dictionary */
{ unsigned dictContentSize = ZDICT_dictSize(dictList);
if (dictContentSize < ZDICT_CONTENTSIZE_MIN) { free(dictList); return ERROR(dictionaryCreation_failed); } /* dictionary content too small */
if (dictContentSize < targetDictSize/4) {
DISPLAYLEVEL(2, "! warning : selected content significantly smaller than requested (%u < %u) \n", dictContentSize, (unsigned)maxDictSize);
if (samplesBuffSize < 10 * targetDictSize)
DISPLAYLEVEL(2, "! consider increasing the number of samples (total size : %u MB)\n", (unsigned)(samplesBuffSize>>20));
if (minRep > MINRATIO) {
DISPLAYLEVEL(2, "! consider increasing selectivity to produce larger dictionary (-s%u) \n", selectivity+1);
DISPLAYLEVEL(2, "! note : larger dictionaries are not necessarily better, test its efficiency on samples \n");
}
}
if ((dictContentSize > targetDictSize*3) && (nbSamples > 2*MINRATIO) && (selectivity>1)) {
unsigned proposedSelectivity = selectivity-1;
while ((nbSamples >> proposedSelectivity) <= MINRATIO) { proposedSelectivity--; }
DISPLAYLEVEL(2, "! note : calculated dictionary significantly larger than requested (%u > %u) \n", dictContentSize, (unsigned)maxDictSize);
DISPLAYLEVEL(2, "! consider increasing dictionary size, or produce denser dictionary (-s%u) \n", proposedSelectivity);
DISPLAYLEVEL(2, "! always test dictionary efficiency on real samples \n");
}
/* limit dictionary size */
{ U32 const max = dictList->pos; /* convention : nb of useful elts within dictList */
U32 currentSize = 0;
U32 n; for (n=1; n<max; n++) {
currentSize += dictList[n].length;
if (currentSize > targetDictSize) { currentSize -= dictList[n].length; break; }
}
dictList->pos = n;
dictContentSize = currentSize;
}
/* build dict content */
{ U32 u;
BYTE* ptr = (BYTE*)dictBuffer + maxDictSize;
for (u=1; u<dictList->pos; u++) {
U32 l = dictList[u].length;
ptr -= l;
if (ptr<(BYTE*)dictBuffer) { free(dictList); return ERROR(GENERIC); } /* should not happen */
memcpy(ptr, (const char*)samplesBuffer+dictList[u].pos, l);
} }
dictSize = ZDICT_addEntropyTablesFromBuffer_advanced(dictBuffer, dictContentSize, maxDictSize,
samplesBuffer, samplesSizes, nbSamples,
params.zParams);
}
/* clean up */
free(dictList);
return dictSize;
}
/* ZDICT_trainFromBuffer_legacy() :
* issue : samplesBuffer needs to be followed by a noisy guard band.
* work around : duplicate the buffer, and add the noise */
size_t ZDICT_trainFromBuffer_legacy(void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples,
ZDICT_legacy_params_t params)
{
size_t result;
void* newBuff;
size_t const sBuffSize = ZDICT_totalSampleSize(samplesSizes, nbSamples);
if (sBuffSize < ZDICT_MIN_SAMPLES_SIZE) return 0; /* not enough content => no dictionary */
newBuff = malloc(sBuffSize + NOISELENGTH);
if (!newBuff) return ERROR(memory_allocation);
memcpy(newBuff, samplesBuffer, sBuffSize);
ZDICT_fillNoise((char*)newBuff + sBuffSize, NOISELENGTH); /* guard band, for end of buffer condition */
result =
ZDICT_trainFromBuffer_unsafe_legacy(dictBuffer, dictBufferCapacity, newBuff,
samplesSizes, nbSamples, params);
free(newBuff);
return result;
}
size_t ZDICT_trainFromBuffer(void* dictBuffer, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples)
{
ZDICT_fastCover_params_t params;
DEBUGLOG(3, "ZDICT_trainFromBuffer");
memset(&params, 0, sizeof(params));
params.d = 8;
params.steps = 4;
/* Use default level since no compression level information is available */
params.zParams.compressionLevel = ZSTD_CLEVEL_DEFAULT;
#if defined(DEBUGLEVEL) && (DEBUGLEVEL>=1)
params.zParams.notificationLevel = DEBUGLEVEL;
#endif
return ZDICT_optimizeTrainFromBuffer_fastCover(dictBuffer, dictBufferCapacity,
samplesBuffer, samplesSizes, nbSamples,
&params);
}
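/* Illustrative usage sketch for ZDICT_trainFromBuffer()
 * (hypothetical caller-side names `dict`, `dictCapacity`, `samples`, `sampleSizes`, `nbSamples`) :
 *
 *     size_t const dictSize = ZDICT_trainFromBuffer(dict, dictCapacity,
 *                                                   samples, sampleSizes, nbSamples);
 *     if (ZDICT_isError(dictSize))
 *         fprintf(stderr, "training failed : %s \n", ZDICT_getErrorName(dictSize));
 *
 * `samples` is the concatenation of the training samples, and sampleSizes[i] gives
 * the size of sample i, in the same order. */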
size_t ZDICT_addEntropyTablesFromBuffer(void* dictBuffer, size_t dictContentSize, size_t dictBufferCapacity,
const void* samplesBuffer, const size_t* samplesSizes, unsigned nbSamples)
{
ZDICT_params_t params;
memset(&params, 0, sizeof(params));
return ZDICT_addEntropyTablesFromBuffer_advanced(dictBuffer, dictContentSize, dictBufferCapacity,
samplesBuffer, samplesSizes, nbSamples,
params);
}
/**** ended inlining dictBuilder/zdict.c ****/