<chapter xmlns="http://docbook.org/ns/docbook" version="5.0"
xml:id="manual.ext.containers.pbds" xreflabel="pbds">
<info>
<title>Policy-Based Data Structures</title>
<keywordset>
<keyword>ISO C++</keyword>
<keyword>policy</keyword>
<keyword>container</keyword>
<keyword>data</keyword>
<keyword>structure</keyword>
<keyword>associated</keyword>
<keyword>tree</keyword>
<keyword>trie</keyword>
<keyword>hash</keyword>
<keyword>metaprogramming</keyword>
</keywordset>
</info>
<?dbhtml filename="policy_data_structures.html"?>
<!-- 2006-04-01 Ami Tavory -->
<!-- 2011-05-25 Benjamin Kosnik -->
<!-- S01: intro -->
<section xml:id="pbds.intro">
<info><title>Intro</title></info>
<para>
This is a library of policy-based elementary data structures:
associative containers and priority queues. It is designed for
high-performance, flexibility, semantic safety, and conformance to
the corresponding containers in <literal>std</literal> and
<literal>std::tr1</literal> (except for some points where it differs
by design).
</para>
<para>
</para>
<section xml:id="pbds.intro.issues">
<info><title>Performance Issues</title></info>
<para>
</para>
<para>
An attempt is made to categorize the wide variety of possible
container designs in terms of performance-impacting factors. These
performance factors are translated into design policies and
incorporated into container design.
</para>
<para>
There is a tension in unravelling these factors into a coherent set
of policies. Every attempt is made to keep the set of factors
minimal. However, in many cases multiple factors make for long
template names. Every attempt is made to alias and use typedefs in
the source files, but the generated names for external symbols can
still be large in binary files or debugger sessions.
</para>
<para>
In many cases, the longer names allow capabilities and behaviours
controlled by macros to also be unambiguously emitted as distinct
generated names.
</para>
<para>
Specific issues found while unraveling performance factors in the
design of associative containers and priority queues follow.
</para>
<section xml:id="pbds.intro.issues.associative">
<info><title>Associative</title></info>
<para>
Associative containers depend on their composite policies to a very
large extent. Implicitly hard-wiring policies can hamper their
performance and limit their functionality. An efficient hash-based
container, for example, requires policies for testing key
equivalence, hashing keys, translating hash values into positions
within the hash table, and determining when and how to resize the
table internally. A tree-based container can efficiently support
order statistics, i.e. the ability to query what is the order of
each key within the sequence of keys in the container, but only if
the container is supplied with a policy to internally update
meta-data. There are many other such examples.
</para>
<para>
Ideally, all associative containers would share the same
interface. Unfortunately, underlying data structures and mapping
semantics differentiate between different containers. For example,
suppose one writes a generic function manipulating an associative
container.
</para>
<programlisting>
template<typename Cntnr>
void
some_op_sequence(Cntnr& r_cnt)
{
...
}
</programlisting>
<para>
Given this, then what can one assume about the instantiating
container? The answer varies according to its underlying data
structure. If the underlying data structure of
<literal>Cntnr</literal> is based on a tree or trie, then the order
of elements is well defined; otherwise, it is not, in general. If
the underlying data structure of <literal>Cntnr</literal> is based
on a collision-chaining hash table, then modifying
<literal>r_cnt</literal> will not invalidate its iterators' order;
if the underlying data structure is a probing hash table, then this
is not the case. If the underlying data structure is based on a tree
or trie, then a reference to the container can efficiently be split;
otherwise, it cannot, in general. If the underlying data structure
is a red-black tree, then splitting a reference to the container is
exception-free; if it is an ordered-vector tree, exceptions can be
thrown.
</para>
</section>
<section xml:id="pbds.intro.issues.priority_queue">
<info><title>Priority Queue</title></info>
<para>
Priority queues are useful when one needs to efficiently access a
minimum (or maximum) value as the set of values changes.
</para>
<para>
Most useful data structures for priority queues have a relatively
simple structure, as they are geared toward relatively simple
requirements. Unfortunately, these structures do not support access
to an arbitrary value, which turns out to be necessary in many
algorithms. Say, decreasing an arbitrary value in a graph
algorithm. Therefore, some extra mechanism is necessary and must be
invented for accessing arbitrary values. There are at least two
alternatives: embedding an associative container in a priority
queue, or allowing cross-referencing through iterators. The first
solution adds significant overhead; the second solution requires a
precise definition of iterator invalidation. Which is the next
point...
</para>
<para>
Priority queues, like hash-based containers, store values in an
order that is meaningless and undefined externally. For example, a
<code>push</code> operation can internally reorganize the
values. Because of this characteristic, describing a priority
queue's iterators is difficult: on one hand, the values to which
iterators point can remain valid, but on the other, the logical
order of iterators can change unpredictably.
</para>
<para>
Roughly speaking, any element that is both inserted to a priority
queue (e.g. through <code>push</code>) and removed
from it (e.g., through <code>pop</code>), incurs a
logarithmic overhead (in the amortized sense). Different underlying
data structures place the actual cost differently: some are
optimized for amortized complexity, whereas others guarantee that
specific operations only have a constant cost. One underlying data
structure might be chosen if modifying a value is frequent
(Dijkstra's shortest-path algorithm), whereas a different one might
be chosen otherwise. Unfortunately, an array-based binary heap - an
underlying data structure that optimizes (in the amortized sense)
<code>push</code> and <code>pop</code> operations - differs from the
others in terms of its invalidation guarantees. Other design
decisions also impact the cost and placement of the overhead, at the
expense of more difference in the kinds of operations that the
underlying data structure can support. These differences pose a
challenge when creating a uniform interface for priority queues.
</para>
</section>
</section>
<section xml:id="pbds.intro.motivation">
<info><title>Goals</title></info>
<para>
Many fine associative-container libraries were already written,
most notably, the C++ standard's associative containers. Why
then write another library? This section shows some possible
advantages of this library, when considering the challenges in
the introduction. Many of these points stem from the fact that
the ISO C++ process introduced associative containers in a
two-step process (first standardizing tree-based containers,
only then adding hash-based containers, which are fundamentally
different), did not standardize priority queues as containers,
and (in our opinion) overloads the iterator concept.
</para>
<section xml:id="pbds.intro.motivation.associative">
<info><title>Associative</title></info>
<para>
</para>
<section xml:id="motivation.associative.policy">
<info><title>Policy Choices</title></info>
<para>
Associative containers require a relatively large number of
policies to function efficiently in various settings. In some
cases this is needed for making their common operations more
efficient, and in other cases this allows them to support a
larger set of operations.
</para>
<orderedlist>
<listitem>
<para>
Hash-based containers, for example, support look-up and
insertion methods (<function>find</function> and
<function>insert</function>). In order to locate elements
quickly, they are supplied a hash functor, which instructs
how to transform a key object into some size type; a hash
functor might transform <constant>"hello"</constant>
into <constant>1123002298</constant>. A hash table, though,
requires transforming each key object into a size-type value
within some specific domain; a hash table with 128 entries
might transform <constant>"hello"</constant> into
position <constant>63</constant>. The policy by which the
hash value is transformed into a position within the table
can dramatically affect performance. Hash-based containers
also do not resize naturally (as opposed to tree-based
containers, for example). The appropriate resize policy is
unfortunately intertwined with the policy that transforms
a hash value into a position within the table.
</para>
</listitem>
<listitem>
<para>
Tree-based containers, for example, also support look-up and
insertion methods, and are primarily useful when maintaining
order between elements is important. In some cases, though,
one can utilize their balancing algorithms for completely
different purposes.
</para>
<para>
Figure A shows a tree in which each node contains two entries:
a floating-point key, and some size-type
<emphasis>metadata</emphasis> (in bold beneath it) that is
the number of nodes in the sub-tree. (The root has key 0.99,
and has 5 nodes (including itself) in its sub-tree.) A
container based on this data structure can obviously answer
efficiently whether 0.3 is in the container object, but it
can also answer what the order of 0.3 is among all keys in
the container object: see <xref linkend="biblio.clrs2001"/>.
</para>
<para>
As another example, Figure B shows a tree in which each node
contains two entries: a half-open geometric line interval,
and a number <emphasis>metadata</emphasis> (in bold beneath
it) that is the largest endpoint of all intervals in its
sub-tree. (The root describes the interval <constant>[20,
36)</constant>, and the largest endpoint in its sub-tree is
99.) A container based on this data structure can obviously
answer efficiently whether <constant>[3, 41)</constant> is
in the container object, but it can also answer efficiently
whether the container object has intervals that intersect
<constant>[3, 41)</constant>. These types of queries are
very useful in geometric algorithms and lease-management
algorithms.
</para>
<para>
It is important to note, however, that as the trees are
modified, their internal structure changes. To maintain
these invariants, one must supply some policy that is aware
of these changes. Without this, it would be better to use a
linked list (in itself very efficient for these purposes).
</para>
</listitem>
</orderedlist>
<figure>
<title>Node Invariants</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_node_invariants.png"/>
</imageobject>
<textobject>
<phrase>Node Invariants</phrase>
</textobject>
</mediaobject>
</figure>
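<para>
As a concrete illustration of the order-statistics capability
described above, the following is a minimal sketch using this
library's <classname>tree</classname> container with the
<classname>tree_order_statistics_node_update</classname> policy. It
assumes a reasonably recent libstdc++ in which
<classname>null_type</classname> denotes a keys-only (set-like)
mapping; the keys chosen are arbitrary.
</para>
<programlisting>
#include <iostream>
#include <functional>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>

// A red-black tree whose nodes carry subtree-size metadata, maintained
// automatically by the tree_order_statistics_node_update policy.
typedef __gnu_pbds::tree<double, __gnu_pbds::null_type, std::less<double>,
                         __gnu_pbds::rb_tree_tag,
                         __gnu_pbds::tree_order_statistics_node_update>
  order_statistics_set;

int main()
{
  order_statistics_set s;
  s.insert(0.1);
  s.insert(0.3);
  s.insert(0.5);

  // order_of_key: how many keys are strictly smaller than 0.3?
  std::cout << s.order_of_key(0.3) << std::endl;   // prints 1

  // find_by_order: the element with a given (0-based) rank.
  std::cout << *s.find_by_order(2) << std::endl;   // prints 0.5
  return 0;
}
</programlisting>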
</section>
<section xml:id="motivation.associative.underlying">
<info><title>Underlying Data Structures</title></info>
<para>
The standard C++ library contains associative containers based on
red-black trees and collision-chaining hash tables. These are
very useful, but they are not ideal for all types of
settings.
</para>
<para>
The figure below shows the different underlying data structures
currently supported in this library.
</para>
<figure>
<title>Underlying Associative Data Structures</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_different_underlying_dss_1.png"/>
</imageobject>
<textobject>
<phrase>Underlying Associative Data Structures</phrase>
</textobject>
</mediaobject>
</figure>
<para>
A shows a collision-chaining hash-table, B shows a probing
hash-table, C shows a red-black tree, D shows a splay tree, E shows
a tree based on an ordered vector (implicit in the order of the
elements), F shows a PATRICIA trie, and G shows a list-based
container with update policies.
</para>
<para>
Each of these data structures has some performance benefits, in
terms of speed, size or both. For now, note that vector-based trees
and probing hash tables manipulate memory more efficiently than
red-black trees and collision-chaining hash tables, and that
list-based associative containers are very useful for constructing
"multimaps".
</para>
<para>
Now consider a function manipulating a generic associative
container,
</para>
<programlisting>
template<class Cntnr>
int
some_op_sequence(Cntnr &r_cnt)
{
...
}
</programlisting>
<para>
Ideally, the underlying data structure
of <classname>Cntnr</classname> would not affect what can be
done with <varname>r_cnt</varname>. Unfortunately, this is not
the case.
</para>
<para>
For example, if <classname>Cntnr</classname>
is <classname>std::map</classname>, then the function can
use
</para>
<programlisting>
std::for_each(r_cnt.find(foo), r_cnt.find(bar), foobar)
</programlisting>
<para>
in order to apply <classname>foobar</classname> to all
elements between <classname>foo</classname> and
<classname>bar</classname>. If
<classname>Cntnr</classname> is a hash-based container,
then this call's results are undefined.
</para>
<para>
Also, if <classname>Cntnr</classname> is tree-based, the type
and object of the comparison functor can be
accessed. If <classname>Cntnr</classname> is hash based, these
queries are nonsensical.
</para>
<para>
There are various other differences based on the container's
underlying data structure. For one, they can be constructed by,
and queried for, different policies. Furthermore:
</para>
<orderedlist>
<listitem>
<para>
Containers based on C, D, E and F store elements in a
meaningful order; the others store elements in a meaningless
(and probably time-varying) order. By implication, only
containers based on C, D, E and F can
support <function>erase</function> operations taking an
iterator and returning an iterator to the following element
without performance loss.
</para>
</listitem>
<listitem>
<para>
Containers based on C, D, E, and F can be split and joined
efficiently, while the others cannot. Containers based on C
and D, furthermore, can guarantee that this is exception-free;
containers based on E cannot guarantee this.
</para>
</listitem>
<listitem>
<para>
Containers based on all but E can guarantee that
erasing an element is exception free; containers based on E
cannot guarantee this. Containers based on all but B and E
can guarantee that modifying an object of their type does
not invalidate iterators or references to their elements,
while containers based on B and E cannot. Containers based
on C, D, and E can furthermore make a stronger guarantee,
namely that modifying an object of their type does not
affect the order of iterators.
</para>
</listitem>
</orderedlist>
<para>
A unified tag and traits system (as used for the C++ standard
library iterators, for example) can ease generic manipulation of
associative containers based on different underlying data
structures.
</para>
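<para>
As a rough sketch of what such a traits mechanism makes possible, the
following revisits the generic function above and queries this
library's <classname>container_traits</classname> to decide at
compile time whether the instantiating container preserves order. The
specific member name <literal>order_preserving</literal> is taken
from the current implementation; treat the rest as an illustrative
assumption rather than a prescribed idiom.
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tag_and_trait.hpp>

template<class Cntnr>
int
some_op_sequence(Cntnr &r_cnt)
{
  typedef __gnu_pbds::container_traits<Cntnr> traits;

  if (traits::order_preserving)
    {
      // Tree- and trie-based containers: the element order is
      // meaningful, so range-based formulations are safe here.
    }
  else
    {
      // Hash-based and list-based containers: fall back to an
      // order-agnostic formulation.
    }
  return 0;
}
</programlisting>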
</section>
<section xml:id="motivation.associative.iterators">
<info><title>Iterators</title></info>
<para>
Iterators are centric to the design of the standard library
containers, because of the container/algorithm/iterator
decomposition that allows an algorithm to operate on a range
through iterators of some sequence. Iterators, then, are useful
because they allow going over a
specific <emphasis>sequence</emphasis>. The standard library
also uses iterators for accessing a
specific <emphasis>element</emphasis>: when an associative
container returns one through <function>find</function>. The
standard library consistently uses the same types of iterators
for both purposes: going over a range, and accessing a specific
found element. Before the introduction of hash-based containers
to the standard library, this made sense (with the exception of
priority queues, which are discussed later).
</para>
<para>
When using the standard associative containers together with
non-order-preserving associative containers (and also because of the
priority-queue containers), there is a possible need for
different types of iterators for self-organizing containers:
the iterator concept seems overloaded to mean two different
things (in some cases). <remark> XXX
"ds_gen.html#find_range">Design::Associative
Containers::Data-Structure Genericity::Point-Type and Range-Type
Methods</remark>.
</para>
<section xml:id="associative.iterators.using">
<info>
<title>Using Point Iterators for Range Operations</title>
</info>
<para>
Suppose <classname>cntnr</classname> is some associative
container, and say <varname>c</varname> is an object of
type <classname>cntnr</classname>. Then what will be the outcome
of
</para>
<programlisting>
std::for_each(c.find(1), c.find(5), foo);
</programlisting>
<para>
If <classname>cntnr</classname> is a tree-based container
object, then an in-order walk will
apply <classname>foo</classname> to the relevant elements,
as in the graphic below, label A. If <varname>c</varname> is
a hash-based container, then the order of elements between any
two elements is undefined (and probably time-varying); there is
no guarantee that the elements traversed will coincide with the
<emphasis>logical</emphasis> elements between 1 and 5, as in
label B.
</para>
<figure>
<title>Range Iteration in Different Data Structures</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_point_iterators_range_ops_1.png"/>
</imageobject>
<textobject>
<phrase>Node Invariants</phrase>
</textobject>
</mediaobject>
</figure>
<para>
In our opinion, this problem is not caused just because
red-black trees are order preserving while
collision-chaining hash tables are (generally) not - it
is more fundamental. Most of the standard's containers
order sequences in a well-defined manner that is
determined by their <emphasis>interface</emphasis>:
calling <function>insert</function> on a tree-based
container modifies its sequence in a predictable way, as
does calling <function>push_back</function> on a list or
a vector. Conversely, collision-chaining hash tables,
probing hash tables, priority queues, and list-based
containers (which are very useful for "multimaps") are
self-organizing data structures; the effect of each
operation modifies their sequences in a manner that is
(practically) determined by their
<emphasis>implementation</emphasis>.
</para>
<para>
Consequently, applying an algorithm to a sequence obtained from most
containers may or may not make sense, but applying it to a
sub-sequence of a self-organizing container does not.
</para>
</section>
<section xml:id="associative.iterators.cost">
<info>
<title>Cost to Point Iterators to Enable Range Operations</title>
</info>
<para>
Suppose <varname>c</varname> is some collision-chaining
hash-based container object, and one calls
</para>
<programlisting>c.find(3)</programlisting>
<para>
Then what composes the returned iterator?
</para>
<para>
In the graphic below, label A shows the simplest (and
most efficient) implementation of a collision-chaining
hash table. The little box marked
<classname>point_iterator</classname> shows an object
that contains a pointer to the element's node. Note that
this "iterator" has no way to move to the next element (
it cannot support
<function>operator++</function>). Conversely, the little
box marked <classname>iterator</classname> stores both a
pointer to the element, as well as some other
information (the bucket number of the element). The
second iterator, then, is "heavier" than the first one:
it requires more time and space. If we were to use a
different container to cross-reference into this
hash-table using these iterators - it would take much
more space. As noted above, nothing much can be done by
incrementing these iterators, so why is this extra
information needed?
</para>
<para>
Alternatively, one might create a collision-chaining hash-table
where the lists might be linked, forming a monolithic total-element
list, as in the graphic below, label B. Here the iterators are as
light as can be, but the hash-table's operations are more
complicated.
</para>
<figure>
<title>Point Iteration in Hash Data Structures</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_point_iterators_range_ops_2.png"/>
</imageobject>
<textobject>
<phrase>Point Iteration in Hash Data Structures</phrase>
</textobject>
</mediaobject>
</figure>
<para>
It should be noted that containers based on collision-chaining
hash-tables are not the only ones with this type of behavior;
many other self-organizing data structures display it as well.
</para>
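<para>
In this library the distinction is exposed directly: a hash-based
container's <function>find</function> returns the lighter point-type
iterator, while range iteration uses the heavier range-type
iterator. A minimal sketch follows, assuming the default policies of
<classname>cc_hash_table</classname> and arbitrary key and mapped
types.
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.hpp>

int main()
{
  typedef __gnu_pbds::cc_hash_table<int, int> table_type;
  table_type c;
  c[3] = 30;

  // find returns a point-type iterator: it can be de-referenced,
  // but it cannot be incremented.
  table_type::point_iterator pit = c.find(3);
  pit->second = 33;

  // Range iteration uses the (heavier) range-type iterator.
  for (table_type::iterator it = c.begin(); it != c.end(); ++it)
    { /* ... */ }
  return 0;
}
</programlisting>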
</section>
<section xml:id="associative.iterators.invalidation">
<info><title>Invalidation Guarantees</title></info>
<para>Consider the following snippet:</para>
<programlisting>
it = c.find(3);
c.erase(5);
</programlisting>
<para>
Following the call to <function>erase</function>, what is the
validity of <varname>it</varname>: can it be de-referenced?
Can it be incremented?
</para>
<para>
The answer depends on the underlying data structure of the
container. The graphic below shows three cases: A1 and A2 show
a red-black tree; B1 and B2 show a probing hash-table; C1 and C2
show a collision-chaining hash table.
</para>
<figure>
<title>Effect of erase in different underlying data structures</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_invalidation_guarantee_erase.png"/>
</imageobject>
<textobject>
<phrase>Effect of erase in different underlying data structures</phrase>
</textobject>
</mediaobject>
</figure>
<orderedlist>
<listitem>
<para>
Erasing 5 from A1 yields A2. Clearly, an iterator to 3 can
be de-referenced and incremented. The sequence of iterators
changed, but in a way that is well-defined by the interface.
</para>
</listitem>
<listitem>
<para>
Erasing 5 from B1 yields B2. Clearly, an iterator to 3 is
not valid at all - it cannot be de-referenced or
incremented; the order of iterators changed in a way that is
(practically) determined by the implementation and not by
the interface.
</para>
</listitem>
<listitem>
<para>
Erasing 5 from C1 yields C2. Here the situation is more
complicated. On the one hand, there is no problem in
de-referencing <varname>it</varname>. On the other hand,
the order of iterators changed in a way that is
(practically) determined by the implementation and not by
the interface.
</para>
</listitem>
</orderedlist>
<para>
So in the standard library containers, it is not always possible
to express whether <varname>it</varname> is valid or not. This
is true also for <function>insert</function>. Again, the
iterator concept seems overloaded.
</para>
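<para>
In this library the applicable guarantee is exported as a type
through <classname>container_traits</classname>, so code that caches
iterators across an <function>erase</function> can be dispatched on
it. The following is a minimal sketch under that assumption; the
guarantee class names are those of the current tag-and-trait header,
and the keys are arbitrary.
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tag_and_trait.hpp>

// Selected when find-iterators stay valid and keep their order
// across modifications (e.g. tree-based containers).
template<class Cntnr>
void
erase_then_reuse(Cntnr &c, __gnu_pbds::range_invalidation_guarantee)
{
  typename Cntnr::point_iterator it = c.find(3);
  c.erase(5);
  // it can still be de-referenced here.
}

// Selected otherwise: re-find the element instead of caching it.
template<class Cntnr>
void
erase_then_reuse(Cntnr &c, __gnu_pbds::basic_invalidation_guarantee)
{
  c.erase(5);
  typename Cntnr::point_iterator it = c.find(3);
}

template<class Cntnr>
void
erase_then_reuse(Cntnr &c)
{
  typedef __gnu_pbds::container_traits<Cntnr> traits;
  erase_then_reuse(c, typename traits::invalidation_guarantee());
}
</programlisting>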
</section>
</section> <!--iterators-->
<section xml:id="motivation.associative.functions">
<info><title>Functional</title></info>
<para>
</para>
<para>
The design of the functional overlay to the underlying data
structures differs slightly from some of the conventions used in
the C++ standard. A strict public interface comprises only methods
whose operations depend on the class's internal structure; other
operations are best designed as external functions (see
<xref linkend="biblio.meyers02both"/>). With this
rubric, the standard associative containers lack some useful
methods, and provide other methods which would be better
removed.
</para>
<section xml:id="motivation.associative.functions.erase">
<info><title><function>erase</function></title></info>
<orderedlist>
<listitem>
<para>
Order-preserving standard associative containers provide the
method
</para>
<programlisting>
iterator
erase(iterator it)
</programlisting>
<para>
which takes an iterator, erases the corresponding
element, and returns an iterator to the following
element. The standard hash-based associative containers
also provide this method. This seemingly increases
genericity between associative containers,
since it is possible to use
</para>
<programlisting>
typename C::iterator it = c.begin();
typename C::iterator e_it = c.end();
while(it != e_it)
it = pred(*it)? c.erase(it) : ++it;
</programlisting>
<para>
in order to erase from a container object
<varname>c</varname> all elements which match a
predicate <classname>pred</classname>. However, in a
different sense this actually decreases genericity: an
integral implication of this method is that tree-based
associative containers' memory use is linear in the total
number of elements they store, while hash-based
containers' memory use is not bounded by the number of
elements they currently store. Assume a hash-based container is
allowed to decrease its size when an element is
erased. Then the elements might be rehashed, which means
that there is no "next" element - it is simply
undefined. Consequently, it is possible to infer from the
fact that the standard library's hash-based containers
provide this method that they cannot downsize when
elements are erased. As a consequence, different code is
needed to manipulate different containers, assuming that
memory should be conserved. Therefor, this library's
non-order preserving associative containers omit this
method.
</para>
</listitem>
<listitem>
<para>
All of this library's associative containers include a
conditional-erase method (a usage sketch follows this list)
</para>
<programlisting>
template<class Pred>
size_type
erase_if(Pred pred)
</programlisting>
<para>
which erases all elements matching a predicate. This is probably the
only way to ensure linear-time multiple-item erase which can
actually downsize a container.
</para>
</listitem>
<listitem>
<para>
The standard associative containers provide methods for
multiple-item erase of the form
</para>
<programlisting>
size_type
erase(It b, It e)
</programlisting>
<para>
erasing a range of elements given by a pair of
iterators. For tree-based or trie-based containers, this can be
implemented more efficiently as a (small) sequence of split
and join operations. For other, unordered, containers, this
method isn't much better than an external loop. Moreover,
if <varname>c</varname> is a hash-based container,
then
</para>
<programlisting>
c.erase(c.find(2), c.find(5))
</programlisting>
<para>
is almost certain to do something
different than erasing all elements whose keys are between 2
and 5, and is likely to produce other undefined behavior.
</para>
</listitem>
</orderedlist>
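<para>
The following is a small sketch of the conditional-erase method from
the second point above, using one of this library's hash-based
containers; the predicate and the key and mapped types are arbitrary
choices for illustration.
</para>
<programlisting>
#include <utility>
#include <ext/pb_ds/assoc_container.hpp>

// Predicate matching all entries whose key is odd.
struct odd_key
{
  bool
  operator()(const std::pair<const int, int> &v) const
  { return v.first % 2 != 0; }
};

int main()
{
  __gnu_pbds::cc_hash_table<int, int> c;
  for (int i = 0; i < 10; ++i)
    c[i] = i * i;

  // Erase all odd-keyed entries in one pass; unlike the standard
  // erase(iterator) idiom, this allows the container to downsize.
  c.erase_if(odd_key());
  return 0;
}
</programlisting>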
</section> <!-- erase -->
<section xml:id="motivation.associative.functions.split">
<info>
<title>
<function>split</function> and <function>join</function>
</title>
</info>
<para>
It is well-known that tree-based and trie-based container
objects can be efficiently split or joined (See
<xref linkend="biblio.clrs2001"/>). Externally splitting or
joining trees is super-linear, and, furthermore, can throw
exceptions. Split and join methods, consequently, seem good
choices for tree-based container methods, especially since, as
noted just before, they are efficient replacements for erasing
sub-sequences.
</para>
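<para>
A brief sketch of what these methods look like on one of this
library's tree-based containers follows; the key and mapped types,
and the reliance on the default red-black-tree settings, are
assumptions made for illustration.
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.hpp>

int main()
{
  typedef __gnu_pbds::tree<int, int> tree_type; // red-black tree by default
  tree_type a, b;

  for (int i = 0; i < 10; ++i)
    a[i] = i;

  // Move every entry with a key greater than 4 from a into b.
  a.split(4, b);

  // Merge b back into a; b is left empty. For red-black trees both
  // operations are efficient and do not throw.
  a.join(b);
  return 0;
}
</programlisting>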
</section> <!-- split -->
<section xml:id="motivation.associative.functions.insert">
<info>
<title>
<function>insert</function>
</title>
</info>
<para>
The standard associative containers provide methods of the form
</para>
<programlisting>
template<class It>
size_type
insert(It b, It e);
</programlisting>
<para>
for inserting a range of elements given by a pair of
iterators. In general, this can only be implemented as an external
loop; for tree-based or trie-based containers it can be implemented
more efficiently as a join operation. Moreover, these methods seem
similar to constructors taking a range given by a pair of
iterators; the constructors, however, are transactional, whereas
the insert methods are not; this is possibly confusing.
</para>
</section> <!-- insert -->
<section xml:id="motivation.associative.functions.compare">
<info>
<title>
<function>operator==</function> and <function>operator<=</function>
</title>
</info>
<para>
Associative containers are parametrized by policies that allow
testing key equivalence: a hash-based container can do this through
its equivalence functor, and a tree-based container can do this
through its comparison functor. In addition, some standard
associative containers have global function operators, like
<function>operator==</function> and <function>operator<=</function>,
that allow comparing entire associative containers.
</para>
<para>
In our opinion, these functions are better left out. To begin
with, they do not significantly improve over an external
loop. More importantly, however, they are possibly misleading -
<function>operator==</function>, for example, usually checks for
equivalence, or interchangeability, but the associative
container cannot check for values' equivalence, only keys'
equivalence. Also, are two containers considered equivalent if
they store the same values in a different order? This is an
arbitrary decision.
</para>
</section> <!-- compare -->
</section> <!-- functional -->
</section> <!--associative-->
<section xml:id="pbds.intro.motivation.priority_queue">
<info><title>Priority Queues</title></info>
<section xml:id="motivation.priority_queue.policy">
<info><title>Policy Choices</title></info>
<para>
Priority queues are containers that allow efficiently inserting
values and accessing the maximal value (in the sense of the
container's comparison functor). Their interface
supports <function>push</function>
and <function>pop</function>. The standard
container <classname>std::priority_queue</classname> indeed supports
these methods, but little else. For algorithmic and
software-engineering purposes, other methods are needed:
</para>
<orderedlist>
<listitem>
<para>
Many graph algorithms (see
<xref linkend="biblio.clrs2001"/>) require increasing a
value in a priority queue (again, in the sense of the
container's comparison functor), or joining two
priority-queue objects.
</para>
</listitem>
<listitem>
<para>In this library, the return type of <classname>priority_queue</classname>'s
<function>push</function> method is a point-type iterator, which can
be used for modifying or erasing arbitrary values (a fuller sketch
follows this list). For example:</para>
<programlisting>
priority_queue<int> p;
priority_queue<int>::point_iterator it = p.push(3);
p.modify(it, 4);
</programlisting>
<para>These types of cross-referencing operations are necessary
for making priority queues useful for different applications,
especially graph applications.</para>
</listitem>
<listitem>
<para>
It is sometimes necessary to erase an arbitrary value in a
priority queue. For example, consider
the <function>select</function> function for monitoring
file descriptors:
</para>
<programlisting>
int
select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds,
struct timeval *timeout);
</programlisting>
<para>
then, as the select documentation states:
</para>
<para>
<quote>
The nfds argument specifies the range of file
descriptors to be tested. The select() function tests file
descriptors in the range of 0 to nfds-1.</quote>
</para>
<para>
It stands to reason, therefore, that we might wish to
maintain a minimal value for <varname>nfds</varname>, and
priority queues immediately come to mind. Note, though, that
when a socket is closed, the minimal file descriptor might
change; in the absence of an efficient means to erase an
arbitrary value from a priority queue, we might as well
avoid its use altogether.
</para>
<para>
The standard containers typically support iterators. It is
somewhat unusual
for <classname>std::priority_queue</classname> to omit them
(See <xref linkend="biblio.meyers01stl"/>). One might
ask why priority queues need to support iterators, since
they are self-organizing containers with a different purpose
than abstracting sequences. There are several reasons:
</para>
<orderedlist>
<listitem>
<para>
Iterators (even in self-organizing containers) are
useful for many purposes: cross-referencing
containers, serialization, and debugging code that uses
these containers.
</para>
</listitem>
<listitem>
<para>
The standard library's hash-based containers support
iterators, even though they too are self-organizing
containers with a different purpose than abstracting
sequences.
</para>
</listitem>
<listitem>
<para>
In standard-library-like containers, it is natural to specify the
interface of operations for modifying a value or erasing
a value (discussed previously) in terms of iterators.
It should be noted that the standard
containers also use iterators for accessing and
manipulating a specific value. In hash-based
containers, one checks the existence of a key by
comparing the iterator returned by <function>find</function> to the
iterator returned by <function>end</function>, and not by comparing a
pointer returned by <function>find</function> to <type>NULL</type>.
</para>
</listitem>
</orderedlist>
</listitem>
</orderedlist>
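<para>
As a minimal sketch of the erase capability discussed above (the
values chosen here are purely illustrative, and the default
underlying data structure is assumed):
</para>
<programlisting>
#include <ext/pb_ds/priority_queue.hpp>
#include <cassert>

int main()
{
  // Keep the point-type iterator returned by push.
  __gnu_pbds::priority_queue<int> p;
  __gnu_pbds::priority_queue<int>::point_iterator it = p.push(3);
  p.push(7);

  // Erase an arbitrary value (not necessarily the top) via its iterator.
  p.erase(it);
  assert(p.top() == 7);
}
</programlisting>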
</section>
<section xml:id="motivation.priority_queue.underlying">
<info><title>Underlying Data Structures</title></info>
<para>
There are three main implementations of priority queues: the
first employs a binary heap, typically one which uses a
sequence; the second uses a tree (or forest of trees), which is
typically less structured than an associative container's tree;
the third simply uses an associative container. These are
shown in the figure below with labels A1 and A2, B, and C.
</para>
<figure>
<title>Underlying Priority Queue Data Structures</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_different_underlying_dss_2.png"/>
</imageobject>
<textobject>
<phrase>Underlying Priority Queue Data Structures</phrase>
</textobject>
</mediaobject>
</figure>
<para>
No single implementation can completely replace any of the
others. Some have better <function>push</function>
and <function>pop</function> amortized performance, some have
better bounded (worst case) response time than others, some
optimize a single method at the expense of others, etc. In
general the "best" implementation is dictated by the specific
problem.
</para>
<para>
As with associative containers, the more implementations
co-exist, the more necessary a traits mechanism is for handling
generic containers safely and efficiently. This is especially
important for priority queues, since the invalidation guarantees
of one of the most useful data structures - binary heaps - are
markedly different from those of most of the others.
</para>
</section>
<section xml:id="motivation.priority_queue.binary_heap">
<info><title>Binary Heaps</title></info>
<para>
Binary heaps are one of the most useful underlying
data structures for priority queues. They are very efficient in
terms of memory (since they don't require per-value structure
metadata), and have the best amortized <function>push</function> and
<function>pop</function> performance for primitive types like
<type>int</type>.
</para>
<para>
The standard library's <classname>priority_queue</classname>
implements this data structure as an adapter over a sequence,
typically
<classname>std::vector</classname>
or <classname>std::deque</classname>, which correspond to labels
A1 and A2 respectively in the graphic above.
</para>
<para>
This is indeed an elegant example of the adapter concept and
the algorithm/container/iterator decomposition. (See <xref linkend="biblio.nelson96stlpq"/>). There are
several reasons why a binary-heap priority queue
may be better implemented as a container instead of a
sequence adapter:
</para>
<orderedlist>
<listitem>
<para>
<classname>std::priority_queue</classname> cannot erase values
from its adapted sequence (irrespective of the sequence
type). This means that the memory use of
an <classname>std::priority_queue</classname> object is always
proportional to the maximal number of values it ever contained,
and not to the number of values that it currently
contains. (See <filename>performance/priority_queue_text_pop_mem_usage.cc</filename>.)
This implementation of binary heaps acts very differently from
other underlying data structures (See also pairing heaps).
</para>
</listitem>
<listitem>
<para>
Some combinations of adapted sequences and value types
are very inefficient or just don't make sense. If one uses
<classname>std::priority_queue<std::vector<std::string>
> ></classname>, for example, then not only will each
operation perform a logarithmic number of
<classname>std::string</classname> assignments, but, furthermore, any
operation (including <function>pop</function>) can render the container
useless due to exceptions. Conversely, if one uses
<classname>std::priority_queue<std::deque<int> >
></classname>, then each operation incurs a logarithmic
number of indirect accesses (through pointers) unnecessarily.
It might be better to let the container deduce conservatively
whether to use the structure labeled A1 or A2 in the graphic above.
</para>
</listitem>
<listitem>
<para>
There does not seem to be a systematic way to determine
what exactly can be done with the priority queue.
</para>
<orderedlist>
<listitem>
<para>
If <classname>p</classname> is a priority queue adapting an
<classname>std::vector</classname>, then it is possible to iterate over
all values by using <function>&p.top()</function> and
<function>&p.top() + p.size()</function>, but this will not work
if <varname>p</varname> is adapting an <classname>std::deque</classname>; in any
case, one cannot use <classname>p.begin()</classname> and
<classname>p.end()</classname>. If a different sequence is adapted, it
is even more difficult to determine what can be
done.
</para>
</listitem>
<listitem>
<para>
If <varname>p</varname> is a priority queue adapting an
<classname>std::deque</classname>, then the reference returned by
</para>
<programlisting>
p.top()
</programlisting>
<para>
will remain valid until it is popped,
but if <varname>p</varname> adapts an <classname>std::vector</classname>, the
next <function>push</function> will invalidate it. If a different
sequence is adapted, it is even more difficult to
determine what can be done.
</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para>
Sequence-based binary heaps can still implement
linear-time <function>erase</function> and <function>modify</function> operations.
This means that if one needs to erase a small
(say logarithmic) number of values, then one might still
choose this underlying data structure. With
<classname>std::priority_queue</classname>, however, such erasures will generally
change the order of growth of the entire sequence of
operations.
</para>
</listitem>
</orderedlist>
</section>
</section>
</section> <!-- goals/motivation -->
</section> <!-- intro -->
<!-- S02: Using -->
<section xml:id="containers.pbds.using">
<info><title>Using</title></info>
<?dbhtml filename="policy_data_structures_using.html"?>
<section xml:id="pbds.using.prereq">
<info><title>Prerequisites</title></info>
<para>The library contains only header files, and does not require any
other libraries except the standard C++ library. All classes are
defined in namespace <code>__gnu_pbds</code>. The library internally
uses macros beginning with <code>PB_DS</code>, but
<code>#undef</code>s anything it <code>#define</code>s (except for
header guards). Compiling the library in an environment where macros
beginning with <code>PB_DS</code> are defined may yield unpredictable
results in compilation, execution, or both.</para>
<para>
Further dependencies are necessary to create the visual output
for the performance tests. To create these graphs, an
additional package is needed: <command>pychart</command>.
</para>
</section>
<section xml:id="pbds.using.organization">
<info><title>Organization</title></info>
<para>
The various data structures are organized as follows.
</para>
<itemizedlist>
<listitem>
<para>
Branch-Based
</para>
<itemizedlist>
<listitem>
<para>
<classname>basic_branch</classname>
is an abstract base class for branch-based
associative-containers
</para>
</listitem>
<listitem>
<para>
<classname>tree</classname>
is a concrete base class for tree-based
associative-containers
</para>
</listitem>
<listitem>
<para>
<classname>trie</classname>
is a concrete base class for trie-based
associative-containers
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Hash-Based
</para>
<itemizedlist>
<listitem>
<para>
<classname>basic_hash_table</classname>
is an abstract base class for hash-based
associative-containers
</para>
</listitem>
<listitem>
<para>
<classname>cc_hash_table</classname>
is a concrete collision-chaining hash-based
associative container
</para>
</listitem>
<listitem>
<para>
<classname>gp_hash_table</classname>
is a concrete (general) probing hash-based
associative container
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
List-Based
</para>
<itemizedlist>
<listitem>
<para>
<classname>list_update</classname>
is a list-based update-policy associative container
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
Heap-Based
</para>
<itemizedlist>
<listitem>
<para>
<classname>priority_queue</classname>
is a priority queue.
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<para>
The hierarchy is composed naturally so that commonality is
captured by base classes. Thus <function>operator[]</function>
is defined at the base of any hierarchy, since all derived
containers support it. Conversely <function>split</function> is
defined in <classname>basic_branch</classname>, since only
tree-like containers support it.
</para>
<para>
In addition, there are the following diagnostics classes,
used to report errors specific to this library's data
structures.
</para>
<figure>
<title>Exception Hierarchy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PDF" scale="75"
fileref="../images/pbds_exception_hierarchy.pdf"/>
</imageobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_exception_hierarchy.png"/>
</imageobject>
<textobject>
<phrase>Exception Hierarchy</phrase>
</textobject>
</mediaobject>
</figure>
</section>
<section xml:id="pbds.using.tutorial">
<info><title>Tutorial</title></info>
<section xml:id="pbds.using.tutorial.basic">
<info><title>Basic Use</title></info>
<para>
For the most part, the policy-based containers in
namespace <literal>__gnu_pbds</literal> have the same interface as
the equivalent containers in the standard C++ library, except for
the names used for the container classes themselves. For example,
this shows basic operations on a collision-chaining hash-based
container:
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>
#include <cassert>

int main()
{
  __gnu_pbds::cc_hash_table<int, char> c;
  c[2] = 'b';
  assert(c.find(1) == c.end());
}
</programlisting>
<para>
The container is called
<classname>__gnu_pbds::cc_hash_table</classname> instead of
<classname>std::unordered_map</classname>, since <quote>unordered
map</quote> does not necessarily mean a hash-based map as implied by
the C++ library (C++11 or TR1). For example, list-based associative
containers, which are very useful for the construction of
"multimaps," are also unordered.
</para>
<para>This snippet shows a red-black tree based container:</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>
#include <cassert>

int main()
{
  __gnu_pbds::tree<int, char> c;
  c[2] = 'b';
  assert(c.find(2) != c.end());
}
</programlisting>
<para>The container is called <classname>tree</classname> instead of
<classname>map</classname> since the underlying data structures are
named with specificity.
</para>
<para>
The member function naming convention is to strive to be the same as
the equivalent member functions in other C++ standard library
containers. The familiar methods are unchanged:
<function>begin</function>, <function>end</function>,
<function>size</function>, <function>empty</function>, and
<function>clear</function>.
</para>
<para>
This isn't to say that things are exactly as one would expect, given
the container requirements and interfaces in the C++ standard.
</para>
<para>
The names of containers' policies and policy accessors are
different from the usual. For example, if <type>hash_type</type> is
some type of hash-based container, then</para>
<programlisting>
hash_type::hash_fn
</programlisting>
<para>
gives the type of its hash functor, and if <varname>obj</varname> is
some hash-based container object, then
</para>
<programlisting>
obj.get_hash_fn()
</programlisting>
<para>will return a reference to its hash-functor object.</para>
<para>
Similarly, if <type>tree_type</type> is some type of tree-based
container, then
</para>
<programlisting>
tree_type::cmp_fn
</programlisting>
<para>
gives the type of its comparison functor, and if
<varname>obj</varname> is some tree-based container object,
then
</para>
<programlisting>
obj.get_cmp_fn()
</programlisting>
<para>will return a reference to its comparison-functor object.</para>
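<para>
As a small combined illustration of these accessors (the container
choices here are only examples):
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>

int main()
{
  typedef __gnu_pbds::cc_hash_table<int, char> hash_type;
  typedef __gnu_pbds::tree<int, char> tree_type;

  hash_type h;
  tree_type t;

  // References to the policy objects, via the accessors described above.
  hash_type::hash_fn& hf = h.get_hash_fn();
  tree_type::cmp_fn& cf = t.get_cmp_fn();

  // The comparison functor can be used directly, like any functor object.
  bool lt = cf(1, 2);  // true for the default std::less<int>
  (void)hf; (void)lt;
}
</programlisting>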
<para>
It would be nice to give names consistent with those in the existing
C++ standard (inclusive of TR1). Unfortunately, these standard
containers don't consistently name types and methods. For example,
<classname>std::tr1::unordered_map</classname> uses
<type>hasher</type> for the hash functor, but
<classname>std::map</classname> uses <type>key_compare</type> for
the comparison functor. Also, we could not find an accessor for
<classname>std::tr1::unordered_map</classname>'s hash functor, but
<classname>std::map</classname> uses <function>key_comp</function>
for accessing the comparison functor.
</para>
<para>
Instead, <literal>__gnu_pbds</literal> attempts to be internally
consistent, and uses standard-derived terminology if possible.
</para>
<para>
Another source of difference is in scope:
<literal>__gnu_pbds</literal> contains more types of associative
containers than the standard C++ library, and more opportunities
to configure these new containers, since different types of
associative containers are useful in different settings.
</para>
<para>
Namespace <literal>__gnu_pbds</literal> contains different classes for
hash-based containers, tree-based containers, trie-based containers,
and list-based containers.
</para>
<para>
Since associative containers share parts of their interface, they
are organized as a class hierarchy.
</para>
<para>Each type or method is defined in the most-common ancestor
in which it makes sense.
</para>
<para>For example, all associative containers support iteration
expressed in the following form:
</para>
<programlisting>
const_iterator
begin() const;
iterator
begin();
const_iterator
end() const;
iterator
end();
</programlisting>
<para>
But not all containers contain or use hash functors. Yet, both
collision-chaining and (general) probing hash-based associative
containers have a hash functor, so
<classname>basic_hash_table</classname> contains the interface:
</para>
<programlisting>
const hash_fn&
get_hash_fn() const;
hash_fn&
get_hash_fn();
</programlisting>
<para>
so all hash-based associative containers inherit the same
hash-functor accessor methods.
</para>
</section> <!--basic use -->
<section xml:id="pbds.using.tutorial.configuring">
<info>
<title>
Configuring via Template Parameters
</title>
</info>
<para>
In general, each of this library's containers is
parametrized by more policies than those of the standard library. For
example, the standard hash-based container is parametrized as
follows:
</para>
<programlisting>
template<typename Key, typename Mapped, typename Hash,
         typename Pred, typename Allocator, bool Cache_Hash_Code>
class unordered_map;
</programlisting>
<para>
and so can be configured by key type, mapped type, a functor
that translates keys to unsigned integral types, an equivalence
predicate, an allocator, and an indicator whether to store hash
values with each entry. This library's collision-chaining
hash-based container is parametrized as
</para>
<programlisting>
template<typename Key, typename Mapped, typename Hash_Fn,
         typename Eq_Fn, typename Comb_Hash_Fn,
         typename Resize_Policy, bool Store_Hash,
         typename Allocator>
class cc_hash_table;
</programlisting>
<para>
and so can be configured by the first four types of
<classname>std::tr1::unordered_map</classname>, then a
policy for translating the key-hash result into a position
within the table, then a policy by which the table resizes,
an indicator whether to store hash values with each entry,
and an allocator (which is typically the last template
parameter in standard containers).
</para>
<para>
Nearly all policy parameters have default values, so this
need not be considered for casual use. It is important to
note, however, that hash-based containers' policies can
dramatically alter their performance in different settings,
and that tree-based containers' policies can make them
useful for other purposes than just look-up.
</para>
<para>As opposed to associative containers, priority queues have
relatively few configuration options. The priority queue is
parametrized as follows:</para>
<programlisting>
template<typename Value_Type, typename Cmp_Fn, typename Tag,
         typename Allocator>
class priority_queue;
</programlisting>
<para>The <classname>Value_Type</classname>, <classname>Cmp_Fn</classname>, and
<classname>Allocator</classname> parameters are the container's value type,
comparison-functor type, and allocator type, respectively;
these are very similar to the standard's priority queue. The
<classname>Tag</classname> parameter is different: there are a number of
pre-defined tag types corresponding to binary heaps, binomial
heaps, etc., and <classname>Tag</classname> should be instantiated
by one of them.</para>
<para>Note that as opposed to the
<classname>std::priority_queue</classname>,
<classname>__gnu_pbds::priority_queue</classname> is not a
sequence-adapter; it is a regular container.</para>
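<para>
For instance, a priority queue backed by a binomial heap might be
instantiated as in the following sketch
(<classname>binomial_heap_tag</classname> is one of the pre-defined
tags; the values are illustrative):
</para>
<programlisting>
#include <ext/pb_ds/priority_queue.hpp>
#include <functional>

int main()
{
  // The Tag parameter selects the underlying data structure.
  typedef __gnu_pbds::priority_queue<int, std::less<int>,
                                     __gnu_pbds::binomial_heap_tag> pq_type;
  pq_type q;
  q.push(1);
  q.push(5);
  int largest = q.top();  // 5, per the comparison functor
  (void)largest;
}
</programlisting>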
</section>
<section xml:id="pbds.using.tutorial.traits">
<info>
<title>
Querying Container Attributes
</title>
</info>
<para></para>
<para>A container's underlying data structure affects its
performance; unfortunately, it can also affect its interface. When
manipulating associative containers generically, it is often useful
to be able to determine statically which operations they support and
which they do not.
</para>
<para>Happily, the standard provides a good solution to a similar
problem - that of the different behavior of iterators. If
<classname>It</classname> is an iterator, then
</para>
<programlisting>
typename std::iterator_traits<It>::iterator_category
</programlisting>
<para>is one of a small number of pre-defined tag classes, and
</para>
<programlisting>
typename std::iterator_traits<It>::value_type
</programlisting>
<para>is the value type to which the iterator "points".</para>
<para>
Similarly, in this library, if <type>C</type> is a
container, then <classname>container_traits</classname> is a
traits class that stores information about the kind of
container that is implemented.
</para>
<programlisting>
typename container_traits<C>::container_category
</programlisting>
<para>
is one of a small number of predefined tag structures that
uniquely identifies the type of underlying data structure.
</para>
<para>In most cases, however, the exact underlying data
structure is not really important, but what is important is
one of its other attributes: whether it guarantees storing
elements by key order, for example. For this one can
use</para>
<programlisting>
typename container_traits<C>::order_preserving
</programlisting>
<para>
Also,
</para>
<programlisting>
typename container_traits<C>::invalidation_guarantee
</programlisting>
<para>is the container's invalidation guarantee. Invalidation
guarantees are especially important regarding priority queues,
since in this library's design, iterators are practically the
only way to manipulate them.</para>
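<para>
A short sketch of such a compile-time query (C++11
<literal>static_assert</literal> is used here purely for
illustration):
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>

int main()
{
  typedef __gnu_pbds::tree<int, char> tree_type;
  typedef __gnu_pbds::container_traits<tree_type> traits;

  // A tree-based container stores its elements by key order, and
  // traits::container_category identifies the underlying structure.
  static_assert(traits::order_preserving, "trees preserve key order");
}
</programlisting>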
</section>
<section xml:id="pbds.using.tutorial.point_range_iteration">
<info>
<title>
Point and Range Iteration
</title>
</info>
<para></para>
<para>This library differentiates between two types of methods
and iterators: point-type, and range-type. For example,
<function>find</function> and <function>insert</function> are point-type methods, since
they each deal with a specific element; their returned
iterators are point-type iterators. <function>begin</function> and
<function>end</function> are range-type methods, since they are not used to
find a specific element, but rather to go over all elements in
a container object; their returned iterators are range-type
iterators.
</para>
<para>Most containers store elements in an order that is
determined by their interface. Correspondingly, it is fine that
their point-type iterators are synonymous with their range-type
iterators. For example, in the following snippet
</para>
<programlisting>
std::for_each(c.find(1), c.find(5), foo);
</programlisting>
<para>
two point-type iterators (returned by <function>find</function>) are used
for a range-type purpose - going over all elements whose key is
between 1 and 5.
</para>
<para>
Conversely, the above snippet makes no sense for
self-organizing containers - ones that order (and reorder)
their elements by implementation. It would be nice to have a
uniform iterator system that would allow the above snippet to
compile only if it made sense.
</para>
<para>
This could trivially be done by specializing
<function>std::for_each</function> for the case of iterators returned by
<classname>std::tr1::unordered_map</classname>, but this would only solve the
problem for one algorithm and one container. Fundamentally, the
problem is that one can loop using a self-organizing
container's point-type iterators.
</para>
<para>
This library's containers define two families of
iterators: <type>point_const_iterator</type> and
<type>point_iterator</type> are the iterator types returned by
point-type methods; <type>const_iterator</type> and
<type>iterator</type> are the iterator types returned by range-type
methods.
</para>
<programlisting>
class <- some container ->
{
public:
...
typedef <- something -> const_iterator;
typedef <- something -> iterator;
typedef <- something -> point_const_iterator;
typedef <- something -> point_iterator;
...
public:
...
const_iterator begin () const;
iterator begin();
point_const_iterator find(...) const;
point_iterator find(...);
};
</programlisting>
<para>For
containers whose interface defines a sequence order, the situation
is very simple: point-type and range-type iterators are exactly
the same, which means that the above snippet will compile if it
is used for an order-preserving associative container.
</para>
<para>
For self-organizing containers, however, (hash-based
containers as a special example), the preceding snippet will
not compile, because their point-type iterators do not support
<function>operator++</function>.
</para>
<para>In any case, both for order-preserving and self-organizing
containers, the following snippet will compile:
</para>
<programlisting>
typename Cntnr::point_iterator it = c.find(2);
</programlisting>
<para>
because a range-type iterator can always be converted to a
point-type iterator.
</para>
<para>Distinguishing between iterator types also
raises the point that a container's iterators might have
different invalidation rules concerning their de-referencing
abilities and movement abilities. This now corresponds exactly
to the question of whether point-type and range-type iterators
are valid. As explained above, <classname>container_traits</classname> allows
querying a container for its data structure attributes. The
iterator-invalidation guarantees are certainly a property of
the underlying data structure, and so
</para>
<programlisting>
container_traits<C>::invalidation_guarantee
</programlisting>
<para>
gives one of three pre-determined types that answer this
query.
</para>
</section>
</section> <!-- tutorial -->
<section xml:id="pbds.using.examples">
<info><title>Examples</title></info>
<para>
Additional code examples are provided in the source
distribution, as part of the regression and performance
testsuite.
</para>
<section xml:id="pbds.using.examples.basic">
<info><title>Intermediate Use</title></info>
<itemizedlist>
<listitem>
<para>
Basic use of maps:
<filename>basic_map.cc</filename>
</para>
</listitem>
<listitem>
<para>
Basic use of sets:
<filename>basic_set.cc</filename>
</para>
</listitem>
<listitem>
<para>
Conditionally erasing values from an associative container object:
<filename>erase_if.cc</filename>
</para>
</listitem>
<listitem>
<para>
Basic use of multimaps:
<filename>basic_multimap.cc</filename>
</para>
</listitem>
<listitem>
<para>
Basic use of multisets:
<filename>basic_multiset.cc</filename>
</para>
</listitem>
<listitem>
<para>
Basic use of priority queues:
<filename>basic_priority_queue.cc</filename>
</para>
</listitem>
<listitem>
<para>
Splitting and joining priority queues:
<filename>priority_queue_split_join.cc</filename>
</para>
</listitem>
<listitem>
<para>
Conditionally erasing values from a priority queue:
<filename>priority_queue_erase_if.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="pbds.using.examples.query">
<info><title>Querying with <classname>container_traits</classname> </title></info>
<itemizedlist>
<listitem>
<para>
Using <classname>container_traits</classname> to query
about underlying data structure behavior:
<filename>assoc_container_traits.cc</filename>
</para>
</listitem>
<listitem>
<para>
A non-compiling example showing wrong use of finding keys in
hash-based containers: <filename>hash_find_neg.cc</filename>
</para>
</listitem>
<listitem>
<para>
Using <classname>container_traits</classname>
to query about underlying data structure behavior:
<filename>priority_queue_container_traits.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="pbds.using.examples.container">
<info><title>By Container Method</title></info>
<para></para>
<section xml:id="pbds.using.examples.container.hash">
<info><title>Hash-Based</title></info>
<section xml:id="pbds.using.examples.container.hash.resize">
<info><title>size Related</title></info>
<itemizedlist>
<listitem>
<para>
Setting the initial size of a hash-based container
object:
<filename>hash_initial_size.cc</filename>
</para>
</listitem>
<listitem>
<para>
A non-compiling example showing how not to resize a
hash-based container object:
<filename>hash_resize_neg.cc</filename>
</para>
</listitem>
<listitem>
<para>
Resizing a hash-based container object:
<filename>hash_resize.cc</filename>
</para>
</listitem>
<listitem>
<para>
Showing an illegal resize of a hash-based container
object:
<filename>hash_illegal_resize.cc</filename>
</para>
</listitem>
<listitem>
<para>
Changing the load factors of a hash-based container
object: <filename>hash_load_set_change.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="pbds.using.examples.container.hash.hashor">
<info><title>Hashing Function Related</title></info>
<para></para>
<itemizedlist>
<listitem>
<para>
Using a modulo range-hashing function for the case of an
unknown skewed key distribution:
<filename>hash_mod.cc</filename>
</para>
</listitem>
<listitem>
<para>
Writing a range-hashing functor for the case of a known
skewed key distribution:
<filename>shift_mask.cc</filename>
</para>
</listitem>
<listitem>
<para>
Storing the hash value along with each key:
<filename>store_hash.cc</filename>
</para>
</listitem>
<listitem>
<para>
Writing a ranged-hash functor:
<filename>ranged_hash.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="pbds.using.examples.container.branch">
<info><title>Branch-Based</title></info>
<section xml:id="pbds.using.examples.container.branch.split">
<info><title>split or join Related</title></info>
<itemizedlist>
<listitem>
<para>
Joining two tree-based container objects:
<filename>tree_join.cc</filename>
</para>
</listitem>
<listitem>
<para>
Splitting a PATRICIA trie container object:
<filename>trie_split.cc</filename>
</para>
</listitem>
<listitem>
<para>
Order statistics while joining two tree-based container
objects:
<filename>tree_order_statistics_join.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="pbds.using.examples.container.branch.invariants">
<info><title>Node Invariants</title></info>
<itemizedlist>
<listitem>
<para>
Using trees for order statistics:
<filename>tree_order_statistics.cc</filename>
</para>
</listitem>
<listitem>
<para>
Augmenting trees to support operations on line
intervals:
<filename>tree_intervals.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="pbds.using.examples.container.branch.trie">
<info><title>trie</title></info>
<itemizedlist>
<listitem>
<para>
Using a PATRICIA trie for DNA strings:
<filename>trie_dna.cc</filename>
</para>
</listitem>
<listitem>
<para>
Using a PATRICIA
trie for finding all entries whose key matches a given prefix:
<filename>trie_prefix_search.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="pbds.using.examples.container.priority_queue">
<info><title>Priority Queues</title></info>
<itemizedlist>
<listitem>
<para>
Cross referencing an associative container and a priority
queue: <filename>priority_queue_xref.cc</filename>
</para>
</listitem>
<listitem>
<para>
Cross referencing a vector and a priority queue using a
very simple version of Dijkstra's shortest path
algorithm:
<filename>priority_queue_dijkstra.cc</filename>
</para>
</listitem>
</itemizedlist>
</section>
</section>
</section>
</section> <!-- using -->
<!-- S03: Design -->
<section xml:id="containers.pbds.design">
<info><title>Design</title></info>
<?dbhtml filename="policy_data_structures_design.html"?>
<para></para>
<section xml:id="pbds.design.concepts">
<info><title>Concepts</title></info>
<section xml:id="pbds.design.concepts.null_type">
<info><title>Null Policy Classes</title></info>
<para>
Associative containers are typically parametrized by various
policies. For example, a hash-based associative container is
parametrized by a hash-functor, transforming each key into a
non-negative numerical type. Each such value is then further mapped
into a position within the table. The mapping of a key into a
position within the table is therefore a two-step process.
</para>
<para>
In some cases, instantiations are redundant. For example, when the
keys are integers, it is possible to use a redundant hash policy,
which transforms each key into its value.
</para>
<para>
In some other cases, these policies are irrelevant. For example, a
hash-based associative container might transform keys into positions
within a table by a different method than the two-step method
described above. In such a case, the hash functor is simply
irrelevant.
</para>
<para>
When a policy is either redundant or irrelevant, it can be replaced
by <classname>null_type</classname>.
</para>
<para>
For example, a <emphasis>set</emphasis> is an associative
container with one of its template parameters (the one for the
mapped type) replaced with <classname>null_type</classname>. Other
places where this technique makes simplifications possible
include node updates in tree and trie data structures, and hash
and probe functions for hash data structures.
</para>
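<para>
A minimal sketch of a hash-based "set" obtained this way (the key
type and values are illustrative):
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>
#include <cassert>

int main()
{
  // The mapped-type policy is null_type, so this container is a "set".
  __gnu_pbds::cc_hash_table<int, __gnu_pbds::null_type> s;
  s.insert(3);
  assert(s.find(3) != s.end());
}
</programlisting>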
</section>
<section xml:id="pbds.design.concepts.associative_semantics">
<info><title>Map and Set Semantics</title></info>
<section xml:id="concepts.associative_semantics.set_vs_map">
<info>
<title>
Distinguishing Between Maps and Sets
</title>
</info>
<para>
Anyone familiar with the standard knows that there are four kinds
of associative containers: maps, sets, multimaps, and
multisets. The map datatype associates each key to
some data.
</para>
<para>
Sets are associative containers that simply store keys -
they do not map them to anything. In the standard, each map class
has a corresponding set class. E.g.,
<classname>std::map<int, char></classname> maps each
<classname>int</classname> to a <classname>char</classname>, but
<classname>std::set<int></classname> simply stores
<classname>int</classname>s. In this library, however, there are no
distinct classes for maps and sets. Instead, an associative
container's <classname>Mapped</classname> template parameter is a policy: if
it is instantiated by <classname>null_type</classname>, then it
is a "set"; otherwise, it is a "map". E.g.,
</para>
<programlisting>
cc_hash_table<int, char>
</programlisting>
<para>
is a "map" mapping each <type>int</type> value to a <type>
char</type>, but
</para>
<programlisting>
cc_hash_table<int, null_type>
</programlisting>
<para>
is a type that uniquely stores <type>int</type> values.
</para>
<para>Once the <classname>Mapped</classname> template parameter is instantiated
by <classname>null_type</classname>,
the "set" acts very similarly to the standard's sets - it does not
map each key to a distinct <classname>null_type</classname> object. Also,
the container's <type>value_type</type> is essentially
its <type>key_type</type> - just as with the standard's sets.</para>
<para>
The standard's multimaps and multisets allow, respectively,
non-uniquely mapping keys and non-uniquely storing keys. As
discussed, the
reasons why this might be necessary are 1) that a key might be
decomposed into a primary key and a secondary key, 2) that a
key might appear more than once, or 3) any arbitrary
combination of 1)s and 2)s. Correspondingly,
one should use 1) "maps" mapping primary keys to secondary
keys, 2) "maps" mapping keys to size types, or 3) any arbitrary
combination of 1)s and 2)s. Thus, for example, an
<classname>std::multiset<int></classname> might be used to store
multiple instances of integers, but using this library's
containers, one might use
</para>
<programlisting>
tree<int, size_t>
</programlisting>
<para>
i.e., a <classname>map</classname> of <type>int</type>s to
<type>size_t</type>s.
</para>
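<para>
A minimal sketch of this "map each key to its count" approach
(compare <filename>basic_multiset.cc</filename> in the examples):
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>
#include <cassert>
#include <cstddef>

int main()
{
  // Each key maps to the number of times it logically occurs.
  __gnu_pbds::tree<int, std::size_t> ms;
  ++ms[4];   // first logical occurrence of 4
  ++ms[4];   // second logical occurrence
  assert(ms[4] == 2);
}
</programlisting>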
<para>
These "multimaps" and "multisets" might be confusing to
anyone familiar with the standard's <classname>std::multimap</classname> and
<classname>std::multiset</classname>, because there is no clear
correspondence between the two. For example, in some cases
where one uses <classname>std::multiset</classname> in the standard, one might use
in this library a "multimap" of "multisets" - i.e., a
container that maps primary keys each to an associative
container that maps each secondary key to the number of times
it occurs.
</para>
<para>
When one uses a "multimap," one should choose with care the
type of container used for secondary keys.
</para>
</section> <!-- map vs set -->
<section xml:id="concepts.associative_semantics.multi">
<info><title>Alternatives to <classname>std::multiset</classname> and <classname>std::multimap</classname></title></info>
<para>
Brace oneself: this library does not contain containers like
<classname>std::multimap</classname> or
<classname>std::multiset</classname>. Instead, these data
structures can be synthesized via manipulation of the
<classname>Mapped</classname> template parameter.
</para>
<para>
One maps the unique part of a key - the primary key, into an
associative-container of the (originally) non-unique parts of
the key - the secondary key. A primary associative-container
is an associative container of primary keys; a secondary
associative-container is an associative container of
secondary keys.
</para>
<para>
Stepping back a bit, let us start from the beginning.
</para>
<para>
Maps (or sets) allow mapping (or storing) unique-key values.
The standard library also supplies associative containers which
map (or store) multiple values with equivalent keys:
<classname>std::multimap</classname>, <classname>std::multiset</classname>,
<classname>std::tr1::unordered_multimap</classname>, and
<classname>std::tr1::unordered_multiset</classname>. We first discuss how these might
be used, then why we think it is best to avoid them.
</para>
<para>
Suppose one builds a simple bank-account application that
records, for each client (identified by an <classname>std::string</classname>)
and account-id (marked by an <type>unsigned long</type>),
the balance in the account (described by a
<type>float</type>). Suppose further that ordering this
information is not useful, so a hash-based container is
preferable to a tree-based container. Then one can use
</para>
<programlisting>
std::tr1::unordered_map<std::pair<std::string, unsigned long>, float, ...>
</programlisting>
<para>
which hashes every combination of client and account-id. This
might work well, except for the fact that it is now impossible
to efficiently list all of the accounts of a specific client
(this would practically require iterating over all
entries). Instead, one can use
</para>
<programlisting>
std::tr1::unordered_multimap<std::pair<std::string, unsigned long>, float, ...>
</programlisting>
<para>
which hashes every client, and decides equivalence based on
client only. This will ensure that all accounts belonging to a
specific user are stored consecutively.
</para>
<para>
Also, suppose one wants a priority queue of integers
(a container that supports <function>push</function>,
<function>pop</function>, and <function>top</function> operations, the last of which
returns the largest <type>int</type>) that also supports
operations such as <function>find</function> and <function>lower_bound</function>. A
reasonable solution is to build an adapter over
<classname>std::set<int></classname>. In this adapter,
<function>push</function> will just call the tree-based
associative container's <function>insert</function> method; <function>pop</function>
will call its <function>end</function> method, and use it to return the
preceding element (which must be the largest). This might
work well, except that the container object cannot hold
multiple instances of the same integer (<function>push(4)</function>
will be a no-op if <constant>4</constant> is already in the
container object). If multiple keys are necessary, then one
might build the adapter over an
<classname>std::multiset<int></classname>.
</para>
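<para>
A rough sketch of the adapter just described (the class and member
names are illustrative, not part of any library):
</para>
<programlisting>
#include <set>
#include <cassert>

// An adapter over std::set<int>: push maps to insert, and
// top/pop use the largest (i.e., last) element of the set.
class set_based_pq
{
public:
  void push(int v)      { m_set.insert(v); }       // no-op if v is present
  int top() const       { return *m_set.rbegin(); }
  void pop()            { m_set.erase(--m_set.end()); }
  bool empty() const    { return m_set.empty(); }
private:
  std::set<int> m_set;
};

int main()
{
  set_based_pq q;
  q.push(4);
  q.push(4);   // no effect: 4 is already stored
  q.push(7);
  assert(q.top() == 7);
  q.pop();
  assert(q.top() == 4);
}
</programlisting>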
<para>
The standard library's non-unique-mapping containers are useful
when (1) a key can be decomposed in to a primary key and a
secondary key, (2) a key is needed multiple times, or (3) any
combination of (1) and (2).
</para>
<para>
The graphic below shows how the standard library's container
design works internally; in this figure nodes shaded equally
represent equivalent-key values. Equivalent keys are stored
consecutively using the properties of the underlying data
structure: binary search trees (label A) store equivalent-key
values consecutively (in the sense of an in-order walk)
naturally; collision-chaining hash tables (label B) store
equivalent-key values in the same bucket, and the bucket can be
arranged so that equivalent-key values are consecutive.
</para>
<figure>
<title>Non-unique Mapping Standard Containers</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_embedded_lists_1.png"/>
</imageobject>
<textobject>
<phrase>Non-unique Mapping Standard Containers</phrase>
</textobject>
</mediaobject>
</figure>
<para>
Put differently, the standard's non-unique mapping
associative-containers are associative containers that map
primary keys to linked lists that are embedded into the
container. The graphic below shows again the two
containers from the first graphic above, this time with
the embedded linked lists of the grayed nodes marked
explicitly.
</para>
<figure xml:id="fig.pbds_embedded_lists_2">
<title>
Effect of embedded lists in
<classname>std::multimap</classname>
</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_embedded_lists_2.png"/>
</imageobject>
<textobject>
<phrase>
Effect of embedded lists in
<classname>std::multimap</classname>
</phrase>
</textobject>
</mediaobject>
</figure>
<para>
These embedded linked lists have several disadvantages.
</para>
<orderedlist>
<listitem>
<para>
The underlying data structure embeds the linked lists
according to its own consideration, which means that the
search path for a value might include several different
equivalent-key values. For example, the search path for
the black node in the first graphic (label A or B)
includes more than a single gray node.
</para>
</listitem>
<listitem>
<para>
The links of the linked lists are the underlying data
structures' nodes, which typically are quite structured. In
the case of tree-based containers (the graphic above, label
A), each "link" is actually a node with three pointers (one
to a parent and two to children), and a
relatively-complicated iteration algorithm. The linked
lists, therefore, can take up quite a lot of memory, and
iterating over all values equal to a given key (through the
return value of the standard
library's <function>equal_range</function>) can be
expensive.
</para>
</listitem>
<listitem>
<para>
The primary key is stored multiply; this uses more memory.
</para>
</listitem>
<listitem>
<para>
Finally, the interface of this design excludes several
useful underlying data structures. Of all the unordered
self-organizing data structures, practically only
collision-chaining hash tables can (efficiently) guarantee
that equivalent-key values are stored consecutively.
</para>
</listitem>
</orderedlist>
<para>
The above reasons hold even when the ratio of secondary keys to
primary keys (or average number of identical keys) is small, but
when it is large, there are more severe problems:
</para>
<orderedlist>
<listitem>
<para>
The underlying data structures order the links inside each
embedded linked-list according to their internal
considerations, which effectively means that each of the
links is unordered. Irrespective of the underlying data
structure, searching for a specific value can degrade to
linear complexity.
</para>
</listitem>
<listitem>
<para>
Similarly to the above point, it is impossible to apply
to the secondary keys considerations that apply to primary
keys. For example, it is not possible to maintain secondary
keys by sorted order.
</para>
</listitem>
<listitem>
<para>
While the interface "understands" that all equivalent-key
values constitute a distinct list (through
<function>equal_range</function>), the underlying data
structure typically does not. This means that operations such
as erasing from a tree-based container all values whose keys
are equivalent to a given key can be super-linear in the
size of the tree; this is also true for several other
operations that target a specific list.
</para>
</listitem>
</orderedlist>
<para>
In this library, all associative containers map
(or store) unique-key values. One can (1) map primary keys to
secondary associative-containers (containers of
secondary keys) or non-associative containers, (2) map identical
keys to a size-type representing the number of times they
occur, or (3) any combination of (1) and (2). Instead of
allowing multiple equivalent-key values, this library
supplies associative containers based on underlying
data structures that are suitable as secondary
associative-containers.
</para>
<para>
In the figure below, labels A and B show the equivalent
underlying data structures in this library, as mapped to
labels A and B, respectively, of the first graphic above. Each shaded
box represents some size-type or secondary
associative-container.
</para>
<figure>
<title>Non-unique Mapping Containers</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_embedded_lists_3.png"/>
</imageobject>
<textobject>
<phrase>Non-unique Mapping Containers</phrase>
</textobject>
</mediaobject>
</figure>
<para>
In the first example above, then, one would use an associative
container mapping each user to an associative container which
maps each application id to a start time (see
<filename>example/basic_multimap.cc</filename>); in the second
example, one would use an associative container mapping
each <classname>int</classname> to some size-type indicating the
number of times it logically occurs
(see <filename>example/basic_multiset.cc</filename>).
</para>
<para>
See the discussion in list-based container types for containers
especially suited as secondary associative-containers.
</para>
</section>
</section> <!-- map and set semantics -->
<section xml:id="pbds.design.concepts.iterator_semantics">
<info><title>Iterator Semantics</title></info>
<section xml:id="concepts.iterator_semantics.point_and_range">
<info><title>Point and Range Iterators</title></info>
<para>
Iterator concepts are bifurcated in this design, and are
comprised of point-type and range-type iteration.
</para>
<para>
A point-type iterator is an iterator that refers to a specific
element as returned through an
associative-container's <function>find</function> method.
</para>
<para>
A range-type iterator is an iterator that is used to go over a
sequence of elements, as returned by a container's
<function>begin</function> method.
</para>
<para>
A point-type method is a method that
returns a point-type iterator; a range-type method is a method
that returns a range-type iterator.
</para>
<para>For most containers, these types are synonymous; for
self-organizing containers, such as hash-based containers or
priority queues, these are inherently different (in any
implementation, including that of C++ standard library
components), but in this design, it is made explicit. They are
distinct types.
</para>
</section>
<section xml:id="concepts.iterator_semantics.both">
<info><title>Distinguishing Point and Range Iterators</title></info>
<para>When using this library, it is necessary to differentiate
between two types of methods and iterators: point-type methods and
iterators, and range-type methods and iterators. Each associative
container's interface includes the methods:</para>
<programlisting>
point_const_iterator
find(const_key_reference r_key) const;
point_iterator
find(const_key_reference r_key);
std::pair<point_iterator,bool>
insert(const_reference r_val);
</programlisting>
<para>The relationship between these iterator types varies between
container types. The figure below
shows the most general invariant between point-type and
range-type iterators: in <emphasis>A</emphasis>, an <literal>iterator</literal> can
always be converted to a <literal>point_iterator</literal>. <emphasis>B</emphasis>
shows the invariant for order-preserving containers: point-type
iterators are synonymous with range-type iterators.
Orthogonally, <emphasis>C</emphasis> shows the invariant for "set"
containers: iterators are synonymous with const iterators.</para>
<figure>
<title>Point Iterator Hierarchy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_point_iterator_hierarchy.png"/>
</imageobject>
<textobject>
<phrase>Point Iterator Hierarchy</phrase>
</textobject>
</mediaobject>
</figure>
<para>Note that point-type iterators in self-organizing containers
(hash-based associative containers) lack movement
operators, such as <literal>operator++</literal> - in fact, this
is the reason why this library differs from the standard C++ library's
design on this point.</para>
<para>Typically, one can determine an iterator's movement
capabilities using
<literal>std::iterator_traits<It>::iterator_category</literal>,
which is a <literal>struct</literal> indicating the iterator's
movement capabilities. Unfortunately, none of the standard predefined
categories reflect an iterator's <emphasis>not</emphasis> having any
movement capabilities whatsoever. Consequently,
<literal>pb_ds</literal> adds a type
<literal>trivial_iterator_tag</literal> (whose name is taken from
a concept in C++ standardese), which is the category of iterators
with no movement capabilities. All other standard C++ library
tags, such as <literal>forward_iterator_tag</literal>, retain their
common use.</para>
</section>
<section xml:id="pbds.design.concepts.invalidation">
<info><title>Invalidation Guarantees</title></info>
<para>
If one manipulates a container object, then iterators previously
obtained from it can be invalidated. In some cases a
previously-obtained iterator cannot be de-referenced; in other cases,
the iterator's next or previous element might have changed
unpredictably. This corresponds exactly to the question whether a
point-type or range-type iterator (see previous concept) is valid or
not. In this design, one can query a container (at compile time) about
its invalidation guarantees.
</para>
<para>
Given three different types of associative containers, a modifying
operation (in that example, <function>erase</function>) invalidated
iterators in three different ways: the iterator of one container
remained completely valid - it could be de-referenced and
incremented; the iterator of a different container could not even be
de-referenced; the iterator of the third container could be
de-referenced, but its "next" iterator changed unpredictably.
</para>
<para>
Distinguishing between find and range types allows fine-grained
invalidation guarantees, because these questions correspond exactly
to the question of whether point-type iterators and range-type
iterators are valid. The graphic below shows tags corresponding to
different types of invalidation guarantees.
</para>
<figure>
<title>Invalidation Guarantee Tags Hierarchy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PDF" scale="75"
fileref="../images/pbds_invalidation_tag_hierarchy.pdf"/>
</imageobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_invalidation_tag_hierarchy.png"/>
</imageobject>
<textobject>
<phrase>Invalidation Guarantee Tags Hierarchy</phrase>
</textobject>
</mediaobject>
</figure>
<itemizedlist>
<listitem>
<para>
<classname>basic_invalidation_guarantee</classname>
corresponds to a basic guarantee that a point-type iterator,
a found pointer, or a found reference, remains valid as long
as the container object is not modified.
</para>
</listitem>
<listitem>
<para>
<classname>point_invalidation_guarantee</classname>
corresponds to a guarantee that a point-type iterator, a
found pointer, or a found reference, remains valid even if
the container object is modified.
</para>
</listitem>
<listitem>
<para>
<classname>range_invalidation_guarantee</classname>
corresponds to a guarantee that a range-type iterator remains
valid even if the container object is modified.
</para>
</listitem>
</itemizedlist>
<para>To find the invalidation guarantee of a
container, one can use</para>
<programlisting>
typename container_traits<Cntnr>::invalidation_guarantee
</programlisting>
<para>Note that this hierarchy corresponds to the logic it
represents: if a container has range-invalidation guarantees,
then it must also have find invalidation guarantees;
correspondingly, its invalidation guarantee (in this case
<classname>range_invalidation_guarantee</classname>)
can be cast to its base class (in this case <classname>point_invalidation_guarantee</classname>).
This means that this hierarchy can be used easily with
standard metaprogramming techniques, by specializing on the
type of <literal>invalidation_guarantee</literal>.</para>
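<para>
A sketch of dispatching on this guarantee through overloading (the
function name is illustrative; the per-container guarantees noted in
the comments are typical, not exhaustive):
</para>
<programlisting>
#include <ext/pb_ds/assoc_container.h>
#include <ext/pb_ds/tag_and_trait.hpp>

// Overloads on the guarantee tag: the most derived matching tag wins.
template<typename Cntnr>
void inspect(const Cntnr&, __gnu_pbds::point_invalidation_guarantee)
{
  // Point-type iterators survive modifications of the container.
}

template<typename Cntnr>
void inspect(const Cntnr&, __gnu_pbds::basic_invalidation_guarantee)
{
  // Iterators must be re-acquired after any modification.
}

template<typename Cntnr>
void inspect(const Cntnr& c)
{
  typedef typename
    __gnu_pbds::container_traits<Cntnr>::invalidation_guarantee guarantee;
  inspect(c, guarantee());
}

int main()
{
  __gnu_pbds::gp_hash_table<int, char> h;  // typically a basic guarantee
  __gnu_pbds::tree<int, char> t;           // typically a range guarantee
  inspect(h);
  inspect(t);
}
</programlisting>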
<para>
These types of problems were addressed, in a more general
setting, in <xref linkend="biblio.meyers96more"/> - Item 2. In
our opinion, an invalidation-guarantee hierarchy would solve
these problems in all container types - not just associative
containers.
</para>
</section>
</section> <!-- iterator semantics -->
<section xml:id="pbds.design.concepts.genericity">
<info><title>Genericity</title></info>
<para>
The design attempts to address the following problem of
data-structure genericity. When writing a function manipulating
a generic container object, what is the behavior of the object?
Suppose one writes
</para>
<programlisting>
template<typename Cntnr>
void
some_op_sequence(Cntnr &r_container)
{
...
}
</programlisting>
<para>
then one needs to address the following questions in the body
of <function>some_op_sequence</function>:
</para>
<itemizedlist>
<listitem>
<para>
Which types and methods does <literal>Cntnr</literal> support?
Containers based on hash tables can be queried for the
hash-functor type and object; this is meaningless for tree-based
containers. Containers based on trees can be split, joined, or
can erase iterators and return the following iterator; this
cannot be done by hash-based containers.
</para>
</listitem>
<listitem>
<para>
What are the exception and invalidation guarantees
of <literal>Cntnr</literal>? A container based on a probing
hash-table invalidates all iterators when it is modified; this
is not the case for containers based on node-based
trees. Containers based on a node-based tree can be split or
joined without exceptions; this is not the case for containers
based on vector-based trees.
</para>
</listitem>
<listitem>
<para>
How does the container maintain its elements? Tree-based and
trie-based containers store elements by key order; others,
typically, do not. A container based on splay trees or on lists
with update policies "caches" frequently accessed elements;
containers based on most other underlying data structures do
not.
</para>
</listitem>
<listitem>
<para>
How does one query a container about characteristics and
capabilities? What is the relationship between two different
data structures, if anything?
</para>
</listitem>
</itemizedlist>
<para>The remainder of this section explains these issues in
detail.</para>
<section xml:id="concepts.genericity.tag">
<info><title>Tag</title></info>
<para>
Tags are very useful for manipulating generic types. For example, if
<literal>It</literal> is an iterator class, then <literal>typename
It::iterator_category</literal> or <literal>typename
std::iterator_traits<It>::iterator_category</literal> will
yield its category, and <literal>typename
std::iterator_traits<It>::value_type</literal> will yield its
value type.
</para>
<para>
This library contains a container tag hierarchy corresponding to the
diagram below.
</para>
<figure>
<title>Container Tag Hierarchy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PDF" scale="75"
fileref="../images/pbds_container_tag_hierarchy.pdf"/>
</imageobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_container_tag_hierarchy.png"/>
</imageobject>
<textobject>
<phrase>Container Tag Hierarchy</phrase>
</textobject>
</mediaobject>
</figure>
<para>
Given any container <type>Cntnr</type>, the tag of
the underlying data structure can be found via <literal>typename
Cntnr::container_category</literal>.
</para>
</section> <!-- tag -->
<section xml:id="concepts.genericity.traits">
<info><title>Traits</title></info>
<para></para>
<para>Additionally, a traits mechanism can be used to query a
container type for its attributes. Given any container
<literal>Cntnr</literal>, then <literal>container_traits<Cntnr></literal>
is a traits class identifying the properties of the
container.</para>
<para>To find if a container can throw when a key is erased (which
is true for vector-based trees, for example), one can
use
</para>
<programlisting>container_traits<Cntnr>::erase_can_throw</programlisting>
<para>
Some of the definitions in <classname>container_traits</classname>
are dependent on other
definitions. If <classname>container_traits<Cntnr>::order_preserving</classname>
is <constant>true</constant> (which is the case for containers
based on trees and tries), then the container can be split or
joined; in this
case, <classname>container_traits<Cntnr>::split_join_can_throw</classname>
indicates whether splits or joins can throw exceptions (which is
true for vector-based trees);
otherwise <classname>container_traits<Cntnr>::split_join_can_throw</classname>
will yield a compilation error. (This is somewhat similar to a
compile-time version of the COM model).
</para>
</section> <!-- traits -->
</section> <!-- genericity -->
</section> <!-- concepts -->
<section xml:id="pbds.design.container">
<info><title>By Container</title></info>
<!-- hash -->
<section xml:id="pbds.design.container.hash">
<info><title>hash</title></info>
<!--
// hash policies
/// general terms / background
/// range hashing policies
/// ranged-hash policies
/// implementation
// resize policies
/// general
/// size policies
/// trigger policies
/// implementation
// policy interactions
/// probe/size/trigger
/// hash/trigger
/// eq/hash/storing hash values
/// size/load-check trigger
-->
<section xml:id="container.hash.interface">
<info><title>Interface</title></info>
<para>
The collision-chaining hash-based container has the
following declaration.</para>
<programlisting>
template<
  typename Key,
  typename Mapped,
  typename Hash_Fn = std::hash<Key>,
  typename Eq_Fn = std::equal_to<Key>,
  typename Comb_Hash_Fn = direct_mask_range_hashing<>,
  typename Resize_Policy = /* default, explained below */,
  bool Store_Hash = false,
  typename Allocator = std::allocator<char> >
class cc_hash_table;
</programlisting>
<para>The parameters have the following meaning:</para>
<orderedlist>
<listitem><para><classname>Key</classname> is the key type.</para></listitem>
<listitem><para><classname>Mapped</classname> is the mapped-policy.</para></listitem>
<listitem><para><classname>Hash_Fn</classname> is a key hashing functor.</para></listitem>
<listitem><para><classname>Eq_Fn</classname> is a key equivalence functor.</para></listitem>
<listitem><para><classname>Comb_Hash_Fn</classname> is a range-hashing functor;
it describes how to translate hash values into positions
within the table. </para></listitem>
<listitem><para><classname>Resize_Policy</classname> describes how a container object
should change its internal size. </para></listitem>
<listitem><para><classname>Store_Hash</classname> indicates whether the hash value
should be stored with each entry. </para></listitem>
<listitem><para><classname>Allocator</classname> is an allocator
type.</para></listitem>
</orderedlist>
<para>The probing hash-based container has the following
declaration.</para>
<programlisting>
template<
  typename Key,
  typename Mapped,
  typename Hash_Fn = std::hash<Key>,
  typename Eq_Fn = std::equal_to<Key>,
  typename Comb_Probe_Fn = direct_mask_range_hashing<>,
  typename Probe_Fn = /* default, explained below */,
  typename Resize_Policy = /* default, explained below */,
  bool Store_Hash = false,
  typename Allocator = std::allocator<char> >
class gp_hash_table;
</programlisting>
<para>The parameters are identical to those of the
collision-chaining container, except for the following.</para>
<orderedlist>
<listitem><para><classname>Comb_Probe_Fn</classname> describes how to transform a probe
sequence into a sequence of positions within the table.</para></listitem>
<listitem><para><classname>Probe_Fn</classname> describes a probe sequence policy.</para></listitem>
</orderedlist>
<para>Some of the default template values depend on the values of
other parameters, and are explained below.</para>
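<para>As a concrete illustration, a collision-chaining map from strings
to integers can be declared with all policies left at their defaults
(a minimal sketch; the header and namespace names are those used by
this library within GCC):</para>
<programlisting>
#include <string>
#include <ext/pb_ds/assoc_container.hpp>

// A collision-chaining hash table mapping std::string to int, using the
// default hash, equivalence, combining, resize, and allocator policies.
typedef __gnu_pbds::cc_hash_table<std::string, int> map_type;

int main()
{
  map_type m;
  m["pbds"] = 1;                    // operator[] inserts, as in std::map
  m["hash"] = 2;
  return m.find("pbds") == m.end(); // 0 if the key was found
}
</programlisting>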
</section>
<section xml:id="container.hash.details">
<info><title>Details</title></info>
<section xml:id="container.hash.details.hash_policies">
<info><title>Hash Policies</title></info>
<section xml:id="details.hash_policies.general">
<info><title>General</title></info>
<para>The following is an explanation of some of the functions
involved in hashing; the graphic below illustrates the discussion.</para>
<figure>
<title>Hash functions, ranged-hash functions, and
range-hashing functions</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_hash_ranged_hash_range_hashing_fns.png"/>
</imageobject>
<textobject>
<phrase>Hash functions, ranged-hash functions, and
range-hashing functions</phrase>
</textobject>
</mediaobject>
</figure>
<para>Let U be a domain (e.g., the integers, or the
strings of 3 characters). A hash-table algorithm needs to map
elements of U "uniformly" into the range [0,..., m -
1] (where m is a non-negative integral value, and
is, in general, time varying). I.e., the algorithm needs
a ranged-hash function</para>
<para>
f : U × Z<subscript>+</subscript> → Z<subscript>+</subscript>
</para>
<para>such that for any u in U ,</para>
<para>0 ≤ f(u, m) ≤ m - 1</para>
<para>and which has "good uniformity" properties (see
<xref linkend="biblio.knuth98sorting"/>).
One
common solution is to use the composition of the hash
function</para>
<para>h : U → Z<subscript>+</subscript> ,</para>
<para>which maps elements of U into the non-negative
integrals, and</para>
<para>g : Z<subscript>+</subscript> × Z<subscript>+</subscript> →
Z<subscript>+</subscript>,</para>
<para>which maps a non-negative hash value, and a non-negative
range upper-bound into a non-negative integral in the range
between 0 (inclusive) and the range upper bound (exclusive),
i.e., for any r in Z<subscript>+</subscript>,</para>
<para>0 ≤ g(r, m) ≤ m - 1</para>
<para>The resulting ranged-hash function is</para>
<!-- ranged_hash_composed_of_hash_and_range_hashing -->
<equation>
<title>Ranged Hash Function</title>
<mathphrase>
f(u , m) = g(h(u), m)
</mathphrase>
</equation>
<para>From the above, it is obvious that given g and
h, f can always be composed (however the converse
is not true). The standard's hash-based containers allow specifying
a hash function, and use a hard-wired range-hashing function;
the ranged-hash function is implicitly composed.</para>
<para>The above describes the case where a key is to be mapped
into a single position within a hash table, e.g.,
in a collision-chaining table. In other cases, a key is to be
mapped into a sequence of positions within a table,
e.g., in a probing table. Similar terms apply in this
case: the table requires a ranged-probe function,
mapping a key into a sequence of positions within the table.
This is typically achieved by composing a hash function
mapping the key into a non-negative integral type, a
probe function transforming the hash value into a
sequence of hash values, and a range-hashing function
transforming the sequence of hash values into a sequence of
positions.</para>
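<para>In code, the composition can be pictured roughly as follows (a
minimal sketch in which <function>h</function>, <function>g</function>
and <function>f</function> are stand-ins for the hash, range-hashing,
and ranged-hash functions; they are illustrations only, not part of
this library's interface):</para>
<programlisting>
#include <cstddef>
#include <string>
#include <functional>

// h : U -> Z+ , a hash function mapping keys to non-negative integrals.
std::size_t h(const std::string& u)
{ return std::hash<std::string>()(u); }

// g : Z+ x Z+ -> Z+ , a range-hashing function mapping a hash value
// and a table size m to a position in [0, m - 1].
std::size_t g(std::size_t r, std::size_t m)
{ return r % m; }

// f : U x Z+ -> Z+ , the composed ranged-hash function f(u, m) = g(h(u), m).
std::size_t f(const std::string& u, std::size_t m)
{ return g(h(u), m); }
</programlisting>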
</section>
<section xml:id="details.hash_policies.range">
<info><title>Range Hashing</title></info>
<para>Some common choices for range-hashing functions are the
division, multiplication, and middle-square methods (<xref linkend="biblio.knuth98sorting"/>), defined
as</para>
<equation>
<title>Range-Hashing, Division Method</title>
<mathphrase>
g(r, m) = r mod m
</mathphrase>
</equation>
<para>g(r, m) = ⌈ u/v ( a r mod v ) ⌉</para>
<para>and</para>
<para>g(r, m) = ⌈ u/v ( r<superscript>2</superscript> mod v ) ⌉</para>
<para>respectively, for some positive integrals u and
v (typically powers of 2), and some a. Each of
these range-hashing functions works best for some different
setting.</para>
<para>The division method (see above) is a
very common choice. However, even this single method can be
implemented in two very different ways. It is possible to
implement it using the low-level % (modulo) operation (for any m), or
the low-level & (bit-mask) operation (for the case where
m is a power of 2), i.e.,</para>
<equation>
<title>Division via Prime Modulo</title>
<mathphrase>
g(r, m) = r % m
</mathphrase>
</equation>
<para>and</para>
<equation>
<title>Division via Bit Mask</title>
<mathphrase>
g(r, m) = r & (m - 1) (with m =
2<superscript>k</superscript> for some k)
</mathphrase>
</equation>
<para>respectively.</para>
<para>The % (modulo) implementation has the advantage that for
m a prime far from a power of 2, g(r, m) is
affected by all the bits of r (minimizing the chance of
collision). It has the disadvantage of using the costly modulo
operation. This method is hard-wired into SGI's implementation.</para>
<para>The & (bit-mask) implementation has the advantage of
relying on the fast bit-wise and operation. It has the
disadvantage that g(r, m) is affected only by the
low-order bits of r. This method is hard-wired into
Dinkumware's implementation.</para>
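<para>The two implementations can be sketched as follows (a minimal
illustration; the mask-based version assumes m is a power of 2):</para>
<programlisting>
#include <cstddef>

// Division via modulo: works for any m, but uses the relatively
// costly % operation.
std::size_t range_hash_mod(std::size_t r, std::size_t m)
{ return r % m; }

// Division via bit-mask: requires m == 2^k for some k, and is affected
// only by the low-order bits of r.
std::size_t range_hash_mask(std::size_t r, std::size_t m)
{ return r & (m - 1); }
</programlisting>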
</section>
<section xml:id="details.hash_policies.ranged">
<info><title>Ranged Hash</title></info>
<para>In some cases it is beneficial to allow the
client to directly specify a ranged-hash function. It is
true that the writer of the ranged-hash function cannot rely
on the values of m having specific numerical properties
suitable for hashing (in the sense used in <xref linkend="biblio.knuth98sorting"/>), since
the values of m are determined by a resize policy with
possibly orthogonal considerations.</para>
<para>There are two cases where a ranged-hash function can be
superior. The first is when using perfect hashing; the
second is when the values of m can be used to estimate
the "general" number of distinct values required. This is
described in the following.</para>
<para>Let</para>
<para>
s = [ s<subscript>0</subscript>,..., s<subscript>t - 1</subscript>]
</para>
<para>be a string of t characters, each of which is from
domain S. Consider the following ranged-hash
function:</para>
<equation>
<title>
A Standard String Hash Function
</title>
<mathphrase>
f<subscript>1</subscript>(s, m) = ∑ <subscript>i =
0</subscript><superscript>t - 1</superscript> s<subscript>i</subscript> a<superscript>i</superscript> mod m
</mathphrase>
</equation>
<para>where a is some non-negative integral value. This is
the standard string-hashing function used in SGI's
implementation (with a = 5). Its advantage is that
it takes into account all of the characters of the string.</para>
<para>Now assume that s is the string representation of a
long DNA sequence (and so S = {'A', 'C', 'G',
'T'}). In this case, scanning the entire string might be
prohibitively expensive. A possible alternative might be to use
only the first k characters of the string, where</para>
<para>|S|<superscript>k</superscript> ≥ m ,</para>
<para>i.e., using the hash function</para>
<equation>
<title>
Only k String DNA Hash
</title>
<mathphrase>
f<subscript>2</subscript>(s, m) = ∑ <subscript>i
= 0</subscript><superscript>k - 1</superscript> s<subscript>i</subscript> a<superscript>i</superscript> mod m
</mathphrase>
</equation>
<para>requiring scanning over only</para>
<para>k = log<subscript>4</subscript>( m )</para>
<para>characters.</para>
<para>Other more elaborate hash-functions might scan k
characters starting at a random position (determined at each
resize), or scanning k random positions (determined at
each resize), i.e., using</para>
<para>f<subscript>3</subscript>(s, m) = ∑ <subscript>i = r<subscript>0</subscript></subscript><superscript>r<subscript>0</subscript> + k - 1</superscript> s<subscript>i</subscript> a<superscript>i</superscript> mod m ,</para>
<para>or</para>
<para>f<subscript>4</subscript>(s, m) = ∑ <subscript>i = 0</subscript><superscript>k - 1</superscript> s<subscript>r<subscript>i</subscript></subscript> a<superscript>r<subscript>i</subscript></superscript> mod m ,</para>
<para>respectively, for r<subscript>0</subscript>, ..., r<subscript>k-1</subscript>
each in the (inclusive) range [0, ..., t-1].</para>
<para>Note that, in contrast to the composition described above, these
functions cannot be decomposed into a hash function and a range-hashing function.</para>
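<para>The function f<subscript>2</subscript> above might be sketched as
follows (an illustration only; the character-to-value mapping, the
choice of a, and the way k is computed are arbitrary):</para>
<programlisting>
#include <cstddef>
#include <string>

// Hash only the first k characters of a DNA string, where k is the
// smallest value with 4^k >= m (|S| = 4 for {'A', 'C', 'G', 'T'}).
// Assumes m > 0.
std::size_t dna_prefix_hash(const std::string& s, std::size_t m)
{
  const std::size_t a = 5;

  std::size_t k = 0;
  for (std::size_t p = 1; p < m; p *= 4)
    ++k;

  std::size_t r = 0;
  std::size_t a_pow = 1;
  for (std::size_t i = 0; i < k && i < s.size(); ++i)
    {
      r = (r + static_cast<std::size_t>(s[i]) * a_pow) % m;
      a_pow = (a_pow * a) % m;
    }
  return r;
}
</programlisting>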
</section>
<section xml:id="details.hash_policies.implementation">
<info><title>Implementation</title></info>
<para>This sub-subsection describes the implementation of
the above in this library. It first explains range-hashing
functions in collision-chaining tables, then ranged-hash
functions in collision-chaining tables, then probing-based
tables, and finally lists the relevant classes in this
library.</para>
<section xml:id="hash_policies.implementation.collision-chaining">
<info><title>
Range-Hashing and Ranged-Hashes in Collision-Chaining Tables
</title></info>
<para><classname>cc_hash_table</classname> is
parametrized by <classname>Hash_Fn</classname> and <classname>Comb_Hash_Fn</classname>, a
hash functor and a combining hash functor, respectively.</para>
<para>In general, <classname>Comb_Hash_Fn</classname> is considered a
range-hashing functor. <classname>cc_hash_table</classname>
synthesizes a ranged-hash function from <classname>Hash_Fn</classname> and
<classname>Comb_Hash_Fn</classname>. The figure below shows an <classname>insert</classname> sequence
diagram for this case. The user inserts an element (point A),
the container transforms the key into a non-negative integral
using the hash functor (points B and C), and transforms the
result into a position using the combining functor (points D
and E).</para>
<figure>
<title>Insert hash sequence diagram</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_hash_range_hashing_seq_diagram.png"/>
</imageobject>
<textobject>
<phrase>Insert hash sequence diagram</phrase>
</textobject>
</mediaobject>
</figure>
<para>If <classname>cc_hash_table</classname>'s
hash-functor, <classname>Hash_Fn</classname>, is instantiated by
<classname>null_type</classname>, then <classname>Comb_Hash_Fn</classname> is taken to be
a ranged-hash function. The graphic below shows an <function>insert</function> sequence
diagram. The user inserts an element (point A), and the container
transforms the key into a position using the combining functor
(points B and C).</para>
<figure>
<title>Insert hash sequence diagram with a null policy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_hash_range_hashing_seq_diagram2.png"/>
</imageobject>
<textobject>
<phrase>Insert hash sequence diagram with a null policy</phrase>
</textobject>
</mediaobject>
</figure>
</section>
<section xml:id="hash_policies.implementation.probe">
<info><title>
Probing tables
</title></info>
<para><classname>gp_hash_table</classname> is parametrized by
<classname>Hash_Fn</classname>, <classname>Probe_Fn</classname>,
and <classname>Comb_Probe_Fn</classname>. As before, if
<classname>Hash_Fn</classname> and <classname>Probe_Fn</classname>
are both <classname>null_type</classname>, then
<classname>Comb_Probe_Fn</classname> is a ranged-probe
functor. Otherwise, <classname>Hash_Fn</classname> is a hash
functor, <classname>Probe_Fn</classname> is a functor for offsets
from a hash value, and <classname>Comb_Probe_Fn</classname>
transforms a probe sequence into a sequence of positions within
the table.</para>
</section>
<section xml:id="hash_policies.implementation.predefined">
<info><title>
Pre-Defined Policies
</title></info>
<para>This library contains some pre-defined classes
implementing range-hashing and probing functions:</para>
<orderedlist>
<listitem><para><classname>direct_mask_range_hashing</classname>
and <classname>direct_mod_range_hashing</classname>
are range-hashing functions based on a bit-mask and a modulo
operation, respectively.</para></listitem>
<listitem><para><classname>linear_probe_fn</classname>, and
<classname>quadratic_probe_fn</classname> are
a linear probe and a quadratic probe function,
respectively.</para></listitem>
</orderedlist>
<para>
The graphic below shows the relationships.
</para>
<figure>
<title>Hash policy class diagram</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_hash_policy_cd.png"/>
</imageobject>
<textobject>
<phrase>Hash policy class diagram</phrase>
</textobject>
</mediaobject>
</figure>
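<para>For example, a probing table combining mask-based range hashing
with linear probing can be declared as follows (a minimal sketch;
parameters not shown keep their defaults, and the header names are
those used by this library within GCC):</para>
<programlisting>
#include <functional>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/hash_policy.hpp>

using namespace __gnu_pbds;

// A probing hash table mapping int to int: positions are computed by
// mask-based range hashing, and collisions are resolved by linear probing.
typedef gp_hash_table<int, int,
                      std::hash<int>,
                      std::equal_to<int>,
                      direct_mask_range_hashing<>,
                      linear_probe_fn<> > probing_map_type;
</programlisting>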
</section>
</section> <!-- impl -->
</section>
<section xml:id="container.hash.details.resize_policies">
<info><title>Resize Policies</title></info>
<section xml:id="resize_policies.general">
<info><title>General</title></info>
<para>Hash-tables, as opposed to trees, do not naturally grow or
shrink. It is necessary to specify policies to determine how
and when a hash table should change its size. Usually, resize
policies can be decomposed into orthogonal policies:</para>
<orderedlist>
<listitem><para>A size policy indicating how a hash table
should grow (e.g., it should multiply by powers of
2).</para></listitem>
<listitem><para>A trigger policy indicating when a hash
table should grow (e.g., a load factor is
exceeded).</para></listitem>
</orderedlist>
</section>
<section xml:id="resize_policies.size">
<info><title>Size Policies</title></info>
<para>Size policies determine how a hash table changes size. These
policies are simple, and there are relatively few sensible
options. An exponential-size policy (with the initial size and
growth factors both powers of 2) works well with a mask-based
range-hashing function, and is the
hard-wired policy used by Dinkumware. A
prime-list based policy works well with a modulo-prime range
hashing function and is the hard-wired policy used by SGI's
implementation.</para>
</section>
<section xml:id="resize_policies.trigger">
<info><title>Trigger Policies</title></info>
<para>Trigger policies determine when a hash table changes size.
Following is a description of two policies: load-check
policies, and collision-check policies.</para>
<para>Load-check policies are straightforward. The user specifies
two factors, α<subscript>min</subscript> and
α<subscript>max</subscript>, and the hash table maintains the
invariant that</para>
<para>α<subscript>min</subscript> ≤ (number of
stored elements) / (hash-table size) ≤
α<subscript>max</subscript><remark>load factor min max</remark></para>
<para>Collision-check policies work in the opposite direction of
load-check policies. They focus on keeping the number of
collisions moderate and hoping that the size of the table will
not grow very large, instead of keeping a moderate load-factor
and hoping that the number of collisions will be small. A
maximal collision-check policy resizes when the longest
probe-sequence grows too large.</para>
<para>Consider the graphic below. Let the size of the hash table
be denoted by m, the length of a probe sequence be denoted by k,
and some load factor be denoted by α. We would like to
calculate the minimal value of k, such that if there were α
m elements in the hash table, a probe sequence of length k would
be found with probability at most 1/m.</para>
<figure>
<title>Balls and bins</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_balls_and_bins.png"/>
</imageobject>
<textobject>
<phrase>Balls and bins</phrase>
</textobject>
</mediaobject>
</figure>
<para>Denote the probability that a probe sequence of length
k appears in bin i by p<subscript>i</subscript>, the
length of the probe sequence of bin i by
l<subscript>i</subscript>, and assume uniform distribution. Then</para>
<equation>
<title>
Probability of Probe Sequence of Length k
</title>
<mathphrase>
p<subscript>1</subscript> =
</mathphrase>
</equation>
<para>P(l<subscript>1</subscript> ≥ k) =</para>
<para>
P(l<subscript>1</subscript> ≥ α ( 1 + ( k / α - 1 ) ) ) ≤ (a)
</para>
<para>
e ^ ( - ( α ( k / α - 1 )<superscript>2</superscript> ) / 2 )
</para>
<para>where (a) follows from the Chernoff bound (<xref linkend="biblio.motwani95random"/>). To
calculate the probability that some bin contains a probe
sequence greater than k, we note that the
l<subscript>i</subscript> are negatively-dependent
(<xref linkend="biblio.dubhashi98neg"/>)
. Let
I(.) denote the indicator function. Then</para>
<equation>
<title>
Probability Probe Sequence in Some Bin
</title>
<mathphrase>
P( exists<subscript>i</subscript> l<subscript>i</subscript> ≥ k ) =
</mathphrase>
</equation>
<para>P ( ∑ <subscript>i = 1</subscript><superscript>m</superscript>
I(l<subscript>i</subscript> ≥ k) ≥ 1 ) =</para>
<para>P ( ∑ <subscript>i = 1</subscript><superscript>m</superscript> I (
l<subscript>i</subscript> ≥ k ) ≥ m p<subscript>1</subscript> ( 1 + 1 / (m
p<subscript>1</subscript>) - 1 ) ) ≤ (a)</para>
<para>e ^ ( ( - m p<subscript>1</subscript> ( 1 / (m p<subscript>1</subscript>)
- 1 ) <superscript>2</superscript> ) / 2 ) ,</para>
<para>where (a) follows from the fact that the Chernoff bound can
be applied to negatively-dependent variables (<xref
linkend="biblio.dubhashi98neg"/>). Inserting the first probability
equation into the second one, and equating with 1/m, we
obtain</para>
<para>k ~ √ ( 2 α ln 2 m ln(m) ) .</para>
</section>
<section xml:id="resize_policies.impl">
<info><title>Implementation</title></info>
<para>This sub-subsection describes the implementation of the
above in this library. It first describes resize policies and
their decomposition into trigger and size policies, then
describes pre-defined classes, and finally discusses controlled
access to the policies' internals.</para>
<section xml:id="resize_policies.impl.decomposition">
<info><title>Decomposition</title></info>
<para>Each hash-based container is parametrized by a
<classname>Resize_Policy</classname> parameter; the container derives
<classname>public</classname>ly from <classname>Resize_Policy</classname>. For
example:</para>
<programlisting>
template<typename Key,
         typename Mapped,
         ...
         typename Resize_Policy,
         ...>
class cc_hash_table : public Resize_Policy
</programlisting>
<para>As a container object is modified, it continuously notifies
its <classname>Resize_Policy</classname> base of internal changes
(e.g., collisions encountered and elements being
inserted). It queries its <classname>Resize_Policy</classname> base whether
it needs to be resized, and if so, to what size.</para>
<para>The graphic below shows a (possible) sequence diagram
of an insert operation. The user inserts an element; the hash
table notifies its resize policy that a search has started
(point A); in this case, a single collision is encountered -
the table notifies its resize policy of this (point B); the
container finally notifies its resize policy that the search
has ended (point C); it then queries its resize policy whether
a resize is needed, and if so, what is the new size (points D
to G); following the resize, it notifies the policy that a
resize has completed (point H); finally, the element is
inserted, and the policy notified (point I).</para>
<figure>
<title>Insert resize sequence diagram</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_insert_resize_sequence_diagram1.png"/>
</imageobject>
<textobject>
<phrase>Insert resize sequence diagram</phrase>
</textobject>
</mediaobject>
</figure>
<para>In practice, a resize policy can usually be decomposed
orthogonally into a size policy and a trigger policy. Consequently,
the library contains a single class for instantiating a resize
policy: <classname>hash_standard_resize_policy</classname>
is parametrized by <classname>Size_Policy</classname> and
<classname>Trigger_Policy</classname>, derives <classname>public</classname>ly from
both, and acts as a standard delegate (<xref linkend="biblio.gof"/>)
to these policies.</para>
<para>The two graphics immediately below show sequence diagrams
illustrating the interaction between the standard resize policy
and its trigger and size policies, respectively.</para>
<figure>
<title>Standard resize policy trigger sequence
diagram</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_insert_resize_sequence_diagram2.png"/>
</imageobject>
<textobject>
<phrase>Standard resize policy trigger sequence
diagram</phrase>
</textobject>
</mediaobject>
</figure>
<figure>
<title>Standard resize policy size sequence
diagram</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_insert_resize_sequence_diagram3.png"/>
</imageobject>
<textobject>
<phrase>Standard resize policy size sequence
diagram</phrase>
</textobject>
</mediaobject>
</figure>
</section>
<section xml:id="resize_policies.impl.predefined">
<info><title>Predefined Policies</title></info>
<para>The library includes the following
instantiations of size and trigger policies:</para>
<orderedlist>
<listitem><para><classname>hash_load_check_resize_trigger</classname>
implements a load check trigger policy.</para></listitem>
<listitem><para><classname>cc_hash_max_collision_check_resize_trigger</classname>
implements a collision check trigger policy.</para></listitem>
<listitem><para><classname>hash_exponential_size_policy</classname>
implements an exponential-size policy (which should be used
with mask range hashing).</para></listitem>
<listitem><para><classname>hash_prime_size_policy</classname>
implements a size policy based on a sequence of primes
(which should be used with mod range hashing).</para></listitem>
</orderedlist>
<para>The graphic below gives an overall picture of the resize-related
classes. <classname>basic_hash_table</classname>
is parametrized by <classname>Resize_Policy</classname>, which it subclasses
publicly. This class is currently instantiated only by <classname>hash_standard_resize_policy</classname>.
<classname>hash_standard_resize_policy</classname>
itself is parametrized by <classname>Trigger_Policy</classname> and
<classname>Size_Policy</classname>. Currently, <classname>Trigger_Policy</classname> is
instantiated by <classname>hash_load_check_resize_trigger</classname>,
or <classname>cc_hash_max_collision_check_resize_trigger</classname>;
<classname>Size_Policy</classname> is instantiated by <classname>hash_exponential_size_policy</classname>,
or <classname>hash_prime_size_policy</classname>.</para>
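<para>For example, a resize policy composed of an exponential size
policy and a load-check trigger policy can be assembled and handed to a
collision-chaining table as follows (a minimal sketch; the
template-parameter order of
<classname>hash_standard_resize_policy</classname> is assumed from the
class synopsis):</para>
<programlisting>
#include <functional>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/hash_policy.hpp>

using namespace __gnu_pbds;

// Grow by powers of 2; trigger resizes by checking the load factor.
typedef hash_standard_resize_policy<
  hash_exponential_size_policy<>,
  hash_load_check_resize_trigger<> >      resize_policy_type;

// A collision-chaining table using mask-based range hashing together
// with the resize policy above.
typedef cc_hash_table<int, int,
                      std::hash<int>,
                      std::equal_to<int>,
                      direct_mask_range_hashing<>,
                      resize_policy_type> map_type;
</programlisting>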
</section>
<section xml:id="resize_policies.impl.internals">
<info><title>Controlling Access to Internals</title></info>
<para>There are cases where (controlled) access to resize
policies' internals is beneficial. E.g., it is sometimes
useful to query a hash-table for the table's actual size (as
opposed to its <function>size()</function> - the number of values it
currently holds); it is sometimes useful to set a table's
initial size, externally resize it, or change load factors.</para>
<para>Clearly, supporting such methods both decreases the
encapsulation of hash-based containers, and increases the
diversity between different associative-containers' interfaces.
Conversely, omitting such methods can decrease containers'
flexibility.</para>
<para>In order to avoid, to the extent possible, the above
conflict, the hash-based containers themselves do not address
any of these questions; this is deferred to the resize policies,
which are easier to change or replace. Thus, for example,
neither <classname>cc_hash_table</classname> nor
<classname>gp_hash_table</classname>
contain methods for querying the actual size of the table; this
is deferred to <classname>hash_standard_resize_policy</classname>.</para>
<para>Furthermore, the policies themselves are parametrized by
template arguments that determine the methods they support
(
<xref linkend="biblio.alexandrescu01modern"/>
shows techniques for doing so). <classname>hash_standard_resize_policy</classname>
is parametrized by <classname>External_Size_Access</classname> that
determines whether it supports methods for querying the actual
size of the table or resizing it. <classname>hash_load_check_resize_trigger</classname>
is parametrized by <classname>External_Load_Access</classname> that
determines whether it supports methods for querying or
modifying the loads. <classname>cc_hash_max_collision_check_resize_trigger</classname>
is parametrized by <classname>External_Load_Access</classname> that
determines whether it supports methods for querying the
load.</para>
<para>Some operations, for example, resizing a container at
run time, or changing the load factors of a load-check trigger
policy, require the container itself to resize. As mentioned
above, the hash-based containers themselves do not contain
these types of methods, only their resize policies.
Consequently, there must be some mechanism for a resize policy
to manipulate the hash-based container. As the hash-based
container is a subclass of the resize policy, this is done
through virtual methods. Each hash-based container has a
<classname>private</classname> <classname>virtual</classname> method:</para>
<programlisting>
virtual void
do_resize(size_type new_size);
</programlisting>
<para>which resizes the container. Implementations of
<classname>Resize_Policy</classname> can export public methods for resizing
the container externally; these methods internally call
<classname>do_resize</classname> to resize the table.</para>
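<para>For example, under the assumption that setting
<classname>External_Size_Access</classname> to <constant>true</constant>
exposes <function>get_actual_size</function> and
<function>resize</function> methods from the standard resize policy,
a table can be resized externally roughly as follows (a sketch, not a
complete program):</para>
<programlisting>
#include <cstddef>
#include <functional>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/hash_policy.hpp>

using namespace __gnu_pbds;

// A resize policy granting external access to the table's actual size.
typedef hash_standard_resize_policy<
  hash_exponential_size_policy<>,
  hash_load_check_resize_trigger<>,
  /* External_Size_Access = */ true>         ext_resize_policy_type;

typedef cc_hash_table<int, int,
                      std::hash<int>,
                      std::equal_to<int>,
                      direct_mask_range_hashing<>,
                      ext_resize_policy_type> map_type;

void resize_example()
{
  map_type m;
  const std::size_t actual = m.get_actual_size(); // the table's size, not size()
  m.resize(256);                                  // externally resize the table
  (void)actual;
}
</programlisting>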
</section>
</section>
</section> <!-- resize policies -->
<section xml:id="container.hash.details.policy_interaction">
<info><title>Policy Interactions</title></info>
<para>Hash-tables are, unfortunately, especially sensitive to the
choice of policies. One of the more complicated aspects of this
is that poor combinations of good policies can form a poor
container. Following are some considerations.</para>
<section xml:id="policy_interaction.probesizetrigger">
<info><title>probe/size/trigger</title></info>
<para>Some combinations do not work well for probing containers.
For example, combining a quadratic probe policy with an
exponential size policy can yield a poor container: when an
element is inserted, a trigger policy might decide that there
is no need to resize, as the table still contains unused
entries; the probe sequence, however, might never reach any of
the unused entries.</para>
<para>Unfortunately, this library cannot detect such problems at
compilation (they are halting reducible). It therefore defines
an exception class <classname>insert_error</classname> to throw an
exception in this case.</para>
</section>
<section xml:id="policy_interaction.hashtrigger">
<info><title>hash/trigger</title></info>
<para>Some trigger policies are especially susceptible to poor
hash functions. Suppose, as an extreme case, that the hash
function transforms each key to the same hash value. After some
inserts, a collision detecting policy will always indicate that
the container needs to grow.</para>
<para>The library, therefore, by design, limits each operation to
one resize. For each <classname>insert</classname>, for example, it queries
only once whether a resize is needed.</para>
</section>
<section xml:id="policy_interaction.eqstorehash">
<info><title>equivalence functors/storing hash values/hash</title></info>
<para><classname>cc_hash_table</classname> and
<classname>gp_hash_table</classname> are
parametrized by an equivalence functor and by a
<classname>Store_Hash</classname> parameter. If the latter parameter is
<classname>true</classname>, then the container stores with each entry
a hash value, and uses this value in case of collisions to
determine whether the equivalence functor needs to be applied at all. This can lower the
cost of collisions for some types, but increase it for
other types.</para>
<para>If a ranged-hash function or ranged probe function is
directly supplied, however, then it makes no sense to store the
hash value with each entry. This library's container will
fail at compilation, by design, if this is attempted.</para>
</section>
<section xml:id="policy_interaction.sizeloadtrigger">
<info><title>size/load-check trigger</title></info>
<para>Assume a size policy issues an increasing sequence of sizes
a, a q, a q<superscript>2</superscript>, a q<superscript>3</superscript>, ... For
example, an exponential size policy might issue the sequence of
sizes 8, 16, 32, 64, ...</para>
<para>If a load-check trigger policy is used, with loads
α<subscript>min</subscript> and α<subscript>max</subscript>,
respectively, then it is a good idea to have:</para>
<orderedlist>
<listitem><para>α<subscript>max</subscript> ~ 1 / q</para></listitem>
<listitem><para>α<subscript>min</subscript> < 1 / (2 q)</para></listitem>
</orderedlist>
<para>This will ensure that the amortized hash cost of each
modifying operation is at most approximately 3.</para>
<para>α<subscript>min</subscript> ~ α<subscript>max</subscript> is, in
any case, a bad choice, and α<subscript>min</subscript> >
α <subscript>max</subscript> is horrendous.</para>
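<para>For example, with an exponential size policy that doubles the
table size (q = 2), loads of α<subscript>min</subscript> = 0.125 and
α<subscript>max</subscript> = 0.5 satisfy the above guidelines. A
trigger configured with these loads might look as follows (a sketch
assuming the trigger's constructor accepts the minimal and maximal
loads as its two arguments):</para>
<programlisting>
#include <ext/pb_ds/hash_policy.hpp>

using namespace __gnu_pbds;

// A load-check trigger keeping the load factor between 0.125 and 0.5,
// matching a size policy that grows by a factor of q = 2.
typedef hash_load_check_resize_trigger<> trigger_type;
const trigger_type trigger(0.125f, 0.5f);
</programlisting>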
</section>
</section>
</section> <!-- details -->
</section> <!-- hash -->
<!-- tree -->
<section xml:id="pbds.design.container.tree">
<info><title>tree</title></info>
<section xml:id="container.tree.interface">
<info><title>Interface</title></info>
<para>The tree-based container has the following declaration:</para>
<programlisting>
template<
typename Key,
typename Mapped,
typename Cmp_Fn = std::less<Key>,
typename Tag = rb_tree_tag,
template<
typename Const_Node_Iterator,
typename Node_Iterator,
typename Cmp_Fn_,
typename Allocator_>
class Node_Update = null_node_update,
typename Allocator = std::allocator<char> >
class tree;
</programlisting>
<para>The parameters have the following meaning:</para>
<orderedlist>
<listitem>
<para><classname>Key</classname> is the key type.</para></listitem>
<listitem>
<para><classname>Mapped</classname> is the mapped-policy.</para></listitem>
<listitem>
<para><classname>Cmp_Fn</classname> is a key comparison functor.</para></listitem>
<listitem>
<para><classname>Tag</classname> specifies which underlying data structure
to use.</para></listitem>
<listitem>
<para><classname>Node_Update</classname> is a policy for updating node
invariants.</para></listitem>
<listitem>
<para><classname>Allocator</classname> is an allocator
type.</para></listitem>
</orderedlist>
<para>The <classname>Tag</classname> parameter specifies which underlying
data structure to use. Instantiating it by <classname>rb_tree_tag</classname>, <classname>splay_tree_tag</classname>, or
<classname>ov_tree_tag</classname>,
specifies an underlying red-black tree, splay tree, or
ordered-vector tree, respectively; any other tag is illegal.
Note that containers based on the former two contain more types
and methods than the latter (e.g.,
<classname>reverse_iterator</classname> and <classname>rbegin</classname>), and have different
exception and invalidation guarantees.</para>
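<para>For example, a red-black tree of integers maintaining order
statistics can be declared and used as follows (a minimal sketch;
<function>find_by_order</function> and <function>order_of_key</function>
are supplied by the order-statistics node-update policy discussed
later in this section, and <classname>null_type</classname> denotes
that no mapped value is stored):</para>
<programlisting>
#include <cstddef>
#include <functional>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>

using namespace __gnu_pbds;

// A red-black tree of int keys, whose nodes carry order-statistics metadata.
typedef tree<int, null_type, std::less<int>, rb_tree_tag,
             tree_order_statistics_node_update> set_type;

int main()
{
  set_type s;
  s.insert(1);
  s.insert(12);
  s.insert(30);

  const int second_smallest = *s.find_by_order(1); // 12 (0-based order)
  const std::size_t rank = s.order_of_key(30);     // 2 keys are smaller than 30

  return (second_smallest == 12 && rank == 2) ? 0 : 1;
}
</programlisting>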
</section>
<section xml:id="container.tree.details">
<info><title>Details</title></info>
<section xml:id="container.tree.node">
<info><title>Node Invariants</title></info>
<para>Consider the two trees in the graphic below, labels A and B. The first
is a tree of floats; the second is a tree of pairs, each
signifying a geometric line interval. Each element in a tree is referred to as a node of the tree. Of course, each of
these trees can support the usual queries: the first can easily
search for <constant>0.4</constant>; the second can easily search for
<classname>std::make_pair(10, 41)</classname>.</para>
<para>Each of these trees can efficiently support other queries.
The first can efficiently determine that the 2nd key in the
tree is <constant>0.3</constant>; the second can efficiently determine
whether any of its intervals overlaps
<programlisting>std::make_pair(29,42)</programlisting> (useful in geometric
applications or distributed file systems with leases, for
example). It should be noted that an <classname>std::set</classname> can
only solve these types of problems with linear complexity.</para>
<para>In order to do so, each tree stores some metadata in
each node, and maintains node invariants (see <xref linkend="biblio.clrs2001"/>.) The first stores in
each node the size of the sub-tree rooted at the node; the
second stores at each node the maximal endpoint of the
intervals at the sub-tree rooted at the node.</para>
<figure>
<title>Tree node invariants</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_tree_node_invariants.png"/>
</imageobject>
<textobject>
<phrase>Tree node invariants</phrase>
</textobject>
</mediaobject>
</figure>
<para>Supporting such trees is difficult for a number of
reasons:</para>
<orderedlist>
<listitem><para>There must be a way to specify what a node's metadata
should be (if any).</para></listitem>
<listitem><para>Various operations can invalidate node
invariants. The graphic below shows how a right rotation,
performed on A, results in B, with nodes x and y having
corrupted invariants (the grayed nodes in C). The graphic shows
how an insert, performed on D, results in E, with nodes x and y
having corrupted invariants (the grayed nodes in F). It is not
feasible to know outside the tree the effect of an operation on
the nodes of the tree.</para></listitem>
<listitem><para>The search paths of standard associative containers are
defined by comparisons between keys, and not through
metadata.</para></listitem>
<listitem><para>It is not feasible to know in advance which methods trees
can support. Besides the usual <classname>find</classname> method, the
first tree can support a <classname>find_by_order</classname> method, while
the second can support an <classname>overlaps</classname> method.</para></listitem>
</orderedlist>
<figure>
<title>Tree node invalidation</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_tree_node_invalidations.png"/>
</imageobject>
<textobject>
<phrase>Tree node invalidation</phrase>
</textobject>
</mediaobject>
</figure>
<para>These problems are solved by a combination of two means:
node iterators, and template-template node updater
parameters.</para>
<section xml:id="container.tree.node.iterators">
<info><title>Node Iterators</title></info>
<para>Each tree-based container defines two additional iterator
types, <classname>const_node_iterator</classname>
and <classname>node_iterator</classname>.
These iterators allow descending from a node to one of its
children. Node iterators allow search paths different from those
determined by the comparison functor. The <classname>tree</classname>
supports the methods:</para>
<programlisting>
const_node_iterator
node_begin() const;
node_iterator
node_begin();
const_node_iterator
node_end() const;
node_iterator
node_end();
</programlisting>
<para>The first pair returns node iterators corresponding to the
root node of the tree; the latter pair returns node iterators
corresponding to a just-after-leaf node.</para>
</section>
<section xml:id="container.tree.node.updator">
<info><title>Node Updator</title></info>
<para>The tree-based containers are parametrized by a
<classname>Node_Update</classname> template-template parameter. A
tree-based container instantiates
<classname>Node_Update</classname> to some
<classname>node_update</classname> class, and publicly subclasses
<classname>node_update</classname>. The graphic below shows this
scheme, as well as some predefined policies (which are explained
below).</para>
<figure>
<title>A tree and its update policy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_tree_node_updator_policy_cd.png"/>
</imageobject>
<textobject>
<phrase>A tree and its update policy</phrase>
</textobject>
</mediaobject>
</figure>
<para><classname>node_update</classname> (an instantiation of
<classname>Node_Update</classname>) must define <classname>metadata_type</classname> as
the type of metadata it requires. For order statistics,
e.g., <classname>metadata_type</classname> might be <classname>size_t</classname>.
The tree defines within each node a <classname>metadata_type</classname>
object.</para>
<para><classname>node_update</classname> must also define the following method
for restoring node invariants:</para>
<programlisting>
void
operator()(node_iterator nd_it, const_node_iterator end_nd_it)
</programlisting>
<para>In this method, <varname>nd_it</varname> is a
<classname>node_iterator</classname> corresponding to a node for which
A) all descendants have valid invariants, and B) the node's own
invariants might be violated; <classname>end_nd_it</classname> is
a <classname>const_node_iterator</classname> corresponding to a
just-after-leaf node. This method should correct the node
invariants of the node pointed to by
<classname>nd_it</classname>. For example, say node x in the
graphic below, label A, has an invalid invariant, but its children,
y and z, have valid invariants. After the invocation, all three
nodes should have valid invariants, as in label B.</para>
<figure>
<title>Restoring node invariants</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_restoring_node_invariants.png"/>
</imageobject>
<textobject>
<phrase>Restoring node invariants</phrase>
</textobject>
</mediaobject>
</figure>
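<para>For example, a node-update policy maintaining in each node the
size of its subtree might restore the invariant as follows (a sketch;
it assumes node iterators expose <function>get_l_child</function>,
<function>get_r_child</function> and <function>get_metadata</function>,
and that metadata is written through a <literal>const_cast</literal>
because the metadata accessor returns a const reference):</para>
<programlisting>
#include <cstddef>

// A sketch of a Node_Update policy: each node's metadata is the size of
// the subtree rooted at that node.
template<typename Const_Node_Iterator,
         typename Node_Iterator,
         typename Cmp_Fn_,
         typename Allocator_>
struct subtree_size_node_update
{
  typedef std::size_t metadata_type;

  void
  operator()(Node_Iterator nd_it, Const_Node_Iterator end_nd_it)
  {
    // The children's metadata is assumed valid; recompute this node's.
    const std::size_t l_size = (nd_it.get_l_child() == end_nd_it) ?
      0 : nd_it.get_l_child().get_metadata();
    const std::size_t r_size = (nd_it.get_r_child() == end_nd_it) ?
      0 : nd_it.get_r_child().get_metadata();

    const_cast<metadata_type&>(nd_it.get_metadata()) = 1 + l_size + r_size;
  }
};
</programlisting>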
<para>When a tree operation might invalidate some node invariant,
it invokes this method in its <classname>node_update</classname> base to
restore the invariant. For example, the graphic below shows
an <function>insert</function> operation (point A); the tree performs some
operations, and calls the update functor three times (points B,
C, and D). It is well known that any <function>insert</function>,
<function>erase</function>, <function>split</function> or <function>join</function> can restore
all node invariants by a small number of node-invariant updates
(<xref linkend="biblio.clrs2001"/>).</para>
<figure>
<title>Insert update sequence</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_update_seq_diagram.png"/>
</imageobject>
<textobject>
<phrase>Insert update sequence</phrase>
</textobject>
</mediaobject>
</figure>
<para>To complete the description of the scheme, three questions
need to be answered:</para>
<orderedlist>
<listitem><para>How can a tree which supports order statistics define a
method such as <classname>find_by_order</classname>?</para></listitem>
<listitem><para>How can the node updater base access methods of the
tree?</para></listitem>
<listitem><para>How can the following cyclic dependency be resolved?
<classname>node_update</classname> is a base class of the tree, yet it
uses node iterators defined in the tree (its child).</para></listitem>
</orderedlist>
<para>The first two questions are answered by the fact that
<classname>node_update</classname> (an instantiation of
<classname>Node_Update</classname>) is a <emphasis>public</emphasis> base class
of the tree. Consequently:</para>
<orderedlist>
<listitem><para>Any public methods of
<classname>node_update</classname> are automatically methods of
the tree (<xref linkend="biblio.alexandrescu01modern"/>).
Thus an order-statistics node updater,
<classname>tree_order_statistics_node_update</classname> defines
the <function>find_by_order</function> method; any tree
instantiated by this policy consequently supports this method as
well.</para></listitem>
<listitem><para>In C++, if a base class declares a method as
<literal>virtual</literal>, it is
<literal>virtual</literal> in its subclasses. If
<classname>node_update</classname> needs to access one of the
tree's methods, say the member function
<function>end</function>, it simply declares that method as
<literal>virtual</literal> abstract.</para></listitem>
</orderedlist>
<para>The cyclic dependency is solved through template-template
parameters. <classname>Node_Update</classname> is parametrized by
the tree's node iterators, its comparison functor, and its
allocator type. Thus, instantiations of
<classname>Node_Update</classname> have all information
required.</para>
<para>This library assumes that constructing a metadata object and
modifying it are exception free. Suppose that during some method,
say <classname>insert</classname>, a metadata-related operation
(e.g., changing the value of a metadata object) were to throw an
exception; rolling back the method would be unusually complex.</para>
<para>Previously, a distinction was made between redundant
policies and null policies. Node invariants show a
case where null policies are required.</para>
<para>Assume a regular tree is required, one which need not
support order statistics or interval overlap queries.
Seemingly, in this case a redundant policy (a policy which
does not affect nodes' contents) would suffice. This, however,
would lead to the following drawbacks:</para>
<orderedlist>
<listitem><para>Each node would carry a useless metadata object, wasting
space.</para></listitem>
<listitem><para>The tree cannot know if its
<classname>Node_Update</classname> policy actually modifies a
node's metadata (this is halting reducible). In the graphic
below, assume the shaded node is inserted. The tree would have
to traverse the useless path shown to the root, applying
redundant updates all the way.</para></listitem>
</orderedlist>
<figure>
<title>Useless update path</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_rationale_null_node_updator.png"/>
</imageobject>
<textobject>
<phrase>Useless update path</phrase>
</textobject>
</mediaobject>
</figure>
<para>A null policy class, <classname>null_node_update</classname>,
solves both these problems. The tree detects that node
invariants are irrelevant, and defines its internals accordingly.</para>
</section>
</section>
<section xml:id="container.tree.details.split">
<info><title>Split and Join</title></info>
<para>Tree-based containers support split and join methods.
It is possible to split a tree so that it passes
all nodes with keys larger than a given key to a different
tree. These methods have the following advantages over the
alternative of externally inserting to the destination
tree and erasing from the source tree:</para>
<orderedlist>
<listitem><para>These methods are efficient - red-black trees are split
and joined in poly-logarithmic complexity; ordered-vector
trees are split and joined at linear complexity. The
alternatives have super-linear complexity.</para></listitem>
<listitem><para>Aside from orders of growth, these operations perform
few allocations and de-allocations. For red-black trees, allocations are not performed,
and the methods are exception-free. </para></listitem>
</orderedlist>
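<para>A split followed by a join can be sketched as follows (a minimal
illustration; after the split, all keys greater than the given key
reside in the second tree):</para>
<programlisting>
#include <ext/pb_ds/assoc_container.hpp>

using namespace __gnu_pbds;

typedef tree<int, null_type> set_type; // a red-black tree by default

void split_join_example()
{
  set_type smaller, larger;
  for (int i = 0; i < 100; ++i)
    smaller.insert(i);

  // Move all keys greater than 50 into larger.
  smaller.split(50, larger);

  // Move them back; every key in larger is greater than every key in smaller.
  smaller.join(larger);
}
</programlisting>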
</section>
</section> <!-- details -->
</section> <!-- tree -->
<!-- trie -->
<section xml:id="pbds.design.container.trie">
<info><title>Trie</title></info>
<section xml:id="container.trie.interface">
<info><title>Interface</title></info>
<para>The trie-based container has the following declaration:</para>
<programlisting>
template<typename Key,
         typename Mapped,
         typename E_Access_Traits = /* default, explained below */,
         typename Tag = pat_trie_tag,
         template<typename Const_Node_Iterator,
                  typename Node_Iterator,
                  typename E_Access_Traits_,
                  typename Allocator_>
         class Node_Update = null_node_update,
         typename Allocator = std::allocator<char> >
class trie;
</programlisting>
<para>The parameters have the following meaning:</para>
<orderedlist>
<listitem><para><classname>Key</classname> is the key type.</para></listitem>
<listitem><para><classname>Mapped</classname> is the mapped-policy.</para></listitem>
<listitem><para><classname>E_Access_Traits</classname> is described below.</para></listitem>
<listitem><para><classname>Tag</classname> specifies which underlying data structure
to use, and is described shortly.</para></listitem>
<listitem><para><classname>Node_Update</classname> is a policy for updating node
invariants. This is described below.</para></listitem>
<listitem><para><classname>Allocator</classname> is an allocator
type.</para></listitem>
</orderedlist>
<para>The <classname>Tag</classname> parameter specifies which underlying
data structure to use. Instantiating it by <classname>pat_trie_tag</classname>, specifies an
underlying PATRICIA trie (explained shortly); any other tag is
currently illegal.</para>
<para>Following is a description of a (PATRICIA) trie
(this implementation follows <xref linkend="biblio.okasaki98mereable"/> and
<xref linkend="biblio.filliatre2000ptset"/>).
</para>
<para>A (PATRICIA) trie is similar to a tree, but with the
following differences:</para>
<orderedlist>
<listitem><para>It explicitly views keys as a sequence of elements.
E.g., a trie can view a string as a sequence of
characters; a trie can view a number as a sequence of
bits.</para></listitem>
<listitem><para>It is not (necessarily) binary. Each node has fan-out n
+ 1, where n is the number of distinct
elements.</para></listitem>
<listitem><para>It stores values only at leaf nodes.</para></listitem>
<listitem><para>Internal nodes have the properties that A) each has at
least two children, and B) each shares the same prefix with
any of its descendants.</para></listitem>
</orderedlist>
<para>A (PATRICIA) trie has some useful properties:</para>
<orderedlist>
<listitem><para>It can be configured to use large node fan-out, giving it
very efficient find performance (albeit at the cost of greater insertion
complexity and size).</para></listitem>
<listitem><para>It works well for common-prefix keys.</para></listitem>
<listitem><para>It can efficiently support queries such as finding which
keys match a given prefix. This is sometimes useful in file
systems and routers, and for "type-ahead" aka predictive text matching
on mobile devices.</para></listitem>
</orderedlist>
</section>
<section xml:id="container.trie.details">
<info><title>Details</title></info>
<section xml:id="container.trie.details.etraits">
<info><title>Element Access Traits</title></info>
<para>A trie inherently views its keys as sequences of elements.
For example, a trie can view a string as a sequence of
characters. A trie needs to map each of n elements to a
number in {0, ..., n - 1}. For example, a trie can map a
character <varname>c</varname> to
<programlisting>static_cast<size_t>(c)</programlisting>.</para>
<para>Seemingly, then, a trie can assume that its keys support
(const) iterators, and that the <classname>value_type</classname> of this
iterator can be cast to a <classname>size_t</classname>. There are several
reasons, though, to decouple the mechanism by which the trie
accesses its keys' elements from the trie:</para>
<orderedlist>
<listitem><para>In some cases, the numerical value of an element is
inappropriate. Consider a trie storing DNA strings. It is
logical to use a trie with a fan-out of 5 = 1 + |{'A', 'C',
'G', 'T'}|. This requires mapping 'T' to 3, though.</para></listitem>
<listitem><para>In some cases the keys' iterators are different than what
is needed. For example, a trie can be used to search for
common suffixes, by using strings'
<classname>reverse_iterator</classname>. As another example, a trie mapping
UNICODE strings would have a huge fan-out if each node would
branch on a UNICODE character; instead, one can define an
iterator iterating over 8-bit (or less) groups.</para></listitem>
</orderedlist>
<para>The trie is,
consequently, parametrized by <classname>E_Access_Traits</classname> -
traits which instruct how to access sequences' elements.
<classname>string_trie_e_access_traits</classname>
is a traits class for strings. Each such traits class defines some
types; for example,</para>
<programlisting>
typename E_Access_Traits::const_iterator
</programlisting>
<para>is a const iterator iterating over a key's elements. The
traits class must also define methods for obtaining an iterator
to the first and last element of a key.</para>
<para>The graphic below shows a
(PATRICIA) trie resulting from inserting the words: "I wish
that I could ever see a poem lovely as a trie" (which,
unfortunately, does not rhyme).</para>
<para>The leaf nodes contain values; each internal node contains
two <classname>typename E_Access_Traits::const_iterator</classname>
objects, indicating the maximal common prefix of all keys in
the sub-tree. For example, the shaded internal node roots a
sub-tree with leaves "a" and "as". The maximal common prefix is
"a". The internal node consequently contains two const
iterators, one pointing to <varname>'a'</varname>, and the other to
<varname>'s'</varname>.</para>
<figure>
<title>A PATRICIA trie</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_pat_trie.png"/>
</imageobject>
<textobject>
<phrase>A PATRICIA trie</phrase>
</textobject>
</mediaobject>
</figure>
</section>
<section xml:id="container.trie.details.node">
<info><title>Node Invariants</title></info>
<para>Trie-based containers support node invariants, as do
tree-based containers. There are two minor
differences, though, which, unfortunately, prevent them from
sharing the same node-updating policies:</para>
<orderedlist>
<listitem>
<para>A trie's <classname>Node_Update</classname> template-template
parameter is parametrized by <classname>E_Access_Traits</classname>, while
a tree's <classname>Node_Update</classname> template-template parameter is
parametrized by <classname>Cmp_Fn</classname>.</para></listitem>
<listitem><para>Tree-based containers store values in all nodes, while
trie-based containers (at least in this implementation) store
values only in leaf nodes.</para></listitem>
</orderedlist>
<para>The graphic below shows the scheme, as well as some predefined
policies (which are explained below).</para>
<figure>
<title>A trie and its update policy</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_trie_node_updator_policy_cd.png"/>
</imageobject>
<textobject>
<phrase>A trie and its update policy</phrase>
</textobject>
</mediaobject>
</figure>
<para>This library offers the following pre-defined trie node
updating policies:</para>
<orderedlist>
<listitem>
<para>
<classname>trie_order_statistics_node_update</classname>
supports order statistics.
</para>
</listitem>
<listitem><para><classname>trie_prefix_search_node_update</classname>
supports searching for ranges that match a given prefix (see the example following this list).</para></listitem>
<listitem><para><classname>null_node_update</classname>
is the null node updater.</para></listitem>
</orderedlist>
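<para>For example, a trie of strings supporting prefix searches might be
declared and used as follows (a sketch; it assumes the prefix-search
node-update policy exposes a <function>prefix_range</function> method
returning a pair of iterators, and uses the
<classname>string_trie_e_access_traits</classname> class mentioned
above as the element-access traits for strings):</para>
<programlisting>
#include <string>
#include <utility>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/trie_policy.hpp>

using namespace __gnu_pbds;

// A PATRICIA trie of strings, supporting queries for the range of keys
// beginning with a given prefix.
typedef trie<std::string, null_type,
             string_trie_e_access_traits<>,
             pat_trie_tag,
             trie_prefix_search_node_update> trie_type;

int prefix_example()
{
  trie_type t;
  t.insert("wish");
  t.insert("wisdom");
  t.insert("trie");

  // All keys beginning with "wis": here, "wisdom" and "wish".
  std::pair<trie_type::iterator, trie_type::iterator> r = t.prefix_range("wis");

  int n = 0;
  for (trie_type::iterator it = r.first; it != r.second; ++it)
    ++n;
  return n; // 2
}
</programlisting>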
</section>
<section xml:id="container.trie.details.split">
<info><title>Split and Join</title></info>
<para>Trie-based containers support split and join methods; the
rationale is equal to that of tree-based containers supporting
these methods.</para>
</section>
</section> <!-- details -->
</section> <!-- trie -->
<!-- list_update -->
<section xml:id="pbds.design.container.list">
<info><title>List</title></info>
<section xml:id="container.list.interface">
<info><title>Interface</title></info>
<para>The list-based container has the following declaration:</para>
<programlisting>
template<typename Key,
typename Mapped,
typename Eq_Fn = std::equal_to<Key>,
typename Update_Policy = lu_move_to_front_policy<>,
typename Allocator = std::allocator<char> >
class list_update;
</programlisting>
<para>The parameters have the following meaning:</para>
<orderedlist>
<listitem>
<para>
<classname>Key</classname> is the key type.
</para>
</listitem>
<listitem>
<para>
<classname>Mapped</classname> is the mapped-policy.
</para>
</listitem>
<listitem>
<para>
<classname>Eq_Fn</classname> is a key equivalence functor.
</para>
</listitem>
<listitem>
<para>
<classname>Update_Policy</classname> is a policy updating positions in
the list based on access patterns. It is described in the
following subsection.
</para>
</listitem>
<listitem>
<para>
<classname>Allocator</classname> is an allocator type.
</para>
</listitem>
</orderedlist>
<para>A list-based associative container is a container that
stores elements in a linked-list. It does not order the elements
by any particular order related to the keys. List-based
containers are primarily useful for creating "multimaps". In fact,
list-based containers are designed in this library expressly for
this purpose.</para>
<para>List-based containers might also be useful for some rare
cases, where a key is encapsulated to the extent that only
key-equivalence can be tested. Hash-based containers need to know
how to transform a key into a size type, and tree-based containers
need to know if some key is larger than another. List-based
associative containers, conversely, only need to know if two keys
are equivalent.</para>
<para>Since a list-based associative container does not order
elements by keys, is it possible to order the list in some
useful manner? Remarkably, many on-line competitive
algorithms exist for reordering lists to reflect access
prediction. (See <xref linkend="biblio.motwani95random"/> and <xref linkend="biblio.andrew04mtf"/>).
</para>
</section>
<section xml:id="container.list.details">
<info><title>Details</title></info>
<section xml:id="container.list.details.ds">
<info><title>Underlying Data Structure</title></info>
<para>The graphic below shows a
simple list of integer keys. If we search for the integer 6, we
are paying an overhead: the link with key 6 is only the fifth
link; if it were the first link, it could be accessed
faster.</para>
<figure>
<title>A simple list</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_simple_list.png"/>
</imageobject>
<textobject>
<phrase>A simple list</phrase>
</textobject>
</mediaobject>
</figure>
<para>List-update algorithms reorder lists as elements are
accessed. They try to determine, by the access history, which
keys to move to the front of the list. Some of these algorithms
require adding some metadata alongside each entry.</para>
<para>For example, in the graphic below label A shows the counter
algorithm. Each node contains both a key and a count metadata
(shown in bold). When an element is accessed (e.g. 6) its count is
incremented, as shown in label B. If the count reaches some
predetermined value, say 10, as shown in label C, the count is set
to 0 and the node is moved to the front of the list, as in label
D.
</para>
<figure>
<title>The counter algorithm</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_list_update.png"/>
</imageobject>
<textobject>
<phrase>The counter algorithm</phrase>
</textobject>
</mediaobject>
</figure>
</section>
<section xml:id="container.list.details.policies">
<info><title>Policies</title></info>
<para>This library allows instantiating lists with policies
implementing any algorithm moving nodes to the front of the
list (policies implementing algorithms interchanging nodes are
unsupported).</para>
<para>Associative containers based on lists are parametrized by a
<classname>Update_Policy</classname> parameter. This parameter defines the
type of metadata each node contains, how to create the
metadata, and how to decide, using this metadata, whether to
move a node to the front of the list. A list-based associative
container object derives (publicly) from its update policy.
</para>
<para>An instantiation of <classname>Update_Policy</classname> must define
internally <classname>update_metadata</classname> as the type of metadata it
requires. Internally, each node of the list contains, besides
the usual key and data, an instance of <classname>typename
Update_Policy::update_metadata</classname>.</para>
<para>An instantiation of <classname>Update_Policy</classname> must define
internally two operators:</para>
<programlisting>
update_metadata
operator()();
bool
operator()(update_metadata &);
</programlisting>
<para>The first is called by the container object, when creating a
new node, to create the node's metadata. The second is called
by the container object when a node is accessed (i.e.,
when a find operation's key is equivalent to the key of the
node), to determine whether to move the node to the front of
the list.
</para>
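<para>As an illustration only, the following sketch shows a
user-defined policy conforming to the interface above; it moves an
accessed node to the front of the list on every third access. The
policy name is hypothetical and is not part of the library; only the
<classname>update_metadata</classname> typedef and the two operators
follow the listing above.</para>
<programlisting>
#include <cstddef>

// A minimal sketch of a user-defined update policy (hypothetical name).
struct every_third_access_policy
{
  // The metadata stored alongside each node: a small access counter.
  typedef std::size_t update_metadata;

  // Called when a node is created: start its counter at zero.
  update_metadata
  operator()()
  { return 0; }

  // Called when a node is accessed: return true to request that the
  // node be moved to the front of the list.
  bool
  operator()(update_metadata &r_metadata)
  {
    if (++r_metadata < 3)
      return false;
    r_metadata = 0;
    return true;
  }
};
</programlisting>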
<para>The library contains two predefined implementations of
list-update policies. The first
is <classname>lu_counter_policy</classname>, which implements the
counter algorithm described above. The second is
<classname>lu_move_to_front_policy</classname>,
which unconditionally moves an accessed element to the front of
the list. The latter type is very useful in this library,
since there is no need to associate metadata with each element.
(See <xref linkend="biblio.andrew04mtf"/>.)
</para>
</section>
<section xml:id="container.list.details.mapped">
<info><title>Use in Multimaps</title></info>
<para>In this library, there are no equivalents for the standard's
multimaps and multisets; instead one uses an associative
container mapping primary keys to secondary keys.</para>
<para>List-based containers are especially useful as associative
containers for secondary keys. In fact, they are implemented
here expressly for this purpose.</para>
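<para>As an illustration only, the following sketch assembles such a
"multimap": a hash-based container maps each primary key to a
list-based set of its secondary keys. It assumes the header
<filename>ext/pb_ds/assoc_container.hpp</filename> and the
<classname>null_type</classname> convention of recent versions of the
library (older versions name it
<classname>null_mapped_type</classname>); the typedef names are
hypothetical.</para>
<programlisting>
#include <ext/pb_ds/assoc_container.hpp>

using namespace __gnu_pbds;

// A list-based set holding the secondary keys of one primary key.
typedef list_update<int, null_type> secondary_set_t;

// A hash-based "multimap": each primary key maps to its secondary set.
typedef cc_hash_table<int, secondary_set_t> multimap_t;

int main()
{
  multimap_t m;

  // Two secondary keys for primary key 1, one for primary key 2.
  m[1].insert(10);
  m[1].insert(20);
  m[2].insert(30);
}
</programlisting>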
<para>To begin with, these containers have very little per-entry
memory overhead, since they can be implemented as
singly-linked lists. (Arrays have even lower per-entry memory
overhead, but they are less flexible in moving entries around,
and have weaker invalidation guarantees.)</para>
<para>More importantly, though, list-based containers have very
little per-container memory overhead. The memory overhead of an
empty list-based container is practically that of a single pointer.
This is important when they are used as secondary
associative containers in situations where the average ratio of
secondary keys to primary keys is low (or even 1).</para>
<para>In order to reduce the per-container memory overhead as much
as possible, list-based containers are implemented to resemble
singly-linked lists as closely as possible. This has two
implications:</para>
<orderedlist>
<listitem>
<para>
List-based containers do not store internally the number
of values that they hold. This means that their <function>size</function>
method has linear complexity (just like <classname>std::list</classname>).
Note that finding the number of equivalent-key values in a
standard multimap also has linear complexity (because it must be
done via <function>std::distance</function> applied to the range
returned by the multimap's <function>equal_range</function> method,
as shown in the sketch after this list), but usually with
higher constants.
</para>
</listitem>
<listitem>
<para>
Most associative-container objects each hold a policy
object (e.g., a hash-based container object holds a
hash functor). List-based containers, conversely, only have
class-wide policy objects.
</para>
</listitem>
</orderedlist>
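<para>The following standard-library sketch illustrates the counting
idiom mentioned in the first item above: counting the values matching a
key in a <classname>std::multimap</classname> walks the matching range
linearly.</para>
<programlisting>
#include <cassert>
#include <iterator>
#include <map>
#include <utility>

int main()
{
  typedef std::multimap<int, int> mmap_t;

  mmap_t m;
  m.insert(std::make_pair(1, 10));
  m.insert(std::make_pair(1, 20));
  m.insert(std::make_pair(2, 30));

  // equal_range + distance visits each matching entry in turn.
  std::pair<mmap_t::iterator, mmap_t::iterator> r = m.equal_range(1);
  assert(std::distance(r.first, r.second) == 2);
}
</programlisting>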
</section>
</section> <!-- details -->
</section> <!-- list -->
<!-- priority_queue -->
<section xml:id="pbds.design.container.priority_queue">
<info><title>Priority Queue</title></info>
<section xml:id="container.priority_queue.interface">
<info><title>Interface</title></info>
<para>The priority queue container has the following
declaration:
</para>
<programlisting>
template<typename Value_Type,
         typename Cmp_Fn = std::less<Value_Type>,
         typename Tag = pairing_heap_tag,
         typename Allocator = std::allocator<char> >
class priority_queue;
</programlisting>
<para>The parameters have the following meaning:</para>
<orderedlist>
<listitem><para><classname>Value_Type</classname> is the value type.</para></listitem>
<listitem><para><classname>Cmp_Fn</classname> is a value comparison functor</para></listitem>
<listitem><para><classname>Tag</classname> specifies which underlying data structure
to use.</para></listitem>
<listitem><para><classname>Allocator</classname> is an allocator
type.</para></listitem>
</orderedlist>
<para>The <classname>Tag</classname> parameter specifies which underlying
data structure to use. Instantiating it by <classname>pairing_heap_tag</classname>,
<classname>binary_heap_tag</classname>,
<classname>binomial_heap_tag</classname>,
<classname>rc_binomial_heap_tag</classname>,
or <classname>thin_heap_tag</classname>
specifies, respectively,
an underlying pairing heap (<xref linkend="biblio.fredman86pairing"/>),
a binary heap (<xref linkend="biblio.clrs2001"/>),
a binomial heap (<xref linkend="biblio.clrs2001"/>),
a binomial heap with a redundant binary counter (<xref linkend="biblio.maverik_lowerbounds"/>),
or a thin heap (<xref linkend="biblio.kt99fat_heaps"/>).
</para>
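<para>For illustration, the following sketch instantiates two such
containers, selecting the underlying structure through the
<classname>Tag</classname> parameter. It assumes the header
<filename>ext/pb_ds/priority_queue.hpp</filename>; the typedef names
are hypothetical.</para>
<programlisting>
#include <functional>
#include <string>
#include <ext/pb_ds/priority_queue.hpp>

// A max-heap of ints explicitly backed by a binary heap.
typedef __gnu_pbds::priority_queue<int, std::less<int>,
                                   __gnu_pbds::binary_heap_tag> int_pq_t;

// A max-heap of strings using the default pairing heap.
typedef __gnu_pbds::priority_queue<std::string> string_pq_t;
</programlisting>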
<para>
As mentioned in the tutorial,
<classname>__gnu_pbds::priority_queue</classname> shares most of the
same interface with <classname>std::priority_queue</classname>.
E.g. if <varname>q</varname> is a priority queue of type
<classname>Q</classname>, then <function>q.top()</function> will
return the "largest" value in the container (according to
<classname>typename
Q::cmp_fn</classname>). <classname>__gnu_pbds::priority_queue</classname>
has a larger (and very slightly different) interface than
<classname>std::priority_queue</classname>, however, since typically
<function>push</function> and <function>pop</function> are deemed
insufficient for manipulating priority queues.</para>
<para>Different settings require different priority-queue
implementations, which are described later; the traits
discussion below covers ways to distinguish between the
capabilities of the different implementations.</para>
</section>
<section xml:id="container.priority_queue.details">
<info><title>Details</title></info>
<section xml:id="container.priority_queue.details.iterators">
<info><title>Iterators</title></info>
<para>There are many different underlying-data structures for
implementing priority queues. Unfortunately, most such
structures are oriented towards making <function>push</function> and
<function>top</function> efficient, and consequently don't allow efficient
access of other elements: for instance, they cannot support an efficient
<function>find</function> method. In the use case where it
is important to both access and "do something with" an
arbitrary value, one would be out of luck. For example, many graph algorithms require
modifying a value (typically increasing it in the sense of the
priority queue's comparison functor).</para>
<para>In order to access and manipulate an arbitrary value in a
priority queue, one needs to reference the internals of the
priority queue from some form of an associative container -
this is unavoidable. Of course, in order to maintain the
encapsulation of the priority queue, this needs to be done in a
way that minimizes exposure to implementation internals.</para>
<para>In this library the priority queue's <function>push</function>
method returns an iterator, which, if valid, can be used for subsequent <function>modify</function> and
<function>erase</function> operations. This both preserves the priority
queue's encapsulation and allows accessing arbitrary values (since the
iterators returned by the <function>push</function> operation can be
stored in some form of associative container).</para>
<para>Priority queues' iterators present a problem regarding their
invalidation guarantees. One assumes that calling
<function>operator++</function> on an iterator will associate it
with the "next" value. Priority-queues are
self-organizing: each operation changes what the "next" value
means. Consequently, it does not make sense that <function>push</function>
will return an iterator that can be incremented - this can have
no possible use. Also, as in the case of hash-based containers,
it is awkward to define if a subsequent <function>push</function> operation
invalidates a prior returned iterator: it invalidates it in the
sense that its "next" value is not related to what it
previously considered to be its "next" value. However, it might not
invalidate it, in the sense that it can be
de-referenced and used for <function>modify</function> and <function>erase</function>
operations.</para>
<para>Similarly to the case of the other unordered associative
containers, this library makes a distinction between
point-type and range-type iterators. A priority queue's <classname>iterator</classname> can always be
converted to a <classname>point_iterator</classname>, and a
<classname>const_iterator</classname> can always be converted to a
<classname>point_const_iterator</classname>.</para>
<para>The following snippet demonstrates manipulating an arbitrary
value:</para>
<programlisting>
// A priority queue of integers.
priority_queue<int > p;
// Insert some values into the priority queue.
priority_queue<int >::point_iterator it = p.push(0);
p.push(1);
p.push(2);
// Now modify a value.
p.modify(it, 3);
assert(p.top() == 3);
</programlisting>
<para>It should be noted that an alternative design could embed an
associative container in a priority queue. Could, but most
probably should not. To begin with, one
could always encapsulate a priority queue and an associative
container mapping values to priority queue iterators with no
performance loss. One cannot, however, "un-encapsulate" a priority
queue embedding an associative container, which might lead to
performance loss. Assume that one needs to associate each value
with some data unrelated to priority queues. Then using
this library's design, one could use an
associative container mapping each value to a pair consisting of
this data and a priority queue's iterator. Using the embedding
method, one would need two associative containers. Similar
problems might arise in cases where a value can reside
simultaneously in many priority queues.</para>
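<para>For illustration, the following sketch cross-references a
priority queue with a separate associative container, mapping task
identifiers to the iterators returned by <function>push</function>.
The names (and the use of <classname>std::map</classname>) are
illustrative only.</para>
<programlisting>
#include <functional>
#include <map>
#include <ext/pb_ds/priority_queue.hpp>

using namespace __gnu_pbds;

typedef priority_queue<int, std::less<int>, pairing_heap_tag> pq_t;

int main()
{
  pq_t pq;

  // Map each task id to the point-type iterator returned by push.
  std::map<int, pq_t::point_iterator> id_to_it;

  id_to_it[7] = pq.push(10);
  id_to_it[8] = pq.push(20);

  // Later, raise the priority of task 7 through its stored iterator.
  pq.modify(id_to_it[7], 30);
}
</programlisting>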
</section>
<section xml:id="container.priority_queue.details.d">
<info><title>Underlying Data Structure</title></info>
<para>There are three main implementations of priority queues: the
first employs a binary heap, typically one which uses a
sequence; the second uses a tree (or forest of trees), which is
typically less structured than an associative container's tree;
the third simply uses an associative container. These are
shown in the graphic below, in labels A1 and A2, label B, and label C.</para>
<figure>
<title>Underlying Priority-Queue Data-Structures.</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_priority_queue_different_underlying_dss.png"/>
</imageobject>
<textobject>
<phrase>Underlying Priority-Queue Data-Structures.</phrase>
</textobject>
</mediaobject>
</figure>
<para>Roughly speaking, any value that is both pushed and popped
from a priority queue must incur a logarithmic expense (in the
amortized sense). Any priority queue implementation that avoided
this would violate known bounds on comparison-based
sorting (see <xref linkend="biblio.clrs2001"/> and <xref linkend="biblio.brodal96priority"/>).
</para>
<para>Most implementations do
not differ in the asymptotic amortized complexity of
<function>push</function> and <function>pop</function> operations, but they differ in
the constants involved, in the complexity of other operations
(e.g., <function>modify</function>), and in the worst-case
complexity of single operations. In general, the more
"structured" an implementation (i.e., the more internal
invariants it possesses) - the higher its amortized complexity
of <function>push</function> and <function>pop</function> operations.</para>
<para>This library implements different algorithms using a
single class: <classname>priority_queue</classname>.
Instantiating the <classname>Tag</classname> template parameter, "selects"
the implementation:</para>
<orderedlist>
<listitem><para>
Instantiating <classname>Tag = binary_heap_tag</classname> creates
a binary heap of the form represented in the graphic with labels A1 or A2. The former is internally
selected by <classname>priority_queue</classname>
if <classname>Value_Type</classname> is instantiated by a primitive type
(e.g., an <type>int</type>); the latter is
internally selected for all other types (e.g.,
<classname>std::string</classname>). This implementation is relatively
unstructured, and so has good <function>push</function> and <function>pop</function>
performance; it is the "best-in-kind" for primitive
types, e.g., <type>int</type>s. Conversely, it has
high worst-case performance, and can support only linear-time
<function>modify</function> and <function>erase</function> operations.</para></listitem>
<listitem><para>Instantiating <classname>Tag =
pairing_heap_tag</classname> creates a pairing heap of the form
represented by label B in the graphic above. This
implementation too is relatively unstructured, and so has good
<function>push</function> and <function>pop</function>
performance; it is the "best-in-kind" for non-primitive types,
e.g., <classname>std::string</classname>s. It also has very good
worst-case <function>push</function> and
<function>join</function> performance (O(1)), but has high
worst-case <function>pop</function>
complexity.</para></listitem>
<listitem><para>Instantiating <classname>Tag =
binomial_heap_tag</classname> creates a binomial heap of the
form represented by label B in the graphic above. This
implementation is more structured than a pairing heap, and so
has worse <function>push</function> and <function>pop</function>
performance. Conversely, it has sub-linear worst-case bounds for,
e.g., <function>pop</function>, and so it might be preferred in
cases where responsiveness is important.</para></listitem>
<listitem><para>Instantiating <classname>Tag =
rc_binomial_heap_tag</classname> creates a binomial heap of the
form represented by label B above, accompanied by a redundant
counter which governs the trees. This implementation is
therefore more structured than a binomial heap, and so has worse
<function>push</function> and <function>pop</function>
performance. Conversely, it guarantees O(1)
<function>push</function> complexity, and so it might be
preferred in cases where the responsiveness of a binomial heap
is insufficient.</para></listitem>
<listitem><para>Instantiating <classname>Tag =
thin_heap_tag</classname> creates a thin heap of the form
represented by label B in the graphic above. This
implementation too is more structured than a pairing heap, and
so has worse <function>push</function> and
<function>pop</function> performance. Conversely, it has better
worst-case complexity than, and identical amortized complexity
to, a Fibonacci heap, and so might be more appropriate for some
graph algorithms.</para></listitem>
</orderedlist>
<para>Of course, one can use any order-preserving associative
container as a priority queue, as shown by label C in the graphic above, possibly by creating an adapter class
over the associative container (much as
<classname>std::priority_queue</classname> can adapt <classname>std::vector</classname>).
This has the advantage that no cross-referencing is necessary
at all; the priority queue itself is an associative container.
However, most associative containers are too structured to compete with
priority queues in terms of <function>push</function> and <function>pop</function>
performance.</para>
</section>
<section xml:id="container.priority_queue.details.traits">
<info><title>Traits</title></info>
<para>It would be nice if all priority queues could
share exactly the same behavior regardless of implementation. Sadly, this
is not possible. One example is join operations: joining
two binary heaps might throw an exception (though it will not corrupt
either of the heaps on which it operates), while joining two pairing
heaps is exception free.</para>
<para>Tags and traits are very useful for manipulating generic
types. <classname>__gnu_pbds::priority_queue</classname>
publicly defines <classname>container_category</classname> as one of the tags. Given any
container <classname>Cntnr</classname>, the tag of the underlying
data structure can be found via <classname>typename
Cntnr::container_category</classname>; this is one of the possible tags shown in the graphic below.
</para>
<figure>
<title>Priority-Queue Data-Structure Tags.</title>
<mediaobject>
<imageobject>
<imagedata align="center" format="PNG" scale="100"
fileref="../images/pbds_priority_queue_tag_hierarchy.png"/>
</imageobject>
<textobject>
<phrase>Priority-Queue Data-Structure Tags.</phrase>
</textobject>
</mediaobject>
</figure>
<para>Additionally, a traits mechanism can be used to query a
container type for its attributes. Given any container
<classname>Cntnr</classname>, then <programlisting>__gnu_pbds::container_traits<Cntnr></programlisting>
is a traits class identifying the properties of the
container.</para>
<para>To find if a container might throw if two of its objects are
joined, one can use
<programlisting>
container_traits<Cntnr>::split_join_can_throw
</programlisting>
</para>
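<para>For example, the following sketch queries this trait for two
implementations and prints the result. It assumes the header
<filename>ext/pb_ds/priority_queue.hpp</filename>; the typedef names
are hypothetical.</para>
<programlisting>
#include <iostream>
#include <functional>
#include <ext/pb_ds/priority_queue.hpp>

using namespace __gnu_pbds;

typedef priority_queue<int, std::less<int>, binary_heap_tag> binary_pq_t;
typedef priority_queue<int, std::less<int>, pairing_heap_tag> pairing_pq_t;

int main()
{
  // Query the traits mechanism for the join-exception guarantee.
  std::cout << "binary heap join can throw:  "
            << container_traits<binary_pq_t>::split_join_can_throw << '\n';
  std::cout << "pairing heap join can throw: "
            << container_traits<pairing_pq_t>::split_join_can_throw << '\n';
}
</programlisting>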
<para>
Different priority-queue implementations have different invalidation guarantees. This is
especially important, since there is no way to access an arbitrary
value of a priority queue except through iterators. Similarly to
associative containers, one can use
<programlisting>
container_traits<Cntnr>::invalidation_guarantee
</programlisting>
to get the invalidation guarantee type of a priority queue.</para>
<para>It is easy to understand from the graphic above what <classname>container_traits<Cntnr>::invalidation_guarantee</classname>
will be for different implementations. All implementations of
type represented by label B have <classname>point_invalidation_guarantee</classname>:
the container can freely internally reorganize the nodes -
range-type iterators are invalidated, but point-type iterators
are always valid. Implementations of type represented by labels A1 and A2 have <classname>basic_invalidation_guarantee</classname>:
the container can freely internally reallocate the array - both
point-type and range-type iterators might be invalidated.</para>
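<para>As a compile-time illustration of the above (a sketch only,
assuming C++11's <function>static_assert</function> and
<classname>std::is_same</classname>), the guarantees described in this
section can be checked directly through the traits:</para>
<programlisting>
#include <type_traits>
#include <functional>
#include <ext/pb_ds/priority_queue.hpp>

using namespace __gnu_pbds;

typedef priority_queue<int, std::less<int>, pairing_heap_tag> pairing_pq_t;
typedef priority_queue<int, std::less<int>, binary_heap_tag> binary_pq_t;

// Pairing heaps (label B) keep point-type iterators valid.
static_assert(std::is_same<container_traits<pairing_pq_t>::invalidation_guarantee,
                           point_invalidation_guarantee>::value,
              "pairing heap: point-type iterators stay valid");

// Binary heaps (labels A1 and A2) may invalidate all iterators.
static_assert(std::is_same<container_traits<binary_pq_t>::invalidation_guarantee,
                           basic_invalidation_guarantee>::value,
              "binary heap: only the basic guarantee");

int main() { }
</programlisting>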
<para>
This has major implications, and constitutes a good reason to avoid
using binary heaps. A binary heap can perform <function>modify</function>
or <function>erase</function> efficiently given a valid point-type
iterator. However, in order to supply it with a valid point-type
iterator, one needs to iterate (linearly) over all
values, then supply the relevant iterator (recall that a
range-type iterator can always be converted to a point-type
iterator). This means that if the number of <function>modify</function> or
<function>erase</function> operations is non-negligible (say
super-logarithmic in the total sequence of operations) - binary
heaps will perform badly.
</para>
</section>
</section> <!-- details -->
</section> <!-- priority_queue -->
</section> <!-- container -->
</section> <!-- design -->
<!-- S04: Test -->
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml"
href="test_policy_data_structures.xml">
</xi:include>
<!-- S05: Reference/Acknowledgments -->
<section xml:id="pbds.ack">
<info><title>Acknowledgments</title></info>
<?dbhtml filename="policy_data_structures_ack.html"?>
<para>
Written by Ami Tavory and Vladimir Dreizin (IBM Haifa Research
Laboratories), and Benjamin Kosnik (Red Hat).
</para>
<para>
This library was partially written at IBM's Haifa Research Labs.
It is based heavily on policy-based design and uses many useful
techniques from Modern C++ Design: Generic Programming and Design
Patterns Applied by Andrei Alexandrescu.
</para>
<para>
Two ideas are borrowed from the SGI-STL implementation:
</para>
<orderedlist>
<listitem>
<para>
The prime-based resize policies use a list of primes taken from
the SGI-STL implementation.
</para>
</listitem>
<listitem>
<para>
The red-black trees contain both a root node and a header node
(containing metadata), connected in a way that forward and
reverse iteration can be performed efficiently.
</para>
</listitem>
</orderedlist>
<para>
Some test utilities borrow ideas from
<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.boost.org/doc/libs/release/libs/timer/index.html">boost::timer</link>.
</para>
<para>
We would like to thank Scott Meyers for useful comments (without
attributing to him any flaws in the design or implementation of the
library).
</para>
<para>We would like to thank Matt Austern for the suggestion to
include tries.</para>
</section>
<!-- S06: Biblio -->
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml"
href="policy_data_structures_biblio.xml">
</xi:include>
</chapter>