13105 13106 13107 13108 13109 13110 13111 13112 13113 13114 13115 13116 13117 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 13132 13133 13134 13135 13136 13137 13138 13139 13140 13141 13142 13143 13144 13145 13146 13147 13148 13149 13150 13151 13152 13153 13154 13155 13156 13157 13158 13159 13160 13161 13162 13163 13164 13165 13166 13167 13168 13169 13170 13171 13172 13173 13174 13175 13176 13177 13178 13179 13180 13181 13182 13183 13184 13185 13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 13197 13198 13199 13200 13201 13202 13203 13204 13205 13206 13207 13208 13209 13210 13211 13212 13213 13214 13215 13216 13217 13218 13219 13220 13221 13222 13223 13224 13225 13226 13227 13228 13229 13230 13231 13232 13233 13234 13235 13236 13237 13238 13239 13240 13241 13242 13243 13244 13245 13246 13247 13248 13249 13250 13251 13252 13253 13254 13255 13256 13257 13258 13259 13260 13261 13262 13263 13264 13265 13266 13267 13268 13269 13270 13271 13272 13273 13274 13275 13276 13277 13278 13279 13280 13281 13282 13283 13284 13285 13286 13287 13288 13289 13290 13291 13292 13293 13294 13295 13296 13297 13298 13299 13300 13301 13302 13303 13304 13305 13306 13307 13308 13309 13310 13311 13312 13313 13314 13315 13316 13317 13318 13319 13320 13321 13322 13323 13324 13325 13326 13327 13328 13329 13330 13331 13332 13333 13334 13335 13336 13337 13338 13339 13340 13341 13342 13343 13344 13345 13346 13347 13348 13349 13350 13351 13352 13353 13354 13355 13356 13357 13358 13359 13360 13361 13362 13363 13364 13365 13366 13367 13368 13369 13370 13371 13372 13373 13374 13375 13376 13377 13378 13379 13380 13381 13382 13383 13384 13385 13386 13387 13388 13389 13390 13391 13392 13393 13394 13395 13396 13397 13398 13399 13400 13401 13402 13403 13404 13405 13406 13407 13408 13409 13410 13411 13412 13413 13414 13415 13416 13417 13418 13419 13420 13421 13422 13423 13424 13425 13426 13427 13428 13429 13430 13431 13432 13433 13434 13435 13436 13437 13438 13439 13440 13441 13442 13443 13444 13445 13446 13447 13448 13449 13450 13451 13452 13453 13454 13455 13456 13457 13458 13459 13460 13461 13462 13463 13464 13465 13466 13467 13468 13469 13470 13471 13472 13473 13474 13475 13476 13477 13478 13479 13480 13481 13482 13483 13484 13485 13486 13487 13488 13489 13490 13491 13492 13493 13494 13495 13496 13497 13498 13499 13500 13501 13502 13503 13504 13505 13506 13507 13508 13509 13510 13511 13512 13513 13514 13515 13516 13517 13518 13519 13520 13521 13522 13523 13524 13525 13526 13527 13528 13529 13530 13531 13532 13533 13534 13535 13536 13537 13538 13539 13540 13541 13542 13543 13544 13545 13546 13547 13548 13549 13550 13551 13552 13553 13554 13555 13556 13557 13558 13559 13560 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 13581 13582 13583 13584 13585 13586 13587 13588 13589 13590 13591 13592 13593 13594 13595 13596 13597 13598 13599 13600 13601 13602 13603 13604 13605 13606 13607 13608 13609 13610 13611 13612 13613 13614 13615 13616 13617 13618 13619 13620 13621 13622 13623 13624 13625 13626 13627 13628 13629 13630 13631 13632 13633 13634 13635 13636 13637 13638 13639 13640 13641 13642 13643 13644 13645 13646 13647 13648 13649 13650 13651 13652 13653 13654 13655 13656 13657 13658 13659 13660 13661 13662 13663 13664 13665 13666 13667 13668 13669 13670 13671 13672 13673 13674 13675 13676 13677 13678 13679 13680 13681 13682 13683 13684 13685 13686 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 
13697 13698 13699 13700 13701 13702 13703 13704 13705 13706 13707 13708 13709 13710 13711 13712 13713 13714 13715 13716 13717 13718 13719 13720 13721 13722 13723 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 13740 13741 13742 13743 13744 13745 13746 13747 13748 13749 13750 13751 13752 13753 13754 13755 13756 13757 13758 13759 13760 13761 13762 13763 13764 13765 13766 13767 13768 13769 13770 13771 13772 13773 13774 13775 13776 13777 13778 13779 13780 13781 13782 13783 13784 13785 13786 13787 13788 13789 13790 13791 13792 13793 13794 13795 13796 13797 13798 13799 13800 13801 13802 13803 13804 13805 13806 13807 13808 13809 13810 13811 13812 13813 13814 13815 13816 13817 13818 13819 13820 13821 13822 13823 13824 13825 13826 13827 13828 13829 13830 13831 13832 13833 13834 13835 13836 13837 13838 13839 13840 13841 13842 13843 13844 13845 13846 13847 13848 13849 13850 13851 13852 13853 13854 13855 13856 13857 13858 13859 13860 13861 13862 13863 13864 13865 13866 13867 13868 13869 13870 13871 13872 13873 13874 13875 13876 13877 13878 13879 13880 13881 13882 13883 13884 13885 13886 13887 13888 13889 13890 13891 13892 13893 13894 13895 13896 13897 13898 13899 13900 13901 13902 13903 13904 13905 13906 13907 13908 13909 13910 13911 13912 13913 13914 13915 13916 13917 13918 13919 13920 13921 13922 13923 13924 13925 13926 13927 13928 13929 13930 13931 13932 13933 13934 13935 13936 13937 13938 13939 13940 13941 13942 13943 13944 13945 13946 13947 13948 13949 13950 13951 13952 13953 13954 13955 13956 13957 13958 13959 13960 13961 13962 13963 13964 13965 13966 13967 13968 13969 13970 13971 13972 13973 13974 13975 13976 13977 13978 13979 13980 13981 13982 13983 13984 13985 13986 13987 13988 13989 13990 13991 13992 13993 13994 13995 13996 13997 13998 13999 14000 14001 14002 14003 14004 14005 14006 14007 14008 14009 14010 14011 14012 14013 14014 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 14025 14026 14027 14028 14029 14030 14031 14032 14033 14034 14035 14036 14037 14038 14039 14040 14041 14042 14043 14044 14045 14046 14047 14048 14049 14050 14051 14052 14053 14054 14055 14056 14057 14058 14059 14060 14061 14062 14063 14064 14065 14066 14067 14068 14069 14070 14071 14072 14073 14074 14075 14076 14077 14078 14079 14080 14081 14082 14083 14084 14085 14086 14087 14088 14089 14090 14091 14092 14093 14094 14095 14096 14097 14098 14099 14100 14101 14102 14103 14104 14105 14106 14107 14108 14109 14110 14111 14112 14113 14114 14115 14116 14117 14118 14119 14120 14121 14122 14123 14124 14125 14126 14127 14128 14129 14130 14131 14132 14133 14134 14135 14136 14137 14138 14139 14140 14141 14142 14143 14144 14145 14146 14147 14148 14149 14150 14151 14152 14153 14154 14155 14156 14157 14158 14159 14160 14161 14162 14163 14164 14165 14166 14167 14168 14169 14170 14171 14172 14173 14174 14175 14176 14177 14178 14179 14180 14181 14182 14183 14184 14185 14186 14187 14188 14189 14190 14191 14192 14193 14194 14195 14196 14197 14198 14199 14200 14201 14202 14203 14204 14205 14206 14207 14208 14209 14210 14211 14212 14213 14214 14215 14216 14217 14218 14219 14220 14221 14222 14223 14224 14225 14226 14227 14228 14229 14230 14231 14232 14233 14234 14235 14236 14237 14238 14239 14240 14241 14242 14243 14244 14245 14246 14247 14248 14249 14250 14251 14252 14253 14254 14255 14256 14257 14258 14259 14260 14261 14262 14263 14264 14265 14266 14267 14268 14269 14270 14271 14272 14273 14274 14275 14276 14277 14278 14279 14280 14281 14282 14283 14284 14285 14286 14287 14288 
14289 14290 14291 14292 14293 14294 14295 14296 14297 14298 14299 14300 14301 14302 14303 14304 14305 14306 14307 14308 14309 14310 14311 14312 14313 14314 14315 14316 14317 14318 14319 14320 14321 14322 14323 14324 14325 14326 14327 14328 14329 14330 14331 14332 14333 14334 14335 14336 14337 14338 14339 14340 14341 14342 14343 14344 14345 14346 14347 14348 14349 14350 14351 14352 14353 14354 14355 14356 14357 14358 14359 14360 14361 14362 14363 14364 14365 14366 14367 14368 14369 14370 14371 14372 14373 14374 14375 14376 14377 14378 14379 14380 14381 14382 14383 14384 14385 14386 14387 14388 14389 14390 14391 14392 14393 14394 14395 14396 14397 14398 14399 14400 14401 14402 14403 14404 14405 14406 14407 14408 14409 14410 14411 14412 14413 14414 14415 14416 14417 14418 14419 14420 14421 14422 14423 14424 14425 14426 14427 14428 14429 14430 14431 14432 14433 14434 14435 14436 14437 14438 14439 14440 14441 14442 14443 14444 14445 14446 14447 14448 14449 14450 14451 14452 14453 14454 14455 14456 14457 14458 14459 14460 14461 14462 14463 14464 14465 14466 14467 14468 14469 14470 14471 14472 14473 14474 14475 14476 14477 14478 14479 14480 14481 14482 14483 14484 14485 14486 14487 14488 14489 14490 14491 14492 14493 14494 14495 14496 14497 14498 14499 14500 14501 14502 14503 14504 14505 14506 14507 14508 14509 14510 14511 14512 14513 14514 14515 14516 14517 14518 14519 14520 14521 14522 14523 14524 14525 14526 14527 14528 14529 14530 14531 14532 14533 14534 14535 14536 14537 14538 14539 14540 14541 14542 14543 14544 14545 14546 14547 14548 14549 14550 14551 14552 14553 14554 14555 14556 14557 14558 14559 14560 14561 14562 14563 14564 14565 14566 14567 14568 14569 14570 14571 14572 14573 14574 14575 14576 14577 14578 14579 14580 14581 14582 14583 14584 14585 14586 14587 14588 14589 14590 14591 14592 14593 14594 14595 14596 14597 14598 14599 14600 14601 14602 14603 14604 14605 14606 14607 14608 14609 14610 14611 14612 14613 14614 14615 14616 14617 14618 14619 14620 14621 14622 14623 14624 14625 14626 14627 14628 14629 14630 14631 14632 14633 14634 14635 14636 14637 14638 14639 14640 14641 14642 14643 14644 14645 14646 14647 14648 14649 14650 14651 14652 14653 14654 14655 14656 14657 14658 14659 14660 14661 14662 14663 14664 14665 14666 14667 14668 14669 14670 14671 14672 14673 14674 14675 14676 14677 14678 14679 14680 14681 14682 14683 14684 14685 14686 14687 14688 14689 14690 14691 14692 14693 14694 14695 14696 14697 14698 14699 14700 14701 14702 14703 14704 14705 14706 14707 14708 14709 14710 14711 14712 14713 14714 14715 14716 14717 14718 14719 14720 14721 14722 14723 14724 14725 14726 14727 14728 14729 14730 14731 14732 14733 14734 14735 14736 14737 14738 14739 14740 14741 14742 14743 14744 14745 14746 14747 14748 14749 14750 14751 14752 14753 14754 14755 14756 14757 14758 14759 14760 14761 14762 14763 14764 14765 14766 14767 14768 14769 14770 14771 14772 14773 14774 14775 14776 14777 14778 14779 14780 14781 14782 14783 14784 14785 14786 14787 14788 14789 14790 14791 14792 14793 14794 14795 14796 14797 14798 14799 14800 14801 14802 14803 14804 14805 14806 14807 14808 14809 14810 14811 14812 14813 14814 14815 14816 14817 14818 14819 14820 14821 14822 14823 14824 14825 14826 14827 14828 14829 14830 14831 14832 14833 14834 14835 14836 14837 14838 14839 14840 14841 14842 14843 14844 14845 14846 14847 14848 14849 14850 14851 14852 14853 14854 14855 14856 14857 14858 14859 14860 14861 14862 14863 14864 14865 14866 14867 14868 14869 14870 14871 14872 14873 14874 14875 14876 14877 14878 14879 14880 
14881 14882 14883 14884 14885 14886 14887 14888 14889 14890 14891 14892 14893 14894 14895 14896 14897 14898 14899 14900 14901 14902 14903 14904 14905 14906 14907 14908 14909 14910 14911 14912 14913 14914 14915 14916 14917 14918 14919 14920 14921 14922 14923 14924 14925 14926 14927 14928 14929 14930 14931 14932 14933 14934 14935 14936 14937 14938 14939 14940 14941 14942 14943 14944 14945 14946 14947 14948 14949 14950 14951 14952 14953 14954 14955 14956 14957 14958 14959 14960 14961 14962 14963 14964 14965 14966 14967 14968 14969 14970 14971 14972 14973 14974 14975 14976 14977 14978 14979 14980 14981 14982 14983 14984 14985 14986 14987 14988 14989 14990 14991 14992 14993 14994 14995 14996 14997 14998 14999 15000 15001 15002 15003 15004 15005 15006 15007 15008 15009 15010 15011 15012 15013 15014 15015 15016 15017 15018 15019 15020 15021 15022 15023 15024 15025 15026 15027 15028 15029 15030 15031 15032 15033 15034 15035 15036 15037 15038 15039 15040 15041 15042 15043 15044 15045 15046 15047 15048 15049 15050 15051 15052 15053 15054 15055 15056 15057 15058 15059 15060 15061 15062 15063 15064 15065 15066 15067 15068 15069 15070 15071 15072 15073 15074 15075 15076 15077 15078 15079 15080 15081 15082 15083 15084 15085 15086 15087 15088 15089 15090 15091 15092 15093 15094 15095 15096 15097 15098 15099 15100 15101 15102 15103 15104 15105 15106 15107 15108 15109 15110 15111 15112 15113 15114 15115 15116 15117 15118 15119 15120 15121 15122 15123 15124 15125 15126 15127 15128 15129 15130 15131 15132 15133 15134 15135 15136 15137 15138 15139 15140 15141 15142 15143 15144 15145 15146 15147 15148 15149 15150 15151 15152 15153 15154 15155 15156 15157 15158 15159 15160 15161 15162 15163 15164 15165 15166 15167 15168 15169 15170 15171 15172 15173 15174 15175 15176 15177 15178 15179 15180 15181 15182 15183 15184 15185 15186 15187 15188 15189 15190 15191 15192 15193 15194 15195 15196 15197 15198 15199 15200 15201 15202 15203 15204 15205 15206 15207 15208 15209 15210 15211 15212 15213 15214 15215 15216 15217 15218 15219 15220 15221 15222 15223 15224 15225 15226 15227 15228 15229 15230 15231 15232 15233 15234 15235 15236 15237 15238 15239 15240 15241 15242 15243 15244 15245 15246 15247 15248 15249 15250 15251 15252 15253 15254 15255 15256 15257 15258 15259 15260 15261 15262 15263 15264 15265 15266 15267 15268 15269 15270 15271 15272 15273 15274 15275 15276 15277 15278 15279 15280 15281 15282 15283 15284 15285 15286 15287 15288 15289 15290 15291 15292 15293 15294 15295 15296 15297 15298 15299 15300 15301 15302 15303 15304 15305 15306 15307 15308 15309 15310 15311 15312 15313 15314 15315 15316 15317 15318 15319 15320 15321 15322 15323 15324 15325 15326 15327 15328 15329 15330 15331 15332 15333 15334 15335 15336 15337 15338 15339 15340 15341 15342 15343 15344 15345 15346 15347 15348 15349 15350 15351 15352 15353 15354 15355 15356 15357 15358 15359 15360 15361 15362 15363 15364 15365 15366 15367 15368 15369 15370 15371 15372 15373 15374 15375 15376 15377 15378 15379 15380 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397 15398 15399 15400 15401 15402 15403 15404 15405 15406 15407 15408 15409 15410 15411 15412 15413 15414 15415 15416 15417 15418 15419 15420 15421 15422 15423 15424 15425 15426 15427 15428 15429 15430 15431 15432 15433 15434 15435 15436 15437 15438 15439 15440 15441 15442 15443 15444 15445 15446 15447 15448 15449 15450 15451 15452 15453 15454 15455 15456 15457 15458 15459 15460 15461 15462 15463 15464 15465 15466 15467 15468 15469 15470 15471 15472 
15473 15474 15475 15476 15477 15478 15479 15480 15481 15482 15483 15484 15485 15486 15487 15488 15489 15490 15491 15492 15493 15494 15495 15496 15497 15498 15499 15500 15501 15502 15503 15504 15505 15506 15507 15508 15509 15510 15511 15512 15513 15514 15515 15516 15517 15518 15519 15520 15521 15522 15523 15524 15525 15526 15527 15528 15529 15530 15531 15532 15533 15534 15535 15536 15537 15538 15539 15540 15541 15542 15543 15544 15545 15546 15547 15548 15549 15550 15551 15552 15553 15554 15555 15556 15557 15558 15559 15560 15561 15562 15563 15564 15565 15566 15567 15568 15569 15570 15571 15572 15573 15574 15575 15576 15577 15578 15579 15580 15581 15582 15583 15584 15585 15586 15587 15588 15589 15590 15591 15592 15593 15594 15595 15596 15597 15598 15599 15600 15601 15602 15603 15604 15605 15606 15607 15608 15609 15610 15611 15612 15613 15614 15615 15616 15617 15618 15619 15620 15621 15622 15623 15624 15625 15626 15627 15628 15629 15630 15631 15632 15633 15634 15635 15636 15637 15638 15639 15640 15641 15642 15643 15644 15645 15646 15647 15648 15649 15650 15651 15652 15653 15654 15655 15656 15657 15658 15659 15660 15661 15662 15663 15664 15665 15666 15667 15668 15669 15670 15671 15672 15673 15674 15675 15676 15677 15678 15679 15680 15681 15682 15683 15684 15685 15686 15687 15688 15689 15690 15691 15692 15693 15694 15695 15696 15697 15698 15699 15700 15701 15702 15703 15704 15705 15706 15707 15708 15709 15710 15711 15712 15713 15714 15715 15716 15717 15718 15719 15720 15721 15722 15723 15724 15725 15726 15727 15728 15729 15730 15731 15732 15733 15734 15735 15736 15737 15738 15739 15740 15741 15742 15743 15744 15745 15746 15747 15748 15749 15750 15751 15752 15753 15754 15755 15756 15757 15758 15759 15760 15761 15762 15763 15764 15765 15766 15767 15768 15769 15770 15771 15772 15773 15774 15775 15776 15777 15778 15779 15780 15781 15782 15783 15784 15785 15786 15787 15788 15789 15790 15791 15792 15793 15794 15795 15796 15797 15798 15799 15800 15801 15802 15803 15804 15805 15806 15807 15808 15809 15810 15811 15812 15813 15814 15815 15816 15817 15818 15819 15820 15821 15822 15823 15824 15825 15826 15827 15828 15829 15830 15831 15832 15833 15834 15835 15836 15837 15838 15839 15840 15841 15842 15843 15844 15845 15846 15847 15848 15849 15850 15851 15852 15853 15854 15855 15856 15857 15858 15859 15860 15861 15862 15863 15864 15865 15866 15867 15868 15869 15870 15871 15872 15873 15874 15875 15876 15877 15878 15879 15880 15881 15882 15883 15884 15885 15886 15887 15888 15889 15890 15891 15892 15893 15894 15895 15896 15897 15898 15899 15900 15901 15902 15903 15904 15905 15906 15907 15908 15909 15910 15911 15912 15913 15914 15915 15916 15917 15918 15919 15920 15921 15922 15923 15924 15925 15926 15927 15928 15929 15930 15931 15932 15933 15934 15935 15936 15937 15938 15939 15940 15941 15942 15943 15944 15945 15946 15947 15948 15949 15950 15951 15952 15953 15954 15955 15956 15957 15958 15959 15960 15961 15962 15963 15964 15965 15966 15967 15968 15969 15970 15971 15972 15973 15974 15975 15976 15977 15978 15979 15980 15981 15982 15983 15984 15985 15986 15987 15988 15989 15990 15991 15992 15993 15994 15995 15996 15997 15998 15999 16000 16001 16002 16003 16004 16005 16006 16007 16008 16009 16010 16011 16012 16013 16014 16015 16016 16017 16018 16019 16020 16021 16022 16023 16024 16025 16026 16027 16028 16029 16030 16031 16032 16033 16034 16035 16036 16037 16038 16039 16040 16041 16042 16043 16044 16045 16046 16047 16048 16049 16050 16051 16052 16053 16054 16055 16056 16057 16058 16059 16060 16061 16062 16063 16064 
16065 16066 16067 16068 16069 16070 16071 16072 16073 16074 16075 16076 16077 16078 16079 16080 16081 16082 16083 16084 16085 16086 16087 16088 16089 16090 16091 16092 16093 16094 16095 16096 16097 16098 16099 16100 16101 16102 16103 16104 16105 16106 16107 16108 16109 16110 16111 16112 16113 16114 16115 16116 16117 16118 16119 16120 16121 16122 16123 16124 16125 16126 16127 16128 16129 16130 16131 16132 16133 16134 16135 16136 16137 16138 16139 16140 16141 16142 16143 16144 16145 16146 16147 16148 16149 16150 16151 16152 16153 16154 16155 16156 16157 16158 16159 16160 16161 16162 16163 16164 16165 16166 16167 16168 16169 16170 16171 16172 16173 16174 16175 16176 16177 16178 16179 16180 16181 16182 16183 16184 16185 16186 16187 16188 16189 16190 16191 16192 16193 16194 16195 16196 16197 16198 16199 16200 16201 16202 16203 16204 16205 16206 16207 16208 16209 16210 16211 16212 16213 16214 16215 16216 16217 16218 16219 16220 16221 16222 16223 16224 16225 16226 16227 16228 16229 16230 16231 16232 16233
|
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
"http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
<!ENTITY iuml "ï">
]>
<book lang="en">
<title>Sphinx 2.2.11-release reference manual</title>
<subtitle>Free open-source SQL full-text search engine</subtitle>
<bookinfo>
<copyright>
<year>2001-2016</year>
<holder>Andrew Aksyonoff</holder>
</copyright>
<copyright>
<year>2008-2016</year>
<holder>Sphinx Technologies Inc, <ulink
url="http://sphinxsearch.com">http://sphinxsearch.com</ulink></holder>
</copyright>
</bookinfo>
<chapter id="intro"><title>Introduction</title>
<sect1 id="about"><title>About</title>
<para>
Sphinx is a full-text search engine, publicly distributed under GPL version 2.
Commercial licensing (eg. for embedded use) is available upon request.
</para>
<para>
Technically, Sphinx is a standalone software package that provides
fast and relevant full-text search functionality to client applications.
It was specially designed to integrate well with SQL databases storing
the data, and to be easily accessed by scripting languages. However, Sphinx
neither depends on nor requires any specific database to function.
</para>
<para>
Applications can access the Sphinx search daemon (searchd) using any of
three different access methods: a) via Sphinx's own implementation of the MySQL
network protocol (using a small SQL subset called SphinxQL; this is the recommended
way), b) via the native search API (SphinxAPI), or c) via MySQL server with a
pluggable storage engine (SphinxSE).
</para>
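<para>
As a quick illustration of the SphinxQL method, any stock MySQL client can
connect to searchd and issue queries. The snippet below is a sketch only: it
assumes searchd listens on the conventional SphinxQL port 9306 and that an
index named test1 exists.
</para>
<para><literallayout><userinput>$ mysql -h0 -P9306
mysql> SELECT id, WEIGHT() FROM test1 WHERE MATCH('hello world') LIMIT 10;</userinput></literallayout></para>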
<para>
Official native SphinxAPI implementations for PHP, Perl, Python, Ruby and Java
are included within the distribution package. The API is very lightweight,
so porting it to a new language is known to take a few hours or days.
Third party API ports and plugins exist for Perl, C#, Haskell,
Ruby-on-Rails, and possibly other languages and frameworks.
</para>
<para>
Starting from version 1.10-beta, Sphinx supports two different indexing
backends: "disk" index backend, and "realtime" (RT) index backend.
Disk indexes support online full-text index rebuilds, but online updates
can only be done on non-text (attribute) data. RT indexes additionally
allow for online full-text index updates. Previous versions only
supported disk indexes.
</para>
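<para>
As a minimal sketch (the index name, path, and field/attribute names below are
purely illustrative), an RT index is declared directly in the configuration
file and populated later via SphinxQL INSERT statements:
</para>
<programlisting>
index rt
{
    type         = rt
    path         = /usr/local/sphinx/data/rt
    rt_field     = title
    rt_field     = content
    rt_attr_uint = gid
}
</programlisting>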
<para>
Data can be loaded into disk indexes using a so-called data source.
Built-in sources can fetch data directly from MySQL, PostgreSQL, MSSQL,
ODBC-compliant databases (Oracle, etc), or from a pipe in TSV or a custom XML format.
Adding new data source drivers (eg. to natively support other DBMSes)
is designed to be as easy as possible. RT indexes, as of 1.10-beta,
can only be populated using SphinxQL.
</para>
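<para>
For comparison, a disk index is built from such a data source. The sketch
below is illustrative only (the database credentials, query, and names are
assumptions); it shows a minimal MySQL source feeding one disk index:
</para>
<programlisting>
source src1
{
    type          = mysql
    sql_host      = localhost
    sql_user      = test
    sql_pass      =
    sql_db        = test
    sql_query     = SELECT id, group_id, title, content FROM documents
    sql_attr_uint = group_id
}

index test1
{
    source = src1
    path   = /usr/local/sphinx/data/test1
}
</programlisting>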
<para>
As for the name, Sphinx is an acronym which is officially decoded
as SQL Phrase Index. Yes, I know about CMU's Sphinx project.
</para>
</sect1>
<sect1 id="features"><title>Sphinx features</title>
<para>
Key Sphinx features are:
<itemizedlist>
<listitem><para>high indexing and searching performance;</para></listitem>
<listitem><para>advanced indexing and querying tools (flexible and feature-rich text tokenizer, querying language, several different ranking modes, etc);</para></listitem>
<listitem><para>advanced result set post-processing (SELECT with expressions, WHERE, ORDER BY, GROUP BY, HAVING etc over text search results);</para></listitem>
<listitem><para>proven scalability up to billions of documents, terabytes of data, and thousands of queries per second;</para></listitem>
<listitem><para>easy integration with SQL and XML data sources, and SphinxQL, SphinxAPI, or SphinxSE search interfaces;</para></listitem>
<listitem><para>easy scaling with distributed searches.</para></listitem>
</itemizedlist>
To expand a bit, Sphinx:
<itemizedlist>
<listitem><para>has high indexing speed (up to 10-15 MB/sec per core on an internal benchmark);</para></listitem>
<listitem><para>has high search speed (up to 150-250 queries/sec per core against 1,000,000 documents, 1.2 GB of data on an internal benchmark);</para></listitem>
<listitem><para>has high scalability (biggest known cluster indexes over 3,000,000,000 documents, and busiest one peaks over 50,000,000 queries/day);</para></listitem>
<listitem><para>provides good relevance ranking through combination of phrase proximity ranking and statistical (BM25) ranking;</para></listitem>
<listitem><para>provides distributed searching capabilities;</para></listitem>
<listitem><para>provides document excerpts (snippets) generation;</para></listitem>
<listitem><para>provides searching from within application with SphinxQL or SphinxAPI interfaces, and from within MySQL with pluggable SphinxSE storage engine;</para></listitem>
<listitem><para>supports boolean, phrase, word proximity and other types of queries;</para></listitem>
<listitem><para>supports multiple full-text fields per document (up to 32 by default);</para></listitem>
<listitem><para>supports multiple additional attributes per document (ie. groups, timestamps, etc);</para></listitem>
<listitem><para>supports stopwords;</para></listitem>
<listitem><para>supports morphological word forms dictionaries;</para></listitem>
<listitem><para>supports tokenizing exceptions;</para></listitem>
<listitem><para>supports UTF-8 encoding;</para></listitem>
<listitem><para>supports stemming (stemmers for English, Russian, Czech and Arabic are built-in; stemmers for
French, Spanish, Portuguese, Italian, Romanian, German, Dutch, Swedish, Norwegian, Danish, Finnish, and Hungarian
are available by building the third party <ulink url="http://snowball.tartarus.org/">libstemmer library</ulink>);</para></listitem>
<listitem><para>supports MySQL natively (all types of tables, including MyISAM, InnoDB, NDB, Archive, etc are supported);</para></listitem>
<listitem><para>supports PostgreSQL natively;</para></listitem>
<listitem><para>supports ODBC compliant databases (MS SQL, Oracle, etc) natively;</para></listitem>
<listitem><para>...has 50+ other features not listed here; refer to the configuration manual!</para></listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="getting"><title>Where to get Sphinx</title>
<para>Sphinx is available through its official Web site at <ulink url="http://sphinxsearch.com/">http://sphinxsearch.com/</ulink>.
</para>
<para>Currently, the Sphinx distribution tarball includes the following software:
<itemizedlist>
<listitem><para><filename>indexer</filename>: a utility which creates fulltext indexes;</para></listitem>
<listitem><para><filename>searchd</filename>: a daemon which enables external software (eg. Web applications) to search through fulltext indexes;</para></listitem>
<listitem><para><filename>sphinxapi</filename>: a set of searchd client API libraries for popular Web scripting languages (PHP, Python, Perl, Ruby).</para></listitem>
<listitem><para><filename>spelldump</filename>: a simple command-line tool to extract the items from an <filename>ispell</filename> or <filename>MySpell</filename>
(as bundled with OpenOffice) format dictionary to help customize your index, for use with <link linkend="conf-wordforms">wordforms</link>.</para></listitem>
<listitem><para><filename>indextool</filename>: a utility to dump miscellaneous debug information about the index, added in version 0.9.9-rc2.</para></listitem>
<listitem><para><filename>wordbreaker</filename>: a utility to break down compound words into separate words, added in version 2.1.1.</para></listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="license"><title>License</title>
<para>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License,
or (at your option) any later version. See COPYING file for details.
</para>
<para>
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
more details.
</para>
<para>
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
</para>
<para>
Non-GPL licensing (for OEM/ISV embedded use) can also be arranged, please
<ulink url="http://sphinxsearch.com/contacts.html">contact us</ulink> to discuss
commercial licensing possibilities.
</para>
</sect1>
<sect1 id="credits"><title>Credits</title>
<bridgehead>Author</bridgehead>
<para>
Sphinx's initial author (and a benevolent dictator ever since):
<itemizedlist>
<listitem><para>Andrew Aksyonoff, <ulink url="http://shodan.ru">http://shodan.ru</ulink></para></listitem>
</itemizedlist>
</para>
<bridgehead>Team</bridgehead>
<para>
Past and present employees of Sphinx Technologies Inc whose work
on Sphinx should be noted (in alphabetical order):
<itemizedlist>
<listitem><para>Adam Rice</para></listitem>
<listitem><para>Adrian Nuta</para></listitem>
<listitem><para>Alexander Klimenko</para></listitem>
<listitem><para>Alexey Dvoichenkov</para></listitem>
<listitem><para>Alexey Vinogradov</para></listitem>
<listitem><para>Anton Tsitlionok</para></listitem>
<listitem><para>Eugene Kosov</para></listitem>
<listitem><para>Gloria Vinogradova</para></listitem>
<listitem><para>Ilya Kuznetsov</para></listitem>
<listitem><para>Kirill Shmatov</para></listitem>
<listitem><para>Rich Kelm</para></listitem>
<listitem><para>Stanislav Klinov</para></listitem>
<listitem><para>Steven Barker</para></listitem>
<listitem><para>Vladimir Fedorkov</para></listitem>
<listitem><para>Yuri Schapov</para></listitem>
</itemizedlist>
</para>
<bridgehead>Contributors</bridgehead>
<para>People who contributed to Sphinx and their contributions (in no particular order):
<itemizedlist>
<listitem><para>Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source</para></listitem>
<listitem><para>Len Kranendonk, Perl API</para></listitem>
<listitem><para>Dmytro Shteflyuk, Ruby API</para></listitem>
</itemizedlist>
</para>
<para>
Many other people have contributed ideas, bug reports, fixes, etc.
Thank you!
</para>
</sect1>
<sect1 id="history"><title>History</title>
<para>
Sphinx development started back in 2001, because I didn't manage
to find an acceptable search solution (for a database-driven Web site)
that would meet my requirements. Actually, each and every important aspect was a problem:
<itemizedlist>
<listitem><para>search quality (ie. good relevance)
<itemizedlist><listitem><para>statistical ranking methods performed rather badly, especially on large collections of small documents (forums, blogs, etc)</para></listitem></itemizedlist>
</para></listitem>
<listitem><para>search speed
<itemizedlist><listitem><para>especially if searching for phrases which contain stopwords, as in "to be or not to be"</para></listitem></itemizedlist>
</para></listitem>
<listitem><para>moderate disk and CPU requirements when indexing
<itemizedlist><listitem><para>important in a shared hosting environment, not to mention the indexing speed.</para></listitem></itemizedlist>
</para></listitem>
</itemizedlist>
</para>
<para>
Despite the amount of time that has passed and the numerous improvements
made in the other solutions, there's still no solution which I personally
would be eager to migrate to.
</para>
<para>
Considering that, and the lot of positive feedback received from Sphinx users
over the last years, the obvious decision is to continue developing Sphinx
(and, eventually, to take over the world).
</para>
</sect1>
</chapter>
<chapter id="installation"><title>Installation</title>
<sect1 id="supported-system"><title>Supported systems</title>
<para>
Sphinx can be compiled either from source or installed using prebuilt
packages. Most modern UNIX systems with a C++ compiler should be able
to compile and run Sphinx without any modifications.
</para>
<para>
Systems Sphinx is currently known to run successfully on include:
<itemizedlist>
<listitem><para>Linux 2.4.x, 2.6.x, 3.x (many various distributions)</para></listitem>
<listitem><para>Windows 2000, XP, 7, 8</para></listitem>
<listitem><para>FreeBSD 4.x, 5.x, 6.x, 7.x, 8.x</para></listitem>
<listitem><para>NetBSD 1.6, 3.0</para></listitem>
<listitem><para>Solaris 9, 11</para></listitem>
<listitem><para>Mac OS X</para></listitem>
</itemizedlist>
</para>
<para>
CPU architectures known to work include i386 (aka x86), amd64 (aka x86_64),
SPARC64, and ARM.
</para>
<para>
Chances are good that Sphinx should work on other Unix platforms and/or
CPU architectures just as well. Please report any other platforms that
worked for you!
</para>
<para>
All platforms are production quality. There are no principal functional
limitations on any platform.
</para>
</sect1>
<sect1 id="compiling-from-source"><title>Compiling Sphinx from source</title>
<sect2 id="required-tools"><title>Required tools</title>
<para>
On UNIX, you will need the following tools to build
and install Sphinx:
<itemizedlist>
<listitem><para>a working C++ compiler. GNU gcc and clang are known to work.</para></listitem>
<listitem><para>a good make program. GNU make is known to work.</para></listitem>
</itemizedlist>
</para>
<para>
On Windows, you will need Microsoft Visual C/C++ Studio .NET 2005 or above.
Other compilers/environments will probably work as well, but for the
time being, you will have to build the makefile (or other environment
specific project files) manually.
</para>
</sect2>
<sect2 id="compiling-source-linux"><title>Compiling on Linux</title>
<para><orderedlist>
<listitem>
<para>
Extract everything from the distribution tarball (if you haven't already)
and go to the <filename>sphinx</filename> subdirectory. (We are using
version 2.2.11-dev here for the sake of example only; be sure to change this
to the specific version you're using.)
</para>
<para><literallayout><userinput>$ tar xzvf sphinx-2.2.11-dev.tar.gz
$ cd sphinx
</userinput></literallayout></para></listitem>
<listitem>
<para>Run the configuration program:</para>
<para><literallayout><userinput>$ ./configure</userinput></literallayout></para>
<para>
There are a number of options to configure; a sample invocation follows the list below.
The complete listing may be obtained by using the <option>--help</option> switch. The most important ones are:
<itemizedlist>
<listitem><para><option>--prefix</option>, which specifies where to install Sphinx; such as <option>--prefix=/usr/local/sphinx</option> (all of the examples use this prefix)</para></listitem>
<listitem><para><option>--with-mysql</option>, which specifies where to look for MySQL include and library files, if auto-detection fails;</para></listitem>
<listitem><para><option>--with-static-mysql</option>, which builds Sphinx with statically linked MySQL support;</para></listitem>
<listitem><para><option>--with-pgsql</option>, which specifies where to look for PostgreSQL include and library files;</para></listitem>
<listitem><para><option>--with-static-pgsql</option>, which builds Sphinx with statically linked PostgreSQL support.</para></listitem>
</itemizedlist>
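For example, a typical invocation with MySQL support under the prefix used
throughout this manual (adjust the options to your system) would be:
<literallayout><userinput>$ ./configure --prefix=/usr/local/sphinx --with-mysql</userinput></literallayout>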
</para></listitem>
<listitem>
<para>Build the binaries:</para>
<para><literallayout><userinput>$ make</userinput></literallayout></para></listitem>
<listitem>
<para>Install the binaries in the directory of your choice
(defaults to <filename>/usr/local/bin/</filename> on *nix systems,
but can be overridden with <option>configure --prefix</option>):</para>
<para><literallayout><userinput>$ make install</userinput></literallayout></para></listitem>
</orderedlist></para>
</sect2>
<sect2 id="compiling-source-problems"><title>Known compilation issues</title>
<para>
If <filename>configure</filename> fails to locate MySQL headers and/or libraries,
try checking for and installing <filename>mysql-devel</filename> package. On some systems,
it is not installed by default.
</para>
<para>
If <filename>make</filename> fails with a message which looks like
<programlisting>
/bin/sh: g++: command not found
make[1]: *** [libsphinx_a-sphinx.o] Error 127
</programlisting>
try checking for and installing <filename>gcc-c++</filename> package.
</para>
<para>
If you are getting compile-time errors which look like
<programlisting>
sphinx.cpp:67: error: invalid application of `sizeof' to
incomplete type `Private::SizeError<false>'
</programlisting>
this means that some compile-time type size check failed.
The most probable reason is that the off_t type is less than 64-bit
on your system. As a quick hack, you can edit sphinx.h and replace off_t
with DWORD in the typedef for SphOffset_t, but note that this will prohibit
you from using full-text indexes larger than 2 GB. Even if the hack helps,
please report such issues, providing the exact error message and
compiler/OS details, so I can properly fix them in the next releases.
</para>
<para>
If you keep getting any other error, or the suggestions above
do not seem to help you, please don't hesitate to contact me.
</para>
</sect2>
</sect1>
<sect1 id="installing-debian"><title>Installing Sphinx packages on Debian and Ubuntu</title>
<para>There are two ways of getting Sphinx for Ubuntu: regular deb packages and the Launchpad PPA repository.</para>
<para>Deb packages:</para>
<orderedlist>
<listitem>
<para>Sphinx requires a few libraries to be installed on Debian/Ubuntu. Use apt-get to download and install these dependencies:</para>
<para><userinput>$ sudo apt-get install mysql-client unixodbc libpq5</userinput></para></listitem>
<listitem>
<para>Now you can install Sphinx:</para>
<para><userinput>$ sudo dpkg -i sphinxsearch_2.2.11-dev-0ubuntu12~trusty_amd64.deb</userinput></para></listitem>
</orderedlist>
<para>PPA repository (Ubuntu only).</para>
<para>Installing Sphinx from the Sphinxsearch PPA repository is much easier, because you will get all dependencies and can also update Sphinx to the latest version with the same command.</para>
<orderedlist>
<listitem>
<para>First, add Sphinxsearch repository and update the list of packages:</para>
<para><userinput>$ sudo add-apt-repository ppa:builds/sphinxsearch-rel22</userinput></para>
<para><userinput>$ sudo apt-get update</userinput></para></listitem>
<listitem>
<para>Install/update sphinxsearch package:</para>
<para><userinput>$ sudo apt-get install sphinxsearch</userinput></para></listitem>
</orderedlist>
<para>The Sphinx <filename>searchd</filename> daemon can be started/stopped using the service command:</para>
<para><userinput>$ sudo service sphinxsearch start</userinput></para>
</sect1>
<sect1 id="installing-redhat"><title>Installing Sphinx packages on RedHat and CentOS</title>
<para>Currently we distribute Sphinx RPMs and SRPMs on our website for both the 5.x and 6.x
versions of Red Hat Enterprise Linux, but they can be installed on CentOS as well.</para>
<orderedlist>
<listitem>
<para>Before installation make sure you have these packages installed:</para>
<para><userinput>$ yum install postgresql-libs unixODBC</userinput></para></listitem>
<listitem>
<para>Download the RedHat RPM from the Sphinx website and install it:</para>
<para><userinput>$ rpm -Uhv sphinx-2.2.1-1.rhel6.x86_64.rpm</userinput></para></listitem>
<listitem>
<para>After preparing configuration file (see <link linkend="quick-tour">Quick tour</link>), you can start searchd daemon:</para>
<para><userinput>$ service searchd start</userinput></para></listitem>
</orderedlist>
</sect1>
<sect1 id="installing-windows"><title>Installing Sphinx on Windows</title>
<para>Installing Sphinx on a Windows server is often easier than installing it in a Linux environment;
unless you are preparing code patches, you can use the pre-compiled binary files from the Downloads
area on the website.</para>
<orderedlist>
<listitem>
<para>Extract everything from the .zip file you have downloaded -
<filename>sphinx-2.2.11-dev-win32.zip</filename>,
or <filename>sphinx-2.2.11-dev-win32-pgsql.zip</filename> if you need PostgreSQL support as well.
(We are using version 2.2.11-dev here for the sake of example only;
be sure to change this to a specific version you're using.)
You can use Windows Explorer in Windows XP and up to extract the files,
or a freeware package like 7Zip to open the archive.</para>
<para>For the remainder of this guide, we will assume that the folders are unzipped into <filename>C:\Sphinx</filename>,
such that <filename>searchd.exe</filename> can be found in <filename>C:\Sphinx\bin\searchd.exe</filename>. If you decide
to use any different location for the folders or configuration file, please change it accordingly.</para></listitem>
<listitem>
<para>Edit the contents of sphinx.conf.in - specifically entries relating to @CONFDIR@ - to paths suitable for your system.</para></listitem>
<listitem>
<para>Install the <filename>searchd</filename> system as a Windows service:</para>
<para><userinput>C:\Sphinx\bin> C:\Sphinx\bin\searchd --install --config C:\Sphinx\sphinx.conf.in --servicename SphinxSearch</userinput></para></listitem>
<listitem>
<para>The <filename>searchd</filename> service will now be listed in the Services panel
within the Management Console, available from Administrative Tools. It will not have been
started, as you will need to configure it and build your indexes with <filename>indexer</filename>
before starting the service. A guide to do this can be found under
<link linkend="quick-tour">Quick tour</link>.</para>
<para>During the next steps of the install (which involve running indexer pretty much as
you would on Linux) you may find that you get an error relating to libmysql.dll not being found.
If you have MySQL installed, you should find a copy of this library in your Windows directory,
or sometimes in Windows\System32, or failing that in the MySQL core directories. If you
do receive an error please copy libmysql.dll into the bin directory.</para></listitem>
</orderedlist>
</sect1>
<sect1 id="sphinx-deprecations-defaults">
<title>Sphinx deprecations and changes in default configuration</title>
<para>
In version 2.2.1-beta we decided to start removing some old features. All
of them had been 'unofficially' deprecated for some time, and we're informing
you about it now.
</para>
<para>
Changes are as follows:
<itemizedlist>
<listitem><para>32-bit document IDs are now deprecated. Our binary releases
are now all built with 64-bit IDs by default. Note that they can still
load older indexes with 32-bit IDs, but that support will eventually be
removed. In fact, that was deprecated a while ago, but now we just want to
make it clear: we don't see any sense in trying to save your server's RAM
this way.</para></listitem>
<listitem><para>dict=crc is now deprecated. It has a bunch of limitations,
the most important ones being keyword collisions, and no (good) wildcard
matching support. You can read more about those limitations in our
documentation.</para></listitem>
<listitem><para>charset_type=sbcs is now deprecated; we're slowly switching
to UTF-8 only. Even if your database is SBCS (likely for legacy reasons
too, eh?), this should be absolutely trivial to work around: just add a
pre-query to fetch your data in UTF-8 and you're all set. Also, in fact,
our current UTF-8 tokenizer is even faster than the SBCS one.</para></listitem>
<listitem><para>custom sort (@custom) is now removed from Sphinx. This
feature was introduced long before sort by expression became a reality
and it has been deprecated for a very long time.</para></listitem>
<listitem><para>enable_star is now deprecated. The previous default was
enable_star=0, kept for compatibility with a very old Sphinx
version. Such implicit star search isn't very intuitive, so we've decided
to eventually remove it and have marked it as deprecated just recently. We plan
to remove this configuration key entirely in the 2.2.X branch.</para></listitem>
<listitem><para>str2ordinal attributes are deprecated. This feature allows
you to sort by a string. But the same is possible with
ordinary string attributes, which are much easier to use. str2ordinal only
covers a small part of this functionality and is no longer needed.</para></listitem>
<listitem><para>str2wordcount attributes are deprecated.
<link linkend="conf-index-field-lengths">index_field_lengths=1</link>
will create an integer attribute with the field length set automatically, and we
recommend using this configuration key when you need to store field
lengths. Also, index_field_lengths=1 allows you to use new ranking formulas
like BM25F().</para></listitem>
<listitem><para>hit_format is deprecated. This is a hidden configuration
key - it's not mentioned in our documentation - but it's there, and it's
possible that someone may use it. So now we're urging you: don't use it.
The default value is 'inline', which is the new standard. The 'plain' hit_format
is obsolete and will be removed in the near future.</para></listitem>
<listitem><para>docinfo=inline is deprecated. You can now use
<link linkend="conf-ondisk-attrs">ondisk_attrs</link> or
<link linkend="conf-ondisk-attrs-default">ondisk_attrs_default</link> instead.</para>
</listitem>
<listitem><para>workers=threads is now the default for all operating systems.
We're going to get rid of the other modes in the future.</para></listitem>
<listitem><para>mem_limit=128M is a new default.</para></listitem>
<listitem><para>rt_mem_limit=128M is a new default.</para></listitem>
<listitem><para>ondisk_dict is deprecated. No need to save RAM this way.</para>
</listitem>
<listitem><para>ondisk_dict_default is deprecated. No need to save RAM this way.
</para></listitem>
<listitem><para>compat_sphinxql_magics was removed. You can no longer use the old
result format, and SphinxQL now always behaves more like ANSI SQL.</para></listitem>
<listitem><para>Completely removed xmlpipe. This was a very old ad hoc solution
for a particular customer. xmlpipe2 surpasses it in every single aspect.</para>
</listitem>
</itemizedlist>
</para>
<para>None of the different querying methods are deprecated, but as of
version 2.2.1-beta, SphinxQL is the most advanced method. We plan to
remove SphinxAPI and SphinxSE someday, so it would be a good idea to
start using SphinxQL now.</para>
<para>
<itemizedlist>
<listitem><para>The SetWeights() API call has been deprecated for a long
time and has now been removed from official APIs.</para></listitem>
<listitem><para>The default matching mode for the API is now 'extended'.
Actually, all other modes are deprecated. We recommend using the
<link linkend="extended-syntax">extended query syntax</link> instead.
</para></listitem>
</itemizedlist>
</para>
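<para>
To give the flavor of the extended query syntax recommended above, here is a
brief SphinxQL sketch (the index name test1 and the keywords are illustrative
only), combining a phrase, an OR, and a negation:
</para>
<para><literallayout><userinput>SELECT id FROM test1 WHERE MATCH('"to be or not to be" | (hamlet -ophelia)');</userinput></literallayout></para>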
<para>
Changes for 2.2.2-beta:
<itemizedlist>
<listitem><para>Removed deprecated "address" and "port" directives.
Use "listen" instead.</para></listitem>
<listitem><para>Removed str2wordcount attributes.
Use <link linkend="conf-index-field-lengths">index_field_lengths=1</link>
instead.</para></listitem>
<listitem><para>Removed str2ordinal attributes. Use string attributes
for sorting.</para></listitem>
<listitem><para>ondisk_dict and ondisk_dict_default were removed.
</para></listitem>
<listitem><para>Removed charset_type and mssql_unicode - we now support
only UTF-8 encoding.</para></listitem>
<listitem><para>Removed deprecated enable_star. Searching now always works as
with enable_star=1.</para></listitem>
<listitem><para>Removed the CLI search utility, which confused people instead of
helping them, along with sql_query_info.</para></listitem>
<listitem><para>Deprecated SetMatchMode() API call.</para></listitem>
<listitem><para>Changed default <link linkend="conf-thread-stack">thread_stack
</link> value to 1M.</para></listitem>
<listitem><para>Deprecated SetOverride() API call.</para></listitem>
</itemizedlist>
</para>
<para>
Changes for 2.2.3-beta:
<itemizedlist>
<listitem><para>Removed the unneeded max_matches key from the config file.</para>
</listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="quick-tour"><title>Quick Sphinx usage tour</title>
<para>
All the example commands below assume that you installed Sphinx
in <filename>/usr/local/sphinx</filename>, so <filename>searchd</filename> can
be found in <filename>/usr/local/sphinx/bin/searchd</filename>.
</para>
<para>
To use Sphinx, you will need to:
</para>
<orderedlist>
<listitem>
<para>Create a configuration file.</para>
<para>
The default configuration file name is <filename>sphinx.conf</filename>.
All Sphinx programs look for this file in the current working directory
by default.
</para>
<para>
A sample configuration file, <filename>sphinx.conf.dist</filename>, which has
all the options documented, is created by <filename>configure</filename>.
Copy and edit that sample file to make your own configuration (assuming Sphinx is installed into <filename>/usr/local/sphinx/</filename>):
</para>
<para><literallayout><userinput>$ cd /usr/local/sphinx/etc
$ cp sphinx.conf.dist sphinx.conf
$ vi sphinx.conf</userinput></literallayout></para>
<para>
The sample configuration file is set up to index the <filename>documents</filename>
table from the MySQL database <filename>test</filename>; the <filename>example.sql</filename>
sample data file populates that table with a few documents for testing purposes:
</para>
<para><literallayout><userinput>$ mysql -u test < /usr/local/sphinx/etc/example.sql</userinput></literallayout></para></listitem>
<listitem>
<para>Run the indexer to create full-text index from your data:</para>
<para><literallayout><userinput>$ cd /usr/local/sphinx/etc
$ /usr/local/sphinx/bin/indexer --all</userinput></literallayout></para></listitem>
<listitem>
<para>Query your newly created index!</para></listitem>
</orderedlist>
<para>Now query your index. Connect to the server:</para>
<para><literallayout><userinput>$ mysql -h0 -P9306</userinput></literallayout></para>
<para><literallayout><userinput>SELECT * FROM test1 WHERE MATCH('my document');</userinput></literallayout></para>
<para><literallayout><userinput>INSERT INTO rt VALUES (1, 'this is', 'a sample text', 11);</userinput></literallayout></para>
<para><literallayout><userinput>INSERT INTO rt VALUES (2, 'some more', 'text here', 22);</userinput></literallayout></para>
<para><literallayout><userinput>SELECT gid/11 FROM rt WHERE MATCH('text') GROUP BY gid;</userinput></literallayout></para>
<para><literallayout><userinput>SELECT * FROM rt ORDER BY gid DESC;</userinput></literallayout></para>
<para><literallayout><userinput>SHOW TABLES;</userinput></literallayout></para>
<para><literallayout><userinput>SELECT *, WEIGHT() FROM test1 WHERE MATCH('"document one"/1');SHOW META;</userinput></literallayout></para>
<para><literallayout><userinput>SET profiling=1;SELECT * FROM test1 WHERE id IN (1,2,4);SHOW PROFILE;</userinput></literallayout></para>
<para><literallayout><userinput>SELECT id, id%3 idd FROM test1 WHERE MATCH('this is | nothing') GROUP BY idd;SHOW PROFILE;</userinput></literallayout></para>
<para><literallayout><userinput>SELECT id FROM test1 WHERE MATCH('is this a good plan?');SHOW PLAN;</userinput></literallayout></para>
<para><literallayout><userinput>SELECT COUNT(*) c, id%3 idd FROM test1 GROUP BY idd HAVING COUNT(*)>1;</userinput></literallayout></para>
<para><literallayout><userinput>SELECT COUNT(*) FROM test1;</userinput></literallayout></para>
<para><literallayout><userinput>CALL KEYWORDS ('one two three', 'test1');</userinput></literallayout></para>
<para><literallayout><userinput>CALL KEYWORDS ('one two three', 'test1', 1);</userinput></literallayout></para>
<para>
Happy searching!
</para>
</sect1>
</chapter>
<chapter id="indexing"><title>Indexing</title>
<sect1 id="sources"><title>Data sources</title>
<para>
The data to be indexed can generally come from very different
sources: SQL databases, plain text files, HTML files, mailboxes,
and so on. From Sphinx's point of view, the data it indexes is a
set of structured <glossterm>documents</glossterm>, each of which has the
same set of <glossterm>fields</glossterm> and <glossterm>attributes</glossterm>.
This is similar to SQL, where each row would correspond to a document,
and each column to either a field or an attribute.
</para>
<para>
Depending on what source Sphinx should get the data from,
different code is required to fetch the data and prepare it for indexing.
This code is called a <glossterm>data source driver</glossterm> (or simply
a <glossterm>driver</glossterm> or <glossterm>data source</glossterm> for brevity).
</para>
<para>
At the time of this writing, there are built-in drivers for
MySQL, PostgreSQL, MS SQL (on Windows), and ODBC. There is also
a generic driver called xmlpipe2, which runs a specified command
and reads the data from its <filename>stdout</filename>.
See <xref linkend="xmlpipe2"/> section for the format description.
Version 2.2.1-beta added tsvpipe (Tab Separated Values) and csvpipe
(Comma Separated Values) data sources; see <xref linkend="xsvpipe"/> for details.
</para>
<para>
There can be as many sources per index as necessary. They will be
processed sequentially, in the exact order specified in the
index definition. All the documents coming from those sources
will be merged as if they were coming from a single source.
</para>
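<para>
For illustration, an index pulling from two sources might be declared as
follows (a minimal sketch; the source and index names here are made up,
and any number of <option>source</option> lines can be listed):
</para>
<programlisting>
index posts
{
	# sources are processed sequentially, in this order
	source	= posts_archive
	source	= posts_recent
	path	= /var/data/posts
}
</programlisting>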
</sect1>
<sect1 id="fields"><title>Full-text fields</title>
<para>
Full-text fields (or just <glossterm>fields</glossterm> for brevity)
are the textual document contents that get indexed by Sphinx, and can be
(quickly) searched for keywords.
</para>
<para>
Fields are named, and you can limit your searches to a single
field (eg. search through "title" only) or a subset of fields
(eg. to "title" and "abstract" only). Sphinx index format generally
supports up to 256 fields. However, up to version 2.0.1-beta indexes
were forcibly limited by 32 fields, because of certain complications
in the matching engine. Full support for up to 256 fields was added
in version 2.0.2-beta.
</para>
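<para>
For example, a SphinxQL search can be limited to the title field with the
field operator described in <xref linkend="extended-syntax"/>
(the index name here is hypothetical):
</para>
<programlisting>
SELECT id FROM myindex WHERE MATCH('@title hello');
</programlisting>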
<para>
Note that the original contents of the fields are <b>not</b> stored
in the Sphinx index. The text that you send to Sphinx gets processed,
and a full-text index (a special data structure that enables quick
searches for a keyword) gets built from that text. But the original
text contents are then simply discarded. Sphinx assumes that you store
those contents elsewhere anyway.
</para>
<para>
Moreover, it is impossible to <emphasis>fully</emphasis> reconstruct
the original text, because the specific whitespace, capitalization,
punctuation, etc will all be lost during indexing. It is theoretically
possible to partially reconstruct a given document from the Sphinx
full-text index, but that would be a slow process (especially if
the <link linkend="conf-dict">CRC dictionary</link> is used,
which does not even store the original keywords and works with
their hashes instead).
</para>
</sect1>
<sect1 id="attributes"><title>Attributes</title>
<para>
Attributes are additional values associated with each document
that can be used to perform additional filtering and sorting during search.
</para>
<para>
It is often desired to additionally process full-text search results
based not only on matching document ID and its rank, but on a number
of other per-document values as well. For instance, one might need to
sort news search results by date and then relevance,
or search through products within specified price range,
or limit blog search to posts made by selected users,
or group results by month. To do that efficiently, Sphinx allows you
to attach a number of additional <glossterm>attributes</glossterm>
to each document, and store their values in the full-text index.
It's then possible to use stored values to filter, sort,
or group full-text matches.
</para>
<para>Attributes, unlike the fields, are not full-text indexed. They
are stored in the index, but it is not possible to search them as full-text,
and attempting to do so results in an error.</para>
<para>For example, if "column" is an attribute, the extended matching mode expression
<option>@column 1</option> cannot be used to match documents where column is 1;
this holds even when numeric digits are otherwise indexed.</para>
<para>Attributes can be used for filtering, though, to restrict returned
rows, as well as sorting or <link linkend="clustering">result grouping</link>;
it is entirely possible to sort results purely based on attributes, and ignore the search
relevance tools. Additionally, attributes are returned from the search daemon, while the
indexed text is not.</para>
<para>
A good example for attributes would be a forum posts table. Assume
that only title and content fields need to be full-text searchable -
but that sometimes it is also required to limit search to a certain
author or a sub-forum (ie. search only those rows that have some
specific values of author_id or forum_id columns in the SQL table);
or to sort matches by post_date column; or to group matching posts
by month of the post_date and calculate per-group match counts.
</para>
<para>
This can be achieved by specifying all the mentioned columns
(excluding title and content, which are full-text fields) as
attributes, indexing them, and then using API calls to
set up filtering, sorting, and grouping. Here is an example.
</para>
<bridgehead>Example sphinx.conf part:</bridgehead>
<programlisting>
...
sql_query = SELECT id, title, content, \
author_id, forum_id, post_date FROM my_forum_posts
sql_attr_uint = author_id
sql_attr_uint = forum_id
sql_attr_timestamp = post_date
...
</programlisting>
<bridgehead>Example application code (in PHP):</bridgehead>
<programlisting>
require ( "sphinxapi.php" ); // bundled PHP client, assumed to be in the include path
$cl = new SphinxClient ();
// only search posts by author whose ID is 123
$cl->SetFilter ( "author_id", array ( 123 ) );
// only search posts in sub-forums 1, 3 and 7
$cl->SetFilter ( "forum_id", array ( 1,3,7 ) );
// sort found posts by posting date in descending order
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );
</programlisting>
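<para>
Roughly the same filtering and sorting can also be expressed in SphinxQL;
the sketch below assumes the index built from this source is named
<filename>forum_posts</filename> (a made-up name):
</para>
<programlisting>
SELECT * FROM forum_posts
WHERE MATCH('interesting topic')
AND author_id=123 AND forum_id IN (1,3,7)
ORDER BY post_date DESC;
</programlisting>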
<para>
Attributes are named. Attribute names are case insensitive.
Attributes are <emphasis>not</emphasis> full-text indexed; they are stored in the index as is.
Currently supported attribute types are:
<itemizedlist>
<listitem><para>unsigned integers (1-bit to 32-bit wide);</para></listitem>
<listitem><para>UNIX timestamps;</para></listitem>
<listitem><para>floating point values (32-bit, IEEE 754 single precision);</para></listitem>
<listitem><para><link linkend="conf-sql-attr-string">strings</link> (since 1.10-beta);</para></listitem>
<listitem><para><link linkend="conf-sql-attr-json">JSON</link> (since 2.1.1-beta);</para></listitem>
<listitem><para><link linkend="mva">MVA</link>, multi-value attributes (variable-length lists of 32-bit unsigned integers).</para></listitem>
</itemizedlist>
</para>
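<para>
In SQL sources, these types map to the respective
<option>sql_attr_XXX</option> directives. A brief sketch (the column
names are invented for illustration):
</para>
<programlisting>
sql_attr_uint		= forum_id
sql_attr_timestamp	= post_date
sql_attr_float		= rating
sql_attr_string		= author_name
sql_attr_json		= extra_data
</programlisting>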
<para>
The complete set of per-document attribute values is sometimes
referred to as <glossterm>docinfo</glossterm>. Docinfos can either be
<itemizedlist>
<listitem><para>stored separately from the main full-text index data ("extern" storage, in <filename>.spa</filename> file), or</para></listitem>
<listitem><para>attached to each occurrence of document ID in full-text index data ("inline" storage, in <filename>.spd</filename> file).</para></listitem>
</itemizedlist>
</para>
<para>
When using extern storage, a copy of <filename>.spa</filename> file
(with all the attribute values for all the documents) is kept in RAM by
<filename>searchd</filename> at all times. This is for performance reasons;
random disk I/O would be too slow. On the contrary, inline storage does not
require any additional RAM at all, but that comes at the cost of greatly
inflating the index size: remember that it copies <emphasis>all</emphasis>
attribute values <emphasis>every</emphasis> time the document ID
is mentioned, which is exactly as many times as there are
different keywords in the document. Inline may be the only viable
option if you have only a few attributes and need to work with big
datasets in limited RAM. However, in most cases extern storage
makes both indexing and searching <emphasis>much</emphasis> more efficient.
</para>
<para>
Search-time memory requirements for extern storage are
(1+number_of_attrs)*number_of_docs*4 bytes, ie. 10 million docs with
2 groups and 1 timestamp will take (1+2+1)*10M*4 = 160 MB of RAM.
This is <emphasis>PER DAEMON</emphasis>, not per query. <filename>searchd</filename>
will allocate 160 MB on startup, read the data and keep it shared between queries.
The children will <emphasis>NOT</emphasis> allocate any additional
copies of this data.
</para>
</sect1>
<sect1 id="mva"><title>MVA (multi-valued attributes)</title>
<para>
MVAs, or multi-valued attributes, are an important special type of per-document attributes in Sphinx.
MVAs let you attach sets of numeric values to every document.
That is useful to implement article tags, product categories, etc.
Filtering and group-by (but not sorting) on MVA attributes is supported.
</para>
<para>
As of version 2.0.2-beta, MVA values can either be unsigned 32-bit integers
(UNSIGNED INTEGER) or signed 64-bit integers (BIGINT). Up to version 2.0.1-beta,
only the unsigned 32-bit values were supported.
</para>
<para>
The set size is not limited; you can have an arbitrary number of values
attached to each document as long as RAM permits (<filename>.spm</filename> file
that contains the MVA values will be precached in RAM by <filename>searchd</filename>).
The source data can be taken either from a separate query, or from a document field;
see source type in <link linkend="conf-sql-attr-multi">sql_attr_multi</link>.
In the first case the query will have to return pairs of document ID and MVA values,
in the second one the field will be parsed for integer values.
There are absolutely no requirements as to incoming data order; the values will be
automatically grouped by document ID (and internally sorted within the same ID)
during indexing anyway.
</para>
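<para>
A sketch of both declaration styles for an SQL source (the names are
invented; see <link linkend="conf-sql-attr-multi">sql_attr_multi</link>
for the exact syntax):
</para>
<programlisting>
# take values from a document field named "tags"
sql_attr_multi = uint tags from field

# or fetch (document ID, value) pairs with a separate query
sql_attr_multi = uint tags from query; SELECT doc_id, tag_id FROM tags
</programlisting>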
<para>
When filtering, a document will match the filter on MVA attribute
if <emphasis>any</emphasis> of the values satisfy the filtering condition.
(Therefore, documents that pass through exclude filters will not
contain any of the forbidden values.)
When grouping by MVA attribute, a document will contribute to as
many groups as there are different MVA values associated with that document.
For instance, if the collection contains exactly 1 document having a 'tag' MVA
with values 5, 7, and 11, grouping on 'tag' will produce 3 groups with
'COUNT(*)' equal to 1 and 'GROUPBY()' key values of 5, 7, and 11 respectively.
Also note that grouping by MVA might lead to duplicate documents in the result set:
because each document can participate in many groups, it can be chosen as the best
one in more than one group, leading to duplicate IDs. The PHP API historically
uses an ordered hash on the document ID for the resulting rows, so you'll also need to use
<link linkend="api-func-setarrayresult">SetArrayResult()</link> in order
to employ group-by on MVA with PHP API.
</para>
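<para>
A minimal PHP sketch of grouping on an MVA (the attribute and index names
are made up):
</para>
<programlisting>
$cl->SetArrayResult ( true ); // required for group-by on MVA with PHP API
$cl->SetGroupBy ( "tag", SPH_GROUPBY_ATTR );
$res = $cl->Query ( "test", "articles" );
</programlisting>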
</sect1>
<sect1 id="indexes"><title>Indexes</title>
<para>
To be able to answer full-text search queries fast, Sphinx needs
to build a special data structure optimized for such queries from
your text data. This structure is called <glossterm>index</glossterm>; and
the process of building index from text is called <glossterm>indexing</glossterm>.
</para>
<para>
Different index types are well suited for different tasks.
For example, a disk-based tree index would be easy to
update (ie. insert new documents into an existing index), but rather
slow to search. The Sphinx architecture internally allows different
<glossterm>index types</glossterm>, or <glossterm>backends</glossterm>,
to be implemented comparatively easily.
</para>
<para>
Starting with 1.10-beta, Sphinx provides two different backends:
a <b>disk index</b> backend, and a <b>RT (realtime) index</b> backend.
</para>
<para>
<b>Disk indexes</b> are designed to provide maximum indexing and searching
speed, while keeping the RAM footprint as low as possible. That comes
at the cost of full-text index updates: you cannot update an existing document or
incrementally add a new document to a disk index. You can only batch
rebuild the entire disk index from scratch. (Note that you still can
update a document's <b>attributes</b> on the fly, even with disk
indexes.)
</para>
<para>
This "rebuild only" limitation might look as a big constraint
at a first glance. But in reality, it can very frequently be worked
around rather easily by setting up multiple disk indexes, searching
through them all, and only rebuilding the one with a fraction
of the most recently changed data.
See <xref linkend="live-updates"/> for details.
</para>
<para>
<b>RT indexes</b> enable you to implement dynamic updates and
incremental additions to the full text index. RT stands for Real Time
and they are indeed "soft realtime" in terms of writes, meaning that
most index changes become available for searching as quickly as 1 millisecond
or less, but could occasionally stall for seconds. (Searches will still work
even during that occasional writing stall.) Refer to
<xref linkend="rt-indexes"/> for details.
</para>
<para>
Last but not least, Sphinx supports so-called <b>distributed indexes</b>.
Compared to disk and RT indexes, those are not a real physical backend,
but rather just lists of either local or remote indexes that can be
searched transparently to the application, with Sphinx doing all the chores
of sending search requests to remote machines in the cluster, aggregating
the result sets, retrying the failed requests, and even doing some
load balancing. See <xref linkend="distributed"/> for a discussion
of distributed indexes.
</para>
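<para>
A minimal distributed index declaration might look as follows (the host,
port, and index names are illustrative only):
</para>
<programlisting>
index dist_posts
{
	type	= distributed
	local	= posts_shard1
	agent	= box2:9312:posts_shard2
}
</programlisting>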
<para>
There can be as many indexes per configuration file as necessary.
<filename>indexer</filename> utility can reindex either all of them
(if <option>--all</option> option is specified), or a certain explicitly
specified subset. <filename>searchd</filename> utility will serve all
the specified indexes, and the clients can specify what indexes to
search at run time.
</para>
</sect1>
<sect1 id="data-restrictions"><title>Restrictions on the source data</title>
<para>
There are a few different restrictions imposed on the source data
which is going to be indexed by Sphinx, of which the single most
important one is:
</para>
<para><emphasis role="bold">
ALL DOCUMENT IDS MUST BE UNIQUE UNSIGNED NON-ZERO INTEGER NUMBERS (32-BIT OR 64-BIT, DEPENDING ON BUILD TIME SETTINGS).
</emphasis></para>
<para>
If this requirement is not met, different bad things can happen.
For instance, Sphinx can crash with an internal assertion while indexing;
or produce strange results when searching due to conflicting IDs.
Also, a 1000-pound gorilla might eventually come out of your
display and start throwing barrels at you. You've been warned.
</para>
</sect1>
<sect1 id="charsets"><title>Charsets, case folding, translation tables, and replacement rules</title>
<para>
When indexing, Sphinx fetches documents from
the specified sources, splits the text into words, and does
case folding so that "Abc", "ABC" and "abc" would be treated
as the same word (or, to be pedantic, <glossterm>term</glossterm>).
</para>
<para>
To do that properly, Sphinx needs to know
<itemizedlist>
<listitem><para>what encoding is the source text in (and this encoding should always be UTF-8);</para></listitem>
<listitem><para>what characters are letters and what are not;</para></listitem>
<listitem><para>what letters should be folded to what letters.</para></listitem>
</itemizedlist>
This should be configured on a per-index basis using
<option><link linkend="conf-charset-table">charset_table</link></option> option.
<option><link linkend="conf-charset-table">charset_table</link></option>
specifies the table that maps letter characters to their case
folded versions. The characters that are not in the table are considered
to be non-letters and will be treated as word separators when indexing
or searching through this index.
</para>
<para>
Default tables currently include English and Russian characters.
Please do submit your tables for other languages!
</para>
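<para>
For instance, a <option>charset_table</option> covering case-insensitive
English text only might look like this (a simplified sketch; the built-in
defaults are more complete):
</para>
<programlisting>
charset_table = 0..9, A..Z->a..z, _, a..z
</programlisting>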
<para>As of version 2.1.1-beta, you can also specify text pattern replacement rules.
For example, given the rules</para>
<programlisting>
regexp_filter = \b(\d+)\" => \1 inch
regexp_filter = (BLUE|RED) => COLOR
</programlisting>
<para>the text 'RED TUBE 5" LONG' would be indexed as 'COLOR TUBE 5 INCH LONG', and
'PLANK 2" x 4"' as 'PLANK 2 INCH x 4 INCH'. Rules are applied in the given order.
Text in queries is also replaced; a search for "BLUE TUBE" would
actually become a search for "COLOR TUBE". Note that Sphinx must
be built with the --with-re2 option to use this feature.</para>
</sect1>
<sect1 id="sql"><title>SQL data sources (MySQL, PostgreSQL)</title>
<para>
With all the SQL drivers, indexing generally works as follows.
<itemizedlist>
<listitem><para>connection to the database is established;</para></listitem>
<listitem><para>pre-query (see <xref linkend="conf-sql-query-pre"/>) is executed
to perform any necessary initial setup, such as setting per-connection encoding with MySQL;</para></listitem>
<listitem><para>main query (see <xref linkend="conf-sql-query"/>) is executed and the rows it returns are indexed;</para></listitem>
<listitem><para>post-query (see <xref linkend="conf-sql-query-post"/>) is executed
to perform any necessary cleanup;</para></listitem>
<listitem><para>connection to the database is closed;</para></listitem>
<listitem><para>indexer does the sorting phase (to be pedantic, index-type specific post-processing);</para></listitem>
<listitem><para>connection to the database is established again;</para></listitem>
<listitem><para>post-index query (see <xref linkend="conf-sql-query-post-index"/>) is executed
to perform any necessary final cleanup;</para></listitem>
<listitem><para>connection to the database is closed again.</para></listitem>
</itemizedlist>
Most options, such as database user/host/password, are straightforward.
However, there are a few subtle things, which are discussed in more detail here.
</para>
<bridgehead id="ranged-queries">Ranged queries</bridgehead>
<para>
The main query, which needs to fetch all the documents, can impose
a read lock on the whole table and stall concurrent queries
(eg. INSERTs to a MyISAM table), waste a lot of memory on the result set, etc.
To avoid this, Sphinx supports so-called <glossterm>ranged queries</glossterm>.
With ranged queries, Sphinx first fetches min and max document IDs from
the table, and then substitutes different ID intervals into main query text
and runs the modified query to fetch another chunk of documents.
Here's an example.
</para>
<example id="ex-ranged-queries"><title>Ranged query usage example</title>
<programlisting>
# in sphinx.conf
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
sql_range_step = 1000
sql_query = SELECT * FROM documents WHERE id>=$start AND id<=$end
</programlisting>
</example>
<para>
If the table contains document IDs from 1 to, say, 2345, then sql_query would
be run three times:
<orderedlist>
<listitem><para>with <option>$start</option> replaced with 1 and <option>$end</option> replaced with 1000;</para></listitem>
<listitem><para>with <option>$start</option> replaced with 1001 and <option>$end</option> replaced with 2000;</para></listitem>
<listitem><para>with <option>$start</option> replaced with 2001 and <option>$end</option> replaced with 2345.</para></listitem>
</orderedlist>
Obviously, that's not much of a difference for a 2,000-row table,
but when it comes to indexing a 10-million-row MyISAM table,
ranged queries might be of some help.
</para>
<bridgehead><option>sql_query_post</option> vs. <option>sql_query_post_index</option></bridgehead>
<para>
The difference between the post-query and the post-index query is that the post-query
is run immediately once Sphinx has received all the documents, when further indexing
<emphasis role="bold">may</emphasis> still fail for some other reason. On the contrary,
by the time the post-index query gets executed, it is <emphasis role="bold">guaranteed</emphasis>
that the indexing was successful. The database connection is dropped and re-established
because the sorting phase can be very lengthy and would simply time out otherwise.
</para>
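<para>
A typical use of the post-index query is to record an indexing watermark
once indexing is known to have succeeded, for instance (the table name is
invented; <option>$maxid</option> expands to the maximum document ID that
was actually fetched during indexing):
</para>
<programlisting>
sql_query_post_index = REPLACE INTO sph_counter \
	SELECT 1, $maxid
</programlisting>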
</sect1>
<sect1 id="xmlpipe2"><title>xmlpipe2 data source</title>
<para>
xmlpipe2 lets you pass arbitrary full-text and attribute data to Sphinx
in yet another custom XML format. It also allows you to specify the schema
(ie. the set of fields and attributes) either in the XML stream itself,
or in the source settings.
</para>
<para>
When indexing an xmlpipe2 source, indexer runs the given command, opens
a pipe to its stdout, and expects a well-formed XML stream. Here's some
sample stream data:
<example id="ex-xmlpipe2-document"><title>xmlpipe2 document stream</title>
<programlisting>
<?xml version="1.0" encoding="utf-8"?>
<sphinx:docset>
<sphinx:schema>
<sphinx:field name="subject"/>
<sphinx:field name="content"/>
<sphinx:attr name="published" type="timestamp"/>
<sphinx:attr name="author_id" type="int" bits="16" default="1"/>
</sphinx:schema>
<sphinx:document id="1234">
<content>this is the main content <![CDATA[[and this <cdata> entry
must be handled properly by xml parser lib]]></content>
<published>1012325463</published>
<subject>note how field/attr tags can be
in <b class="red">randomized</b> order</subject>
<misc>some undeclared element</misc>
</sphinx:document>
<sphinx:document id="1235">
<subject>another subject</subject>
<content>here comes another document, and i am given to understand,
that in-document field order must not matter, sir</content>
<published>1012325467</published>
</sphinx:document>
<!-- ... even more sphinx:document entries here ... -->
<sphinx:killlist>
<id>1234</id>
<id>4567</id>
</sphinx:killlist>
</sphinx:docset>
</programlisting>
</example>
</para>
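<para>
The matching source declaration is minimal; indexer just needs a command
that prints such a stream to its stdout (a sketch; the command and file
name below are made up):
</para>
<programlisting>
source xml_test
{
	type		= xmlpipe2
	xmlpipe_command	= cat /tmp/documents.xml
}
</programlisting>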
<para>
Arbitrary fields and attributes are allowed.
They can also occur in the stream in arbitrary order within each document; the order is ignored.
There is a restriction on maximum field length; fields longer than 2 MB will be truncated to 2 MB (this limit can be changed in the source).
</para>
<para>
The schema, ie. the complete list of fields and attributes, must be declared
before any document can be parsed. This can be done either in the
configuration file using <option>xmlpipe_field</option> and <option>xmlpipe_attr_XXX</option>
settings, or right in the stream using <sphinx:schema> element.
<sphinx:schema> is optional. It is only allowed to occur as the very
first sub-element in <sphinx:docset>. If there is no in-stream
schema definition, settings from the configuration file will be used.
Otherwise, stream settings take precedence.
</para>
<para>
Unknown tags (declared neither as fields nor as attributes)
will be ignored with a warning. In the example above, <misc> will be ignored.
All embedded tags and their attributes (such as <b> in <subject>
in the example above) will be silently ignored.
</para>
<para>
Support for incoming stream encodings depends on whether <filename>iconv</filename>
is installed on the system. xmlpipe2 is parsed using <filename>libexpat</filename>
parser that understands US-ASCII, ISO-8859-1, UTF-8 and a few UTF-16 variants
natively. Sphinx <filename>configure</filename> script will also check
for <filename>libiconv</filename> presence, and utilize it to handle
other encodings. <filename>libexpat</filename> also enforces the
requirement to use UTF-8 charset on Sphinx side, because the
parsed data it returns is always in UTF-8.
<!-- TODO: check this vs latin-1 -->
</para>
<para>
XML elements (tags) recognized by xmlpipe2 (and their attributes where applicable) are:
<variablelist>
<varlistentry>
<term>sphinx:docset</term>
<listitem><para>Mandatory top-level element, denotes and contains xmlpipe2 document set.</para></listitem>
</varlistentry>
<varlistentry>
<term>sphinx:schema</term>
<listitem><para>Optional element, must either occur as the very first child
of sphinx:docset, or never occur at all. Declares the document schema.
Contains field and attribute declarations. If present, overrides
per-source settings from the configuration file.
</para></listitem>
</varlistentry>
<varlistentry>
<term>sphinx:field</term>
<listitem><para>Optional element, child of sphinx:schema. Declares a full-text field.
Known attributes are:
<itemizedlist>
<listitem><para>"name", specifies the XML element name that will be treated as a full-text field in the subsequent documents.</para></listitem>
<listitem><para>"attr", specifies whether to also index this field as a string. Possible value is "string". Introduced in version 1.10-beta.</para></listitem>
</itemizedlist>
</para></listitem>
</varlistentry>
<varlistentry>
<term>sphinx:attr</term>
<listitem><para>Optional element, child of sphinx:schema. Declares an attribute.
Known attributes are:
<itemizedlist>
<listitem><para>"name", specifies the element name that should be treated as an attribute in the subsequent documents.</para></listitem>
<listitem><para>"type", specifies the attribute type. Possible values are "int", "bigint", "timestamp", "bool", "float", "multi" and "json".</para></listitem>
<listitem><para>"bits", specifies the bit size for "int" attribute type. Valid values are 1 to 32.</para></listitem>
<listitem><para>"default", specifies the default value for this attribute that should be used if the attribute's element is not present in the document.</para></listitem>
</itemizedlist>
</para></listitem>
</varlistentry>
<varlistentry>
<term>sphinx:document</term>
<listitem><para>Mandatory element, must be a child of sphinx:docset.
Contains arbitrary other elements with field and attribute values
to be indexed, as declared either using sphinx:field and sphinx:attr
elements or in the configuration file. The only known attribute
is "id" that must contain the unique integer document ID.
</para></listitem>
</varlistentry>
<varlistentry>
<term>sphinx:killlist</term>
<listitem><para>Optional element, child of sphinx:docset.
Contains a number of "id" elements whose contents are document IDs
to be put into a <link linkend="conf-sql-query-killlist">kill-list</link> for this index.
</para></listitem>
</varlistentry>
</variablelist>
</para>
</sect1>
<sect1 id="xsvpipe"><title>tsvpipe\csvpipe (Tab\Comma Separated Values) data source</title>
<para>
This is the simplest way to pass data to the indexer. It was created to
work around xmlpipe2 limitations: with XML, indexer must map each attribute
and field tag in the file to the corresponding schema element, and that
mapping overhead grows with the number of fields and attributes in the
schema. There is no such issue with tsvpipe, because each field and attribute
is simply a particular column in the TSV file, so in some cases tsvpipe can
work slightly faster than xmlpipe2. Added in 2.2.1-beta.
</para>
<para>
The first column in a TSV/CSV file must be the document ID. The remaining
columns must mirror the declaration of fields and attributes in the schema definition.
</para>
<para>
The difference between tsvpipe and csvpipe lies in the delimiter and quoting rules.
tsvpipe uses the tab character as a hardcoded delimiter and has no quoting rules.
csvpipe has a <link linkend="conf-csvpipe-delimiter">csvpipe_delimiter</link> option
for the delimiter (defaulting to ',') and the following quoting rules:
<itemizedlist>
<listitem><para>any field may be quoted</para></listitem>
<listitem><para>fields containing a line break, double quote, or comma should be quoted</para></listitem>
<listitem><para>a double quote character in a field must be represented by two double quote characters</para></listitem>
</itemizedlist>
</para>
<para>
tsvpipe and csvpipe have the same field and attribute declaration directives as xmlpipe.
</para>
<para>
tsvpipe declarations:
</para>
<para>
<link linkend="conf-xmlpipe-command">tsvpipe_command</link>,
<link linkend="conf-xmlpipe-field">tsvpipe_field</link>, <link linkend="conf-xmlpipe-field-string">tsvpipe_field_string</link>,
<link linkend="conf-xmlpipe-attr-uint">tsvpipe_attr_uint</link>, <link linkend="conf-xmlpipe-attr-timestamp">tsvpipe_attr_timestamp</link>,
<link linkend="conf-xmlpipe-attr-bool">tsvpipe_attr_bool</link>, <link linkend="conf-xmlpipe-attr-float">tsvpipe_attr_float</link>,
<link linkend="conf-xmlpipe-attr-bigint">tsvpipe_attr_bigint</link>, <link linkend="conf-xmlpipe-attr-multi">tsvpipe_attr_multi</link>,
<link linkend="conf-xmlpipe-attr-multi-64">tsvpipe_attr_multi_64</link>, <link linkend="conf-xmlpipe-attr-string">tsvpipe_attr_string</link>,
<link linkend="conf-xmlpipe-attr-json">tsvpipe_attr_json</link>
</para>
<para>
csvpipe declarations:
</para>
<para>
<link linkend="conf-xmlpipe-command">csvpipe_command</link>,
<link linkend="conf-xmlpipe-field">csvpipe_field</link>, <link linkend="conf-xmlpipe-field-string">csvpipe_field_string</link>,
<link linkend="conf-xmlpipe-attr-uint">csvpipe_attr_uint</link>, <link linkend="conf-xmlpipe-attr-timestamp">csvpipe_attr_timestamp</link>,
<link linkend="conf-xmlpipe-attr-bool">csvpipe_attr_bool</link>, <link linkend="conf-xmlpipe-attr-float">csvpipe_attr_float</link>,
<link linkend="conf-xmlpipe-attr-bigint">csvpipe_attr_bigint</link>, <link linkend="conf-xmlpipe-attr-multi">csvpipe_attr_multi</link>,
<link linkend="conf-xmlpipe-attr-multi-64">csvpipe_attr_multi_64</link>, <link linkend="conf-xmlpipe-attr-string">csvpipe_attr_string</link>,
<link linkend="conf-xmlpipe-attr-json">csvpipe_attr_json</link>
</para>
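<para>
For example, a tsvpipe source, and a matching tab-separated data file:
</para>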
<programlisting>
source tsv_test
{
type = tsvpipe
tsvpipe_command = cat /tmp/rock_bands.tsv
tsvpipe_field = name
tsvpipe_attr_multi = genre_tags
}
</programlisting>
<programlisting>
1 Led Zeppelin 35,23,16
2 Deep Purple 35,92
3 Frank Zappa 35,23,16,92,33,24
</programlisting>
</sect1>
<sect1 id="live-updates"><title>Live index updates</title>
<para>
There are two major approaches to maintaining the full-text index
contents up to date. Note, however, that both these approaches deal
with the task of <emphasis>full-text data updates</emphasis>, and not
attribute updates. Instant attribute updates are supported since
version 0.9.8. Refer to <link linkend="api-func-updateatttributes">UpdateAttributes()</link>
API call description for details.
</para>
<para>
First, you can use disk-based indexes, partition them manually,
and only rebuild the smaller partitions (so-called "deltas") frequently.
By minimizing the rebuild size, you can reduce the average indexing lag
to something as low as 30-60 seconds. This approach was the only one
available in versions 0.9.x. On huge collections it actually might be
the most efficient one. Refer to <xref linkend="delta-updates"/>
for details.
</para>
<para>
Second, versions 1.x (starting with 1.10-beta) add support for so-called
real-time indexes (RT indexes for short) that allow on-the-fly updates of the
full-text data. Updates on an RT index can appear in the search results in
1-2 milliseconds, ie. 0.001-0.002 seconds. However, RT indexes are less
efficient for bulk indexing huge amounts of data. Refer to
<xref linkend="rt-indexes"/> for details.
</para>
</sect1>
<sect1 id="delta-updates"><title>Delta index updates</title>
<para>
There's a frequent situation when the total dataset is too big
to be reindexed from scratch often, but the amount of new records
is rather small. Example: a forum with 1,000,000 archived posts,
but only 1,000 new posts per day.
</para>
<para>
In this case, "live" (almost real time) index updates could be
implemented using so called "main+delta" scheme.
</para>
<para>
The idea is to set up two sources and two indexes, with one
"main" index for the data which only changes rarely (if ever),
and one "delta" for the new documents. In the example above,
1,000,000 archived posts would go to the main index, and newly
inserted 1,000 posts/day would go to the delta index. Delta index
could then be reindexed very frequently, and the documents can
be made available to search in a matter of minutes.
</para>
<para>
Specifying which documents should go to which index, and
reindexing the main index, can also be made fully automatic.
One option is to maintain a counter table that tracks
the ID at which the documents are split, and to update it
whenever the main index is reindexed.
<example id="ex-live-updates">
<title>Fully automated live updates</title>
<programlisting>
# in MySQL
CREATE TABLE sph_counter
(
counter_id INTEGER PRIMARY KEY NOT NULL,
max_doc_id INTEGER NOT NULL
);
# in sphinx.conf
source main
{
# ...
sql_query_pre = SET NAMES utf8
sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
sql_query = SELECT id, title, body FROM documents \
WHERE id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}
source delta : main
{
sql_query_pre = SET NAMES utf8
sql_query = SELECT id, title, body FROM documents \
WHERE id>( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}
index main
{
source = main
path = /path/to/main
# ... all the other settings
}
# note how all other settings are copied from main,
# but source and path are overridden (they MUST be)
index delta : main
{
source = delta
path = /path/to/delta
}
</programlisting>
</example>
</para>
<para>
Note how we're overriding <code>sql_query_pre</code> in the delta source.
That override must be explicit; otherwise the <code>REPLACE</code> query
would be run when indexing the delta source too, effectively nullifying it. However,
the first <code>sql_query_pre</code> directive in the inherited source removes
<emphasis>all</emphasis> inherited values, so the encoding setup is lost along with them.
Therefore, <code>sql_query_pre</code> in the delta cannot simply be left empty; we need
to issue the encoding setup query explicitly once again.
</para>
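<para>
With this setup in place, the delta index can be rebuilt as frequently as
needed without touching the main one, for example from cron (a sketch;
<option>--rotate</option> makes a running <filename>searchd</filename>
pick up the newly built files):
</para>
<programlisting>
$ indexer --rotate delta
</programlisting>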
</sect1>
<sect1 id="index-merging"><title>Index merging</title>
<para>
Merging two existing indexes can be more efficient than indexing the data
from scratch, and is desirable in some cases (such as merging 'main' and 'delta'
indexes instead of simply reindexing 'main' in a 'main+delta' partitioning
scheme). So <filename>indexer</filename> has an option to do that.
Merging the indexes is normally faster than reindexing but still
<emphasis>not</emphasis> instant on huge indexes. Basically,
it will need to read the contents of both indexes once and write
the result once. Merging 100 GB and 1 GB index, for example,
will result in 202 GB of IO (but that's still likely less than
the indexing from scratch requires).
</para>
<para>
The basic command syntax is as follows:
<programlisting>
indexer --merge DSTINDEX SRCINDEX [--rotate]
</programlisting>
Only the DSTINDEX index will be affected: the contents of SRCINDEX will be merged into it.
The <option>--rotate</option> switch will be required if DSTINDEX is already being served by <filename>searchd</filename>.
The initially devised usage pattern is to merge a smaller update from SRCINDEX into DSTINDEX.
Thus, when merging the attributes, values from SRCINDEX will win if duplicate document IDs are encountered.
Note, however, that the "old" keywords will <emphasis>not</emphasis> be automatically removed in such cases.
For example, if there's a keyword "old" associated with document 123 in DSTINDEX, and a keyword "new" associated
with it in SRCINDEX, document 123 will be found by <emphasis>both</emphasis> keywords after the merge.
You can supply an explicit condition to remove documents from DSTINDEX to mitigate that;
the relevant switch is <option>--merge-dst-range</option>:
<programlisting>
indexer --merge main delta --merge-dst-range deleted 0 0
</programlisting>
This switch lets you apply filters to the destination index along with merging.
There can be several filters; all of their conditions must be met in order
to include the document in the resulting merged index. In the example above,
the filter passes only those records where 'deleted' is 0, eliminating all
records that were flagged as deleted (for instance, using
<link linkend="api-func-updateatttributes">UpdateAttributes()</link> call).
</para>
</sect1>
</chapter>
<chapter id="rt-indexes"><title>Real-time indexes</title>
<para>
Real-time indexes (or RT indexes for brevity) are a new backend
that lets you insert, update, or delete documents (rows) on the fly.
RT indexes were added in version 1.10-beta. While querying of RT indexes
is possible using any of the SphinxAPI, SphinxQL, or SphinxSE, updating
them is only possible via SphinxQL at the moment. Full SphinxQL
reference is available in <xref linkend="sphinxql-reference"/>.
</para>
<sect1 id="rt-overview"><title>RT indexes overview</title>
<para>
RT indexes should be declared in <filename>sphinx.conf</filename>,
just as every other index type. Notable differences from the regular,
disk-based indexes are that a) data sources are not required and are ignored,
and b) you should explicitly enumerate all the text fields, not just
attributes. Here's an example:
</para>
<example id="ex-rt-updates">
<title>RT index declaration</title>
<programlisting>
index rt
{
type = rt
path = /usr/local/sphinx/data/rt
rt_field = title
rt_field = content
rt_attr_uint = gid
}
</programlisting>
</example>
<para>
As of version 2.0.1-beta, RT indexes are production quality,
despite a few missing features.
</para>
<para>
An RT index can be accessed using the MySQL protocol. INSERT, REPLACE, DELETE, and
SELECT statements against an RT index are supported. For instance, this
is an example session with the sample index above:
</para>
<programlisting>
$ mysql -h 127.0.0.1 -P 9306
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 1.10-dev (r2153)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> INSERT INTO rt VALUES ( 1, 'first record', 'test one', 123 );
Query OK, 1 row affected (0.05 sec)
mysql> INSERT INTO rt VALUES ( 2, 'second record', 'test two', 234 );
Query OK, 1 row affected (0.00 sec)
mysql> SELECT * FROM rt;
+------+--------+------+
| id | weight | gid |
+------+--------+------+
| 1 | 1 | 123 |
| 2 | 1 | 234 |
+------+--------+------+
2 rows in set (0.02 sec)
mysql> SELECT * FROM rt WHERE MATCH('test');
+------+--------+------+
| id | weight | gid |
+------+--------+------+
| 1 | 1643 | 123 |
| 2 | 1643 | 234 |
+------+--------+------+
2 rows in set (0.01 sec)
mysql> SELECT * FROM rt WHERE MATCH('@title test');
Empty set (0.00 sec)
</programlisting>
<para>
Both partial and batch INSERT syntaxes are supported, ie.
you can specify a subset of columns, and insert several rows at a time.
Deletions are also possible using DELETE statement; the only currently
supported syntax is DELETE FROM <index> WHERE id=<id>.
REPLACE is also supported, enabling you to implement updates.
</para>
<programlisting>
mysql> INSERT INTO rt ( id, title ) VALUES ( 3, 'third row' ), ( 4, 'fourth entry' );
Query OK, 2 rows affected (0.01 sec)
mysql> SELECT * FROM rt;
+------+--------+------+
| id | weight | gid |
+------+--------+------+
| 1 | 1 | 123 |
| 2 | 1 | 234 |
| 3 | 1 | 0 |
| 4 | 1 | 0 |
+------+--------+------+
4 rows in set (0.00 sec)
mysql> DELETE FROM rt WHERE id=2;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM rt WHERE MATCH('test');
+------+--------+------+
| id | weight | gid |
+------+--------+------+
| 1 | 1500 | 123 |
+------+--------+------+
1 row in set (0.00 sec)
mysql> INSERT INTO rt VALUES ( 1, 'first record on steroids', 'test one', 123 );
ERROR 1064 (42000): duplicate id '1'
mysql> REPLACE INTO rt VALUES ( 1, 'first record on steroids', 'test one', 123 );
Query OK, 1 row affected (0.01 sec)
mysql> SELECT * FROM rt WHERE MATCH('steroids');
+------+--------+------+
| id | weight | gid |
+------+--------+------+
| 1 | 1500 | 123 |
+------+--------+------+
1 row in set (0.01 sec)
</programlisting>
<para>
Data stored in an RT index should survive a clean shutdown. When binary logging
is enabled, it should also survive a crash and/or dirty shutdown, and recover
on subsequent startup.
</para>
</sect1>
<sect1 id="rt-caveats"><title>Known caveats with RT indexes</title>
<para>
RT indexes are currently a production-quality feature, but there are still
a few known usage quirks, which are listed in this section.
</para>
<itemizedlist>
<listitem><para>The default conservative RAM chunk limit (<option>rt_mem_limit</option>)
of 32M can lead to poor performance on bigger indexes; raise it to
256M..1024M if you're planning to index gigabytes.</para></listitem>
<listitem><para>High DELETE/REPLACE rate can lead to kill-list fragmentation
and impact searching performance.</para></listitem>
<listitem><para>No transaction size limits are currently imposed;
too many concurrent INSERT/REPLACE transactions might therefore
consume a lot of RAM.</para></listitem>
<listitem><para>In case of a damaged binlog, recovery will stop on the
first damaged transaction, even though it's technically possible
to keep looking further for subsequent undamaged transactions, and
recover those. This mid-file damage case (due to flaky HDD/CDD/tape?)
is supposed to be extremely rare, though.</para></listitem>
<listitem><para>Multiple INSERTs grouped in a single transaction perform
better than equivalent single-row transactions and are recommended for
batch loading of data.</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rt-internals"><title>RT index internals</title>
<para>
RT index is internally chunked. It keeps a so-called RAM chunk
that stores all the most recent changes. RAM chunk memory usage
is rather strictly limited by the per-index
<link linkend="conf-rt-mem-limit">rt_mem_limit</link> directive.
Once the RAM chunk grows over this limit, a new disk chunk is created
from its data, and the RAM chunk is reset. Thus, while most changes
on the RT index will be performed in RAM only and complete instantly
(in milliseconds), those changes that overflow the RAM chunk will
stall for the duration of disk chunk creation (a few seconds).
</para>
<para>
Since version 2.1.1-beta, Sphinx uses double-buffering to avoid INSERT stalls. When data is
being dumped to disk, the second buffer is used, so further INSERTs
won't be delayed. The second buffer is defined to be 10% of the size
of the standard buffer, <link linkend="conf-rt-mem-limit">rt_mem_limit</link>,
but future versions of Sphinx may allow configuring this further.
</para>
<para>
Disk chunks are, in fact, just regular disk-based indexes.
But they're a part of an RT index and automatically managed by it,
so you need not configure nor manage them manually. Because a new
disk chunk is created every time the RAM chunk overflows the limit, and
because in-memory chunk format is close to on-disk format, the disk
chunks will be approximately <option>rt_mem_limit</option> bytes
in size each.
</para>
<para>
Generally, it is better to set the limit higher, to minimize both
the frequency of flushes and the index fragmentation (number of disk
chunks). For instance, on a dedicated search server that handles
a big RT index, it is advisable to set <option>rt_mem_limit</option>
to 1-2 GB. A global limit on all indexes is also planned, but not yet
implemented as of 1.10-beta.
</para>
<para>
Disk chunk full-text index data cannot actually be modified,
so full-text field changes (ie. row deletions and updates)
suppress a previous row version from a disk chunk using a kill-list,
but do not physically purge the data. Therefore, on workloads
with a high full-text update ratio the index might eventually get polluted
by these previous row versions, and searching performance would
degrade. Physical index purging that would improve the performance
is planned, but not yet implemented as of 1.10-beta.
</para>
<para>
Data in RAM chunk gets saved to disk on clean daemon shutdown, and
then loaded back on startup. However, on daemon or server crash,
updates from RAM chunk might be lost. To prevent that, binary logging
of transactions can be used; see <xref linkend="rt-binlog"/> for details.
</para>
<para>
Full-text changes in RT index are transactional. They are stored
in a per-thread accumulator until COMMIT, then applied at once.
Bigger batches per single COMMIT should result in faster indexing.
</para>
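<para>
For instance, several INSERTs can be wrapped into one explicit transaction
via SphinxQL; a sketch against the sample <filename>rt</filename> index
declared earlier:
</para>
<programlisting>
BEGIN;
INSERT INTO rt VALUES ( 5, 'fifth row', 'batched text', 55 );
INSERT INTO rt VALUES ( 6, 'sixth row', 'more batched text', 66 );
COMMIT;
</programlisting>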
</sect1>
<sect1 id="rt-binlog"><title>Binary logging</title>
<para>
Binary logs are essentially a recovery mechanism. With binary logs
enabled, <filename>searchd</filename> writes every given transaction
to the binlog file, and uses that for recovery after an unclean shutdown.
On clean shutdown, RAM chunks are saved to disk, and then all the binlog
files are unlinked.
</para>
<para>
During normal operation, a new binlog file will be opened every time
the <option>binlog_max_log_size</option> limit
is reached. Older, already closed binlog files are kept until all of the
transactions stored in them (from all indexes) are flushed as a disk chunk.
Setting the limit to 0 pretty much prevents binlog from being unlinked
at all while <filename>searchd</filename> is running; however, it will
still be unlinked on clean shutdown. (This is the default behavior as of
2.0.3-release; <option>binlog_max_log_size</option> defaults to 0.)
</para>
<para>
There are 3 different binlog flushing strategies, controlled by
<link linkend="conf-binlog-flush">binlog_flush</link> directive
which takes the values of 0, 1, or 2. 0 means to flush the log
to the OS and sync it to disk every second; 1 means flush and sync
every transaction; and 2 (the default mode) means flush every
transaction but sync every second. Sync is relatively slow because
it has to perform physical disk writes, so mode 1 is the safest
(every committed transaction is guaranteed to be written to disk)
but the slowest. Flushing the log to the OS prevents data loss on
<filename>searchd</filename> crashes, but not on system crashes.
</para>
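<para>
A sketch of the relevant settings in the <filename>searchd</filename>
section of the configuration file (the path and sizes are illustrative):
</para>
<programlisting>
binlog_path		= /var/data/binlog
binlog_flush		= 2
binlog_max_log_size	= 128M
</programlisting>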
<para>
On recovery after an unclean shutdown, binlogs are replayed
and all logged transactions since the last good on-disk state
are restored. Transactions are checksummed so in case of binlog
file corruption garbage data will <b>not</b> be replayed; such
a broken transaction will be detected and, currently, will stop
replay. Transactions also start with a magic marker and are timestamped,
so in case of binlog damage in the middle of the file, it's technically
possible to skip broken transactions and keep replaying from the next
good one, and/or it's possible to replay transactions until a given
timestamp (point-in-time recovery), but none of that is implemented yet
as of 1.10-beta.
</para>
<para>
One unwanted side effect of binlogs is that actively updating
a small RT index that fully fits into the RAM chunk will lead
to an ever-growing binlog that can never be unlinked until clean
shutdown. Binlogs are essentially append-only deltas against
the last known good saved state on disk, and unless RAM chunk
gets saved, they can not be unlinked. An ever-growing binlog
is not very good for disk use and crash recovery time. Starting
with 2.0.1-beta you can configure <filename>searchd</filename>
to perform a periodic RAM chunk flush to fix that problem
using a <link linkend="conf-rt-flush-period">rt_flush_period</link>
directive. With periodic flushes enabled, <filename>searchd</filename>
will keep a separate thread, checking whether the RT indexes' RAM
chunks need to be written back to disk. Once that happens,
the respective binlogs can be (and are) safely unlinked.
</para>
<para>
Note that <code>rt_flush_period</code> only controls the
frequency at which the <emphasis>checks</emphasis> happen.
There are no <emphasis>guarantees</emphasis> that the
particular RAM chunk will get saved. For instance, it does
not make sense to regularly re-save a huge RAM chunk that
only gets a few rows' worth of updates. The search daemon
determines whether to actually perform the flush based on a few
heuristics.
</para>
</sect1>
</chapter>
<chapter id="searching"><title>Searching</title>
<!-- TODO
<sect1 id="searching-overview"><title>Overview</title>
</sect1>
-->
<sect1 id="matching-modes"><title>Matching modes</title>
<para>
So-called matching modes are a legacy feature that used to provide
(very) limited query syntax and ranking support. Currently, they are
deprecated in favor of <link linkend="extended-syntax">full-text query
language</link> and so-called <link linkend="weighting">rankers</link>.
Starting with version 0.9.9-release, it is thus strongly recommended
to use SPH_MATCH_EXTENDED and proper query syntax rather than any other
legacy mode. All those other modes are actually internally converted
to extended syntax anyway. SphinxAPI still defaults to SPH_MATCH_ALL
but that is for compatibility reasons only.
</para>
<para>
There are the following matching modes available:
<itemizedlist>
<listitem><para>SPH_MATCH_ALL, matches all query words;</para></listitem>
<listitem><para>SPH_MATCH_ANY, matches any of the query words;</para></listitem>
<listitem><para>SPH_MATCH_PHRASE, matches query as a phrase, requiring perfect match;</para></listitem>
<listitem><para>SPH_MATCH_BOOLEAN, matches query as a boolean expression (see <xref linkend="boolean-syntax"/>);</para></listitem>
<listitem><para>SPH_MATCH_EXTENDED, matches query as an expression in Sphinx internal query language
(see <xref linkend="extended-syntax"/>);</para></listitem>
<listitem><para>SPH_MATCH_EXTENDED2, an alias for SPH_MATCH_EXTENDED (default mode);</para></listitem>
<listitem><para>SPH_MATCH_FULLSCAN, forcibly uses the "full scan" mode described below.
NB: any query terms will be ignored; filters, filter ranges, and grouping
will still be applied, but no text matching will be performed.</para></listitem>
</itemizedlist>
</para>
<para>
SPH_MATCH_EXTENDED2 was used during 0.9.8 and 0.9.9 development cycle,
when the internal matching engine was being rewritten (for the sake of
additional functionality and better performance). By 0.9.9-release,
the older version was removed, and SPH_MATCH_EXTENDED and SPH_MATCH_EXTENDED2
are now just aliases.
</para>
<para>
The SPH_MATCH_FULLSCAN mode will be automatically activated in place of the specified matching mode when the following conditions are met:
<orderedlist>
<listitem><para>The query string is empty (ie. its length is zero).</para></listitem>
<listitem><para><link linkend="conf-docinfo">docinfo</link> storage is set to <code>extern</code>.</para></listitem>
</orderedlist>
In full scan mode, all the indexed documents will be considered as matching.
Such queries will still apply filters, sorting, and group by, but will not perform any full-text searching.
This can be useful to unify full-text and non-full-text searching code, or to offload SQL server
(there are cases when Sphinx scans will perform better than analogous MySQL queries).
An example of using the full scan mode might be to find posts in a forum.
By selecting the forum's user ID via <code>SetFilter()</code> but not actually providing any search text,
Sphinx will match every document (i.e. every post) that <code>SetFilter()</code> would match;
in this case, every post from that user. By default this will be ordered by relevancy,
followed by Sphinx document ID in ascending order (earliest first).
</para>
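<para>
A minimal PHP sketch of that forum example (the attribute and index names
are made up):
</para>
<programlisting>
$cl->SetMatchMode ( SPH_MATCH_FULLSCAN );
$cl->SetFilter ( "user_id", array ( 123 ) );
$res = $cl->Query ( "", "forum_posts" ); // empty query text, filters only
</programlisting>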
</sect1>
<sect1 id="boolean-syntax"><title>Boolean query syntax</title>
<para>
Boolean queries allow the following special operators to be used:
<itemizedlist>
<listitem><para>operator OR: <programlisting>hello | world</programlisting></para></listitem>
<listitem><para>operator NOT:
<programlisting>
hello -world
hello !world
</programlisting>
</para></listitem>
<listitem><para>grouping: <programlisting>( hello world )</programlisting></para></listitem>
</itemizedlist>
Here's an example query which uses all these operators:
<example id="ex-boolean-query"><title>Boolean query example</title>
<programlisting>
( cat -dog ) | ( cat -mouse)
</programlisting>
</example>
</para>
<para>
There is always an implicit AND operator, so the "hello world" query
actually means "hello & world".
</para>
<para>
OR operator precedence is higher than AND, so "looking for cat | dog | mouse"
means "looking for ( cat | dog | mouse )" and <emphasis>not</emphasis>
"(looking for cat) | dog | mouse".
</para>
<para>
Since version 2.1.1-beta, queries may be automatically optimized if OPTION boolean_simplify=1 is specified.
Some transformations performed by this optimization include:
<itemizedlist>
<listitem><para>Excess brackets: ((A | B) | C) becomes ( A | B | C ); ((A B) C) becomes ( A B C )</para></listitem>
<listitem><para>Excess AND NOT: ((A !N1) !N2) becomes (A !(N1 | N2))</para></listitem>
<listitem><para>Common NOT: ((A !N) | (B !N)) becomes ((A|B) !N)</para></listitem>
<listitem><para>Common Compound NOT: ((A !(N AA)) | (B !(N BB))) becomes (((A|B) !N) | (A !AA) | (B !BB)) if the cost of evaluating N is greater than the added together costs of evaluating A and B</para></listitem>
<listitem><para>Common subterm: ((A (N | AA)) | (B (N | BB))) becomes (((A|B) N) | (A AA) | (B BB)) if the cost of evaluating N is greater than the added together costs of evaluating A and B</para></listitem>
<listitem><para>Common keywords: (A | "A B"~N) becomes A; ("A B" | "A B C") becomes "A B"; ("A B"~N | "A B C"~N) becomes ("A B"~N)</para></listitem>
<listitem><para>Common phrase: ("X A B" | "Y A B") becomes (("X|Y") "A B")</para></listitem>
<listitem><para>Common AND NOT: ((A !X) | (A !Y) | (A !Z)) becomes (A !(X Y Z))</para></listitem>
<listitem><para>Common OR NOT: ((A !(N | N1)) | (B !(N | N2))) becomes (( (A !N1) | (B !N2) ) !N)</para></listitem>
</itemizedlist>
Note that optimizing the queries consumes CPU time, so for simple queries (or for hand-optimized queries) you'll do
better with the default boolean_simplify=0 value. Simplifications often pay off for complex queries, or for
algorithmically generated queries.
</para>
<para>
Queries like "-dog", which implicitly include all documents from the
collection, can not be evaluated. This is both for technical and performance
reasons. Technically, Sphinx does not always keep a list of all IDs.
Performance-wise, when the collection is huge (ie. 10-100M documents),
evaluating such queries could take very long.
</para>
</sect1>
<sect1 id="extended-syntax"><title>Extended query syntax</title>
<para>
The following special operators and modifiers can be used when using the extended matching mode:
<itemizedlist>
<listitem><para>operator OR: <programlisting>hello | world</programlisting></para></listitem>
<listitem><para>operator MAYBE (introduced in version 2.2.3-beta): <programlisting>hello MAYBE world</programlisting></para></listitem>
<listitem><para>operator NOT:
<programlisting>
hello -world
hello !world
</programlisting>
</para></listitem>
<listitem><para>field search operator: <programlisting>@title hello @body world</programlisting></para></listitem>
<listitem><para>field position limit modifier (introduced in version 0.9.9-rc1): <programlisting>@body[50] hello</programlisting></para></listitem>
<listitem><para>multiple-field search operator: <programlisting>@(title,body) hello world</programlisting></para></listitem>
<listitem><para>ignore field search operator (will ignore any matches of 'hello world' from field 'title'): <programlisting>@!title hello world</programlisting></para></listitem>
<listitem><para>ignore multiple-field search operator (if we have fields title, subject and body then @!(title) is equivalent to @(subject,body)): <programlisting>@!(title,body) hello world</programlisting></para></listitem>
<listitem><para>all-field search operator: <programlisting>@* hello</programlisting></para></listitem>
<listitem><para>phrase search operator: <programlisting>"hello world"</programlisting></para></listitem>
<listitem><para>proximity search operator: <programlisting>"hello world"~10</programlisting></para></listitem>
<listitem><para>quorum matching operator: <programlisting>"the world is a wonderful place"/3</programlisting></para></listitem>
<listitem><para>strict order operator (aka operator "before"): <programlisting>aaa << bbb << ccc</programlisting></para></listitem>
<listitem><para>exact form modifier (introduced in version 0.9.9-rc1): <programlisting>raining =cats and =dogs</programlisting></para></listitem>
<listitem><para>field-start and field-end modifier (introduced in version 0.9.9-rc2): <programlisting>^hello world$</programlisting></para></listitem>
<listitem><para>keyword IDF boost modifier (introduced in version 2.2.3-beta): <programlisting>boosted^1.234 boostedfieldend$^1.234</programlisting></para></listitem>
<listitem><para>NEAR, generalized proximity operator (introduced in version 2.0.1-beta): <programlisting>hello NEAR/3 world NEAR/4 "my test"</programlisting></para></listitem>
<listitem><para>SENTENCE operator (introduced in version 2.0.1-beta): <programlisting>all SENTENCE words SENTENCE "in one sentence"</programlisting></para></listitem>
<listitem><para>PARAGRAPH operator (introduced in version 2.0.1-beta): <programlisting>"Bill Gates" PARAGRAPH "Steve Jobs"</programlisting></para></listitem>
<listitem><para>ZONE limit operator: <programlisting>ZONE:(h3,h4)</programlisting> only in these titles</para></listitem>
<listitem><para>ZONESPAN limit operator: <programlisting>ZONESPAN:(h2)</programlisting> only in a (single) title</para></listitem>
</itemizedlist>
Here's an example query that uses some of these operators:
<example id="ex-extended-query"><title>Extended matching mode: query example</title>
<programlisting>
"hello world" @title "example program"~5 @body python -(php|perl) @* code
</programlisting>
</example>
The full meaning of this search is:
<itemizedlist>
<listitem><para>Find the words 'hello' and 'world' adjacently in any field in a document;</para></listitem>
<listitem><para>Additionally, the same document must also contain the words 'example' and 'program'
in the title field, with up to, but not including, 5 words between the words in question
(e.g. "example PHP program" would be matched, but "example script to introduce outside data
into the correct context for your program" would not, because the two terms have 5 or more words between them);</para></listitem>
<listitem><para>Additionally, the same document must contain the word 'python' in the body field, but not contain either 'php' or 'perl';</para></listitem>
<listitem><para>Additionally, the same document must contain the word 'code' in any field.</para></listitem>
</itemizedlist>
</para>
<para>
There is always an implicit AND operator, so "hello world" means that
both "hello" and "world" must be present in the matching document.
</para>
<para>
OR operator precedence is higher than AND, so "looking for cat | dog | mouse"
means "looking for ( cat | dog | mouse )" and <emphasis>not</emphasis>
"(looking for cat) | dog | mouse".
</para>
<para>
The field limit operator limits subsequent searching to a given field.
Normally, a query will fail with an error message if the given field name does not exist
in the searched index. However, that can be suppressed by specifying the "@@relaxed"
option at the very beginning of the query:
<programlisting>
@@relaxed @nosuchfield my query
</programlisting>
This can be helpful when searching through heterogeneous indexes with
different schemas.
</para>
<para>
Field position limit, introduced in version 0.9.9-rc1, additionally restricts the searching
to the first N positions within a given field (or fields). For example, "@body[50] hello" will
<b>not</b> match the documents where the keyword 'hello' only occurs at position 51 or later
in the body.
</para>
<para>
Proximity distance is specified in words, adjusted for word count, and
applies to all words within quotes. For instance, the "cat dog mouse"~5 query
means that there must be a span of less than 8 words that contains all 3 words;
ie. a "CAT aaa bbb ccc DOG eee fff MOUSE" document will <emphasis>not</emphasis>
match this query, because that span is exactly 8 words long.
</para>
<para>
Quorum matching operator introduces a kind of fuzzy matching.
It will only match those documents that pass a given threshold of given words.
The example above ("the world is a wonderful place"/3) will match all documents
that have at least 3 of the 6 specified words. Operator is limited to 255 keywords.
Instead of an absolute number, you can also specify a number between 0.0 and 1.0
(standing for 0% and 100%), and Sphinx will match only documents with at least
the specified percentage of given words. The same example above could also have
been written "the world is a wonderful place"/0.5 and it would match documents
with at least 50% of the 6 words.
</para>
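<para>
For instance, the examples above could be issued via SphinxQL as follows ('myindex' being just a placeholder index name):
<programlisting>
SELECT id FROM myindex WHERE MATCH('"the world is a wonderful place"/3');
SELECT id FROM myindex WHERE MATCH('"the world is a wonderful place"/0.5');
</programlisting>
The first query requires at least 3 of the 6 keywords to match; the second requires at least 50% of them.
</para>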
<para>
Strict order operator (aka operator "before"), introduced in version 0.9.9-rc2,
will match the document only if its argument keywords occur in the document
exactly in the query order. For instance, "black << cat" query (without
quotes) will match the document "black and white cat" but <emphasis>not</emphasis>
the "that cat was black" document. Order operator has the lowest priority.
It can be applied both to just keywords and more complex expressions,
ie. this is a valid query:
<programlisting>
(bag of words) << "exact phrase" << red|green|blue
</programlisting>
</para>
<para>
Exact form keyword modifier, introduced in version 0.9.9-rc1, will match the document only if the keyword occurred
in exactly the specified form. The default behavior is to match the document
if the stemmed keyword matches. For instance, "runs" query will match both
the document that contains "runs" <emphasis>and</emphasis> the document that
contains "running", because both forms stem to just "run" - while "=runs"
query will only match the first document. Exact form operator requires
<link linkend="conf-index-exact-words">index_exact_words</link> option to be enabled.
This is a modifier that affects the keyword and thus can be used within
operators such as phrase, proximity, and quorum operators.
Starting with 2.2.2-beta, it is possible to apply an exact form modifier
to the phrase operator. It's really just syntactic sugar - it adds an exact form
modifier to all terms contained within the phrase.
<programlisting>
="exact phrase"
</programlisting>
</para>
<para>
Field-start and field-end keyword modifiers, introduced in version 0.9.9-rc2,
will make the keyword match only if it occurred at the very start or the very end
of a fulltext field, respectively. For instance, the query "^hello world$"
(with quotes and thus combining phrase operator and start/end modifiers)
will only match documents that contain at least one field that has exactly
these two keywords.
</para>
<para>
Starting with 0.9.9-rc1, arbitrarily nested brackets and negations are allowed.
However, the query must be possible to compute without involving an implicit
list of all documents:
<programlisting>
// correct query
aaa -(bbb -(ccc ddd))
// queries that are non-computable
-aaa
aaa | -bbb
</programlisting>
</para>
<para>
Starting with 2.2.2-beta, the phrase search operator may include a 'match any term'
modifier. Terms within the phrase operator are position significant. When
the 'match any term' modifier is used, the positions of the subsequent terms
from that phrase query are shifted accordingly. Therefore, 'match any' has no impact
on search performance.
<programlisting>
"exact * phrase * * for terms"
</programlisting>
</para>
<para>
<b>NEAR operator</b>, added in 2.0.1-beta, is a generalized version
of a proximity operator. The syntax is <code>NEAR/N</code>, it is
case-sensitive, and no spaces are allowed between the NEAR keyword,
the slash sign, and the distance value.
</para>
<para>
The original proximity operator only worked on sets of keywords.
NEAR is more generic and can accept arbitrary subexpressions as
its two arguments, matching the document when both subexpressions
are found within N words of each other, no matter in which order.
NEAR is left associative and has the same (lowest) precedence
as BEFORE.
</para>
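<para>
For instance, as NEAR accepts arbitrary subexpressions, queries like the following
are legal (the keywords being placeholders, naturally):
<programlisting>
(hello | hi) NEAR/3 (world | planet)
"new york" NEAR/5 "san francisco"
</programlisting>
</para>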
<para>
You should also note that a <code>(one NEAR/7 two NEAR/7 three)</code>
query using NEAR is not really equivalent to a
<code>("one two three"~7)</code> one using the keyword proximity operator.
The difference here is that the proximity operator allows for up to
6 non-matching words between all the 3 matching words, but the version
with NEAR is less restrictive: it would allow for up to 6 words between
'one' and 'two' and then for up to 6 more between that two-word
match and the keyword 'three'.
</para>
<para>
<b>SENTENCE and PARAGRAPH operators</b>, added in 2.0.1-beta,
match the document when both their arguments are within the same
sentence or the same paragraph of text, respectively. The arguments
can be keywords, phrases, or instances of the same
operator. Here are a few examples:
<programlisting>
one SENTENCE two
one SENTENCE "two three"
one SENTENCE "two three" SENTENCE four
</programlisting>
The order of the arguments within the sentence or paragraph
does not matter. These operators only work on indexes built
with <link linkend="conf-index-sp">index_sp</link> (sentence
and paragraph indexing feature) enabled, and revert to a mere
AND otherwise. Refer to the <code>index_sp</code> directive
documentation for the notes on what's considered a sentence
and a paragraph.
</para>
<para>
<b>ZONE limit operator</b>, added in 2.0.1-beta, is quite similar
to field limit operator, but restricts matching to a given in-field
zone or a list of zones. Note that the subsequent subexpressions
are <emphasis>not</emphasis> required to match in a single contiguous
span of a given zone, and may match in multiple spans.
For instance, <code>(ZONE:th hello world)</code> query
<emphasis>will</emphasis> match this example document:
<programlisting>
<th>Table 1. Local awareness of Hello Kitty brand.</th>
.. some table data goes here ..
<th>Table 2. World-wide brand awareness.</th>
</programlisting>
ZONE operator affects the query until the next
field or ZONE limit operator, or the closing parenthesis.
It only works on the indexes built with zones support
(see <xref linkend="conf-index-zones"/>) and will be ignored
otherwise.
</para>
<para>
<b>ZONESPAN limit operator</b>, added in 2.1.1-beta, is similar to the ZONE operator,
but requires the match to occur in a single contiguous span. In the example
above, <code>(ZONESPAN:th hello world)</code> would not match the document,
since "hello" and "world" do not occur within the same span.
</para>
<para>
<b>MAYBE</b> operator was added in 2.2.3-beta. It works much like the |
operator, but doesn't return documents which match only the right subtree expression.
</para>
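<para>
For instance, a <code>hello MAYBE world</code> query matches the same documents as
just <code>hello</code> would; unlike <code>hello | world</code>, documents containing
only 'world' are not returned, but 'world' can still contribute to the ranking of the
documents that do match.
</para>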
</sect1>
<sect1 id="weighting"><title>Search results ranking</title>
<sect2 id="ranking-overview"><title>Ranking overview</title>
<para>
Ranking (aka weighting) of the search results can be defined
as a process of computing a so-called relevance (aka weight)
for every given matched document with regards to a given query
that matched it. So relevance is in the end just a number attached
to every document that estimates how relevant the document is to
the query. Search results can then be sorted based on this number
and/or some additional parameters, so that the most sought after
results would come up higher on the results page.
</para>
<para>
There is no single standard one-size-fits-all way to rank
any document in any scenario. Moreover, there can never be
such a way, because relevance is <emphasis>subjective</emphasis>.
That is, what seems relevant to you might not seem relevant to me.
Hence, in the general case, it's not just hard to compute; it's
theoretically impossible.
</para>
<para>
So ranking in Sphinx is configurable. It has a notion of
a so-called <b>ranker</b>. A ranker can formally be defined
as a function that takes document and query as its input and
produces a relevance value as output. In layman's terms,
a ranker controls exactly how (that is, using which specific algorithm)
Sphinx will assign weights to the documents.
</para>
<para>
Previously, this ranking function was rigidly bound to the matching mode.
So in the legacy matching modes (that is, SPH_MATCH_ALL, SPH_MATCH_ANY,
SPH_MATCH_PHRASE, and SPH_MATCH_BOOLEAN) you can not choose the ranker.
You can only do that in the SPH_MATCH_EXTENDED mode. (Which is the only
mode in SphinxQL and the suggested mode in SphinxAPI anyway.) To choose
a non-default ranker you can either use
<link linkend="api-func-setrankingmode">SetRankingMode()</link>
with SphinxAPI, or <link linkend="sphinxql-select">OPTION ranker</link>
clause in <code>SELECT</code> statement when using SphinxQL.
</para>
<para>
As a sidenote, legacy matching modes are internally implemented via
the unified syntax anyway. When you use one of those modes, Sphinx just
internally adjusts the query and sets the associated ranker, then
executes the query using the very same unified code path.
</para>
</sect2>
<sect2 id="builtin-rankers"><title>Available built-in rankers</title>
<para>
Sphinx ships with a number of built-in rankers suited for different
purposes. A number of them use two factors, phrase proximity (aka LCS)
and BM25. Phrase proximity works on the keyword positions, while BM25
works on the keyword frequencies. Basically, the better the degree of
the phrase match between the document body and the query, the higher
the phrase proximity (it maxes out when the document contains
the entire query as a verbatim quote). And BM25 is higher when
the document contains more rare words. We'll save the detailed
discussion for later.
</para>
<para>
Currently implemented rankers are:
<itemizedlist>
<listitem><para>
SPH_RANK_PROXIMITY_BM25, the default ranking mode that uses and combines
both phrase proximity and BM25 ranking.
</para></listitem>
<listitem><para>
SPH_RANK_BM25, statistical ranking mode which uses BM25 ranking only (similar to
most other full-text engines). This mode is faster but may result in worse quality
on queries which contain more than 1 keyword.
</para></listitem>
<listitem><para>
SPH_RANK_NONE, no ranking mode. This mode is obviously the fastest.
A weight of 1 is assigned to all matches. This is sometimes called boolean
searching that just matches the documents but does not rank them.
</para></listitem>
<listitem><para>SPH_RANK_WORDCOUNT, ranking by the keyword occurrences count.
This ranker computes the per-field keyword occurrence counts, then multiplies
them by field weights, and sums the resulting values.
</para></listitem>
<listitem><para>
SPH_RANK_PROXIMITY, added in version 0.9.9-rc1, returns raw phrase proximity
value as a result. This mode is internally used to emulate SPH_MATCH_ALL queries.
</para></listitem>
<listitem><para>
SPH_RANK_MATCHANY, added in version 0.9.9-rc1, returns rank as it was computed
in SPH_MATCH_ANY mode earlier, and is internally used to emulate SPH_MATCH_ANY queries.
</para></listitem>
<listitem><para>
SPH_RANK_FIELDMASK, added in version 0.9.9-rc2, returns a 32-bit mask with
N-th bit corresponding to N-th fulltext field, numbering from 0. The bit will
only be set when the respective field has any keyword occurrences satisfying
the query.
</para></listitem>
<listitem><para>
SPH_RANK_SPH04, added in version 1.10-beta, is generally based on the default
SPH_RANK_PROXIMITY_BM25 ranker, but additionally boosts the matches when
they occur in the very beginning or the very end of a text field. Thus,
if a field equals the exact query, SPH04 should rank it higher than a field
that contains the exact query but is not equal to it. (For instance, when
the query is "Hyde Park", a document entitled "Hyde Park" should be ranked
higher than a one entitled "Hyde Park, London" or "The Hyde Park Cafe".)
</para></listitem>
<listitem><para>
SPH_RANK_EXPR, added in version 2.0.2-beta, lets you specify the ranking
formula in run time. It exposes a number of internal text factors and lets
you define how the final weight should be computed from those factors.
You can find more details about its syntax and a reference of the available
factors in a subsection below.
</para></listitem>
</itemizedlist>
</para>
<para>
You should specify the <code>SPH_RANK_</code> prefix and use capital letters only
when using the <link linkend="api-func-setrankingmode">SetRankingMode()</link>
call from the SphinxAPI. The API ports expose these as global constants.
Using SphinxQL syntax, the prefix should be omitted and the ranker name
is case insensitive. Example:
<programlisting>
// SphinxAPI
$client->SetRankingMode ( SPH_RANK_SPH04 );
// SphinxQL
mysql_query ( "SELECT ... OPTION ranker=sph04" );
</programlisting>
</para>
<bridgehead>Legacy matching modes rankers</bridgehead>
<para>
Legacy matching modes automatically select a ranker as follows:
<itemizedlist>
<listitem><para>SPH_MATCH_ALL uses SPH_RANK_PROXIMITY ranker;</para></listitem>
<listitem><para>SPH_MATCH_ANY uses SPH_RANK_MATCHANY ranker;</para></listitem>
<listitem><para>SPH_MATCH_PHRASE uses SPH_RANK_PROXIMITY ranker;</para></listitem>
<listitem><para>SPH_MATCH_BOOLEAN uses SPH_RANK_NONE ranker.</para></listitem>
</itemizedlist>
</para>
</sect2>
<sect2 id="expression-ranker"><title>Expression based ranker (SPH_RANK_EXPR)</title>
<para>
Expression ranker, added in version 2.0.2-beta, lets you change the ranking
formula on the fly, on a per-query basis. For a quick kickoff, this is how you
emulate PROXIMITY_BM25 ranker using the expression based one:
<programlisting>
SELECT *, WEIGHT() FROM myindex WHERE MATCH('hello world')
OPTION ranker=expr('sum(lcs*user_weight)*1000+bm25')
</programlisting>
The output of this query should not change if you omit the <code>OPTION</code>
clause, because the default ranker (PROXIMITY_BM25) behaves exactly as
specified in the ranker formula above. But the expression ranker is somewhat
more flexible than just that, and provides access to many more factors.
</para>
<para>
The ranking formula is an arbitrary arithmetic expression that can use
constants, document attributes, built-in functions and operators (described
in <xref linkend="expressions"/>), and also a few ranking-specific things
that are only accessible in a ranking formula. Namely, those are field
aggregation functions, field-level, and document-level ranking factors.
</para>
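<para>
For example, a formula that mixes the text factors with a document attribute might
look as follows ('karma' here is just a hypothetical integer attribute; any attribute
or expression described in <xref linkend="expressions"/> could be used instead):
<programlisting>
SELECT *, WEIGHT() FROM myindex WHERE MATCH('hello world')
OPTION ranker=expr('sum(lcs*user_weight)*1000 + bm25 + ln(1+karma)')
</programlisting>
</para>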
</sect2>
<sect2 id="ranking-factors"><title>Quick summary of the ranking factors</title>
<para>
<table frame="all" id="ranking-factors-table">
<tgroup cols="3">
<colspec colname="name"/>
<colspec colname="level"/>
<colspec colname="type"/>
<colspec colname="summary"/>
<thead>
<row>
<entry>Name</entry>
<entry>Level</entry>
<entry>Type</entry>
<entry>Summary</entry></row>
</thead>
<tbody>
<!-- query level -->
<row>
<entry>max_lcs</entry>
<entry>query</entry>
<entry>int</entry>
<entry>maximum possible LCS value for the current query</entry>
</row>
<!-- document level -->
<row>
<entry>bm25</entry>
<entry>document</entry>
<entry>int</entry>
<entry>quick estimate of BM25(1.2, 0) without syntax support</entry>
</row>
<row>
<entry>bm25a(k1, b)</entry>
<entry>document</entry>
<entry>int</entry>
<entry>precise BM25() value with configurable K1, B constants and syntax support</entry>
</row>
<row>
<entry>bm25f(k1, b, {field=weight, ...})</entry>
<entry>document</entry>
<entry>int</entry>
<entry>precise BM25F() value with extra configurable field weights</entry>
</row>
<row>
<entry>field_mask</entry>
<entry>document</entry>
<entry>int</entry>
<entry>bit mask of matched fields</entry>
</row>
<row>
<entry>query_word_count</entry>
<entry>document</entry>
<entry>int</entry>
<entry>number of unique inclusive keywords in a query</entry>
</row>
<row>
<entry>doc_word_count</entry>
<entry>document</entry>
<entry>int</entry>
<entry>number of unique keywords matched in the document</entry>
</row>
<!-- field level -->
<row>
<entry>lcs</entry>
<entry>field</entry>
<entry>int</entry>
<entry>Longest Common Subsequence between query and document, in words</entry>
</row>
<row>
<entry>user_weight</entry>
<entry>field</entry>
<entry>int</entry>
<entry>user field weight</entry>
</row>
<row>
<entry>hit_count</entry>
<entry>field</entry>
<entry>int</entry>
<entry>total number of keyword occurrences</entry>
</row>
<row>
<entry>word_count</entry>
<entry>field</entry>
<entry>int</entry>
<entry>number of unique matched keywords</entry>
</row>
<row>
<entry>tf_idf</entry>
<entry>field</entry>
<entry>float</entry>
<entry>sum(tf*idf) over matched keywords == sum(idf) over occurrences</entry>
</row>
<row>
<entry>min_hit_pos</entry>
<entry>field</entry>
<entry>int</entry>
<entry>first matched occurrence position, in words, 1-based</entry>
</row>
<row>
<entry>min_best_span_pos</entry>
<entry>field</entry>
<entry>int</entry>
<entry>first maximum LCS span position, in words, 1-based</entry>
</row>
<row>
<entry>exact_hit</entry>
<entry>field</entry>
<entry>bool</entry>
<entry>whether query == field</entry>
</row>
<row>
<entry>min_idf</entry>
<entry>field</entry>
<entry>float</entry>
<entry>min(idf) over matched keywords</entry>
</row>
<row>
<entry>max_idf</entry>
<entry>field</entry>
<entry>float</entry>
<entry>max(idf) over matched keywords</entry>
</row>
<row>
<entry>sum_idf</entry>
<entry>field</entry>
<entry>float</entry>
<entry>sum(idf) over matched keywords</entry>
</row>
<row>
<entry>exact_order</entry>
<entry>field</entry>
<entry>bool</entry>
<entry>whether all query keywords were a) matched and b) in query order</entry>
</row>
<row>
<entry>min_gaps</entry>
<entry>field</entry>
<entry>int</entry>
<entry>minimum number of gaps between the matched keywords over the matching spans</entry>
</row>
<row>
<entry>lccs</entry>
<entry>field</entry>
<entry>int</entry>
<entry>Longest Common Contiguous Subsequence between query and document, in words</entry>
</row>
<row>
<entry>wlccs</entry>
<entry>field</entry>
<entry>float</entry>
<entry>Weighted Longest Common Contiguous Subsequence, sum(idf) over contiguous keyword spans</entry>
</row>
<row>
<entry>atc</entry>
<entry>field</entry>
<entry>float</entry>
<entry>Aggregate Term Closeness, log(1+sum(idf1*idf2*pow(distance, -1.75))) over the best pairs of keywords</entry>
</row>
</tbody>
</tgroup>
</table>
</para>
</sect2>
<sect2 id="document-factors"><title>Document-level ranking factors</title>
<para>
A <b>document-level factor</b> is a numeric value computed by the ranking
engine for every matched document with regards to the current query.
(So it differs from a plain document attribute in that attributes
do not depend on the full text query, while factors might.) Those
factors can be used anywhere in the ranking expression.
Currently implemented document-level factors are:
<itemizedlist>
<listitem><para>
<code>bm25</code> (integer), a document-level BM25 estimate (computed without
keyword occurrence filtering).
</para></listitem>
<listitem><para>
<code>max_lcs</code> (integer), a query-level maximum possible value that
the sum(lcs*user_weight) expression can ever take. This can be
useful for weight boost scaling. For instance, MATCHANY ranker
formula uses this to guarantee that a full phrase match in any
field ranks higher than any combination of partial matches
in all fields.
</para></listitem>
<listitem><para>
<code>field_mask</code> (integer), a document-level 32-bit mask of matched
fields.
</para></listitem>
<listitem><para>
<code>query_word_count</code> (integer), the number of unique keywords
in a query, adjusted for a number of excluded keywords. For instance,
both <code>(one one one one)</code> and <code>(one !two)</code> queries
should assign a value of 1 to this factor, because there is just one unique
non-excluded keyword.
</para></listitem>
<listitem><para>
<code>doc_word_count</code> (integer), the number of unique keywords
matched in the entire document.
</para></listitem>
</itemizedlist>
</para>
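<para>
For example, here is a sketch of a formula that uses the document-level factors to
boost documents matching all the query keywords (the boost constant of 500 is arbitrary):
<programlisting>
SELECT *, WEIGHT() FROM myindex WHERE MATCH('one two three')
OPTION ranker=expr('sum(lcs*user_weight)*1000 + bm25
    + 500*(doc_word_count=query_word_count)')
</programlisting>
Here the equality comparison returns 1.0 when every unique non-excluded query keyword
was matched somewhere in the document, and 0.0 otherwise.
</para>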
</sect2>
<sect2 id="field-factors"><title>Field-level ranking factors</title>
<para>
A <b>field-level factor</b> is a numeric value computed by the ranking
engine for every matched in-document text field with regards to the
current query. As more than one field can be matched by a query,
but the final weight needs to be a single integer value, these
values need to be folded into a single one. To achieve that,
field-level factors can only be used within a field aggregation
function; they can <b>not</b> be used anywhere else in the expression.
For example, you can not use <code>(lcs+bm25)</code> as your
ranking expression, as <code>lcs</code> takes multiple values (one
in every matched field). You should use <code>(sum(lcs)+bm25)</code>
instead, that expression sums <code>lcs</code> over all matching fields,
and then adds <code>bm25</code> to that per-field sum.
Currently implemented field-level factors are:
<itemizedlist>
<listitem><para>
<code>lcs</code> (integer), the length of a maximum verbatim match between
the document and the query, counted in words. LCS stands for Longest Common
Subsequence (or Subset). Takes a minimum value of 1 when only stray keywords
were matched in a field, and a maximum value of query keywords count
when the entire query was matched in a field verbatim (in the exact
query keywords order). For example, if the query is 'hello world'
and the field contains these two words quoted from the query (that is,
adjacent to each other, and exactly in the query order), <code>lcs</code>
will be 2. For example, if the query is 'hello world program' and
the field contains 'hello world', <code>lcs</code> will be 2.
Note that any subset of the query keyword works, not just a subset
of adjacent keywords. For example, if the query is 'hello world program'
and the field contains 'hello (test program)', <code>lcs</code> will be 2
just as well, because both 'hello' and 'program' matched in the same
respective positions as they were in the query. Finally, if the query
is 'hello world program' and the field contains 'hello world program',
<code>lcs</code> will be 3. (Hopefully that is unsurprising at this point.)
</para></listitem>
<listitem><para>
<code>user_weight</code> (integer), the user specified per-field weight
(refer to <link linkend="api-func-setfieldweights">SetFieldWeights()</link>
in SphinxAPI and <link linkend="sphinxql-select">OPTION field_weights</link>
in SphinxQL respectively). The weights default to 1 if not specified
explicitly.
</para></listitem>
<listitem><para>
<code>hit_count</code> (integer), the number of keyword occurrences
that matched in the field. Note that a single keyword may occur multiple
times. For example, if 'hello' occurs 3 times in a field and 'world'
occurs 5 times, <code>hit_count</code> will be 8.
</para></listitem>
<listitem><para>
<code>word_count</code> (integer), the number of unique keywords matched
in the field. For example, if 'hello' and 'world' occur anywhere in a field,
<code>word_count</code> will be 2, regardless of how many times the two
keywords occur.
</para></listitem>
<listitem><para>
<code>tf_idf</code> (float), the sum of TF*IDF over all the keywords matched in the
field. IDF is the Inverse Document Frequency, a floating point value
between 0 and 1 that describes how frequent the keyword is (basically,
0 for a keyword that occurs in every document indexed, and 1 for a unique
keyword that occurs in just a single document). TF is the Term Frequency,
the number of matched keyword occurrences in the field. As a side note,
<code>tf_idf</code> is actually computed by summing IDF over all matched
occurrences. That's by construction equivalent to summing TF*IDF over
all matched keywords.
</para></listitem>
<listitem><para>
<code>min_hit_pos</code> (integer), the position of the first matched keyword occurrence,
counted in words. Indexing begins from position 1.
</para></listitem>
<listitem><para>
<code>min_best_span_pos</code> (integer), the position of the first maximum LCS
occurrences span. For example, assume that our query was 'hello world
program' and 'hello world' subphrase was matched twice in the field,
in positions 13 and 21. Assume that 'hello' and 'world' additionally
occurred elsewhere in the field, but never next to each other and thus
never as a subphrase match. In that case, <code>min_best_span_pos</code>
will be 13. Note how for the single keyword queries
<code>min_best_span_pos</code> will always equal <code>min_hit_pos</code>.
</para></listitem>
<listitem><para>
<code>exact_hit</code> (boolean), whether a query was an exact match
of the entire current field. Used in the SPH04 ranker.
</para></listitem>
<listitem><para>
<code>min_idf</code>, <code>max_idf</code>, and <code>sum_idf</code> (float),
added in version 2.1.1-beta. These factors respectively represent the min(idf),
max(idf) and sum(idf) over all keywords that were matched in the field.
</para></listitem>
<listitem><para>
<code>exact_order</code> (boolean), added in version 2.2.1-beta. Whether all of the
query keywords were matched in the field in the exact query order. For example,
<code>(microsoft office)</code> query would yield exact_order=1 in a field with the
following contents: <code>(We use Microsoft software in our office.)</code>.
However, the very same query in a <code>(Our office is Microsoft free.)</code>
field would yield exact_order=0.
</para></listitem>
<listitem><para>
<code>min_gaps</code> (integer), added in version 2.2.1-beta, the minimum number
of positional gaps between (just) the keywords matched in the field. Always 0 when less
than 2 keywords match; always greater than or equal to 0 otherwise.
</para><para>
For example, with a <code>[big wolf]</code> query, a <code>[big bad wolf]</code> field
would yield min_gaps=1; a <code>[big bad hairy wolf]</code> field would yield min_gaps=2;
a <code>[the wolf was scary and big]</code> field would yield min_gaps=3; etc.
However, a field like <code>[i heard a wolf howl]</code> would yield min_gaps=0,
because only one keyword would be matching in that field, and, naturally, there
would be no gaps between the <emphasis>matched</emphasis> keywords.
</para><para>
Therefore, this is a rather low-level, "raw" factor that you would most likely
want to <emphasis>adjust</emphasis> before actually using for ranking. Specific
adjustments depend heavily on your data and the resulting formula, but here are
a few ideas you can start with: (a) any min_gaps based boosts could be simply ignored
when word_count<2; (b) non-trivial min_gaps values (i.e. when word_count>=2)
could be clamped with a certain "worst case" constant, while trivial values
(i.e. when min_gaps=0 and word_count<2) could be replaced by that constant;
(c) a transfer function like 1/(1+min_gaps) could be applied (so that better,
smaller min_gaps values would maximize it, and worse, bigger min_gaps values
would fall off slowly); and so on. A formula sketch along these lines is given
after this list.
</para></listitem>
<listitem><para>
<code>lccs</code> (integer), added in version 2.2.1-beta. Longest Common Contiguous
Subsequence. A length of the longest subphrase that is common between the query and
the document, computed in keywords.
</para><para>
LCCS factor is rather similar to LCS but more restrictive, in a sense. While LCS could
be greater than 1 though no two query words are matched next to each other, LCCS
would only get greater than 1 if there are <emphasis>exact</emphasis>, contiguous
query subphrases in the document. For example, (one two three four five) query
vs (one hundred three hundred five hundred) document would yield lcs=3, but lccs=1,
because even though mutual dispositions of 3 keywords (one, three, five) match between
the query and the document, no 2 matching positions are actually next to each other.
</para><para>
Note that LCCS still does not differentiate between the frequent and rare keywords;
for that, see WLCCS.
</para></listitem>
<listitem><para>
<code>wlccs</code> (float), added in version 2.2.1-beta. Weighted Longest Common Contiguous
Subsequence. A sum of IDFs of the keywords of the longest subphrase that is common
between the query and the document.
</para><para>
WLCCS is computed very similarly to LCCS, but every "suitable" keyword occurrence
increases it by the keyword IDF rather than just by 1 (which is the case with LCS and
LCCS). That lets us rank sequences of more rare and important keywords higher than
sequences of frequent keywords, even if the latter are longer. For example, a query
<code>(Zanzibar bed and breakfast)</code> would yield lccs=1 for a
<code>(hotels of Zanzibar)</code> document, but lccs=3 against
<code>(London bed and breakfast)</code>, even though "Zanzibar" is actually
somewhat more rare than the entire "bed and breakfast" phrase. WLCCS factor alleviates
that problem by using the keyword frequencies.
</para></listitem>
<listitem><para>
<code>atc</code> (float), added in version 2.2.1-beta. Aggregate Term Closeness.
A proximity based measure that grows higher when the document contains more groups
of more closely located and more important (rare) query keywords. <b>WARNING:</b>
you should use ATC with OPTION idf='plain,tfidf_unnormalized'; otherwise you would
get unexpected results.
</para><para>
ATC basically works as follows. For every keyword <emphasis>occurrence</emphasis>
in the document, we compute the so called <emphasis>term closeness</emphasis>. For that,
we examine all the other closest occurrences of all the query keywords (keyword itself
included too) to the left and to the right of the subject occurrence, compute a distance
dampening coefficient as k = pow(distance, -1.75) for those occurrences, and sum the
dampened IDFs. Thus for every occurrence of every keyword, we get a "closeness" value
that describes the "neighbors" of that occurrence. We then multiply those per-occurrence
closenesses by their respective subject keyword IDF, sum them all, and finally,
compute a logarithm of that sum.
</para><para>
Or in other words, we process the best (closest) matched keyword pairs in the document,
and compute pairwise "closenesses" as the product of their IDFs scaled by the distance
coefficient:
<programlisting>
pair_tc = idf(pair_word1) * idf(pair_word2) * pow(pair_distance, -1.75)
</programlisting>
We then sum such closenesses, and compute the final, log-dampened ATC value:
<programlisting>
atc = log(1+sum(pair_tc))
</programlisting>
Note that this final dampening logarithm is exactly the reason you should use
OPTION idf=plain, because without it, the expression inside the log() could be negative.
</para><para>
Having closer keyword occurrences actually contributes <emphasis>much</emphasis> more
to ATC than having more frequent keywords. Indeed, when the keywords are right next to
each other, distance=1 and k=1; when there is just one word in between them, distance=2 and
k=0.297; with two words between, distance=3 and k=0.146; and so on. At the same time
IDF attenuates somewhat slower. For example, in a 1 million document collection, the IDF
values for keywords that match in 10, 100, and 1000 documents would be respectively
0.833, 0.667, and 0.500. So a keyword pair with two rather rare keywords that occur
in just 10 documents each but with 2 other words in between would yield pair_tc = 0.101
and thus just barely outweigh a pair with a 100-doc and a 1000-doc keyword with 1 other
word between them and pair_tc = 0.099. Moreover, a pair of two <emphasis>unique</emphasis>,
1-doc keywords with 3 words between them would get a pair_tc = 0.088 and lose to a pair of
two 1000-doc keywords located right next to each other and yielding a pair_tc = 0.25.
So, basically, while ATC does combine both keyword frequency and proximity, it is still
somewhat favoring the proximity.
</para></listitem>
</itemizedlist>
</para>
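<para>
To illustrate the min_gaps adjustment ideas mentioned above, here is a sketch of a
formula that applies the 1/(1+min_gaps) transfer function inside the field aggregation
(the constants are, of course, arbitrary starting points):
<programlisting>
SELECT *, WEIGHT() FROM myindex WHERE MATCH('big wolf')
OPTION ranker=expr('sum(lcs*user_weight*if(word_count>1, 1/(1+min_gaps), 1))*1000 + bm25')
</programlisting>
</para>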
</sect2>
<sect2 id="factor-aggr-functions"><title>Ranking factor aggregation functions</title>
<para>
A <b>field aggregation function</b> is a single argument function
that takes an expression with field-level factors, iterates it over
all the matched fields, and computes the final results.
Currently implemented field aggregation functions are:
<itemizedlist>
<listitem><para>
<code>sum</code>, sums the argument expression over all matched
fields. For instance, <code>sum(1)</code> should return a number
of matched fields.
</para></listitem>
<listitem><para>
<code>top</code>, returns the greatest value of the argument over all
matched fields.
</para></listitem>
</itemizedlist>
</para>
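<para>
For instance, here is a sketch combining both aggregation functions (the weights
being arbitrary):
<programlisting>
SELECT *, WEIGHT() FROM myindex WHERE MATCH('hello world')
OPTION ranker=expr('top(lcs*user_weight)*100 + sum(word_count)*10 + bm25')
</programlisting>
</para>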
</sect2>
<sect2 id="formulas-for-builtin-rankers"><title>Formula expressions for all the built-in rankers</title>
<para>
Most of the other rankers can actually be emulated with the expression
based ranker. You just need to pass a proper expression. Such emulation is,
of course, going to be slower than using the built-in, compiled ranker but
still might be of interest if you want to fine-tune your ranking formula
starting with one of the existing ones. Also, the formulas define the
nitty-gritty ranker details in a nicely readable fashion.
</para>
<itemizedlist>
<listitem><para>
SPH_RANK_PROXIMITY_BM25 = sum(lcs*user_weight)*1000+bm25
</para></listitem>
<listitem><para>
SPH_RANK_BM25 = bm25
</para></listitem>
<listitem><para>
SPH_RANK_NONE = 1
</para></listitem>
<listitem><para>
SPH_RANK_WORDCOUNT = sum(hit_count*user_weight)
</para></listitem>
<listitem><para>
SPH_RANK_PROXIMITY = sum(lcs*user_weight)
</para></listitem>
<listitem><para>
SPH_RANK_MATCHANY = sum((word_count+(lcs-1)*max_lcs)*user_weight)
</para></listitem>
<listitem><para>
SPH_RANK_FIELDMASK = field_mask
</para></listitem>
<listitem><para>
SPH_RANK_SPH04 = sum((4*lcs+2*(min_hit_pos==1)+exact_hit)*user_weight)*1000+bm25
</para></listitem>
</itemizedlist>
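<para>
For instance, to start from the SPH04 formula and simply strengthen its field-start
boost, you could run something like the following (the tweaked constant is just
an example):
<programlisting>
SELECT *, WEIGHT() FROM myindex WHERE MATCH('hyde park')
OPTION ranker=expr('sum((4*lcs+4*(min_hit_pos==1)+exact_hit)*user_weight)*1000+bm25')
</programlisting>
</para>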
</sect2>
</sect1>
<sect1 id="expressions">
<title>Expressions, functions, and operators</title>
<para>
Sphinx lets you use arbitrary arithmetic expressions both via SphinxQL
and SphinxAPI, involving attribute values, internal attributes (document ID
and relevance weight), arithmetic operations, a number of built-in functions,
and user-defined functions.
This section documents the supported operators and functions.
Here's the complete reference list for quick access.
<itemizedlist>
<listitem><para><link linkend="expr-ari-ops">Arithmetic operators: +, -, *, /, %, DIV, MOD</link></para></listitem>
<listitem><para><link linkend="expr-comp-ops">Comparison operators: <, > <=, >=, =, <></link></para></listitem>
<listitem><para><link linkend="expr-bool-ops">Boolean operators: AND, OR, NOT</link></para></listitem>
<listitem><para><link linkend="expr-bitwise-ops">Bitwise operators: &, |</link></para></listitem>
<listitem><para><link linkend="expr-func-abs">ABS()</link></para></listitem>
<listitem><para><link linkend="expr-func-all">ALL()</link></para></listitem>
<listitem><para><link linkend="expr-func-any">ANY()</link></para></listitem>
<listitem><para><link linkend="expr-func-atan2">ATAN2()</link></para></listitem>
<listitem><para><link linkend="expr-func-bigint">BIGINT()</link></para></listitem>
<listitem><para><link linkend="expr-func-bitdot">BITDOT()</link></para></listitem>
<listitem><para><link linkend="expr-func-ceil">CEIL()</link></para></listitem>
<listitem><para><link linkend="expr-func-contains">CONTAINS()</link></para></listitem>
<listitem><para><link linkend="expr-func-cos">COS()</link></para></listitem>
<listitem><para><link linkend="expr-func-crc32">CRC32()</link></para></listitem>
<listitem><para><link linkend="expr-func-day">DAY()</link></para></listitem>
<listitem><para><link linkend="expr-func-double">DOUBLE()</link></para></listitem>
<listitem><para><link linkend="expr-func-exp">EXP()</link></para></listitem>
<listitem><para><link linkend="expr-func-fibonacci">FIBONACCI()</link></para></listitem>
<listitem><para><link linkend="expr-func-floor">FLOOR()</link></para></listitem>
<listitem><para><link linkend="expr-func-geodist">GEODIST()</link></para></listitem>
<listitem><para><link linkend="expr-func-geopoly2d">GEOPOLY2D()</link></para></listitem>
<listitem><para><link linkend="expr-func-greatest">GREATEST()</link></para></listitem>
<listitem><para><link linkend="expr-func-idiv">IDIV()</link></para></listitem>
<listitem><para><link linkend="expr-func-if">IF()</link></para></listitem>
<listitem><para><link linkend="expr-func-in">IN()</link></para></listitem>
<listitem><para><link linkend="expr-func-indexof">INDEXOF()</link></para></listitem>
<listitem><para><link linkend="expr-func-integer">INTEGER()</link></para></listitem>
<listitem><para><link linkend="expr-func-interval">INTERVAL()</link></para></listitem>
<listitem><para><link linkend="expr-func-least">LEAST()</link></para></listitem>
<listitem><para><link linkend="expr-func-length">LENGTH()</link></para></listitem>
<listitem><para><link linkend="expr-func-ln">LN()</link></para></listitem>
<listitem><para><link linkend="expr-func-log10">LOG10()</link></para></listitem>
<listitem><para><link linkend="expr-func-log2">LOG2()</link></para></listitem>
<listitem><para><link linkend="expr-func-max">MAX()</link></para></listitem>
<listitem><para><link linkend="expr-func-min">MIN()</link></para></listitem>
<listitem><para><link linkend="expr-func-min-top-sortval">MIN_TOP_SORTVAL()</link></para></listitem>
<listitem><para><link linkend="expr-func-min-top-weight">MIN_TOP_WEIGHT()</link></para></listitem>
<listitem><para><link linkend="expr-func-month">MONTH()</link></para></listitem>
<listitem><para><link linkend="expr-func-now">NOW()</link></para></listitem>
<listitem><para><link linkend="expr-func-poly2d">POLY2D()</link></para></listitem>
<listitem><para><link linkend="expr-func-pow">POW()</link></para></listitem>
<listitem><para><link linkend="expr-func-remap">REMAP()</link></para></listitem>
<listitem><para><link linkend="expr-func-sin">SIN()</link></para></listitem>
<listitem><para><link linkend="expr-func-sint">SINT()</link></para></listitem>
<listitem><para><link linkend="expr-func-sqrt">SQRT()</link></para></listitem>
<listitem><para><link linkend="expr-func-uint">UINT()</link></para></listitem>
<listitem><para><link linkend="expr-func-year">YEAR()</link></para></listitem>
<listitem><para><link linkend="expr-func-yearmonth">YEARMONTH()</link></para></listitem>
<listitem><para><link linkend="expr-func-yearmonthday">YEARMONTHDAY()</link></para></listitem>
</itemizedlist>
</para>
<sect2 id="operators">
<title>Operators</title>
<variablelist>
<varlistentry>
<term id="expr-ari-ops">Arithmetic operators: +, -, *, /, %, DIV, MOD</term>
<listitem><para>
The standard arithmetic operators. Arithmetic calculations involving those
can be performed in three different modes: (a) using single-precision,
32-bit IEEE 754 floating point values (the default), (b) using signed 32-bit integers,
(c) using 64-bit signed integers. The expression parser will automatically switch
to integer mode if there are no operations the result in a floating point value.
Otherwise, it will use the default floating point mode. For instance, <code>a+b</code>
will be computed using 32-bit integers if both arguments are 32-bit integers;
or using 64-bit integers if both arguments are integers but one of them is
64-bit; or in floats otherwise. However, <code>a/b</code> or <code>sqrt(a)</code>
will always be computed in floats, because these operations return a result
of non-integer type. To avoid the first, you can either use <code>IDIV(a,b)</code>
or <code>a DIV b</code> form. Also, <code>a*b</code>
will not be automatically promoted to 64-bit when the arguments are 32-bit.
To enforce 64-bit results, you can use BIGINT(). (But note that if there are
non-integer operations, BIGINT() will simply be ignored.)
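For example (assuming 'a' and 'b' are 32-bit integer attributes; this is just an
illustrative sketch):
<programlisting>
a+b          // computed in 32-bit integers
BIGINT(a)*b  // forced to 64-bit integers
a/b          // always computed in floats
IDIV(a,b)    // integer division
a DIV b      // same as IDIV(a,b)
</programlisting>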
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-comp-ops">Comparison operators: <, > <=, >=, =, <></term>
<listitem><para>
Comparison operators (eg. = or <=) return 1.0 when the condition is true and 0.0 otherwise.
For instance, <code>(a=b)+3</code> will evaluate to 4 when attribute 'a' is equal to attribute 'b', and to 3 when 'a' is not.
Unlike MySQL, the equality comparisons (ie. = and <> operators) introduce a small equality threshold (1e-6 by default).
If the difference between compared values is within the threshold, they will be considered equal.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-bool-ops">Boolean operators: AND, OR, NOT</term>
<listitem><para>
Boolean operators (AND, OR, NOT) were introduced in 0.9.9-rc2 and behave as usual.
They are left-associative and have the least priority compared to other operators.
NOT has higher priority than AND and OR, but still lower than any other operator.
AND and OR have the same priority, so the use of brackets is recommended to avoid
confusion in complex expressions.
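For example, with brackets making the intended grouping explicit ('a', 'b', and 'c'
being numeric attributes):
<programlisting>
(a=1 AND b=2) OR NOT c=3
</programlisting>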
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-bitwise-ops">Bitwise operators: &, |</term>
<listitem><para>
These operators perform bitwise AND and OR respectively. The operands
must be of integer types. Introduced in version 1.10-beta.
</para></listitem>
</varlistentry>
</variablelist>
</sect2>
<sect2 id="numeric-functions">
<title>Numeric functions</title>
<variablelist>
<varlistentry>
<term id="expr-func-abs">ABS()</term>
<listitem><para>Returns the absolute value of the argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-bitdot">BITDOT()</term>
<listitem><para>BITDOT(mask, w0, w1, ...) returns the sum of products of each bit of the mask and its corresponding weight:
<code>bit0*w0 + bit1*w1 + ...</code> For instance, BITDOT(5, 10, 20, 30) returns 40, because 5 is binary 101, so bits 0 and 2 are set, and the result is 10+30.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-ceil">CEIL()</term>
<listitem><para>Returns the smallest integer value greater than or equal to the argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-contains">CONTAINS()</term>
<listitem><para>CONTAINS(polygon, x, y) checks whether the (x,y) point is within the given polygon,
and returns 1 if true, or 0 if false. The polygon has to be specified using either the <link linkend="expr-func-poly2d">POLY2D()</link> function
or the <link linkend="expr-func-geopoly2d">GEOPOLY2D()</link> function. The former function is intended for "small" polygons, meaning less than
500 km (300 miles) a side, and it doesn't take the Earth's curvature into account, for speed. For larger
distances, you should use GEOPOLY2D, which tessellates the given polygon into smaller parts, accounting
for the Earth's curvature.
These functions were added in version 2.1.1-beta.
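For example, here is a sketch of a point-in-polygon check against a small, flat
polygon ('lat' and 'lon' being hypothetical attribute names):
<programlisting>
SELECT id, CONTAINS(POLY2D(0,0, 0,10, 10,10, 10,0), lat, lon) AS inside
FROM myindex;
</programlisting>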
</para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-cos">COS()</term>
<listitem><para>Returns the cosine of the argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-double">DOUBLE()</term>
<listitem><para>Forcibly promotes given argument to floating point type. Intended to help enforce evaluation of numeric JSON fields. Introduced in version 2.2.1-beta.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-exp">EXP()</term>
<listitem><para>Returns the exponent of the argument (e=2.718... to the power of the argument).</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-fibonacci">FIBONACCI()</term>
<listitem><para>Returns the N-th Fibonacci number, where N is the integer
argument. That is, arguments of 0 and up will generate the values 0, 1, 1,
2, 3, 5, 8, 13 and so on. Note that the computations are done using 32-bit
integer math, and thus the 48th and subsequent numbers will be returned modulo 2^32.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-floor">FLOOR()</term>
<listitem><para>Returns the largest integer value less than or equal to the argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-geopoly2d">GEOPOLY2D()</term>
<listitem><para>GEOPOLY2D(x1,y1,x2,y2,x3,y3...) produces a polygon to be used with the <link linkend="expr-func-contains">CONTAINS()</link> function.
This function takes into account the Earth's curvature by tessellating the polygon into smaller ones,
and should be used for larger areas; see also the <link linkend="expr-func-poly2d">POLY2D()</link> function. The function expects coordinates in degrees; if radians are used, it will give the same result as POLY2D().
</para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-idiv">IDIV()</term>
<listitem><para>
Returns the result of an integer division of the first
argument by the second argument. Both arguments must be
of an integer type.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-ln">LN()</term>
<listitem><para>Returns the natural logarithm of the argument (with the base of e=2.718...).</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-log10">LOG10()</term>
<listitem><para>Returns the common logarithm of the argument (with the base of 10).</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-log2">LOG2()</term>
<listitem><para>Returns the binary logarithm of the argument (with the base of 2).</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-max">MAX()</term>
<listitem><para>Returns the bigger of two arguments.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-min">MIN()</term>
<listitem><para>Returns the smaller of two arguments.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-poly2d">POLY2D()</term>
<listitem><para>POLY2D(x1,y1,x2,y2,x3,y3...) produces a polygon to be used with the <link linkend="expr-func-contains">CONTAINS()</link> function.
This polygon assumes a flat Earth, so it should not be too large; for larger areas, see the <link linkend="expr-func-geopoly2d">GEOPOLY2D()</link> function.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-pow">POW()</term>
<listitem><para>Returns the first argument raised to the power of the second argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-sin">SIN()</term>
<listitem><para>Returns the sine of the argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-sqrt">SQRT()</term>
<listitem><para>Returns the square root of the argument.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-uint">UINT()</term>
<listitem><para>Forcibly reinterprets given argument to 64-bit unsigned type. Introduced in version 2.2.1-beta.</para></listitem>
</varlistentry>
</variablelist>
</sect2>
<sect2 id="date-time-functions">
<title>Date and time functions</title>
<variablelist>
<varlistentry>
<term id="expr-func-day">DAY()</term>
<listitem><para>Returns the integer day of month (in 1..31 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-month">MONTH()</term>
<listitem><para>Returns the integer month (in 1..12 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-now">NOW()</term>
<listitem><para>Returns the current timestamp as an INTEGER. Introduced in version 0.9.9-rc1.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-year">YEAR()</term>
<listitem><para>Returns the integer year (in 1969..2038 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-yearmonth">YEARMONTH()</term>
<listitem><para>Returns the integer year and month code (in 196912..203801 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-yearmonthday">YEARMONTHDAY()</term>
<listitem><para>Returns the integer year, month, and date code (in 19691231..20380119 range) from a timestamp argument, according to the current timezone. Introduced in version 2.0.1-beta.</para></listitem>
</varlistentry>
</variablelist>
</sect2>
<sect2 id="type-conversion-functions">
<title>Type conversion functions</title>
<variablelist>
<varlistentry>
<term id="expr-func-bigint">BIGINT()</term>
<listitem><para>
Forcibly promotes the integer argument to 64-bit type,
and does nothing on a floating point argument. It's intended to help enforce evaluation
of certain expressions (such as <code>a*b</code>) in 64-bit mode even though all the arguments
are 32-bit.
Introduced in version 0.9.9-rc1.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-integer">INTEGER()</term>
<listitem>
<para>Forcibly promotes given argument to 64-bit signed type. Intended to help enforce evaluation of numeric JSON fields. Introduced in version 2.2.1-beta.</para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-sint">SINT()</term>
<listitem><para>
Forcibly reinterprets its
32-bit unsigned integer argument as signed, and also expands it to 64-bit type
(because 32-bit type is unsigned). It's easily illustrated by the following
example: 1-2 normally evaluates to 4294967295, but SINT(1-2) evaluates to -1.
Introduced in version 1.10-beta.
</para></listitem>
</varlistentry>
</variablelist>
</sect2>
<sect2 id="comparison-functions">
<title>Comparison functions</title>
<variablelist>
<varlistentry>
<term id="expr-func-if">IF()</term>
<listitem><para>
<code>IF()</code> behavior is slightly different from that of its MySQL counterpart.
It takes 3 arguments, checks whether the 1st argument is equal to 0.0, and returns the 2nd argument if it is not zero, or the 3rd one when it is.
Note that unlike comparison operators, <code>IF()</code> does <b>not</b> use a threshold!
Therefore, it's safe to use comparison results as its 1st argument, but arithmetic operators might produce unexpected results.
For instance, the following two calls will produce <emphasis>different</emphasis> results even though they are logically equivalent:
<programlisting>
IF ( sqrt(3)*sqrt(3)-3<>0, a, b )
IF ( sqrt(3)*sqrt(3)-3, a, b )
</programlisting>
In the first case, the comparison operator <> will return 0.0 (false)
because of a threshold, and <code>IF()</code> will always return 'b' as a result.
In the second one, the same <code>sqrt(3)*sqrt(3)-3</code> expression will be compared
with zero <emphasis>without</emphasis> threshold by the <code>IF()</code> function itself.
But its value will be slightly different from zero because of limited floating point
calculations precision. Because of that, the comparison with 0.0 done by <code>IF()</code>
will not pass, and the second variant will return 'a' as a result.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-in">IN()</term>
<listitem><para>
IN(expr,val1,val2,...), introduced in version 0.9.9-rc1, takes 2 or more arguments, and returns 1 if 1st argument
(expr) is equal to any of the other arguments (val1..valN), or 0 otherwise.
Currently, all the checked values (but not the expression itself!) are required
to be constant. (Its technically possible to implement arbitrary expressions too,
and that might be implemented in the future.) Constants are pre-sorted and then
binary search is used, so IN() even against a big arbitrary list of constants
will be very quick. Starting with 0.9.9-rc2, first argument can also be
a MVA attribute. In that case, IN() will return 1 if any of the MVA values
is equal to any of the other arguments. Starting with 2.0.1-beta, IN() also
supports <code>IN(expr,@uservar)</code> syntax to check whether the value
belongs to the list in the given global user variable. First argument can be
JSON attribute since 2.2.1-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-interval">INTERVAL()</term>
<listitem><para>
INTERVAL(expr,point1,point2,point3,...), introduced in version 0.9.9-rc1, takes 2 or more arguments, and returns
the index of the segment that the first argument falls into: it returns
0 if expr<point1, 1 if point1<=expr<point2, and so on.
It is required that point1<point2<...<pointN for this function
to work correctly.
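For example ('x' being a hypothetical numeric attribute):
<programlisting>
SELECT id, INTERVAL(x, 10, 20, 30) AS seg FROM myindex;
// seg=0 when x<10, seg=1 when 10<=x<20, seg=2 when 20<=x<30, seg=3 otherwise
</programlisting>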
</para></listitem>
</varlistentry>
</variablelist>
</sect2>
<sect2 id="misc-functions">
<title>Miscellaneous functions</title>
<variablelist>
<varlistentry>
<term id="expr-func-all">ALL()</term>
<listitem>
<para>The ALL(cond FOR var IN json.array) function was introduced in 2.2.1-beta. It
applies to JSON arrays and returns 1 if the condition is true for all elements in
the array, and 0 otherwise. 'cond' is a general expression which can additionally
use 'var' as the current value of an array element.</para>
<para><programlisting>
SELECT ALL(x>3 AND x<7 FOR x IN j.intarray) FROM test;
</programlisting></para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-any">ANY()</term>
<listitem>
<para>The ANY(cond FOR var IN json.array) function was introduced in 2.2.1-beta.
It works similarly to <link linkend="expr-func-all">ALL()</link>, except that it
returns 1 if the condition is true for any element in the array.</para>
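<para>By analogy with the <code>ALL()</code> example above:
<programlisting>
SELECT ANY(x=5 FOR x IN j.intarray) FROM test;
</programlisting></para>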
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-atan2">ATAN2()</term>
<listitem><para>
Returns the arctangent function of two arguments, expressed in <b>radians</b>.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-crc32">CRC32()</term>
<listitem><para>
Returns the CRC32 value of a string argument. Introduced in version 2.0.1-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-geodist">GEODIST()</term>
<listitem><para>
GEODIST(lat1, lon1, lat2, lon2, [...]) function, introduced in version 0.9.9-rc2,
computes the geosphere distance between two given points specified by their
coordinates. Note that by default both latitudes and longitudes must be in <b>radians</b>
and the result will be in <b>meters</b>. You can use an arbitrary expression as any
of the four coordinates. An optimized path will be selected when one pair
of the arguments refers directly to a pair of attributes and the other one
is constant.
</para><para>
Starting with version 2.2.1-beta, GEODIST() also takes an optional 5th argument
that lets you easily convert between input and output units, and pick the specific
geodistance formula to use. The complete syntax and a few examples are as follows:
<programlisting>
GEODIST(lat1, lon1, lat2, lon2, { option=value, ... })
GEODIST(40.7643929, -73.9997683, 40.7642578, -73.9994565, {in=degrees, out=feet})
GEODIST(51.50, -0.12, 29.98, 31.13, {in=deg, out=mi})
</programlisting>
The known options and their values are:
<itemizedlist>
<listitem><code>in = {deg | degrees | rad | radians}</code>, specifies the input units;</listitem>
<listitem><code>out = {m | meters | km | kilometers | ft | feet | mi | miles}</code>, specifies the output units;</listitem>
<listitem><code>method = {haversine | adaptive}</code>, specifies the geodistance calculation method.</listitem>
</itemizedlist>
Up to version 2.1.x (inclusive), the "haversine" method was the default.
Starting with 2.2.1-beta, the default method changed to "adaptive",
a new, well optimized implementation that is both more precise
<emphasis>and</emphasis> much faster at all times.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-greatest">GREATEST()</term>
<listitem><para>
GREATEST(attr_json.some_array) was introduced in version 2.2.1-beta. Its first
argument is a JSON array, and the return value is the greatest value in that array.
It also works for MVA attributes.
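A hypothetical usage sketch (the index and attribute names here are made up):
<programlisting>
SELECT id, GREATEST(j.prices) AS max_price FROM test1;
</programlisting>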
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-indexof">INDEXOF()</term>
<listitem>
<para>INDEXOF(cond FOR var IN json.array) function was introduced in 2.2.1-beta. It
iterates through all elements in the array and returns the index of the first element
for which 'cond' is true, or -1 if 'cond' is false for every element.</para>
<para><programlisting>
SELECT INDEXOF(name='John' FOR name IN j.peoples) FROM test;
</programlisting></para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-least">LEAST()</term>
<listitem><para>
LEAST(attr_json.some_array) was introduced in version 2.2.1-beta. Its first
argument is a JSON array, and the return value is the least value in that array.
It also works for MVA attributes.
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-length">LENGTH()</term>
<listitem><para>
LENGTH(attr_mva) function, introduced in version 2.1.2-stable,
returns the number of elements in an MVA set. It works with both 32-bit and
64-bit MVA attributes.
LENGTH(attr_json) was introduced in version 2.2.1-beta. It returns the length of
a field in JSON. The return value depends on the type of the field.
For example, LENGTH(json_attr.some_int) always returns 1, and
LENGTH(json_attr.some_array) returns the number of elements in the array.
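For instance (the index and attribute names here are made up):
<programlisting>
SELECT id, LENGTH(tags_mva) AS ntags, LENGTH(j.some_array) AS nitems FROM test1;
</programlisting>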
</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-min-top-sortval">MIN_TOP_SORTVAL()</term>
<listitem><para>Returns the sort key value of the worst found element in the current top-N matches if the sort key is a float, and 0 otherwise.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-min-top-weight">MIN_TOP_WEIGHT()</term>
<listitem><para>Returns the weight of the worst found element in the current top-N matches.</para></listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-packedfactors">PACKEDFACTORS()</term>
<listitem>
<para>
PACKEDFACTORS(), introduced in version 2.1.1-beta, can be used in queries,
either to just see all the weighting factors calculated when doing the matching, or to
provide a binary attribute that can be used to write a custom ranking UDF.
This function works only if the expression ranker is specified and the query
is not a full scan; otherwise it will return an error. Starting with 2.2.2-beta,
PACKEDFACTORS() can take an optional argument that disables ATC ranking factor calculation:
<programlisting>
PACKEDFACTORS({no_atc=1})
</programlisting>
Calculating ATC slows down query processing considerably, so this option can be useful
if you need to see the ranking factors, but do not need ATC.
Starting with 2.2.3-beta PACKEDFACTORS() can also be told to format its output as JSON:
<programlisting>
PACKEDFACTORS({json=1})
</programlisting>
The respective outputs in either key-value pair or JSON format would look
as follows. (Note that the examples below are wrapped for readability;
actual returned values would be single-line.)
<programlisting>
mysql> SELECT id, PACKEDFACTORS() FROM test1
-> WHERE MATCH('test one') OPTION ranker=expr('1') \G
*************************** 1. row ***************************
id: 1
packedfactors(): bm25=569, bm25a=0.617197, field_mask=2, doc_word_count=2,
field1=(lcs=1, hit_count=2, word_count=2, tf_idf=0.152356,
min_idf=-0.062982, max_idf=0.215338, sum_idf=0.152356, min_hit_pos=4,
min_best_span_pos=4, exact_hit=0, max_window_hits=1, min_gaps=2,
exact_order=1, lccs=1, wlccs=0.215338, atc=-0.003974),
word0=(tf=1, idf=-0.062982),
word1=(tf=1, idf=0.215338)
1 row in set (0.00 sec)
mysql> SELECT id, PACKEDFACTORS({json=1}) FROM test1
-> WHERE MATCH('test one') OPTION ranker=expr('1') \G
*************************** 1. row ***************************
id: 1
packedfactors({json=1}):
{
"bm25": 569,
"bm25a": 0.617197,
"field_mask": 2,
"doc_word_count": 2,
"fields": [
{
"lcs": 1,
"hit_count": 2,
"word_count": 2,
"tf_idf": 0.152356,
"min_idf": -0.062982,
"max_idf": 0.215338,
"sum_idf": 0.152356,
"min_hit_pos": 4,
"min_best_span_pos": 4,
"exact_hit": 0,
"max_window_hits": 1,
"min_gaps": 2,
"exact_order": 1,
"lccs": 1,
"wlccs": 0.215338,
"atc": -0.003974
}
],
"words": [
{
"tf": 1,
"idf": -0.062982
},
{
"tf": 1,
"idf": 0.215338
}
]
}
1 row in set (0.01 sec)
</programlisting>
</para>
<para>
This function can be used to implement custom ranking functions in UDFs, as in
<programlisting>
SELECT *, CUSTOM_RANK(PACKEDFACTORS()) AS r
FROM my_index
WHERE match('hello')
ORDER BY r DESC
OPTION ranker=expr('1');
</programlisting>
Where CUSTOM_RANK() is a function implemented in an UDF. It should declare a
SPH_UDF_FACTORS structure (defined in <filename>sphinxudf.h</filename>), initialize this structure,
unpack the factors into it before usage, and deinitialize it afterwards, as follows:
<programlisting>
SPH_UDF_FACTORS factors;
sphinx_factors_init(&factors);
sphinx_factors_unpack((DWORD*)args->arg_values[0], &factors);
// ... can use the contents of factors variable here ...
sphinx_factors_deinit(&factors);
</programlisting>
</para>
<para>
PACKEDFACTORS() data is available at all query stages, not just
when doing the initial matching and ranking pass. That enables
another particularly interesting application of PACKEDFACTORS(),
namely <b>re-ranking</b>.
</para>
<para>
In the example just above, we used an expression-based ranker with
a dummy expression, and sorted the result set by the value computed
by our UDF. In other words, we used the UDF to <emphasis>rank</emphasis>
all our results. Assume now, for the sake of an example, that our UDF
is extremely expensive to compute and has a throughput of just
10,000 calls per second. Assume that our query matches 1,000,000 documents.
To maintain reasonable performance, we would then want to use a (much)
simpler expression to do most of our ranking, and then apply the
expensive UDF to only a few top results, say, top-100 results.
Or, in other words, build top-100 results using a simpler ranking
function and then <emphasis>re-rank</emphasis> those with a complex one.
We can do that just as well with subselects:
<programlisting>
SELECT * FROM (
SELECT *, CUSTOM_RANK(PACKEDFACTORS()) AS r
FROM my_index WHERE match('hello')
OPTION ranker=expr('sum(lcs)*1000+bm25')
ORDER BY WEIGHT() DESC
LIMIT 100
) ORDER BY r DESC LIMIT 10
</programlisting>
In this example, expression-based ranker will be called for every
matched document to compute WEIGHT(). So it will get called 1,000,000
times. But the UDF computation can be postponed until the outer sort.
And it also will be done for just the top-100 matches by WEIGHT(),
according to the inner limit. So the UDF will only get called 100 times.
And then the final top-10 matches by UDF value will be selected
and returned to the application.
</para>
<para>
For reference, in the distributed case PACKEDFACTORS() data gets
sent from the agents to the master in a binary format, too. This makes
it technically feasible to implement additional re-ranking pass
(or passes) on the master node, if needed.
</para>
<para>
If used with SphinxQL but not called from any UDFs, the result of PACKEDFACTORS()
is simply formatted as plain text, which can be used to manually assess the ranking
factors. Note that this feature is not currently supported by the Sphinx API.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term id="expr-func-remap">REMAP()</term>
<listitem>
<para>
REMAP(condition, expression, (cond1, cond2, ...), (expr1, expr2, ...)) function
was added in 2.2.2-beta. It allows you to make exceptions to the values of an
expression depending on condition values: when the condition value equals some
condN, the matching exprN is returned instead of the expression value; otherwise
the expression value is returned unchanged. The condition expression must always
evaluate to an integer; the expression itself may evaluate to an integer or a float.
<programlisting>
SELECT REMAP(userid, karmapoints, (1, 67), (999, 0)) FROM users;
SELECT REMAP(id%10, salary, (0), (0.0)) FROM employees;
</programlisting>
</para>
</listitem>
</varlistentry>
</variablelist>
</sect2>
</sect1>
<sect1 id="sorting-modes"><title>Sorting modes</title>
<para>
There are the following result sorting modes available:
<itemizedlist>
<listitem><para>SPH_SORT_RELEVANCE mode, that sorts by relevance in descending order (best matches first);</para></listitem>
<listitem><para>SPH_SORT_ATTR_DESC mode, that sorts by an attribute in descending order (bigger attribute values first);</para></listitem>
<listitem><para>SPH_SORT_ATTR_ASC mode, that sorts by an attribute in ascending order (smaller attribute values first);</para></listitem>
<listitem><para>SPH_SORT_TIME_SEGMENTS mode, that sorts by time segments (last hour/day/week/month) in descending order, and then by relevance in descending order;</para></listitem>
<listitem><para>SPH_SORT_EXTENDED mode, that sorts by SQL-like combination of columns in ASC/DESC order;</para></listitem>
<listitem><para>SPH_SORT_EXPR mode, that sorts by an arithmetic expression.</para></listitem>
</itemizedlist>
</para>
<para>
SPH_SORT_RELEVANCE ignores any additional parameters and always sorts matches
by relevance rank. All other modes require an additional sorting clause, with the
syntax depending on specific mode. SPH_SORT_ATTR_ASC, SPH_SORT_ATTR_DESC and
SPH_SORT_TIME_SEGMENTS modes require simply an attribute name.
SPH_SORT_RELEVANCE is equivalent to sorting by "@weight DESC, @id ASC" in extended sorting mode,
SPH_SORT_ATTR_ASC is equivalent to "attribute ASC, @weight DESC, @id ASC",
and SPH_SORT_ATTR_DESC to "attribute DESC, @weight DESC, @id ASC" respectively.
</para>
<bridgehead>SPH_SORT_TIME_SEGMENTS mode</bridgehead>
<para>
In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called
time segments, and then sorted by time segment first, and by relevance second.
</para>
<para>
The segments are calculated according to the <emphasis>current timestamp</emphasis>
at the time when the search is performed, so the results would change over time.
The segments are as follows:
<itemizedlist>
<listitem><para>last hour,</para></listitem>
<listitem><para>last day,</para></listitem>
<listitem><para>last week,</para></listitem>
<listitem><para>last month,</para></listitem>
<listitem><para>last 3 months,</para></listitem>
<listitem><para>everything else.</para></listitem>
</itemizedlist>
These segments are hardcoded, but it is trivial to change them if necessary.
</para>
<para>
This mode was added to support searching through blogs, news headlines, etc.
When using time segments, recent records would be ranked higher because of segment,
but within the same segment, more relevant records would be ranked higher -
unlike sorting by just the timestamp attribute, which would not take relevance
into account at all.
</para>
<bridgehead id="sort-extended">SPH_SORT_EXTENDED mode</bridgehead>
<para>
In SPH_SORT_EXTENDED mode, you can specify an SQL-like sort expression
with up to 5 attributes (including internal attributes), eg:
<programlisting>
@relevance DESC, price ASC, @id DESC
</programlisting>
</para>
<para>
Both internal attributes (that are computed by the engine on the fly)
and user attributes that were configured for this index are allowed.
Internal attribute names must start with magic @-symbol; user attribute
names can be used as is. In the example above, <option>@relevance</option>
and <option>@id</option> are internal attributes and <option>price</option> is user-specified.
</para>
<para>
Known internal attributes are:
<itemizedlist>
<listitem><para>@id (match ID)</para></listitem>
<listitem><para>@weight (match weight)</para></listitem>
<listitem><para>@rank (match weight)</para></listitem>
<listitem><para>@relevance (match weight)</para></listitem>
<listitem><para>@random (return results in random order)</para></listitem>
</itemizedlist>
<option>@rank</option> and <option>@relevance</option> are just additional
aliases to <option>@weight</option>.
</para>
<bridgehead id="sort-expr">SPH_SORT_EXPR mode</bridgehead>
<para>
Expression sorting mode lets you sort the matches by an arbitrary arithmetic
expression, involving attribute values, internal attributes (@id and @weight),
arithmetic operations, and a number of built-in functions. Here's an example:
<programlisting>
$cl->SetSortMode ( SPH_SORT_EXPR,
"@weight + ( user_karma + ln(pageviews) )*0.1" );
</programlisting>
The operators and functions supported in the expressions are discussed
in a separate section, <xref linkend="expressions"/>.
</para>
</sect1>
<sect1 id="clustering"><title>Grouping (clustering) search results </title>
<para>
Sometimes it could be useful to group (or in other terms, cluster)
search results and/or count per-group match counts - for instance,
to draw a nice graph of how many matching blog posts there were per
each month; or to group Web search results by site; or to group
matching forum posts by author; etc.
</para>
<para>
In theory, this could be performed by doing only the full-text search
in Sphinx and then using the found IDs to group on the SQL server side. However,
in practice, doing this with a big result set (10K-10M matches) would
typically kill performance.
</para>
<para>
To avoid that, Sphinx offers so-called grouping mode. It is enabled
with SetGroupBy() API call. When grouping, all matches are assigned to
different groups based on group-by value. This value is computed from
the specified attribute using one of the following built-in functions:
<itemizedlist>
<listitem><para>SPH_GROUPBY_DAY, extracts year, month and day in YYYYMMDD format from timestamp;</para></listitem>
<listitem><para>SPH_GROUPBY_WEEK, extracts the year and the week number (counting from the start of the year) in YYYYNNN format from timestamp;</para></listitem>
<listitem><para>SPH_GROUPBY_MONTH, extracts month in YYYYMM format from timestamp;</para></listitem>
<listitem><para>SPH_GROUPBY_YEAR, extracts year in YYYY format from timestamp;</para></listitem>
<listitem><para>SPH_GROUPBY_ATTR, uses attribute value itself for grouping.</para></listitem>
</itemizedlist>
</para>
<para>
The final search result set then contains one best match per group.
Grouping function value and per-group match count are returned along
as "virtual" attributes named
<emphasis role="bold">@group</emphasis> and
<emphasis role="bold">@count</emphasis> respectively.
</para>
<para>
The result set is sorted by the group-by sorting clause, with a syntax similar
to <link linkend="sort-extended"><option>SPH_SORT_EXTENDED</option> sorting clause</link>
syntax. In addition to <option>@id</option> and <option>@weight</option>,
group-by sorting clause may also include:
<itemizedlist>
<listitem><para>@group (groupby function value),</para></listitem>
<listitem><para>@count (amount of matches in group).</para></listitem>
</itemizedlist>
</para>
<para>
The default mode is to sort by groupby value in descending order,
ie. by <option>"@group desc"</option>.
</para>
<para>
On completion, <option>total_found</option> result parameter would
contain the total amount of matching groups over the whole index.
</para>
<para>
<emphasis role="bold">WARNING:</emphasis> grouping is done in fixed memory
and thus its results are only approximate; so there might be more groups reported
in <option>total_found</option> than actually present. <option>@count</option> might also
be underestimated. To reduce inaccuracy, one should raise <option>max_matches</option>.
If <option>max_matches</option> allows storing all the found groups, the results will be 100% correct.
</para>
<para>
For example, if sorting by relevance and grouping by <code>"published"</code>
attribute with <code>SPH_GROUPBY_DAY</code> function, then the result set will
contain
<itemizedlist>
<listitem><para>one most relevant match per each day when there were any
matches published,</para></listitem>
<listitem><para>with day number and per-day match count attached,</para></listitem>
<listitem><para>sorted by day number in descending order (ie. recent days first).</para></listitem>
</itemizedlist>
</para>
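<para>
In SphinxQL, roughly the same grouping can be sketched directly (the index
and attribute names here are assumptions; YEARMONTHDAY() is the expression-level
counterpart of SPH_GROUPBY_DAY):
<programlisting>
SELECT id, WEIGHT(), YEARMONTHDAY(published) AS day, COUNT(*)
FROM posts WHERE MATCH('some query')
GROUP BY day ORDER BY day DESC;
</programlisting>
</para>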
<para>
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(),
MAX(), SUM()) are supported through <link linkend="api-func-setselect">SetSelect()</link> API call
when using GROUP BY.
</para>
</sect1>
<sect1 id="distributed"><title>Distributed searching</title>
<para>
To scale well, Sphinx has distributed searching capabilities.
Distributed searching is useful to improve query latency (ie. search
time) and throughput (ie. max queries/sec) in multi-server, multi-CPU
or multi-core environments. This is essential for applications which
need to search through huge amounts of data (ie. billions of records
and terabytes of text).
</para>
<para>
The key idea is to horizontally partition (HP) searched data
across search nodes and then process it in parallel.
</para>
<para>
Partitioning is done manually. You should
<itemizedlist>
<listitem><para>setup several instances
of Sphinx programs (<filename>indexer</filename> and <filename>searchd</filename>)
on different servers;</para></listitem>
<listitem><para>make the instances index (and search) different parts of data;</para></listitem>
<listitem><para>configure a special distributed index on some of the <filename>searchd</filename>
instances;</para></listitem>
<listitem><para>and query this index.</para></listitem>
</itemizedlist>
This index only contains references to other
local and remote indexes - so it cannot be directly reindexed;
instead, you should reindex the indexes that it references.
</para>
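<para>
For illustration, a minimal distributed index definition might look as follows
(host names, ports, and index names here are assumptions):
<programlisting>
index dist1
{
    type    = distributed
    local   = chunk1
    agent   = box2:9312:chunk2
    agent   = box3:9312:chunk3
}
</programlisting>
</para>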
<para>
When <filename>searchd</filename> receives a query against distributed index,
it does the following:
<orderedlist>
<listitem><para>connects to configured remote agents;</para></listitem>
<listitem><para>issues the query;</para></listitem>
<listitem><para>sequentially searches configured local indexes (while the remote agents are searching);</para></listitem>
<listitem><para>retrieves remote agents' search results;</para></listitem>
<listitem><para>merges all the results together, removing the duplicates;</para></listitem>
<listitem><para>sends the merged results to client.</para></listitem>
</orderedlist>
</para>
<para>
From the application's point of view, there is no difference
between searching through a regular index and searching through a distributed one.
That is, distributed indexes are fully transparent to the application,
and actually there's no way to tell whether the index you queried
was distributed or local. (Even though as of 0.9.9 Sphinx does not
allow combining searching through distributed indexes with anything else,
this constraint will be lifted in the future.)
</para>
<para>
Any <filename>searchd</filename> instance could serve both as a master
(which aggregates the results) and a slave (which only does local searching)
at the same time. This has a number of uses:
<orderedlist>
<listitem><para>every machine in a cluster could serve as a master which
searches the whole cluster, and search requests could be balanced between
masters to achieve a kind of HA (high availability) in case any of the nodes fails;
</para></listitem>
<listitem><para>
if running within a single multi-CPU or multi-core machine, there
would be only 1 searchd instance querying itself as an agent and thus
utilizing all CPUs/cores.
</para></listitem>
</orderedlist>
</para>
<para>
It is scheduled to implement better HA support which would allow
specifying which agents mirror each other, doing health checks, keeping
track of alive agents, load-balancing requests, etc.
</para>
</sect1>
<sect1 id="query-log-format"><title><filename>searchd</filename> query log formats</title>
<para>
In version 2.0.1-beta and above, two query log formats are supported.
Previous versions only supported a custom plain text format, which
is still the default one. While that format might be more convenient for
manual monitoring and review, it only logs <emphasis>search</emphasis>
queries but not the other types of requests, does not always contain
the complete search query data, and is harder (and sometimes
impossible) to replay for benchmarking purposes. The new <code>sphinxql</code>
format alleviates that. It aims to be complete and automatable,
even though at the cost of brevity and readability.
</para>
<sect2 id="plain-log-format"><title>Plain log format</title>
<para>
By default, <filename>searchd</filename> logs all successfully executed search queries
into a query log file. Here's an example:
<programlisting>
[Fri Jun 29 21:17:58 2007] 0.004 sec 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test
</programlisting>
This log format is as follows:
<programlisting>
[query-date] real-time wall-time [match-mode/filters-count/sort-mode
total-matches (offset,limit) @groupby-attr] [index-name] query
</programlisting>
<itemizedlist>
<listitem><para>real-time is the time measured from the start to the finish of the query;</para></listitem>
<listitem><para>wall-time is like real-time, but excludes the time spent waiting for agents and merging result sets.</para></listitem>
</itemizedlist>
Match mode can take one of the following values:
<itemizedlist>
<listitem><para>"all" for SPH_MATCH_ALL mode;</para></listitem>
<listitem><para>"any" for SPH_MATCH_ANY mode;</para></listitem>
<listitem><para>"phr" for SPH_MATCH_PHRASE mode;</para></listitem>
<listitem><para>"bool" for SPH_MATCH_BOOLEAN mode;</para></listitem>
<listitem><para>"ext" for SPH_MATCH_EXTENDED mode;</para></listitem>
<listitem><para>"ext2" for SPH_MATCH_EXTENDED2 mode;</para></listitem>
<listitem><para>"scan" if the full scan mode was used, either by being specified with SPH_MATCH_FULLSCAN, or if the query was empty (as documented under <link linkend="matching-modes">Matching Modes</link>)</para></listitem>
</itemizedlist>
Sort mode can take one of the following values:
<itemizedlist>
<listitem><para>"rel" for SPH_SORT_RELEVANCE mode;</para></listitem>
<listitem><para>"attr-" for SPH_SORT_ATTR_DESC mode;</para></listitem>
<listitem><para>"attr+" for SPH_SORT_ATTR_ASC mode;</para></listitem>
<listitem><para>"tsegs" for SPH_SORT_TIME_SEGMENTS mode;</para></listitem>
<listitem><para>"ext" for SPH_SORT_EXTENDED mode.</para></listitem>
</itemizedlist>
</para>
<para>Additionally, if <filename>searchd</filename> was started with <option>--iostats</option>, there will be a block of data after the list of the searched index(es).</para>
<para>A query log entry might take the form of:</para>
<programlisting>
[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj]
[ios=6 kb=111.1 ms=0.5] test
</programlisting>
<para>
This additional block contains information on the I/O operations performed
during the search: the number of file I/O operations carried out, the amount of
data read from the index files in kilobytes, and the time spent on I/O operations
(although there is a background processing component, the bulk of this time
is I/O operation time).
</para>
</sect2>
<sect2 id="sphinxql-log-format"><title>SphinxQL log format</title>
<para>
This is a new log format introduced in 2.0.1-beta, with the goal
of logging everything and then some, in a format that is easy to automate
(for instance, to replay automatically). The new format can either be enabled
via the <link linkend="conf-query-log-format">query_log_format</link>
directive in the configuration file, or switched back and forth
on the fly with the
<link linkend="sphinxql-set"><code>SET GLOBAL query_log_format=...</code></link>
statement via SphinxQL. In the new format, the example from the previous
section would look as follows. (Wrapped below for readability, but with
just one query per line in the actual log.)
<programlisting>
/* Fri Jun 29 21:17:58.609 2007 conn 2 real 0.004 wall 0.004 found 35254 */
SELECT * FROM lj WHERE MATCH('test') OPTION ranker=proximity;
/* Fri Jun 29 21:20:34.555 2007 conn 3 real 0.024 wall 0.024 found 19886 */
SELECT * FROM lj WHERE MATCH('test') GROUP BY channel_id
OPTION ranker=proximity;
</programlisting>
Note that <b>all</b> requests would be logged in this format,
including those sent via SphinxAPI and SphinxSE, not just those
sent via SphinxQL. Also note that this kind of logging works only with plain log
files and will not work if you use 'syslog' for logging.
</para>
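<para>
For instance, switching the format on the fly over a SphinxQL connection
is a one-liner:
<programlisting>
mysql> SET GLOBAL query_log_format=sphinxql;
</programlisting>
</para>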
<para>
The features of SphinxQL log format compared to the default text
one are as follows.
<itemizedlist>
<listitem><para>All request types should be logged. (This is still work in progress.)</para></listitem>
<listitem><para>Full statement data will be logged where possible.</para></listitem>
<listitem><para>Errors and warnings are logged.</para></listitem>
<listitem><para>The log should be automatically replayable via SphinxQL.</para></listitem>
<listitem><para>Additional performance counters (currently, per-agent distributed query times) are logged.</para></listitem>
</itemizedlist>
<!-- FIXME! more examples with ios, kbs, agents etc; comment stuff reference?-->
</para>
<para>
Use sphinxql:compact_in to shorten the IN() clauses in the log if they
contain too many values.
</para>
<para>
Every request (including both SphinxAPI and SphinxQL requests)
must result in exactly one log line. All request types, including
INSERT, CALL SNIPPETS, etc, will eventually get logged, though as of
the time of this writing, that is a work in progress. Every log line
must be a valid SphinxQL statement that reconstructs the full request,
except if the logged request is too big and needs shortening
for performance reasons. Additional messages, counters, etc can be
logged in the comments section after the request.
</para>
</sect2>
</sect1>
<sect1 id="sphinxql"><title>MySQL protocol support and SphinxQL</title>
<para>
Starting with version 0.9.9-rc2, Sphinx searchd daemon supports the MySQL binary
network protocol and can be accessed with the regular MySQL API. For instance,
the 'mysql' CLI client program works well. Here's an example of querying
Sphinx using the MySQL client:
<programlisting>
$ mysql -P 9306
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 0.9.9-dev (r1734)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> SELECT * FROM test1 WHERE MATCH('test')
-> ORDER BY group_id ASC OPTION ranker=bm25;
+------+--------+----------+------------+
| id | weight | group_id | date_added |
+------+--------+----------+------------+
| 4 | 1442 | 2 | 1231721236 |
| 2 | 2421 | 123 | 1231721236 |
| 1 | 2421 | 456 | 1231721236 |
+------+--------+----------+------------+
3 rows in set (0.00 sec)
</programlisting>
</para>
<para>
Note that mysqld was not even running on the test machine. Everything was
handled by searchd itself.
</para>
<para>
The new access method is supported <emphasis>in addition</emphasis>
to native APIs which all still work perfectly well. In fact, both
access methods can be used at the same time. Also, native API is still
the default access method. MySQL protocol support needs to be additionally
configured. This is a matter of 1-line config change, adding a new
<link linkend="conf-listen">listener</link> with mysql41 specified
as a protocol:
<programlisting>
listen = localhost:9306:mysql41
</programlisting>
</para>
<para>
Just supporting the protocol and not the SQL syntax would be useless,
so Sphinx now also supports a subset of SQL that we dubbed SphinxQL.
It supports standard querying of all the index types with SELECT,
modifying RT indexes with INSERT, REPLACE, and DELETE, and much more.
Full SphinxQL reference is available in <xref linkend="sphinxql-reference"/>.
</para>
</sect1>
<sect1 id="multi-queries"><title>Multi-queries</title>
<para>
Multi-queries, or query batches, let you send multiple queries to Sphinx
in one go (more formally, one network request).
</para>
<para>
Two API methods that implement multi-query mechanism are
<link linkend="api-func-addquery">AddQuery()</link> and
<link linkend="api-func-runqueries">RunQueries()</link>.
You can also run multiple queries with SphinxQL, see
<xref linkend="sphinxql-multi-queries"/>.
(In fact, regular <link linkend="api-func-addquery">Query()</link>
call is internally implemented as a single AddQuery() call immediately
followed by RunQueries() call.) AddQuery() captures the current state
of all the query settings set by previous API calls, and memorizes
the query. RunQueries() actually sends all the memorized queries,
and returns multiple result sets. There are no restrictions on
the queries at all, except just a sanity check on a number of queries
in a single batch (see <xref linkend="conf-max-batch-queries"/>).
</para>
<para>
Why use multi-queries? Generally, it all boils down to performance.
First, by sending requests to <filename>searchd</filename> in a batch
instead of one by one, you always save a bit by doing less network
roundtrips. Second, and somewhat more important, sending queries
in a batch enables <filename>searchd</filename> to perform certain
internal optimizations. As new types of optimizations are being
added over time, it generally makes sense to pack all the queries
into batches where possible, so that simply upgrading Sphinx
to a new version would automatically enable new optimizations.
In the case when there aren't any possible batch optimizations
to apply, queries will be processed one by one internally.
</para>
<para>
Why (or rather when) not use multi-queries? Multi-queries require
all the queries in a batch to be independent, and sometimes they aren't.
That is, sometimes query B is based on query A results, and so can only be
set up after executing query A. For instance, you might want to display
results from a secondary index if and only if there were no results
found in a primary index. Or maybe just specify offset into 2nd result set
based on the amount of matches in the 1st result set. In that case,
you will have to use separate queries (or separate batches).
</para>
<para>
As of 0.9.10, there are two major optimizations to be aware of:
common query optimization (available since 0.9.8); and common
subtree optimization (available since 0.9.10).
</para>
<para>
<b>Common query optimization</b> means that <filename>searchd</filename>
will identify all those queries in a batch where only the sorting
and group-by settings differ, and <emphasis>only perform searching once</emphasis>.
For instance, if a batch consists of 3 queries, all of them are for
"ipod nano", but 1st query requests top-10 results sorted by price,
2nd query groups by vendor ID and requests top-5 vendors sorted by
rating, and 3rd query requests max price, full-text search for
"ipod nano" will only be performed once, and its results will be
reused to build 3 different result sets.
</para>
<para>
So-called <b>faceted searching</b> is a particularly important case
that benefits from this optimization. Indeed, faceted searching
can be implemented by running a number of queries, one to retrieve
search results themselves, and a few other ones with same full-text
query but different group-by settings to retrieve all the required
groups of results (top-3 authors, top-5 vendors, etc). And as long
as full-text query and filtering settings stay the same, common
query optimization will trigger, and greatly improve performance.
</para>
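<para>
For illustration, such a faceted batch might be sketched in SphinxQL as follows
(the index and attribute names here are assumptions; both statements would be
sent as one multi-statement request):
<programlisting>
SELECT * FROM products WHERE MATCH('ipod nano')
ORDER BY price ASC LIMIT 10;
SELECT vendor_id, COUNT(*) AS cnt FROM products WHERE MATCH('ipod nano')
GROUP BY vendor_id ORDER BY cnt DESC LIMIT 5;
</programlisting>
</para>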
<para>
<b>Common subtree optimization</b> is even more interesting.
It lets <filename>searchd</filename> exploit similarities between
batched full-text queries. It identifies common full-text query parts
(subtrees) in all queries, and caches them between queries. For instance,
look at the following query batch:
<programlisting>
barack obama president
barack obama john mccain
barack obama speech
</programlisting>
There's a common two-word part ("barack obama") that can be computed
only once, then cached and shared across the queries. And common subtree
optimization does just that. Per-query cache size is strictly controlled
by <link linkend="conf-subtree-docs-cache">subtree_docs_cache</link>
and <link linkend="conf-subtree-hits-cache">subtree_hits_cache</link>
directives (so that caching <emphasis>all</emphasis> sixteen gazillions
of documents that match "i am" does not exhaust the RAM and instantly
kill your server).
</para>
<para>
Here's a code sample (in PHP) that fires the same query in 3 different
sorting modes:
<programlisting>
require ( "sphinxapi.php" );
$cl = new SphinxClient ();
$cl->SetMatchMode ( SPH_MATCH_EXTENDED );
$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->AddQuery ( "the", "lj" );
$cl->SetSortMode ( SPH_SORT_EXTENDED, "published desc" );
$cl->AddQuery ( "the", "lj" );
$cl->SetSortMode ( SPH_SORT_EXTENDED, "published asc" );
$cl->AddQuery ( "the", "lj" );
$res = $cl->RunQueries();
</programlisting>
</para>
<para>
How to tell whether the queries in the batch were actually optimized?
If they were, respective query log will have a "multiplier" field that
specifies how many queries were processed together:
<programlisting>
[Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/rel 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/ext 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.000 2009] 0.040 sec x3 [ext/0/ext 747541 (0,20)] [lj] the
</programlisting>
Note the "x3" field. It means that this query was optimized and
processed in a sub-batch of 3 queries. For reference, this is how
the regular log would look if the queries were not batched:
<programlisting>
[Sun Jul 12 15:18:17.062 2009] 0.059 sec [ext/0/rel 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.156 2009] 0.091 sec [ext/0/ext 747541 (0,20)] [lj] the
[Sun Jul 12 15:18:17.250 2009] 0.092 sec [ext/0/ext 747541 (0,20)] [lj] the
</programlisting>
Note how per-query time in the multi-query case was improved by a factor
of 1.5x to 2.3x, depending on the particular sorting mode. In fact, for both
common query and common subtree optimizations, there were reports of 3x and
even more improvements, and that's from production instances, not just
synthetic tests.
</para>
</sect1>
<sect1 id="collations"><title>Collations</title>
<para>
Introduced to Sphinx in version 2.0.1-beta to supplement string sorting,
collations essentially affect the string attribute comparisons. They specify
both the character set encoding and the strategy that Sphinx uses to compare
strings when doing ORDER BY or GROUP BY with a string attribute involved.
</para>
<para>
String attributes are stored as is when indexing, and no character set
or language information is attached to them. That's okay as long as Sphinx
only needs to store and return the strings to the calling application verbatim.
But when you ask Sphinx to sort by a string value, that request immediately
becomes quite ambiguous.
</para>
<para>
First, single-byte (ASCII, or ISO-8859-1, or Windows-1251) strings
need to be processed differently than the UTF-8 ones that may encode
every character with a variable number of bytes. So we need to know
the character set type in order to interpret the raw bytes as meaningful
characters properly.
</para>
<para>
Second, we additionally need to know the language-specific
string sorting rules. For instance, when sorting according to US rules
in en_US locale, the accented character 'ï' (small letter i with diaeresis)
should be placed somewhere after 'z'. However, when sorting with French rules
and fr_FR locale in mind, it should be placed between 'i' and 'j'. And some
other set of rules might choose to ignore accents altogether, allowing 'ï'
and 'i' to be mixed arbitrarily.
</para>
<para>
Third, but not least, we might need case-sensitive sorting in some
scenarios and case-insensitive sorting in others.
</para>
<para>
Collations combine all of the above: the character set, the language rules,
and the case sensitivity. Sphinx currently provides the following four
collations.
<orderedlist>
<listitem><para><option>libc_ci</option></para></listitem>
<listitem><para><option>libc_cs</option></para></listitem>
<listitem><para><option>utf8_general_ci</option></para></listitem>
<listitem><para><option>binary</option></para></listitem>
</orderedlist>
</para>
<para>
The first two collations rely on several standard C library (libc) calls
and can thus support any locale that is installed on your system. They provide
case-insensitive (_ci) and case-sensitive (_cs) comparisons respectively.
By default they will use C locale, effectively resorting to bytewise
comparisons. To change that, you need to specify a different available
locale using <link linkend="conf-collation-libc-locale">collation_libc_locale</link>
directive. The list of locales available on your system can usually be obtained
with the <filename>locale</filename> command:
<programlisting>
$ locale -a
C
en_AG
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IN
en_NG
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US.utf8
en_ZA.utf8
en_ZW.utf8
es_ES
fr_FR
POSIX
ru_RU.utf8
ru_UA.utf8
</programlisting>
</para>
<para>
The specific list of the system locales may vary. Consult your OS documentation
to install additional needed locales.
</para>
<para>
<option>utf8_general_ci</option> and <option>binary</option> collations are
built into Sphinx. The first one is a generic collation for UTF-8 data
(without any so-called language tailoring); it should behave similarly to
the <option>utf8_general_ci</option> collation in MySQL. The second one
is a simple bytewise comparison.
</para>
<para>
Collation can be overridden via SphinxQL on a per-session basis using
<code>SET collation_connection</code> statement. All subsequent SphinxQL
queries will use this collation. SphinxAPI and SphinxSE queries will use
the server default collation, as specified in
<link linkend="conf-collation-server">collation_server</link> configuration
directive. Sphinx currently defaults to <option>libc_ci</option> collation.
</para>
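<para>
For instance, a per-session override might look as follows (the index and
attribute names here are assumptions):
<programlisting>
mysql> SET collation_connection=utf8_general_ci;
mysql> SELECT * FROM test1 ORDER BY title_attr ASC;
</programlisting>
</para>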
<para>
Collations should affect all string attribute comparisons, including
those within ORDER BY and GROUP BY, so differently ordered or grouped results
can be returned depending on the collation chosen. Note that collations don't
affect full-text searching, for that use <link linkend="conf-charset-table">charset_table</link>.
</para>
</sect1>
</chapter>
<chapter id="extending-sphinx"><title>Extending Sphinx</title>
<sect1 id="sphinx-udfs"><title>Sphinx UDFs (User Defined Functions)</title>
<para>
Starting with 2.0.1-beta, our expression engine can be extended with
user defined functions, or UDFs for short, like this:
<programlisting>
SELECT id, attr1, myudf(attr2, attr3+attr4) ...
</programlisting>
You can load and unload UDFs dynamically into <filename>searchd</filename>
without having to restart the daemon, and use them in expressions when
searching, ranking, etc. A quick summary of the UDF features is as follows.
<itemizedlist>
<listitem><para>UDFs can take integer (both 32-bit and 64-bit), float,
string, MVA, or PACKEDFACTORS() arguments.</para></listitem>
<listitem><para>UDFs can return integer, float, or string values.</para></listitem>
<listitem><para>UDFs can check the argument number, types, and names
during the query setup phase, and raise errors.</para></listitem>
<listitem><para>Aggregation UDFs are not yet supported (but might be
in the future).</para></listitem>
</itemizedlist>
UDFs have a wide variety of uses, for instance:
<itemizedlist>
<listitem><para>adding custom mathematical or string functions;</para></listitem>
<listitem><para>accessing the database or files from within Sphinx;</para></listitem>
<listitem><para>implementing complex ranking functions.</para></listitem>
</itemizedlist>
</para>
<para>
UDFs reside in the external dynamic libraries (.so files on UNIX and .dll
on Windows systems). Library files need to reside in a trusted folder
specified by <link linkend="conf-plugin-dir">plugin_dir</link> directive,
for obvious security reasons: securing a single folder is easy; letting
anyone install arbitrary code into <filename>searchd</filename> is a risk.
You can load and unload them dynamically into searchd
with <link linkend="sphinxql-create-function">CREATE FUNCTION</link> and
<link linkend="sphinxql-drop-function">DROP FUNCTION</link> SphinxQL statements
respectively. Sphinx keeps track of the currently loaded functions, that is,
every time you create or drop an UDF, <filename>searchd</filename> writes
its state to the <link linkend="conf-sphinxql-state">sphinxql_state</link> file
as a plain good old SQL script.
</para>
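<para>
For instance, assuming a library built as <filename>udfexample.so</filename>
resides in <code>plugin_dir</code>, loading, using, and dropping a function
could look like this:
<programlisting>
mysql> CREATE FUNCTION testfunc RETURNS BIGINT SONAME 'udfexample.so';
mysql> SELECT id, testfunc(1) FROM test1;
mysql> DROP FUNCTION testfunc;
</programlisting>
</para>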
<para>
Once you successfully load an UDF, you can use it in your SELECT or other
statements just as well as any of the builtin functions:
<programlisting>
SELECT id, MYCUSTOMFUNC(groupid, authorname), ... FROM myindex
</programlisting>
</para>
<para>
UDFs are completely supported in <link linkend="conf-workers">workers=threads</link>
mode only. They are partially supported in <code>workers=prefork</code> mode too:
namely, CREATEs from the <code>sphinxql_state</code> startup script will work and
those UDFs will be accessible. However, DROPs will not be available. UDFs are not
supported in <code>workers=fork</code> mode.
</para>
<para>
Multiple UDFs (and other plugins) may reside in a single library. That library
will only be loaded once. It gets automatically unloaded once all the UDFs and
plugins from it are dropped.
</para>
<para>
In theory you can write an UDF in any language, as long as its compiler
is able to import a standard C header and emit standard dynamic libraries with
properly exported functions. Of course, the path of least resistance is to write
in either C++ or plain C. We provide an example UDF library written in plain C
and implementing several functions (demonstrating a few different techniques)
along with our source code, see
<ulink url="http://code.google.com/p/sphinxsearch/source/browse/trunk/src/udfexample.c">src/udfexample.c</ulink>.
That example includes
<ulink url="http://code.google.com/p/sphinxsearch/source/browse/trunk/src/sphinxudf.h">src/sphinxudf.h</ulink>
header file definitions of a few UDF related structures and types. For most
UDFs and plugins, a mere <code>#include "sphinxudf.h"</code>, like in the example,
should be completely sufficient, too. However, if you're writing a ranking function and
need to access the ranking signals (factors) data from within the UDF, you will
also need to compile and link with <filename>src/sphinxudf.c</filename> (also
available in our source code), because the <emphasis>implementations</emphasis>
of the functions that let you access the signal data from within the UDF reside
in that file.
</para>
<para>
Both <filename>sphinxudf.h</filename> header and <filename>sphinxudf.c</filename>
are standalone. So you can copy around those files only; they do not depend
on any other bits of Sphinx source code.
</para>
<para>
Within your UDF, you <b>must</b> implement and export only a couple functions,
literally. First, for UDF interface version control, you <b>must</b> define
a function <code>int LIBRARYNAME_ver()</code>, where LIBRARYNAME is the name
of your library file, and you must return <code>SPH_UDF_VERSION</code> (a value
defined in <filename>sphinxudf.h</filename>) from it. Here's an example.
<programlisting>
#include <sphinxudf.h>
// our library will be called udfexample.so, thus it must define
// a version function named udfexample_ver()
int udfexample_ver()
{
return SPH_UDF_VERSION;
}
</programlisting>
That protects you from accidentally loading a library with a mismatching
UDF interface version into a newer or older <filename>searchd</filename>.
Second, you <b>must</b> implement the actual function, too.
<programlisting>
sphinx_int64_t testfunc ( SPH_UDF_INIT * init, SPH_UDF_ARGS * args,
    char * error_flag )
{
    return 123;
}
</programlisting>
</para>
<para>
UDF function names in SphinxQL are case insensitive. However, the
respective C function names are not, they need to be all <b>lower-case</b>,
or the UDF will not load. More importantly, it is vital that a) the calling
convention is C (aka __cdecl), b) arguments list matches the plugin system
expectations exactly, and c) the return type matches the one you specify in
<code>CREATE FUNCTION</code>. Unfortunately, there is no (easy) way for us
to check for those mistakes when loading the function, and they could crash
the server and/or result in unexpected results. Last but not least,
all the C functions you implement need to be thread-safe.
</para>
<para>
The first argument, a pointer to SPH_UDF_INIT structure, is essentially
a pointer to our function state. It is optional. In the example just above
the function is stateless, and it simply returns 123 every time it gets called.
So we do not have to define an initialization function, and we can simply
ignore that argument.
</para>
<para>
The second argument, a pointer to SPH_UDF_ARGS, is the most important one.
All the actual call arguments are passed to your UDF via this structure;
it contains the call argument count, names, types, etc. So whether your
function gets called like <code>SELECT id, testfunc(1)</code> or like
<code>SELECT id, testfunc('abc', 1000*id+gid, WEIGHT())</code> or anyhow
else, it will receive the very same SPH_UDF_ARGS structure in all of these
cases. However, the data passed in the <code>args</code> structure will be
different. In the first example <code>args->arg_count</code> will be set to 1,
in the second example it will be set to 3, <code>args->arg_types</code> array
will contain different type data, and so on.
</para>
<para>
Finally, the third argument is an error flag. The UDF can raise it to indicate
that some kind of internal error happened, the UDF can not continue, and
the query should terminate early. You should <b>not</b> use this for argument
type checks or for any other error reporting that is likely to happen during
normal use. This flag is designed to report sudden critical runtime errors,
such as running out of memory.
</para>
<para>
If we wanted to, say, allocate temporary storage for our function to use,
or check upfront whether the arguments are of the supported types, then
we would need to add two more functions, with UDF initialization and deinitialization,
respectively.
<programlisting>
int testfunc_init ( SPH_UDF_INIT * init, SPH_UDF_ARGS * args,
char * error_message )
{
// allocate and initialize a little bit of temporary storage
init->func_data = malloc ( sizeof(int) );
*(int*)init->func_data = 123;
// return a success code
return 0;
}
void testfunc_deinit ( SPH_UDF_INIT * init )
{
// free up our temporary storage
free ( init->func_data );
}
</programlisting>
Note how <code>testfunc_init()</code> also receives the call arguments
structure. At the time it is called, it does not receive any actual values,
so the <code>args->arg_values</code> will be NULL. But the argument
names and types are known and will be passed. You can check them in
the initialization function and return an error if they are of an
unsupported type.
</para>
<para>
UDFs can receive arguments of pretty much any valid internal Sphinx type.
Refer to <code>sphinx_udf_argtype</code> enumeration in <filename>sphinxudf.h</filename>
for a full list. Most of the types map straightforwardly to the respective C types.
The most notable exception is the SPH_UDF_TYPE_FACTORS argument type.
You get that type by calling your UDF with a
<link linkend="expr-func-packedfactors">PACKEDFACTORS()</link> argument.
Its data is a binary blob in a certain internal format, and to extract
individual ranking signals from that blob, you need to use either of the
two <code>sphinx_factors_XXX()</code> or <code>sphinx_get_YYY_factor()</code>
families of functions. The first family consists of just 3 functions,
<code>sphinx_factors_init()</code> that initializes the unpacked
SPH_UDF_FACTORS structure, <code>sphinx_factors_unpack()</code> that
unpacks a binary blob into it, and <code>sphinx_factors_deinit()</code>
that cleans up and deallocates the SPH_UDF_FACTORS. So you need to call
init() and unpack(), then you can use the SPH_UDF_FACTORS fields, and
then you need to cleanup with deinit(). That is simple, but results
in a bunch of memory allocations per each processed document, and might
be slow. The other interface, consisting of a bunch of
<code>sphinx_get_YYY_factor()</code> functions, is a little more wordy
to use, but accesses the blob data directly and guarantees that there
will be zero allocations. So for top-notch ranking UDF performance,
you want to use that one.
</para>
<para>
As for the return types, UDFs can currently return a single INTEGER, BIGINT,
FLOAT, or STRING value. The C function return type should be sphinx_int64_t,
sphinx_int64_t, double, or char* respectively. In the last case you <b>must</b>
use <code>args->fn_malloc</code> function to allocate the returned
string values. Internally in your UDF you can use whatever you want,
so the <code>testfunc_init()</code> example above is correct code
even though it uses malloc() directly: you manage that pointer yourself,
it gets freed up using a matching free() call, and all is well. However,
the returned string values are managed by Sphinx and we have our own
allocator, so for the return values specifically, you need to use it too.
</para>
<para>
Depending on how your UDFs are used in the query, the main function
call (<code>testfunc()</code> in our example) might be called in a rather
different volume and order. Specifically,
<itemizedlist>
<listitem><para>UDFs referenced in WHERE, ORDER BY, or GROUP BY clauses
must and will be evaluated for every matched document. They will be called
in the natural matching order.
</para></listitem>
<listitem><para>without subselects, UDFs that can be evaluated at the very
last stage over the final result set will be evaluated that way, but before
applying the LIMIT clause. They will be called in the result set order.
</para></listitem>
<listitem><para>with subselects, such UDFs will also be evaluated after
applying the inner LIMIT clause.
</para></listitem>
</itemizedlist>
</para>
<para>
The calling sequence of the other functions is fixed, though. Namely,
<itemizedlist>
<listitem><para><code>testfunc_init()</code> is called once when initializing
the query. It can return a non-zero code to indicate a failure; in that case
query will be terminated, and the error message from the <code>error_message</code>
buffer will be returned.</para></listitem>
<listitem><para><code>testfunc()</code> is called for every eligible row
(see above), whenever Sphinx needs to compute the UDF value. It can also
indicate an (internal) failure by writing a non-zero byte value to
<code>error_flag</code>. In that case, it is guaranteed that it will not be
called for subsequent rows, and a default return value of 0 will be substituted.
Sphinx might or might not choose to terminate such queries early, neither
behavior is currently guaranteed.
</para></listitem>
<listitem><para><code>testfunc_deinit()</code> is called once when the query
processing (in a given index shard) ends.</para></listitem>
</itemizedlist>
</para>
<para>
As of 2.2.2-beta, we do not yet support aggregation functions. In other words,
your UDFs will be called for just a single document at a time and are expected
to return some value for that document. Writing a function that can compute an
aggregate value like AVG() over the entire group of documents that share the same
GROUP BY key is not yet possible. However, you can use UDFs within the builtin
aggregate functions: that is, even though MYCUSTOMAVG() is not supported yet,
AVG(MYCUSTOMFUNC()) should work alright!
</para>
<para>
UDFs are local. In order to use them on a cluster, you have to put the same
library on all its nodes and run CREATEs on all the nodes too. This might change
in the future versions.
</para>
</sect1>
<sect1 id="sphinx-plugins"><title>Sphinx plugins</title>
<para>
Starting with version 2.2.2-beta, we generalized our dynamic plugin
system, and added a few more types of dynamic plugins. Here's the complete
plugin type list.
<itemizedlist>
<listitem><para>UDF plugins;</para></listitem>
<listitem><para>ranker plugins;</para></listitem>
<listitem><para>indexing-time token filter plugins;</para></listitem>
<listitem><para>query-time token filter plugins.</para></listitem>
</itemizedlist>
This section discusses writing and managing plugins in general;
things specific to writing this or that type of a plugin are then
discussed in their respective subsections.
</para>
<para>
So, how do you write and use a plugin? Three-line crash course
goes as follows:
<itemizedlist>
<listitem><para>create a dynamic library (either .so or .dll),
most likely in C or C++;</para></listitem>
<listitem><para>load that plugin into searchd using
<link linkend="sphinxql-create-plugin">CREATE PLUGIN</link>;
</para></listitem>
<listitem><para>invoke it using the plugin specific calls
(typically using this or that OPTION).
</para></listitem>
</itemizedlist>
Note that while UDFs are first-class plugins they are nevertheless
installed using a separate
<link linkend="sphinxql-create-function">CREATE FUNCTION</link>
statement. It lets you specify the return type neatly so there was
especially little reason to ruin backwards compatibility <emphasis>and</emphasis>
change the syntax.
</para>
<para>
Dynamic plugins are supported in <link linkend="conf-workers">workers=threads</link>
mode only. Multiple plugins (and/or UDFs) may reside in a single library file.
So you might choose to either put all your project-specific plugins in a single
common uber-library; or you might choose to have a separate library for every
UDF and plugin; that is up to you.
</para>
<para>
Just as with UDFs, you want to include <filename>src/sphinxudf.h</filename>
header file. At the very least, you will need the SPH_UDF_VERSION
constant to implement a proper version function. Depending on the specific
plugin type, you might or might not need to link your plugin with
<filename>src/sphinxudf.c</filename>. However, as of 2.2.2-beta all
the functions implemented in <filename>sphinxudf.c</filename> are about
unpacking the PACKEDFACTORS() blob, and no plugin types are exposed to that
kind of data. So currently, you would never need to link with the C-file,
just the header would be sufficient. (In fact, if you copy over the
UDF version number, then for some of the plugin types you would not
even need the header file.)
</para>
<para>
Formally, plugins are just sets of C functions that follow a certain
naming pattern. You are typically required to define just one key function
that does the most important work, but you may define a bunch of other
functions, too. For example, to implement a ranker called "myrank",
you must define <code>myrank_finalize()</code> function that actually returns
the rank value, however, you might also define <code>myrank_init()</code>,
<code>myrank_update()</code>, and <code>myrank_deinit()</code> functions.
Specific sets of well-known suffixes and the call arguments do differ
based on the plugin type, but _init() and _deinit() are generic, every
plugin has those. Protip: for a quick reference on the known suffixes and
their argument types, refer to <filename>sphinxplugin.h</filename>,
we define the call prototypes in the very beginning of that file.
</para>
<para>
Despite having the public interface defined in ye good olde pure C,
our plugins essentially follow the <emphasis>object-oriented model</emphasis>.
Indeed, every <code>_init()</code> function receives a <code>void ** userdata</code>
out-parameter. And the pointer value that you store at <code>(*userdata)</code>
location is then passed as the 1st argument to all the other plugin functions.
So you can think of a plugin as <emphasis>class</emphasis> that gets instantiated
every time an object of that class is needed to handle a request: the <code>userdata</code>
pointer would be its <code>this</code> pointer; the functions would be its methods,
and the <code>_init()</code> and <code>_deinit()</code> functions would be
the constructor and destructor respectively.
</para>
<para>
Why this (minor) OOP-in-C complication? Well, plugins run in a multi-threaded
environment, and some of them have to be stateful. You can't keep that state in
a global variable in your plugin. So we have to pass around a userdata parameter
anyway to let you keep that state, and that naturally brings us to the OOP model.
If you've got a simple, stateless plugin, the interface lets you omit
<code>_init()</code>, <code>_deinit()</code>, and any other optional functions
just as well.
</para>
<para>
To summarize, here goes the simplest complete ranker plugin, in just
3 lines of C code.
<programlisting>
// gcc -fPIC -shared -o myrank.so myrank.c
#include "sphinxudf.h"
int myrank_ver() { return SPH_UDF_VERSION; }
int myrank_finalize(void *u, int w) { return 123; }
</programlisting>
And this is how you use it:
<programlisting>
mysql> CREATE PLUGIN myrank TYPE 'ranker' SONAME 'myrank.so';
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT id, weight() FROM test1 WHERE MATCH('test')
-> OPTION ranker=myrank('');
+------+----------+
| id | weight() |
+------+----------+
| 1 | 123 |
| 2 | 123 |
+------+----------+
2 rows in set (0.01 sec)
</programlisting>
</para>
</sect1>
<sect1 id="ranker-plugins"><title>Ranker plugins</title>
<para>
Ranker plugins let you implement a custom ranker that receives
all the occurrences of the keywords matched in the document, and
computes a WEIGHT() value. They can be called as follows:
<programlisting>
SELECT id, attr1 FROM test WHERE match('hello')
OPTION ranker=myranker('option1=1');
</programlisting>
</para>
<para>
The call workflow is as follows:
<orderedlist>
<listitem><code>XXX_init()</code> gets called once per query
per index, in the very beginning. A few query-wide options are
passed to it through a <code>SPH_RANKER_INIT</code> structure,
including the user options string (in the example just above,
"option1=1" is that string).</listitem>
<listitem><code>XXX_update()</code> gets called multiple times per
matched document, with every matched keyword occurrence passed as its
parameter, a <code>SPH_RANKER_HIT</code> structure. The occurrences
within each document are guaranteed to be passed in the order of
ascending <code>hit->hit_pos</code> values.</listitem>
<listitem><code>XXX_finalize()</code> gets called once per matched
document, once there are no more keyword occurrences. It must return
the WEIGHT() value. This is the only mandatory function.</listitem>
<listitem><code>XXX_deinit()</code> gets called once per query,
in the very end.</listitem>
</orderedlist>
</para>
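<para>
To make that workflow concrete, here is a minimal sketch of a stateful ranker
that simply counts the keyword occurrences in each document and returns that
count as the weight. This is illustrative only; double-check the exact call
prototypes against the ones declared at the top of <filename>src/sphinxplugin.h</filename>.
<programlisting>
// gcc -fPIC -shared -o hitcount.so hitcount.c
#include <stdlib.h>
#include "sphinxudf.h"

int hitcount_ver() { return SPH_UDF_VERSION; }

// per-query constructor: allocate the counter, store it via the out-parameter
int hitcount_init(void **userdata, SPH_RANKER_INIT *info, char *error)
{
    int *count = (int*)malloc(sizeof(int));
    if (!count) return 1; // non-zero return reports a failure
    *count = 0;
    *userdata = count;
    return 0;
}

// called for every matched keyword occurrence; just count them
void hitcount_update(void *userdata, SPH_RANKER_HIT *hit)
{
    (*(int*)userdata)++;
}

// called once per matched document; must return the WEIGHT() value
int hitcount_finalize(void *userdata, int match_weight)
{
    int *count = (int*)userdata;
    int weight = *count;
    *count = 0; // reset the counter for the next matched document
    return weight;
}

// per-query destructor: release the per-query state
void hitcount_deinit(void *userdata)
{
    free(userdata);
}
</programlisting>
You would then load it with CREATE PLUGIN hitcount TYPE 'ranker' SONAME 'hitcount.so'
and select with OPTION ranker=hitcount(''), just as in the example from the previous section.
</para>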
</sect1>
<!-- discuss ranker, index_token_filter, query_token_filter plugins in more details? -->
</chapter>
<chapter id="command-line-tools"><title>Command line tools reference</title>
<para>As mentioned elsewhere, Sphinx is not a single program called 'sphinx',
but a collection of 4 separate programs which collectively form Sphinx. This section
covers these tools and how to use them.</para>
<sect1 id="ref-indexer"><title><filename>indexer</filename> command reference</title>
<para><filename>indexer</filename> is the first of the two principal tools
in the Sphinx package. Invoked either directly from the command line, or as part
of a larger script, <filename>indexer</filename> is solely responsible
for gathering the data that will be searchable.</para>
<para>The calling syntax for <filename>indexer</filename> is as follows:</para>
<programlisting>
indexer [OPTIONS] [indexname1 [indexname2 [...]]]
</programlisting>
<para>Essentially you would list the different possible indexes (that you would later
make available to search) in <filename>sphinx.conf</filename>, so when calling
<filename>indexer</filename>, as a minimum you need to tell it what index
(or indexes) you want to index.</para>
<para>If <filename>sphinx.conf</filename> contained details on 2 indexes,
<filename>mybigindex</filename> and <filename>mysmallindex</filename>,
you could do the following:</para>
<programlisting>
$ indexer mybigindex
$ indexer mysmallindex mybigindex
</programlisting>
<para>As part of the configuration file, <filename>sphinx.conf</filename>, you specify
one or more indexes for your data. You might call <filename>indexer</filename> to reindex
one of them, ad-hoc, or you can tell it to process all indexes - you are not limited
to calling just one, or all at once, you can always pick some combination
of the available indexes.</para>
<para>The exit codes are as follows:
<itemizedlist>
<listitem>0, everything went ok</listitem>
<listitem>1, there was a problem while indexing (and if --rotate was specified, it was skipped)</listitem>
<listitem>2, indexing went ok, but --rotate attempt failed</listitem>
</itemizedlist>
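For example, a maintenance script might check the exit code like this
(an illustrative snippet; the index name is just an example):
<programlisting>
$ indexer mybigindex --rotate
$ echo $?
0
</programlisting>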
</para>
<para>The majority of the options for <filename>indexer</filename> are given
in the configuration file, however there are some options you might need to specify
on the command line as well, as they can affect how the indexing operation is performed.
These options are:
<itemizedlist>
<listitem><para><option>--config <file></option> (<option>-c <file></option> for short)
tells <filename>indexer</filename> to use the given file as its configuration. Normally,
it will look for <filename>sphinx.conf</filename> in the installation directory
(e.g. <filename>/usr/local/sphinx/etc/sphinx.conf</filename> if installed into
<filename>/usr/local/sphinx</filename>), followed by the current directory you are
in when calling <filename>indexer</filename> from the shell. This is most useful
in shared environments where the binary files are installed somewhere like
<filename>/usr/local/sphinx/</filename> but you want to provide users with
the ability to make their own custom Sphinx set-ups, or if you want to run
multiple instances on a single server. In cases like those you could allow them
to create their own <filename>sphinx.conf</filename> files and pass them to
<filename>indexer</filename> with this option. For example:
<programlisting>
$ indexer --config /home/myuser/sphinx.conf myindex
</programlisting>
</para></listitem>
<listitem><para><option>--all</option> tells <filename>indexer</filename> to update
every index listed in <filename>sphinx.conf</filename>, instead of listing individual indexes.
This would be useful in small configurations, or <filename>cron</filename>-type or maintenance
jobs where the entire index set will get rebuilt each day, or week, or whatever period is best.
Example usage:
<programlisting>
$ indexer --config /home/myuser/sphinx.conf --all
</programlisting>
</para></listitem>
<listitem><para><option>--rotate</option> is used for rotating indexes. Unless you have the situation
where you can take the search function offline without troubling users, you will almost certainly
need to keep search running whilst indexing new documents. <option>--rotate</option> creates
a second index, parallel to the first (in the same place, simply including <filename>.new</filename>
in the filenames). Once complete, <filename>indexer</filename> notifies <filename>searchd</filename>
by sending the <option>SIGHUP</option> signal, and <filename>searchd</filename> will attempt
to rename the indexes (renaming the existing ones to include <filename>.old</filename>
and renaming the <filename>.new</filename> to replace them), and then start serving
from the newer files. Depending on the setting of
<link linkend="conf-seamless-rotate">seamless_rotate</link>, there may be a slight delay
in being able to search the newer indexes. Example usage:
<programlisting>
$ indexer --rotate --all
</programlisting>
</para></listitem>
<listitem><para><option>--quiet</option> tells <filename>indexer</filename> not to output anything,
unless there is an error. Again, most used for <filename>cron</filename>-type, or other script
jobs where the output is irrelevant or unnecessary, except in the event of some kind of error.
Example usage:
<programlisting>
$ indexer --rotate --all --quiet
</programlisting>
</para></listitem>
<listitem><para><option>--noprogress</option> does not display progress details as they occur;
instead, the final status details (such as documents indexed, speed of indexing and so on)
are only reported at completion of indexing. In instances where the script is not being
run on a console (or 'tty'), this will be on by default. Example usage:
<programlisting>
$ indexer --rotate --all --noprogress
</programlisting>
</para></listitem>
<listitem><para><option>--buildstops <outputfile.txt> <N></option> reviews
the index source, as if it were indexing the data, and produces a list of the terms
that are being indexed. In other words, it produces a list of all the searchable terms
that are becoming part of the index. Note: it does not update the index in question,
it simply processes the data 'as if' it were indexing, including running queries
defined with <option>sql_query_pre</option> or <option>sql_query_post</option>.
<filename>outputfile.txt</filename> will contain the list of words, one per line,
sorted by frequency with most frequent first, and <filename>N</filename> specifies
the maximum number of words that will be listed; if N is large enough to encompass
every word in the index, all of the words will be returned. Such a dictionary list
could be used for client application features around "Did you mean..." functionality,
usually in conjunction with <option>--buildfreqs</option>, below. Example:
<programlisting>
$ indexer myindex --buildstops word_freq.txt 1000
</programlisting>
This would produce a document in the current directory, <filename>word_freq.txt</filename>
with the 1,000 most common words in 'myindex', ordered by most common first. Note that
the file will pertain to the last index indexed when specified with multiple indexes or
<option>--all</option> (i.e. the last one listed in the configuration file).
</para></listitem>
<listitem><para><option>--buildfreqs</option> works with <option>--buildstops</option>
(and is ignored if <option>--buildstops</option> is not specified).
As <option>--buildstops</option> provides the list of words used within the index,
<option>--buildfreqs</option> adds the quantity present in the index, which would be
useful in establishing whether certain words should be considered stopwords
if they are too prevalent. It will also help with developing "Did you mean..."
features, where you can see how much more common a given word is compared to another,
similar one. Example:
<programlisting>
$ indexer myindex --buildstops word_freq.txt 1000 --buildfreqs
</programlisting>
This would produce the <filename>word_freq.txt</filename> as above, however after each word would be the number of times it occurred in the index in question.
</para></listitem>
<listitem><para><option>--merge <dst-index> <src-index></option> is used
for physically merging indexes together, for example if you have a main+delta scheme,
where the main index rarely changes, but the delta index is rebuilt frequently,
and <option>--merge</option> would be used to combine the two. The operation moves
from right to left - the contents of <filename>src-index</filename> get examined
and physically combined with the contents of <filename>dst-index</filename>
and the result is left in <filename>dst-index</filename>.
In pseudo-code, it might be expressed as: <code>dst-index += src-index</code>.
An example:
<programlisting>
$ indexer --merge main delta --rotate
</programlisting>
In the above example, where the main is the master, rarely modified index,
and delta is the less frequently modified one, you might use the above to call
<filename>indexer</filename> to combine the contents of the delta into the
main index and rotate the indexes.
</para></listitem>
<listitem><para><option>--merge-dst-range <attr> <min> <max></option>
applies the given filter range upon merging. Specifically, as the merge is applied
to the destination index (as part of <option>--merge</option>, and is ignored
if <option>--merge</option> is not specified), <filename>indexer</filename>
will also filter the documents ending up in the destination index, and only
documents that pass the given filter will end up in the final index.
This could be used, for example, in an index where there is a 'deleted' attribute,
where 0 means 'not deleted'. Such an index could be merged with:
<programlisting>
$ indexer --merge main delta --merge-dst-range deleted 0 0
</programlisting>
Any documents marked as deleted (value 1) would be removed from the newly-merged
destination index. It can be added several times to the command line,
to add successive filters to the merge, all of which must be met in order
for a document to become part of the final index.
</para></listitem>
<listitem><para><option>--merge-killlists</option> (and its
shorter alias <option>--merge-klists</option>) changes the way
kill lists are processed when merging indexes. By default, both
kill lists get discarded after a merge. That supports the most typical
main+delta merge scenario. With this option enabled, however, kill lists
from both indexes get concatenated and stored into the destination index.
Note that a source (delta) index kill list will be used to suppress rows
from a destination (main) index at all times.
</para></listitem>
<listitem><para><option>--keep-attrs</option> (added in version 2.1.1-beta)
allows reusing existing attributes on reindexing. Whenever
the index is rebuilt, each new document id is checked for presence in the
"old" index, and if it already exists, its attributes are transferred to
the "new" index; if not found, attributes from the new index are used. If
the user has updated attributes in the index, but not in the actual source
used for the index, all updates will be lost when reindexing; using <option>--keep-attrs</option>
enables saving the updated attribute values from the previous index.
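For instance (an illustrative invocation; the index name is just an example):
<programlisting>
$ indexer myindex --keep-attrs --rotate
</programlisting>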
</para></listitem>
<listitem><para><option>--dump-rows <FILE></option> dumps rows fetched
by SQL source(s) into the specified file, in a MySQL compatible syntax.
Resulting dumps are the exact representation of data as received by
<filename>indexer</filename> and help to reproduce indexing-time issues.
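For instance (the index and file names are illustrative):
<programlisting>
$ indexer myindex --dump-rows rows.sql
</programlisting>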
</para></listitem>
<listitem><para><option>--verbose</option> guarantees that every row that
caused problems indexing (duplicate, zero, or missing document ID;
or file field IO issues; etc) will be reported. By default, this option
is off, and problem summaries may be reported instead.
</para></listitem>
<listitem><para><option>--sighup-each</option> is useful when you are
rebuilding many big indexes, and want each one rotated into
<filename>searchd</filename> as soon as possible. With
<option>--sighup-each</option>, <filename>indexer</filename>
will send a SIGHUP signal to searchd after successfully
completing the work on each index. (The default behavior
is to send a single SIGHUP after all the indexes were built.)
</para></listitem>
<listitem><para><option>--nohup</option> is useful when you want to check your
index with indextool before actually rotating it. indexer won't send
SIGHUP if this option is on.
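A typical check-then-rotate sequence might look as follows (the pid file
path is just an illustration; use your actual <option>pid_file</option> setting):
<programlisting>
$ indexer myindex --rotate --nohup
$ indextool --check myindex --rotate
$ kill -SIGHUP `cat /usr/local/sphinx/var/searchd.pid`
</programlisting>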
</para></listitem>
<listitem><para><option>--print-queries</option> prints out
SQL queries that <filename>indexer</filename> sends to
the database, along with SQL connection and disconnection
events. That is useful to diagnose and fix problems with
SQL sources.
</para></listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="ref-searchd"><title><filename>searchd</filename> command reference</title>
<para><filename>searchd</filename> is the second of the two principal tools in the Sphinx package.
<filename>searchd</filename> is the part of the system which actually handles searches;
it functions as a server and is responsible for receiving queries, processing them and
returning a dataset back to the different APIs for client applications.</para>
<para>Unlike <filename>indexer</filename>, <filename>searchd</filename> is not designed
to be run either from a regular script or from the command line, but instead either
as a daemon to be called from init.d (on Unix/Linux type systems) or to be called
as a service (on Windows-type systems), so not all of the command line options will
always apply; their availability will be build-dependent.</para>
<para>Calling <filename>searchd</filename> is simply a case of:</para>
<programlisting>
$ searchd [OPTIONS]
</programlisting>
<para>The options available to <filename>searchd</filename> on all builds are:</para>
<itemizedlist>
<listitem><para><option>--help</option> (<option>-h</option> for short) lists all of the
parameters that can be called in your particular build of <filename>searchd</filename>.
</para></listitem>
<listitem><para><option>--config <file></option> (<option>-c <file></option> for short)
tells <filename>searchd</filename> to use the given file as its configuration,
just as with <filename>indexer</filename> above.
</para></listitem>
<listitem><para><option>--stop</option> is used to asynchronously stop <filename>searchd</filename>,
using the details of the PID file as specified in the <filename>sphinx.conf</filename> file,
so you may also need to confirm to <filename>searchd</filename> which configuration
file to use with the <option>--config</option> option. NB, calling <option>--stop</option>
will also make sure any changes applied to the indexes with
<link linkend="api-func-updateatttributes"><code>UpdateAttributes()</code></link>
will be applied to the index files themselves. Example:
<programlisting>
$ searchd --config /home/myuser/sphinx.conf --stop
</programlisting>
</para></listitem>
<listitem><para><option>--stopwait</option> is used to synchronously stop <filename>searchd</filename>.
<option>--stop</option> essentially tells the running instance to exit (by sending it a SIGTERM)
and then immediately returns. <option>--stopwait</option> will also attempt to wait until the
running <filename>searchd</filename> instance actually finishes the shutdown (eg. saves all
the pending attribute changes) and exits. Example:
<programlisting>
$ searchd --config /home/myuser/sphinx.conf --stopwait
</programlisting>
Possible exit codes are as follows:
<itemizedlist>
<listitem><para>0 on success;</para></listitem>
<listitem><para>1 if connection to running searchd daemon failed;</para></listitem>
<listitem><para>2 if daemon reported an error during shutdown;</para></listitem>
<listitem><para>3 if daemon crashed during shutdown.</para></listitem>
</itemizedlist>
</para></listitem>
<listitem><para><option>--status</option> command is used to query running
<filename>searchd</filename> instance status, using the connection details
from the (optionally) provided configuration file. It will try to connect
to the running instance using the first configured UNIX socket or TCP port.
On success, it will query for a number of status and performance counter
values and print them. You can use <link linkend="api-func-status">Status()</link>
API call to access the very same counters from your application. Examples:
<programlisting>
$ searchd --status
$ searchd --config /home/myuser/sphinx.conf --status
</programlisting>
</para></listitem>
<listitem><para><option>--pidfile</option> is used to explicitly force
using a PID file (where the <filename>searchd</filename> process number
is stored) despite any other debugging options that say otherwise
(for instance, <option>--console</option>). This is a debugging option.
<programlisting>
$ searchd --console --pidfile
</programlisting>
</para></listitem>
<listitem><para><option>--console</option> is used to force <filename>searchd</filename>
into console mode; typically it will be running as a conventional server application,
and will aim to dump information into the log files (as specified in
<filename>sphinx.conf</filename>). Sometimes though, when debugging issues
in the configuration or the daemon itself, or trying to diagnose hard-to-track-down
problems, it may be easier to force it to dump information directly
to the console/command line from which it is being called. Running in console mode
also means that the process will not be forked (so searches are done in sequence)
and logs will not be written to. (It should be noted that console mode
is not the intended method for running <filename>searchd</filename>.)
You can invoke it as such:
<programlisting>
$ searchd --config /home/myuser/sphinx.conf --console
</programlisting>
</para></listitem>
<listitem><para><option>--logdebug</option>, <option>--logdebugv</option>,
and <option>--logdebugvv</option> options enable additional debug output
in the daemon log. They differ by the logging verbosity level. These are
debugging options, they pollute the log a lot, and thus they should
<emphasis>not</emphasis> be normally enabled. (The normal use case for
these is to enable them temporarily on request, to assist with some
particularly complicated debugging session.)
</para></listitem>
<listitem><para><option>--iostats</option> is used in conjunction with the
logging options (the <option>query_log</option> will need to have been
activated in <filename>sphinx.conf</filename>) to provide more detailed
information on a per-query basis as to the input/output operations
carried out in the course of that query, with a slight performance hit
and of course bigger logs. Further details are available under the
<link linkend="query-log-format">query log format</link> section.
You might start <filename>searchd</filename> thus:
<programlisting>
$ searchd --config /home/myuser/sphinx.conf --iostats
</programlisting>
</para></listitem>
<listitem><para><option>--cpustats</option> is used to provide actual CPU time
report (in addition to wall time) in both query log file (for every given
query) and status report (aggregated). It depends on clock_gettime() system
call and might therefore be unavailable on certain systems. You might start
<filename>searchd</filename> thus:
<programlisting>
$ searchd --config /home/myuser/sphinx.conf --cpustats
</programlisting>
</para></listitem>
<listitem><para><option>--port portnumber</option> (<option>-p</option> for short)
is used to specify the port that <filename>searchd</filename> should listen on,
usually for debugging purposes. This will usually default to 9312, but sometimes
you need to run it on a different port. Specifying it on the command line
will override anything specified in the configuration file. The valid range
is 0 to 65535, but ports numbered 1024 and below usually require
a privileged account in order to run. An example of usage:
<programlisting>
$ searchd --port 9313
</programlisting>
</para></listitem>
<listitem>
<para><option>--listen ( address ":" port | port | path ) [ ":" protocol ]</option>
(or <option>-l</option> for short) works like <option>--port</option>, but allows
you to specify not only the port, but a full listen specification: an IP address
(or hostname) and port number, just a port number, or a Unix-domain socket path
that <filename>searchd</filename> will listen on. If you specify a port number
but not the address, searchd will listen on all network interfaces.
A Unix path is identified by a leading slash. As the last parameter you
can also specify a protocol handler (listener) to be used for
connections on this socket. Supported protocol values are 'sphinx'
(Sphinx 0.9.x API protocol) and 'mysql41' (MySQL protocol used since
4.1 up to at least 5.1).</para>
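<para>A few illustrative invocations (the addresses and paths are examples):
<programlisting>
$ searchd --listen localhost
$ searchd --listen localhost:5000
$ searchd --listen 192.168.0.1:5000
$ searchd --listen /var/run/sphinx.s
$ searchd --listen 9312
$ searchd --listen localhost:9306:mysql41
</programlisting>
</para>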
</listitem>
<listitem><para><option>--index <index></option> (or <option>-i
<index></option> for short) forces this instance of
<filename>searchd</filename> only to serve the specified index.
Like <option>--port</option>, above, this is usually for debugging purposes;
more long-term changes would generally be applied to the configuration file
itself. Example usage:
<programlisting>
$ searchd --index myindex
</programlisting>
</para></listitem>
<listitem><para><option>--strip-path</option> strips the path names from
all the file names referenced from the index (stopwords, wordforms,
exceptions, etc). This is useful for picking up indexes built on another
machine with possibly different path layouts.
</para></listitem>
<listitem><para><option>--replay-flags=<OPTIONS></option> switch,
added in version 2.0.2-beta, can be used to specify a list of extra binary log
replay options. The supported options are:
<itemizedlist>
<listitem><para><option>accept-desc-timestamp</option>,
ignore descending transaction timestamps and replay such
transactions anyway (the default behavior is to exit
with an error).
</para></listitem>
</itemizedlist>
Example:
<programlisting>
$ searchd --replay-flags=accept-desc-timestamp
</programlisting>
</para></listitem>
</itemizedlist>
<para>There are some options for <filename>searchd</filename> that are specific
to Windows platforms, concerning handling as a service; these are only available on Windows binaries.</para>
<para>Note that on Windows searchd will default to <option>--console</option> mode, unless you install it as a service.</para>
<itemizedlist>
<listitem><para><option>--install</option> installs <filename>searchd</filename> as a service
into the Microsoft Management Console (Control Panel / Administrative Tools / Services).
Any other parameters specified on the command line where <option>--install</option>
is specified will also become part of the command line on future starts of the service.
For example, as part of calling <filename>searchd</filename>, you will likely also need
to specify the configuration file with <option>--config</option>, and you would do that
as well as specifying <option>--install</option>. Once called, the usual start/stop
facilities will become available via the management console, so any methods you could
use for starting, stopping and restarting services would also apply to
<filename>searchd</filename>. Example:
<programlisting>
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install
--config C:\Sphinx\sphinx.conf
</programlisting>
If you wanted to have the I/O stats every time you started <filename>searchd</filename>,
you would specify the <option>--iostats</option> option on the same line as the <option>--install</option> command thus:
<programlisting>
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install
--config C:\Sphinx\sphinx.conf --iostats
</programlisting>
</para></listitem>
<listitem><para><option>--delete</option> removes the service from the Microsoft Management Console
and other places where services are registered, after previously installed with
<option>--install</option>. Note, this does not uninstall the software or delete the indexes.
It means the service will not be called from the services systems, and will not be started
on the machine's next start. If currently running as a service, the current instance
will not be terminated (until the next reboot, or <filename>searchd</filename> is called
with <option>--stop</option>). If the service was installed with a custom name
(with <option>--servicename</option>), the same name will need to be specified
with <option>--servicename</option> when calling to uninstall. Example:
<programlisting>
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --delete
</programlisting>
</para></listitem>
<listitem><para><option>--servicename <name></option> applies the given name to
<filename>searchd</filename> when installing or deleting the service, as would appear
in the Management Console; this will default to searchd, but if being deployed on servers
where multiple administrators may log into the system, or a system with multiple
<filename>searchd</filename> instances, a more descriptive name may be applicable.
Note that unless combined with <option>--install</option> or <option>--delete</option>,
this option does not do anything. Example:
<programlisting>
C:\WINDOWS\system32> C:\Sphinx\bin\searchd.exe --install
--config C:\Sphinx\sphinx.conf --servicename SphinxSearch
</programlisting>
</para></listitem>
<listitem><para><option>--ntservice</option> is the option that is passed by the
Management Console to <filename>searchd</filename> to invoke it as a service
on Windows platforms. It would not normally be necessary to call this directly;
this would normally be called by Windows when the service would be started,
although if you wanted to call this as a regular service from the command-line
(as the complement to <option>--console</option>) you could do so in theory.
</para></listitem>
<listitem><para><option>--safetrace</option> forces <filename>searchd</filename>
to use only the system backtrace() call in crash reports. In certain (rare) scenarios,
this might be a "safer" way to get that report. This is a debugging option.
</para></listitem>
<listitem><para><option>--nodetach</option> switch (Linux only) tells
<filename>searchd</filename> not to detach into the background. This will also
cause log entries to be printed to the console. Query processing operates
as usual. This is a debugging option.
</para></listitem>
</itemizedlist>
<para>
Last but not least, as every other daemon, <filename>searchd</filename> supports a number of signals.
<variablelist>
<varlistentry>
<term>SIGTERM</term>
<listitem><para>Initiates a clean shutdown. New queries will not be handled; but queries
that are already started will not be forcibly interrupted.</para></listitem>
</varlistentry>
<varlistentry>
<term>SIGHUP</term>
<listitem><para>Initiates index rotation. Depending on the value of
<link linkend="conf-seamless-rotate">seamless_rotate</link> setting,
new queries might be shortly stalled; clients will receive temporary
errors.</para></listitem>
</varlistentry>
<varlistentry>
<term>SIGUSR1</term>
<listitem><para>Forces reopen of searchd log and query log files, letting
you implement log file rotation.</para></listitem>
</varlistentry>
</variablelist>
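For example, to reopen the log files you could send SIGUSR1 to the running
instance (the pid file path below is just an illustration; use your actual
<option>pid_file</option> setting):
<programlisting>
$ kill -USR1 `cat /usr/local/sphinx/var/searchd.pid`
</programlisting>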
</para>
</sect1>
<sect1 id="ref-spelldump"><title><filename>spelldump</filename> command reference</title>
<para><filename>spelldump</filename> is one of the helper tools within the Sphinx package.</para>
<para>It is used to extract the contents of a dictionary file that uses
<filename>ispell</filename> or <filename>MySpell</filename> format, which
can help build word lists for <glossterm>wordforms</glossterm> - all of
the possible forms are pre-built for you.</para>
<para>Its general usage is:</para>
<programlisting>
spelldump [options] <dictionary> <affix> [result] [locale-name]
</programlisting>
<para>The two main parameters are the dictionary's main file and its affix
file; usually these are named as
<filename>[language-prefix].dict</filename> and
<filename>[language-prefix].aff</filename> and will be available with most
common Linux distributions, as well as various places online.</para>
<para><option>[result]</option> specifies where the dictionary data should
be output to, and <option>[locale-name]</option> additionally specifies
the locale details you wish to use.</para>
<para>There is an additional option, <option>-c [file]</option>, which
specifies a file for case conversion details.</para>
<para>Examples of its usage are:</para>
<programlisting>
spelldump en.dict en.aff
spelldump ru.dict ru.aff ru.txt ru_RU.CP1251
spelldump ru.dict ru.aff ru.txt .1251
</programlisting>
<para>The results file will contain a list of all the words in the
dictionary in alphabetical order, output in the format of a wordforms file,
which you can then customize for your specific circumstances. An example
of the result file:</para>
<programlisting>
zone > zone
zoned > zoned
zoning > zoning
</programlisting>
</sect1>
<sect1 id="ref-indextool"><title><filename>indextool</filename> command reference</title>
<para>
<filename>indextool</filename> is one of the helper tools within
the Sphinx package, introduced in version 0.9.9-rc2. It is used to
dump miscellaneous debug information about the physical index.
(Additional functionality such as index verification is planned
in the future, hence the indextool name rather than just indexdump.)
Its general usage is:
</para>
<programlisting>
indextool <command> [options]
</programlisting>
<para>
The following options apply to all commands:
<itemizedlist>
<listitem><para><option>--config <file></option> (<option>-c <file></option> for short)
overrides the built-in config file names.
</para></listitem>
<listitem><para><option>--quiet</option> (<option>-q</option> for short)
keeps indextool quiet - it will not output a banner, etc.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The commands are as follows:
</para>
<itemizedlist>
<listitem><para><option>--checkconfig</option> just loads and verifies the
config file to check that it is valid and has no syntax errors.
This option was added in version 2.1.1-beta.
</para></listitem>
<listitem><para><option>--build-infixes INDEXNAME</option> builds infixes for
an existing dict=keywords index (upgrades .sph, .spi in place). You can use
this option for legacy index files that already use dict=keywords, but now
need to support infix searching too; updating the index files with indextool
may prove easier or faster than regenerating them from scratch with indexer.
This option was added in version 2.1.1-beta.
</para></listitem>
<listitem><para><option>--dumpheader FILENAME.sph</option> quickly dumps
the provided index header file without touching any other index files
or even the configuration file. The report provides a breakdown of
all the index settings, in particular the entire attribute and
field list. Prior to 0.9.9-rc2, this command was present in the now-removed
CLI <filename>search</filename> utility.
</para></listitem>
<listitem><para><option>--dumpconfig FILENAME.sph</option> dumps
the index definition from the given index header file in (almost)
compliant <filename>sphinx.conf</filename> file format.
Added in version 2.0.1-beta.
</para></listitem>
<listitem><para><option>--dumpheader INDEXNAME</option> dumps index header
by index name, looking up the header path in the configuration file.
</para></listitem>
<listitem><para><option>--dumpdict INDEXNAME</option> dumps the index dictionary. This was
added in version 2.1.1-beta.
</para></listitem>
<listitem><para><option>--dumpdocids INDEXNAME</option> dumps document IDs
by index name. It takes the data from the attribute (.spa) file and therefore
requires docinfo=extern to work.
</para></listitem>
<listitem><para><option>--dumphitlist INDEXNAME KEYWORD</option> dumps all
the hits (occurrences) of a given keyword in a given index, with keyword
specified as text.
</para></listitem>
<listitem><para><option>--dumphitlist INDEXNAME --wordid ID</option> dumps all
the hits (occurrences) of a given keyword in a given index, with keyword
specified as internal numeric ID.
</para></listitem>
<listitem><para><option>--fold INDEXNAME OPTFILE</option>
This option is useful to see how the tokenizer actually processes the input.
You can feed indextool with text from the given file, or from stdin otherwise.
The output will have separators replaced with spaces (according to your
charset_table settings) and the letters in words lowercased.
</para></listitem>
<listitem><para><option>--htmlstrip INDEXNAME</option> filters stdin using
HTML stripper settings for a given index, and prints the filtering
results to stdout. Note that the settings will be taken from sphinx.conf,
and not the index header.
</para></listitem>
<listitem><para><option>--morph INDEXNAME</option> applies morphology to the
given stdin and prints the result to stdout.
</para></listitem>
<listitem><para><option>--check INDEXNAME</option> checks the index data
files for consistency errors that might be introduced either by bugs
in <filename>indexer</filename> or by hardware faults. Starting with
version 2.1.1-beta, <option>--check</option> also works on RT indexes, RAM and disk chunks.
(A combined usage example is shown after this list.)
</para></listitem>
<listitem><para><option>--strip-path</option> strips the path names from
all the file names referenced from the index (stopwords, wordforms,
exceptions, etc). This is useful for checking indexes built on another
machine with possibly different path layouts.
</para></listitem>
<listitem><para><option>--optimize-rt-klists</option> optimizes
the kill list memory use in the disk chunk of a given RT index. That
is a one-off optimization intended for rather old RT indexes, created
by development versions prior to 1.10-beta release. As of 1.10-beta
releases, this kill list optimization (purging) should happen
automatically, and there should never be a need to use this option.
</para></listitem>
<listitem><para><option>--rotate</option> works only with <option>--check</option> and defines
whether to check the index waiting for rotation, i.e. the one with the .new extension. This
is useful when you want to check your index before actually using it.
</para></listitem>
</itemizedlist>
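<para>A couple of illustrative invocations (the index and config file names are examples):
<programlisting>
$ indextool --config /home/myuser/sphinx.conf --check myindex
$ indextool --config /home/myuser/sphinx.conf --dumpheader myindex
</programlisting>
</para>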
</sect1>
<sect1 id="ref-wordbreaker"><title><filename>wordbreaker</filename> command reference</title>
<para>
<filename>wordbreaker</filename> is one of the helper tools within
the Sphinx package, introduced in version 2.1.1-beta. It is used to
split compound words, such as those commonly found in URLs, into their
component words. For example, this tool can split "lordoftherings" into its four
component words, or "http://manofsteel.warnerbros.com" into "man
of steel warner bros". This helps searching, without requiring
prefixes or infixes: searching for "sphinx" wouldn't match "sphinxsearch",
but if you break the compound word and index the separate components,
you'll get a match without the cost of the larger index files required by
prefix and infix indexing.
</para>
<para>Examples of its usage are:</para>
<programlisting>
echo manofsteel | bin/wordbreaker -dict dict.txt split
</programlisting>
<para>The input stream will be separated into words using the <option>-dict</option>
dictionary file. (The dictionary should match the language of the compound word.)
The <option>split</option> command breaks words from the standard input, and
outputs the result in the standard output. There are also <option>test</option> and
<option>bench</option> commands that let you test the splitting quality and benchmark
the splitting functionality.
</para>
<para>Wordbreaker needs a dictionary to recognize individual substrings within a string. To
differentiate between different guesses, it uses the relative frequency of each
word in the dictionary: higher frequency means higher split probability. You can
generate such a file using the <filename>indexer</filename> tool, as in
<programlisting>
indexer --buildstops dict.txt 100000 --buildfreqs myindex -c /path/to/sphinx.conf
</programlisting>
which will write the 100,000 most frequent words, along with their counts, from
myindex into dict.txt. The output file is a text file, so you can edit it by hand,
if need be, to add or remove words.
</para>
<para>See
<ulink url="http://sphinxsearch.com/blog/2013/01/29/a-new-tool-in-the-trunk-wordbreaker/">
http://sphinxsearch.com/blog/2013/01/29/a-new-tool-in-the-trunk-wordbreaker/</ulink>
for more on this tool.
</para>
</sect1>
</chapter>
<chapter id="sphinxql-reference"><title>SphinxQL reference</title>
<para>
SphinxQL is our SQL dialect that exposes all of the search daemon
functionality using a standard SQL syntax with a few Sphinx-specific
extensions. Everything available via the SphinxAPI is also available
via SphinxQL but not vice versa; for instance, writes into RT indexes
are only available via SphinxQL. This chapter documents the supported
SphinxQL statement syntax.
</para>
<sect1 id="sphinxql-select"><title>SELECT syntax</title>
<programlisting>
SELECT
select_expr [, select_expr ...]
FROM index [, index2 ...]
[WHERE where_condition]
[GROUP [N] BY {col_name | expr_alias} [, {col_name | expr_alias}]]
[WITHIN GROUP ORDER BY {col_name | expr_alias} {ASC | DESC}]
[HAVING having_condition]
[ORDER BY {col_name | expr_alias} {ASC | DESC} [, ...]]
[LIMIT [offset,] row_count]
[OPTION opt_name = opt_value [, ...]]
[FACET facet_options[ FACET facet_options][ ...]]
</programlisting>
<para>
<b>SELECT</b> statement was introduced in version 0.9.9-rc2.
Its syntax is based upon regular SQL but adds several Sphinx-specific
extensions and has a few omissions (such as, currently, missing support for JOINs).
Specifically,
<itemizedlist>
<listitem><para>Column list clause. Column names, arbitrary expressions,
and star ('*') are all allowed (ie.
<code>SELECT id, group_id*123+456 AS expr1 FROM test1</code>
will work). Unlike in regular SQL, all computed expressions must be aliased
with a valid identifier. Starting with version 2.0.1-beta, <code>AS</code>
is optional.
</para>
</listitem>
<!-- FIXME? move this to the functions reference? -->
<listitem>
<para>EXIST() function (added in version 2.1.1-beta) is supported.
EXIST ( "attr-name", default-value )
replaces non-existent columns with default values. It returns either a value
of an attribute specified by 'attr-name', or 'default-value' if that
attribute does not exist. As of 2.1.1-beta it does not support STRING
or MVA attributes. This function is handy when you are searching through
several indexes with different schemas.
</para>
<programlisting>
SELECT *, EXIST('gid', 6) as cnd FROM i1, i2 WHERE cnd>5
</programlisting>
</listitem>
<!-- FIXME? move this to the functions reference? -->
<listitem>
<para>SNIPPET() function (added in version 2.1.1-beta) is supported.
This is a wrapper around the snippets functionality, similar to what is
available via CALL SNIPPETS. The first two arguments are: the text
to highlight, and a query. Starting with 2.2.1-beta it's possible to pass
<link linkend="api-func-buildexcerpts">options</link> to the function.
The intended use is as follows:
<programlisting>
SELECT id, SNIPPET(myUdf(id), 'my.query', 'limit=100')
FROM myIndex WHERE MATCH('my.query')
</programlisting>
where myUdf() would be a UDF that fetches a document by its ID from
some external storage. This enables applications to fetch the entire
result set directly from Sphinx in one query, without having to separately
fetch the documents in the application and then send them back to Sphinx
for highlighting.
</para>
<para>
SNIPPET() is a so-called "post limit" function, meaning that computing
snippets is postponed not just until the entire final result set is ready,
but even after the LIMIT clause is applied. For example, with a LIMIT 20,10
clause, SNIPPET() will be called at most 10 times.
</para>
<para>
Table functions are a mechanism of post-query result set processing. They were
added in 2.2.1-beta. Table functions take an arbitrary result set as their
input, and return a new, processed set as their output. The first argument
should be the input result set, but a table function can optionally take
and handle more arguments. Table functions can completely change the result
set, including the schema. For now, only built-in table functions are
supported. UDFs are planned for when the internal call interface is stabilized.
Table functions work for both the outer SELECT and nested SELECTs.
<itemizedlist>
<listitem>
<para>REMOVE_REPEATS ( result_set, column, offset, limit ) - removes repeated
adjacent rows with the same 'column' value.</para>
</listitem>
</itemizedlist>
<programlisting>
SELECT REMOVE_REPEATS((SELECT * FROM dist1), gid, 0, 10)
</programlisting>
</para>
</listitem>
<listitem>
<para>FROM clause. FROM clause should contain the list of indexes
to search through. Unlike in regular SQL, comma means enumeration of
full-text indexes as in <link linkend="api-func-query">Query()</link>
API call rather than JOIN. An index name must follow the rules for
a C identifier.
</para></listitem>
<listitem><para>WHERE clause. This clause will map both to fulltext query
and filters. Comparison operators (=, !=, <, >, <=, >=), IN,
AND, NOT, and BETWEEN are all supported and map directly to filters.
OR is not supported yet but will be in the future. MATCH('query')
is supported and maps to fulltext query. Query will be interpreted
according to <link linkend="extended-syntax">full-text query language rules</link>.
There must be at most one MATCH() in the clause. Starting with version
2.0.1-beta, <code>{col_name | expr_alias} [NOT] IN @uservar</code>
condition syntax is supported. (Refer to <xref linkend="sphinxql-set"/>
for a discussion of global user variables.)
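For instance (an illustrative query; the index and attribute names are examples):
<programlisting>
SELECT * FROM test1 WHERE MATCH('hello world') AND group_id IN (3,4,5) AND price BETWEEN 10 AND 100
</programlisting>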
</para></listitem>
<listitem><para>GROUP BY clause. Supports grouping by multiple columns
or computed expressions:
<programlisting>
SELECT *, group_id*1000+article_type AS gkey FROM example GROUP BY gkey
SELECT id FROM products GROUP BY region, price
</programlisting>
Implicit grouping is supported when using aggregate functions without
specifying a GROUP BY clause. Consider these two queries, which are equivalent:
<programlisting>
SELECT MAX(id), MIN(id), COUNT(*) FROM books
SELECT MAX(id), MIN(id), COUNT(*), 1 AS grp FROM books GROUP BY grp
</programlisting>
Aggregate functions (AVG(), MIN(), MAX(), SUM()) in column list
clause are supported. Arguments to aggregate functions can be either
plain attributes or arbitrary expressions. COUNT(*), COUNT(DISTINCT attr)
are supported. Currently there can be at most one COUNT(DISTINCT) per
query, and its argument needs to be an attribute. Both current restrictions
on COUNT(DISTINCT) might be lifted in the future. A special GROUPBY()
function is also supported. It returns the GROUP BY key. That is
particularly useful when grouping by an MVA value, in order to pick the
specific value that was used to create the current group.
<programlisting>
SELECT *, AVG(price) AS avgprice, COUNT(DISTINCT storeid), GROUPBY()
FROM products
WHERE MATCH('ipod')
GROUP BY vendorid
</programlisting>
</para>
<para>
Starting with 2.0.1-beta, GROUP BY on a string attribute is supported,
with respect for current collation (see <xref linkend="collations"/>).
</para>
<para>Starting with 2.2.1-beta, you can query Sphinx to return (no more than)
N top matches for each group according to WITHIN GROUP ORDER BY.</para>
<programlisting>
SELECT id FROM products GROUP 3 BY category
</programlisting>
<para>
You can sort the result set by (an alias of) the aggregate value.
<programlisting>
SELECT group_id, MAX(id) AS max_id
FROM my_index WHERE MATCH('the')
GROUP BY group_id ORDER BY max_id DESC
</programlisting>
</para>
</listitem>
<!-- FIXME? move this to the functions reference? -->
<listitem>
<para>GROUP_CONCAT() function is supported, starting with version 2.1.1-beta.
When you group by an attribute, the result set only shows attributes from a single document representing the whole group.
GROUP_CONCAT() produces a comma-separated list of the attribute values of all documents in the group.
</para>
<programlisting>
SELECT id, GROUP_CONCAT(price) as pricesList, GROUPBY() AS name FROM shops GROUP BY shopName;
</programlisting>
</listitem>
<!-- FIXME? move this to the functions reference? -->
<listitem>
<para>
ZONESPANLIST() function returns pairs of matched zone spans. Each pair
contains the matched zone span identifier, a colon, and the order number
of the matched zone span. For example, if a document reads
<![CDATA[<b><i>text</i> the <i>text</i></b>]]>, and you query for
'ZONESPAN:(i,b) text', then ZONESPANLIST() will return the string
"1:1 1:2 2:1" meaning that the first zone span matched "text"
in spans 1 and 2, and the second zone span in span 1 only.
This was added in version 2.1.1-beta.
</para>
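<para>An illustrative query (assuming an index with 'i' and 'b' zones indexed):
<programlisting>
SELECT id, ZONESPANLIST() AS zspans FROM myindex WHERE MATCH('ZONESPAN:(i,b) text')
</programlisting>
</para>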
</listitem>
<listitem>
<para>WITHIN GROUP ORDER BY clause. This is a Sphinx specific
extension that lets you control how the best row within a group
will be selected. The syntax matches that of the regular ORDER BY
clause:
<programlisting>
SELECT *, INTERVAL(posted,NOW()-7*86400,NOW()-86400) AS timeseg, WEIGHT() AS w
FROM example WHERE MATCH('my search query')
GROUP BY siteid
WITHIN GROUP ORDER BY w DESC
ORDER BY timeseg DESC, w DESC
</programlisting>
Starting with 2.0.1-beta, WITHIN GROUP ORDER BY on a string attribute is supported,
with respect for current collation (see <xref linkend="collations"/>).
</para></listitem>
<listitem><para>
HAVING clause. This is used to filter on GROUP BY values. It was added in
2.2.1-beta and currently supports only one filtering condition.
<programlisting>
SELECT id FROM plain GROUP BY title HAVING group_id=16;
SELECT id FROM plain GROUP BY attribute HAVING COUNT(*)>1;
</programlisting>
<para>
Because HAVING is implemented as post-processing of the whole result set, the result
set for a query with HAVING could contain fewer rows than <option>max_matches</option> allows.
</para>
</para></listitem>
<listitem><para>ORDER BY clause. Unlike in regular SQL, only column names
(not expressions) are allowed, and explicit ASC and DESC are required.
The columns, however, can be aliases of computed expressions:
</para>
<programlisting>
SELECT *, WEIGHT()*10+docboost AS skey FROM example ORDER BY skey
</programlisting>
<para>
Starting with 2.1.1-beta, you can use subqueries to speed up specific searches
that involve reranking, by postponing hard (slow) calculations as late as possible.
For example,
<programlisting>
SELECT id, a_slow_expression() AS cond FROM an_index ORDER BY id ASC, cond DESC LIMIT 100;
</programlisting>
could be better written as
<programlisting>
SELECT * FROM (SELECT id, a_slow_expression() AS cond FROM an_index ORDER BY id ASC LIMIT 100) ORDER BY cond DESC;
</programlisting>
because in the first case the slow expression would be evaluated for the whole set,
while in the second one it would be evaluated just for a subset of values.
</para>
<para>
Starting with 2.0.1-beta, ORDER BY on a string attribute is supported,
with respect for current collation (see <xref linkend="collations"/>).
</para>
<para>
Starting with 2.0.2-beta, ORDER BY RAND() syntax is supported.
Note that this syntax is actually going to randomize the weight
values and then order matches by those randomized weights.
</para>
</listitem>
<listitem><para>LIMIT clause. Both LIMIT N and LIMIT M,N forms are supported.
Unlike in regular SQL (but like in Sphinx API), an implicit LIMIT 0,20
is present by default.
</para></listitem>
<listitem><para>OPTION clause. This is a Sphinx specific extension that
lets you control a number of per-query options. The syntax is:
<programlisting>
OPTION <optionname>=<value> [ , ... ]
</programlisting>
Supported options and respectively allowed values are:
<itemizedlist>
<listitem><para>'agent_query_timeout' - integer (max time in milliseconds to wait for remote queries to complete,
see <link linkend="conf-agent-query-timeout">agent_query_timeout</link> under Index configuration options for details)</para></listitem>
<listitem><para>'boolean_simplify' - 0 or 1, enables simplifying the query to speed it up</para></listitem>
<listitem><para>'comment' - string, user comment that gets copied to a query log file</para></listitem>
<listitem><para>'cutoff' - integer (max found matches threshold)</para></listitem>
<listitem><para>'field_weights' - a named integer list (per-field user weights for ranking)</para></listitem>
<listitem><para>'global_idf' - use global statistics (frequencies)
from the <link linkend="conf-global-idf">global_idf file</link> for IDF
computations, rather than the local index statistics.
Added in version 2.1.1-beta.
</para></listitem>
<listitem><para>'idf' - a quoted, comma-separated list of IDF computation flags. Added in version 2.1.1-beta.
Known flags are:
<itemizedlist>
<listitem><para>normalized: BM25 variant, idf = log((N-n+1)/n), as per Robertson et al</para></listitem>
<listitem><para>plain: plain variant, idf = log(N/n), as per Sparck-Jones</para></listitem>
<listitem><para>tfidf_normalized (added in 2.2.1-beta): additionally divide IDF
by query word count, so that TF*IDF fits into [0, 1] range</para></listitem>
<listitem><para>tfidf_unnormalized (added in 2.2.1-beta): do not additionally
divide IDF by query word count</para></listitem>
</itemizedlist>
where <b>N</b> is the collection size and <b>n</b> is the number of matched
documents.
</para><para>
The historically default IDF (Inverse Document Frequency) in Sphinx
is equivalent to <code>OPTION idf='normalized,tfidf_normalized'</code>,
and those normalizations may cause several undesired effects.
</para><para>
First, idf=normalized causes keyword penalization. For instance,
if you search for [the | something] and [the] occurs
in more than 50% of the documents, then documents with both keywords
[the] and [something] will get <b>less</b> weight than documents with
just one keyword [something]. Using <code>OPTION idf=plain</code> avoids this.
Plain IDF varies in [0, log(N)] range, and keywords
are never penalized; while the normalized IDF varies in [-log(N), log(N)]
range, and too frequent keywords are penalized.
</para><para>
Second, idf=tfidf_normalized causes IDF drift over queries. Historically,
we additionally divided IDF by query keyword count, so that the entire
sum(tf*idf) over all keywords would still fit into [0,1] range. However,
that means that queries [word1] and [word1 | nonmatchingword2] would
assign different weights to the exactly same result set, because the IDFs
for both "word1" and "nonmatchingword2" would be divided by 2.
<code>OPTION idf=tfidf_unnormalized</code> fixes that. Note that
BM25, BM25A, and BM25F() ranking factors will be scaled accordingly
once you disable this normalization.
</para><para>
IDF flags can be mixed; 'plain' and 'normalized' are mutually exclusive;
'tfidf_unnormalized' and 'tfidf_normalized' are mutually exclusive;
and unspecified flags in such a mutually exclusive group take their
defaults. That means that <code>OPTION idf=plain</code> is equivalent
to a complete <code>OPTION idf='plain,tfidf_normalized'</code> specification.
</para></listitem>
<listitem><para>'local_df' (added in 2.2.1-beta) - 0 or 1, automatically sums DFs over all the
local parts of a distributed index, so that the IDF is consistent (and precise) over
a locally sharded index.
</para></listitem>
<listitem><para>'index_weights' - a named integer list (per-index user weights for ranking)</para></listitem>
<listitem><para>'max_matches' - integer (per-query max matches value)</para>
<para>
The maximum number of matches that the daemon keeps in RAM for each index and can return to the client.
Default is 1000.
</para>
<para>
Introduced in order to control and limit RAM usage, the <code>max_matches</code>
setting defines how many matches will be kept in RAM while searching each index.
Every match found will still be <emphasis>processed</emphasis>, but only
the best N of them will be kept in memory and returned to the client in the end.
Assume that the index contains 2,000,000 matches for the query. You rarely
(if ever) need to retrieve <emphasis>all</emphasis> of them. Rather, you need
to scan all of them, but only choose the "best" at most, say, 500 by some criteria
(ie. sorted by relevance, or price, or anything else), and display those
500 matches to the end user in pages of 20 to 100 matches. And tracking
only the best 500 matches is much more RAM and CPU efficient than keeping
all 2,000,000 matches, sorting them, and then discarding everything but
the first 20 needed to display the search results page. <code>max_matches</code>
controls N in that "best N" amount.
</para>
<para>
This parameter noticeably affects per-query RAM and CPU usage.
Values of 1,000 to 10,000 are generally fine, but higher limits must be
used with care. Recklessly raising <code>max_matches</code> to 1,000,000
means that <filename>searchd</filename> will have to allocate and
initialize 1-million-entry matches buffer for <emphasis>every</emphasis>
query. That will obviously increase per-query RAM usage, and in some cases
can also noticeably impact performance.
</para></listitem>
<listitem><para>'max_query_time' - integer (max search time threshold, msec)</para></listitem>
<listitem><para>'max_predicted_time' - integer (max predicted search time, see <xref linkend="conf-predicted-time-costs"/>)</para></listitem>
<listitem><para>'ranker' - any of 'proximity_bm25', 'bm25', 'none', 'wordcount', 'proximity',
'matchany', 'fieldmask', 'sph04', 'expr', or 'export' (refer to <xref linkend="weighting"/>
for more details on each ranker)</para></listitem>
<listitem><para>'retry_count' - integer (distributed retries count)</para></listitem>
<listitem><para>'retry_delay' - integer (distributed retry delay, msec)</para></listitem>
<listitem><para>'reverse_scan' - 0 or 1, lets you control the order in which full-scan query processes the rows</para></listitem>
<listitem><para>'sort_method' - 'pq' (priority queue, set by default) or 'kbuffer' (gives faster sorting for already pre-sorted data, e.g. index data sorted by id). The
result set is in both cases the same; picking one option or the other may just improve (or worsen!) performance. This option was added in version 2.1.1-beta.</para>
</listitem>
<listitem><para>'rand_seed' - lets you specify a specific integer seed value
for an <code>ORDER BY RAND()</code> query, for example: ... OPTION <code>rand_seed=1234</code>.
By default, a new and different seed value is autogenerated for every query.
</para></listitem>
</itemizedlist>
Example:
<programlisting>
SELECT * FROM test WHERE MATCH('@title hello @body world')
OPTION ranker=bm25, max_matches=3000,
field_weights=(title=10, body=3), agent_query_timeout=10000
</programlisting>
</para></listitem>
<listitem><para>FACET clause. This Sphinx specific extension enables faceted search with subtree optimization.
It is capable of returning multiple result sets with a single SQL statement, without the need for complicated <link linkend="sphinxql-multi-queries">multi-queries</link>.
FACET clauses should be written at the very end of SELECT statements with spaces between them.
<programlisting>
FACET {expr_list} [BY {expr_list}] [ORDER BY {expr | FACET()} {ASC | DESC}] [LIMIT [offset,] count]
SELECT * FROM test FACET brand_id FACET categories;
SELECT * FROM test FACET brand_name BY brand_id ORDER BY brand_name ASC FACET property;
</programlisting>
Working example:
<programlisting>
mysql> SELECT *, IN(brand_id,1,2,3,4) AS b FROM facetdemo WHERE MATCH('Product') AND b=1 LIMIT 0,10
FACET brand_name, brand_id BY brand_id ORDER BY brand_id ASC
FACET property ORDER BY COUNT(*) DESC
FACET INTERVAL(price,200,400,600,800) ORDER BY FACET() ASC
FACET categories ORDER BY FACET() ASC;
+------+-------+----------+-------------------+-------------+----------+------------+------+
| id | price | brand_id | title | brand_name | property | categories | b |
+------+-------+----------+-------------------+-------------+----------+------------+------+
| 1 | 668 | 3 | Product Four Six | Brand Three | Three | 11,12,13 | 1 |
| 2 | 101 | 4 | Product Two Eight | Brand Four | One | 12,13,14 | 1 |
| 8 | 750 | 3 | Product Ten Eight | Brand Three | Five | 13 | 1 |
| 9 | 49 | 1 | Product Ten Two | Brand One | Three | 13,14,15 | 1 |
| 13 | 613 | 1 | Product Six Two | Brand One | Eight | 13 | 1 |
| 20 | 985 | 2 | Product Two Six | Brand Two | Nine | 10 | 1 |
| 22 | 501 | 3 | Product Five Two | Brand Three | Four | 12,13,14 | 1 |
| 23 | 765 | 1 | Product Six Seven | Brand One | Nine | 11,12 | 1 |
| 28 | 992 | 1 | Product Six Eight | Brand One | Two | 12,13 | 1 |
| 29 | 259 | 1 | Product Nine Ten | Brand One | Five | 12,13,14 | 1 |
+------+-------+----------+-------------------+-------------+----------+------------+------+
+-------------+----------+----------+
| brand_name | brand_id | count(*) |
+-------------+----------+----------+
| Brand One | 1 | 1012 |
| Brand Two | 2 | 1025 |
| Brand Three | 3 | 994 |
| Brand Four | 4 | 973 |
+-------------+----------+----------+
+----------+----------+
| property | count(*) |
+----------+----------+
| One | 427 |
| Five | 420 |
| Seven | 420 |
| Two | 418 |
| Three | 407 |
| Six | 401 |
| Nine | 396 |
| Eight | 387 |
| Four | 371 |
| Ten | 357 |
+----------+----------+
+---------------------------------+----------+
| interval(price,200,400,600,800) | count(*) |
+---------------------------------+----------+
| 0 | 799 |
| 1 | 795 |
| 2 | 757 |
| 3 | 833 |
| 4 | 820 |
+---------------------------------+----------+
+------------+----------+
| categories | count(*) |
+------------+----------+
| 10 | 961 |
| 11 | 1653 |
| 12 | 1998 |
| 13 | 2090 |
| 14 | 1058 |
| 15 | 347 |
+------------+----------+
</programlisting>
</para>
</listitem>
<listitem>
<para>
subselects, starting with 2.2.1-beta, in the format SELECT * FROM (SELECT ... ORDER BY cond1 LIMIT X) ORDER BY cond2 LIMIT Y. The outer SELECT allows only ORDER BY and LIMIT clauses.
See <ulink url="http://sphinxsearch.com/blog/2013/05/14/subselects/">http://sphinxsearch.com/blog/2013/05/14/subselects/</ulink> for more details.
</para>
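<para>
For instance, a sketch (the index and column names are illustrative): pick
the 100 most relevant matches first, then re-sort just those by price:
<programlisting>
SELECT * FROM (
    SELECT * FROM products WHERE MATCH('phone')
    ORDER BY WEIGHT() DESC LIMIT 100
) ORDER BY price ASC LIMIT 20
</programlisting>
</para>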
</listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="sphinxql-select-sysvar"><title>SELECT @@system_variable syntax</title>
<programlisting>
SELECT @@system_variable [LIMIT [offset,] row_count]
</programlisting>
<para>
Added in version 2.0.2-beta, this is currently a placeholder
query that does nothing and reports success. It exists in order
to keep compatibility with frameworks and connectors that
automatically execute this statement.
</para>
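<para>
A minimal sketch (the variable name is illustrative; per the description
above, the statement is simply accepted and success is reported):
<programlisting>
mysql> SELECT @@max_allowed_packet;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</para>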
</sect1>
<sect1 id="sphinxql-show-meta"><title>SHOW META syntax</title>
<programlisting>
SHOW META [ LIKE pattern ]
</programlisting>
<para><b>SHOW META</b> shows additional meta-information about the latest
query, such as query time and keyword statistics. IO and CPU counters will only be available if searchd was started with the --iostats and --cpustats switches, respectively.
The additional predicted_time, dist_predicted_time, and [{local|dist}]_fetched_[{docs|hits|skips}] counters will only be available if searchd was configured with
<link linkend="conf-predicted-time-costs">predicted time costs</link> and the query specified max_predicted_time in its OPTION clause.
<programlisting>
mysql> SELECT * FROM test1 WHERE MATCH('test|one|two');
+------+--------+----------+------------+
| id | weight | group_id | date_added |
+------+--------+----------+------------+
| 1 | 3563 | 456 | 1231721236 |
| 2 | 2563 | 123 | 1231721236 |
| 4 | 1480 | 2 | 1231721236 |
+------+--------+----------+------------+
3 rows in set (0.01 sec)
mysql> SHOW META;
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| total | 3 |
| total_found | 3 |
| time | 0.005 |
| keyword[0] | test |
| docs[0] | 3 |
| hits[0] | 5 |
| keyword[1] | one |
| docs[1] | 1 |
| hits[1] | 2 |
| keyword[2] | two |
| docs[2] | 1 |
| hits[2] | 2 |
| cpu_time | 0.350 |
| io_read_time | 0.004 |
| io_read_ops | 2 |
| io_read_kbytes | 0.4 |
| io_write_time | 0.000 |
| io_write_ops | 0 |
| io_write_kbytes | 0.0 |
| agents_cpu_time | 0.000 |
| agent_io_read_time | 0.000 |
| agent_io_read_ops | 0 |
| agent_io_read_kbytes | 0.0 |
| agent_io_write_time | 0.000 |
| agent_io_write_ops | 0 |
| agent_io_write_kbytes | 0.0 |
+-----------------------+-------+
25 rows in set (0.00 sec)
</programlisting>
</para>
<para>
Starting version 2.1.1-beta, you can also use the optional LIKE clause.
It lets you pick just the variables that match a pattern. The pattern syntax
is that of regular SQL wildcards, that is, '%' means any number of any
characters, and '_' means a single character:
<programlisting>
mysql> SHOW META LIKE 'total%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| total | 3 |
| total_found | 3 |
+-----------------------+-------+
2 rows in set (0.00 sec)
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-show-warnings"><title>SHOW WARNINGS syntax</title>
<programlisting>
SHOW WARNINGS
</programlisting>
<para><b>SHOW WARNINGS</b> statement, introduced in version 0.9.9-rc2,
can be used to retrieve the warning
produced by the latest query. Note that unlike warnings, errors are
returned in direct response to the query itself:
<programlisting>
mysql> SELECT * FROM test1 WHERE MATCH('@@title hello') \G
ERROR 1064 (42000): index test1: syntax error, unexpected TOK_FIELDLIMIT
near '@title hello'
mysql> SELECT * FROM test1 WHERE MATCH('@title -hello') \G
ERROR 1064 (42000): index test1: query is non-computable (single NOT operator)
mysql> SELECT * FROM test1 WHERE MATCH('"test doc"/3') \G
*************************** 1. row ***************************
id: 4
weight: 2500
group_id: 2
date_added: 1231721236
1 row in set, 1 warning (0.00 sec)
mysql> SHOW WARNINGS \G
*************************** 1. row ***************************
Level: warning
Code: 1000
Message: quorum threshold too high (words=2, thresh=3); replacing quorum operator
with AND operator
1 row in set (0.00 sec)
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-show-status"><title>SHOW STATUS syntax</title>
<programlisting>
SHOW STATUS [ LIKE pattern ]
</programlisting>
<para><b>SHOW STATUS</b>, introduced in version 0.9.9-rc2,
displays a number of useful performance counters. IO and CPU
counters will only be available if searchd was started with --iostats and --cpustats
switches respectively.
<programlisting>
mysql> SHOW STATUS;
+--------------------+-------+
| Counter | Value |
+--------------------+-------+
| uptime | 216 |
| connections | 3 |
| maxed_out | 0 |
| command_search | 0 |
| command_excerpt | 0 |
| command_update | 0 |
| command_keywords | 0 |
| command_persist | 0 |
| command_status | 0 |
| agent_connect | 0 |
| agent_retry | 0 |
| queries | 10 |
| dist_queries | 0 |
| query_wall | 0.075 |
| query_cpu | OFF |
| dist_wall | 0.000 |
| dist_local | 0.000 |
| dist_wait | 0.000 |
| query_reads | OFF |
| query_readkb | OFF |
| query_readtime | OFF |
| avg_query_wall | 0.007 |
| avg_query_cpu | OFF |
| avg_dist_wall | 0.000 |
| avg_dist_local | 0.000 |
| avg_dist_wait | 0.000 |
| avg_query_reads | OFF |
| avg_query_readkb | OFF |
| avg_query_readtime | OFF |
+--------------------+-------+
29 rows in set (0.00 sec)
</programlisting>
</para>
<para>
Starting from version 2.1.1-beta, an optional LIKE clause is supported.
Refer to <xref linkend="sphinxql-show-meta"/> for its syntax details.
</para>
</sect1>
<sect1 id="sphinxql-insert"><title>INSERT and REPLACE syntax</title>
<programlisting>
{INSERT | REPLACE} INTO index [(column, ...)]
VALUES (value, ...)
[, (...)]
</programlisting>
<para>
INSERT statement, introduced in version 1.10-beta, is only supported for RT indexes.
It inserts new rows (documents) into an existing index, with the provided column values.
</para>
<para>
The ID column must be present in all cases. Rows with duplicate IDs will <b>not</b>
be overwritten by INSERT; use REPLACE to do that. REPLACE works exactly like INSERT,
except that if an old row has the same ID as a new row, the old row is deleted
before the new row is inserted.
</para>
<para>
<option>index</option> is the name of the RT index into which the new row(s)
should be inserted. The optional column names list lets you explicitly specify
values for only some of the columns present in the index. All the other columns will be
filled with their default values (0 for scalar types, empty string for text types).
</para>
<para>
Expressions are not currently supported in INSERT and values should be explicitly
specified.
</para>
<para>
Multiple rows can be inserted using a single INSERT statement by providing
several comma-separated, parentheses-enclosed lists of rows values.
</para>
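<para>
For example, a sketch against the <code>rt</code> index used elsewhere in
this chapter (column names are illustrative):
<programlisting>
INSERT INTO rt (id, title, content, gid) VALUES (1, 'hello', 'hello world', 123);
INSERT INTO rt (id, title) VALUES (2, 'second doc'), (3, 'third doc');
REPLACE INTO rt (id, title, content, gid) VALUES (1, 'hello again', 'updated text', 123);
</programlisting>
</para>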
</sect1>
<sect1 id="sphinxql-replace"><title>REPLACE syntax</title>
<programlisting>
{INSERT | REPLACE} INTO index [(column, ...)]
VALUES (value, ...)
[, (...)]
</programlisting>
<para>
REPLACE syntax is identical to INSERT syntax and is discussed in <xref linkend="sphinxql-insert"/>.
</para>
</sect1>
<sect1 id="sphinxql-delete"><title>DELETE syntax</title>
<programlisting>
DELETE FROM index WHERE where_condition
</programlisting>
<para>
DELETE statement, introduced in version 1.10-beta, is only supported for RT indexes, and for distributed indexes that contain only RT indexes as agents.
It deletes existing rows (documents) that match the specified conditions from an existing index.
</para>
<para>
<option>index</option> is the name of the RT index from which the row(s) should be deleted.
</para>
<para>
<code>where_condition</code> has the same syntax
as in the SELECT statement (see <xref linkend="sphinxql-select"/> for details).
</para>
<programlisting>
mysql> select * from rt;
+------+------+-------------+------+
| id | gid | mva1 | mva2 |
+------+------+-------------+------+
| 100 | 1000 | 100,201 | 100 |
| 101 | 1001 | 101,202 | 101 |
| 102 | 1002 | 102,203 | 102 |
| 103 | 1003 | 103,204 | 103 |
| 104 | 1004 | 104,204,205 | 104 |
| 105 | 1005 | 105,206 | 105 |
| 106 | 1006 | 106,207 | 106 |
| 107 | 1007 | 107,208 | 107 |
+------+------+-------------+------+
8 rows in set (0.00 sec)
mysql> delete from rt where match ('dumy') and mva1>206;
Query OK, 2 rows affected (0.00 sec)
mysql> select * from rt;
+------+------+-------------+------+
| id | gid | mva1 | mva2 |
+------+------+-------------+------+
| 100 | 1000 | 100,201 | 100 |
| 101 | 1001 | 101,202 | 101 |
| 102 | 1002 | 102,203 | 102 |
| 103 | 1003 | 103,204 | 103 |
| 104 | 1004 | 104,204,205 | 104 |
| 105 | 1005 | 105,206 | 105 |
+------+------+-------------+------+
6 rows in set (0.00 sec)
mysql> delete from rt where id in (100,104,105);
Query OK, 3 rows affected (0.01 sec)
mysql> select * from rt;
+------+------+---------+------+
| id | gid | mva1 | mva2 |
+------+------+---------+------+
| 101 | 1001 | 101,202 | 101 |
| 102 | 1002 | 102,203 | 102 |
| 103 | 1003 | 103,204 | 103 |
+------+------+---------+------+
3 rows in set (0.00 sec)
mysql> delete from rt where mva1 in (102,204);
Query OK, 2 rows affected (0.01 sec)
mysql> select * from rt;
+------+------+---------+------+
| id | gid | mva1 | mva2 |
+------+------+---------+------+
| 101 | 1001 | 101,202 | 101 |
+------+------+---------+------+
1 row in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-set"><title>SET syntax</title>
<programlisting>
SET [GLOBAL] server_variable_name = value
SET [INDEX index_name] GLOBAL @user_variable_name = (int_val1 [, int_val2, ...])
SET NAMES value
SET @@dummy_variable = ignored_value
</programlisting>
<para>
SET statement, introduced in version 1.10-beta, modifies a variable value.
The variable names are case-insensitive. No variable value changes survive
server restart.
</para>
<para>
SET NAMES statement and SET @@variable_name syntax, both introduced
in version 2.0.2-beta, do nothing. They were implemented to maintain
compatibility with 3rd party MySQL client libraries, connectors,
and frameworks that may need to run this statement when connecting.
</para>
<para>
The following classes of variables exist:
<orderedlist>
<listitem><para>per-session server variable (1.10-beta and above)</para></listitem>
<listitem><para>global server variable (2.0.1-beta and above)</para></listitem>
<listitem><para>global user variable (2.0.1-beta and above)</para></listitem>
<listitem><para>global distributed variable (2.2.3-beta and above)</para></listitem>
</orderedlist>
</para>
<para>
Global user variables are shared between concurrent sessions. Currently,
the only supported value type is a list of BIGINTs, and these variables
can only be used along with IN() for filtering purposes. The intended usage
scenario is uploading huge lists of values to <filename>searchd</filename>
(once) and reusing them (many times) later, saving on network overheads.
Starting with 2.2.3-beta, global user variables are either transferred to
all agents of a distributed index, or set locally in the case of local indexes
defined within the distributed index. Example:
<programlisting>
// in session 1
mysql> SET GLOBAL @myfilter=(2,3,5,7,11,13);
Query OK, 0 rows affected (0.00 sec)
// later in session 2
mysql> SELECT * FROM test1 WHERE group_id IN @myfilter;
+------+--------+----------+------------+-----------------+------+
| id | weight | group_id | date_added | title | tag |
+------+--------+----------+------------+-----------------+------+
| 3 | 1 | 2 | 1299338153 | another doc | 15 |
| 4 | 1 | 2 | 1299338153 | doc number four | 7,40 |
+------+--------+----------+------------+-----------------+------+
2 rows in set (0.02 sec)
</programlisting>
</para>
<para>
Per-session and global server variables affect certain server settings in the respective scope.
Known per-session server variables are:
<variablelist>
<varlistentry>
<term><code>AUTOCOMMIT = {0 | 1}</code></term>
<listitem><para>
Whether any data modification statement should be implicitly
wrapped by BEGIN and COMMIT.
Introduced in version 1.10-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term><code>COLLATION_CONNECTION = collation_name</code></term>
<listitem><para>
Selects the collation to be used for ORDER BY or GROUP BY on string
values in the subsequent queries. Refer to <xref linkend="collations"/>
for a list of known collation names.
Introduced in version 2.0.1-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term><code>CHARACTER_SET_RESULTS = charset_name</code></term>
<listitem><para>
Does nothing; a placeholder to support frameworks, clients, and
connectors that attempt to automatically enforce a charset when
connecting to a Sphinx server.
Introduced in version 2.0.1-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term><code>SQL_AUTO_IS_NULL = value</code></term>
<listitem><para>
Does nothing; a placeholder to support frameworks, clients, and
connectors that attempt to set this variable when connecting
to a Sphinx server.
Introduced in version 2.0.2-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term><code>SQL_MODE = value</code></term>
<listitem><para>
Does nothing; a placeholder to support frameworks, clients, and
connectors that attempt to set this variable when connecting
to a Sphinx server.
Introduced in version 2.0.2-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term><code>PROFILING = {0 | 1}</code></term>
<listitem><para>
Enables query profiling in the current session. Defaults to 0.
See also <xref linkend="sphinxql-show-profile"/>.
Introduced in version 2.1.1-beta.
</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
Known global server variables are:
<variablelist>
<varlistentry>
<term><code>QUERY_LOG_FORMAT = {plain | sphinxql}</code></term>
<listitem><para>
Changes the current log format.
Introduced in version 2.0.1-beta.
</para></listitem>
</varlistentry>
<varlistentry>
<term><code>LOG_LEVEL = {info | debug | debugv | debugvv}</code></term>
<listitem><para>
Changes the current log verboseness level.
Introduced in version 2.0.1-beta.
</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
Examples:
<programlisting>
mysql> SET autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> SET GLOBAL query_log_format=sphinxql;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-set-transaction"><title>SET TRANSACTION syntax</title>
<programlisting>
SET TRANSACTION ISOLATION LEVEL { READ UNCOMMITTED
| READ COMMITTED
| REPEATABLE READ
| SERIALIZABLE }
</programlisting>
<para>
SET TRANSACTION statement, introduced in version 2.0.2-beta, does nothing.
It was implemented to maintain compatibility with 3rd party MySQL client
libraries, connectors, and frameworks that may need to run this statement
when connecting.
</para>
<para>
Example:
<programlisting>
mysql> SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-commit"><title>BEGIN, COMMIT, and ROLLBACK syntax</title>
<programlisting>
START TRANSACTION | BEGIN
COMMIT
ROLLBACK
SET AUTOCOMMIT = {0 | 1}
</programlisting>
<para>
BEGIN, COMMIT, and ROLLBACK statements were introduced in version 1.10-beta.
BEGIN statement (or its START TRANSACTION alias) forcibly commits the pending
transaction, if any, and begins a new one. COMMIT statement commits the current
transaction, making all its changes permanent. ROLLBACK statement rolls back the
current transaction, canceling all its changes. SET AUTOCOMMIT controls the
autocommit mode in the active session.
</para>
<para>
AUTOCOMMIT is set to 1 by default, meaning that every statement that performs
any changes on any index is implicitly wrapped in BEGIN and COMMIT.
</para>
<para>
Transactions are limited to a single RT index, and also limited in size.
They are atomic, consistent, overly isolated, and durable. Overly isolated
means that the changes are not only invisible to the concurrent transactions
but even to the current session itself.
</para>
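<para>
For example, a hypothetical session sketch (index and column names are
illustrative); per the isolation note above, the inserted row is not
visible anywhere, even in the current session, until COMMIT:
<programlisting>
mysql> SET AUTOCOMMIT=0;
Query OK, 0 rows affected (0.00 sec)
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO rt (id, title) VALUES (10, 'pending doc');
Query OK, 1 rows affected (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</para>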
</sect1>
<sect1 id="sphinxql-begin"><title>BEGIN syntax</title>
<programlisting>
START TRANSACTION | BEGIN
</programlisting>
<para>
BEGIN syntax is discussed in detail in <xref linkend="sphinxql-commit"/>.
</para>
</sect1>
<sect1 id="sphinxql-rollback"><title>ROLLBACK syntax</title>
<programlisting>
ROLLBACK
</programlisting>
<para>
ROLLBACK syntax is discussed in detail in <xref linkend="sphinxql-commit"/>.
</para>
</sect1>
<sect1 id="sphinxql-call-snippets"><title>CALL SNIPPETS syntax</title>
<programlisting>
CALL SNIPPETS(data, index, query[, opt_value AS opt_name[, ...]])
</programlisting>
<para>
CALL SNIPPETS statement, introduced in version 1.10-beta, builds a snippet
from provided data and query, using specified index settings.
</para>
<para>
<option>data</option> is the source data to extract a snippet from. It can be a single string,
or a list of strings enclosed in parentheses (as shown in the examples below).
<option>index</option> is the name of the index from which to take the text
processing settings. <option>query</option> is the full-text query to build
snippets for. Additional options are documented in
<xref linkend="api-func-buildexcerpts"/>. Usage example:
</para>
<programlisting>
CALL SNIPPETS('this is my document text', 'test1', 'hello world',
5 AS around, 200 AS limit);
CALL SNIPPETS(('this is my document text','this is my another text'), 'test1', 'hello world',
5 AS around, 200 AS limit);
CALL SNIPPETS(('data/doc1.txt','data/doc2.txt','/home/sphinx/doc3.txt'), 'test1', 'hello world',
5 AS around, 200 AS limit, 1 AS load_files);
</programlisting>
</sect1>
<sect1 id="sphinxql-call-keywords"><title>CALL KEYWORDS syntax</title>
<programlisting>
CALL KEYWORDS(text, index [, 1])
</programlisting>
<para>
CALL KEYWORDS statement, introduced in version 1.10-beta, splits text
into keywords. It returns the tokenized and normalized forms
of the keywords, and, optionally, keyword statistics. Since version 2.2.2-beta
it also returns the position of each keyword in the query, and all
forms of the tokenized keywords in case lemmatizers were used.
</para>
<para>
<option>text</option> is the text to break down to keywords.
<option>index</option> is the name of the index from which to take the text
processing settings. <option>hits</option> is an optional boolean parameter
that specifies whether to return document and hit occurrence statistics.
</para>
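<para>
A sketch of typical usage (the output values are illustrative, and the exact
set of columns varies with the version and index settings, as noted above):
<programlisting>
mysql> CALL KEYWORDS('hello world', 'test1');
+-----------+------------+
| tokenized | normalized |
+-----------+------------+
| hello     | hello      |
| world     | world      |
+-----------+------------+
2 rows in set (0.00 sec)
</programlisting>
</para>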
</sect1>
<sect1 id="sphinxql-show-tables"><title>SHOW TABLES syntax</title>
<programlisting>
SHOW TABLES [ LIKE pattern ]
</programlisting>
<para>
SHOW TABLES statement, introduced in version 2.0.1-beta, enumerates
all currently active indexes along with their types. As of 2.0.1-beta,
the existing index types are <option>local</option>, <option>distributed</option>,
and <option>rt</option>.
Example:
<programlisting>
mysql> SHOW TABLES;
+-------+-------------+
| Index | Type |
+-------+-------------+
| dist1 | distributed |
| rt | rt |
| test1 | local |
| test2 | local |
+-------+-------------+
4 rows in set (0.00 sec)
</programlisting>
</para>
<para>
Starting from version 2.1.1-beta, an optional LIKE clause is supported.
Refer to <xref linkend="sphinxql-show-meta"/> for its syntax details.
</para>
<programlisting>
mysql> SHOW TABLES LIKE '%4';
+-------+-------------+
| Index | Type |
+-------+-------------+
| dist4 | distributed |
+-------+-------------+
1 row in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-describe"><title>DESCRIBE syntax</title>
<programlisting>
{DESC | DESCRIBE} index [ LIKE pattern ]
</programlisting>
<para>
DESCRIBE statement, introduced in version 2.0.1-beta, lists
index columns and their associated types. Columns are document ID,
full-text fields, and attributes. The order matches that in which
fields and attributes are expected by INSERT and REPLACE statements.
As of 2.0.1-beta, column types are <option>field</option>,
<option>integer</option>, <option>timestamp</option>,
<option>ordinal</option>, <option>bool</option>,
<option>float</option>, <option>bigint</option>,
<option>string</option>, and <option>mva</option>.
ID column will be typed either <option>integer</option>
or <option>bigint</option> based on whether the binaries
were built with 32-bit or 64-bit document ID support.
Example:
</para>
<programlisting>
mysql> DESC rt;
+---------+---------+
| Field | Type |
+---------+---------+
| id | integer |
| title | field |
| content | field |
| gid | integer |
+---------+---------+
4 rows in set (0.00 sec)
</programlisting>
<para>
Starting from version 2.1.1-beta, an optional LIKE clause is supported.
Refer to <xref linkend="sphinxql-show-meta"/> for its syntax details.
</para>
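<para>
For instance, a sketch reusing the <code>rt</code> index from the example
above (the output is illustrative):
<programlisting>
mysql> DESC rt LIKE '%id';
+-------+---------+
| Field | Type    |
+-------+---------+
| id    | integer |
| gid   | integer |
+-------+---------+
2 rows in set (0.00 sec)
</programlisting>
</para>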
</sect1>
<sect1 id="sphinxql-create-function"><title>CREATE FUNCTION syntax</title>
<programlisting>
CREATE FUNCTION udf_name
RETURNS {INT | INTEGER | BIGINT | FLOAT | STRING}
SONAME 'udf_lib_file'
</programlisting>
<para>
CREATE FUNCTION statement, introduced in version 2.0.1-beta,
installs a <link linkend="sphinx-udfs">user-defined function (UDF)</link>
with the given name and type from the given library file.
The library file must reside in a trusted
<link linkend="conf-plugin-dir">plugin_dir</link> directory.
On success, the function is available for use in all subsequent
queries that the server receives. Example:
</para>
<programlisting>
mysql> CREATE FUNCTION avgmva RETURNS INTEGER SONAME 'udfexample.dll';
Query OK, 0 rows affected (0.03 sec)
mysql> SELECT *, AVGMVA(tag) AS q from test1;
+------+--------+---------+-----------+
| id | weight | tag | q |
+------+--------+---------+-----------+
| 1 | 1 | 1,3,5,7 | 4.000000 |
| 2 | 1 | 2,4,6 | 4.000000 |
| 3 | 1 | 15 | 15.000000 |
| 4 | 1 | 7,40 | 23.500000 |
+------+--------+---------+-----------+
</programlisting>
</sect1>
<sect1 id="sphinxql-drop-function"><title>DROP FUNCTION syntax</title>
<programlisting>
DROP FUNCTION udf_name
</programlisting>
<para>
DROP FUNCTION statement, introduced in version 2.0.1-beta,
deinstalls a <link linkend="sphinx-udfs">user-defined function (UDF)</link>
with the given name. On success, the function is no longer available
for use in subsequent queries. Pending concurrent queries will not be
affected and the library unload, if necessary, will be postponed
until those queries complete. Example:
</para>
<programlisting>
mysql> DROP FUNCTION avgmva;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-show-variables"><title>SHOW VARIABLES syntax</title>
<programlisting>
SHOW [{GLOBAL | SESSION}] VARIABLES [WHERE variable_name='xxx']
</programlisting>
<para><b>SHOW VARIABLES</b> statement was added in version 2.0.1-beta
to improve compatibility with 3rd party MySQL connectors and frameworks
that automatically execute this statement. The WHERE option was added in
version 2.1.1-beta.
</para>
<para>
In version 2.0.1-beta, it did nothing.
</para>
<para>
Starting from version 2.0.2-beta, it returns the current values of
a few server-wide variables. Also, support for GLOBAL and SESSION clauses
was added.
<programlisting>
mysql> SHOW GLOBAL VARIABLES;
+----------------------+----------+
| Variable_name | Value |
+----------------------+----------+
| autocommit | 1 |
| collation_connection | libc_ci |
| query_log_format | sphinxql |
| log_level | info |
+----------------------+----------+
4 rows in set (0.00 sec)
</programlisting>
</para>
<para>
Starting from 2.1.1-beta, support for the WHERE variable_name clause was added,
to help certain connectors.
</para>
</sect1>
<sect1 id="sphinxql-show-collation"><title>SHOW COLLATION syntax</title>
<programlisting>
SHOW COLLATION
</programlisting>
<para>
Added in version 2.0.1-beta, this is currently a placeholder
query that does nothing and reports success. It exists in order
to keep compatibility with frameworks and connectors that
automatically execute this statement.
</para>
<programlisting>
mysql> SHOW COLLATION;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-show-character-set"><title>SHOW CHARACTER SET syntax</title>
<programlisting>
SHOW CHARACTER SET
</programlisting>
<para>
Added in version 2.1.1-beta, this is currently a placeholder
query that does nothing and reports that a UTF-8 character set
is available. It was added in order
to keep compatibility with frameworks and connectors that
automatically execute this statement.
</para>
<programlisting>
mysql> SHOW CHARACTER SET;
+---------+---------------+-------------------+--------+
| Charset | Description | Default collation | Maxlen |
+---------+---------------+-------------------+--------+
| utf8 | UTF-8 Unicode | utf8_general_ci | 3 |
+---------+---------------+-------------------+--------+
1 row in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-update"><title>UPDATE syntax</title>
<programlisting>
UPDATE index SET col1 = newval1 [, ...] WHERE where_condition [OPTION opt_name = opt_value [, ...]]
</programlisting>
<para>
UPDATE statement was added in version 2.0.1-beta. Multiple attributes
and values can be specified in a single statement. Both RT and disk indexes
are supported.
</para>
<para>
As of version 2.0.2-beta, all attribute types (int, bigint, float, MVA),
except for strings and JSON attributes, can be dynamically updated.
Previously, some of these types were not supported.
</para>
<para>
<code>where_condition</code> (also added in 2.0.2-beta) has the same syntax
as in the SELECT statement (see <xref linkend="sphinxql-select"/> for details).
</para>
<para>
When out-of-range values are assigned to 32-bit attributes, they
will be silently trimmed to their lower 32 bits. For example,
if you try to update the 32-bit unsigned int with a value of 4294967297,
the value of 1 will actually be stored, because the lower 32 bits of
4294967297 (0x100000001 in hex) amount to 1 (0x00000001 in hex).
</para>
<para>
MVA value sets for updating (and also for INSERT or REPLACE, refer
to <xref linkend="sphinxql-insert"/>) must be specified as comma-separated
lists in parentheses. To erase the MVA value, just assign () to it.
</para>
<para>
Starting from version 2.2.1-beta, UPDATE can be used to update integer and float
values within a JSON attribute. Strings, arrays, and other types are not supported yet.
</para>
<programlisting>
mysql> UPDATE myindex SET enabled=0 WHERE id=123;
Query OK, 1 rows affected (0.00 sec)
mysql> UPDATE myindex
SET bigattr=-100000000000,
fattr=3465.23,
mvattr1=(3,6,4),
mvattr2=()
WHERE MATCH('hehe') AND enabled=1;
Query OK, 148 rows affected (0.01 sec)
</programlisting>
<para>OPTION clause. This is a Sphinx-specific extension that
lets you control a number of per-update options. The syntax is:
<programlisting>
OPTION <optionname>=<value> [ , ... ]
</programlisting>
The list of allowed options is the same as for the <link
linkend="sphinxql-select">SELECT</link> statement. Specifically for the UPDATE
statement, you can use these options:
<itemizedlist>
<listitem><para>'ignore_nonexistent_columns' - this option, added in version 2.1.1-beta, specifies that the update will silently
ignore any attempts to update a column that does not exist in the current index schema.
</para></listitem>
<listitem><para>'strict' - this option is used while updating JSON attributes. As of
2.2.1-beta, only certain value types within JSON can be updated. If you try to
update an unsupported type (an array, for example), you will get an error with
the 'strict' option on, and a warning otherwise.</para>
</listitem>
</itemizedlist>
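For instance, a sketch reusing the hypothetical <code>myindex</code> from the
example above:
<programlisting>
mysql> UPDATE myindex SET enabled=0 WHERE id=123
    -> OPTION ignore_nonexistent_columns=1;
Query OK, 1 rows affected (0.00 sec)
</programlisting>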
</para>
</sect1>
<sect1 id="sphinxql-attach"><title>ALTER syntax</title>
<programlisting>
ALTER TABLE index {ADD|DROP} COLUMN column_name [{INTEGER|INT|BIGINT|FLOAT|BOOL|MULTI|MULTI64|JSON|STRING}]
</programlisting>
<para>
The ALTER statement was added in version 2.2.1-beta. As of 2.2.1-beta, it supports adding one
attribute at a time for both plain and RT indexes. The int, bigint, float, bool, multi-valued,
64-bit multi-valued, json, and string attribute types are supported. Support for multi, multi64,
json, and string attributes was added in 2.2.2-beta. As of 2.2.2-beta, you can add json and
string attributes, but you cannot modify their values. The ability to remove attributes was
also added in 2.2.2-beta.
</para>
<para>
Implementation details. As of 2.2.1-beta, querying an index is
impossible (because of a write lock) while a column is being added. This may change
in the future. The newly created attribute's values are set to 0. ALTER will
not work for distributed indexes and indexes without any attributes.
DROP COLUMN will fail if an index has only one attribute.
</para>
<programlisting>
ALTER RTINDEX index RECONFIGURE
</programlisting>
<para>
As of 2.2.3-beta, ALTER can also reconfigure an existing RT index, so that
new tokenization, morphology, and other text processing settings from sphinx.conf
take effect on the newly INSERT-ed rows, while retaining the existing rows
as they were. Internally, it forcibly saves the current RAM chunk as a new
disk chunk, and adjusts the index header, so that the new rows are tokenized
using the new rules. Note that as the queries are currently parsed separately
for every disk chunk, this might result in warnings about mismatched
keyword sets.
</para>
<programlisting>
mysql> desc plain;
+------------+-----------+
| Field | Type |
+------------+-----------+
| id | bigint |
| text | field |
| group_id | uint |
| date_added | timestamp |
+------------+-----------+
4 rows in set (0.01 sec)
mysql> alter table plain add column test integer;
Query OK, 0 rows affected (0.04 sec)
mysql> desc plain;
+------------+-----------+
| Field | Type |
+------------+-----------+
| id | bigint |
| text | field |
| group_id | uint |
| date_added | timestamp |
| test | uint |
+------------+-----------+
5 rows in set (0.00 sec)
mysql> alter table plain drop column group_id;
Query OK, 0 rows affected (0.01 sec)
mysql> desc plain;
+------------+-----------+
| Field | Type |
+------------+-----------+
| id | bigint |
| text | field |
| date_added | timestamp |
| test | uint |
+------------+-----------+
4 rows in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-attach-index"><title>ATTACH INDEX syntax</title>
<programlisting>
ATTACH INDEX diskindex TO RTINDEX rtindex
</programlisting>
<para>
ATTACH INDEX statement, added in version 2.0.2-beta, lets you move
data from a regular disk index to a RT index.
</para>
<para>
After a successful ATTACH, the data originally stored in the source
disk index becomes a part of the target RT index, and the source disk
index becomes unavailable (until the next rebuild). ATTACH does not
result in any index data changes. Basically, it just renames the files
(making the source index a new disk chunk of the target RT index),
and updates the metadata. So it is generally a quick operation,
frequently completing in under a second.
</para>
<para>
Note that when an index is attached to an empty RT index, the fields,
attributes, and text processing settings (tokenizer, wordforms, etc) from
the <emphasis>source</emphasis> index are copied over and take effect.
The respective parts of the RT index definition from the configuration
file will be ignored.
</para>
<para>
As of 2.0.2-beta, ATTACH INDEX comes with a number of restrictions.
Most notably, the target RT index is currently required to be empty,
making ATTACH INDEX a one-time conversion operation only. Those restrictions
may be lifted in future releases, as we add the needed functionality to the
RT indexes. The complete list is as follows.
<itemizedlist>
<listitem><para>Target RT index needs to be empty. (See <xref linkend="sphinxql-truncate-rtindex"/>)</para></listitem>
<listitem><para>Source disk index needs to have index_sp=0, boundary_step=0, stopword_step=1.</para></listitem>
<listitem><para>Source disk index needs to have an empty index_zones setting.</para></listitem>
</itemizedlist>
</para>
<programlisting>
mysql> DESC rt;
+-----------+---------+
| Field | Type |
+-----------+---------+
| id | integer |
| testfield | field |
| testattr | uint |
+-----------+---------+
3 rows in set (0.00 sec)
mysql> SELECT * FROM rt;
Empty set (0.00 sec)
mysql> SELECT * FROM disk WHERE MATCH('test');
+------+--------+----------+------------+
| id | weight | group_id | date_added |
+------+--------+----------+------------+
| 1 | 1304 | 1 | 1313643256 |
| 2 | 1304 | 1 | 1313643256 |
| 3 | 1304 | 1 | 1313643256 |
| 4 | 1304 | 1 | 1313643256 |
+------+--------+----------+------------+
4 rows in set (0.00 sec)
mysql> ATTACH INDEX disk TO RTINDEX rt;
Query OK, 0 rows affected (0.00 sec)
mysql> DESC rt;
+------------+-----------+
| Field | Type |
+------------+-----------+
| id | integer |
| title | field |
| content | field |
| group_id | uint |
| date_added | timestamp |
+------------+-----------+
5 rows in set (0.00 sec)
mysql> SELECT * FROM rt WHERE MATCH('test');
+------+--------+----------+------------+
| id | weight | group_id | date_added |
+------+--------+----------+------------+
| 1 | 1304 | 1 | 1313643256 |
| 2 | 1304 | 1 | 1313643256 |
| 3 | 1304 | 1 | 1313643256 |
| 4 | 1304 | 1 | 1313643256 |
+------+--------+----------+------------+
4 rows in set (0.00 sec)
mysql> SELECT * FROM disk WHERE MATCH('test');
ERROR 1064 (42000): no enabled local indexes to search
</programlisting>
</sect1>
<sect1 id="sphinxql-flush-rtindex"><title>FLUSH RTINDEX syntax</title>
<programlisting>
FLUSH RTINDEX rtindex
</programlisting>
<para>
FLUSH RTINDEX statement, added in version 2.0.2-beta, forcibly
flushes RT index RAM chunk contents to disk.
</para>
<para>
Backing up a RT index is as simple as copying over its data files,
followed by the binary log. However, recovering from that backup means
that all the transactions in the log since the last successful RAM chunk
write would need to be replayed. Those writes normally happen either
on a clean shutdown, or periodically with a (big enough!) interval
between writes specified in
<link linkend="conf-rt-flush-period">rt_flush_period</link> directive.
So such a backup made at an arbitrary point in time just might end up
with way too much binary log data to replay.
</para>
<para>
FLUSH RTINDEX forcibly writes the RAM chunk contents to disk,
and also causes the subsequent cleanup of (now-redundant) binary
log files. Thus, recovering from a backup made just after
FLUSH RTINDEX should be almost instant.
</para>
<programlisting>
mysql> FLUSH RTINDEX rt;
Query OK, 0 rows affected (0.05 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-flush-ramchunk"><title>FLUSH RAMCHUNK syntax</title>
<programlisting>
FLUSH RAMCHUNK rtindex
</programlisting>
<para>
FLUSH RAMCHUNK statement, added in version 2.1.2-release, forcibly
creates a new disk chunk in a RT index.
</para>
<para>
Normally, an RT index would flush and convert the contents of the
RAM chunk into a new disk chunk automatically, once the RAM chunk
reaches the maximum allowed
<link linkend="conf-rt-mem-limit">rt_mem_limit</link> size.
However, for debugging and testing it might be useful to forcibly
create a new disk chunk, and FLUSH RAMCHUNK statement does exactly that.
</para>
<para>
Note that using FLUSH RAMCHUNK increases RT index fragmentation.
Most likely, you want to use FLUSH RTINDEX instead. Avoid using
FLUSH RAMCHUNK on its own unless you are absolutely sure what you
are doing; the right way is to follow FLUSH RAMCHUNK with an
<link linkend="sphinxql-optimize-index">OPTIMIZE</link> command.
That combination keeps RT index fragmentation to a minimum.
</para>
<programlisting>
mysql> FLUSH RAMCHUNK rt;
Query OK, 0 rows affected (0.05 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-truncate-rtindex"><title>TRUNCATE RTINDEX syntax</title>
<programlisting>
TRUNCATE RTINDEX rtindex
</programlisting>
<para>
TRUNCATE RTINDEX statement, added in version 2.1.1-beta, clears
the RT index completely. It disposes the in-memory data, unlinks
all the index data files, and releases the associated binary logs.
</para>
<programlisting>
mysql> TRUNCATE RTINDEX rt;
Query OK, 0 rows affected (0.05 sec)
</programlisting>
<para>
You may want to use this if you are using RT indexes as "delta index" files; when
you rebuild the main index, you need to wipe the delta index, and TRUNCATE RTINDEX does just that.
You also need to use this command before attaching an index; see <xref linkend="sphinxql-attach-index"/>.
</para>
</sect1>
<sect1 id="sphinxql-show-agent-status"><title>SHOW AGENT STATUS</title>
<programlisting>
SHOW AGENT ['agent'|'index'|index] STATUS [ LIKE pattern ]
</programlisting>
<para>
Displays the statistics of <link linkend="conf-agent">remote
agents</link> or of a distributed index. These include values such as the age of the last
request, the age of the last answer, and the number of different kinds of errors and
successes. Statistics are shown for every agent for the last 1, 5,
and 15 intervals, each of them <link
linkend="conf-ha-period-karma">ha_period_karma</link> seconds long.
The command exists only in SphinxQL.
</para>
<programlisting>
mysql> SHOW AGENT STATUS;
+------------------------------------+----------------------------+
| Variable_name | Value |
+------------------------------------+----------------------------+
| status_period_seconds | 60 |
| status_stored_periods | 15 |
| ag_0_hostname | 192.168.0.202:6713 |
| ag_0_references | 2 |
| ag_0_lastquery | 0.41 |
| ag_0_lastanswer | 0.19 |
| ag_0_lastperiodmsec | 222 |
| ag_0_errorsarow | 0 |
| ag_0_1periods_query_timeouts | 0 |
| ag_0_1periods_connect_timeouts | 0 |
| ag_0_1periods_connect_failures | 0 |
| ag_0_1periods_network_errors | 0 |
| ag_0_1periods_wrong_replies | 0 |
| ag_0_1periods_unexpected_closings | 0 |
| ag_0_1periods_warnings | 0 |
| ag_0_1periods_succeeded_queries | 27 |
| ag_0_1periods_msecsperquery | 232.31 |
| ag_0_5periods_query_timeouts | 0 |
| ag_0_5periods_connect_timeouts | 0 |
| ag_0_5periods_connect_failures | 0 |
| ag_0_5periods_network_errors | 0 |
| ag_0_5periods_wrong_replies | 0 |
| ag_0_5periods_unexpected_closings | 0 |
| ag_0_5periods_warnings | 0 |
| ag_0_5periods_succeeded_queries | 146 |
| ag_0_5periods_msecsperquery | 231.83 |
| ag_1_hostname | 192.168.0.202:6714 |
| ag_1_references | 2 |
| ag_1_lastquery | 0.41 |
| ag_1_lastanswer | 0.19 |
| ag_1_lastperiodmsec | 220 |
| ag_1_errorsarow | 0 |
| ag_1_1periods_query_timeouts | 0 |
| ag_1_1periods_connect_timeouts | 0 |
| ag_1_1periods_connect_failures | 0 |
| ag_1_1periods_network_errors | 0 |
| ag_1_1periods_wrong_replies | 0 |
| ag_1_1periods_unexpected_closings | 0 |
| ag_1_1periods_warnings | 0 |
| ag_1_1periods_succeeded_queries | 27 |
| ag_1_1periods_msecsperquery | 231.24 |
| ag_1_5periods_query_timeouts | 0 |
| ag_1_5periods_connect_timeouts | 0 |
| ag_1_5periods_connect_failures | 0 |
| ag_1_5periods_network_errors | 0 |
| ag_1_5periods_wrong_replies | 0 |
| ag_1_5periods_unexpected_closings | 0 |
| ag_1_5periods_warnings | 0 |
| ag_1_5periods_succeeded_queries | 146 |
| ag_1_5periods_msecsperquery | 230.85 |
+------------------------------------+----------------------------+
50 rows in set (0.01 sec)
</programlisting>
<para>
Starting from version 2.1.1-beta, an optional LIKE clause is supported.
Refer to <xref linkend="sphinxql-show-meta"/> for its syntax details.
</para>
<programlisting>
mysql> SHOW AGENT STATUS LIKE '%5period%msec%';
+-----------------------------+--------+
| Key | Value |
+-----------------------------+--------+
| ag_0_5periods_msecsperquery | 234.72 |
| ag_1_5periods_msecsperquery | 233.73 |
| ag_2_5periods_msecsperquery | 343.81 |
+-----------------------------+--------+
3 rows in set (0.00 sec)
</programlisting>
<para>
You can specify a particular agent by its address. In this case, only
that agent's data will be displayed. Also, the 'agent_' prefix will be used
instead of 'ag_N_':
</para>
<programlisting>
mysql> SHOW AGENT '192.168.0.202:6714' STATUS LIKE '%15periods%';
+-------------------------------------+--------+
| Variable_name | Value |
+-------------------------------------+--------+
| agent_15periods_query_timeouts | 0 |
| agent_15periods_connect_timeouts | 0 |
| agent_15periods_connect_failures | 0 |
| agent_15periods_network_errors | 0 |
| agent_15periods_wrong_replies | 0 |
| agent_15periods_unexpected_closings | 0 |
| agent_15periods_warnings | 0 |
| agent_15periods_succeeded_queries | 439 |
| agent_15periods_msecsperquery | 231.73 |
+-------------------------------------+--------+
9 rows in set (0.00 sec)
</programlisting>
<para>Finally, you can check the status of the agents in a specific
distributed index using a SHOW AGENT index STATUS statement.
That statement shows the index HA status (i.e. whether or not it uses
agent mirrors at all), and then the mirror information (specifically:
address, blackhole and persistent flags, and the mirror selection
probability used when one of the
<link linkend='conf-ha-strategy'>weighted-probability strategies</link>
is in effect).
</para>
<programlisting>
mysql> SHOW AGENT dist_index STATUS;
+--------------------------------------+--------------------------------+
| Variable_name | Value |
+--------------------------------------+--------------------------------+
| dstindex_1_is_ha | 1 |
| dstindex_1mirror1_id | 192.168.0.202:6713:loc |
| dstindex_1mirror1_probability_weight | 0.372864 |
| dstindex_1mirror1_is_blackhole | 0 |
| dstindex_1mirror1_is_persistent | 0 |
| dstindex_1mirror2_id | 192.168.0.202:6714:loc |
| dstindex_1mirror2_probability_weight | 0.374635 |
| dstindex_1mirror2_is_blackhole | 0 |
| dstindex_1mirror2_is_persistent | 0 |
| dstindex_1mirror3_id | dev1.sphinxsearch.com:6714:loc |
| dstindex_1mirror3_probability_weight | 0.252501 |
| dstindex_1mirror3_is_blackhole | 0 |
| dstindex_1mirror3_is_persistent | 0 |
+--------------------------------------+--------------------------------+
13 rows in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-show-profile"><title>SHOW PROFILE syntax</title>
<programlisting>
SHOW PROFILE
</programlisting>
<para>
SHOW PROFILE statement, added in version 2.1.1-beta, shows a detailed
execution profile of the previous SQL statement executed in the current
SphinxQL session. Also, profiling must be enabled in the current session
<b>before</b> running the statement to be instrumented. That can be done
with a <code>SET profiling=1</code> statement. By default, profiling
is disabled to avoid potential performance implications, and therefore
the profile will be empty.
</para>
<para>
Here's a complete instrumentation example:
<programlisting>
mysql> SET profiling=1;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT id FROM lj WHERE MATCH('the test') LIMIT 1;
+--------+
| id |
+--------+
| 946418 |
+--------+
1 row in set (0.05 sec)
mysql> SHOW PROFILE;
+--------------+----------+----------+
| Status | Duration | Switches |
+--------------+----------+----------+
| unknown | 0.000610 | 6 |
| net_read | 0.000007 | 1 |
| dist_connect | 0.000036 | 1 |
| sql_parse | 0.000048 | 1 |
| dict_setup | 0.000001 | 1 |
| parse | 0.000023 | 1 |
| transforms | 0.000002 | 1 |
| init | 0.000401 | 3 |
| open | 0.000104 | 1 |
| read_docs | 0.001570 | 71 |
| read_hits | 0.003936 | 222 |
| get_docs | 0.029837 | 1347 |
| get_hits | 0.000548 | 1433 |
| filter | 0.000619 | 1274 |
| rank | 0.009892 | 2909 |
| sort | 0.001562 | 52 |
| finalize | 0.000250 | 1 |
| dist_wait | 0.000000 | 1 |
| aggregate | 0.000145 | 1 |
| net_write | 0.000031 | 1 |
+--------------+----------+----------+
20 rows in set (0.00 sec)
</programlisting>
</para>
<para>
The Status column briefly describes where exactly (in which state)
the time was spent. The Duration column shows the wall clock time,
in seconds. The Switches column displays the number of times the query
engine changed to the given state. Those are just logical engine
state switches and <b>not</b> OS-level context switches or
function calls (even though some of the sections can actually map
to function calls), and they do <b>not</b> have any direct effect
on performance. In a sense, the number of switches is just the number
of times that the respective instrumentation point was hit.
</para>
<para>
States in the profile are returned in a prerecorded order
that roughly maps (but is <b>not</b> identical) to the actual
query order.
</para>
<para>
A list of states may (and will) vary over time, as we refine
the states. Here's a brief description of the currently profiled
states.
<itemizedlist>
<listitem><b>unknown</b>, generic catch-all state. Accounts for both
not-yet-instrumented code, or just small miscellaneous tasks that do not
really belong in any other state, but are too small to deserve their own state.
</listitem>
<listitem><b>net_read</b>, reading the query from the network (that is, the application).</listitem>
<listitem><b>io</b>, generic file IO time.</listitem>
<listitem><b>dist_connect</b>, connecting to remote agents in the distributed index case.</listitem>
<listitem><b>sql_parse</b>, parsing the SphinxQL syntax.</listitem>
<listitem><b>dict_setup</b>, dictionary and tokenizer setup.</listitem>
<listitem><b>parse</b>, parsing the full-text query syntax.</listitem>
<listitem><b>transforms</b>, full-text query transformations (wildcard and other expansions, simplification, etc).</listitem>
<listitem><b>init</b>, initializing the query evaluation.</listitem>
<listitem><b>open</b>, opening the index files.</listitem>
<listitem><b>read_docs</b>, IO time spent reading document lists.</listitem>
<listitem><b>read_hits</b>, IO time spent reading keyword positions.</listitem>
<listitem><b>get_docs</b>, computing the matching documents.</listitem>
<listitem><b>get_hits</b>, computing the matching positions.</listitem>
<listitem><b>filter</b>, filtering the full-text matches.</listitem>
<listitem><b>rank</b>, computing the relevance rank.</listitem>
<listitem><b>sort</b>, sorting the matches.</listitem>
<listitem><b>finalize</b>, finalizing the per-index search result set (last stage expressions, etc).</listitem>
<listitem><b>dist_wait</b>, waiting for the remote results from the agents in the distributed index case.</listitem>
<listitem><b>aggregate</b>, aggregating multiple result sets.</listitem>
<listitem><b>net_write</b>, writing the result set to the network.</listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="sphinxql-show-index-status"><title>SHOW INDEX STATUS syntax</title>
<programlisting>
SHOW INDEX index_name STATUS
</programlisting>
<para>
Added in version 2.1.1-beta. Displays various per-index statistics. Currently,
those include:
<itemizedlist>
<listitem><b>indexed_documents</b> and <b>indexed_bytes</b>, the number of
documents indexed and their text size in bytes, respectively.</listitem>
<listitem><b>field_tokens_XXX</b>, sums of per-field lengths (in tokens)
over the entire index (that is used internally in BM25A and BM25F functions
for ranking purposes). Only available for indexes built with index_field_lengths=1.</listitem>
<listitem><b>ram_bytes</b>, total size (in bytes) of the RAM-resident
index portion.
</listitem>
</itemizedlist>
</para>
<programlisting>
mysql> SHOW INDEX lj STATUS;
+--------------------+-------------+
| Variable_name | Value |
+--------------------+-------------+
| index_type | disk |
| indexed_documents | 2495219 |
| indexed_bytes | 10380483879 |
| field_tokens_title | 6999145 |
| field_tokens_body | 1501825050 |
| total_tokens | 1508824195 |
| ram_bytes | 305963599 |
| disk_bytes | 5455804365 |
| mem_limit | 536870912 |
+--------------------+-------------+
9 rows in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-show-index-settings"><title>SHOW INDEX SETTINGS syntax</title>
<programlisting>
SHOW INDEX index_name[.N | CHUNK N] SETTINGS
</programlisting>
<para>
Displays per-index settings in a <filename>sphinx.conf</filename> compliant
file format, similar to the <link linkend="ref-indextool">--dumpconfig</link>
option of the indextool. The report provides a breakdown of all the index
settings, including tokenizer and dictionary options. You may also specify
a particular <link linkend="conf-rt-mem-limit">chunk number</link>
for the RT indexes.
</para>
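<para>
For example (the index name is illustrative; per the syntax above, the two
chunk forms are alternative spellings for targeting disk chunk N of an
RT index):
<programlisting>
SHOW INDEX rt SETTINGS;
SHOW INDEX rt CHUNK 2 SETTINGS;
SHOW INDEX rt.2 SETTINGS;
</programlisting>
</para>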
</sect1>
<sect1 id="sphinxql-optimize-index"><title>OPTIMIZE INDEX syntax</title>
<programlisting>
OPTIMIZE INDEX index_name
</programlisting>
<para>
Available since version 2.1.1-beta, OPTIMIZE statement enqueues
a RT index for optimization in a background thread.
</para>
<para>
Over time, RT indexes can grow fragmented into many disk chunks
and/or tainted with deleted, but unpurged data, impacting search
performance. When that happens, they can be optimized. Basically,
the optimization pass merges disk chunk pairs together, purging
off documents suppressed by K-list as it goes.
</para>
<para>
That is a lengthy and IO intensive process, so to limit the
impact, all the actual merge work is executed serially in
a special background thread, and the OPTIMIZE statement simply
adds a job to its queue. Currently, there is no way to check
the index or queue status (that might be added in the future
to the SHOW INDEX STATUS and SHOW STATUS statements respectively).
The optimization thread can be IO-throttled; you can control the
maximum number of IOs per second and the maximum IO size
with <link linkend="conf-rt-merge-iops">rt_merge_iops</link>
and <link linkend="conf-rt-merge-maxiosize">rt_merge_maxiosize</link>
directives respectively. The optimization jobs queue is lost
on daemon crash.
</para>
<para>
The RT index being optimized stays online and available
for both searching and updates at (almost) all times during
the optimization. It gets locked (very) briefly every time
that a pair of disk chunks is merged successfully, to rename
the old and the new files, and update the index header.
</para>
<para>
At the moment, OPTIMIZE needs to be issued manually,
the indexes will <emphasis>not</emphasis> be optimized
automatically. That might change in the future releases.
</para>
<programlisting>
mysql> OPTIMIZE INDEX rt;
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-show-plan"><title>SHOW PLAN syntax</title>
<programlisting>
SHOW PLAN
</programlisting>
<para>
SHOW PLAN statement, added in 2.1.2-release, displays the execution plan
of the previous SELECT statement. The plan gets generated and stored
during the actual execution, so profiling must be enabled in the current
session <b>before</b> running that statement. That can be done
with a <code>SET profiling=1</code> statement.
</para>
<para>
Here's a complete instrumentation example:
<programlisting>
mysql> SET profiling=1 \G
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT id FROM lj WHERE MATCH('the i') LIMIT 1 \G
*************************** 1. row ***************************
id: 39815
1 row in set (1.53 sec)
mysql> SHOW PLAN \G
*************************** 1. row ***************************
Variable: transformed_tree
Value: AND(
AND(KEYWORD(the, querypos=1)),
AND(KEYWORD(i, querypos=2)))
1 row in set (0.00 sec)
</programlisting>
And here's a less trivial example that shows how the actually
evaluated query tree can be rather different from the original one
because of expansions and other transformations:
<programlisting>
mysql> SELECT * FROM test WHERE MATCH('@title abc* @body hey') \G SHOW PLAN \G
...
*************************** 1. row ***************************
Variable: transformed_tree
Value: AND(
OR(fields=(title), KEYWORD(abcx, querypos=1, expanded), KEYWORD(abcm, querypos=1, expanded)),
AND(fields=(body), KEYWORD(hey, querypos=2)))
1 row in set (0.00 sec)
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-show-databases"><title>SHOW DATABASES syntax</title>
<programlisting>
SHOW DATABASES
</programlisting>
<para>
Added in 2.2.1-beta. This is a dummy statement to support MySQL Workbench
and other clients that require it. Currently, it does absolutely nothing.
</para>
</sect1>
<sect1 id="sphinxql-create-plugin"><title>CREATE PLUGIN syntax</title>
<programlisting>
CREATE PLUGIN plugin_name TYPE 'plugin_type' SONAME 'plugin_library'
</programlisting>
<para>
Added in 2.2.2-beta. Loads the given library (if it is not loaded yet) and loads
the specified plugin from it. As of 2.2.2-beta, the known plugin types are:
<itemizedlist>
<listitem><para>ranker</para></listitem>
<listitem><para>index_token_filter</para></listitem>
<listitem><para>query_token_filter</para></listitem>
</itemizedlist>
Refer to <xref linkend="sphinx-plugins"/> for more information regarding
writing the plugins.
</para>
<programlisting>
mysql> CREATE PLUGIN myranker TYPE 'ranker' SONAME 'myplugins.so';
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-drop-plugin"><title>DROP PLUGIN syntax</title>
<programlisting>
DROP PLUGIN plugin_name TYPE 'plugin_type'
</programlisting>
<para>
Added in 2.2.2-beta. Marks the specified plugin for unloading.
The unloading is <b>not</b> immediate, because concurrent queries
might be using it. However, after a DROP, new queries will not be able
to use it. Then, once all the currently executing queries using it
have completed, the plugin will be unloaded. Once all the plugins
from the given library are unloaded, the library is also automatically
unloaded.
</para>
<programlisting>
mysql> DROP PLUGIN myranker TYPE 'ranker';
Query OK, 0 rows affected (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-show-plugins"><title>SHOW PLUGINS syntax</title>
<programlisting>
SHOW PLUGINS
</programlisting>
<para>
Added in 2.2.2-beta. Displays all the loaded plugins and UDFs.
The "Type" column will be one of udf, ranker, index_token_filter,
or query_token_filter. The "Users" column is the number of threads
that are currently using that plugin in a query. The "Extra" column
is intended for various additional plugin-type specific information;
currently, it shows the return type for UDFs and is empty for all
the other plugin types.
</para>
<programlisting>
mysql> SHOW PLUGINS;
+------+----------+----------------+-------+-------+
| Type | Name | Library | Users | Extra |
+------+----------+----------------+-------+-------+
| udf | sequence | udfexample.dll | 0 | INT |
+------+----------+----------------+-------+-------+
1 row in set (0.00 sec)
</programlisting>
</sect1>
<sect1 id="sphinxql-threads"><title>SHOW THREADS syntax</title>
<programlisting>
SHOW THREADS [ OPTION columns=width ]
</programlisting>
<para>
SHOW THREADS statement, introduced in version 2.2.2-beta, lists all
currently active client threads, not counting system threads.
It returns a table with columns that describe:
</para>
<itemizedlist>
<listitem><b>thread id</b></listitem>
<listitem><b>connection protocol</b>, possible values are sphinxapi and sphinxql</listitem>
<listitem><b>thread state</b>, possible values are handshake, net_read,
net_write, query, net_idle</listitem>
<listitem><b>time</b> since the current state was changed (in seconds,
with microsecond precision)</listitem>
<listitem><b>information</b> about queries</listitem>
</itemizedlist>
<para>
The 'Info' column will be truncated to the width specified with the
'columns=width' option (notice the third row in the example table below).
For SphinxQL queries, this column contains the raw query text; for API
queries, the full-text query syntax and comments are displayed. For API
snippet calls, the data size is displayed along with the query.
</para>
<programlisting>
mysql> SHOW THREADS OPTION columns=50;
+------+----------+-------+----------+----------------------------------------------------+
| Tid | Proto | State | Time | Info |
+------+----------+-------+----------+----------------------------------------------------+
| 5168 | sphinxql | query | 0.000002 | show threads option columns=50 |
| 5175 | sphinxql | query | 0.000002 | select * from rt where match ( 'the box' ) |
| 1168 | sphinxql | query | 0.000002 | select * from rt where match ( 'the box and faximi |
+------+----------+-------+----------+----------------------------------------------------+
3 rows in set (0.00 sec)
</programlisting>
</sect1>
<!-- add new statement sections above this, and other SphinxQL related sections below -->
<sect1 id="sphinxql-multi-queries">
<title>Multi-statement queries</title>
<para>
Starting with version 2.0.1-beta, SphinxQL supports multi-statement
queries, or batches. The possible inter-statement optimizations described
in <xref linkend="multi-queries"/> apply to SphinxQL just as well.
The batched queries should be separated by a semicolon. Your MySQL
client library needs to support the MySQL multi-query mechanism and
multiple result sets. For instance, the mysqli interface in PHP
and the DBI/DBD libraries in Perl are known to work.
</para>
<para>
Here's a PHP sample showing how to utilize mysqli interface
with Sphinx.
<programlisting><![CDATA[
<?php
$link = mysqli_connect ( "127.0.0.1", "root", "", "", 9306 );
if ( mysqli_connect_errno() )
    die ( "connect failed: " . mysqli_connect_error() );

$batch = "SELECT * FROM test1 ORDER BY group_id ASC;";
$batch .= "SELECT * FROM test1 ORDER BY group_id DESC";
if ( !mysqli_multi_query ( $link, $batch ) )
    die ( "query failed" );

do
{
    // fetch and print result set
    if ( $result = mysqli_store_result($link) )
    {
        while ( $row = mysqli_fetch_row($result) )
            printf ( "id=%s\n", $row[0] );
        mysqli_free_result($result);
    }

    // print divider
    if ( mysqli_more_results($link) )
        printf ( "------\n" );
} while ( mysqli_next_result($link) );
]]></programlisting>
Its output with the sample <code>test1</code> index included
with Sphinx is as follows.
<programlisting>
$ php test_multi.php
id=1
id=2
id=3
id=4
------
id=3
id=4
id=1
id=2
</programlisting>
</para>
<para>
The following statements can currently be used in a batch:
SELECT, SHOW WARNINGS, SHOW STATUS, and SHOW META. An arbitrary
sequence of these statements is allowed. The result sets
returned should match those that would be returned if the
batched queries were sent one by one.
</para>
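<para>
For example, here is a minimal batch that runs a search and then fetches
its meta information in a single round trip (the index name is illustrative):
<programlisting>
SELECT * FROM test1 WHERE MATCH('test'); SHOW META
</programlisting>
</para>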
</sect1>
<sect1 id="sphinxql-comment-syntax">
<title>Comment syntax</title>
<para>
Since version 2.0.1-beta, SphinxQL supports C-style comment syntax.
Everything from an opening <code>/*</code> sequence to a closing
<code>*/</code> sequence is ignored. Comments can span multiple lines,
cannot nest, and do not get logged. MySQL specific
<code>/*! ... */</code> comments are also currently ignored.
(Comment support was added primarily for better compatibility with
dumps produced by <filename>mysqldump</filename>, rather than to improve
general query interoperability between Sphinx and MySQL.)
<programlisting>
SELECT /*! SQL_CALC_FOUND_ROWS */ col1 FROM table1 WHERE ...
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-reserved-keywords">
<title>List of SphinxQL reserved keywords</title>
<para>A complete alphabetical list of keywords that are currently reserved
in SphinxQL syntax (and therefore can not be used as identifiers).
<programlisting>
AND, AS, BY, DIV, FACET, FALSE, FROM, ID, IN, IS, LIMIT,
MOD, NOT, NULL, OR, ORDER, SELECT, TRUE
</programlisting>
</para>
</sect1>
<sect1 id="sphinxql-upgrading-magics">
<title>SphinxQL upgrade notes, version 2.0.1-beta</title>
<para>
This section only applies to existing applications that
use SphinxQL versions prior to 2.0.1-beta.
</para>
<para>
In previous versions, SphinxQL just wrapped around SphinxAPI
and inherited its magic columns and column set quirks. Essentially,
SphinxQL queries could return (slightly) different columns and
in a (slightly) different order than it was explicitly requested
in the query. Namely, <code>weight</code> magic column (which is not
a real column in any index) was added at all times, and GROUP BY
related <code>@count</code>, <code>@group</code>, and <code>@distinct</code>
magic columns were conditionally added when grouping. Also, the order
of columns (attributes) in the result set was actually taken from the
index rather than the query. (So if you asked for columns C, B, A
in your query but they were in the A, B, C order in the index,
they would have been returned in the A, B, C order.)
</para>
<para>
In version 2.0.1-beta, we fixed that. SphinxQL is now more
SQL compliant (and will be brought into as much compliance with
standard SQL syntax as possible).
</para>
<para>
The important changes are as follows:
<itemizedlist>
<listitem><para>
<b><code>@ID</code> magic name is deprecated in favor of
<code>ID</code>.</b> Document ID is considered an attribute.
</para></listitem>
<listitem><para>
<b><code>WEIGHT</code> is no longer implicitly returned</b>,
because it is not actually a column (an index attribute),
but rather an internal function computed per each row (a match).
You have to explicitly ask for it, using the <code>WEIGHT()</code>
function. (The requirement to alias the result will be lifted
in the next release.)
<programlisting>
SELECT id, WEIGHT() w FROM myindex WHERE MATCH('test')
</programlisting>
</para></listitem>
<listitem><para>
<b>You can now use quoted reserved keywords as aliases.</b>
The quote character is backtick ("`", ASCII code 96 decimal,
60 hex). One particularly useful example would be returning
<code>weight</code> column like the old mode:
<programlisting>
SELECT id, WEIGHT() `weight` FROM myindex WHERE MATCH('test')
</programlisting>
</para></listitem>
<listitem><para>
The column order is now different and should now match the
one explicitly defined in the query. So if you are accessing
columns based on their position in the result set rather than
the name (for instance, by using <code>mysql_fetch_row()</code>
rather than <code>mysql_fetch_assoc()</code> in PHP),
<b>check and fix the order of columns in your queries.</b>
</para></listitem>
<listitem><para>
<code>SELECT *</code> returns the columns in index order,
as it used to, including the ID column. However,
<b><code>SELECT *</code> does not automatically return WEIGHT().</b>
To update such queries in case you access columns by names,
simply add it to the query:
<programlisting>
SELECT *, WEIGHT() `weight` FROM myindex WHERE MATCH('test')
</programlisting>
Otherwise, i.e., in case you rely on column order, select
ID, weight, and then other columns:
<programlisting>
SELECT id, *, WEIGHT() `weight` FROM myindex WHERE MATCH('test')
</programlisting>
</para></listitem>
<listitem><para>
<b>Magic <code>@count</code> and <code>@distinct</code>
attributes are no longer implicitly returned</b>. You now
have to explicitly ask for them when using GROUP BY.
(Also note that you currently have to alias them;
that requirement will be lifted in the future.)
<programlisting>
SELECT gid, COUNT(*) q FROM myindex WHERE MATCH('test')
GROUP BY gid ORDER BY q DESC
</programlisting>
</para></listitem>
</itemizedlist>
</para>
</sect1>
</chapter>
<chapter id="api-reference"><title>API reference</title>
<para>
There are a number of native searchd client API implementations
for Sphinx. As of this writing, we officially support our own
PHP, Python, and Java implementations. There are also free,
open-source third-party API implementations for Perl, Ruby, and C++.
</para>
<para>
The reference API implementation is in PHP, because (we believe)
Sphinx is more widely used with PHP than with any other language.
This reference documentation is in turn based on the reference PHP API,
and all code samples in this section are given in PHP.
</para>
<para>
However, all other APIs provide the same methods and implement
the very same network protocol. Therefore the documentation does
apply to them as well. There might be minor differences as to the
method naming conventions or specific data structures used.
But the provided functionality must not differ across languages.
</para>
<sect1 id="api-funcgroup-general"><title>General API functions</title>
<sect2 id="api-func-getlasterror"><title>GetLastError</title>
<para><b>Prototype:</b> function GetLastError()</para>
<para>
Returns the last error message, as a string, in human readable format.
If there were no errors during the previous API call, an empty string is returned.
</para>
<para>
You should call it when any other function (such as <link linkend="api-func-query">Query()</link>)
fails (typically, the failing function returns false). The returned string will
contain the error description.
</para>
<para>
The error message is <emphasis>not</emphasis> reset by this call; so you can safely
call it several times if needed.
</para>
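<para>
For instance, a minimal error handling sketch (the index name is illustrative):
<programlisting>
$res = $cl->Query ( "test query", "myindex" );
if ( $res===false )
    print "Query failed: " . $cl->GetLastError() . "\n";
</programlisting>
</para>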
</sect2>
<sect2 id="api-func-getlastwarning"><title>GetLastWarning</title>
<para><b>Prototype:</b> function GetLastWarning ()</para>
<para>
Returns the last warning message, as a string, in human readable format.
If there were no warnings during the previous API call, an empty string is returned.
</para>
<para>
You should call it to verify whether your request
(such as <link linkend="api-func-query">Query()</link>) was completed but with warnings.
For instance, a search query against a distributed index might complete
successfully even if several remote agents timed out. In that case,
a warning message would be produced.
</para>
<para>
The warning message is <emphasis>not</emphasis> reset by this call; so you can safely
call it several times if needed.
</para>
</sect2>
<sect2 id="api-func-setserver"><title>SetServer</title>
<para><b>Prototype:</b> function SetServer ( $host, $port )</para>
<para>
Sets <filename>searchd</filename> host name and TCP port.
All subsequent requests will use the new host and port settings.
Default host and port are 'localhost' and 9312, respectively.
</para>
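<para>
A usage sketch (the host name is illustrative):
<programlisting>
$cl->SetServer ( "searchbox.example.com", 9312 );
</programlisting>
</para>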
</sect2>
<sect2 id="api-func-setretries"><title>SetRetries</title>
<para><b>Prototype:</b> function SetRetries ( $count, $delay=0 )</para>
<para>
Sets distributed retry count and delay.
</para>
<para>
On temporary failures <filename>searchd</filename> will attempt up to
<code>$count</code> retries per agent. <code>$delay</code> is the delay
between the retries, in milliseconds. Retries are disabled by default.
Note that this call will <b>not</b> make the API itself retry on
temporary failure; it only tells <filename>searchd</filename> to do so.
Currently, the list of temporary failures includes all kinds of connect()
failures and maxed out (too busy) remote agents.
</para>
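<para>
For instance, the following sketch tells <filename>searchd</filename> to attempt
up to 3 retries per agent, waiting 500 milliseconds between the attempts:
<programlisting>
$cl->SetRetries ( 3, 500 );
</programlisting>
</para>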
</sect2>
<sect2 id="api-func-setconnecttimeout"><title>SetConnectTimeout</title>
<para><b>Prototype:</b> function SetConnectTimeout ( $timeout )</para>
<para>
Sets the time allowed to spend connecting to the server before giving up.
</para>
<para>Under some circumstances, the server can be delayed in responding, either
due to network delays, or a query backlog. In either instance, this allows
the client application programmer some degree of control over how their
program interacts with <filename>searchd</filename> when it is not available,
and can ensure that the client application does not fail due to exceeding
the script execution limits (especially in PHP).
</para>
<para>In the event of a failure to connect, an appropriate error code should
be returned back to the application in order for application-level error handling
to advise the user.
</para>
</sect2>
<sect2 id="api-func-setarrayresult"><title>SetArrayResult</title>
<para><b>Prototype:</b> function SetArrayResult ( $arrayresult )</para>
<para>
PHP specific. Controls matches format in the search results set
(whether matches should be returned as an array or a hash).
</para>
<para>
The <code>$arrayresult</code> argument must be boolean. If <code>$arrayresult</code> is <code>false</code>
(the default mode), matches will be returned in PHP hash format with
document IDs as keys, and other information (weight, attributes)
as values. If <code>$arrayresult</code> is true, matches will be returned
as a plain array with complete per-match information including
the document ID.
</para>
<para>
Introduced along with GROUP BY support on MVA attributes.
Group-by-MVA result sets may contain duplicate document IDs.
Thus they need to be returned as plain arrays, because hashes
will only keep one entry per document ID.
</para>
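<para>
A minimal usage sketch (the index name is illustrative): enable array results
before querying, then iterate over the matches as a plain array:
<programlisting>
$cl->SetArrayResult ( true );
$res = $cl->Query ( "test query", "myindex" );
if ( $res &amp;&amp; !empty($res["matches"]) )
    foreach ( $res["matches"] as $match )
        print_r ( $match ); // each entry carries complete per-match info
</programlisting>
</para>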
</sect2>
<sect2 id="api-func-isconnecterror"><title>IsConnectError</title>
<para><b>Prototype:</b> function IsConnectError ()</para>
<para>
Checks whether the last error was a network error on API side, or a remote error
reported by searchd. Returns true if the last connection attempt to searchd failed on API side,
false otherwise (if the error was remote, or there were no connection attempts at all).
Introduced in version 0.9.9-rc1.
</para>
</sect2>
</sect1>
<sect1 id="api-funcgroup-general-query-settings"><title>General query settings</title>
<sect2 id="api-func-setlimits"><title>SetLimits</title>
<para><b>Prototype:</b> function SetLimits ( $offset, $limit, $max_matches=1000, $cutoff=0 )</para>
<para>
Sets the offset into the server-side result set (<code>$offset</code>) and the amount of matches
to return to the client starting from that offset (<code>$limit</code>). Can additionally
control the maximum server-side result set size for the current query (<code>$max_matches</code>)
and the threshold amount of matches to stop searching at (<code>$cutoff</code>).
All parameters must be non-negative integers.
</para>
<para>
The first two parameters to SetLimits() are identical in behavior to the MySQL
LIMIT clause. They instruct <filename>searchd</filename> to return at
most <code>$limit</code> matches starting from match number <code>$offset</code>.
The default offset and limit settings are 0 and 20, that is, to return
the first 20 matches.
</para>
<para>
The <code>max_matches</code> setting controls how many matches <filename>searchd</filename>
will keep in RAM while searching. <b>All</b> matching documents will normally be
processed, ranked, filtered, and sorted even if <code>max_matches</code> is set to 1.
But only the best N documents are stored in memory at any given moment for performance
and RAM usage reasons, and this setting controls that N. Note that there are
<b>two</b> places where the <code>max_matches</code> limit is enforced. The per-query
limit is controlled by this API call, but there is also a per-server limit
controlled by the <code>max_matches</code> setting in the config file. To prevent
RAM usage abuse, the server will not allow setting the per-query limit
higher than the per-server limit.
</para>
<para>
You can't retrieve more than <code>max_matches</code> matches to the client application.
The default limit is set to 1000. Normally, you should not need to go over
this limit. One thousand records is enough to present to the end user.
And if you're thinking about pulling the results to the application
for further sorting or filtering, that would be <b>much</b> more efficient
if performed on the Sphinx side.
</para>
<para>
The <code>$cutoff</code> setting is intended for advanced performance control.
It tells <filename>searchd</filename> to forcibly stop the search query
once <code>$cutoff</code> matches have been found and processed.
</para>
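<para>
For instance, a typical pagination sketch that renders the third page
of results at 20 matches per page:
<programlisting>
$cl->SetLimits ( 40, 20 ); // skip the first 40 matches, return the next 20
</programlisting>
</para>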
</sect2>
<sect2 id="api-func-setmaxquerytime"><title>SetMaxQueryTime</title>
<para><b>Prototype:</b> function SetMaxQueryTime ( $max_query_time )</para>
<para>
Sets maximum search query time, in milliseconds. Parameter must be
a non-negative integer. Default value is 0 which means "do not limit".
</para>
<para>Similar to <code>$cutoff</code> setting from <link linkend="api-func-setlimits">SetLimits()</link>,
but limits elapsed query time instead of processed matches count. Local search queries
will be stopped once that much time has elapsed. Note that if you're performing
a search which queries several local indexes, this limit applies to each index
separately.
</para>
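<para>
A usage sketch:
<programlisting>
$cl->SetMaxQueryTime ( 3000 ); // stop searching after 3 seconds per local index
</programlisting>
</para>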
</sect2>
<sect2 id="api-func-setoverride"><title>SetOverride</title>
<para><b>DEPRECATED</b></para>
<para><b>Prototype:</b> function SetOverride ( $attrname, $attrtype, $values )</para>
<para>
Sets temporary (per-query) per-document attribute value overrides.
Only supports scalar attributes. $values must be a hash that maps document
IDs to overridden attribute values. Introduced in version 0.9.9-rc1.
</para>
<para>
The override feature lets you temporarily update attribute values for some documents
within a single query, leaving all other queries unaffected. This might be useful
for personalized data. For example, assume you're implementing a personalized
search function that wants to boost the posts that the user's friends recommend.
Such data is not just dynamic, but also personal; so you can't simply put it
in the index because you don't want everyone's searches affected. Overrides,
on the other hand, are local to a single query and invisible to everyone else.
So you can, say, set up a "friends_weight" value for every document, defaulting to 0,
then temporarily override it with 1 for documents 123, 456 and 789 (recommended by
the current user's friends), and use that value when ranking.
</para>
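<para>
Following the example above, a sketch that overrides a hypothetical integer
"friends_weight" attribute for documents 123, 456 and 789 (the attribute type
constant must match the attribute's actual type):
<programlisting>
$cl->SetOverride ( "friends_weight", SPH_ATTR_INTEGER,
    array ( 123=>1, 456=>1, 789=>1 ) );
</programlisting>
</para>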
</sect2>
<sect2 id="api-func-setselect"><title>SetSelect</title>
<para><b>Prototype:</b> function SetSelect ( $clause )</para>
<para>
Sets the select clause, listing specific attributes to fetch, and <link linkend="sort-expr">expressions</link>
to compute and fetch. Clause syntax mimics SQL. Introduced in version 0.9.9-rc1.</para>
<para>
SetSelect() is very similar to the part of a typical SQL query between SELECT and FROM.
It lets you choose what attributes (columns) to fetch, and also what expressions
over the columns to compute and fetch. A certain difference from SQL is that expressions
<b>must</b> always be aliased to a correct identifier (consisting of letters and digits)
using the 'AS' keyword. SQL also lets you do that but does not require it. Sphinx enforces
aliases so that the computation results can always be returned under a "normal" name
in the result set, used in other clauses, etc.
</para>
<para>
Everything else is basically identical to SQL. Star ('*') is supported.
Functions are supported. An arbitrary number of expressions is supported.
Computed expressions can be used for sorting, filtering, and grouping,
just like regular attributes.
</para>
<para>
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(),
MAX(), SUM()) are supported when using GROUP BY.
</para>
<para>
Expression sorting (<xref linkend="sort-expr"/>) and geodistance functions
(<xref linkend="api-func-setgeoanchor"/>) are now internally implemented using
this computed expressions mechanism, using magic names '@expr' and '@geodist'
respectively.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
$cl->SetSelect ( "*, @weight+(user_karma+ln(pageviews))*0.1 AS myweight" );
$cl->SetSelect ( "exp_years, salary_gbp*{$gbp_usd_rate} AS salary_usd,
IF(age>40,1,0) AS over40" );
$cl->SetSelect ( "*, AVG(price) AS avgprice" );
</programlisting>
</sect2>
</sect1>
<sect1 id="api-funcgroup-fulltext-query-settings"><title>Full-text search query settings</title>
<sect2 id="api-func-setmatchmode"><title>SetMatchMode</title>
<para><b>DEPRECATED</b></para>
<para><b>Prototype:</b> function SetMatchMode ( $mode )</para>
<para>
Sets full-text query matching mode, as described in <xref linkend="matching-modes"/>.
Parameter must be a constant specifying one of the known modes.
</para>
<para>
<b>WARNING:</b> (PHP specific) you <b>must not</b> put the matching mode
constant name in quotes; that syntax specifies a string and is incorrect:
<programlisting>
$cl->SetMatchMode ( "SPH_MATCH_ANY" ); // INCORRECT! will not work as expected
$cl->SetMatchMode ( SPH_MATCH_ANY ); // correct, works OK
</programlisting>
</para>
</sect2>
<sect2 id="api-func-setrankingmode"><title>SetRankingMode</title>
<para><b>Prototype:</b> function SetRankingMode ( $ranker, $rankexpr="" )</para>
<para>
Sets ranking mode (aka ranker). Only available in SPH_MATCH_EXTENDED
matching mode. Parameter must be a constant specifying one of the known
rankers.
</para>
<para>
By default, in the EXTENDED matching mode Sphinx computes two factors
which contribute to the final match weight. The major part is a phrase
proximity value between the document text and the query.
The minor part is the so-called BM25 statistical function, which varies
from 0 to 1 depending on the keyword frequency within the document
(more occurrences yield higher weight) and within the whole index
(rarer keywords yield higher weight).
</para>
<para>
However, in some cases you'd want to compute weight differently -
or maybe avoid computing it at all for performance reasons because
you're sorting the result set by something else anyway. This can be
accomplished by setting the appropriate ranking mode. The list of
the modes is available in <xref linkend="weighting"/>.
</para>
<para>
<code>$rankexpr</code> argument was added in version 2.0.2-beta.
It lets you specify a ranking formula to use with the
<link linkend="expression-ranker">expression based ranker</link>,
that is, when <code>$ranker</code> is set to SPH_RANK_EXPR.
In all other cases, <code>$rankexpr</code> is ignored.
</para>
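<para>
A usage sketch; the formula in the second call is just one example of
an expression for the expression based ranker (see <xref linkend="weighting"/>
for the available factors):
<programlisting>
$cl->SetRankingMode ( SPH_RANK_BM25 ); // a simpler, faster ranker
$cl->SetRankingMode ( SPH_RANK_EXPR, "sum(lcs*user_weight)*1000+bm25" );
</programlisting>
</para>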
</sect2>
<sect2 id="api-func-setsortmode"><title>SetSortMode</title>
<para><b>Prototype:</b> function SetSortMode ( $mode, $sortby="" )</para>
<para>
Sets matches sorting mode, as described in <xref linkend="sorting-modes"/>.
Parameter must be a constant specifying one of the known modes.
</para>
<para>
<b>WARNING:</b> (PHP specific) you <b>must not</b> put the sorting mode
constant name in quotes; that syntax specifies a string and is incorrect:
<programlisting>
$cl->SetSortMode ( "SPH_SORT_ATTR_DESC" ); // INCORRECT! will not work as expected
$cl->SetSortMode ( SPH_SORT_ATTR_ASC ); // correct, works OK
</programlisting>
</para>
</sect2>
<sect2 id="api-func-setweights"><title>SetWeights</title>
<para><b>Prototype:</b> function SetWeights ( $weights )</para>
<para>
Binds per-field weights in the order of appearance in the index.
<b>DEPRECATED</b>, use <link linkend="api-func-setfieldweights">SetFieldWeights()</link> instead.
</para>
</sect2>
<sect2 id="api-func-setfieldweights"><title>SetFieldWeights</title>
<para><b>Prototype:</b> function SetFieldWeights ( $weights )</para>
<para>
Binds per-field weights by name. Parameter must be a hash (associative array)
mapping string field names to integer weights.
</para>
<para>
Match ranking can be affected by per-field weights. For instance,
see <xref linkend="weighting"/> for an explanation how phrase proximity
ranking is affected. This call lets you specify what non-default
weights to assign to different full-text fields.
</para>
<para>
The weights must be positive 32-bit integers. The final weight
will be a 32-bit integer too. Default weight value is 1. Unknown
field names will be silently ignored.
</para>
<para>
There is no enforced limit on the maximum weight value at the
moment. However, beware that if you set it too high you can start
hitting 32-bit wraparound issues. For instance, if you set
a weight of 10,000,000 and search in extended mode, then the
maximum possible weight will be 10 million (your weight)
times 1 thousand (internal BM25 scaling factor, see <xref linkend="weighting"/>)
times 1 or more (phrase proximity rank). The result is at least 10 billion,
which does not fit in 32 bits and will wrap around, producing
unexpected results.
</para>
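<para>
For instance, to make matches in a "title" field count ten times more
than matches in a "content" field (field names are illustrative):
<programlisting>
$cl->SetFieldWeights ( array ( "title"=>10, "content"=>1 ) );
</programlisting>
</para>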
</sect2>
<sect2 id="api-func-setindexweights"><title>SetIndexWeights</title>
<para><b>Prototype:</b> function SetIndexWeights ( $weights )</para>
<para>
Sets per-index weights, and enables weighted summing of match weights
across different indexes. Parameter must be a hash (associative array)
mapping string index names to integer weights. The default is an empty
array, which means to disable weight summing.
</para>
<para>
When a match with the same document ID is found in several different
local indexes, by default Sphinx simply chooses the match from the index
specified last in the query. This is to support searching through
partially overlapping index partitions.
</para>
<para>
However in some cases the indexes are not just partitions, and you
might want to sum the weights across the indexes instead of picking one.
<code>SetIndexWeights()</code> lets you do that. With summing enabled,
final match weight in result set will be computed as a sum of match
weight coming from the given index multiplied by respective per-index
weight specified in this call. I.e., if document 123 is found in
index A with the weight of 2, and also in index B with the weight of 3,
and you called <code>SetIndexWeights ( array ( "A"=>100, "B"=>10 ) )</code>,
the final weight returned to the client will be 2*100+3*10 = 230.
</para>
</sect2>
</sect1>
<sect1 id="api-funcgroup-filtering"><title>Result set filtering settings</title>
<sect2 id="api-func-setidrange"><title>SetIDRange</title>
<para><b>Prototype:</b> function SetIDRange ( $min, $max )</para>
<para>
Sets an accepted range of document IDs. Parameters must be integers.
Defaults are 0 and 0; that combination means to not limit by range.
</para>
<para>
After this call, only those records that have document ID
between <code>$min</code> and <code>$max</code> (including IDs
exactly equal to <code>$min</code> or <code>$max</code>)
will be matched.
</para>
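<para>
A usage sketch:
<programlisting>
$cl->SetIDRange ( 1000, 1999 ); // only match documents with IDs 1000..1999
</programlisting>
</para>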
</sect2>
<sect2 id="api-func-setfilter"><title>SetFilter</title>
<para><b>Prototype:</b> function SetFilter ( $attribute, $values, $exclude=false )</para>
<para>
Adds new integer values set filter.
</para>
<para>
On this call, an additional filter is added to the existing
list of filters. <code>$attribute</code> must be a string with
attribute name. <code>$values</code> must be a plain array
containing integer values. <code>$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code>$exclude</code> is false) or reject them.
</para>
<para>
Only those documents where <code>$attribute</code> column value
stored in the index matches any of the values from <code>$values</code>
array will be matched (or rejected, if <code>$exclude</code> is true).
</para>
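<para>
For instance, to only match documents whose group_id attribute is 1, 5, or 7,
and to additionally reject documents whose status attribute is 0
(attribute names are illustrative):
<programlisting>
$cl->SetFilter ( "group_id", array ( 1, 5, 7 ) );
$cl->SetFilter ( "status", array ( 0 ), true ); // $exclude=true rejects matches
</programlisting>
</para>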
</sect2>
<sect2 id="api-func-setfilterrange"><title>SetFilterRange</title>
<para><b>Prototype:</b> function SetFilterRange ( $attribute, $min, $max, $exclude=false )</para>
<para>
Adds new integer range filter.
</para>
<para>
On this call, an additional filter is added to the existing
list of filters. <code>$attribute</code> must be a string with
attribute name. <code>$min</code> and <code>$max</code> must be
integers that define the acceptable attribute values range
(including the boundaries). <code>$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code>$exclude</code> is false) or reject them.
</para>
<para>
Only those documents where <code>$attribute</code> column value
stored in the index is between <code>$min</code> and <code>$max</code>
(including values that are exactly equal to <code>$min</code> or <code>$max</code>)
will be matched (or rejected, if <code>$exclude</code> is true).
</para>
</sect2>
<sect2 id="api-func-setfilterfloatrange"><title>SetFilterFloatRange</title>
<para><b>Prototype:</b> function SetFilterFloatRange ( $attribute, $min, $max, $exclude=false )</para>
<para>
Adds new float range filter.
</para>
<para>
On this call, an additional filter is added to the existing
list of filters. <code>$attribute</code> must be a string with
attribute name. <code>$min</code> and <code>$max</code> must be
floats that define the acceptable attribute values range
(including the boundaries). <code>$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code>$exclude</code> is false) or reject them.
</para>
<para>
Only those documents where <code>$attribute</code> column value
stored in the index is between <code>$min</code> and <code>$max</code>
(including values that are exactly equal to <code>$min</code> or <code>$max</code>)
will be matched (or rejected, if <code>$exclude</code> is true).
</para>
</sect2>
<sect2 id="api-func-setgeoanchor"><title>SetGeoAnchor</title>
<para><b>Prototype:</b> function SetGeoAnchor ( $attrlat, $attrlong, $lat, $long )</para>
<para>
Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.
</para>
<para>
<code>$attrlat</code> and <code>$attrlong</code> must be strings that contain the names
of latitude and longitude attributes, respectively. <code>$lat</code> and <code>$long</code>
are floats that specify anchor point latitude and longitude, in radians.
</para>
<para>
Once an anchor point is set, you can use magic <code>"@geodist"</code> attribute
name in your filters and/or sorting expressions. Sphinx will compute geosphere distance
between the given anchor point and a point specified by latitude and longitude
attributes from each full-text match, and attach this value to the resulting match.
The latitude and longitude values both in <code>SetGeoAnchor</code> and the index
attribute data are expected to be in radians. The result will be returned in meters,
so geodistance value of 1000.0 means 1 km. 1 mile is approximately 1609.344 meters.
</para>
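<para>
A usage sketch; attribute names are illustrative, and PHP's deg2rad()
converts the anchor coordinates to the expected radians:
<programlisting>
$cl->SetGeoAnchor ( "lat", "long", deg2rad(55.75), deg2rad(37.62) );
$cl->SetFilterFloatRange ( "@geodist", 0.0, 5000.0 ); // within 5 km of the anchor
</programlisting>
</para>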
</sect2>
<sect2 id="api-func-setfilterstring"><title>SetFilterString</title>
<para><b>Prototype:</b> function SetFilterString ( $attribute, $value, $exclude=false )</para>
<para>
Adds new string value filter.
</para>
<para>
On this call, an additional filter is added to the existing
list of filters. <code>$attribute</code> must be a string with
attribute name. <code>$value</code> must be a string. <code>$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code>$exclude</code> is false) or reject them.
</para>
<para>
Only those documents where <code>$attribute</code> column value
stored in the index matches string value from <code>$value</code>
will be matched (or rejected, if <code>$exclude</code> is true).
</para>
</sect2>
</sect1>
<sect1 id="api-funcgroup-groupby"><title>GROUP BY settings</title>
<sect2 id="api-func-setgroupby"><title>SetGroupBy</title>
<para><b>Prototype:</b> function SetGroupBy ( $attribute, $func, $groupsort="@group desc" )</para>
<para>
Sets grouping attribute, function, and groups sorting mode; and enables grouping
(as described in <xref linkend="clustering"/>).
</para>
<para>
<code>$attribute</code> is a string that contains group-by attribute name.
<code>$func</code> is a constant that chooses a function applied to the attribute value in order to compute group-by key.
<code>$groupsort</code> is a clause that controls how the groups will be sorted. Its syntax is similar
to that described in <xref linkend="sort-extended"/>.
</para>
<para>
The grouping feature is very similar in nature to the GROUP BY clause in SQL.
Results produced by this function call are going to be the same as produced
by the following pseudo code:
<programlisting>
SELECT ... GROUP BY $func($attribute) ORDER BY $groupsort
</programlisting>
Note that it's <code>$groupsort</code> that affects the order of matches
in the final result set. Sorting mode (see <xref linkend="api-func-setsortmode"/>)
affects the ordering of matches <emphasis>within</emphasis> a group, i.e.
which match will be selected as the best one from the group.
So you can, for instance, order the groups by matches count
and select the most relevant match within each group at the same time.
</para>
<para>
Starting with version 0.9.9-rc2, aggregate functions (AVG(), MIN(),
MAX(), SUM()) are supported through <link linkend="api-func-setselect">SetSelect()</link> API call
when using GROUP BY.
</para>
<para>
Starting with version 2.0.1-beta, grouping on string attributes
is supported, with respect to current collation.
</para>
</sect2>
<sect2 id="api-func-setgroupdistinct"><title>SetGroupDistinct</title>
<para><b>Prototype:</b> function SetGroupDistinct ( $attribute )</para>
<para>
Sets attribute name for per-group distinct values count calculations.
Only available for grouping queries.
</para>
<para>
<code>$attribute</code> is a string that contains the attribute name.
For each group, all values of this attribute will be stored (as RAM limits
permit), then the amount of distinct values will be calculated and returned
to the client. This feature is similar to <code>COUNT(DISTINCT)</code>
clause in standard SQL; so these Sphinx calls:
<programlisting>
$cl->SetGroupBy ( "category", SPH_GROUPBY_ATTR, "@count desc" );
$cl->SetGroupDistinct ( "vendor" );
</programlisting>
can be expressed using the following SQL clauses:
<programlisting>
SELECT id, weight, all-attributes,
COUNT(DISTINCT vendor) AS @distinct,
COUNT(*) AS @count
FROM products
GROUP BY category
ORDER BY @count DESC
</programlisting>
In the sample pseudo code shown just above, the <code>SetGroupDistinct()</code> call
corresponds to the <code>COUNT(DISTINCT vendor)</code> clause only.
<code>GROUP BY</code>, <code>ORDER BY</code>, and <code>COUNT(*)</code>
clauses are all equivalents of the <code>SetGroupBy()</code> settings. Both queries
will return one matching row for each category. In addition to indexed attributes,
matches will also contain total per-category matches count, and the count
of distinct vendor IDs within each category.
</para>
</sect2>
</sect1>
<sect1 id="api-funcgroup-querying"><title>Querying</title>
<sect2 id="api-func-query"><title>Query</title>
<para><b>Prototype:</b> function Query ( $query, $index="*", $comment="" )</para>
<para>
Connects to <filename>searchd</filename> server, runs given search query
with current settings, obtains and returns the result set.
</para>
<para>
<code>$query</code> is a query string. <code>$index</code> is an index name (or names) string.
Returns false and sets <code>GetLastError()</code> message on general error.
Returns search result set on success.
Additionally, the contents of <code>$comment</code> are sent to the query log, marked in square brackets, just before the search terms, which can be very useful for debugging.
Currently, the comment is limited to 128 characters.
</para>
<para>
Default value for <code>$index</code> is <code>"*"</code> that means
to query all local indexes. Characters allowed in index names include
Latin letters (a-z), numbers (0-9) and underscore (_);
everything else is considered a separator. Note that an index name should
not start with an underscore character. Therefore, all of the
following sample calls are valid and will search the same
two indexes:
<programlisting>
$cl->Query ( "test query", "main delta" );
$cl->Query ( "test query", "main;delta" );
$cl->Query ( "test query", "main, delta" );
</programlisting>
Index specification order matters. If documents with identical IDs are found
in two or more indexes, the weight and attribute values from the very last matching
index will be used for sorting and returning to the client (unless explicitly
overridden with <link linkend="api-func-setindexweights">SetIndexWeights()</link>). Therefore,
in the example above, matches from the "delta" index will always win over
matches from "main".
</para>
<para>
On success, <code>Query()</code> returns a result set that contains
some of the found matches (as requested by <link linkend="api-func-setlimits">SetLimits()</link>)
and additional general per-query statistics. The result set is a hash
(PHP specific; other languages might utilize other structures instead
of hash) with the following keys and values:
<variablelist>
<varlistentry>
<term>"matches":</term>
<listitem><para>Hash which maps found document IDs to another small hash containing document weight and attribute values
(or an array of the similar small hashes if <link linkend="api-func-setarrayresult">SetArrayResult()</link> was enabled).
</para></listitem>
</varlistentry>
<varlistentry>
<term>"total":</term>
<listitem><para>Total amount of matches retrieved <emphasis>on server</emphasis> (ie. to the server side result set) by this query.
You can retrieve up to this amount of matches from server for this query text with current query settings.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"total_found":</term>
<listitem><para>Total amount of matching documents in index (that were found and processed on server).</para></listitem>
</varlistentry>
<varlistentry>
<term>"words":</term>
<listitem><para>Hash which maps query keywords (case-folded, stemmed, and otherwise processed) to a small hash with per-keyword statistics ("docs", "hits").</para></listitem>
</varlistentry>
<varlistentry>
<term>"error":</term>
<listitem><para>Query error message reported by <filename>searchd</filename> (string, human readable). Empty if there were no errors.</para></listitem>
</varlistentry>
<varlistentry>
<term>"warning":</term>
<listitem><para>Query warning message reported by <filename>searchd</filename> (string, human readable). Empty if there were no warnings.</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
It should be noted that <code>Query()</code> carries out the same actions as
<code>AddQuery()</code> and <code>RunQueries()</code> without the intermediate steps;
it is analogous to a single <code>AddQuery()</code> call, followed by a corresponding
<code>RunQueries()</code>, then returning the first array element of matches
(from the first, and only, query.)
</para>
</sect2>
<sect2 id="api-func-addquery"><title>AddQuery</title>
<para><b>Prototype:</b> function AddQuery ( $query, $index="*", $comment="" )</para>
<para>
Adds additional query with current settings to multi-query batch.
<code>$query</code> is a query string. <code>$index</code> is an index name (or names) string.
Additionally if provided, the contents of <code>$comment</code> are sent to the query log,
marked in square brackets, just before the search terms, which can be very useful for debugging.
Currently, this is limited to 128 characters.
Returns index to results array returned from <link linkend="api-func-runqueries">RunQueries()</link>.
</para>
<para>
Batch queries (or multi-queries) enable <filename>searchd</filename> to perform internal
optimizations if possible. They also reduce network connection overheads and search process
creation overheads in all cases. They do not result in any additional overheads compared
to simple queries. Thus, if you run several different queries from your web page,
you should always consider using multi-queries.
</para>
<para>
For instance, running the same full-text query but with different
sorting or group-by settings will enable <filename>searchd</filename>
to perform expensive full-text search and ranking operation only once,
but compute multiple group-by results from its output.
</para>
<para>
This can be a big saver when you need to display not just plain
search results but also some per-category counts, such as the amount of
products grouped by vendor. Without multi-query, you would have to run several
queries which perform essentially the same search and retrieve the
same matches, but create result sets differently. With multi-query,
you simply pass all these queries in a single batch and Sphinx
optimizes the redundant full-text search internally.
</para>
<para>
<code>AddQuery()</code> internally saves full current settings state
along with the query, and you can safely change them afterwards for subsequent
<code>AddQuery()</code> calls. Already added queries will not be affected;
there's actually no way to change them at all. Here's an example:
<programlisting>
$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->AddQuery ( "hello world", "documents" );
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "price" );
$cl->AddQuery ( "ipod", "products" );
$cl->AddQuery ( "harry potter", "books" );
$results = $cl->RunQueries ();
</programlisting>
With the code above, 1st query will search for "hello world" in "documents" index
and sort results by relevance, 2nd query will search for "ipod" in "products"
index and sort results by price, and 3rd query will search for "harry potter"
in "books" index while still sorting by price. Note that 2nd <code>SetSortMode()</code> call
does not affect the first query (because it's already added) but affects both other
subsequent queries.
</para>
<para>
Additionally, any filters set up before an <code>AddQuery()</code> will fall through to subsequent
queries. So, if <code>SetFilter()</code> is called before the first query, the same filter
will be in place for the second (and subsequent) queries batched through <code>AddQuery()</code>
unless you call <code>ResetFilters()</code> first. Alternatively, you can add additional filters
as well.</para>
<para>The same is true for grouping and sorting options; the current sorting,
filtering, and grouping settings are not affected by this call, so
subsequent queries will reuse them.
</para>
<para>
<code>AddQuery()</code> returns an index into an array of results
that will be returned from <code>RunQueries()</code> call. It is simply
a sequentially increasing 0-based integer, i.e. the first call will return 0,
the second will return 1, and so on. Just a small helper so you won't have
to track the indexes manually if you need them.
</para>
</sect2>
<sect2 id="api-func-runqueries"><title>RunQueries</title>
<para><b>Prototype:</b> function RunQueries ()</para>
<para>
Connects to searchd, runs a batch of all queries added using <code>AddQuery()</code>,
obtains and returns the result sets. Returns false and sets <code>GetLastError()</code>
message on general error (such as network I/O failure). Returns a plain array
of result sets on success.
</para>
<para>
Each result set in the returned array is exactly the same as
the result set returned from <link linkend="api-func-query"><code>Query()</code></link>.
</para>
<para>
Note that the batch query request itself almost always succeeds -
unless there's a network error, blocking index rotation in progress,
or another general failure which prevents the whole request from being
processed.
</para>
<para>
However individual queries within the batch might very well fail.
In this case their respective result sets will contain non-empty <code>"error"</code> message,
but no matches or query statistics. In the extreme case all queries within the batch
could fail. There still will be no general error reported, because API was able to
successfully connect to <filename>searchd</filename>, submit the batch, and receive
the results - but every result set will have a specific error message.
</para>
</sect2>
<sect2 id="api-func-resetfilters"><title>ResetFilters</title>
<para><b>Prototype:</b> function ResetFilters ()</para>
<para>
Clears all currently set filters.
</para>
<para>
This call is only normally required when using multi-queries. You might want
to set different filters for different queries in the batch. To do that,
you should call <code>ResetFilters()</code> and add new filters using
the respective calls.
</para>
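<para>
For instance, here is a batch where the two queries use different filters
(attribute and index names are illustrative):
<programlisting>
$cl->SetFilter ( "group_id", array ( 1 ) );
$cl->AddQuery ( "test", "myindex" );
$cl->ResetFilters (); // drop the group_id filter
$cl->SetFilter ( "status", array ( 0 ) );
$cl->AddQuery ( "test", "myindex" );
$results = $cl->RunQueries ();
</programlisting>
</para>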
</sect2>
<sect2 id="api-func-resetgroupby"><title>ResetGroupBy</title>
<para><b>Prototype:</b> function ResetGroupBy ()</para>
<para>
Clears all current group-by settings, and disables group-by.
</para>
<para>
This call is only normally required when using multi-queries.
You can change individual group-by settings using <code>SetGroupBy()</code>
and <code>SetGroupDistinct()</code> calls, but you can not disable
group-by using those calls. <code>ResetGroupBy()</code>
fully resets previous group-by settings and disables group-by mode
in the current state, so that subsequent <code>AddQuery()</code>
calls can perform non-grouping searches.
</para>
</sect2>
</sect1>
<sect1 id="api-funcgroup-additional-functionality"><title>Additional functionality</title>
<sect2 id="api-func-buildexcerpts"><title>BuildExcerpts</title>
<para><b>Prototype:</b> function BuildExcerpts ( $docs, $index, $words, $opts=array() )</para>
<para>
Excerpts (snippets) builder function. Connects to <filename>searchd</filename>,
asks it to generate excerpts (snippets) from given documents, and returns the results.
</para>
<para>
<code>$docs</code> is a plain array of strings that carry the documents' contents.
<code>$index</code> is an index name string. Different settings (such as charset,
morphology, wordforms) from given index will be used.
<code>$words</code> is a string that contains the keywords to highlight. They will
be processed with respect to index settings. For instance, if English stemming
is enabled in the index, "shoes" will be highlighted even if keyword is "shoe".
Starting with version 0.9.9-rc1, keywords can contain wildcards, that work similarly to
star-syntax available in queries.
<code>$opts</code> is a hash which contains additional optional highlighting parameters:
<variablelist>
<varlistentry>
<term>"before_match":</term>
<listitem><para>A string to insert before a keyword match. Starting with version 1.10-beta,
a %PASSAGE_ID% macro can be used in this string. The first match of the macro is replaced
with an incrementing passage number within a current snippet. Numbering starts at 1
by default but can be overridden with "start_passage_id" option. In a multi-document call,
%PASSAGE_ID% would restart at every given document. Default is "<b>".</para></listitem>
</varlistentry>
<varlistentry>
<term>"after_match":</term>
<listitem><para>A string to insert after a keyword match. Starting with version 1.10-beta,
a %PASSAGE_ID% macro can be used in this string. Default is "</b>".</para></listitem>
</varlistentry>
<varlistentry>
<term>"chunk_separator":</term>
<listitem><para>A string to insert between snippet chunks (passages). Default is " ... ".</para></listitem>
</varlistentry>
<varlistentry>
<term>"limit":</term>
<listitem><para>Maximum snippet size, in symbols (codepoints). Integer, default is 256.</para></listitem>
</varlistentry>
<varlistentry>
<term>"around":</term>
<listitem><para>How many words to pick around each matching keyword block. Integer, default is 5.</para></listitem>
</varlistentry>
<varlistentry>
<term>"exact_phrase":</term>
<listitem><para>Whether to highlight exact query phrase matches only instead of individual keywords. Boolean, default is false.</para></listitem>
</varlistentry>
<varlistentry>
<term>"use_boundaries":</term>
<listitem><para>Whether to additionally break passages by phrase
boundary characters, as configured in index settings with
<link linkend="conf-phrase-boundary">phrase_boundary</link>
directive. Boolean, default is false.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"weight_order":</term>
<listitem><para>Whether to sort the extracted passages in order of relevance (decreasing weight),
or in order of appearance in the document (increasing position). Boolean, default is false.</para></listitem>
</varlistentry>
<varlistentry>
<term>"query_mode":</term>
<listitem><para>Added in version 1.10-beta. Whether to handle $words as a query in
<link linkend="extended-syntax">extended syntax</link>, or as a bag of words
(default behavior). For instance, in query mode ("one two" | "three four") will
only highlight and include occurrences of "one two" or "three four" when
the two words from each pair are adjacent to each other. In default mode,
any single occurrence of "one", "two", "three", or "four" would be
highlighted. Boolean, default is false.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"force_all_words":</term>
<listitem><para>Added in version 1.10-beta. Ignores the snippet length limit until it
includes all the keywords. Boolean, default is false.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"limit_passages":</term>
<listitem><para>Added in version 1.10-beta. Limits the maximum number of passages
that can be included into the snippet. Integer, default is 0 (no limit).
</para></listitem>
</varlistentry>
<varlistentry>
<term>"limit_words":</term>
<listitem><para>Added in version 1.10-beta. Limits the maximum number of words
that can be included into the snippet. Note the limit applies to any words, and
not just the matched keywords to highlight. For example, if we are highlighting
"Mary" and a passage "Mary had a little lamb" is selected, then it contributes
5 words to this limit, not just 1. Integer, default is 0 (no limit).
</para></listitem>
</varlistentry>
<varlistentry>
<term>"start_passage_id":</term>
<listitem><para>Added in version 1.10-beta. Specifies the starting value of
%PASSAGE_ID% macro (that gets detected and expanded in <option>before_match</option>,
<option>after_match</option> strings). Integer, default is 1.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"load_files":</term>
<listitem><para>Added in version 1.10-beta. Whether to handle $docs as data
to extract snippets from (default behavior), or to treat it as file names,
and load data from specified files on the server side. Starting with
version 2.0.1-beta, up to <link linkend="conf-dist-threads">dist_threads</link>
worker threads per request will be created to parallelize the work
when this flag is enabled. Boolean, default is false. Starting with version 2.0.2-beta,
snippet building can be parallelized between remote agents. Just set the <link linkend='conf-dist-threads'>'dist_threads'</link> param in the config
to a value greater than 1, and then invoke the snippets
generation over a distributed index, which should contain only one(!) <link linkend='conf-local'>local</link> agent and several remote ones.
Starting with version 2.1.1-beta, the <link linkend="conf-snippets-file-prefix">snippets_file_prefix</link> option
also comes into play: the final file name is calculated by concatenating the prefix with the given name.
In other words, when snippets_file_prefix is '/var/data' and the file name is 'text.txt', Sphinx will try to generate the snippets
from the file '/var/datatext.txt', which is exactly '/var/data' + 'text.txt'.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"load_files_scattered":</term>
<listitem><para>Added in version 2.0.2-beta. Works only with distributed snippets generation
with remote agents. The source files for snippets can be distributed among different agents, and the main daemon will merge
together all non-erroneous results. So, if one agent of the distributed index has 'file1.txt', another has 'file2.txt', and you request the snippets
for both these files, Sphinx will merge the results from the agents together, so you will get the snippets from both 'file1.txt' and 'file2.txt'.
Boolean, default is false.
</para>
<para>If "load_files" is also set, the request will return an error if any of the files is not available anywhere. Otherwise (if "load_files" is not set)
it will just return empty strings for all absent files. The master instance resets this flag when distributing the snippets among agents. So, for the agents the absence of a file
is not a critical error, but for the master it might be. If you want to be sure that all snippets are actually created, set both "load_files_scattered" and "load_files". If the
absence of some snippets on some agents is not critical for you, set just "load_files_scattered" and leave "load_files" unset.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>"html_strip_mode":</term>
<listitem><para>Added in version 1.10-beta. HTML stripping mode setting.
Defaults to "index", which means that index settings will be used.
The other values are "none" and "strip", which forcibly skip or apply
stripping regardless of index settings; and "retain", which retains
HTML markup and protects it from highlighting. The "retain" mode can
only be used when highlighting full documents and thus requires that
no snippet size limits are set. String, allowed values are "none",
"strip", "index", and "retain".
</para></listitem>
</varlistentry>
<varlistentry>
<term>"allow_empty":</term>
<listitem><para>Added in version 1.10-beta. Allows empty string to be
returned as highlighting result when a snippet could not be generated
(no keywords match, or no passages fit the limit). By default,
the beginning of original text would be returned instead of an empty
string. Boolean, default is false.
</para></listitem>
</varlistentry>
<varlistentry>
<term>"passage_boundary":</term>
<listitem><para>Added in version 2.0.1-beta. Ensures that passages do not
cross a sentence, paragraph, or zone boundary (when used with an index
that has the respective indexing settings enabled). String, allowed
values are "sentence", "paragraph", and "zone".
</para></listitem>
</varlistentry>
<varlistentry>
<term>"emit_zones":</term>
<listitem><para>Added in version 2.0.1-beta. Emits an HTML tag with
an enclosing zone name before each passage. Boolean, default is false.
</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
The snippets extraction algorithm currently favors better passages
(with closer phrase matches), and then passages with keywords not
yet in the snippet. Generally, it will try to highlight the best match
for the query, and it will also try to highlight all the query keywords,
as permitted by the limits. If the document does not match the
query, the beginning of the document, trimmed down according to the
limits, will be returned by default. Starting with 1.10-beta, you can
instead have an empty snippet returned in that case by setting the
"allow_empty" option to true.
</para>
<para>
Returns false on failure. Returns a plain array of strings with excerpts (snippets) on success.
</para>
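<para>
A usage sketch (the index name is illustrative; see the options list above):
<programlisting>
$docs = array ( "this is my test text to be highlighted" );
$res = $cl->BuildExcerpts ( $docs, "test1", "my text",
    array ( "around"=>3, "limit"=>120 ) );
if ( $res )
    foreach ( $res as $snippet )
        print "$snippet\n";
</programlisting>
</para>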
</sect2>
<sect2 id="api-func-updateatttributes"><title>UpdateAttributes</title>
<para><b>Prototype:</b> function UpdateAttributes ( $index, $attrs, $values, $mva=false, $ignorenonexistent=false )</para>
<para>
Instantly updates given attribute values in given documents.
Returns number of actually updated documents (0 or more) on success, or -1 on failure.
</para>
<para>
<code>$index</code> is a name of the index (or indexes) to be updated.
<code>$attrs</code> is a plain array with string attribute names, listing attributes that are updated.
<code>$values</code> is a hash where key is document ID, and value is a plain array of new attribute values.
Optional boolean parameter <code>$mva</code> indicates that MVA
attributes are being updated. In that case, <code>$values</code> must
be a hash with document IDs as keys, and arrays of arrays of integers
(the new MVA attribute values) as values.
Optional boolean parameter <code>$ignorenonexistent</code>
(added in version 2.1.1-beta) makes the
update silently ignore any attempts to update a
column that does not exist in the current index schema. </para>
<para>
<code>$index</code> can be either a single index name or a list, like in <code>Query()</code>.
Unlike <code>Query()</code>, wildcard is not allowed and all the indexes
to update must be specified explicitly. The list of indexes can include
distributed index names. Updates on distributed indexes will be pushed
to all agents.
</para>
<para>
The updates only work with <code>docinfo=extern</code> storage strategy.
They are very fast because they're working fully in RAM, but they can also
be made persistent: updates are saved on disk on clean <filename>searchd</filename>
shutdown initiated by SIGTERM signal. With additional restrictions, updates
are also possible on MVA attributes; refer to <link linkend="conf-mva-updates-pool">mva_updates_pool</link>
directive for details.
</para>
<para>
Usage example:
<programlisting>
$cl->UpdateAttributes ( "test1", array("group_id"), array(1=>array(456)) );
$cl->UpdateAttributes ( "products", array ( "price", "amount_in_stock" ),
array ( 1001=>array(123,5), 1002=>array(37,11), 1003=>array(25,129) ) );
</programlisting>
The first sample statement will update document 1 in index "test1", setting "group_id" to 456.
The second one will update documents 1001, 1002 and 1003 in index "products". For document 1001,
the new price will be set to 123 and the new amount in stock to 5; for document 1002, the new price
will be 37 and the new amount will be 11; etc.
</para>
</sect2>
<sect2 id="api-func-buildkeywords"><title>BuildKeywords</title>
<para><b>Prototype:</b> function BuildKeywords ( $query, $index, $hits )</para>
<para>
Extracts keywords from query using tokenizer settings for given index, optionally with per-keyword occurrence statistics.
Returns an array of hashes with per-keyword information.
</para>
<para>
<code>$query</code> is a query to extract keywords from.
<code>$index</code> is a name of the index to get tokenizing settings and keyword occurrence statistics from.
<code>$hits</code> is a boolean flag that indicates whether keyword occurrence statistics are required.
</para>
<para>
Usage example:
</para>
<programlisting>
$keywords = $cl->BuildKeywords ( "this.is.my query", "test1", false );
</programlisting>
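<para>
The returned hashes can then be iterated over as usual. A sketch, assuming
(as in the PHP API) that each hash carries "tokenized" and "normalized" keys,
plus "docs" and "hits" counters when $hits is true:
<programlisting>
$keywords = $cl->BuildKeywords ( "this.is.my query", "test1", true );
foreach ( $keywords as $kw )
	print $kw["tokenized"] . " => " . $kw["normalized"]
		. " (docs=" . $kw["docs"] . ", hits=" . $kw["hits"] . ")\n";
</programlisting>
</para>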
</sect2>
<sect2 id="api-func-escapestring"><title>EscapeString</title>
<para><b>Prototype:</b> function EscapeString ( $string )</para>
<para>
Escapes characters that are treated as special operators by the query language parser.
Returns an escaped string.
</para>
<para>
<code>$string</code> is a string to escape.
</para>
<para>
This function might seem redundant because it's trivial to implement in any calling
application. However, as the set of special characters might change over time, it makes
sense to have an API call that is guaranteed to escape all such characters at all times.
</para>
<para>
Usage example:
</para>
<programlisting>
$escaped = $cl->EscapeString ( "escaping-sample@query/string" );
</programlisting>
</sect2>
<sect2 id="api-func-status"><title>Status</title>
<para><b>Prototype:</b> function Status ()</para>
<para>
Queries searchd status, and returns an array of status variable name and value pairs.
</para>
<para>
Usage example:
</para>
<programlisting>
$status = $cl->Status ();
foreach ( $status as $row )
print join ( ": ", $row ) . "\n";
</programlisting>
</sect2>
<sect2 id="api-func-flushattributes"><title>FlushAttributes</title>
<para><b>Prototype:</b> function FlushAttributes ()</para>
<para>
Forces <filename>searchd</filename> to flush pending attribute updates
to disk, and blocks until completion. Returns a non-negative internal
"flush tag" on success. Returns -1 and sets an error message on error.
Introduced in version 1.10-beta.
</para>
<para>
Attribute values updated using <link linkend="api-func-updateatttributes">UpdateAttributes()</link>
API call are only kept in RAM until a so-called flush (which writes
the current, possibly updated attribute values back to disk). FlushAttributes()
call lets you enforce a flush. The call will block until <filename>searchd</filename>
finishes writing the data to disk, which might take seconds or even minutes
depending on the total data size (.spa file size). All the currently updated
indexes will be flushed.
</para>
<para>
The flush tag should be treated as an ever-growing magic number that does not
mean anything in itself. It's guaranteed to be non-negative. It is guaranteed to grow over
time, though not necessarily in a sequential fashion; for instance, two calls that
return 10 and then 1000 respectively are a valid situation. If two calls to
FlushAttributes() return the same tag, it means that there were no actual attribute
updates in between them, and therefore the current flushed state remained the same
(for all indexes).
</para>
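<para>
As a consequence, the tag can be used to check whether any updates happened
between two points in time. A sketch (error handling omitted):
<programlisting>
$tag1 = $cl->FlushAttributes ();
// ... time passes, possibly with UpdateAttributes() calls ...
$tag2 = $cl->FlushAttributes ();
if ( $tag1>=0 )
	if ( $tag1==$tag2 )
		print "no attribute updates happened in between\n";
</programlisting>
</para>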
<para>
Usage example:
</para>
<programlisting>
$status = $cl->FlushAttributes ();
if ( $status<0 )
print "ERROR: " . $cl->GetLastError();
</programlisting>
</sect2>
</sect1>
<sect1 id="api-funcgroup-pconn"><title>Persistent connections</title>
<para>
Persistent connections allow using a single network connection to run
multiple commands that would otherwise require reconnects.
</para>
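<para>
A typical usage sketch wraps a batch of requests into an Open()/Close() pair,
so that all of them reuse one connection:
<programlisting>
$cl->Open ();
$res1 = $cl->Query ( "first query", "test1" );
$res2 = $cl->Query ( "second query", "test1" );
$cl->Close ();
</programlisting>
</para>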
<sect2 id="api-func-open"><title>Open</title>
<para><b>Prototype:</b> function Open ()</para>
<para>
Opens persistent connection to the server.
</para>
</sect2>
<sect2 id="api-func-close"><title>Close</title>
<para><b>Prototype:</b> function Close ()</para>
<para>
Closes previously opened persistent connection.
</para>
</sect2>
</sect1>
</chapter>
<chapter id="sphinxse"><title>MySQL storage engine (SphinxSE)</title>
<sect1 id="sphinxse-overview"><title>SphinxSE overview</title>
<para>
SphinxSE is a MySQL storage engine which can be compiled
into MySQL server 5.x using its pluggable architecture.
It is not available for the MySQL 4.x series. It requires
MySQL 5.0.22 or higher in the 5.0.x series, or MySQL 5.1.12
or higher in the 5.1.x series.
</para>
<para>
Despite the name, SphinxSE does <emphasis>not</emphasis>
actually store any data itself. Rather, it is a built-in client
which allows MySQL server to talk to <filename>searchd</filename>,
run search queries, and obtain search results. All indexing and
searching happen outside MySQL.
</para>
<para>
Obvious SphinxSE applications include:
<itemizedlist>
<listitem><para>easier porting of MySQL FTS applications to Sphinx;</para></listitem>
<listitem><para>allowing Sphinx use with programming languages for which native APIs are not available yet;</para></listitem>
<listitem><para>optimizations when additional Sphinx result set processing on MySQL side is required
(eg. JOINs with original document tables, additional MySQL-side filtering, etc).</para></listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="sphinxse-installing"><title>Installing SphinxSE</title>
<para>
You will need to obtain a copy of the MySQL sources, prepare them,
and then recompile the MySQL binary.
MySQL sources (mysql-5.x.yy.tar.gz) can be obtained from the
<ulink url="http://dev.mysql.com">dev.mysql.com</ulink> Web site.
</para>
<para>
For some MySQL versions, there are delta tarballs with already
prepared source versions available from the Sphinx Web site. After unzipping
one of those over the original sources, MySQL is ready to be configured and
built with Sphinx support.
</para>
<para>
If such a tarball is not available, or does not work for you for any
reason, you will have to prepare the sources manually. You will need the
GNU Autotools framework (autoconf, automake and libtool) installed
to do that.
</para>
<sect2 id="sphinxse-mysql50"><title>Compiling MySQL 5.0.x with SphinxSE</title>
<orderedlist>
<listitem><para>copy <filename>sphinx.5.0.yy.diff</filename> patch file
into MySQL sources directory and run
<programlisting>
patch -p1 < sphinx.5.0.yy.diff
</programlisting>
If there's no .diff file for the exact version you need
to build, try applying the .diff with the closest version numbers. It is
important that the patch applies with no rejects.
</para></listitem>
<listitem><para>in MySQL sources directory, run
<programlisting>
sh BUILD/autorun.sh
</programlisting>
</para></listitem>
<listitem><para>in the MySQL sources directory, create a <filename>sql/sphinx</filename>
directory and copy all files from the <filename>mysqlse</filename> directory
in the Sphinx sources there. Example:
<programlisting>
cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.0.24/sql/sphinx
</programlisting>
</para></listitem>
<listitem><para>
configure MySQL and enable Sphinx engine:
<programlisting>
./configure --with-sphinx-storage-engine
</programlisting>
</para></listitem>
<listitem><para>
build and install MySQL:
<programlisting>
make
make install
</programlisting>
</para></listitem>
</orderedlist>
</sect2>
<sect2 id="sphinxse-mysql51"><title>Compiling MySQL 5.1.x with SphinxSE</title>
<orderedlist>
<listitem><para>in the MySQL sources directory, create a <filename>storage/sphinx</filename>
directory and copy all files from the <filename>mysqlse</filename> directory
in the Sphinx sources there. Example:
<programlisting>
cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.1.14/storage/sphinx
</programlisting>
</para></listitem>
<listitem><para>in MySQL sources directory, run
<programlisting>
sh BUILD/autorun.sh
</programlisting>
</para></listitem>
<listitem><para>
configure MySQL and enable Sphinx engine:
<programlisting>
./configure --with-plugins=sphinx
</programlisting>
</para></listitem>
<listitem><para>
build and install MySQL:
<programlisting>
make
make install
</programlisting>
</para></listitem>
</orderedlist>
</sect2>
<sect2 id="sphinxse-checking"><title>Checking SphinxSE installation</title>
<para>
To check whether SphinxSE has been successfully compiled
into MySQL, launch the newly built server, run the mysql client and
issue a <code>SHOW ENGINES</code> query. You should see a list
of all available engines. Sphinx should be present and the "Support"
column should contain "YES":
</para>
<programlisting>
mysql> show engines;
+------------+----------+-------------------------------------------------------------+
| Engine     | Support  | Comment                                                     |
+------------+----------+-------------------------------------------------------------+
| MyISAM     | DEFAULT  | Default engine as of MySQL 3.23 with great performance      |
...
| SPHINX     | YES      | Sphinx storage engine                                       |
...
+------------+----------+-------------------------------------------------------------+
13 rows in set (0.00 sec)
</programlisting>
</sect2>
</sect1>
<sect1 id="sphinxse-using"><title>Using SphinxSE</title>
<para>
To search via SphinxSE, you would need to create a special ENGINE=SPHINX "search table",
and then SELECT from it with a full text query put into the WHERE clause for the query column.
</para>
<para>
Let's begin with an example create statement and search query:
<programlisting>
CREATE TABLE t1
(
id INTEGER UNSIGNED NOT NULL,
weight INTEGER NOT NULL,
query VARCHAR(3072) NOT NULL,
group_id INTEGER,
INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:9312/test";
SELECT * FROM t1 WHERE query='test it;mode=any';
</programlisting>
</para>
<para>
The first 3 columns of the search table <emphasis>must</emphasis> be of the following types:
<code>INTEGER UNSIGNED</code> or <code>BIGINT</code> for the 1st column (document id),
<code>INTEGER</code> or <code>BIGINT</code> for the 2nd column (match weight), and
<code>VARCHAR</code> or <code>TEXT</code> for the 3rd column (your query), respectively.
This mapping is fixed; you can not omit any of these three required columns,
or move them around, or change types. Also, the query column must be indexed;
all the others must be kept unindexed. Column names are ignored so you
can use arbitrary ones.
</para>
<para>
Additional columns must be either <code>INTEGER</code>, <code>TIMESTAMP</code>,
<code>BIGINT</code>, <code>VARCHAR</code>, or <code>FLOAT</code>.
They will be bound to attributes provided in Sphinx result set by name, so their
names must match attribute names specified in <filename>sphinx.conf</filename>.
If there's no such attribute name in Sphinx search results, column will have
<code>NULL</code> values.
</para>
<para>
Special "virtual" attributes names can also be bound to SphinxSE columns.
<code>_sph_</code> needs to be used instead of <code>@</code> for that.
For instance, to obtain the values of <code>@groupby</code>, <code>@count</code>,
or <code>@distinct</code> virtual attributes, use <code>_sph_groupby</code>,
<code>_sph_count</code> or <code>_sph_distinct</code> column names, respectively.
</para>
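<para>
For example, here is a sketch of a search table that also captures the
@groupby and @count values for group-by queries (the table name and the
non-required columns are illustrative):
<programlisting>
CREATE TABLE t2
(
    id BIGINT NOT NULL,
    weight INTEGER NOT NULL,
    query VARCHAR(3072) NOT NULL,
    _sph_groupby BIGINT,
    _sph_count INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:9312/test";
SELECT * FROM t2 WHERE query='test;groupby=attr:group_id';
</programlisting>
</para>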
<para>
<code>CONNECTION</code> string parameter can be used to specify default
searchd host, port and indexes for queries issued using this table.
If no connection string is specified in <code>CREATE TABLE</code>,
index name "*" (ie. search all indexes) and localhost:9312 are assumed.
Connection string syntax is as follows:
<programlisting>
CONNECTION="sphinx://HOST:PORT/INDEXNAME"
</programlisting>
You can change the default connection string later:
<programlisting>
ALTER TABLE t1 CONNECTION="sphinx://NEWHOST:NEWPORT/NEWINDEXNAME";
</programlisting>
You can also override all these parameters per-query.
</para>
<para>
As seen in the example, both the query text and the search options should be put
into the WHERE clause on the search query column (ie. 3rd column); the options
are separated by semicolons, and their names are separated from values by an equals sign.
Any number of options can be specified. Available options are:
<itemizedlist>
<listitem><para>query - query text;</para></listitem>
<listitem><para>mode - matching mode. Must be one of "all", "any", "phrase",
"boolean", or "extended". Default is "all";</para></listitem>
<listitem><para>sort - match sorting mode. Must be one of "relevance", "attr_desc",
"attr_asc", "time_segments", or "extended". In all modes besides "relevance"
attribute name (or sorting clause for "extended") is also required after a colon:
<programlisting>
... WHERE query='test;sort=attr_asc:group_id';
... WHERE query='test;sort=extended:@weight desc, group_id asc';
</programlisting>
</para></listitem>
<listitem><para>offset - offset into result set, default is 0;</para></listitem>
<listitem><para>limit - amount of matches to retrieve from result set, default is 20;</para></listitem>
<listitem><para>index - names of the indexes to search:
<programlisting>
... WHERE query='test;index=test1;';
... WHERE query='test;index=test1,test2,test3;';
</programlisting>
</para></listitem>
<listitem><para>minid, maxid - min and max document ID to match;</para></listitem>
<listitem><para>weights - comma-separated list of weights to be assigned to Sphinx full-text fields:
<programlisting>
... WHERE query='test;weights=1,2,3;';
</programlisting>
</para></listitem>
<listitem><para>filter, !filter - comma-separated attribute name and a set of values to match:
<programlisting>
# only include groups 1, 5 and 19
... WHERE query='test;filter=group_id,1,5,19;';
# exclude groups 3 and 11
... WHERE query='test;!filter=group_id,3,11;';
</programlisting>
</para></listitem>
<listitem><para>range, !range - comma-separated (integer or bigint) Sphinx attribute name,
and min and max values to match:
<programlisting>
# include groups from 3 to 7, inclusive
... WHERE query='test;range=group_id,3,7;';
# exclude groups from 5 to 25
... WHERE query='test;!range=group_id,5,25;';
</programlisting>
</para></listitem>
<listitem><para>floatrange, !floatrange - comma-separated (floating point) Sphinx attribute name,
and min and max values to match:
<programlisting>
# filter by a float size
... WHERE query='test;floatrange=size,2,3;';
# pick all results within 1000 meter from geoanchor
... WHERE query='test;floatrange=@geodist,0,1000;';
</programlisting>
</para></listitem>
<listitem><para>maxmatches - per-query max matches value, as in max_matches parameter to
<link linkend="api-func-setlimits">SetLimits()</link> API call:
<programlisting>
... WHERE query='test;maxmatches=2000;';
</programlisting>
</para></listitem>
<listitem><para>cutoff - maximum allowed matches, as in cutoff parameter to
<link linkend="api-func-setlimits">SetLimits()</link> API call:
<programlisting>
... WHERE query='test;cutoff=10000;';
</programlisting>
</para></listitem>
<listitem><para>maxquerytime - maximum allowed query time (in milliseconds), as in
<link linkend="api-func-setmaxquerytime">SetMaxQueryTime()</link> API call:
<programlisting>
... WHERE query='test;maxquerytime=1000;';
</programlisting>
</para></listitem>
<listitem><para>groupby - group-by function and attribute, corresponding to
<link linkend="api-func-setgroupby">SetGroupBy()</link> API call:
<programlisting>
... WHERE query='test;groupby=day:published_ts;';
... WHERE query='test;groupby=attr:group_id;';
</programlisting>
</para></listitem>
<listitem><para>groupsort - group-by sorting clause:
<programlisting>
... WHERE query='test;groupsort=@count desc;';
</programlisting>
</para></listitem>
<listitem><para>distinct - an attribute to compute COUNT(DISTINCT) for when doing group-by, as in
<link linkend="api-func-setgroupdistinct">SetGroupDistinct()</link> API call:
<programlisting>
... WHERE query='test;groupby=attr:country_id;distinct=site_id';
</programlisting>
</para></listitem>
<listitem><para>indexweights - comma-separated list of index names and weights
to use when searching through several indexes:
<programlisting>
... WHERE query='test;indexweights=idx_exact,2,idx_stemmed,1;';
</programlisting>
</para></listitem>
<listitem><para>fieldweights - comma-separated list of per-field weights
that can be used by the ranker:
<programlisting>
... WHERE query='test;fieldweights=title,10,abstract,3,content,1;';
</programlisting>
</para></listitem>
<listitem><para>comment - a string to mark this query in query log
(mapping to $comment parameter in <link linkend="api-func-query">Query()</link> API call):
<programlisting>
... WHERE query='test;comment=marker001;';
</programlisting>
</para></listitem>
<listitem><para>select - a string with expressions to compute
(mapping to <link linkend="api-func-setselect">SetSelect()</link> API call):
<programlisting>
... WHERE query='test;select=2*a+3*b as myexpr;';
</programlisting>
</para></listitem>
<listitem><para>host, port - remote <filename>searchd</filename> host name
and TCP port, respectively:
<programlisting>
... WHERE query='test;host=sphinx-test.loc;port=7312;';
</programlisting>
</para></listitem>
<listitem><para>ranker - a ranking function to use with "extended" matching mode,
as in <link linkend="api-func-setrankingmode">SetRankingMode()</link> API call
(the only mode that supports full query syntax).
Known values are "proximity_bm25", "bm25", "none", "wordcount", "proximity",
"matchany", "fieldmask", "sph04" (starting with 1.10-beta),
"expr:EXPRESSION" (starting with 2.0.4-release)
syntax to support expression-based ranker (where EXPRESSION should be replaced
with your specific ranking formula), and "export:EXPRESSION" (starting with 2.1.1-beta):
<programlisting>
... WHERE query='test;mode=extended;ranker=bm25;';
... WHERE query='test;mode=extended;ranker=expr:sum(lcs);';
</programlisting>
The "export" ranker works exactly like ranker=expr, but it stores the per-document
factor values, while ranker=expr discards them after computing the final WEIGHT() value.
Note that ranker=export is meant to be used but rarely, only to train a ML (machine learning)
function or to define your own ranking function by hand, and never in actual production. When using
this ranker, you'll probably want to examine the output of the RANKFACTORS() function (added in
version 2.1.1-beta) that produces a string with all the field level factors for each document.
</para>
<programlisting>
SELECT *, WEIGHT(), RANKFACTORS()
FROM myindex
WHERE MATCH('dog')
OPTION ranker=export('100*bm25')
</programlisting>
<para>would produce something like</para>
<programlisting>
*************************** 1. row ***************************
id: 555617
published: 1110067331
channel_id: 1059819
title: 7
content: 428
weight(): 69900
rankfactors(): bm25=699, bm25a=0.666478, field_mask=2,
doc_word_count=1, field1=(lcs=1, hit_count=4, word_count=1,
tf_idf=1.038127, min_idf=0.259532, max_idf=0.259532, sum_idf=0.259532,
min_hit_pos=120, min_best_span_pos=120, exact_hit=0,
max_window_hits=1), word1=(tf=4, idf=0.259532)
*************************** 2. row ***************************
id: 555313
published: 1108438365
channel_id: 1058561
title: 8
content: 249
weight(): 68500
rankfactors(): bm25=685, bm25a=0.675213, field_mask=3,
doc_word_count=1, field0=(lcs=1, hit_count=1, word_count=1,
tf_idf=0.259532, min_idf=0.259532, max_idf=0.259532, sum_idf=0.259532,
min_hit_pos=8, min_best_span_pos=8, exact_hit=0, max_window_hits=1),
field1=(lcs=1, hit_count=2, word_count=1, tf_idf=0.519063,
min_idf=0.259532, max_idf=0.259532, sum_idf=0.259532, min_hit_pos=36,
min_best_span_pos=36, exact_hit=0, max_window_hits=1), word1=(tf=3,
idf=0.259532)
</programlisting>
</listitem>
<listitem><para>geoanchor - geodistance anchor, as in
<link linkend="api-func-setgeoanchor">SetGeoAnchor()</link> API call.
Takes 4 parameters which are latitude and longitude attribute names,
and anchor point coordinates respectively:
<programlisting>
... WHERE query='test;geoanchor=latattr,lonattr,0.123,0.456';
</programlisting>
</para></listitem>
</itemizedlist>
</para>
<para>
One <emphasis role="bold">very important</emphasis> note that it is
<emphasis role="bold">much</emphasis> more efficient to allow Sphinx
to perform sorting, filtering and slicing the result set than to raise
max matches count and use WHERE, ORDER BY and LIMIT clauses on MySQL
side. This is for two reasons. First, Sphinx does a number of
optimizations and performs better than MySQL on these tasks.
Second, less data would need to be packed by searchd, transferred
and unpacked by SphinxSE.
</para>
<para>
Starting with version 0.9.9-rc1, additional query info besides the result set can be
retrieved with the <code>SHOW ENGINE SPHINX STATUS</code> statement:
<programlisting>
mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+-------------------------------------------------+
| Type   | Name  | Status                                          |
+--------+-------+-------------------------------------------------+
| SPHINX | stats | total: 25, total found: 25, time: 126, words: 2 |
| SPHINX | words | sphinx:591:1256 soft:11076:15945                |
+--------+-------+-------------------------------------------------+
2 rows in set (0.00 sec)
</programlisting>
This information can also be accessed through status variables. Note
that this method does not require super-user privileges.
<programlisting>
mysql> SHOW STATUS LIKE 'sphinx_%';
+--------------------+----------------------------------+
| Variable_name      | Value                            |
+--------------------+----------------------------------+
| sphinx_total       | 25                               |
| sphinx_total_found | 25                               |
| sphinx_time        | 126                              |
| sphinx_word_count  | 2                                |
| sphinx_words       | sphinx:591:1256 soft:11076:15945 |
+--------------------+----------------------------------+
5 rows in set (0.00 sec)
</programlisting>
</para>
<para>
You can perform JOINs between a SphinxSE search table and tables using
other engines. Here's an example with "documents" from example.sql:
<programlisting>
mysql> SELECT content, date_added FROM test.documents docs
-> JOIN t1 ON (docs.id=t1.id)
-> WHERE query="one document;mode=any";
+-------------------------------------+---------------------+
| content                             | date_added          |
+-------------------------------------+---------------------+
| this is my test document number two | 2006-06-17 14:04:28 |
| this is my test document number one | 2006-06-17 14:04:28 |
+-------------------------------------+---------------------+
2 rows in set (0.00 sec)
mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+---------------------------------------------+
| Type   | Name  | Status                                      |
+--------+-------+---------------------------------------------+
| SPHINX | stats | total: 2, total found: 2, time: 0, words: 2 |
| SPHINX | words | one:1:2 document:2:2                        |
+--------+-------+---------------------------------------------+
2 rows in set (0.00 sec)
</programlisting>
</para>
</sect1>
<sect1 id="sphinxse-snippets"><title>Building snippets (excerpts) via MySQL</title>
<para>
Starting with version 0.9.9-rc2, SphinxSE also includes a UDF function
that lets you create snippets through MySQL. The functionality is fully
similar to the <link linkend="api-func-buildexcerpts">BuildExcerpts()</link>
API call but accessible through MySQL+SphinxSE.
</para>
<para>
The binary that provides the UDF is named <filename>sphinx.so</filename>
and should be automatically built and installed to proper location
along with SphinxSE itself. If it does not get installed automatically
for some reason, look for <filename>sphinx.so</filename> in the build
directory and copy it to the plugins directory of your MySQL instance.
After that, register the UDF using the following statement:
<programlisting>
CREATE FUNCTION sphinx_snippets RETURNS STRING SONAME 'sphinx.so';
</programlisting>
</para>
<para>
The function name <emphasis>must</emphasis> be sphinx_snippets;
you can not use an arbitrary name. Function arguments are as follows:
</para>
<para>
<b>Prototype:</b> function sphinx_snippets ( document, index, words, [options] );
</para>
<para>
Document and words arguments can be either strings or table columns.
Options must be specified like this: <code>'value' AS option_name</code>.
For a list of supported options, refer to
<link linkend="api-func-buildexcerpts">BuildExcerprts()</link> API call.
The only UDF-specific additional option is named <code>'sphinx'</code>
and lets you specify searchd location (host and port).
</para>
<para>
Usage examples:
<programlisting>
SELECT sphinx_snippets('hello world doc', 'main', 'world',
'sphinx://192.168.1.1/' AS sphinx, true AS exact_phrase,
'[b]' AS before_match, '[/b]' AS after_match)
FROM documents;
SELECT title, sphinx_snippets(text, 'index', 'mysql php') AS text
FROM sphinx, documents
WHERE query='mysql php' AND sphinx.id=documents.id;
</programlisting>
</para>
</sect1>
</chapter>
<chapter id="reporting-bugs"><title>Reporting bugs</title>
<para>
Unfortunately, Sphinx is not yet 100% bug free (even though we're working hard
towards that), so you might occasionally run into some issues.
</para>
<para>
Reporting as much as possible about each bug is very important,
because to fix it, we need to be able either to reproduce and fix the bug,
or to deduce what's causing it from the information that you provide.
So here are some instructions on how to do that.
</para>
<bridgehead>Bug-tracker</bridgehead>
<para>Nothing special to say here. Here is the
<ulink url="http://sphinxsearch.com/bugs">link</ulink>. Create a new
ticket and describe your bug in detail so that both you and the developers can
save time.</para>
<bridgehead>Crashes</bridgehead>
<para>In case of crashes we can sometimes get enough info to fix them from
the backtrace.</para>
<para>Sphinx tries to write a crash backtrace to its log file. It may look like
this:
<programlisting>
./indexer(_Z12sphBacktraceib+0x2d6)[0x5d337e]
./indexer(_Z7sigsegvi+0xbc)[0x4ce26a]
/lib64/libpthread.so.0[0x3f75a0dd40]
/lib64/libc.so.6(fwrite+0x34)[0x3f74e5f564]
./indexer(_ZN27CSphCharsetDefinitionParser5ParseEPKcR10CSphVectorI14CSphRemapRange16CSphVe
ctorPolicyIS3_EE+0x5b)[0x51701b]
./indexer(_ZN13ISphTokenizer14SetCaseFoldingEPKcR10CSphString+0x62)[0x517e4c]
./indexer(_ZN17CSphTokenizerBase14SetCaseFoldingEPKcR10CSphString+0xbd)[0x518283]
./indexer(_ZN18CSphTokenizer_SBCSILb0EEC1Ev+0x3f)[0x5b312b]
./indexer(_Z22sphCreateSBCSTokenizerv+0x20)[0x51835c]
./indexer(_ZN13ISphTokenizer6CreateERK21CSphTokenizerSettingsPK17CSphEmbeddedFilesR10CSphS
tring+0x47)[0x5183d7]
./indexer(_Z7DoIndexRK17CSphConfigSectionPKcRK17SmallStringHash_TIS_EbP8_IO_FILE+0x494)[0x
4d31c8]
./indexer(main+0x1a17)[0x4d6719]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3f74e1d8a4]
./indexer(__gxx_personality_v0+0x231)[0x4cd779]
</programlisting>
This is an example of a good backtrace: we can see mangled function names
here.</para>
<para>But sometimes backtrace may look like this:
<programlisting>
/opt/piler/bin/indexer[0x4c4919]
/opt/piler/bin/indexer[0x405cf0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7fc659cb6cb0]
/opt/piler/bin/indexer[0x4237fd]
/opt/piler/bin/indexer[0x491de6]
/opt/piler/bin/indexer[0x451704]
/opt/piler/bin/indexer[0x40861a]
/opt/piler/bin/indexer[0x40442c]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fc6588aa76d]
/opt/piler/bin/indexer[0x405b89]
</programlisting>
Developers can get nothing useful from those cryptic numbers. They're
ordinary humans and want to see function names. To help them, you need to
provide the symbols (function and variable names). If you've installed Sphinx by
building from the sources, run the following command over your binary:
<programlisting>
nm -n indexer > indexer.sym
</programlisting>
Attach this file to the bug report along with the backtrace. You should however ensure
that the binary is not stripped. Our official binary packages should be fine.
(That, or we have the symbols stored.) However, if you manually build Sphinx
from the source tarball, do not run the <filename>strip</filename> utility on that
binary, and/or do not let your build/packaging system do that!</para>
<bridgehead>Uploading your data</bridgehead>
<para>To fix your bug, developers often need to reproduce it on their machines.
To do this they need your sphinx.conf, index files, binlog (if present), and
sometimes the data to index (like SQL tables or XMLpipe2 data files) and the queries.
</para>
<para>
Attach your data to the ticket. In case it's too big to attach, ask the developers
and they will give you an address for a write-only FTP server created exactly for such purposes.
</para>
</chapter>
<chapter id="conf-reference"><title><filename>sphinx.conf</filename> options reference</title>
<sect1 id="confgroup-source"><title>Data source configuration options</title>
<sect2 id="conf-source-type"><title>type</title>
<para>
Data source type.
Mandatory, no default value.
Known types are <option>mysql</option>, <option>pgsql</option>, <option>mssql</option>,
<option>xmlpipe2</option>, <option>tsvpipe</option>, <option>csvpipe</option> and <option>odbc</option>.
</para>
<para>
All other per-source options depend on source type selected by this option.
Names of the options used for SQL sources (ie. MySQL, PostgreSQL, MS SQL) start with "sql_";
names of the ones used for the xmlpipe2, tsvpipe and csvpipe sources start with "xmlpipe_",
"tsvpipe_" and "csvpipe_" correspondingly.
All source types are conditional; they might or might
not be supported depending on your build settings, installed client libraries, etc.
<option>mssql</option> type is currently only available on Windows.
<option>odbc</option> type is available both on Windows natively and on
Linux through <ulink url="http://www.unixodbc.org/">UnixODBC library</ulink>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
type = mysql
</programlisting>
</sect2>
<sect2 id="conf-sql-host"><title>sql_host</title>
<para>
SQL server host to connect to.
Mandatory, no default value.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
In the simplest case when Sphinx resides on the same host with your MySQL
or PostgreSQL installation, you would simply specify "localhost". Note that
MySQL client library chooses whether to connect over TCP/IP or over UNIX
socket based on the host name. Specifically "localhost" will force it
to use UNIX socket (this is the default and generally recommended mode)
and "127.0.0.1" will force TCP/IP usage. Refer to
<ulink url="http://dev.mysql.com/doc/refman/5.0/en/mysql-real-connect.html">MySQL manual</ulink>
for more details.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_host = localhost
</programlisting>
</sect2>
<sect2 id="conf-sql-port"><title>sql_port</title>
<para>
SQL server IP port to connect to.
Optional, default is 3306 for <option>mysql</option> source type and 5432 for <option>pgsql</option> type.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Note that it depends on <link linkend="conf-sql-host">sql_host</link> setting whether this value will actually be used.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_port = 3306
</programlisting>
</sect2>
<sect2 id="conf-sql-user"><title>sql_user</title>
<para>
SQL user to use when connecting to <link linkend="conf-sql-host">sql_host</link>.
Mandatory, no default value.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_user = test
</programlisting>
</sect2>
<sect2 id="conf-sql-pass"><title>sql_pass</title>
<para>
SQL user password to use when connecting to <link linkend="conf-sql-host">sql_host</link>.
Mandatory, no default value.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_pass = mysecretpassword
</programlisting>
</sect2>
<sect2 id="conf-sql-db"><title>sql_db</title>
<para>
SQL database (in MySQL terms) to use after connecting, and to perform further queries within.
Mandatory, no default value.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_db = test
</programlisting>
</sect2>
<sect2 id="conf-sql-sock"><title>sql_sock</title>
<para>
UNIX socket name to connect to for local SQL servers.
Optional, default value is empty (use client library default settings).
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
On Linux, it would typically be <filename>/var/lib/mysql/mysql.sock</filename>.
On FreeBSD, it would typically be <filename>/tmp/mysql.sock</filename>.
Note that it depends on <link linkend="conf-sql-host">sql_host</link> setting whether this value will actually be used.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_sock = /tmp/mysql.sock
</programlisting>
</sect2>
<sect2 id="conf-mysql-connect-flags"><title>mysql_connect_flags</title>
<para>
MySQL client connection flags.
Optional, default value is 0 (do not set any flags).
Applies to <option>mysql</option> source type only.
</para>
<para>
This option must contain an integer value with the sum of the flags.
The value will be passed to <ulink url="http://dev.mysql.com/doc/refman/5.0/en/mysql-real-connect.html">mysql_real_connect()</ulink> verbatim.
The flags are enumerated in mysql_com.h include file.
Flags that are especially interesting in regard to indexing, with their respective values, are as follows:
<itemizedlist>
<listitem><para>CLIENT_COMPRESS = 32; can use compression protocol</para></listitem>
<listitem><para>CLIENT_SSL = 2048; switch to SSL after handshake</para></listitem>
<listitem><para>CLIENT_SECURE_CONNECTION = 32768; new 4.1 authentication</para></listitem>
</itemizedlist>
For instance, you can specify 2080 (2048+32) to use both compression and SSL,
or 32768 to use new authentication only. Initially, this option was introduced
to be able to use compression when the <filename>indexer</filename>
and <filename>mysqld</filename> are on different hosts. Compression on 1 Gbps
links is most likely to hurt indexing time though it reduces network traffic,
both in theory and in practice. However, enabling compression on 100 Mbps links
may improve indexing time significantly (improvements of up to 20-30% of the
total indexing time have been reported). Your mileage may vary.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mysql_connect_flags = 32 # enable compression
</programlisting>
</sect2>
<sect2 id="conf-mysql-ssl"><title>mysql_ssl_cert, mysql_ssl_key, mysql_ssl_ca</title>
<para>
SSL certificate settings to use for connecting to MySQL server.
Optional, default values are empty strings (do not use SSL).
Applies to <option>mysql</option> source type only.
</para>
<para>
These directives let you set up secure SSL connection between
<filename>indexer</filename> and MySQL. The details on creating
the certificates and setting up MySQL server can be found in
MySQL documentation.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mysql_ssl_cert = /etc/ssl/client-cert.pem
mysql_ssl_key = /etc/ssl/client-key.pem
mysql_ssl_ca = /etc/ssl/cacert.pem
</programlisting>
</sect2>
<sect2 id="conf-odbc-dsn"><title>odbc_dsn</title>
<para>
ODBC DSN to connect to.
Mandatory, no default value.
Applies to <option>odbc</option> source type only.
</para>
<para>
ODBC DSN (Data Source Name) specifies the credentials (host, user, password, etc)
to use when connecting to ODBC data source. The format depends on specific ODBC
driver used.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
odbc_dsn = Driver={Oracle ODBC Driver};Dbq=myDBName;Uid=myUsername;Pwd=myPassword
</programlisting>
</sect2>
<sect2 id="conf-sql-query-pre"><title>sql_query_pre</title>
<para>
Pre-fetch query, or pre-query.
Multi-value, optional, default is empty list of queries.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
Multi-value means that you can specify several pre-queries.
They are executed before <link linkend="conf-sql-query">the main fetch query</link>,
and they will be executed exactly in order of appearance in the configuration file.
Pre-query results are ignored.
</para>
<para>
Pre-queries are useful in a lot of ways. They are used to setup encoding,
mark records that are going to be indexed, update internal counters,
set various per-connection SQL server options and variables, and so on.
</para>
<para>
Perhaps the most frequent pre-query usage is to specify the encoding
that the server will use for the rows it returns. Note that Sphinx accepts
only UTF-8 texts.
Two MySQL specific examples of setting the encoding are:
<programlisting>
sql_query_pre = SET CHARACTER_SET_RESULTS=utf8
sql_query_pre = SET NAMES utf8
</programlisting>
Also specific to MySQL sources, it is useful to disable query cache
(for indexer connection only) in pre-query, because indexing queries
are not going to be re-run frequently anyway, and there's no sense
in caching their results. That could be achieved with:
<programlisting>
sql_query_pre = SET SESSION query_cache_type=OFF
</programlisting>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type=OFF
</programlisting>
</sect2>
<sect2 id="conf-sql-query"><title>sql_query</title>
<para>
Main document fetch query.
Mandatory, no default value.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
There can be only one main query.
This is the query which is used to retrieve documents from SQL server.
You can specify up to 32 full-text fields (formally, up to SPH_MAX_FIELDS from sphinx.h), and an arbitrary amount of attributes.
All of the columns that are neither document ID (the first one) nor attributes will be full-text indexed.
</para>
<para>
Document ID <emphasis role="bold">MUST</emphasis> be the very first field,
and it <emphasis role="bold">MUST BE UNIQUE UNSIGNED POSITIVE (NON-ZERO, NON-NEGATIVE) INTEGER NUMBER</emphasis>.
It can be either 32-bit or 64-bit, depending on how you built Sphinx;
by default it is built with 32-bit ID support, but the <option>--enable-id64</option> option
to <filename>configure</filename> allows building with 64-bit document and word ID support.
<!-- TODO: add more on zero, negative, duplicate ID handling -->
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query = \
SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, \
title, content \
FROM documents
</programlisting>
</sect2>
<sect2 id="conf-sql-joined-field"><title>sql_joined_field</title>
<para>
Joined/payload field fetch query.
Multi-value, optional, default is empty list of queries.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
<option>sql_joined_field</option> lets you use two different features:
joined fields, and payloads (payload fields). Its syntax is as follows:
<programlisting>
sql_joined_field = FIELD-NAME 'from' ( 'query' | 'payload-query' \
| 'ranged-query' ); QUERY [ ; RANGE-QUERY ]
</programlisting>
where
<itemizedlist>
<listitem><para>FIELD-NAME is a joined/payload field name;</para></listitem>
<listitem><para>QUERY is an SQL query that must fetch values to index.</para></listitem>
<listitem><para>RANGE-QUERY is an optional SQL query that fetches a range
of values to index. (Added in version 2.0.1-beta.)</para></listitem>
</itemizedlist>
</para>
<para>
<b>Joined fields</b> let you avoid JOIN and/or GROUP_CONCAT statements in the main
document fetch query (sql_query). This can be useful when SQL-side JOIN is slow,
or needs to be offloaded on Sphinx side, or simply to emulate MySQL-specific
GROUP_CONCAT functionality in case your database server does not support it.
</para>
<para>
The query must return exactly 2 columns: document ID, and text to append
to a joined field. Document IDs can be duplicate, but they <b>must</b> be
in ascending order. All the text rows fetched for a given ID will be
concatenated together, and the concatenation result will be indexed
as the entire contents of a joined field. Rows will be concatenated
in the order returned from the query, and separating whitespace
will be inserted between them. For instance, if joined field query
returns the following rows:
<programlisting>
( 1, 'red' )
( 1, 'right' )
( 1, 'hand' )
( 2, 'mysql' )
( 2, 'sphinx' )
</programlisting>
then the indexing results would be equivalent to that of adding
a new text field with a value of 'red right hand' to document 1 and
'mysql sphinx' to document 2.
</para>
<para>
Joined fields differ from regular text fields only in the way they are
indexed; there are no other differences between them.
</para>
<para>
Starting with 2.0.1-beta, <b>ranged queries</b> can be used when
a single query is not efficient enough or does not work because of
the database driver limitations. It works similar to the ranged
queries in the main indexing loop, see <xref linkend="ranged-queries"/>.
The range will be queried for and fetched upfront once,
then multiple queries with different <code>$start</code>
and <code>$end</code> substitutions will be run to fetch
the actual data.
</para>
<para>
<b>Payloads</b> let you create a special field in which, instead of
keyword positions, so-called user payloads are stored. Payloads are
custom integer values attached to every keyword. They can then be used
in search time to affect the ranking.
</para>
<para>
The payload query must return exactly 3 columns: document ID; keyword;
and integer payload value. Document IDs can be duplicate, but they <b>must</b> be
in ascending order. Payloads must be unsigned integers within 24-bit range,
ie. from 0 to 16777215. For reference, payloads are currently internally
stored as in-field keyword positions, but that is not guaranteed
and might change in the future.
</para>
<para>
Currently, the only method to account for payloads is to use
SPH_RANK_PROXIMITY_BM25 ranker. On indexes with payload fields,
it will automatically switch to a variant that matches keywords
in those fields, computes a sum of matched payloads multiplied
by field weights, and adds that sum to the final rank.
</para>
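<para>
A payload field is declared with the same syntax, using the 'payload-query'
source type. A sketch (the field, table and column names are illustrative):
<programlisting>
sql_joined_field = tagpay from payload-query; \
	SELECT docid, CONCAT('tag',tagid), tagweight FROM tags ORDER BY docid ASC
</programlisting>
</para>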
<bridgehead>Example:</bridgehead>
<programlisting>
sql_joined_field = \
tagstext from query; \
SELECT docid, CONCAT('tag',tagid) FROM tags ORDER BY docid ASC
sql_joined_field = bigint tag from ranged-query; \
SELECT id, tag FROM tags WHERE id>=$start AND id<=$end ORDER BY id ASC; \
SELECT MIN(id), MAX(id) FROM tags
</programlisting>
</sect2>
<sect2 id="conf-sql-query-range"><title>sql_query_range</title>
<para>
Range query setup.
Optional, default is empty.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
Setting this option enables ranged document fetch queries (see <xref linkend="ranged-queries"/>).
Ranged queries are useful to avoid notorious MyISAM table locks when indexing
lots of data. (They also help with other less notorious issues, such as reduced
performance caused by big result sets, or additional resources consumed by InnoDB
to serialize big read transactions.)
</para>
<para>
The query specified in this option must fetch min and max document IDs that will be
used as range boundaries. It must return exactly two integer fields, min ID first
and max ID second; the field names are ignored.
</para>
<para>
When ranged queries are enabled, <link linkend="conf-sql-query">sql_query</link>
will be required to contain <option>$start</option> and <option>$end</option> macros
(because it obviously would be a mistake to index the whole table many times over).
Note that the intervals specified by <option>$start</option>..<option>$end</option>
will not overlap, so you should <b>not</b> remove document IDs that are
exactly equal to <option>$start</option> or <option>$end</option> from your query.
The example in <xref linkend="ranged-queries"/> illustrates that; note how it
uses greater-or-equal and less-or-equal comparisons.
</para>
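<para>
A matching pair of queries with the required macros could look like this
sketch (table and column names are illustrative):
<programlisting>
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
sql_query = \
	SELECT id, title, content FROM documents \
	WHERE id>=$start AND id<=$end
</programlisting>
</para>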
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
</programlisting>
</sect2>
<sect2 id="conf-sql-range-step"><title>sql_range_step</title>
<para>
Range query step.
Optional, default is 1024.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
Only used when <link linkend="ranged-queries">ranged queries</link> are enabled.
The full document ID interval fetched by <link linkend="conf-sql-query-range">sql_query_range</link>
will be walked in steps of this size. For example, if the min and max IDs fetched
are 12 and 3456 respectively, and the step is 1000, indexer will call
<link linkend="conf-sql-query">sql_query</link> several times with the
following substitutions:
<itemizedlist>
<listitem><para>$start=12, $end=1011</para></listitem>
<listitem><para>$start=1012, $end=2011</para></listitem>
<listitem><para>$start=2012, $end=3011</para></listitem>
<listitem><para>$start=3012, $end=3456</para></listitem>
</itemizedlist>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_range_step = 1000
</programlisting>
</sect2>
<sect2 id="conf-sql-query-killlist"><title>sql_query_killlist</title>
<para>
Kill-list query.
Optional, default is empty (no query).
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 0.9.9-rc1.
</para>
<para>
This query is expected to return a number of 1-column rows, each containing
just the document ID. The returned document IDs are stored within an index.
The kill-list for a given index suppresses results from <emphasis>other</emphasis>
indexes, depending on index order in the query. The intended use is to help
implement deletions and updates on existing indexes without rebuilding
(or actually even touching) them, and especially to fight the phantom results
problem.
</para>
<para>
Let us dissect an example. Assume we have two indexes, 'main' and 'delta'.
Assume that documents 2, 3, and 5 were deleted since the last reindex of 'main',
and documents 7 and 11 were updated (ie. their text contents were changed).
Assume that the keyword 'test' occurred in all these mentioned documents
when we were indexing 'main'; still occurs in document 7 as we index 'delta';
but does not occur in document 11 any more. We now reindex 'delta' and then
search through both these indexes in proper (least to most recent) order:
<programlisting>
$res = $cl->Query ( "test", "main delta" );
</programlisting>
</para>
<para>
First, we need to properly handle deletions. The result set should not
contain documents 2, 3, or 5. Second, we also need to avoid phantom results.
Unless we do something about it, document 11 <emphasis>will</emphasis>
appear in search results! It will be found in 'main' (but not 'delta').
And it will make it to the final result set unless something stops it.
</para>
<para>
Kill-list, or K-list for short, is that something. Kill-list attached
to 'delta' will suppress the specified rows from <b>all</b> the preceding
indexes, in this case just 'main'. So to get the expected results,
we should put all the updated <emphasis>and</emphasis> deleted
document IDs into it.
</para>
<para>
Note that in a distributed index setup, K-lists are <b>local
to every node in the cluster</b>. They are <b>not</b> transmitted
over the network when sending queries. (Because that might be too much
of an impact when the K-list is huge.) You will need to set up
separate per-server K-lists in that case.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query_killlist = \
SELECT id FROM documents WHERE updated_ts>=@last_reindex UNION \
SELECT id FROM documents_deleted WHERE deleted_ts>=@last_reindex
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-uint"><title>sql_attr_uint</title>
<para>
Unsigned integer <link linkend="attributes">attribute</link> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
The column value should fit into 32-bit unsigned integer range.
Values outside this range will be accepted but wrapped around.
For instance, -1 will be wrapped around to 2^32-1 or 4,294,967,295.
</para>
<para>
You can specify bit count for integer attributes by appending
':BITCOUNT' to attribute name (see example below). Attributes with
less than default 32-bit size, or bitfields, perform slower.
But they require less RAM when using <link linkend="conf-docinfo">extern storage</link>:
such bitfields are packed together in 32-bit chunks in <filename>.spa</filename>
attribute data file. Bit size settings are ignored if using
<link linkend="conf-docinfo">inline storage</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_uint = group_id
sql_attr_uint = forum_id:9 # 9 bits for forum_id
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-bool"><title>sql_attr_bool</title>
<para>
Boolean <link linkend="attributes">attribute</link> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Equivalent to <link linkend="conf-sql-attr-uint">sql_attr_uint</link> declaration with a bit count of 1.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_bool = is_deleted # will be packed to 1 bit
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-bigint"><title>sql_attr_bigint</title>
<para>
64-bit signed integer <link linkend="attributes">attribute</link> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Note that unlike <link linkend="conf-sql-attr-uint">sql_attr_uint</link>,
these values are <b>signed</b>.
Introduced in version 0.9.9-rc1.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_bigint = my_bigint_id
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-timestamp"><title>sql_attr_timestamp</title>
<para>
UNIX timestamp <link linkend="attributes">attribute</link> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
Timestamps can store date and time in the range of Jan 01, 1970
to Jan 19, 2038 with a precision of one second.
The expected column value should be a timestamp in UNIX format, ie. 32-bit unsigned
integer number of seconds elapsed since midnight, January 01, 1970, GMT.
Timestamps are internally stored and handled as integers everywhere.
But in addition to working with timestamps as integers, it's also legal
to use them along with different date-based functions, such as time segments
sorting mode, or day/week/month/year extraction for GROUP BY.
</para>
<para>
Note that DATE or DATETIME column types in MySQL can <b>not</b> be directly
used as timestamp attributes in Sphinx; you need to explicitly convert such
columns using UNIX_TIMESTAMP function (if data is in range).
</para>
<para>
Note that timestamps can not represent dates before January 01, 1970,
and UNIX_TIMESTAMP() in MySQL will not return anything expected for such dates.
If you only need to work with dates, not times, consider the TO_DAYS()
function in MySQL instead.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
# sql_query = ... UNIX_TIMESTAMP(added_datetime) AS added_ts ...
sql_attr_timestamp = added_ts
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-float"><title>sql_attr_float</title>
<para>
Floating point <link linkend="attributes">attribute</link> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
The values will be stored in single precision, 32-bit IEEE 754 format.
Represented range is approximately from 1e-38 to 1e+38. The amount
of decimal digits that can be stored precisely is approximately 7.
One important usage of the float attributes is storing latitude
and longitude values (in radians), for further usage in query-time
geosphere distance calculations.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_float = lat_radians
sql_attr_float = long_radians
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-multi"><title>sql_attr_multi</title>
<para>
<link linkend="mva">Multi-valued attribute</link> (MVA) declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
Plain attributes only allow attaching a single value to each document.
However, there are cases (such as tags or categories) when it is
desirable to attach multiple values of the same attribute and be able
to apply filtering or grouping to the value lists.
</para>
<para>
The declaration format is as follows (backslashes are for clarity only;
everything can be declared in a single line as well):
<programlisting>
sql_attr_multi = ATTR-TYPE ATTR-NAME 'from' SOURCE-TYPE \
[;QUERY] \
[;RANGE-QUERY]
</programlisting>
where
<itemizedlist>
<listitem><para>ATTR-TYPE is 'uint', 'bigint' or 'timestamp'</para></listitem>
<listitem><para>SOURCE-TYPE is 'field', 'query', or 'ranged-query'</para></listitem>
<listitem><para>QUERY is SQL query used to fetch all ( docid, attrvalue ) pairs</para></listitem>
<listitem><para>RANGE-QUERY is SQL query used to fetch min and max ID values, similar to 'sql_query_range'</para></listitem>
</itemizedlist>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_multi = uint tag from query; SELECT id, tag FROM tags
sql_attr_multi = bigint tag from ranged-query; \
SELECT id, tag FROM tags WHERE id>=$start AND id<=$end; \
SELECT MIN(id), MAX(id) FROM tags
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-string"><title>sql_attr_string</title>
<para>
String attribute declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 1.10-beta.
</para>
<para>
String attributes can store arbitrary strings attached to every document.
There's a fixed size limit of 4 MB per value. Also, <filename>searchd</filename>
will currently cache all the values in RAM, which is an additional implicit limit.
</para>
<para>
Starting from 2.0.1-beta, string attributes can be used for sorting and
grouping (ORDER BY, GROUP BY, WITHIN GROUP ORDER BY). Note that attributes
declared using <option>sql_attr_string</option> will <b>not</b> be full-text
indexed; you can use <link linkend="conf-sql-field-string">sql_field_string</link>
directive for that.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_string = title # will be stored but will not be indexed
</programlisting>
</sect2>
<sect2 id="conf-sql-attr-json"><title>sql_attr_json</title>
<para>
JSON attribute declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 2.1.1-beta.
</para>
<para>
When indexing JSON attributes, Sphinx expects a text field
with JSON formatted data. As of 2.2.1-beta, JSON attributes support arbitrary
JSON data with no limitation on nesting levels or types.
<programlisting>
{
    "id": 1,
    "gid": 2,
    "title": "some title",
    "tags":
    [
        "tag1",
        "tag2",
        "tag3",
        {
            "one": "two",
            "three": [4, 5]
        }
    ]
}
</programlisting>
These attributes allow Sphinx to work with documents without a fixed set of
attribute columns. When you filter on a key of a JSON attribute, documents
that don't include the key will simply be ignored.
</para>
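<para>
For instance, a SphinxQL sketch that filters on a JSON key (assuming the
attribute is named "properties" as in the example below, and documents carry
JSON like the sample above; the index name is illustrative):
<programlisting>
SELECT id FROM myindex WHERE properties.gid=2;
</programlisting>
</para>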
<para>
You can read more on JSON attributes in <ulink
url="http://sphinxsearch.com/blog/2013/08/08/full-json-support-in-trunk/">
http://sphinxsearch.com/blog/2013/08/08/full-json-support-in-trunk/</ulink>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_attr_json = properties
</programlisting>
</sect2>
<sect2 id="conf-sql-column-buffers"><title>sql_column_buffers</title>
<para>
Per-column buffer sizes.
Optional, default is empty (deduce the sizes automatically).
Applies to <option>odbc</option>, <option>mssql</option> source types only.
Introduced in version 2.0.1-beta.
</para>
<para>
ODBC and MS SQL drivers sometimes cannot return the maximum
actual column size to be expected. For instance, NVARCHAR(MAX) columns
always report their length as 2147483647 bytes to
<filename>indexer</filename> even though the actually used length
is likely considerably less. However, the receiving buffers still
need to be allocated upfront, and their sizes have to be determined.
When the driver does not report the column length at all, Sphinx
allocates default 1 KB buffers for each non-char column, and 1 MB
buffers for each char column. Driver-reported column length
also gets clamped by an upper limit of 8 MB, so in case the
driver reports (almost) a 2 GB column length, it will be clamped
and an 8 MB buffer will be allocated instead for that column.
These hard-coded limits can be overridden using the
<code>sql_column_buffers</code> directive, either in order
to save memory on actually shorter columns, or to overcome
the 8 MB limit on actually longer columns. The directive value
must be a comma-separated list of selected column names and sizes:
<programlisting>
sql_column_buffers = <colname>=<size>[K|M] [, ...]
</programlisting>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query = SELECT id, mytitle, mycontent FROM documents
sql_column_buffers = mytitle=64K, mycontent=10M
</programlisting>
</sect2>
<sect2 id="conf-sql-field-string"><title>sql_field_string</title>
<para>
Combined string attribute and full-text field declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 1.10-beta.
</para>
<para>
<link linkend="conf-sql-attr-string">sql_attr_string</link> only stores the column
value but does not full-text index it. In some cases it might be desired to both full-text
index the column and store it as attribute. <option>sql_field_string</option> lets you do
exactly that. Both the field and the attribute will be named the same.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_field_string = title # will be both indexed and stored
</programlisting>
</sect2>
<sect2 id="conf-sql-file-field"><title>sql_file_field</title>
<para>
File based field declaration.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 1.10-beta.
</para>
<para>
This directive makes <filename>indexer</filename> interpret field contents
as a file name, and load and index the referred file. Files larger than
<link linkend="conf-max-file-field-buffer">max_file_field_buffer</link>
in size are skipped. Any errors during file loading (IO errors, exceeded
limits, etc) will be reported as indexing warnings and will <b>not</b>
terminate indexing early. No content will be indexed for such files.
</para>
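<para>
A minimal sketch (the table and column names are hypothetical): the main query
returns a path column, and the directive turns it into a field indexed from
the file contents:
<programlisting>
sql_query      = SELECT id, title, body_path FROM documents
sql_file_field = body_path # index the contents of the file named in body_path
</programlisting>
</para>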
<bridgehead>Example:</bridgehead>
<programlisting>
sql_file_field = my_file_path # load and index files referred to by my_file_path
</programlisting>
</sect2>
<sect2 id="conf-sql-query-post"><title>sql_query_post</title>
<para>
Post-fetch query.
Optional, default value is empty.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
This query is executed immediately after <link linkend="conf-sql-query">sql_query</link>
completes successfully. When the post-fetch query produces errors,
they are reported as warnings, but indexing is <b>not</b> terminated.
Its result set is ignored. Note that indexing is <b>not</b> yet completed
at the point when this query gets executed, and further indexing still may fail.
Therefore, any permanent updates should not be done from here.
For instance, updates on a helper table that permanently change
the last successfully indexed ID should not be run from the post-fetch
query; they should be run from the <link linkend="conf-sql-query-post-index">post-index query</link> instead.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query_post = DROP TABLE my_tmp_table
</programlisting>
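<para>
For context, such a temporary table would typically be created in a pre-query,
so the complete (hypothetical) pattern looks like this:
<programlisting>
sql_query_pre  = CREATE TEMPORARY TABLE my_tmp_table AS \
    SELECT id, title, body FROM documents WHERE status='published'
sql_query      = SELECT id, title, body FROM my_tmp_table
sql_query_post = DROP TABLE my_tmp_table
</programlisting>
</para>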
</sect2>
<sect2 id="conf-sql-query-post-index"><title>sql_query_post_index</title>
<para>
Post-index query.
Optional, default value is empty.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
This query is executed when indexing is fully and successfully completed.
If this query produces errors, they are reported as warnings,
but indexing is <b>not</b> terminated. Its result set is ignored.
The <code>$maxid</code> macro can be used in its text; it will be
expanded to the maximum document ID which was actually fetched
from the database during indexing. If no documents were indexed,
$maxid will be expanded to 0.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_query_post_index = REPLACE INTO counters ( id, val ) \
VALUES ( 'max_indexed_id', $maxid )
</programlisting>
</sect2>
<sect2 id="conf-sql-ranged-throttle"><title>sql_ranged_throttle</title>
<para>
Ranged query throttling period, in milliseconds.
Optional, default is 0 (no throttling).
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
</para>
<para>
Throttling can be useful when indexer imposes too much load on the
database server. It causes the indexer to sleep for the given amount of
milliseconds once per each ranged query step. This sleep is unconditional,
and is performed before the fetch query.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
sql_ranged_throttle = 1000 # sleep for 1 sec before each query step
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-command"><title>xmlpipe_command</title>
<para>
Shell command that invokes xmlpipe2 stream producer.
Mandatory.
Applies to <option>xmlpipe2</option> source types only.
</para>
<para>
Specifies a command that will be executed and whose output
will be parsed for documents. Refer to <xref linkend="xmlpipe2"/> for the specific format description.
</para>
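<para>
For reference, a minimal document stream that such a command might emit
could look as follows (a sketch; see <xref linkend="xmlpipe2"/> for the
authoritative format description):
<programlisting>
<?xml version="1.0" encoding="utf-8"?>
<sphinx:docset>
<sphinx:schema>
<sphinx:field name="subject"/>
<sphinx:field name="content"/>
<sphinx:attr name="published" type="timestamp"/>
</sphinx:schema>
<sphinx:document id="1">
<subject>test subject</subject>
<content>test content</content>
<published>1234567890</published>
</sphinx:document>
</sphinx:docset>
</programlisting>
</para>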
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_command = cat /home/sphinx/test.xml
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-field"><title>xmlpipe_field</title>
<para>
xmlpipe field declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only. Refer to <xref linkend="xmlpipe2"/>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_field = subject
xmlpipe_field = content
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-field-string"><title>xmlpipe_field_string</title>
<para>
xmlpipe field and string attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only. Refer to <xref linkend="xmlpipe2"/>.
Introduced in version 1.10-beta.
</para>
<para>
Makes the specified XML element indexed as both a full-text field and a string attribute.
Equivalent to <![CDATA[<sphinx:field name="field" attr="string"/>]]> declaration within the XML file.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_field_string = subject
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-uint"><title>xmlpipe_attr_uint</title>
<para>
xmlpipe integer attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
Syntax fully matches that of <link linkend="conf-sql-attr-uint">sql_attr_uint</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_uint = author_id
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-bigint"><title>xmlpipe_attr_bigint</title>
<para>
xmlpipe signed 64-bit integer attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
Syntax fully matches that of <link linkend="conf-sql-attr-bigint">sql_attr_bigint</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_bigint = my_bigint_id
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-bool"><title>xmlpipe_attr_bool</title>
<para>
xmlpipe boolean attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
Syntax fully matches that of <link linkend="conf-sql-attr-bool">sql_attr_bool</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_bool = is_deleted # will be packed to 1 bit
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-timestamp"><title>xmlpipe_attr_timestamp</title>
<para>
xmlpipe UNIX timestamp attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
Syntax fully matches that of <link linkend="conf-sql-attr-timestamp">sql_attr_timestamp</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_timestamp = published
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-float"><title>xmlpipe_attr_float</title>
<para>
xmlpipe floating point attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
Syntax fully matches that of <link linkend="conf-sql-attr-float">sql_attr_float</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_float = lat_radians
xmlpipe_attr_float = long_radians
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-multi"><title>xmlpipe_attr_multi</title>
<para>
xmlpipe MVA attribute declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
</para>
<para>
This setting declares an MVA attribute tag in xmlpipe2 stream.
The contents of the specified tag will be parsed and a list of integers
that will constitute the MVA will be extracted, similar to how
<link linkend="conf-sql-attr-multi">sql_attr_multi</link> parses
SQL column contents when 'field' MVA source type is specified.
</para>
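<para>
In the incoming stream, the tag contents are simply a list of integers;
a hypothetical document fragment:
<programlisting>
<sphinx:document id="123">
<taglist>15, 23, 42</taglist>
</sphinx:document>
</programlisting>
</para>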
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_multi = taglist
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-multi-64"><title>xmlpipe_attr_multi_64</title>
<para>
xmlpipe MVA attribute declaration. Declares the BIGINT (signed 64-bit integer) MVA attribute.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
</para>
<para>
This setting declares an MVA attribute tag in xmlpipe2 stream.
The contents of the specified tag will be parsed and a list of integers
that will constitute the MVA will be extracted, similar to how
<link linkend="conf-sql-attr-multi">sql_attr_multi</link> parses
SQL column contents when 'field' MVA source type is specified.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_multi_64 = taglist
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-string"><title>xmlpipe_attr_string</title>
<para>
xmlpipe string declaration.
Multi-value, optional.
Applies to <option>xmlpipe2</option> source type only.
Introduced in version 1.10-beta.
</para>
<para>
This setting declares a string attribute tag in xmlpipe2 stream.
The contents of the specified tag will be parsed and stored as a string value.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_string = subject
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-attr-json"><title>xmlpipe_attr_json</title>
<para>
JSON attribute declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Introduced in version 2.1.1-beta.
</para>
<para>
This directive is used to declare that the contents of a given
XML tag are to be treated as a JSON document and stored into a Sphinx
index for later use. Refer to <xref linkend="conf-sql-attr-json"/>
for more details on the JSON attributes.
</para>
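<para>
In the incoming stream, the tag contents are simply the JSON text itself;
a hypothetical document fragment:
<programlisting>
<sphinx:document id="123">
<properties>{"gid": 2, "tags": ["tag1", "tag2"]}</properties>
</sphinx:document>
</programlisting>
</para>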
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_attr_json = properties
</programlisting>
</sect2>
<sect2 id="conf-xmlpipe-fixup-utf8"><title>xmlpipe_fixup_utf8</title>
<para>
Perform Sphinx-side UTF-8 validation and filtering to prevent XML parser from choking on non-UTF-8 documents.
Optional, default is 0.
Applies to <option>xmlpipe2</option> source type only.
</para>
<para>
On certain occasions it might be hard or even impossible to guarantee
that the incoming XMLpipe2 document bodies are in perfectly valid and
conforming UTF-8 encoding. For instance, documents with national
single-byte encodings could sneak into the stream. The libexpat XML parser
is fragile, meaning that it will stop processing in such cases.
The UTF-8 fixup feature lets you avoid that. When fixup is enabled,
Sphinx will preprocess the incoming stream before passing it to the
XML parser and replace invalid UTF-8 sequences with spaces.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
xmlpipe_fixup_utf8 = 1
</programlisting>
</sect2>
<sect2 id="conf-mssql-winauth"><title>mssql_winauth</title>
<para>
MS SQL Windows authentication flag.
Boolean, optional, default value is 0 (false).
Applies to <option>mssql</option> source type only.
Introduced in version 0.9.9-rc1.
</para>
<para>
Whether to use the currently logged in Windows account credentials for
authentication when connecting to MS SQL Server. Note that when running
<filename>searchd</filename> as a service, the service account can differ
from the account you used to install the service.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mssql_winauth = 1
</programlisting>
</sect2>
<sect2 id="conf-unpack-zlib"><title>unpack_zlib</title>
<para>
Columns to unpack using zlib (aka deflate, aka gunzip).
Multi-value, optional, default value is empty list of columns.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 0.9.9-rc1.
</para>
<para>
Columns specified using this directive will be unpacked by <filename>indexer</filename>
using the standard zlib algorithm (called deflate and also implemented by <filename>gunzip</filename>).
When indexing on a different box than the database, this lets you offload the database, and save on network traffic.
The feature is only available if zlib and zlib-devel were both available at build time.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
unpack_zlib = col1
unpack_zlib = col2
</programlisting>
</sect2>
<sect2 id="conf-unpack-mysqlcompress"><title>unpack_mysqlcompress</title>
<para>
Columns to unpack using MySQL UNCOMPRESS() algorithm.
Multi-value, optional, default value is empty list of columns.
Applies to SQL source types (<option>mysql</option>, <option>pgsql</option>, <option>mssql</option>) only.
Introduced in version 0.9.9-rc1.
</para>
<para>
Columns specified using this directive will be unpacked by <filename>indexer</filename>
using the modified zlib algorithm used by MySQL COMPRESS() and UNCOMPRESS() functions.
When indexing on a different box than the database, this lets you offload the database, and save on network traffic.
The feature is only available if zlib and zlib-devel were both available at build time.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
unpack_mysqlcompress = body_compressed
unpack_mysqlcompress = description_compressed
</programlisting>
</sect2>
<sect2 id="conf-unpack-mysqlcompress-maxsize"><title>unpack_mysqlcompress_maxsize</title>
<para>
Buffer size for UNCOMPRESS()ed data.
Optional, default value is 16M.
Introduced in version 0.9.9-rc1.
</para>
<para>
When using <link linkend="conf-unpack-mysqlcompress">unpack_mysqlcompress</link>,
due to implementation intricacies it is not possible to deduce the required buffer size
from the compressed data. So the buffer must be preallocated in advance, and the unpacked
data cannot go over the buffer size. This option lets you control the buffer size,
both to limit <filename>indexer</filename> memory use, and to enable unpacking
of really long data fields if necessary.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
unpack_mysqlcompress_maxsize = 1M
</programlisting>
</sect2>
<sect2 id="conf-csvpipe-delimiter"><title>csvpipe_delimiter</title>
<para>
csvpipe source fields delimiter. Optional, default value is ','.
Introduced in version 2.2.1-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
csvpipe_delimiter = ;
</programlisting>
</sect2>
</sect1>
<sect1 id="confgroup-index"><title>Index configuration options</title>
<sect2 id="conf-index-type"><title>type</title>
<para>
Index type.
Known values are 'plain', 'distributed', 'rt' and 'template'.
Optional, default is 'plain' (plain local index).
</para>
<para>
Sphinx supports several different types of indexes.
Version 0.9.x supported two index types: plain local indexes
that are stored and processed on the local machine; and distributed indexes,
that involve not only local searching but querying remote <filename>searchd</filename>
instances over the network as well (see <xref linkend="distributed"/>).
Version 1.10-beta also adds support
for so-called real-time indexes (or RT indexes for short) that
are also stored and processed locally, but additionally allow
for on-the-fly updates of the full-text index (see <xref linkend="rt-indexes"/>).
Note that <emphasis>attributes</emphasis> can be updated on-the-fly using
either plain local indexes or RT ones.
Template indexes were introduced in 2.2.1-beta. They are actually
pseudo-indexes because they do not store any data. That means they do not create
any files on your hard drive, but you can still use them for keywords and snippets
generation, which may be useful in some cases.
</para>
<para>
Index type setting lets you choose the needed type.
By default, plain local index type will be assumed.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
type = distributed
</programlisting>
</sect2>
<sect2 id="conf-source"><title>source</title>
<para>
Adds document source to local index.
Multi-value, mandatory.
</para>
<para>
Specifies document source to get documents from when the current
index is indexed. There must be at least one source. There may be multiple
sources, without any restrictions on the source types: ie. you can pull
part of the data from a MySQL server, part from PostgreSQL, and part from
the filesystem using the xmlpipe2 wrapper.
</para>
<para>
However, there are some restrictions on the source data. First,
document IDs must be globally unique across all sources. If that
condition is not met, you might get unexpected search results.
Second, source schemas must be the same in order to be stored
within the same index.
</para>
<para>
No source ID is stored automatically. Therefore, in order to be able
to tell what source the matched document came from, you will need to
store some additional information yourself. Two typical approaches
include:
<orderedlist>
<listitem><para>mangling document ID and encoding source ID in it:
<programlisting>
source src1
{
sql_query = SELECT id*10+1, ... FROM table1
...
}
source src2
{
sql_query = SELECT id*10+2, ... FROM table2
...
}
</programlisting>
</para></listitem>
<listitem><para>
storing source ID simply as an attribute:
<programlisting>
source src1
{
sql_query = SELECT id, 1 AS source_id FROM table1
sql_attr_uint = source_id
...
}
source src2
{
sql_query = SELECT id, 2 AS source_id FROM table2
sql_attr_uint = source_id
...
}
</programlisting>
</para></listitem>
</orderedlist>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
source = srcpart1
source = srcpart2
source = srcpart3
</programlisting>
</sect2>
<sect2 id="conf-path"><title>path</title>
<para>
Index files path and file name (without extension).
Mandatory.
</para>
<para>
Path specifies both directory and file name, but without extension.
<filename>indexer</filename> will append different extensions
to this path when generating final names for both permanent and
temporary index files. Permanent data files have several different
extensions starting with '.sp'; temporary files' extensions
start with '.tmp'. It's safe to remove <filename>.tmp*</filename>
files if indexer fails to remove them automatically.
</para>
<para>
For reference, different index files store the following data:
<itemizedlist>
<listitem><para><filename>.spa</filename> stores document attributes (used in <link linkend="conf-docinfo">extern docinfo</link> storage mode only);</para></listitem>
<listitem><para><filename>.spd</filename> stores matching document ID lists for each word ID;</para></listitem>
<listitem><para><filename>.sph</filename> stores index header information;</para></listitem>
<listitem><para><filename>.spi</filename> stores word lists (word IDs and pointers to <filename>.spd</filename> file);</para></listitem>
<listitem><para><filename>.spk</filename> stores kill-lists;</para></listitem>
<listitem><para><filename>.spm</filename> stores MVA data;</para></listitem>
<listitem><para><filename>.spp</filename> stores hit (aka posting, aka word occurrence) lists for each word ID;</para></listitem>
<listitem><para><filename>.sps</filename> stores string attribute data;</para></listitem>
<listitem><para><filename>.spe</filename> stores skip-lists to speed up doc-list filtering.</para></listitem>
</itemizedlist>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
path = /var/data/test1
</programlisting>
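<para>
With the example above, <filename>indexer</filename> would produce files such as
<filename>/var/data/test1.sph</filename>, <filename>/var/data/test1.spd</filename>,
<filename>/var/data/test1.spa</filename>, and so on, plus
<filename>/var/data/test1.tmp*</filename> files while indexing is in progress.
</para>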
</sect2>
<sect2 id="conf-docinfo"><title>docinfo</title>
<para>
Document attribute values (docinfo) storage mode.
Optional, default is 'extern'.
Known values are 'none', 'extern' and 'inline'.
</para>
<para>
Docinfo storage mode defines how exactly docinfo will be
physically stored on disk and RAM. "none" means that there will be
no docinfo at all (ie. no attributes). Normally you need not to set
"none" explicitly because Sphinx will automatically select "none"
when there are no attributes configured. "inline" means that the
docinfo will be stored in the <filename>.spd</filename> file,
along with the document ID lists. "extern" means that the docinfo
will be stored separately (externally) from document ID lists,
in a special <filename>.spa</filename> file.
</para>
<para>
Basically, externally stored docinfo must be kept in RAM when querying.
for performance reasons. So in some cases "inline" might be the only option.
However, such cases are infrequent, and docinfo defaults to "extern".
Refer to <xref linkend="attributes"/> for in-depth discussion
and RAM usage estimates.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
docinfo = inline
</programlisting>
</sect2>
<sect2 id="conf-mlock"><title>mlock</title>
<para>
Memory locking for cached data.
Optional, default is 0 (do not call mlock()).
</para>
<para>
For search performance, <filename>searchd</filename> preloads
a copy of <filename>.spa</filename> and <filename>.spi</filename>
files in RAM, and keeps that copy in RAM at all times. But if there
are no searches on the index for some time, there are no accesses
to that cached copy, and the OS might decide to swap it out to disk.
The first queries to such a "cooled down" index will cause swap-in
and their latency will suffer.
</para>
<para>
Setting the mlock option to 1 makes Sphinx lock the physical RAM used
for that cached data using the mlock(2) system call, which prevents
swapping (see man 2 mlock for details). mlock(2) is a privileged call,
so it will require <filename>searchd</filename> to be either run
from the root account, or be granted enough privileges otherwise.
If mlock() fails, a warning is emitted, but the index continues
working.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mlock = 1
</programlisting>
</sect2>
<sect2 id="conf-morphology"><title>morphology</title>
<para>
A list of morphology preprocessors (stemmers or lemmatizers) to apply.
Optional, default is empty (do not apply any preprocessor).
</para>
<para>
Morphology preprocessors can be applied to the words being
indexed to replace different forms of the same word with the base,
normalized form. For instance, the English stemmer will normalize
both "dogs" and "dog" to "dog", making search results for
both searches the same.
</para>
<para>
There are 3 different morphology preprocessors that Sphinx implements:
lemmatizers, stemmers, and phonetic algorithms.
<itemizedlist>
<listitem><para>A lemmatizer reduces a keyword form to a so-called lemma,
a proper normal form, or in other words, a valid natural language
root word. For example, "running" could be reduced to "run",
the infinitive verb form, and "octopi" would be reduced to "octopus",
the singular noun form. Note that sometimes a word form can have
multiple corresponding root words. For instance, by looking at
"dove" it is not possible to tell whether this is the past tense
of the verb "dive", as in "He dove into a pool.", or the noun "dove",
as in "White dove flew over the cuckoo's nest." In this case
the lemmatizer can generate all the possible root forms.
</para></listitem>
<listitem><para>A stemmer reduces a keyword form to a so-called stem
by removing and/or replacing certain well-known suffixes.
The resulting stem is however <b>not</b> guaranteed to be
a valid word itself. For instance, with the Porter English
stemmer "running" would still reduce to "run", which is fine,
but "business" would reduce to "busi", which is not a word,
and "octopi" would not reduce at all. Stemmers are essentially
(much) simpler but still pretty good replacements for full-blown
lemmatizers.
</para></listitem>
<listitem><para>Phonetic algorithms replace the words with specially
crafted phonetic codes that are equal even when the original words
are different, but phonetically close.
</para></listitem>
</itemizedlist>
The morphology processors that come with Sphinx's own built-in
implementations are:
<itemizedlist>
<listitem><para>English, Russian, and German lemmatizers;</para></listitem>
<listitem><para>English, Russian, Arabic, and Czech stemmers;</para></listitem>
<listitem><para>SoundEx and MetaPhone phonetic algorithms.</para></listitem>
</itemizedlist>
You can also link with the <b>libstemmer</b> library for even more
stemmers (see details below). With libstemmer, Sphinx also supports
morphological processing for more than 15 other languages. Binary
packages should come prebuilt with libstemmer support, too.
</para>
<para>
Lemmatizer support was added in version 2.1.1-beta, starting with
a Russian lemmatizer. English and German lemmatizers were then added
in version 2.2.1-beta.
</para>
<para>
Lemmatizers require a dictionary that needs to be
additionally downloaded from the Sphinx website. That dictionary
needs to be installed in a directory specified by
<link linkend="conf-lemmatizer-base">lemmatizer_base</link>
directive. Also, there is a
<link linkend="conf-lemmatizer-cache">lemmatizer_cache</link>
directive that lets you speed up lemmatizing (and therefore
indexing) by spending more RAM for, basically, an uncompressed
cache of a dictionary.
</para>
<para>
Chinese segmentation using the Rosette Linguistics Platform (RLP) was added in 2.2.1-beta.
It is a much more precise but slower way (compared to n-grams) to segment Chinese documents.
<option><link linkend="conf-charset-table">charset_table</link></option> must contain all Chinese characters except
Chinese punctuation marks, because incoming documents are first processed by the Sphinx tokenizer and then the result
is processed by RLP. Sphinx performs per-token language detection on the incoming documents. If a token's language is
identified as Chinese, it will only be processed by the RLP, even if multiple morphology processors are specified.
Otherwise, it will be processed by all the morphology processors specified in the "morphology" option. The Rosette
Linguistics Platform must be installed and configured, and Sphinx must be built with the --with-rlp switch. See also the
<option><link linkend="conf-rlp-root">rlp_root</link></option>,
<option><link linkend="conf-rlp-environment">rlp_environment</link></option> and
<option><link linkend="conf-rlp-context">rlp_context</link></option> options.
A batched version of RLP segmentation is also available (<option>rlp_chinese_batched</option>). It provides the
same functionality as the basic <option>rlp_chinese</option> segmentation, but enables batching documents before
processing them by the RLP. Processing several documents at once can result in a substantial indexing speedup if
the documents are small (for example, less than 1k). See also the
<option><link linkend="conf-rlp-max-batch-size">rlp_max_batch_size</link></option> and
<option><link linkend="conf-rlp-max-batch-docs">rlp_max_batch_docs</link></option> options.
</para>
<para>
Additional stemmers provided by <ulink url="http://snowball.tartarus.org/">Snowball</ulink>
project <ulink url="http://snowball.tartarus.org/dist/libstemmer_c.tgz">libstemmer</ulink> library
can be enabled at compile time using <option>--with-libstemmer</option> <filename>configure</filename> option.
Built-in English and Russian stemmers should be faster than their
libstemmer counterparts, but can produce slightly different results,
because they are based on an older version.
</para>
<para>
Soundex implementation matches that of MySQL. Metaphone implementation
is based on Double Metaphone algorithm and indexes the primary code.
</para>
<para>
Built-in values that are available for use in <option>morphology</option>
option are as follows:
<itemizedlist>
<listitem><para>none - do not perform any morphology processing;</para></listitem>
<listitem><para>lemmatize_ru - apply Russian lemmatizer and pick a single root form (added in 2.1.1-beta);</para></listitem>
<listitem><para>lemmatize_en - apply English lemmatizer and pick a single root form (added in 2.2.1-beta);</para></listitem>
<listitem><para>lemmatize_de - apply German lemmatizer and pick a single root form (added in 2.2.1-beta);</para></listitem>
<listitem><para>lemmatize_ru_all - apply Russian lemmatizer and index all possible root forms (added in 2.1.1-beta);</para></listitem>
<listitem><para>lemmatize_en_all - apply English lemmatizer and index all possible root forms (added in 2.2.1-beta);</para></listitem>
<listitem><para>lemmatize_de_all - apply German lemmatizer and index all possible root forms (added in 2.2.1-beta);</para></listitem>
<listitem><para>stem_en - apply Porter's English stemmer;</para></listitem>
<listitem><para>stem_ru - apply Porter's Russian stemmer;</para></listitem>
<listitem><para>stem_enru - apply Porter's English and Russian stemmers;</para></listitem>
<listitem><para>stem_cz - apply Czech stemmer;</para></listitem>
<listitem><para>stem_ar - apply Arabic stemmer (added in 2.1.1-beta);</para></listitem>
<listitem><para>soundex - replace keywords with their SOUNDEX code;</para></listitem>
<listitem><para>metaphone - replace keywords with their METAPHONE code;</para></listitem>
<listitem><para>rlp_chinese - apply Chinese text segmentation using Rosette Linguistics Platform;</para></listitem>
<listitem><para>rlp_chinese_batched - apply Chinese text segmentation using Rosette Linguistics Platform with document batching.</para></listitem>
</itemizedlist>
Additional values provided by libstemmer are in 'libstemmer_XXX' format,
where XXX is libstemmer algorithm codename (refer to
<filename>libstemmer_c/libstemmer/modules.txt</filename> for a complete list).
</para>
<para>
Several stemmers can be specified (comma-separated). They will be applied
to incoming words in the order they are listed, and the processing will stop
once one of the stemmers actually modifies the word.
Also when <link linkend="conf-wordforms">wordforms</link> feature is enabled
the word will be looked up in word forms dictionary first, and if there is
a matching entry in the dictionary, stemmers will not be applied at all.
Or in other words, <link linkend="conf-wordforms">wordforms</link> can be
used to implement stemming exceptions.
</para>
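<para>
For instance, a hedged sketch of a stemming exception (the file name and path
are hypothetical): listing "gps" in a wordforms file maps it to itself and
thereby shields it from the stemmer:
<programlisting>
morphology = stem_en
wordforms  = /usr/local/sphinx/data/stem-exceptions.txt

# stem-exceptions.txt contents:
# gps > gps
</programlisting>
</para>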
<bridgehead>Example:</bridgehead>
<programlisting>
morphology = stem_en, libstemmer_sv
</programlisting>
</sect2>
<sect2 id="conf-dict"><title>dict</title>
<para>
The keywords dictionary type.
Known values are 'crc' and 'keywords'.
'crc' is DEPRECATED. Use 'keywords' instead.
Optional, default is 'keywords'.
Introduced in version 2.0.1-beta.
</para>
<para>
CRC dictionary mode (dict=crc) was the default dictionary type
in Sphinx, and the only one available until version 2.0.1-beta.
Keywords dictionary mode (dict=keywords) was added in 2.0.1-beta,
primarily to (greatly) reduce indexing impact and enable substring
searches on huge collections. They also eliminate the chance of
CRC32 collisions. In 2.0.1-beta, that mode was only supported
for disk indexes. Starting with 2.0.2-beta, RT indexes are
also supported.
</para>
<para>
CRC dictionaries never store the original keyword text in the index.
Instead, keywords are replaced with their control sum value (either CRC32 or
FNV64, depending whether Sphinx was built with <option>--enable-id64</option>)
both when searching and indexing, and that value is used internally
in the index.
</para>
<para>
That approach has two drawbacks. First, in CRC32 case there is
a chance of control sum collision between several pairs of different
keywords, growing quadratically with the number of unique keywords
in the index. (FNV64 case is unaffected in practice, as a chance
of a single FNV64 collision in a dictionary of 1 billion entries
is approximately 1:16, or 6.25 percent. And most dictionaries
will be much more compact than a billion keywords, as a typical
spoken human language has in the region of 1 to 10 million word
forms.) Second, and more importantly, substring searches are not
directly possible with control sums. Sphinx alleviated that by
pre-indexing all the possible substrings as separate keywords
(see <xref linkend="conf-min-prefix-len"/>, <xref linkend="conf-min-infix-len"/>
directives). That actually has an added benefit of matching
substrings in the quickest way possible. But at the same time
pre-indexing all substrings grows the index size a lot (factors
of 3-10x and even more would not be unusual) and impacts the
indexing time respectively, rendering substring searches
on big indexes rather impractical.
</para>
<para>
Keywords dictionary, introduced in 2.0.1-beta, fixes both these
drawbacks. It stores the keywords in the index and performs
search-time wildcard expansion. For example, a search for a
'test*' prefix could internally expand to 'test|tests|testing'
query based on the dictionary contents. That expansion is fully
transparent to the application, except that the separate
per-keyword statistics for all the actually matched keywords
would now also be reported.
</para>
<para>
Version 2.1.1-beta introduced extended wildcard support: special
symbols like '?' and '%' are now supported along with substring (infix) search (e.g. "t?st*", "run%", "*abc*").
Note, however, that these wildcards only work with dict=keywords, and not elsewhere.
</para>
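<para>
A brief sketch of the extended wildcard semantics (assuming the respective
keywords actually occur in the index):
<programlisting>
dict = keywords
# t?st  -- '?' matches exactly one character: "test", "tost", ...
# run%  -- '%' matches zero or one character: "run", "runs", ...
# *abc* -- infix search: any keyword containing "abc"
</programlisting>
</para>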
<para>
Indexing with keywords dictionary should be 1.1x to 1.3x slower
compared to regular, non-substring indexing - but many times faster
compared to substring indexing (either prefix or infix). Index size
should only be slightly bigger than that of the regular non-substring
index, with a 1 to 10% total difference.
Regular keyword searching time must be very close or identical across
all three discussed index kinds (CRC non-substring, CRC substring,
keywords). Substring searching time can vary greatly depending
on how many actual keywords match the given substring (in other
words, into how many keywords does the search term expand).
The maximum number of keywords matched is restricted by the
<link linkend="conf-expansion-limit">expansion_limit</link>
directive.
</para>
<para>
Essentially, keywords and CRC dictionaries represent the two
different trade-off substring searching decisions. You can choose
to either sacrifice indexing time and index size in favor of
top-speed worst-case searches (CRC dictionary), or only slightly
impact indexing time but sacrifice worst-case searching time when
the prefix expands into very many keywords (keywords dictionary).
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
dict = keywords
</programlisting>
</sect2>
<sect2 id="conf-index-sp"><title>index_sp</title>
<para>
Whether to detect and index sentence and paragraph boundaries.
Optional, default is 0 (do not detect and index).
Introduced in version 2.0.1-beta.
</para>
<para>
This directive enables sentence and paragraph boundary indexing.
It's required for the SENTENCE and PARAGRAPH operators to work.
Sentence boundary detection is based on plain text analysis, so you
only need to set <code>index_sp = 1</code> to enable it. Paragraph
detection is however based on HTML markup, and happens in the
<link linkend="conf-html-strip">HTML stripper</link>.
So to index paragraph locations you also need to enable the stripper
by specifying <code>html_strip = 1</code>. Both types of boundaries
are detected based on a few built-in rules enumerated just below.
</para>
<para>
Sentence boundary detection rules are as follows.
<itemizedlist>
<listitem><para>Question and exclamation signs (? and !) are always a sentence boundary.</para></listitem>
<listitem><para>Trailing dot (.) is a sentence boundary, except:
<itemizedlist>
<listitem><para>When followed by a letter. That's considered a part of an abbreviation (as in "S.T.A.L.K.E.R" or "Goldman Sachs S.p.A.").</para></listitem>
<listitem><para>When followed by a comma. That's considered an abbreviation followed by a comma (as in "Telecom Italia S.p.A., founded in 1994").</para></listitem>
<listitem><para>When followed by a space and a small letter. That's considered an abbreviation within a sentence (as in "News Corp. announced in February").</para></listitem>
<listitem><para>When preceded by a space and a capital letter, and followed by a space. That's considered a middle initial (as in "John D. Doe").</para></listitem>
</itemizedlist>
</para></listitem>
</itemizedlist>
</para>
<para>
Paragraph boundaries are inserted at every block-level HTML tag.
Namely, those are (as taken from HTML 4 standard) ADDRESS, BLOCKQUOTE,
CAPTION, CENTER, DD, DIV, DL, DT, H1, H2, H3, H4, H5, LI, MENU, OL, P,
PRE, TABLE, TBODY, TD, TFOOT, TH, THEAD, TR, and UL.
</para>
<para>
Both sentences and paragraphs increment the keyword position counter by 1.
</para>
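<para>
As an illustration, a hedged sketch of enabling boundary indexing and then
querying with the SENTENCE operator (the index name is hypothetical):
<programlisting>
index_sp   = 1 # index sentence boundaries
html_strip = 1 # additionally required for PARAGRAPH detection

# SphinxQL: match documents with both words in one sentence
# SELECT * FROM myindex WHERE MATCH('office SENTENCE dog')
</programlisting>
</para>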
<bridgehead>Example:</bridgehead>
<programlisting>
index_sp = 1
</programlisting>
</sect2>
<sect2 id="conf-index-zones"><title>index_zones</title>
<para>
A list of in-field HTML/XML zones to index.
Optional, default is empty (do not index zones).
Introduced in version 2.0.1-beta.
</para>
<para>
Zones can be formally defined as follows. Everything between
an opening and a matching closing tag is called a span, and
the aggregate of all spans sharing the same
tag name is called a zone. For instance, everything between
the occurrences of <H1> and </H1> in the document
field belongs to H1 zone.
</para>
<para>
Zone indexing, enabled by <code>index_zones</code> directive,
is an optional extension of the HTML stripper. So it will also
require that the <link linkend="conf-html-strip">stripper</link>
is enabled (with <code>html_strip = 1</code>). The value of the
<code>index_zones</code> should be a comma-separated list of
those tag names and wildcards (ending with a star) that should
be indexed as zones.
</para>
<para>
Zones can nest and overlap arbitrarily. The only requirement
is that every opening tag has a matching closing tag. You can also have
an arbitrary number of both zones (as in unique zone names,
such as H1) and spans (all the occurrences of those H1 tags)
in a document.
Once indexed, zones can then be used for matching with
the ZONE operator, see <xref linkend="extended-syntax"/>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
index_zones = h*, th, title
</programlisting>
<para>
Versions earlier than 2.1.1-beta only provided this feature for plain
indexes; currently, RT indexes support it as well.
</para>
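<para>
As an illustration, once zones are indexed, a query could restrict matching
to specific zones via the ZONE operator; a hypothetical SphinxQL sketch:
<programlisting>
SELECT * FROM myindex WHERE MATCH('ZONE:(h1,title) hello world')
</programlisting>
</para>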
</sect2>
<sect2 id="conf-min-stemming-len"><title>min_stemming_len</title>
<para>
Minimum word length at which to enable stemming.
Optional, default is 1 (stem everything).
Introduced in version 0.9.9-rc1.
</para>
<para>
Stemmers are not perfect, and might sometimes produce undesired results.
For instance, running "gps" keyword through Porter stemmer for English
results in "gp", which is not really the intent. <option>min_stemming_len</option>
feature lets you suppress stemming based on the source word length,
ie. to avoid stemming too short words. Keywords that are shorter than
the given threshold will not be stemmed. Note that keywords that are
exactly as long as specified <b>will</b> be stemmed. So in order to avoid
stemming 3-character keywords, you should specify 4 for the value.
For finer-grained control, refer to the <link linkend="conf-wordforms">wordforms</link> feature.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
min_stemming_len = 4
</programlisting>
</sect2>
<sect2 id="conf-stopwords"><title>stopwords</title>
<para>
Stopword files list (space separated).
Optional, default is empty.
</para>
<para>
Stopwords are the words that will not be indexed. Typically you'd
put most frequent words in the stopwords list because they do not add
much value to search results but consume a lot of resources to process.
</para>
<para>
You can specify several file names, separated by spaces. All the files
will be loaded. Stopwords file format is simple plain text. The encoding
must be UTF-8.
File data will be tokenized with respect to <link linkend="conf-charset-table">charset_table</link>
settings, so you can use the same separators as in the indexed data.
</para>
<para>
The <link linkend="conf-morphology">stemmers</link> will normally be
applied when parsing stopwords file. That might however lead to undesired
results. Starting with 2.1.1-beta, you can turn that off with
<link linkend="conf-stopwords-unstemmed">stopwords_unstemmed</link>.
</para>
<para>
Starting with version 2.1.1-beta small enough files are stored in the index
header, see <xref linkend="conf-embedded-limit"/> for details.
</para>
<para>
While stopwords are not indexed, they still do affect the keyword positions.
For instance, assume that "the" is a stopword, that document 1 contains the line
"in office", and that document 2 contains "in the office". Searching for "in office"
as for exact phrase will only return the first document, as expected, even though
"the" in the second one is stopped. That behavior can be tweaked through the
<link linkend="conf-stopword-step">stopword_step</link> directive.
</para>
<para>
Stopwords files can either be created manually, or semi-automatically.
<filename>indexer</filename> provides a mode that creates a frequency dictionary
of the index, sorted by the keyword frequency, see <option>--buildstops</option>
and <option>--buildfreqs</option> switch in <xref linkend="ref-indexer"/>.
Top keywords from that dictionary can usually be used as stopwords.
</para>
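<para>
For instance, a hypothetical invocation that builds a 1000-entry frequency
dictionary for an index called 'myindex', with per-keyword counts:
<programlisting>
indexer myindex --buildstops word_freq.txt 1000 --buildfreqs
</programlisting>
</para>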
<bridgehead>Example:</bridgehead>
<programlisting>
stopwords = /usr/local/sphinx/data/stopwords.txt
stopwords = stopwords-ru.txt stopwords-en.txt
</programlisting>
</sect2>
<sect2 id="conf-wordforms"><title>wordforms</title>
<para>
Word forms dictionary.
Optional, default is empty.
</para>
<para>
Word forms are applied after tokenizing the incoming text
by <link linkend="conf-charset-table">charset_table</link> rules.
They essentially let you replace one word with another. Normally,
that would be used to bring different word forms to a single
normal form (eg. to normalize all the variants such as "walks",
"walked", "walking" to the normal form "walk"). It can also be used
to implement stemming exceptions, because stemming is not applied
to words found in the forms list.
</para>
<para>
Starting with version 2.1.1-beta small enough files are stored in the index
header, see <xref linkend="conf-embedded-limit"/> for details.
</para>
<para>
Dictionaries are used to normalize incoming words both during indexing
and searching. Therefore, to pick up changes in the wordforms file
it's required to rotate the index.
</para>
<para>
Word forms support in Sphinx is designed to support big dictionaries well.
They moderately affect indexing speed: for instance, a dictionary with 1 million
entries slows down indexing about 1.5 times. Searching speed is not affected at all.
Additional RAM impact is roughly equal to the dictionary file size,
and dictionaries are shared across indexes: ie. if the very same 50 MB wordforms
file is specified for 10 different indexes, additional <filename>searchd</filename>
RAM usage will be about 50 MB.
</para>
<para>
Dictionary file should be in a simple plain text format. Each line
should contain source and destination word forms, in UTF-8 encoding,
separated by a "greater than" sign. Rules from the
<link linkend="conf-charset-table">charset_table</link> will be
applied when the file is loaded. So basically it's as case sensitive
as your other full-text indexed data, ie. typically case insensitive.
Here's the file contents sample:
<programlisting>
walks > walk
walked > walk
walking > walk
</programlisting>
</para>
<para>
There is a bundled <filename>spelldump</filename> utility that
helps you create a dictionary file in the format Sphinx can read
from source <filename>.dict</filename> and <filename>.aff</filename>
dictionary files in <filename>ispell</filename> or <filename>MySpell</filename>
format (as bundled with OpenOffice).
</para>
<para>
Starting with version 0.9.9-rc1, you can map several source words
to a single destination word. Because the work happens on tokens,
not the source text, differences in whitespace and markup are ignored.
</para>
<para>
Starting with version 2.1.1-beta, you can use "=>" instead of ">". Comments
(starting with "#") are also allowed. Finally, if a line starts with a tilde ("~"),
the wordform will be applied after morphology, instead of before.
<programlisting>
core 2 duo > c2d
e6600 > c2d
core 2duo => c2d # Some people write '2duo' together...
</programlisting>
</para>
<para>
Starting with version 2.2.4, you can specify multiple destination tokens:
<programlisting>
s02e02 > season 2 episode 2
s3 e3 > season 3 episode 3
</programlisting>
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
wordforms = /usr/local/sphinx/data/wordforms.txt
wordforms = /usr/local/sphinx/data/alternateforms.txt
wordforms = /usr/local/sphinx/private/dict*.txt
</programlisting>
<para>
Starting with version 2.1.1-beta you can specify several files and not
only just one. Masks can be used as a pattern, and all matching files will
be processed in simple ascending order. (If multi-byte codepages are used,
and file names can include foreign characters, the resulting order may not
be exactly alphabetic.) If the same wordform definition is found in several
files, the latter one is used, and it overrides previous definitions.
</para>
</sect2>
<sect2 id="conf-embedded-limit"><title>embedded_limit</title>
<para>
Embedded exceptions, wordforms, or stopwords file size limit.
Optional, default is 16K.
Added in version 2.1.1-beta.
</para>
<para>
Before 2.1.1-beta, the contents of exceptions, wordforms, or stopwords
files were always kept in the files. Only the file names were stored into
the index. Starting with 2.1.1-beta, indexer can either save the file name,
or embed the file contents directly into the index. Files sized under
<code>embedded_limit</code> get stored into the index. For bigger files,
only the file names are stored. This also simplifies moving index files
to a different machine; you may get by just copying a single file.
</para>
<para>
With smaller files, such embedding reduces the number of the external
files on which the index depends, and helps maintenance. But at the same
time it makes no sense to embed a 100 MB wordforms dictionary into a tiny
delta index. So there needs to be a size threshold, and <code>embedded_limit</code>
is that threshold.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
embedded_limit = 32K
</programlisting>
</sect2>
<sect2 id="conf-exceptions"><title>exceptions</title>
<para>
Tokenizing exceptions file.
Optional, default is empty.
</para>
<para>
Exceptions let you map one or more tokens (including tokens with
characters that would normally be excluded) to a single keyword.
They are similar to <link linkend="conf-wordforms">wordforms</link>
in that they also perform mapping, but have a number of important
differences.
</para>
<para>
Starting with version 2.1.1-beta small enough files are stored in the index
header, see <xref linkend="conf-embedded-limit"/> for details.
</para>
<para>
Short summary of the differences is as follows:
<itemizedlist>
<listitem><para>exceptions are case sensitive, wordforms are not;</para></listitem>
<listitem><para>exceptions can use special characters that are <b>not</b> in charset_table, wordforms fully obey charset_table;</para></listitem>
<listitem><para>exceptions can underperform on huge dictionaries, wordforms handle millions of entries well.</para></listitem>
</itemizedlist>
</para>
<para>
The expected file format is also plain text, with one line per exception,
and the line format is as follows:
<programlisting>
map-from-tokens => map-to-token
</programlisting>
Example file:
<programlisting>
at & t => at&t
AT&T => AT&T
Standarten Fuehrer => standartenfuhrer
Standarten Fuhrer => standartenfuhrer
MS Windows => ms windows
Microsoft Windows => ms windows
C++ => cplusplus
c++ => cplusplus
C plus plus => cplusplus
</programlisting>
All tokens here are case sensitive: they will <b>not</b> be processed by
<link linkend="conf-charset-table">charset_table</link> rules. Thus, with
the example exceptions file above, "at&t" text will be tokenized as two
keywords "at" and "t", because of lowercase letters. On the other hand,
"AT&T" will match exactly and produce single "AT&T" keyword.
</para>
<para>
Note that this map-to keyword is a) always interpreted
as a <emphasis>single</emphasis> word, and b) is both case and space
sensitive! In our sample, "ms windows" query will <emphasis>not</emphasis>
match the document with "MS Windows" text. The query will be interpreted
as a query for two keywords, "ms" and "windows". And what "MS Windows"
gets mapped to is a <emphasis>single</emphasis> keyword "ms windows",
with a space in the middle. On the other hand, "standartenfuhrer"
will retrieve documents with "Standarten Fuhrer" or "Standarten Fuehrer"
contents (capitalized exactly like this), or any capitalization variant
of the keyword itself, eg. "staNdarTenfUhreR". (It won't catch
"standarten fuhrer", however: this text does not match any of the
listed exceptions because of case sensitivity, and gets indexed
as two separate keywords.)
</para>
<para>
Whitespace in the map-from tokens list matters, but its amount does not.
Any amount of whitespace in the map-from list will match any other amount
of whitespace in the indexed document or query. For instance, "AT & T"
map-from token will match "AT    &  T" text,
whatever the amount of space in both map-from part and the indexed text.
Such text will therefore be indexed as a special "AT&T" keyword,
thanks to the very first entry from the sample.
</para>
<para>
Exceptions also let you capture special characters (that are exceptions
from general <link linkend="conf-charset-table">charset_table</link> rules;
hence the name). Assume that you generally do not want to treat '+'
as a valid character, but still want to be able to search for some exceptions
to this rule, such as 'C++'. The sample above will do just that, totally
independent of which characters are in the table and which are not.
</para>
<para>
Exceptions are applied to raw incoming document and query data
during indexing and searching respectively. Therefore, to pick up
changes in the file it's required to reindex and restart
<filename>searchd</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
exceptions = /usr/local/sphinx/data/exceptions.txt
</programlisting>
</sect2>
<sect2 id="conf-min-word-len"><title>min_word_len</title>
<para>
Minimum indexed word length.
Optional, default is 1 (index everything).
</para>
<para>
Only those words that are not shorter than this minimum will be indexed.
For instance, if min_word_len is 4, then 'the' won't be indexed, but 'they' will be.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
min_word_len = 4
</programlisting>
</sect2>
<sect2 id="conf-charset-table"><title>charset_table</title>
<para>
Accepted characters table, with case folding rules.
Optional, default value is Latin and Cyrillic characters.
</para>
<para>
charset_table is the main workhorse of Sphinx tokenizing process,
ie. the process of extracting keywords from document text or query text.
It controls what characters are accepted as valid and what are not,
and how the accepted characters should be transformed (eg. should
the case be removed or not).
</para>
<para>
You can think of charset_table as of a big table that has a mapping
for each and every of 100K+ characters in Unicode. By default,
every character maps to 0, which means that it does not occur
within keywords and should be treated as a separator. Once
mentioned in the table, character is mapped to some other
character (most frequently, either to itself or to a lowercase
letter), and is treated as a valid keyword part.
</para>
<para>
The expected value format is a comma-separated list of mappings.
The two simplest mappings simply declare a character as valid, and map
a single character to another single character, respectively.
But specifying the whole table in such form would result
in bloated and barely manageable specifications. So there are
several syntax shortcuts that let you map ranges of characters
at once. The complete list is as follows:
<variablelist>
<varlistentry>
<term>A->a</term>
<listitem><para>Single char mapping, declares source char 'A' as allowed
to occur within keywords and maps it to destination char 'a'
(but does <emphasis>not</emphasis> declare 'a' as allowed).
</para></listitem>
</varlistentry>
<varlistentry>
<term>A..Z->a..z</term>
<listitem><para>Range mapping, declares all chars in source range
as allowed and maps them to the destination range. Does <emphasis>not</emphasis>
declare destination range as allowed. Also checks ranges' lengths
(the lengths must be equal).
</para></listitem>
</varlistentry>
<varlistentry>
<term>a</term>
<listitem><para>Stray char mapping, declares a character as allowed
and maps it to itself. Equivalent to a->a single char mapping.
</para></listitem>
</varlistentry>
<varlistentry>
<term>a..z</term>
<listitem><para>Stray range mapping, declares all characters in range
as allowed and maps them to themselves. Equivalent to
a..z->a..z range mapping.
</para></listitem>
</varlistentry>
<varlistentry>
<term>A..Z/2</term>
<listitem><para>Checkerboard range map. Maps every pair of chars
to the second char. More formally, declares odd characters
in range as allowed and maps them to the even ones; also
declares even characters as allowed and maps them to themselves.
For instance, A..Z/2 is equivalent to A->B, B->B, C->D, D->D,
..., Y->Z, Z->Z. This mapping shortcut is helpful for
a number of Unicode blocks where uppercase and lowercase
letters go in such interleaved order instead of contiguous
chunks.
</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
Control characters with codes from 0 to 31 are always treated as separators.
Characters with codes 32 to 127, ie. 7-bit ASCII characters, can be used
in the mappings as is. To avoid configuration file encoding issues,
8-bit ASCII characters and Unicode characters must be specified in U+xxx form,
where 'xxx' is hexadecimal codepoint number. This form can also be used
for 7-bit ASCII characters to encode special ones: eg. use U+20 to
encode space, U+2E to encode dot, U+2C to encode comma.
</para>
<para>
Starting with 2.2.3-beta, the aliases "english" and "russian" can be used
in the mapping.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
# default are English and Russian letters
charset_table = 0..9, A..Z->a..z, _, a..z, \
U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451
# english charset defined with alias
charset_table = 0..9, english, _
</programlisting>
</sect2>
<sect2 id="conf-ignore-chars"><title>ignore_chars</title>
<para>
Ignored characters list.
Optional, default is empty.
</para>
<para>
Useful in the cases when some characters, such as soft hyphenation mark (U+00AD),
should be not just treated as separators but rather fully ignored.
For example, if '-' is simply not in the charset_table,
"abc-def" text will be indexed as "abc" and "def" keywords.
On the contrary, if '-' is added to ignore_chars list, the same
text will be indexed as a single "abcdef" keyword.
</para>
<para>
The syntax is the same as for <link linkend="conf-charset-table">charset_table</link>,
but it's only allowed to declare characters, and not allowed to map them. Also,
the ignored characters must not be present in charset_table.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
ignore_chars = U+AD
</programlisting>
</sect2>
<sect2 id="conf-min-prefix-len"><title>min_prefix_len</title>
<para>
Minimum word prefix length to index.
Optional, default is 0 (do not index prefixes).
</para>
<para>
Prefix indexing lets you implement wildcard searching with 'wordstart*' wildcards.
When minimum prefix length is set to a positive number, indexer will index
all the possible keyword prefixes (ie. word beginnings) in addition to the keywords
themselves. Too short prefixes (below the minimum allowed length) will not
be indexed.
</para>
<para>
For instance, indexing a keyword "example" with min_prefix_len=3
will result in indexing "exa", "exam", "examp", "exampl" prefixes along
with the word itself. Searches against such index for "exam" will match
documents that contain "example", even if they do not contain "exam"
by itself. However, indexing prefixes will make the index grow significantly
(because of many more indexed keywords), and will degrade both indexing
and searching performance.
</para>
<para>
Perfect word matches can be differentiated from prefix matches, and ranked higher, by utilizing all of the following options: a) dict=keywords (on by default), b) index_exact_words=1 (off by default), and c) expand_keywords=1 (also off by default).
Note that with either the legacy dict=crc mode (which you should ditch anyway!) or with any of the above options disabled, there is no data to differentiate between the prefixes and full words, and thus perfect word matches can't be ranked higher.
</para>
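<para>
For instance, a minimal sketch (the index name and the exact grouping of directives
are hypothetical) that combines all three options might look as follows:
</para>
<programlisting>
index products
{
    # source, path, and other mandatory settings go here
    dict              = keywords
    min_prefix_len    = 3
    index_exact_words = 1
    expand_keywords   = 1
}
</programlisting>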
<bridgehead>Example:</bridgehead>
<programlisting>
min_prefix_len = 3
</programlisting>
</sect2>
<sect2 id="conf-min-infix-len"><title>min_infix_len</title>
<para>
Minimum infix length to index.
Optional, default is 0 (do not index infixes).
</para>
<para>
Infix indexing lets you implement wildcard searching with 'start*', '*end', and '*middle*' wildcards.
When minimum infix length is set to a positive number, indexer will index all the possible keyword infixes
(ie. substrings) in addition to the keywords themselves. Too short infixes
(below the minimum allowed length) will not be indexed. For instance,
indexing a keyword "test" with min_infix_len=2 will result in indexing
"te", "es", "st", "tes", "est" infixes along with the word itself.
Searches against such index for "es" will match documents that contain
"test", even if they do not contain "es" by itself. However,
indexing infixes will make the index grow significantly (because of
many more indexed keywords), and will degrade both indexing and
searching performance.</para>
<para>
Perfect word matches can be differentiated from infix matches, and ranked higher, by utilizing all of the following options: a) dict=keywords (on by default), b) index_exact_words=1 (off by default), and c) expand_keywords=1 (also off by default).
Note that with either the legacy dict=crc mode (which you should ditch anyway!) or with any of the above options disabled, there is no data to differentiate between the infixes and full words, and thus perfect word matches can't be ranked higher.
</para>
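<para>
For instance, assuming dict=keywords and a hypothetical index named myindex,
a SphinxQL wildcard query along these lines would match documents containing "test":
</para>
<programlisting>
SELECT id FROM myindex WHERE MATCH('*est*');
</programlisting>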
<bridgehead>Example:</bridgehead>
<programlisting>
min_infix_len = 3
</programlisting>
</sect2>
<sect2 id="conf-max-substring-len"><title>max_substring_len</title>
<para>
Maximum substring (either prefix or infix) length to index.
Optional, default is 0 (do not limit indexed substrings).
Applies to dict=crc only.
</para>
<para>
By default, substring (either prefix or infix) indexing in the
<link linkend="conf-dict">dict=crc mode</link> will index <b>all</b>
the possible substrings as separate keywords. That might result
in an overly large index. So the <code>max_substring_len</code>
directive lets you limit the impact of substring indexing
by skipping too-long substrings (which, chances are, will never
get searched for anyway).
</para>
<para>
For example, a test index of 10,000 blog posts takes this
much disk space depending on the settings:
<itemizedlist>
<listitem>6.4 MB baseline (no substrings)</listitem>
<listitem>24.3 MB (3.8x) with min_prefix_len = 3</listitem>
<listitem>22.2 MB (3.5x) with min_prefix_len = 3, max_substring_len = 8</listitem>
<listitem>19.3 MB (3.0x) with min_prefix_len = 3, max_substring_len = 6</listitem>
<listitem>94.3 MB (14.7x) with min_infix_len = 3</listitem>
<listitem>84.6 MB (13.2x) with min_infix_len = 3, max_substring_len = 8</listitem>
<listitem>70.7 MB (11.0x) with min_infix_len = 3, max_substring_len = 6</listitem>
</itemizedlist>
So in this test limiting the max substring length saved us
10-15% on the index size.
</para>
<para>
There is no performance impact associated with substring length
when using dict=keywords mode, so this directive is not applicable
and intentionally forbidden in that case. If required, you can still
limit the length of a substring that you search for in the application
code.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_substring_len = 12
</programlisting>
</sect2>
<sect2 id="conf-prefix-fields"><title>prefix_fields</title>
<para>
The list of full-text fields to limit prefix indexing to. Applies to dict=crc only.
Optional, default is empty (index all fields in prefix mode).
</para>
<para>
Because prefix indexing impacts both indexing and searching performance,
it might be desired to limit it to specific full-text fields only:
for instance, to provide prefix searching through URLs, but not through
page contents. prefix_fields specifies what fields will be prefix-indexed;
all other fields will be indexed in normal mode. The value format is a
comma-separated list of field names.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
prefix_fields = url, domain
</programlisting>
</sect2>
<sect2 id="conf-infix-fields"><title>infix_fields</title>
<para>
The list of full-text fields to limit infix indexing to. Applies to dict=crc only.
Optional, default is empty (index all fields in infix mode).
</para>
<para>
Similar to <link linkend="conf-prefix-fields">prefix_fields</link>,
but lets you limit infix-indexing to given fields.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
infix_fields = url, domain
</programlisting>
</sect2>
<sect2 id="conf-ngram-len"><title>ngram_len</title>
<para>
N-gram lengths for N-gram indexing.
Optional, default is 0 (disable n-gram indexing).
Known values are 0 and 1 (other lengths to be implemented).
</para>
<para>
N-grams provide basic CJK (Chinese, Japanese, Korean) support for
unsegmented texts. The issue with CJK searching is that there could be no
clear separators between the words. Ideally, the texts would be filtered
through a special program called segmenter that would insert separators
in proper locations. However, segmenters are slow and error prone,
and it's common to index contiguous groups of N characters, or n-grams,
instead.
</para>
<para>
When this feature is enabled, streams of CJK characters are indexed
as N-grams. For example, if incoming text is "ABCDEF" (where A to F represent
some CJK characters) and length is 1, it will be indexed as if
it were "A B C D E F". (With length equal to 2, it would produce "AB BC CD DE EF";
but only 1 is supported at the moment.) Only those characters that are
listed in <link linkend="conf-ngram-chars">ngram_chars</link> table
will be split this way; other ones will not be affected.
</para>
<para>
Note that if the search query is segmented, ie. there are separators between
individual words, then wrapping the words in quotes and using extended mode
will result in proper matches being found even if the text was <b>not</b>
segmented. For instance, assume that the original query is BC DEF.
After wrapping in quotes on the application side, it should look
like "BC" "DEF" (<emphasis>with</emphasis> quotes). This query
will be passed to Sphinx and internally split into 1-grams too,
resulting in "B C" "D E F" query, still with
quotes that are the phrase matching operator. And it will match
the text even though there were no separators in the text.
</para>
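<para>
For instance, the wrapped query from the example above could be sent via SphinxQL
as follows (assuming a hypothetical index named cjk_index):
</para>
<programlisting>
SELECT id FROM cjk_index WHERE MATCH('"BC" "DEF"');
</programlisting>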
<para>
Even if the search query is not segmented, Sphinx should still produce
good results, thanks to phrase based ranking: it will pull closer phrase
matches (which in case of N-gram CJK words can mean closer multi-character
word matches) to the top.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
ngram_len = 1
</programlisting>
</sect2>
<sect2 id="conf-ngram-chars"><title>ngram_chars</title>
<para>
N-gram characters list.
Optional, default is empty.
</para>
<para>
To be used in conjunction with <link linkend="conf-ngram-len">ngram_len</link>,
this list defines characters, sequences of which are subject to N-gram extraction.
Words comprised of other characters will not be affected by the N-gram indexing
feature. The value format is identical to <link linkend="conf-charset-table">charset_table</link>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
ngram_chars = U+3000..U+2FA1F
</programlisting>
</sect2>
<sect2 id="conf-phrase-boundary"><title>phrase_boundary</title>
<para>
Phrase boundary characters list.
Optional, default is empty.
</para>
<para>
This list controls what characters will be treated as phrase boundaries,
in order to adjust word positions and enable phrase-level search
emulation through proximity search. The syntax is similar
to <link linkend="conf-charset-table">charset_table</link>.
Mappings are not allowed and the boundary characters must not
overlap with anything else.
</para>
<para>
On phrase boundary, additional word position increment (specified by
<link linkend="conf-phrase-boundary-step">phrase_boundary_step</link>)
will be added to current word position. This enables phrase-level
searching through proximity queries: words in different phrases
will be guaranteed to be more than phrase_boundary_step distance
away from each other; so proximity search within that distance
will be equivalent to phrase-level search.
</para>
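<para>
For instance, with phrase_boundary_step = 100 configured, a proximity query with
a threshold just below the step would roughly emulate sentence-level matching;
a sketch, assuming a hypothetical index named myindex:
</para>
<programlisting>
SELECT id FROM myindex WHERE MATCH('"pick up the milk"~99');
</programlisting>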
<para>
Phrase boundary condition will be raised if and only if such character
is followed by a separator; this is to avoid abbreviations such as
S.T.A.L.K.E.R or URLs being treated as several phrases.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis
</programlisting>
</sect2>
<sect2 id="conf-phrase-boundary-step"><title>phrase_boundary_step</title>
<para>
Phrase boundary word position increment.
Optional, default is 0.
</para>
<para>
On phrase boundary, current word position will be additionally incremented
by this number. See <link linkend="conf-phrase-boundary">phrase_boundary</link> for details.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
phrase_boundary_step = 100
</programlisting>
</sect2>
<sect2 id="conf-html-strip"><title>html_strip</title>
<para>
Whether to strip HTML markup from incoming full-text data.
Optional, default is 0.
Known values are 0 (disable stripping) and 1 (enable stripping).
</para>
<para>
Both HTML tags and entities are considered markup and get processed.
</para>
<para>HTML tags are removed, their contents (i.e., everything between
<P> and </P>) are left intact by default. You can choose
to keep and index attributes of the tags (e.g., HREF attribute in
an A tag, or ALT in an IMG one). Several well-known inline tags are
completely removed, all other tags are treated as block level and
replaced with whitespace. For example, 'te<B>st</B>'
text will be indexed as a single keyword 'test', however,
'te<P>st</P>' will be indexed as two keywords
'te' and 'st'. Known inline tags are as follows: A, B, I, S, U, BASEFONT,
BIG, EM, FONT, IMG, LABEL, SMALL, SPAN, STRIKE, STRONG, SUB, SUP, TT.
</para>
<para>
HTML entities get decoded and replaced with corresponding UTF-8
characters. Stripper supports both numeric forms (such as &#239;)
and text forms (such as &oacute; or &nbsp;). All entities
as specified by HTML4 standard are supported.
</para>
<para>
Stripping should work with
properly formed HTML and XHTML, but, just as most browsers, may produce
unexpected results on malformed input (such as HTML with stray <'s
or unclosed >'s).
</para>
<para>
Only the tags themselves, and also HTML comments, are stripped.
To strip the contents of the tags too (eg. to strip embedded scripts),
see <link linkend="conf-html-remove-elements">html_remove_elements</link> option.
There are no restrictions on tag names; ie. everything
that looks like a valid tag start, or end, or a comment
will be stripped.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
html_strip = 1
</programlisting>
</sect2>
<sect2 id="conf-html-index-attrs"><title>html_index_attrs</title>
<para>
A list of markup attributes to index when stripping HTML.
Optional, default is empty (do not index markup attributes).
</para>
<para>
Specifies HTML markup attributes whose contents should be retained and indexed
even though other HTML markup is stripped. The format is per-tag enumeration of
indexable attributes, as shown in the example below.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
html_index_attrs = img=alt,title; a=title;
</programlisting>
</sect2>
<sect2 id="conf-html-remove-elements"><title>html_remove_elements</title>
<para>
A list of HTML elements for which to strip contents along with the elements themselves.
Optional, default is empty string (do not strip contents of any elements).
</para>
<para>
This feature lets you strip element contents, ie. everything that
is between the opening and the closing tags. It is useful to remove
embedded scripts, CSS, etc. Short tag form for empty elements
(ie. <br />) is properly supported; ie. the text that
follows such tag will <b>not</b> be removed.
</para>
<para>
The value is a comma-separated list of element (tag) names whose
contents should be removed. Tag names are case insensitive.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
html_remove_elements = style, script
</programlisting>
</sect2>
<sect2 id="conf-local"><title>local</title>
<para>
Local index declaration in the <link linkend="distributed">distributed index</link>.
Multi-value, optional, default is empty.
</para>
<para>
This setting is used to declare local indexes that will be searched when
the given distributed index is searched. Any number of local indexes can be
declared per distributed index, and any local index can also be mentioned
several times in different distributed indexes.
</para>
<para>
Note that by default all local indexes will be searched <b>sequentially</b>,
utilizing only 1 CPU or core. To parallelize processing of the local parts
in the distributed index, you should use <option>dist_threads</option> directive,
see <xref linkend="conf-dist-threads"/>.
</para>
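<para>
For instance, a minimal sketch (index and chunk names are hypothetical):
</para>
<programlisting>
index dist1
{
    type  = distributed
    local = chunk1
    local = chunk2
    local = chunk3
    local = chunk4
}

searchd
{
    # search up to 4 local parts in parallel
    dist_threads = 4
}
</programlisting>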
<para>
Before <option>dist_threads</option>, there also was a legacy solution
to configure <filename>searchd</filename> to query itself instead of using
local indexes (refer to <xref linkend="conf-agent"/> for the details). However,
that creates redundant CPU and network load, and <option>dist_threads</option>
is now strongly suggested instead.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
local = chunk1
local = chunk2
</programlisting>
</sect2>
<sect2 id="conf-agent"><title>agent</title>
<para>
Remote agent declaration in the <link linkend="distributed">distributed index</link>.
Multi-value, optional, default is empty.
</para>
<para>
<code>agent</code> directive declares remote agents that are searched
every time when the enclosing distributed index is searched. The agents
are, essentially, pointers to networked indexes. Prior to version 2.1.1-beta,
the value format was:
<programlisting>
agent = address:index-list
</programlisting>
Starting with 2.1.1-beta, the value can additionally specify multiple
alternatives (agent mirrors) for either the address only, or the address
and index list:
<programlisting>
agent = address1 [ | address2 [...] ]:index-list
agent = address1:index-list [ | address2:index-list [...] ]
</programlisting>
In both cases the address specification must be one of the following:
<programlisting>
address = hostname:port # eg. server2:9312
address = /absolute/unix/socket/path # eg. /var/run/sphinx2.sock
</programlisting>
Where
<code>hostname</code> is the remote host name,
<code>port</code> is the remote TCP port number,
<code>index-list</code> is a comma-separated list of index names,
and square braces [] designate an optional clause.
</para>
<para>
In other words, you can point every single agent to one or more remote
indexes, residing on one or more networked servers. There are absolutely
no restrictions on the pointers. To point out a couple of important things,
the host can be localhost, and the remote index can be a distributed
index in turn, all that is legal. That enables a bunch of very different
usage modes:
<itemizedlist>
<listitem><para>sharding over multiple agent servers, and creating
an arbitrary cluster topology;</para></listitem>
<listitem><para>sharding over multiple agent servers, mirrored
for HA/LB (High Availability and Load Balancing) purposes
(starting with 2.1.1-beta);</para></listitem>
<listitem><para>sharding within localhost, to utilize multiple cores
(historical and not recommended in versions 1.x and above, use multiple
local indexes and dist_threads directive instead);</para></listitem>
</itemizedlist>
</para>
<para>
All agents are searched in parallel. An index list is passed verbatim
to the remote agent. How exactly that list is searched within the agent
(ie. sequentially or in parallel too) depends solely on the agent
configuration (ie. dist_threads directive). Master has no remote
control over that.
</para>
Starting with 2.2.9-release, the value can additionally enumerate per-agent
options such as:
<itemizedlist>
<listitem><para><link linkend="conf-ha-strategy">ha_strategy</link> - random,
roundrobin, nodeads, noerrors (replaces the index-level <link linkend="conf-ha-strategy">ha_strategy</link>
for particular agent)</para></listitem>
<listitem><para><link linkend="conf-agent-persistent">conn</link> - pconn,
persistent (same as <link linkend="conf-agent-persistent">agent_persistent</link>
agent declaration)</para></listitem>
<listitem><para><link linkend="conf-agent-blackhole">blackhole</link> - 0,1 (same as
<link linkend="conf-agent-blackhole">agent_blackhole</link> agent declaration)</para></listitem>
</itemizedlist>
<programlisting>
agent = address1:index-list[[ha_strategy=value] | [conn=value] | [blackhole=value]]
</programlisting>
<bridgehead>Example:</bridgehead>
<programlisting>
# config on box1
# sharding an index over 3 servers
agent = box2:9312:chunk2
agent = box3:9312:chunk3
# config on box2
# sharding an index over 3 servers
agent = box1:9312:chunk1
agent = box3:9312:chunk3
# config on box3
# sharding an index over 3 servers
agent = box1:9312:chunk1
agent = box2:9312:chunk2
# per agent options
agent = box1:9312:chunk1[ha_strategy=nodeads]
agent = box2:9312:chunk2[conn=pconn]
agent = test:9312:any[blackhole=1]
</programlisting>
<bridgehead>Agent mirrors</bridgehead>
<para>
New syntax added in 2.1.1-beta lets you define so-called <b>agent mirrors</b>
that can be used interchangeably when processing a search query. Master server
keeps track of mirror status (alive or dead) and response times, and does
automatic failover and load balancing based on that. For example, this line:</para>
<programlisting>
agent = box1:9312|box2:9312|box3:9312:chunk2
</programlisting>
<para>Declares that box1:9312, box2:9312, and box3:9312 all have an index
called chunk2, and can be used as interchangeable mirrors. If any one
of those servers goes down, the queries will be distributed between
the remaining two. When the server comes back up, master will detect that
and begin routing queries to all three boxes again.
</para>
<para>
Another way to define the mirrors is to explicitly specify the index list
for every mirror:</para>
<programlisting>
agent = box1:9312:box1chunk2|box2:9312:box2chunk2
</programlisting>
<para>This works essentially the same as the previous example, but different
index names will be used when querying different servers: box1chunk2 when querying
box1:9312, and box2chunk2 when querying box2:9312.
</para>
<para>
By default, all queries are routed to the best of the mirrors. The best one
is picked based on the recent statistics, as controlled by the
<link linkend="conf-ha-period-karma">ha_period_karma</link> config directive.
Master stores a number of metrics (total query count, error count, response
time, etc) recently observed for every agent. It groups those by time spans,
and karma is that time span length. The best agent mirror is then determined
dynamically based on the last 2 such time spans. The specific algorithm
that will be used to pick a mirror can be configured with the
<link linkend="conf-ha-strategy">ha_strategy</link> directive.
</para>
<para>
The karma period is in seconds and defaults to 60 seconds. Master stores
up to 15 karma spans with per-agent statistics for instrumentation purposes
(see <link linkend="sphinxql-show-agent-status">SHOW AGENT STATUS</link>
statement). However, only the last 2 spans out of those are ever used for
HA/LB logic.
</para>
<para>
When there are no queries, master sends a regular ping command every
<link linkend="conf-ha-ping-interval">ha_ping_interval</link> milliseconds
in order to have some statistics and at least check whether the remote
host is still alive. ha_ping_interval defaults to 1000 msec. Setting it to 0
disables pings and statistics will only be accumulated based on actual queries.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
# sharding index over 4 servers total
# in just 2 chunks but with 2 failover mirrors for each chunk
# box1, box2 carry chunk1 as local
# box3, box4 carry chunk2 as local
# config on box1, box2
agent = box3:9312|box4:9312:chunk2
# config on box3, box4
agent = box1:9312|box2:9312:chunk1
</programlisting>
</sect2>
<sect2 id="conf-agent-persistent"><title>agent_persistent</title>
<para>
Persistently connected remote agent declaration.
Multi-value, optional, default is empty.
Introduced in version 2.1.1-beta.
</para>
<para>
<option>agent_persistent</option> directive syntax matches that of
the <link linkend="conf-agent">agent</link> directive. The only difference
is that the master will <b>not</b> open a new connection to the agent for
every query and then close it. Rather, it will keep a connection open and
attempt to reuse it for subsequent queries. The maximal number of such persistent connections per one agent host
is limited by the <link linkend="conf-persistent-connections-limit">persistent_connections_limit</link> option of the searchd section.
</para>
<para>
Note that you <b>have</b> to set <link linkend="conf-persistent-connections-limit">persistent_connections_limit</link> to a value greater than 0 if you want to use persistent agent connections.
Otherwise, the number of persistent connections defaults to zero, and 'agent_persistent' acts exactly as plain 'agent'.
</para>
<para>
Persistent master-agent connections reduce TCP port pressure, and
save on connection handshakes. As of this writing, they are supported <b>only</b>
in workers=threads mode. In other modes, simple non-persistent connections
(i.e., one connection per operation) will be used, and a warning will show
up in the console.
</para>
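<para>
For instance, a minimal searchd-side sketch (the limit value here is arbitrary):
</para>
<programlisting>
searchd
{
    workers = threads
    persistent_connections_limit = 29
}
</programlisting>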
<bridgehead>Example:</bridgehead>
<programlisting>
agent_persistent = remotebox:9312:index2
</programlisting>
</sect2>
<sect2 id="conf-agent-blackhole"><title>agent_blackhole</title>
<para>
Remote blackhole agent declaration in the <link linkend="distributed">distributed index</link>.
Multi-value, optional, default is empty.
Introduced in version 0.9.9-rc1.
</para>
<para>
<option>agent_blackhole</option> lets you fire-and-forget queries
to remote agents. That is useful for debugging (or just testing)
production clusters: you can set up a separate debugging/testing searchd
instance, and forward the requests to this instance from your production
master (aggregator) instance without interfering with production work.
Master searchd will attempt to connect and query blackhole agent
normally, but it will neither wait nor process any responses.
Also, all network errors on blackhole agents will be ignored.
The value format is completely identical to regular
<link linkend="conf-agent">agent</link> directive.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
agent_blackhole = testbox:9312:testindex1,testindex2
</programlisting>
</sect2>
<sect2 id="conf-agent-connect-timeout"><title>agent_connect_timeout</title>
<para>
Remote agent connection timeout, in milliseconds.
Optional, default is 1000 (ie. 1 second).
</para>
<para>
When connecting to remote agents, <filename>searchd</filename>
will wait at most this much time for connect() call to complete
successfully. If the timeout is reached but connect() does not complete,
and <link linkend="api-func-setretries">retries</link> are enabled,
retry will be initiated.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
agent_connect_timeout = 300
</programlisting>
</sect2>
<sect2 id="conf-agent-query-timeout"><title>agent_query_timeout</title>
<para>
Remote agent query timeout, in milliseconds.
Optional, default is 3000 (ie. 3 seconds).
Added in version 2.1.1-beta.
</para>
<para>
After connection, <filename>searchd</filename> will wait at most this
much time for remote queries to complete. This timeout is fully separate
from the connection timeout; so the maximum possible delay caused by
a remote agent equals the sum of <code>agent_connect_timeout</code> and
<code>agent_query_timeout</code>. Queries will <b>not</b> be retried
if this timeout is reached; a warning will be produced instead.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
agent_query_timeout = 10000 # our query can be long, allow up to 10 sec
</programlisting>
</sect2>
<sect2 id="conf-preopen"><title>preopen</title>
<para>
Whether to pre-open all index files, or open them per each query.
Optional, default is 0 (do not preopen).
</para>
<para>
This option tells <filename>searchd</filename> that it should pre-open
all index files on startup (or rotation) and keep them open while it runs.
Currently, the default mode is <b>not</b> to pre-open the files (this may
change in the future). Preopened indexes take a few (currently 2) file
descriptors per index. However, they save on per-query <code>open()</code> calls;
and also they are invulnerable to subtle race conditions that may happen during
index rotation under high load. On the other hand, when serving many indexes
(100s to 1000s), it still might be desired to open them on a per-query basis
in order to save file descriptors.
</para>
<para>
This directive does not affect <filename>indexer</filename> in any way,
it only affects <filename>searchd</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
preopen = 1
</programlisting>
</sect2>
<sect2 id="conf-inplace-enable"><title>inplace_enable</title>
<para>
Whether to enable in-place index inversion.
Optional, default is 0 (use separate temporary files).
Introduced in version 0.9.9-rc1.
</para>
<para>
<option>inplace_enable</option> greatly reduces indexing disk footprint,
at a cost of slightly slower indexing (it uses around 2x less disk,
but yields around 90-95% of the original performance).
</para>
<para>
Indexing involves two major phases. The first phase collects,
processes, and partially sorts documents by keyword, and writes
the intermediate result to temporary files (.tmp*). The second
phase fully sorts the documents, and creates the final index
files. Thus, rebuilding a production index on the fly involves
around 3x peak disk footprint: the 1st copy for the intermediate
temporary files, the 2nd for the newly constructed index, and the 3rd
for the old index that will be serving production queries in the meantime.
(Intermediate data is comparable in size to the final index.)
That might be too much disk footprint for big data collections,
and <option>inplace_enable</option> lets you reduce it.
When enabled, it reuses the temporary files, outputs the
final data back to them, and renames them on completion.
However, this might require additional temporary data chunk
relocation, which is where the performance impact comes from.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
inplace_enable = 1
</programlisting>
</sect2>
<sect2 id="conf-inplace-hit-gap"><title>inplace_hit_gap</title>
<para>
<link linkend="conf-inplace-enable">In-place inversion</link> fine-tuning option.
Controls preallocated hitlist gap size.
Optional, default is 0.
Introduced in version 0.9.9-rc1.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
inplace_hit_gap = 1M
</programlisting>
</sect2>
<sect2 id="conf-inplace-docinfo-gap"><title>inplace_docinfo_gap</title>
<para>
<link linkend="conf-inplace-enable">In-place inversion</link> fine-tuning option.
Controls preallocated docinfo gap size.
Optional, default is 0.
Introduced in version 0.9.9-rc1.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
inplace_docinfo_gap = 1M
</programlisting>
</sect2>
<sect2 id="conf-inplace-reloc-factor"><title>inplace_reloc_factor</title>
<para>
<link linkend="conf-inplace-reloc-factor">In-place inversion</link> fine-tuning option.
Controls relocation buffer size within indexing memory arena.
Optional, default is 0.1.
Introduced in version 0.9.9-rc1.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
inplace_reloc_factor = 0.1
</programlisting>
</sect2>
<sect2 id="conf-inplace-write-factor"><title>inplace_write_factor</title>
<para>
<link linkend="conf-inplace-write-factor">In-place inversion</link> fine-tuning option.
Controls in-place write buffer size within indexing memory arena.
Optional, default is 0.1.
Introduced in version 0.9.9-rc1.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
inplace_write_factor = 0.1
</programlisting>
</sect2>
<sect2 id="conf-index-exact-words"><title>index_exact_words</title>
<para>
Whether to index the original keywords along with the stemmed/remapped versions.
Optional, default is 0 (do not index).
Introduced in version 0.9.9-rc1.
</para>
<para>
When enabled, <option>index_exact_words</option> forces <filename>indexer</filename>
to put the raw keywords in the index along with the stemmed versions. That, in turn,
enables <link linkend="extended-syntax">exact form operator</link> in the query language to work.
This impacts the index size and the indexing time. However, searching performance
is not impacted at all.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
index_exact_words = 1
</programlisting>
</sect2>
<sect2 id="conf-overshort-step"><title>overshort_step</title>
<para>
Position increment on overshort (less than <link linkend="conf-min-word-len">min_word_len</link>) keywords.
Optional, allowed values are 0 and 1, default is 1.
Introduced in version 0.9.9-rc1.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
overshort_step = 1
</programlisting>
</sect2>
<sect2 id="conf-stopword-step"><title>stopword_step</title>
<para>
Position increment on <link linkend="conf-stopwords">stopwords</link>.
Optional, allowed values are 0 and 1, default is 1.
Introduced in version 0.9.9-rc1.
</para>
<para>
This directive does not affect <filename>searchd</filename> in any way,
it only affects <filename>indexer</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
stopword_step = 1
</programlisting>
</sect2>
<sect2 id="conf-hitless-words"><title>hitless_words</title>
<para>
Hitless words list.
Optional, allowed values are 'all', or a list file name.
Introduced in version 1.10-beta.
</para>
<para>
By default, Sphinx full-text index stores not only a list of matching
documents for every given keyword, but also a list of its in-document positions
(aka hitlist). Hitlists enable phrase, proximity, strict order and other
advanced types of searching, as well as phrase proximity ranking. However,
hitlists for specific frequent keywords (that can not be stopped for
some reason despite being frequent) can get huge and thus slow to process
while querying. Also, in some cases we might only care about boolean
keyword matching, and never need position-based searching operators
(such as phrase matching) nor phrase ranking.
</para>
<para>
<option>hitless_words</option> lets you create indexes that either
do not have positional information (hitlists) at all, or skip it for
specific keywords.
</para>
<para>
Hitless index will generally use less space than the respective
regular index (about 1.5x can be expected). Both indexing and searching
should be faster, at a cost of missing positional query and ranking support.
When searching, positional queries (eg. phrase queries) will be automatically
converted to respective non-positional (document-level) or combined queries.
For instance, if keywords "hello" and "world" are hitless, "hello world"
phrase query will be converted to a (hello & world) bag-of-words query,
matching all documents that mention both of the keywords, but not necessarily
as the exact phrase. And if, in addition, keywords "simon" and "says" are not
hitless, "simon says hello world" will be converted to ("simon says" &
hello & world) query, matching all documents that contain "hello" and
"world" anywhere in the document, and also "simon says" as an exact phrase.
</para>
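<para>
Alternatively, a file name can be specified instead of 'all'. Presumably,
in line with the other keyword list files, that would be a plain text file
with whitespace-separated keywords (the path below is hypothetical):
</para>
<programlisting>
hitless_words = /usr/local/sphinx/etc/hitless.txt
</programlisting>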
<bridgehead>Example:</bridgehead>
<programlisting>
hitless_words = all
</programlisting>
</sect2>
<sect2 id="conf-expand-keywords"><title>expand_keywords</title>
<para>
Expand keywords with exact forms and/or stars when possible.
Optional, default is 0 (do not expand keywords).
Introduced in version 1.10-beta.
</para>
<para>
Queries against indexes with <option>expand_keywords</option> feature
enabled are internally expanded as follows. If the index was built with
prefix or infix indexing enabled, every keyword gets internally replaced
with a disjunction of the keyword itself and a respective prefix or infix
(keyword with stars). If the index was built with both stemming and
<link linkend="conf-index-exact-words">index_exact_words</link> enabled,
exact form is also added. Here's an example that shows how internal
expansion works when all of the above (infixes, stemming, and exact
words) are combined:
<programlisting>
running -> ( running | *running* | =running )
</programlisting>
</para>
<para>
Expanded queries naturally take longer to complete, but can possibly
improve the search quality, as the documents with exact form matches
should generally be ranked higher than documents with stemmed or infix matches.
</para>
<para>
Note that the existing query syntax does not let you emulate this
kind of expansion, because internal expansion works on keyword level and
expands keywords within phrase or quorum operators too (which is not
possible through the query syntax).
</para>
<para>
This directive does not affect <filename>indexer</filename> in any way,
it only affects <filename>searchd</filename>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
expand_keywords = 1
</programlisting>
</sect2>
<sect2 id="conf-blend-chars"><title>blend_chars</title>
<para>
Blended characters list.
Optional, default is empty.
Introduced in version 1.10-beta.
</para>
<para>
Blended characters are indexed both as separators and valid characters.
For instance, assume that & is configured as blended and AT&T
occurs in an indexed document. Three different keywords will get indexed,
namely "at&t", treating blended characters as valid, plus "at" and "t",
treating them as separators.
</para>
<para>
Positions for tokens obtained by replacing blended characters with whitespace
are assigned as usual, so regular keywords will be indexed just as if there was
no <option>blend_chars</option> specified at all. An additional token that
mixes blended and non-blended characters will be put at the starting position.
For instance, if "AT&T company" occurs at the very
beginning of the text field, "at" will be given position 1, "t" position 2,
"company" position 3, and "AT&T" will also be given position 1 ("blending"
with the opening regular keyword). Thus, querying for either AT&T or just
AT will match that document, and querying for "AT T" as a phrase will also match it.
Last but not least, phrase query for "AT&T company" will <emphasis>also</emphasis>
match it, despite the position gap ("AT&T" is at position 1, and "company" at position 3).
</para>
<para>
Blended characters can overlap with special characters used in query
syntax (think of T-Mobile or @twitter). Where possible, the query parser will
automatically handle a blended character as blended. For instance, "hello @twitter"
within quotes (a phrase operator) would handle @-sign as blended, because
@-syntax for field operator is not allowed within phrases. Otherwise,
the character would be handled as an operator. So you might want to
escape the keywords.
</para>
<para>
Starting with version 2.0.1-beta, blended characters can be remapped,
so that multiple different blended characters could be normalized into
just one base form. This is useful when indexing multiple alternative
Unicode codepoints with equivalent glyphs.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
blend_chars = +, &, U+23
blend_chars = +, &->+ # 2.0.1 and above
</programlisting>
</sect2>
<sect2 id="conf-blend-mode"><title>blend_mode</title>
<para>
Blended tokens indexing mode.
Optional, default is <option>trim_none</option>.
Introduced in version 2.0.1-beta.
</para>
<para>
By default, tokens that mix blended and non-blended characters
get indexed in their entirety. For instance, when both the at-sign and
the exclamation mark are in <option>blend_chars</option>, "@dude!" will
result in two tokens being indexed: "@dude!" (with all the blended characters)
and "dude" (without any). Therefore, an "@dude" query will <emphasis>not</emphasis>
match it.
</para>
<para>
<option>blend_mode</option> directive adds flexibility to this indexing
behavior. It takes a comma-separated list of options.
<programlisting>
blend_mode = option [, option [, ...]]
option = trim_none | trim_head | trim_tail | trim_both | skip_pure
</programlisting>
</para>
<para>
Options specify token indexing variants. If multiple options are
specified, multiple variants of the same token will be indexed.
Regular keywords (resulting from that token by replacing blended
characters with whitespace) are always indexed.
<variablelist>
<varlistentry>
<term>trim_none</term>
<listitem><para>Index the entire token.</para></listitem>
</varlistentry>
<varlistentry>
<term>trim_head</term>
<listitem><para>Trim heading blended characters, and index the resulting token.</para></listitem>
</varlistentry>
<varlistentry>
<term>trim_tail</term>
<listitem><para>Trim trailing blended characters, and index the resulting token.</para></listitem>
</varlistentry>
<varlistentry>
<term>trim_both</term>
<listitem><para>Trim both heading and trailing blended characters, and index the resulting token.</para></listitem>
</varlistentry>
<varlistentry>
<term>skip_pure</term>
<listitem><para>Do not index the token if it's purely blended, that is, consists of blended characters only.</para></listitem>
</varlistentry>
</variablelist>
Returning to the "@dude!" example above, setting <option>blend_mode = trim_head,
trim_tail</option> will result in two tokens being indexed, "@dude" and "dude!".
In this particular example, <option>trim_both</option> would have no effect,
because trimming both blended characters results in "dude" which is already
indexed as a regular keyword. Indexing "@U.S.A." with <option>trim_both</option>
(and assuming that dot is blended too) would result in "U.S.A" being indexed.
Last but not least, <option>skip_pure</option> enables you to fully ignore
sequences of blended characters only. For example, "one @@@ two" would be
indexed exactly as "one two", and match that as a phrase. That is not the case
by default because a fully blended token gets indexed and offsets the second
keyword position.
</para>
<para>
Default behavior is to index the entire token, equivalent to
<option>blend_mode = trim_none</option>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
blend_mode = trim_tail, skip_pure
</programlisting>
</sect2>
<sect2 id="conf-rt-mem-limit"><title>rt_mem_limit</title>
<para>
RAM chunk size limit.
Optional, default is 128M.
Introduced in version 1.10-beta.
</para>
<para>
RT index keeps some data in memory (so-called RAM chunk) and
also maintains a number of on-disk indexes (so-called disk chunks).
This directive lets you control the RAM chunk size. Once there's
too much data to keep in RAM, RT index will flush it to disk,
activate a newly created disk chunk, and reset the RAM chunk.
</para>
<para>
The limit is pretty strict; RT index should never allocate more
memory than it's limited to. The memory is not preallocated either,
hence, specifying 512 MB limit and only inserting 3 MB of data
should result in allocating 3 MB, not 512 MB.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_mem_limit = 512M
</programlisting>
</sect2>
<sect2 id="conf-rt-field"><title>rt_field</title>
<para>
Full-text field declaration.
Multi-value, mandatory.
Introduced in version 1.10-beta.
</para>
<para>
Full-text fields to be indexed are declared using <option>rt_field</option>
directive. The names must be unique. The order is preserved, so field values
in INSERT statements without an explicit column list will have to be
in the same order as configured.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_field = author
rt_field = title
rt_field = content
</programlisting>
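<para>
For instance, given the declaration above, an RT index named rt1, and
a hypothetical rt_attr_uint named gid (both names are made up for this sketch),
an INSERT with an explicit column list, and an equivalent one relying on the
configured order, might look as follows:
</para>
<programlisting>
INSERT INTO rt1 (id, author, title, content, gid)
VALUES (1, 'John Doe', 'On Prefixes', 'Some text goes here.', 123);

INSERT INTO rt1
VALUES (2, 'Jane Doe', 'On Infixes', 'More text goes here.', 123);
</programlisting>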
</sect2>
<sect2 id="conf-rt-attr-uint"><title>rt_attr_uint</title>
<para>
Unsigned integer attribute declaration.
Multi-value (an arbitrary number of attributes is allowed), optional.
Declares an unsigned 32-bit attribute.
Introduced in version 1.10-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_uint = gid
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-bool"><title>rt_attr_bool</title>
<para>
Boolean attribute declaration.
Multi-value (there might be multiple attributes declared), optional.
Declares a 1-bit unsigned integer attribute.
Introduced in version 2.1.2-release.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_bool = available
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-bigint"><title>rt_attr_bigint</title>
<para>
BIGINT attribute declaration.
Multi-value (an arbitrary number of attributes is allowed), optional.
Declares a signed 64-bit attribute.
Introduced in version 1.10-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_bigint = guid
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-float"><title>rt_attr_float</title>
<para>
Floating point attribute declaration.
Multi-value (an arbitrary number of attributes is allowed), optional.
Declares a single precision, 32-bit IEEE 754 format float attribute.
Introduced in version 1.10-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_float = gpa
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-multi"><title>rt_attr_multi</title>
<para>
<link linkend="mva">Multi-valued attribute</link> (MVA) declaration.
Declares the UNSIGNED INTEGER (unsigned 32-bit) MVA attribute.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to RT indexes only.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_multi = my_tags
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-multi-64"><title>rt_attr_multi_64</title>
<para>
<link linkend="mva">Multi-valued attribute</link> (MVA) declaration.
Declares the BIGINT (signed 64-bit) MVA attribute.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to RT indexes only.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_multi_64 = my_wide_tags
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-timestamp"><title>rt_attr_timestamp</title>
<para>
Timestamp attribute declaration.
Multi-value (an arbitrary number of attributes is allowed), optional.
Introduced in version 1.10-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_timestamp = date_added
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-string"><title>rt_attr_string</title>
<para>
String attribute declaration.
Multi-value (an arbitrary number of attributes is allowed), optional.
Introduced in version 1.10-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_string = author
</programlisting>
</sect2>
<sect2 id="conf-rt-attr-json"><title>rt_attr_json</title>
<para>
JSON attribute declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Introduced in version 2.1.1-beta.
</para>
<para>
Refer to <xref linkend="conf-sql-attr-json"/> for more details on the JSON attributes.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_attr_json = properties
</programlisting>
</sect2>
<sect2 id="conf-ha-strategy"><title>ha_strategy</title>
<para>
Agent mirror selection strategy, for load balancing.
Optional, default is random.
Added in 2.1.1-beta.
</para>
<para>
The strategy used for mirror selection, or in other words, choosing
a specific <link linkend="conf-agent">agent mirror</link> in a distributed
index. Essentially, this directive controls how exactly master does the
load balancing between the configured mirror agent nodes.
As of 2.1.1-beta, the following strategies are implemented:
</para>
<bridgehead>Simple random balancing</bridgehead>
<programlisting>ha_strategy = random</programlisting>
<para>
The default balancing mode. Simple linear random distribution among the mirrors.
That is, equal selection probabilities are assigned to every mirror. Kind of similar
to round-robin (RR), but unlike RR, it does not impose a strict selection order.
</para>
<bridgehead>Adaptive randomized balancing</bridgehead>
<para>
The default simple random strategy does not take mirror status, error rate,
and, most importantly, actual response latencies into account. So to accommodate
for heterogeneous clusters and/or temporary spikes in agent node load, we have
a group of balancing strategies that dynamically adjust the probabilities
based on the actual query latencies observed by the master.
</para>
<para>
The adaptive strategies based on <b>latency-weighted probabilities</b>
basically work as follows:
<itemizedlist>
<listitem><para>latency stats are accumulated, in blocks of ha_period_karma seconds;</para></listitem>
<listitem><para>once per karma period, latency-weighted probabilities get recomputed;</para></listitem>
<listitem><para>once per request (including ping requests), "dead or alive" flag is adjusted.</para></listitem>
</itemizedlist>
Currently (as of 2.1.1-beta), we begin with equal probabilities (or percentages,
for brevity), and on every step, scale them by the inverse of the latencies observed
during the last "karma" period, and then renormalize them. For example, if during
the first 60 seconds after the master startup 4 mirrors had latencies of
10, 5, 30, and 3 msec/query respectively, the first adjustment step
would go as follows:
<itemizedlist>
<listitem><para>initial percentages: 0.25, 0.25, 0.25, 0.25;</para></listitem>
<listitem><para>observed latencies: 10 ms, 5 ms, 30 ms, 3 ms;</para></listitem>
<listitem><para>inverse latencies: 0.1, 0.2, 0.0333, 0.333;</para></listitem>
<listitem><para>scaled percentages: 0.025, 0.05, 0.008333, 0.0833;</para></listitem>
<listitem><para>renormalized percentages: 0.15, 0.30, 0.05, 0.50.</para></listitem>
</itemizedlist>
Meaning that the 1st mirror would have a 15% chance of being chosen during
the next karma period, the 2nd one a 30% chance, the 3rd one (slowest at 30 ms)
only a 5% chance, and the 4th and the fastest one (at 3 ms) a 50% chance.
Then, after that period, the second adjustment step would update those chances
again, and so on.
</para>
<para>
The rationale here is, once the <b>observed latencies</b> stabilize,
the <b>latency weighted probabilities</b> stabilize as well. So all these
adjustment iterations are supposed to converge at a point where the average
latencies are (roughly) equal over all mirrors.
</para>
<programlisting>ha_strategy = nodeads</programlisting>
<para>
Latency-weighted probabilities, but dead mirrors are excluded from
the selection. A "dead" mirror is defined as a mirror that resulted
in multiple hard errors (eg. network failure, or no answer, etc) in a row.
</para>
<programlisting>ha_strategy = noerrors</programlisting>
<para>
Latency-weighted probabilities, but mirrors with a worse error/success ratio
are excluded from the selection.
</para>
<bridgehead>Round-robin balancing</bridgehead>
<programlisting>ha_strategy = roundrobin</programlisting>
<para>Simple round-robin selection, that is, selecting the 1st mirror
in the list, then the 2nd one, then the 3rd one, etc, and then repeating
the process once the last mirror in the list is reached. Unlike with
the randomized strategies, RR imposes a strict querying order (1, 2, 3, ..,
N-1, N, 1, 2, 3, ... and so on) and <emphasis>guarantees</emphasis> that
no two subsequent queries will be sent to the same mirror.
</para>
</sect2>
<sect2 id="conf-bigram-freq-words"><title>bigram_freq_words</title>
<para>
A list of keywords considered "frequent" when indexing bigrams.
Optional, default is empty.
Added in 2.1.1-beta.
</para>
<para>
Bigram indexing is a feature to accelerate phrase searches.
When indexing, it stores a document list for either all or some
of the adjacent word pairs into the index. Such a list can then be used
at searching time to significantly accelerate phrase or sub-phrase
matching.
</para>
<para>
Some of the bigram indexing modes (see <xref linkend="conf-bigram-index"/>)
require you to define a list of frequent keywords. These are <b>not</b> to be
confused with stopwords! Stopwords are completely eliminated during both indexing
and searching. Frequent keywords are only used by bigrams to determine whether
to index the current word pair or not.
</para>
<para>
<code>bigram_freq_words</code> lets you define a list of such keywords.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
bigram_freq_words = the, a, you, i
</programlisting>
</sect2>
<sect2 id="conf-bigram-index"><title>bigram_index</title>
<para>
Bigram indexing mode.
Optional, default is none.
Added in 2.1.1-beta.
</para>
<para>
Bigram indexing is a feature to accelerate phrase searches.
When indexing, it stores a document list for either all or some
of the adjacent word pairs into the index. Such a list can then be used
at searching time to significantly accelerate phrase or sub-phrase
matching.
</para>
<para>
<code>bigram_index</code> controls the selection of specific word pairs.
The known modes are:
<itemizedlist>
<listitem><para><code>all</code>, index every single word pair.
(NB: probably totally not worth it even on a moderately sized index,
but added anyway for the sake of completeness.)
</para></listitem>
<listitem><para><code>first_freq</code>, only index word pairs
where the <emphasis>first</emphasis> word is in a list of frequent words
(see <xref linkend="conf-bigram-freq-words"/>). For example, with
<code>bigram_freq_words = the, in, i, a</code>, indexing
"alone in the dark" text will result in "in the" and "the dark" pairs
being stored as bigrams, because they begin with a frequent keyword
(either "in" or "the" respectively), but "alone in" would <b>not</b>
be indexed, because "in" is a <emphasis>second</emphasis> word in that pair.
</para></listitem>
<listitem><para><code>both_freq</code>, only index word pairs where
both words are frequent. Continuing with the same example, in this mode
indexing "alone in the dark" would only store "in the" (the very worst
of them all from searching perspective) as a bigram, but none of the
other word pairs.
</para></listitem>
</itemizedlist>
For most use cases, <code>both_freq</code> would be the best mode, but
your mileage may vary.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
bigram_index = both_freq
</programlisting>
</sect2>
<sect2 id="conf-index-field-lengths"><title>index_field_lengths</title>
<para>
Enables computing and storing of field lengths (both per-document and
average per-index values) into the index.
Optional, default is 0 (do not compute and store).
Added in 2.1.1-beta.
</para>
<para>
When <code>index_field_lengths</code> is set to 1, <filename>indexer</filename>
will 1) create a respective length attribute for every full-text field,
sharing the same name but with an <emphasis>_len</emphasis> suffix; 2) compute a field length (counted in keywords) for
every document and store it into the respective attribute; 3) compute the per-index
averages. The length attributes will have a special TOKENCOUNT type, but their
values are in fact regular 32-bit integers, and are generally
accessible.
</para>
<para>
BM25A() and BM25F() functions in the expression ranker are based
on these lengths and require <code>index_field_lengths</code> to be enabled.
Historically, Sphinx used a simplified, stripped-down variant of BM25 that,
unlike the complete function, did <b>not</b> account for document length.
(We later realized that it should have been called BM15 from the start.)
Starting with 2.1.1-beta, we added support for both a complete variant of BM25,
and its extension towards multiple fields, called BM25F. They require
per-document length and per-field lengths, respectively. Hence the additional
directive.
</para>
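<para>
For instance, a sketch of a SphinxQL query using the expression ranker with
BM25F (the index name, field weight, and constants here are arbitrary):
</para>
<programlisting>
SELECT id, WEIGHT() FROM myindex WHERE MATCH('test document')
OPTION ranker=expr('10000*bm25f(1.2, 0.75, {title=3})');
</programlisting>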
<bridgehead>Example:</bridgehead>
<programlisting>
index_field_lengths = 1
</programlisting>
</sect2>
<sect2 id="conf-regexp-filter"><title>regexp_filter</title>
<para>
Regular expressions (regexps) to filter the fields and queries with.
Optional, multi-value, default is an empty list of regexps.
Added in 2.1.1-beta.
</para>
<para>
In certain applications (like product search) there can be
many different ways to call a model, or a product, or a property,
and so on. For instance, 'iphone 3gs' and 'iphone 3 gs'
(or even 'iphone3 gs') are very likely to mean the same
product. Or, for a more tricky example, '13-inch', '13 inch',
'13"', and '13in' in a laptop screen size descriptions do mean
the same.
</para>
<para>
Regexps provide you with a mechanism to specify a number of rules
specific to your application to handle such cases. In the first
'iphone 3gs' example, you could possibly get away with a wordforms
files tailored to handle a handful of iPhone models. However even
in a comparatively simple second '13-inch' example there is just
way too many individual forms and you are better off specifying
rules that would normalize both '13-inch' and '13in' to something
identical.
</para>
<para>
Regular expressions listed in <code>regexp_filter</code> are
applied in the order they are listed. That happens at the earliest
stage possible, before any other processing, even before tokenization.
That is, regexps are applied to the raw source fields when indexing,
and to the raw search query text when searching.
</para>
<para>
We use the <ulink url="http://code.google.com/p/re2/">RE2 engine</ulink>
to implement regexps. So when building from source, the library must be
installed in the system, and Sphinx must be configured and built with the
<code>--with-re2</code> switch. Binary packages should come with RE2
built in.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
# index '13-inch' as '13inch'
regexp_filter = \b(\d+)\" => \1inch
# index 'blue' or 'red' as 'color'
regexp_filter = (blue|red) => color
</programlisting>
</sect2>
<sect2 id="conf-stopwords-unstemmed"><title>stopwords_unstemmed</title>
<para>
Whether to apply stopwords before or after stemming.
Optional, default is 0 (apply stopword filter after stemming).
Added in 2.1.1-beta.
</para>
<para>
By default, stopwords are stemmed themselves, and applied to
tokens <emphasis>after</emphasis> stemming (or any other morphology
processing). In other words, by default, a token is stopped when
stem(token) == stem(stopword). That can lead to unexpected results
when a token gets (erroneously) stemmed to a stopped root. For example,
'Andes' gets stemmed to 'and' by our current stemmer implementation,
so when 'and' is a stopword, 'Andes' is also stopped.
</para>
<para>
stopwords_unstemmed directive fixes that issue. When it's enabled,
stopwords are applied before stemming (and therefore to the original
word forms), and the tokens are stopped when token == stopword.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
stopwords_unstemmed = 1
</programlisting>
</sect2>
<sect2 id="conf-global-idf"><title>global_idf</title>
<para>
The path to a file with global (cluster-wide) keyword IDFs.
Optional, default is empty (use local IDFs).
Added in 2.1.1-beta.
</para>
<para>
On a multi-index cluster, per-keyword frequencies are quite
likely to differ across different indexes. That means that when
the ranking function uses TF-IDF based values, such as the BM25 family
of factors, the results might be ranked slightly differently
depending on which cluster node they reside on.
</para>
<para>
The easiest way to fix that issue is to create and utilize
a global frequency dictionary, or a global IDF file for short.
This directive lets you specify the location of that file.
It is suggested (but not required) to use an .idf extension.
When the IDF file is specified for a given index <emphasis>and</emphasis>
OPTION global_idf is set to 1, the engine will use the keyword
frequencies and collection documents count from the global_idf file,
rather than just the local index. That way, IDFs and the values
that depend on them will stay consistent across the cluster.
</para>
<para>
IDF files can be shared across multiple indexes. Only a single
copy of an IDF file will be loaded by <filename>searchd</filename>,
even when many indexes refer to that file. Should the contents of
an IDF file change, the new contents can be loaded with a SIGHUP.
</para>
<para>
You can build an .idf file using <filename>indextool</filename>
utility: first dump the dictionaries using the <code>--dumpdict</code> switch,
then convert those to .idf format using <code>--buildidf</code>,
then merge all the .idf files across the cluster using <code>--mergeidf</code>.
Refer to <xref linkend="ref-indextool"/> for more information.
</para>
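<para>
As a sketch, the whole pipeline might look as follows (the exact
<filename>indextool</filename> switch syntax and the file names here are
illustrative; refer to <xref linkend="ref-indextool"/> for the authoritative
reference):
</para>
<programlisting>
# on every node, dump the local index dictionary together with stats
indextool --config sphinx.conf --dumpdict myindex --stats > myindex.dict

# on every node, convert the dictionary dump into an .idf file
indextool --buildidf myindex.dict --out node1.idf

# on one node, merge all the per-node .idf files into the global one
indextool --mergeidf node1.idf node2.idf node3.idf --out global.idf
</programlisting>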
<bridgehead>Example:</bridgehead>
<programlisting>
global_idf = /usr/local/sphinx/var/global.idf
</programlisting>
</sect2>
<sect2 id="conf-rlp-context"><title>rlp_context</title>
<para>
RLP context configuration file. Mandatory if RLP is used.
Added in 2.2.1-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rlp_context = /home/myuser/RLP/rlp-context.xml
</programlisting>
</sect2>
<sect2 id="conf-ondisk-attrs"><title>ondisk_attrs</title>
<para>
Allows for fine-grained control over how attributes are loaded into memory
when using indexes with external storage. Since version 2.2.1-beta it is
possible to keep attributes on disk: the daemon maps them to memory, and
the OS loads small chunks of data on demand. This allows using
docinfo = extern instead of docinfo = inline, while still leaving plenty
of free memory for cases when you have large collections of pooled
attributes (string/JSON/MVA) or when you're running many indexes per daemon
that would otherwise all consume memory. It is not possible to update
attributes left on disk when this option is enabled, and the constraint
of 4 GB of entries per pool is still in effect.
</para>
<para>
Note that this option also affects RT indexes. When it is enabled, all attribute updates
will be disabled, and all disk chunks of RT indexes will behave as described above. However,
inserting and deleting documents from RT indexes is still possible with ondisk_attrs enabled.
</para>
<bridgehead>Possible values:</bridgehead>
<itemizedlist>
<listitem>
0 - disabled (the default): all attributes are loaded in memory
(the normal behaviour of docinfo = extern)
</listitem>
<listitem>
1 - all attributes stay on disk. The daemon loads no files (spa, spm, sps).
This is the most memory conserving mode; however, it is also the slowest,
as the whole document-id list and block index are not loaded.
</listitem>
<listitem>
pool - only pooled attributes stay on disk. Pooled attributes are string,
MVA, and JSON attributes (sps, spm files). Scalar attributes stored in
docinfo (spa file) load as usual.
</listitem>
</itemizedlist>
<para>
This option does not affect indexing in any way, it only requires daemon
restart.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
ondisk_attrs = pool # keep pooled attributes on disk
</programlisting>
</sect2>
</sect1>
<sect1 id="confgroup-indexer"><title><filename>indexer</filename> program configuration options</title>
<sect2 id="conf-mem-limit"><title>mem_limit</title>
<para>
Indexing RAM usage limit.
Optional, default is 128M.
</para>
<para>
Enforced memory usage limit that the <filename>indexer</filename>
will not go above. Can be specified in bytes, or kilobytes
(using K postfix), or megabytes (using M postfix); see the example.
This limit will be automatically raised if it is set to an extremely low
value that would cause I/O buffers to be less than 8 KB; the exact lower
bound depends on the indexed data size. If the buffers are
less than 256 KB, a warning will be produced.
</para>
<para>
Maximum possible limit is 2047M. Too low values can hurt
indexing speed, but 256M to 1024M should be enough for most
if not all datasets. Setting this value too high can cause
SQL server timeouts. During the document collection phase,
there will be periods when the memory buffer is partially
sorted and no communication with the database is performed;
the database server can then time out. You can resolve that
either by raising the timeouts on the SQL server side or by lowering
<code>mem_limit</code>.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mem_limit = 256M
# mem_limit = 262144K # same, but in KB
# mem_limit = 268435456 # same, but in bytes
</programlisting>
</sect2>
<sect2 id="conf-max-iops"><title>max_iops</title>
<para>
Maximum I/O operations per second, for I/O throttling.
Optional, default is 0 (unlimited).
</para>
<para>
This is an I/O throttling option.
It limits the maximum number of I/O operations (reads or writes) per second.
A value of 0 means that no limit is imposed.
</para>
<para>
<filename>indexer</filename> can cause bursts of intensive disk I/O during
indexing, and it might be desirable to limit its disk activity (and leave
some I/O capacity for other programs running on the same machine, such as
<filename>searchd</filename>). I/O throttling helps with that. It works by
enforcing a minimum guaranteed delay between subsequent disk I/O operations
performed by <filename>indexer</filename>. Modern SATA HDDs are able to
perform up to 70-100+ I/O operations per second (mostly limited by disk
head seek time). Limiting indexing I/O to a fraction of that can help reduce
search performance degradation caused by indexing.
</para>
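<para>
For instance, the <code>max_iops = 40</code> setting from the example below
translates into a minimum guaranteed delay of 1/40 sec (i.e. 25 msec)
between subsequent disk operations.
</para>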
<bridgehead>Example:</bridgehead>
<programlisting>
max_iops = 40
</programlisting>
</sect2>
<sect2 id="conf-max-iosize"><title>max_iosize</title>
<para>
Maximum allowed I/O operation size, in bytes, for I/O throttling.
Optional, default is 0 (unlimited).
</para>
<para>
This is an I/O throttling option. It limits the maximum file I/O operation
(read or write) size for all operations performed by <filename>indexer</filename>.
A value of 0 means that no limit is imposed.
Reads or writes that are bigger than the limit
will be split into several smaller operations, and counted as several operations
by the <link linkend="conf-max-iops">max_iops</link> setting. At the time of this
writing, all I/O calls should be under 256 KB (the default internal buffer size)
anyway, so <code>max_iosize</code> values higher than 256 KB should not affect anything.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_iosize = 1048576
</programlisting>
</sect2>
<sect2 id="conf-max-xmlpipe2-field"><title>max_xmlpipe2_field</title>
<para>
Maximum allowed field size for XMLpipe2 source type, bytes.
Optional, default is 2 MB.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_xmlpipe2_field = 8M
</programlisting>
</sect2>
<sect2 id="conf-write-buffer"><title>write_buffer</title>
<para>
Write buffer size, bytes.
Optional, default is 1 MB.
</para>
<para>
Write buffers are used to write both temporary and final index
files when indexing. Larger buffers reduce the number of required
disk writes. Memory for the buffers is allocated in addition to
<link linkend="conf-mem-limit">mem_limit</link>. Note that several
(currently up to 4) buffers for different files will be allocated,
proportionally increasing the RAM usage.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
write_buffer = 4M
</programlisting>
</sect2>
<sect2 id="conf-max-file-field-buffer"><title>max_file_field_buffer</title>
<para>
Maximum file field adaptive buffer size, bytes.
Optional, default is 8 MB, minimum is 1 MB.
</para>
<para>
File field buffer is used to load files referred to from
<link linkend="conf-sql-file-field">sql_file_field</link> columns.
This buffer is adaptive, starting at 1 MB at first allocation,
and growing in 2x steps until either file contents can be loaded,
or maximum buffer size, specified by <option>max_file_field_buffer</option>
directive, is reached.
</para>
<para>
Thus, if no file fields are specified, no buffer
is allocated at all. If all files loaded during indexing are under
(for example) 2 MB in size, but <option>max_file_field_buffer</option>
value is 128 MB, peak buffer usage would still be only 2 MB. However,
files over 128 MB would be entirely skipped.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_file_field_buffer = 128M
</programlisting>
</sect2>
<sect2 id="conf-on-file-field-error"><title>on_file_field_error</title>
<para>
How to handle IO errors in file fields.
Optional, default is <code>ignore_field</code>.
Introduced in version 2.0.2-beta.
</para>
<para>
When there is a problem indexing a file referenced by a file field
(<xref linkend="conf-sql-file-field"/>), <filename>indexer</filename> can
either index the document, assuming empty content in this particular field,
or skip the document, or fail indexing entirely. <option>on_file_field_error</option>
directive controls that behavior. The values it takes are:
<itemizedlist>
<listitem><para><code>ignore_field</code>, index the current document without field;</para></listitem>
<listitem><para><code>skip_document</code>, skip the current document but continue indexing;</para></listitem>
<listitem><para><code>fail_index</code>, fail indexing with an error message.</para></listitem>
</itemizedlist>
</para>
<para>
The problems that can arise are: open error, size error (file too big),
and data read error. Warning messages will be given for any problem at all times,
regardless of the phase and the <code>on_file_field_error</code> setting.
</para>
<para>
Note that with <option>on_file_field_error = skip_document</option>
documents will only be ignored if problems are detected during
an early check phase, and <b>not</b> during the actual file parsing
phase. <filename>indexer</filename> will open every referenced file
and check its size before doing any work, and then open it again
when doing actual parsing work. So in case a file goes away
between these two open attempts, the document will still be
indexed.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
on_file_field_error = skip_document
</programlisting>
</sect2>
<sect2 id="conf-lemmatizer-cache"><title>lemmatizer_cache</title>
<para>
Lemmatizer cache size.
Optional, default is 256K.
Added in version 2.1.1-beta.
</para>
<para>
Our lemmatizer implementation (see <xref linkend="conf-morphology"/>
for a discussion of what lemmatizers are) uses a compressed dictionary
format that enables a space/speed tradeoff. It can either perform
lemmatization off the compressed data, using more CPU but less RAM,
or it can decompress and precache the dictionary either partially
or fully, thus using less CPU but more RAM. The lemmatizer_cache
directive lets you control exactly how much RAM can be spent on that
uncompressed dictionary cache.
</para>
<para>
Currently, the only available dictionary is ru.pak, the Russian one.
The compressed dictionary is approximately 10 MB in size. Note that the
dictionary stays in memory at all times, too. The default cache size
is 256 KB. The accepted cache sizes are 0 to 2047 MB. It's safe to raise
the cache size too high; the lemmatizer will only use the needed memory.
For instance, the entire Russian dictionary decompresses to approximately
110 MB; and thus setting lemmatizer_cache anywhere higher than that will
not affect the memory use: even when 1024 MB is allowed for the cache,
if only 110 MB is needed, it will only use those 110 MB.
</para>
<para>
On our benchmarks, the total indexing time with different cache
sizes was as follows:
<itemizedlist>
<listitem>9.07 sec, morphology = lemmatize_ru, lemmatizer_cache = 0</listitem>
<listitem>8.60 sec, morphology = lemmatize_ru, lemmatizer_cache = 256K</listitem>
<listitem>8.33 sec, morphology = lemmatize_ru, lemmatizer_cache = 8M</listitem>
<listitem>7.95 sec, morphology = lemmatize_ru, lemmatizer_cache = 128M</listitem>
<listitem>6.85 sec, morphology = stem_ru (baseline)</listitem>
</itemizedlist>
Your mileage may vary, but a simple rule of thumb would be to either
go with the small default 256 KB cache when pressed for memory, or spend
128 MB extra RAM and cache the entire dictionary for maximum indexing
performance.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
lemmatizer_cache = 256M # cache it all
</programlisting>
</sect2>
</sect1>
<sect1 id="confgroup-searchd"><title><filename>searchd</filename> program configuration options</title>
<sect2 id="conf-listen"><title>listen</title>
<para>
This setting lets you specify IP address and port, or Unix-domain
socket path, that <code>searchd</code> will listen on.
Introduced in version 0.9.9-rc1.
</para>
<para>
The informal grammar for <code>listen</code> setting is:
<programlisting>
listen = ( address ":" port | port | path ) [ ":" protocol ]
</programlisting>
I.e. you can specify either an IP address (or hostname) and a port
number, or just a port number, or a Unix socket path. If you specify
a port number but not the address, <code>searchd</code> will listen on
all network interfaces. A Unix path is identified by a leading slash.
</para>
<para>
Starting with version 0.9.9-rc2, you can also specify a protocol
handler (listener) to be used for connections on this socket.
Supported protocol values are 'sphinx' (Sphinx 0.9.x API protocol)
and 'mysql41' (MySQL protocol used since 4.1 up to at least 5.1).
More details on MySQL protocol support can be found in
<xref linkend="sphinxql"/> section.
</para>
<bridgehead>Examples:</bridgehead>
<programlisting>
listen = localhost
listen = localhost:5000
listen = 192.168.0.1:5000
listen = /var/run/sphinx.s
listen = 9312
listen = localhost:9306:mysql41
</programlisting>
<para>
There can be multiple listen directives, <code>searchd</code> will
listen for client connections on all specified ports and sockets. If
no <code>listen</code> directives are found then the server will listen
on all available interfaces using the default SphinxAPI port 9312.
Starting with 1.10-beta, it will also listen on default SphinxQL
port 9306. Both port numbers are assigned by IANA (see
<ulink url="http://www.iana.org/assignments/port-numbers">http://www.iana.org/assignments/port-numbers</ulink>
for details) and should therefore be available.
</para>
<para>
Unix-domain sockets are not supported on Windows.
</para>
</sect2>
<sect2 id="conf-log"><title>log</title>
<para>
Log file name.
Optional, default is 'searchd.log'.
All <filename>searchd</filename> run time events will be logged in this file.
</para>
<para>
You can also use 'syslog' as the file name; in this case, the events will be sent to the syslog daemon.
To use the syslog option, Sphinx must be configured with the '--with-syslog' option at build time.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
log = /var/log/searchd.log
</programlisting>
</sect2>
<sect2 id="conf-query-log"><title>query_log</title>
<para>
Query log file name.
Optional, default is empty (do not log queries).
All search queries will be logged in this file. The format is described in <xref linkend="query-log-format"/>.
</para>
<para>
In case of the 'plain' format, you can use 'syslog' as the path to the log file.
In this case, all search queries will be sent to the syslog daemon with LOG_INFO priority,
prefixed with '[query]' instead of a timestamp.
To use the syslog option, Sphinx must be configured with the '--with-syslog' option at build time.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
query_log = /var/log/query.log
</programlisting>
</sect2>
<sect2 id="conf-query-log-format"><title>query_log_format</title>
<para>
Query log format.
Optional, allowed values are 'plain' and 'sphinxql', default is 'plain'.
Introduced in version 2.0.1-beta.
</para>
<para>
Starting with version 2.0.1-beta, two different log formats are supported.
The default one logs queries in a custom text format. The new one logs
valid SphinxQL statements. This directive lets you switch between the two
formats at search daemon startup. The log format can also be altered
on the fly, using the <code>SET GLOBAL query_log_format=sphinxql</code> syntax.
Refer to <xref linkend="query-log-format"/> for more discussion and format
details.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
query_log_format = sphinxql
</programlisting>
</sect2>
<sect2 id="conf-read-timeout"><title>read_timeout</title>
<para>
Network client request read timeout, in seconds.
Optional, default is 5 seconds.
<filename>searchd</filename> will forcibly close client connections that fail to send a query within this timeout.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
read_timeout = 1
</programlisting>
</sect2>
<sect2 id="conf-client-timeout"><title>client_timeout</title>
<para>
Maximum time to wait between requests (in seconds) when using
persistent connections. Optional, default is five minutes.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
client_timeout = 3600
</programlisting>
</sect2>
<sect2 id="conf-max-children"><title>max_children</title>
<para>
Maximum number of children to fork (or in other words, concurrent searches to run in parallel).
Optional, default is 0 (unlimited).
</para>
<para>
Useful to control server load. There will be no more than this many concurrent
searches running at any time. When the limit is reached, additional incoming
clients are dismissed with a temporary failure (SEARCHD_RETRY) status code
and a message stating that the server is maxed out.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_children = 10
</programlisting>
</sect2>
<sect2 id="conf-pid-file"><title>pid_file</title>
<para>
<filename>searchd</filename> process ID file name.
Mandatory.
</para>
<para>
The PID file will be re-created (and locked) on startup. It will contain
the head daemon process ID while the daemon is running, and it will be unlinked
on daemon shutdown. It's mandatory because Sphinx uses it internally
for a number of things: to check whether there already is a running instance
of <filename>searchd</filename>; to stop <filename>searchd</filename>;
and to notify it that it should rotate the indexes. It can also be used
by external automation scripts.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
pid_file = /var/run/searchd.pid
</programlisting>
</sect2>
<sect2 id="conf-seamless-rotate"><title>seamless_rotate</title>
<para>
Prevents <filename>searchd</filename> stalls while rotating indexes with huge amounts of data to precache.
Optional, default is 1 (enable seamless rotation). On Windows systems seamless rotation is disabled by default.
</para>
<para>
Indexes may contain some data that needs to be precached in RAM.
At the moment, <filename>.spa</filename>, <filename>.spi</filename> and
<filename>.spm</filename> files are fully precached (they contain attribute data,
MVA data, and keyword index, respectively.)
Without seamless rotate, rotating an index tries to use as little RAM
as possible and works as follows:
<orderedlist>
<listitem><para>new queries are temporarily rejected (with "retry" error code);</para></listitem>
<listitem><para><filename>searchd</filename> waits for all currently running queries to finish;</para></listitem>
<listitem><para>old index is deallocated and its files are renamed;</para></listitem>
<listitem><para>new index files are renamed and required RAM is allocated;</para></listitem>
<listitem><para>new index attribute and dictionary data is preloaded to RAM;</para></listitem>
<listitem><para><filename>searchd</filename> resumes serving queries from new index.</para></listitem>
</orderedlist>
</para>
<para>
However, if there's a lot of attribute or dictionary data, then the preloading step
could take noticeable time: up to several minutes when preloading 1-5+ GB files.
</para>
<para>
With seamless rotate enabled, rotation works as follows:
<orderedlist>
<listitem><para>new index RAM storage is allocated;</para></listitem>
<listitem><para>new index attribute and dictionary data is asynchronously preloaded to RAM;</para></listitem>
<listitem><para>on success, old index is deallocated and both indexes' files are renamed;</para></listitem>
<listitem><para>on failure, new index is deallocated;</para></listitem>
<listitem><para>at any given moment, queries are served either from old or new index copy.</para></listitem>
</orderedlist>
</para>
<para>
Seamless rotate comes at the cost of higher <emphasis role="bold">peak</emphasis>
memory usage during the rotation (because both old and new copies of
<filename>.spa/.spi/.spm</filename> data need to be in RAM while
preloading new copy). Average usage stays the same.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
seamless_rotate = 1
</programlisting>
</sect2>
<sect2 id="conf-preopen-indexes"><title>preopen_indexes</title>
<para>
Whether to forcibly preopen all indexes on startup.
Optional, default is 1 (preopen everything).
</para>
<para>
Starting with 2.0.1-beta, the default value for this
option is now 1 (forcibly preopen all indexes). In prior
versions, it used to be 0 (use per-index settings).
</para>
<para>
When set to 1, this directive overrides and enforces
<link linkend="conf-preopen">preopen</link> on all indexes.
They will be preopened, no matter what the per-index
<code>preopen</code> setting is. When set to 0, per-index
settings can take effect. (And they default to 0.)
</para>
<para>
Pre-opened indexes avoid races between search queries
and rotations that can cause queries to fail occasionally.
They also make <filename>searchd</filename> use more file
handles. In most scenarios it's therefore preferred and
recommended to preopen indexes.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
preopen_indexes = 1
</programlisting>
</sect2>
<sect2 id="conf-unlink-old"><title>unlink_old</title>
<para>
Whether to unlink .old index copies on successful rotation.
Optional, default is 1 (do unlink).
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
unlink_old = 0
</programlisting>
</sect2>
<sect2 id="conf-attr-flush-period"><title>attr_flush_period</title>
<para>
When calling <code>UpdateAttributes()</code> to update document attributes in
real-time, changes are first written to the in-memory copy of attributes
(<option>docinfo</option> must be set to <option>extern</option>).
Then, once <filename>searchd</filename> shuts down normally (via <code>SIGTERM</code>
being sent), the changes are written to disk.
Introduced in version 0.9.9-rc1.
</para>
<para>Starting with 0.9.9-rc1, it is possible to tell <filename>searchd</filename>
to periodically write these changes back to disk, to avoid them being lost. The time
between those intervals is set with <option>attr_flush_period</option>, in seconds.
</para>
<para>It defaults to 0, which disables the periodic flushing, but flushing will
still occur at normal shut-down.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
attr_flush_period = 900 # persist updates to disk every 15 minutes
</programlisting>
</sect2>
<sect2 id="conf-max-packet-size"><title>max_packet_size</title>
<para>
Maximum allowed network packet size.
Limits both query packets from clients, and response packets from remote agents in distributed environment.
Only used for internal sanity checks, does not directly affect RAM use or performance.
Optional, default is 8M.
Introduced in version 0.9.9-rc1.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_packet_size = 32M
</programlisting>
</sect2>
<sect2 id="conf-mva-updates-pool"><title>mva_updates_pool</title>
<para>
Shared pool size for in-memory MVA updates storage.
Optional, default size is 1M.
Introduced in version 0.9.9-rc1.
</para>
<para>
This setting controls the size of the shared storage pool for updated MVA values.
Specifying 0 for the size disables MVA updates altogether. Once the pool size limit
is hit, MVA update attempts will result in an error. However, updates on regular
(scalar) attributes will still work. Due to internal technical difficulties,
it is currently <b>not</b> possible to store (flush) <b>any</b> updates on indexes
where MVAs were updated; though this might be implemented in the future.
In the meantime, MVA updates are intended to be used as a measure to quickly
catch up with the latest changes in the database until the next index rebuild;
not as a persistent storage mechanism.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mva_updates_pool = 16M
</programlisting>
</sect2>
<sect2 id="conf-max-filters"><title>max_filters</title>
<para>
Maximum allowed per-query filter count.
Only used for internal sanity checks, does not directly affect RAM use or performance.
Optional, default is 256.
Introduced in version 0.9.9-rc1.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_filters = 1024
</programlisting>
</sect2>
<sect2 id="conf-max-filter-values"><title>max_filter_values</title>
<para>
Maximum allowed per-filter values count.
Only used for internal sanity checks, does not directly affect RAM use or performance.
Optional, default is 4096.
Introduced in version 0.9.9-rc1.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_filter_values = 16384
</programlisting>
</sect2>
<sect2 id="conf-listen-backlog"><title>listen_backlog</title>
<para>
TCP listen backlog.
Optional, default is 5.
</para>
<para>
Windows builds currently (as of 0.9.9) can only process the requests
one by one. Concurrent requests will be enqueued by the TCP stack
at the OS level, and requests that can not be enqueued will immediately
fail with a "connection refused" message. The listen_backlog directive
controls the length of the connection queue. Non-Windows builds
should work fine with the default value.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
listen_backlog = 20
</programlisting>
</sect2>
<sect2 id="conf-read-buffer"><title>read_buffer</title>
<para>
Per-keyword read buffer size.
Optional, default is 256K.
</para>
<para>
For every keyword occurrence in every search query, there are
two associated read buffers (one for document list and one for
hit list). This setting lets you control their sizes, increasing
per-query RAM use, but possibly decreasing IO time.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
read_buffer = 1M
</programlisting>
</sect2>
<sect2 id="conf-read-unhinted"><title>read_unhinted</title>
<para>
Unhinted read size.
Optional, default is 32K.
</para>
<para>
When querying, some reads know in advance exactly how much data
there is to be read, but some currently do not. Most prominently,
the hit list size is not currently known in advance. This setting
lets you control how much data to read in such cases. It will
impact hit list IO time, reducing it for lists larger than the
unhinted read size, but raising it for smaller lists. It will
<b>not</b> affect RAM use, because the read buffer will already be
allocated. Hence, it should not be greater than read_buffer.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
read_unhinted = 32K
</programlisting>
</sect2>
<sect2 id="conf-max-batch-queries"><title>max_batch_queries</title>
<para>
Limits the number of queries per batch.
Optional, default is 32.
</para>
<para>
Makes searchd perform a sanity check on the number of queries
submitted in a single batch when using <link linkend="multi-queries">multi-queries</link>.
Set it to 0 to skip the check.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
max_batch_queries = 256
</programlisting>
</sect2>
<sect2 id="conf-subtree-docs-cache"><title>subtree_docs_cache</title>
<para>
Max common subtree document cache size, per-query.
Optional, default is 0 (disabled).
</para>
<para>
Limits RAM usage of a common subtree optimizer (see <xref linkend="multi-queries"/>).
At most this much RAM will be spent to cache document entries per each query.
Setting the limit to 0 disables the optimizer.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
subtree_docs_cache = 8M
</programlisting>
</sect2>
<sect2 id="conf-subtree-hits-cache"><title>subtree_hits_cache</title>
<para>
Max common subtree hit cache size, per-query.
Optional, default is 0 (disabled).
</para>
<para>
Limits RAM usage of a common subtree optimizer (see <xref linkend="multi-queries"/>).
At most this much RAM will be spent to cache keyword occurrences (hits) per each query.
Setting the limit to 0 disables the optimizer.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
subtree_hits_cache = 16M
</programlisting>
</sect2>
<sect2 id="conf-workers"><title>workers</title>
<para>
Multi-processing mode (MPM).
Optional; allowed values are none, fork, prefork, and threads.
Default is threads.
Introduced in version 1.10-beta.
</para>
<para>
Lets you choose how <filename>searchd</filename> processes multiple
concurrent requests. The possible values are:
<variablelist>
<varlistentry>
<term>none</term>
<listitem><para>All requests will be handled serially, one-by-one.
Prior to 1.x, this was the only mode available on Windows.
</para></listitem>
</varlistentry>
<varlistentry>
<term>fork</term>
<listitem><para>A new child process will be forked to handle every
incoming request.
</para></listitem>
</varlistentry>
<varlistentry>
<term>prefork</term>
<listitem><para>On startup, <filename>searchd</filename> will pre-fork
a number of worker processes, and pass the incoming requests
to one of those children.
</para></listitem>
</varlistentry>
<varlistentry>
<term>threads</term>
<listitem><para>A new thread will be created to handle every
incoming request. This is the only mode compatible with the
RT indexing backend, and it is the default value.
</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
Historically, <filename>searchd</filename> used a fork-based model,
which generally performs OK but spends a noticeable amount of CPU
in the fork() system call when there is a high number of (tiny) requests
per second. Prefork mode was implemented to alleviate that; with
prefork, worker processes are basically only created on startup
and re-created on index rotation, somewhat reducing the fork() call
pressure.
</para>
<para>
Threads mode was implemented along with RT backend and is required
to use RT indexes. (Regular disk-based indexes work in all the
available modes.)
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
workers = threads
</programlisting>
</sect2>
<sect2 id="conf-dist-threads"><title>dist_threads</title>
<para>
Max local worker threads to use for parallelizable requests (searching a distributed index; building a batch of snippets).
Optional, default is 0, which means to disable in-request parallelism.
Introduced in version 1.10-beta.
</para>
<para>
Distributed index can include several local indexes. <option>dist_threads</option>
lets you easily utilize multiple CPUs/cores for that (previously existing
alternative was to specify the indexes as remote agents, pointing searchd
to itself and paying some network overheads).
</para>
<para>
When set to a value N greater than 1, this directive will create up to
N threads for every query, and schedule the specific searches within these
threads. For example, if there are 7 local indexes to search and dist_threads
is set to 2, then 2 parallel threads would be created: one that sequentially
searches 4 indexes, and another one that searches the other 3 indexes.
</para>
<para>
In case of a CPU bound workload, setting <option>dist_threads</option>
to 1x the number of cores is advised (creating more threads than cores
will not improve query time). In case of a mixed CPU/disk bound workload,
it might sometimes make sense to use more (so that all cores could be
utilized even when some threads are waiting for I/O completion).
</para>
<para>
Note that <option>dist_threads</option> does <b>not</b> require the
threads MPM. It works with the fork and prefork MPMs just as well.
</para>
<para>
Starting with version 2.0.1-beta, building a batch of snippets
with <option>load_files</option> flag enabled can also be parallelized.
Up to <option>dist_threads</option> threads will be created to process
those files. That speeds up snippet extraction when the total amount
of document data to process is significant (hundreds of megabytes).
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
index dist_test
{
    type = distributed
    local = chunk1
    local = chunk2
    local = chunk3
    local = chunk4
}
# ...
dist_threads = 4
</programlisting>
</sect2>
<sect2 id="conf-binlog-path"><title>binlog_path</title>
<para>
Binary log (aka transaction log) files path.
Optional, default is build-time configured data directory.
Introduced in version 1.10-beta.
</para>
<para>
Binary logs are used for crash recovery of RT index data, and also for
attribute updates of plain disk indexes that
would otherwise only be stored in RAM until flushed. When logging is enabled,
every transaction COMMIT-ted into an RT index gets written into
a log file. Logs are then automatically replayed on startup
after an unclean shutdown, recovering the logged changes.
</para>
<para>
<option>binlog_path</option> directive specifies the binary log
files location. It should contain just the path; <option>searchd</option>
will create and unlink multiple binlog.* files in that path as necessary
(binlog data, metadata, and lock files, etc).
</para>
<para>
Empty value disables binary logging. That improves performance,
but puts RT index data at risk.
</para>
<para>
WARNING! It is strongly recommended to always explicitly define the 'binlog_path' option in your config.
Otherwise, the default path, which in most cases is the same as the working folder, may point to a
folder with no write access (for example, /usr/local/var/data). In this case, searchd
will not start at all.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
binlog_path = # disable logging
binlog_path = /var/data # /var/data/binlog.001 etc will be created
</programlisting>
</sect2>
<sect2 id="conf-binlog-flush"><title>binlog_flush</title>
<para>
Binary log transaction flush/sync mode.
Optional, default is 2 (flush every transaction, sync every second).
Introduced in version 1.10-beta.
</para>
<para>
This directive controls how frequently the binary log will be flushed
to the OS and synced to disk. Three modes are supported:
<itemizedlist>
<listitem><para>0, flush and sync every second. Best performance,
but up to 1 second worth of committed transactions can be lost
on either a daemon crash or an OS/hardware crash.
</para></listitem>
<listitem><para>1, flush and sync every transaction. Worst performance,
but every committed transaction data is guaranteed to be saved.
</para></listitem>
<listitem><para>2, flush every transaction, sync every second.
Good performance, and every committed transaction is guaranteed
to be saved in case of daemon crash. However, in case of OS/hardware
crash up to 1 second worth of committed transactions can be lost.
</para></listitem>
</itemizedlist>
</para>
<para>
For those familiar with MySQL and InnoDB, this directive is entirely
similar to <option>innodb_flush_log_at_trx_commit</option>. In most
cases, the default hybrid mode 2 provides a nice balance of speed
and safety, with full RT index data protection against daemon crashes,
and some protection against hardware ones.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
binlog_flush = 1 # ultimate safety, low speed
</programlisting>
</sect2>
<sect2 id="conf-binlog-max-log-size"><title>binlog_max_log_size</title>
<para>
Maximum binary log file size.
Optional, default is 0 (do not reopen binlog file based on size).
Introduced in version 1.10-beta.
</para>
<para>
A new binlog file will be forcibly opened once the current binlog file
reaches this limit. This achieves a finer granularity of logs and can yield
more efficient binlog disk usage under certain borderline workloads.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
binlog_max_log_size = 16M
</programlisting>
</sect2>
<sect2 id="conf-snippets-file-prefix"><title>snippets_file_prefix</title>
<para>
A prefix to prepend to the local file names when generating snippets.
Optional, default is empty.
Introduced in version 2.1.1-beta.
</para>
<para>
This prefix can be used in distributed snippets generation along with
<option>load_files</option> or <option>load_files_scattered</option> options.
</para>
<para>
Note that this is a prefix, and <b>not</b> a path! Meaning that if a prefix
is set to "server1" and the request refers to "file23", <filename>searchd</filename>
will attempt to open "server1file23" (all of that without quotes). So if you
need it to be a path, you have to include the trailing slash.
</para>
<para>
Note also that this is a local option; it does not affect the agents in any way.
So you can safely set a prefix on a master server. Requests routed to the
agents will not be affected by the master's setting, but will be affected
by the agents' own settings.
</para>
<para>
This might be useful, for instance, when the document storage locations
(be those local storage or NAS mountpoints) are inconsistent across the servers.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
snippets_file_prefix = /mnt/common/server1/
</programlisting>
</sect2>
<sect2 id="conf-collation-server"><title>collation_server</title>
<para>
Default server collation.
Optional, default is libc_ci.
Introduced in version 2.0.1-beta.
</para>
<para>
Specifies the default collation used for incoming requests.
The collation can be overridden on a per-query basis.
Refer to <xref linkend="collations"/> section for the list of available collations and other details.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
collation_server = utf8_ci
</programlisting>
</sect2>
<sect2 id="conf-collation-libc-locale"><title>collation_libc_locale</title>
<para>
Server libc locale.
Optional, default is C.
Introduced in version 2.0.1-beta.
</para>
<para>
Specifies the libc locale, affecting the libc-based collations.
Refer to <xref linkend="collations"/> section for the details.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
collation_libc_locale = fr_FR
</programlisting>
</sect2>
<sect2 id="conf-mysql-version-string"><title>mysql_version_string</title>
<para>
A server version string to return via MySQL protocol.
Optional, default is empty (return Sphinx version).
Introduced in version 2.0.1-beta.
</para>
<para>
Several picky MySQL client libraries depend on a particular version
number format used by MySQL, and moreover, sometimes choose a different
execution path based on the reported version number (rather than the
indicated capabilities flags). For instance, Python MySQLdb 1.2.2 throws
an exception when the version number is not in X.Y.ZZ format; MySQL .NET
connector 6.3.x fails internally on version numbers 1.x along with
a certain combination of flags, etc. To work around that, you can use the
<option>mysql_version_string</option> directive and have <filename>searchd</filename>
report a different version to clients connecting over MySQL protocol.
(By default, it reports its own version.)
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
mysql_version_string = 5.0.37
</programlisting>
</sect2>
<sect2 id="conf-rt-flush-period"><title>rt_flush_period</title>
<para>
RT indexes RAM chunk flush check period, in seconds.
Optional, default is 10 hours.
Introduced in version 2.0.1-beta.
</para>
<para>
Actively updated RT indexes that nevertheless fully fit into their RAM chunks
can result in ever-growing binlogs, impacting disk use and crash
recovery time. With this directive, the search daemon performs
periodic flush checks, and eligible RAM chunks can get saved,
enabling subsequent binlog cleanup. See <xref linkend="rt-binlog"/>
for more details.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_flush_period = 3600 # 1 hour
</programlisting>
</sect2>
<sect2 id="conf-thread-stack"><title>thread_stack</title>
<para>
Per-thread stack size.
Optional, default is 1M.
Introduced in version 2.0.1-beta.
</para>
<para>
In the <code>workers = threads</code> mode, every request is processed
with a separate thread that needs its own stack space. By default, 1M per
thread is allocated for the stack. However, extremely complex search requests
might eventually exhaust the default stack and require more. For instance,
a query that matches thousands of keywords (either directly or through
term expansion) can eventually run out of stack. Previously, that resulted
in crashes. Starting with 2.0.1-beta, <filename>searchd</filename> attempts
to estimate the expected stack use, and blocks potentially dangerous
queries. To process such queries, you can either increase the thread stack
size using the <code>thread_stack</code> directive, or switch to a different
<code>workers</code> setting if that is possible.
</para>
<para>
A query with N levels of nesting is estimated to require approximately
30+0.16*N KB of stack, meaning that the default 64K is enough for queries
with up to 250 levels, 150K for up to 700 levels, etc. If the estimated stack
requirement exceeds the configured limit, <filename>searchd</filename> fails
the query and reports the required stack size in the error message.
</para>
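<para>
For example, by that estimate, a query with 500 levels of nesting would need
approximately 30 + 0.16*500 = 110 KB of stack.
</para>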
<bridgehead>Example:</bridgehead>
<programlisting>
thread_stack = 256K
</programlisting>
</sect2>
<sect2 id="conf-expansion-limit"><title>expansion_limit</title>
<para>
The maximum number of expanded keywords for a single wildcard.
Optional, default is 0 (no limit).
Introduced in version 2.0.1-beta.
</para>
<para>
When doing substring searches against indexes built with
<code>dict = keywords</code> enabled, a single wildcard may
potentially result in thousands and even millions of matched
keywords (think of matching 'a*' against the entire Oxford
dictionary). This directive lets you limit the impact
of such expansions. Setting <code>expansion_limit = N</code>
restricts expansions to no more than N of the most frequent
matching keywords (per each wildcard in the query).
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
expansion_limit = 16
</programlisting>
</sect2>
<sect2 id="conf-watchdog"><title>watchdog</title>
<para>
Threaded server watchdog.
Optional, default is 1 (watchdog enabled).
Introduced in version 2.0.1-beta.
</para>
<para>
A crashed query in <code>threads</code> multi-processing mode
(<code><link linkend="conf-workers">workers</link> = threads</code>)
can take down the entire server. With watchdog feature enabled,
<filename>searchd</filename> additionally keeps a separate lightweight
process that monitors the main server process, and automatically
restarts the latter in case of abnormal termination. Watchdog
is enabled by default.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
watchdog = 0 # disable watchdog
</programlisting>
</sect2>
<sect2 id="conf-prefork-rotation-throttle"><title>prefork_rotation_throttle</title>
<para>
Delay between restarting preforked children on index rotation, in milliseconds.
Optional, default is 0 (no delay).
Introduced in version 2.0.2-beta.
</para>
<para>
When running in <code><link linkend="conf-workers">workers</link> = prefork</code>
mode, every index rotation needs to restart all children to propagate the newly
loaded index data changes. Restarting all of them at once might put excessive
strain on CPU and/or network connections. (For instance, when the application
keeps a bunch of open persistent connections to different children, and all those
children restart.) Those bursts can be throttled down with
<option>prefork_rotation_throttle</option> directive. Note that
the children will be restarted sequentially, and thus "old" results might
persist for a few more seconds. For instance, if
<option>prefork_rotation_throttle</option> is set to 50 (milliseconds), and
there are 30 children, then the last one would only be <emphasis>actually</emphasis>
restarted 1.5 seconds (50*30=1500 milliseconds) <emphasis>after</emphasis>
the "rotation finished" message in the <filename>searchd</filename> event log.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
prefork_rotation_throttle = 50 # throttle children restarts by 50 msec each
</programlisting>
</sect2>
<sect2 id="conf-sphinxql-state"><title>sphinxql_state</title>
<para>
Path to a file where current SphinxQL state will be serialized.
Available since version 2.1.1-beta.
</para>
<para>
On daemon startup, this file gets replayed; on eligible state changes (e.g. SET GLOBAL),
it gets rewritten automatically. This can prevent a hard-to-diagnose problem:
if you load UDF functions but Sphinx crashes, then after it
gets (automatically) restarted, your UDFs and global variables would no longer be available.
Using persistent state helps ensure a graceful recovery with no such surprises.
</para>
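<para>
As a sketch, the serialized state is essentially a list of SphinxQL statements
that restore the eligible settings on replay, e.g. (the UDF and variable names
here are hypothetical):
</para>
<programlisting>
CREATE FUNCTION myudf RETURNS INT SONAME 'myudf.so';
SET GLOBAL @banned_ids = (123, 456);
</programlisting>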
<bridgehead>Example:</bridgehead>
<programlisting>
sphinxql_state = uservars.sql
</programlisting>
</sect2>
<sect2 id="conf-ha-ping-interval"><title>ha_ping_interval</title>
<para>
Interval between agent mirror pings, in milliseconds.
Optional, default is 1000.
Added in 2.1.1-beta.
</para>
<para>
For a distributed index with agent mirrors in it (see more in ???),
the master sends all mirrors a ping command during idle periods,
in order to track the current agent status (alive or dead, network
roundtrip time, etc). The interval between such pings is defined
by this directive.
</para>
<para>
To disable pings, set ha_ping_interval to 0.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
ha_ping_interval = 0
</programlisting>
</sect2>
<sect2 id="conf-ha-period-karma"><title>ha_period_karma</title>
<para>
Agent mirror statistics window size, in seconds.
Optional, default is 60.
Added in 2.1.1-beta.
</para>
<para>
For a distributed index with agent mirrors in it (see more in ???),
the master tracks several different per-mirror counters. These counters
are then used for failover and balancing. (The master picks the best
mirror to use based on the counters.) Counters are accumulated in
blocks of <code>ha_period_karma</code> seconds.
</para>
<para>
After beginning a new block, the master may still use the accumulated
values from the previous one, until the new one is half full. Thus,
any previous history stops affecting the mirror choice after
1.5 times ha_period_karma seconds at most.
</para>
<para>
Even though at most 2 blocks are used for mirror selection,
up to 15 of the last blocks are actually stored, for instrumentation purposes.
They can be inspected using
<link linkend="sphinxql-show-agent-status">SHOW AGENT STATUS</link>
statement.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
ha_period_karma = 120
</programlisting>
</sect2>
<sect2 id="conf-persistent-connections-limit"><title>persistent_connections_limit</title>
<para>
The maximum number of simultaneous persistent connections to remote <link linkend="conf-agent-persistent">persistent agents</link>.
Each time we connect to an agent defined under 'agent_persistent', we try to reuse an existing connection (if any), or connect and save the connection for future use.
However, we cannot hold an unlimited number of such persistent connections, since each one holds a worker on the agent side (and eventually we will receive the 'maxed out' error
once all of them are busy). This directive limits their number. It affects the number of connections to each agent host, across all distributed indexes.
</para>
<para>
It is reasonable to set the value equal to or less than the <link linkend="conf-max-children">max_children</link> option of the agents.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
persistent_connections_limit = 29 # assume that each host of agents has max_children = 30 (or 29).
</programlisting>
</sect2>
<sect2 id="conf-rt-merge-iops"><title>rt_merge_iops</title>
<para>
A maximum number of I/O operations (per second) that the RT chunks merge thread is allowed to start.
Optional, default is 0 (no limit). Added in 2.1.1-beta.
</para>
<para>
This directive lets you throttle down the I/O impact arising from
the <code>OPTIMIZE</code> statements. It is guaranteed that all the
RT optimization activity will not generate more disk iops (I/Os per second)
than the configured limit. Modern SATA drives can perform up to around 100 I/O operations per
second, and limiting rt_merge_iops can reduce search performance degradation caused by merging.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rt_merge_iops = 40
</programlisting>
</sect2>
<sect2 id="conf-rt-merge-maxiosize"><title>rt_merge_maxiosize</title>
<para>
A maximum size of an I/O operation that the RT chunks merge
thread is allowed to start.
Optional, default is 0 (no limit).
Added in 2.1.1-beta.
</para>
<para>
This directive lets you throttle down the I/O impact arising from
the <code>OPTIMIZE</code> statements. I/Os bigger than this limit will be
broken down into 2 or more I/Os, which will then be accounted as separate I/Os
with regards to the <link linkend="conf-rt-merge-iops">rt_merge_iops</link>
limit. Thus, it is guaranteed that all the optimization activity will not
generate more than (rt_merge_iops * rt_merge_maxiosize) bytes of disk I/O
per second.
</para>
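<para>
For instance, with <code>rt_merge_iops = 40</code> and
<code>rt_merge_maxiosize = 1M</code>, merge activity would be capped
at roughly 40 MB of disk I/O per second.
</para>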
<bridgehead>Example:</bridgehead>
<programlisting>
rt_merge_maxiosize = 1M
</programlisting>
</sect2>
<sect2 id="conf-predicted-time-costs"><title>predicted_time_costs</title>
<para>
Costs for the query time prediction model, in nanoseconds.
Optional, default is "doc=64, hit=48, skip=2048, match=64" (without the quotes).
Added in 2.1.1-beta.
</para>
<para>
Terminating queries before completion based on their execution time
(via either <link linkend="api-func-setmaxquerytime">SetMaxQueryTime()</link>
API call, or <link linkend="sphinxql-select">SELECT ... OPTION max_query_time</link>
SphinxQL statement) is a nice safety net, but it comes with an inherent drawback:
non-deterministic (unstable) results. That is, if you repeat the very same (complex)
search query with a time limit several times, the time limit will get hit
at different stages, and you will get <emphasis>different</emphasis> result sets.
</para>
<para>
Starting with 2.1.1-beta, there is a new option,
<link linkend="sphinxql-select">SELECT ... OPTION max_predicted_time</link>,
that lets you limit the query time <emphasis>and</emphasis> get stable,
repeatable results. Instead of regularly checking the actual current time
while evaluating the query, which is non-deterministic, it predicts the current
running time using a simple linear model:
<programlisting>
predicted_time =
    doc_cost * processed_documents +
    hit_cost * processed_hits +
    skip_cost * skiplist_jumps +
    match_cost * found_matches
</programlisting>
The query is then terminated early when the <code>predicted_time</code>
reaches a given limit.
</para>
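<para>
For instance, a query using this option might look as follows (a sketch;
the index and column names are hypothetical, and 100 is the predicted time
budget in milliseconds):
</para>
<programlisting>
SELECT id FROM myindex WHERE MATCH('some complex query')
OPTION max_predicted_time=100
</programlisting>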
<para>
Of course, this is not a hard limit on the actual time spent (it is, however,
a hard limit on the amount of <emphasis>processing</emphasis> work done), and
a simple linear model is in no way an ideally precise one. So the wall clock time
<emphasis>may</emphasis> be either below or over the target limit. However,
the error margins are quite acceptable: for instance, in our experiments with
a 100 msec target limit the majority of the test queries fell into a 95 to 105 msec
range, and <emphasis>all</emphasis> of the queries were in a 80 to 120 msec range.
Also, as a nice side effect, using the modeled query time instead of measuring
the actual run time results in somewhat fewer gettimeofday() calls, too.
</para>
<para>
No two server makes and models are identical, so the <code>predicted_time_costs</code>
directive lets you configure the costs for the model above. For convenience, they are
integers, counted in nanoseconds. (The limit in max_predicted_time is counted
in milliseconds, and having to specify cost values as 0.000128 ms instead of 128 ns
would be somewhat more error prone.) It is not necessary to specify all 4 costs at once;
the missing ones will take the default values. However, we strongly suggest
specifying all of them, for readability.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
predicted_time_costs = doc=128, hit=96, skip=4096, match=128
</programlisting>
</sect2>
<sect2 id="conf-shutdown-timeout"><title>shutdown_timeout</title>
<para>
searchd --stopwait wait time, in seconds.
Optional, default is 3 seconds.
Added in 2.2.1-beta.
</para>
<para>
When you run searchd --stopwait, the daemon needs to perform some
activities before stopping, such as finishing queries, flushing RT RAM chunks,
flushing attributes, and updating the binlog; this takes some time.
searchd --stopwait will wait up to shutdown_timeout seconds for the daemon to
finish its jobs. The suitable time depends on your index size and load.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
shutdown_timeout = 5 # wait for up to 5 seconds
</programlisting>
</sect2>
<sect2 id="conf-ondisk-attrs-default"><title>ondisk_attrs_default</title>
<para>
Instance-wide defaults for <link linkend="conf-ondisk-attrs">ondisk_attrs</link>
directive. Optional, default is 0 (all attributes are loaded in memory). This
directive lets you specify the default value of ondisk_attrs for all indexes
served by this copy of searchd. Per-index directives take precedence, and will
overwrite this instance-wide default value, allowing for fine-grained control.
</para>
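<bridgehead>Example:</bridgehead>
<programlisting>
ondisk_attrs_default = pool # keep pooled attributes on disk by default
</programlisting>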
</sect2>
<sect2 id="conf-query-log-min-msec"><title>query_log_min_msec</title>
<para>
Minimum query execution time (in milliseconds) for the query to be written to the query log.
Optional, default is 0 (all queries are written to the query log). This directive
specifies that only queries with execution times exceeding the specified limit will be logged.
</para>
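<bridgehead>Example:</bridgehead>
<programlisting>
query_log_min_msec = 1000 # only log queries that ran for over a second
</programlisting>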
</sect2>
<sect2 id="conf-agent-connect-timeout-default"><title>agent_connect_timeout</title>
<para>
Instance-wide defaults for <link linkend="conf-agent-connect-timeout">agent_connect_timeout</link> parameter.
The last defined in distributed (network) indexes.
</para>
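<bridgehead>Example:</bridgehead>
<programlisting>
agent_connect_timeout = 300 # instance-wide default, in milliseconds
</programlisting>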
</sect2>
<sect2 id="conf-agent-query-timeout-default"><title>agent_query_timeout</title>
<para>
Instance-wide defaults for <link linkend="conf-agent-query-timeout">agent_query_timeout</link> parameter.
The last defined in distributed (network) indexes, or also may be overrided per-query using OPTION clause.
</para>
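<bridgehead>Example:</bridgehead>
<programlisting>
agent_query_timeout = 10000 # instance-wide default of 10 sec, in milliseconds
</programlisting>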
</sect2>
<sect2 id="conf-agent-retry-count"><title>agent_retry_count</title>
<para>
Integer, specifies how many times sphinx will try to connect and query remote agents in a distributed index before reporting
a fatal query error. Default is 0 (i.e. no retries). This value may also be specified on a per-query basis using the
'OPTION retry_count=XXX' clause. If the per-query option is present, it will override the one specified in the config.
</para>
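<bridgehead>Example:</bridgehead>
<programlisting>
agent_retry_count = 3 # retry failed agents up to 3 times
</programlisting>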
</sect2>
<sect2 id="conf-agent-retry-delay"><title>agent_retry_delay</title>
<para>
Integer, in milliseconds. Specifies the delay sphinx waits before retrying to query a remote agent in case it fails.
The value only makes sense with a non-zero <link linkend="conf-agent-retry-count">agent_retry_count</link>
or a non-zero per-query OPTION retry_count. Default is 500. This value may also be specified
on a per-query basis using the 'OPTION retry_delay=XXX' clause. If the per-query option is present, it will override the one specified in the config.
</para>
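<bridgehead>Example:</bridgehead>
<programlisting>
agent_retry_delay = 250 # retry failed agents after a 250 msec delay
</programlisting>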
</sect2>
</sect1>
<sect1 id="confgroup-common"><title>Common section configuration options</title>
<sect2 id="conf-lemmatizer-base"><title>lemmatizer_base</title>
<para>
Lemmatizer dictionaries base path.
Optional, default is /usr/local/share (as set by the --datadir switch of the ./configure script).
Added in version 2.1.1-beta.
</para>
<para>
Our lemmatizer implementation (see <xref linkend="conf-morphology"/> for a discussion
of what lemmatizers are) is dictionary driven. The lemmatizer_base directive configures
the base dictionary path. File names are hardcoded and specific to a given lemmatizer;
the Russian lemmatizer uses the ru.pak dictionary file. The dictionaries can be obtained
from the Sphinx website.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
lemmatizer_base = /usr/local/share/sphinx/dicts/
</programlisting>
</sect2>
<sect2 id="conf-on-json-attr-error"><title>on_json_attr_error</title>
<para>
What to do if JSON format errors are found.
Optional, default value is <option>ignore_attr</option> (ignore errors).
Applies only to <option>sql_attr_json</option> attributes.
Added in 2.1.1-beta.
</para>
<para>
By default, JSON format errors are ignored (<option>ignore_attr</option>) and
the indexer tool will just show a warning. Setting this option to <option>fail_index</option>
will instead make indexing fail at the first JSON format error.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
on_json_attr_error = ignore_attr
</programlisting>
</sect2>
<sect2 id="conf-json-autoconv-numbers"><title>json_autoconv_numbers</title>
<para>
Automatically detect and convert JSON strings that represent numbers into numeric attributes.
Optional, default value is 0 (do not convert strings into numbers).
Added in 2.1.1-beta.
</para>
<para>
When this option is 1, values such as "1234" will be indexed as numbers instead
of strings; if the option is 0, such values will be indexed as strings.
This conversion applies to any data source, that is, JSON attributes originating
from either SQL or XMLpipe2 sources will all be affected.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
json_autoconv_numbers = 1
</programlisting>
</sect2>
<sect2 id="conf-json-autoconv-keynames"><title>json_autoconv_keynames</title>
<para>
Whether and how to auto-convert key names within JSON attributes.
The only known value is 'lowercase'.
Optional, default value is unspecified (do not convert anything).
Added in 2.1.1-beta.
</para>
<para>
When this directive is set to 'lowercase', key names within JSON attributes
will be automatically brought to lower case when indexing.
This conversion applies to any data source, that is, JSON attributes originating
from either SQL or XMLpipe2 sources will all be affected.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
json_autoconv_keynames = lowercase
</programlisting>
</sect2>
<sect2 id="conf-rlp-root"><title>rlp_root</title>
<para>
Path to the RLP root folder. Mandatory if RLP is used.
Added in 2.2.1-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rlp_root = /home/myuser/RLP
</programlisting>
</sect2>
<sect2 id="conf-rlp-environment"><title>rlp_environment</title>
<para>
RLP environment configuration file. Mandatory if RLP is used.
Added in 2.2.1-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rlp_environment = /home/myuser/RLP/rlp-environment.xml
</programlisting>
</sect2>
<sect2 id="conf-rlp-max-batch-size"><title>rlp_max_batch_size</title>
<para>
Maximum total size of documents batched before being processed by the RLP. Optional, default is 51200.
Do not set this value higher than 10MB, because Sphinx splits large documents into 10MB chunks before passing them to the RLP.
This option has effect only if <option>morphology = rlp_chinese_batched</option> is specified.
Added in 2.2.1-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rlp_max_batch_size = 100k
</programlisting>
</sect2>
<sect2 id="conf-rlp-max-batch-docs"><title>rlp_max_batch_docs</title>
<para>
Maximum number of documents batched before being processed by the RLP. Optional, default is 50.
This option has effect only if <option>morphology = rlp_chinese_batched</option> is specified.
Added in 2.2.1-beta.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
rlp_max_batch_docs = 100
</programlisting>
</sect2>
<sect2 id="conf-plugin-dir"><title>plugin_dir</title>
<para>
Trusted location for the dynamic libraries (UDFs).
Optional, default is empty (no location).
Introduced in version 2.0.1-beta.
</para>
<para>
Specifies the trusted directory from which the
<link linkend="sphinx-udfs">UDF libraries</link> can be loaded. Requires
<link linkend="conf-workers">workers = thread</link> to take effect.
</para>
<bridgehead>Example:</bridgehead>
<programlisting>
plugin_dir = /usr/local/sphinx/lib
</programlisting>
</sect2>
</sect1>
</chapter>
<!--
<chapter id="developers"><title>Developer's corner</title>
<sect1 id="architecture-overview"><title>Sphinx architecture overview</title>
(to be added)
</sect1>
<sect1 id="adding-data-sources"><title>Adding new data source drivers</title>
(to be added)
</sect1>
<sect1 id="adding-data-sources"><title>API porting guidelines</title>
(to be added)
</sect1>
</chapter>
-->
<appendix id="changelog"><title>Sphinx revision history</title>
<sect1 id="rel2211"><title>Version 2.2.11-release, 19 jul 2016</title>
<bridgehead>New features and changes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2499 crash of daemon at phrase node with star shift; added regressions to test 41</para></listitem>
<listitem><para>Backported RE2 patch and solutions from master</para></listitem>
<listitem><para>fixed #2488 performance issue with matching hitless terms</para></listitem>
<listitem><para>fixed #2498 wrong profiling report (was filter instead get_hits)</para></listitem>
<listitem><para>fixed #2497 windows service does not handle system shutdown</para></listitem>
<listitem><para>fixed #2484 plugin ranker at distributed index, crash of daemon in case no plugin at agent and check of cache</para></listitem>
<listitem><para>fixed #2495 filtering by false/true/null values without using aliases</para></listitem>
<listitem><para>fixed #2484 plugin ranker and token_filter work with distributed index; set master ver to 14</para></listitem>
<listitem><para>fixed #2485 client failed to parse reply with string_ptr attribute via API; added regression to test 159</para></listitem>
<listitem><para>fixed #2462 added OPTION statements to SphinxQL log; fixed string split buffer overrun</para></listitem>
<listitem><para>fixed #2458 expression with escaped quote was cut at distributed index; added regressions to test 125</para></listitem>
<listitem><para>fixed #2459 incorrect indexer exit code for indexing multiple indexes</para></listitem>
<listitem><para>fixed #2452 group by aliased JSON array</para></listitem>
<listitem><para>optimized sphWildcardMatch (UTF-8 vs ASCII)</para></listitem>
<listitem><para>fixed #2451 UTF-8 support for extended wildcards (?, %)</para></listitem>
<listitem><para>fixed #2434 multiple declaration of same attribute breaks *sv pipe data sources and index; added test 257</para></listitem>
<listitem><para>fixed #2437 64-bit values comparison for ALL/ANY/INDEXOF functions</para></listitem>
<listitem><para>fixed #2434 field modifier error for over-short word; added regression to test 211</para></listitem>
<listitem><para>fixed #2435 added HAVING statement to sphinxql query log</para></listitem>
<listitem><para>fixed #2433 crash of indexer for csv source with escaped quote inside quotation; added regression to tests</para></listitem>
<listitem><para>fixed #2431 indexer crash on multiple escape chars at csv source; added regression to tests</para></listitem>
<listitem><para>backported svn:r5092 git:3b5bf10bb852e992f4e02d6f379899413549b5fe</para></listitem>
<listitem><para>fixed #2429 crashes of service threads produced no crash log</para></listitem>
<listitem><para>fixed #2416 crash of daemon on hit stream with wrong qpos; fixed tests 184, 251</para></listitem>
<listitem><para>fixed #2427 no warnings for short wildcards inside word; added regressions to test 173</para></listitem>
<listitem><para>fixed #2420 count(*) statement vs space characters at facet; added cases to test 226</para></listitem>
<listitem><para>fixed #2209 prohibited order by MVA, backported from master, added error message; fixed tests 20, 180</para></listitem>
<listitem><para>fixed #2419 ATTACH RT index missed doc-id duplicates for id32; fixed test 187</para></listitem>
<listitem><para>fixed #2421 daemon crash on complex query with field start and quorum operators</para></listitem>
<listitem><para>fixed #2417 destination wordform from multiform was stemmed; added regressions to test 22</para></listitem>
<listitem><para>fixed #2419 ATTACH RT index missed doc-id duplicates; added regression to test 184</para></listitem>
<listitem><para>fixed #2418 wildcards besides star did not work in snippets; added cases to test 40</para></listitem>
<listitem><para>fixed #2406 ALTER RECONFIGURE mess with wordforms due to wrong creation order; added regression to test 255</para></listitem>
<listitem><para>fixed #2405 crash of daemon on quorum query with duplicates from expand_keywords; added regression to test 54</para></listitem>
<listitem><para>fixed #2404 ok reply for ALTER query to missed index; added regression to test 213</para></listitem>
<listitem><para>fixed #2394 multi query with profiling enabled; added statements SET, SHOW PLAN, SHOW profile to multi query; added regression to test 113; fixed ubertest to clean up multi query result sets</para></listitem>
<listitem><para>fixed #2398 lcs calculation for large delta positions; added regression to tests 175, 68</para></listitem>
<listitem><para>fixed #2399 crash of daemon for query with json attribute at HAVING; added regressions to test 171</para></listitem>
<listitem><para>fixed #2397 inplace JSON update for several 64-bit values</para></listitem>
<listitem><para>fixed #1415 (destination tokens in wordforms couldn't be stopwords)</para></listitem>
<listitem><para>fixed #2315 crash of daemon on rotating config without indexes</para></listitem>
<listitem><para>fixed #14 (github) RT index without any attribute; fixed tests 88, 175, 181, 192</para></listitem>
<listitem><para>fixed #2389 indextool now reports on empty segment of RT index at check mode</para></listitem>
<listitem><para>fixed #2383 wrong indexer exit code with nohup option</para></listitem>
<listitem><para>fixed #2365 multiple sysvars support in select statement (for Connector/J 5.1.36+)</para></listitem>
<listitem><para>fixed #2363 ping to a bad HA mirror paused the accept thread at daemon</para></listitem>
<listitem><para>fixed #2376 json-dependent columns vs is null and order by</para></listitem>
<listitem><para>fixed #2355 result set missed SNIPPET expression calc for query with offset; added regressions to test 143</para></listitem>
<listitem><para>fixed JSON type conversion, fixed memory leak, updated test 206</para></listitem>
<listitem><para>fixed #2375 json field type autoconversion in expressions</para></listitem>
<listitem><para>fixed #2363 ping to a bad HA mirror paused the accept thread at daemon; fixed missed statistics lock</para></listitem>
<listitem><para>fixed #2373 group by 1-character json field, updated test 206</para></listitem>
<listitem><para>fixed crash log time double setup from previous commit</para></listitem>
<listitem><para>fixed #2363 ping to a bad HA mirror paused the accept thread at daemon (affects all workers but not prefork); moved ping to a separate thread</para></listitem>
<listitem><para>fixed #2364 case-sensitive fields support for distributed indexes</para></listitem>
<listitem><para>fixed #2361 daemon crash with quorum operator; added warning on replacing quorum operator</para></listitem>
<listitem><para>fixed #2359 agent balancing; added tests on balancing; fixed ping to affect only counters but not timings; fixed network statistics aggregation; fixed timeout exit on reading agent reply; fixed mirror statistics output to be more consistent; added log_debug_filter feature to log only matched messages</para></listitem>
<listitem><para>fixed #2356 added error handling for large (>0x400000 bytes) JSON fields</para></listitem>
<listitem><para>fixed #2348 max_matches option did not affect facet queries</para></listitem>
<listitem><para>fixed #2337 crash of indexer on indexing large data passed through regexp_filter; added regression to test 194</para></listitem>
<listitem><para>fixed #2336 field modifiers assigned improperly for query with multi destination wordforms; added regressions to test 22</para></listitem>
<listitem><para>fixed #2341 crash of daemon with memory corruption at query to multiple indexes with pool attributes; added regression to test 159</para></listitem>
<listitem><para>fixed #2339 indexer crashed on lemma length overflow; added regression to test 207</para></listitem>
<listitem><para>fixed #1912 indextool memory usage during check of huge index (skiplist keep on disk and docid list variants to use less memory)</para></listitem>
<listitem><para>added string list filter; set master version to 12, API protocol version to 1.31; fixed #2334 daemon crash on filtering by string column via API; fixed string values IN() filter for distributed index; added regression to tests 60, 125</para></listitem>
<listitem><para>fixed #2296 wrong snippet produced from wordforms with multiple destination tokens and AOT dictionary; added regressions to test 223</para></listitem>
<listitem><para>fixed #1675 ZONESPAN was silently ignored with BEFORE operator, added warning</para></listitem>
<listitem><para>fixed #2329 slow wildcard matching (star vs long terms could take over 100 seconds); added regressions to tests</para></listitem>
<listitem><para>fixed #2323 missed field id at packedfactors with json enabled output; fixed model at test 217</para></listitem>
<listitem><para>fixed test 125, enabled back regression</para></listitem>
<listitem><para>fixed #2325 false positive result of indextool check mode for bad index with crc dictionary</para></listitem>
<listitem><para>fixed #2324 daemon crash on handling a query via API with an empty select list to indexes with dissimilar structure; added regression to test 113</para></listitem>
<listitem><para>fixed JSON mapping from the previous commit (consumes less memory this way)</para></listitem>
<listitem><para>fixed #2320 rt index crashes on groupby() for large JSON fields</para></listitem>
<listitem><para>fixed indextool --check vs nested JSON objects</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel2210"><title>Version 2.2.10-release, 07 sep 2015</title>
<bridgehead>New features and changes</bridgehead>
<itemizedlist>
<listitem><para>added #2310, <option>--replay-flags=ignore-open-errors</option> switch to replay binlogs even if some files are missing</para></listitem>
<listitem><para>added #2234, support for empty string values (stringattr='') in <link linkend="sphinxql-select">WHERE</link> clause</para></listitem>
<listitem><para>added #2233, support for <code>IN()</code> filters with string values</para></listitem>
<listitem><para>added #2232, string collation support in <link linkend="sphinxql-select">SELECT</link> expressions</para></listitem>
<listitem><para>added #2121, "where flt&lt;&gt;val" support, "where fltcol=intval" and "where fltcol!=intval" conditions</para></listitem>
<listitem><para>added #2119, new <filename>indexer</filename> exit code 2 on a <option>--rotate</option> failure</para></listitem>
<listitem><para>fixed #2207, unified <option>min_prefix_len</option>, <option>min_infix_len</option> behavior between RT and plain indexes</para></listitem>
<listitem><para>fixed #2020, unified (and greatly shortened) the list of SphinxQL <link linkend="sphinxql-reserved-keywords">reserved keywords</link> between indexer checks, SphinxQL parser checks, and the documentation</para></listitem>
</itemizedlist>
<bridgehead>Major bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2251, expressions dependent on aggregation results (eg. as in SELECT MAX(id) m1, m1+10 m2) were not computed properly in RT indexes</para></listitem>
<listitem><para>fixed #2248, <link linkend="expr-func-length">LENGTH()</link> was 2x off for 64-bit MVA attributes</para></listitem>
<listitem><para>fixed #2146, <link linkend="sphinxql-optimize-index">OPTIMIZE</link> could occasionally break big RT indexes (by violating 4/16 GB string/MVA per chunk size limits)</para></listitem>
<listitem><para>fixed #2118, multi-wordforms with clashing prefixes were processed in a wrong order</para></listitem>
<listitem><para>fixed #1926, disabled and later re-enabled indexes were not picked up again by <filename>searchd</filename> on SIGHUP</para></listitem>
</itemizedlist>
<bridgehead>Minor bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2312, using FACTORS() along with a subtree cache could crash (because of wrong qpos values from the cache being passed to the ranker)</para></listitem>
<listitem><para>fixed #2310, comparing a non-existent JSON field with a string constant (as in jcol.some_typo='abc') could crash</para></listitem>
<listitem><para>fixed #2309, UDFs with BIGINT return were saved without a type into sphinxql_state file</para></listitem>
<listitem><para>fixed #2305, punctuation chars not mentioned in <link linkend="conf-charset-table">charset_table</link> could still occasionally affect term position in the query</para></listitem>
<listitem><para>fixed #2303, a combination of <link linkend="conf-hitless-words">hitless_words</link>, lemmatizer_all, and a phrase operator could match a wrong result set</para></listitem>
<listitem><para>fixed #2301, <filename>searchd</filename> could sometimes crash on shutdown (at pid file unlink()) if the config was reloaded</para></listitem>
<listitem><para>fixed #2296, wordforms with multiple destination tokens broke snippet highlighting</para></listitem>
<listitem><para>fixed #2290, error in the middle of a multi-query batch did not abort SphinxQL packet, causing problems with some MySQL drivers like PHP mysqlnd</para></listitem>
<listitem><para>fixed #2286, multi-queries with different string filters were incorrectly considered identical</para></listitem>
<listitem><para>fixed #2280, HTML stripper incorrectly parsed hexadecimal NCRs (e.g. &amp;#xC0;)</para></listitem>
<listitem><para>fixed #2273, a bit better error message when OPTIMIZE fails on a too big chunk</para></listitem>
<listitem><para>fixed #2258, some ranking FACTORS() were off when lemmatizer expansions yielded duplicate terms</para></listitem>
<listitem><para>fixed #2257, OR operator over conditional operators could crash</para></listitem>
<listitem><para>fixed #2242, added whitespace support to SNIPPET() before_match/after_match options, and fixed the handling of repeated %PASSAGE_ID% macros</para></listitem>
<listitem><para>fixed #2238, added a few safeguards to prevent crashes/freezes on loading damaged RT RAM chunks</para></listitem>
<listitem><para>fixed #2237, ATTACH-ing a part of a distributed index did not correctly invalidate it, could crash</para></listitem>
<listitem><para>fixed #2235, <link linkend="sphinxql-update">UPDATE</link> ... OPTION <option>strict=1</option> did not work with plain indexes</para></listitem>
<listitem><para>fixed #2225, <filename>searchd</filename> crashed on startup if agent host string was empty</para></listitem>
<listitem><para>fixed #2127, <filename>indextool</filename> did not handle RT indexes with updated JSON attributes in them</para></listitem>
<listitem><para>fixed #2117, <link linkend="sort-expr">GEODIST()</link> calls with hash {in=deg,out=mi} arguments on a distributed index did not parse correctly</para></listitem>
<listitem><para>fixed #2113, @@relaxed could occasionally crash certain complex queries</para></listitem>
<listitem><para>fixed #2106, using GROUP N BY with a custom ranker and FACTORS() caused crashes and memory leaks</para></listitem>
<listitem><para>fixed #2093, wildcard character at the end of the keyword could sometimes erroneously produce no matches</para></listitem>
<listitem><para>fixed #2088, NEAR operator with NOT argument could crash</para></listitem>
<listitem><para>fixed #1929, allowed `123abc` column names in SphinxQL SELECT (alas, they are still allowed in <filename>indexer</filename>)</para></listitem>
<listitem><para>fixed #1889, #1890, #1891, a few typo-style bugs in <filename>libsphinxclient</filename> sphinx_set_field_weights(), sphinx_set_index_weights(), sphinx_add_filter_entry()</para></listitem>
<listitem><para>fixed #1859, #2202, XML/TSV/CSV sources now work with control characters like EOF, and UTF BOM marks</para></listitem>
<listitem><para>fixed #1815, a number of SphinxSE issues (inet address endpoint, too big numbers at MVA, and MVA inserts/replaces via SphinxQL)</para></listitem>
<listitem><para>fixed #1704, <link linkend="expr-func-contains">CONTAINS()</link> now correctly handles polygons with duplicated points</para></listitem>
<listitem><para>fixed #1643, <link linkend="expr-func-crc32">CRC32()</link> is now properly evaluated as unsigned in BIGINT context</para></listitem>
<listitem><para>fixed #1567, #1747, #2245, column name quotation could fail in UDF expressions and distributed queries</para></listitem>
<listitem><para>fixed #1551, off-by-one blended keyword position errors in proximity queries (phrase, NEAR, etc)</para></listitem>
<listitem><para>fixed #1528, metaphone on too long (eg. Chinese or Japanese) strings crashed with a buffer overflow</para></listitem>
<listitem><para>fixed #1510, added an unknown field warning to SetFieldWeights() API call and SphinxQL OPTION field_weights</para></listitem>
<listitem><para>fixed #1367, remote agents (in distributed index) could not be accessed via UNIX sockets</para></listitem>
<listitem><para>fixed #1349, max_matches=0 was not handled correctly in <filename>libsphinxclient</filename> sphinx_set_limits()</para></listitem>
<listitem><para>fixed Github PR-1, SphinxSE TLS leak on table reopen</para></listitem>
<listitem><para>fixed <filename>searchd</filename> crash when trying to load a damaged index with an incorrect row count</para></listitem>
<listitem><para>fixed <filename>indextool</filename> MVA checks (an index error could sometimes be mistakenly reported)</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel229"><title>Version 2.2.9-release, 16 apr 2015</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2228, removed <filename>searchd</filename> shutdown behavior on failed connection</para></listitem>
<listitem><para>fixed #2208, <link linkend="extended-syntax">ZONESPANLIST()</link> support for RT indexes</para></listitem>
<listitem><para>fixed #2203, legacy API <link linkend="api-reference">SELECT</link> list</para></listitem>
<listitem><para>fixed #2201, <filename>indextool</filename> false positive error on RT index</para></listitem>
<listitem><para>fixed #2201, crash with string comparison at expressions and expression ranker</para></listitem>
<listitem><para>fixed #2199, invalid packedfactors JSON output for index with stopwords</para></listitem>
<listitem><para>fixed #2197, <link linkend="sphinxql-truncate-rtindex">TRUNCATE</link> fails to remove disk chunk files after calling <link linkend="sphinxql-optimize-index">OPTIMIZE</link></para></listitem>
<listitem><para>fixed #2196, .NET connector issue (UTC_TIMESTAMP() support)</para></listitem>
<listitem><para>fixed #2190, incorrect <link linkend="api-funcgroup-groupby">GROUP BY</link> outer JSON object</para></listitem>
<listitem><para>fixed #2176, agent used <option>ha_strategy=random</option> instead of the one specified in config</para></listitem>
<listitem><para>fixed #2144, query parser crash vs multiforms with leading numbers</para></listitem>
<listitem><para>fixed #2122, id64 daemon failed to load RT disk chunk with kill-list from id32 build</para></listitem>
<listitem><para>fixed #2120, aliased JSON elements support</para></listitem>
<listitem><para>fixed #1979, snippet generation, span length, and lcs calculation in proximity queries</para></listitem>
<listitem><para>fixed truncated results (and a potential crash) vs long enough ZONESPANLIST() result</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel228"><title>Version 2.2.8-release, 09 mar 2015</title>
<bridgehead>Minor features</bridgehead>
<itemizedlist>
<listitem><para>added #2166, per agent HA strategy for distributed indexes</para></listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2182, incorrect query results with multiple same destination wordforms</para></listitem>
<listitem><para>fixed #2181, improved error message on incorrect filters</para></listitem>
<listitem><para>fixed #2178, ZONESPAN operator for queries with more than two words</para></listitem>
<listitem><para>fixed #2172, incorrect results with field position fulltext operators</para></listitem>
<listitem><para>fixed #2171, some index options do not work for template indexes</para></listitem>
<listitem><para>fixed #2170, joined fields indexation with document id equal to 0</para></listitem>
<listitem><para>fixed #2110, crash on snippet generation</para></listitem>
<listitem><para>fixed WLCCS ranking factor computation</para></listitem>
<listitem><para>fixed memory leak on queries with ZONEs</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel227"><title>Version 2.2.7-release, 20 jan 2015</title>
<bridgehead>Minor features</bridgehead>
<itemizedlist>
<listitem><para>added #2112, string equal comparison support for <link linkend="expr-func-if">IF()</link> function (for JSON and string attributes)</para></listitem>
<listitem><para>added #2153, <link linkend="expr-func-in">IN()</link> support for mixed and top-level JSON arrays</para></listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2158, crash at RT index after morphology changed to AOT after index was created</para></listitem>
<listitem><para>fixed #2155, <link linkend="conf-stopwords">stopwords</link> got missed on disk chunk save at RT index</para></listitem>
<listitem><para>fixed #2151, agent statistics went missing with a huge number of agents</para></listitem>
<listitem><para>fixed #2139, escape all special characters in JSON result set, according to RFC 4627</para></listitem>
<listitem><para>fixed #2123, no pid file created in x64 release built with vs2012</para></listitem>
<listitem><para>fixed #2115, <filename>indexer</filename> crash on wordforms with multiple destination keywords</para></listitem>
<listitem><para>fixed #2050, multi result set doesn't work without <filename>libmysqlclient</filename></para></listitem>
<listitem><para>fixed #2003, <link linkend="conf-morphology">lemmatize_XX_all</link> handling of short and exact words</para></listitem>
<listitem><para>fixed #1912, reduce <filename>indextool</filename> memory usage during a check of a huge index</para></listitem>
<listitem><para>fixed off-by-one errors in filtering of <code>BIGINT</code> attributes</para></listitem>
<listitem><para>fixed seamless rotation in <link linkend="conf-workers">prefork</link> mode</para></listitem>
<listitem><para>fixed snippets crash with <link linkend="conf-blend-chars">blend chars</link> at the beginning of a string</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel226"><title>Version 2.2.6-release, 13 nov 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2104, <link linkend="expr-func-all">ALL()</link>/ANY()/INDEXOF() support for distributed indexes</para></listitem>
<listitem><para>fixed #2102, show agent status misses warnings from agents</para></listitem>
<listitem><para>fixed #2100, crash of <filename>indexer</filename> while loading stopwords with tokenizer plugin</para></listitem>
<listitem><para>fixed #2098, arbitrary JSON subkeys and IS NULL for distributed indexes</para></listitem>
<listitem><para>fixed #2097, escaping of field-start modifier</para></listitem>
<listitem><para>fixed possible memory leak in plugin creation function</para></listitem>
<listitem><para>fixed indexation of duplicate documents</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel225"><title>Version 2.2.5-release, 06 oct 2014</title>
<bridgehead>New minor features</bridgehead>
<itemizedlist>
<listitem><para>added OPTION <link linkend="sphinxql-select">rand_seed</link> which affects ORDER BY RAND()</para></listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2042, <filename>indextool</filename> fails with field mask on 32+ fields</para></listitem>
<listitem><para>fixed #2031, wrong encoding with UnixODBC/Oracle source</para></listitem>
<listitem><para>fixed #2056, several bugs in RLP tokenizer</para></listitem>
<listitem><para>fixed #2054, <link linkend="sphinxql-threads">SHOW THREADS</link> hangs if queries run in prefork mode</para></listitem>
<listitem><para>fixed #2057, WARNING at <filename>indexer</filename> on duplicated wordforms</para></listitem>
<listitem><para>fixed #2066, snippet generation with <link linkend="api-func-buildexcerpts">weight_order</link> enabled</para></listitem>
<listitem><para>fixed exception parsing in queries</para></listitem>
<listitem><para>fixed crash in config parser</para></listitem>
<listitem><para>fixed MySQL protocol response when daemon maxed out</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel224"><title>Version 2.2.4-release, 11 sep 2014</title>
<bridgehead>New major features</bridgehead>
<itemizedlist>
<listitem><para>added <link linkend="sphinxql-attach">ALTER</link> RTINDEX rt1 RECONFIGURE which allows to change RT index settings on the fly</para></listitem>
<listitem><para>added <link linkend="sphinxql-show-index-settings">SHOW INDEX idx1 SETTINGS</link> statement</para></listitem>
<listitem><para>added ability to specify several destination forms for the same source wordform (as a result, N:M mapping is now available)</para></listitem>
<listitem><para>added blended chars support to exceptions</para></listitem>
</itemizedlist>
<bridgehead>New minor features</bridgehead>
<itemizedlist>
<listitem><para>added <link linkend="expr-func-any">ANY()</link>/<link linkend="expr-func-all">ALL()</link>/<link linkend="expr-func-indexof">INDEXOF()</link> support for JSON string arrays</para></listitem>
<listitem><para>added FACTORS() alias for <link linkend="expr-func-packedfactors">PACKEDFACTORS()</link> function</para></listitem>
<listitem><para>added <code>LIMIT</code> clause for the <link linkend="sphinxql-select">FACET</link> keyword</para></listitem>
<listitem><para>added JSON-formatted output to <code>PACKEDFACTORS()</code> function</para></listitem>
<listitem><para>added #1999 <link linkend="expr-func-atan2">ATAN2()</link> function</para></listitem>
<listitem><para>added connections counter and also avg and max timers to agent status</para></listitem>
<listitem><para>added <filename>searchd</filename> configuration keys <link linkend="conf-agent-connect-timeout">agent_connect_timeout</link>, <link linkend="conf-agent-query-timeout">agent_query_timeout</link>, <link linkend="conf-agent-retry-count">agent_retry_count</link> and <link linkend="conf-agent-retry-delay">agent_retry_delay</link></para></listitem>
<listitem><para><link linkend="sphinxql-select">GROUPBY()</link> function now returns strings for string attributes</para></listitem>
</itemizedlist>
<bridgehead>Optimizations and removals</bridgehead>
<itemizedlist>
<listitem><para>optimized <link linkend="conf-json-autoconv-numbers">json_autoconv_numbers</link> option speed</para></listitem>
<listitem><para>optimized tokenizing with exceptions on</para></listitem>
<listitem><para>fixed #1970, speeding up <link linkend="extended-syntax">ZONE and ZONESPAN</link> operators</para></listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #2027, slow queries to multiple indexes with large kill-lists</para></listitem>
<listitem><para>fixed #2022, blend characters of matched word must not be outside of snippet passage</para></listitem>
<listitem><para>fixed #2021, output units in <link linkend="expr-func-geodist">GEODIST()</link> function</para></listitem>
<listitem><para>fixed #2018, different wildcard behaviour in RT and plain indexes</para></listitem>
<listitem><para>fixed #2005, aggregate functions improperly calculate aliased expressions</para></listitem>
<listitem><para>fixed #1972, daemon crashes trying to read a big (>8G) .spm file</para></listitem>
<listitem><para>fixed #1966, <link linkend="expr-func-interval">INTERVAL()</link> function does not work with JSON fields</para></listitem>
<listitem><para>fixed #1963, <code>GROUPBY()</code> on JSON attributes sometimes yields NULL</para></listitem>
<listitem><para>fixed <code>GROUPBY()</code> on empty JSON arrays to return NULL instead of []</para></listitem>
<listitem><para>fixed buffer overrun when sizing packed factors (with way too many fields) in expression ranker</para></listitem>
<listitem><para>fixed cpu time logging for cases where work is done in child threads or agents</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel223"><title>Version 2.2.3-beta, 13 may 2014</title>
<bridgehead>New features</bridgehead>
<itemizedlist>
<listitem><para>added <ulink url="http://sphinxsearch.com/bugs/view.php?id=1920">#1920</ulink>, <link linkend="conf-charset-table">charset_table</link> aliases</para></listitem>
<listitem><para>added <ulink url="http://sphinxsearch.com/bugs/view.php?id=1887">#1887</ulink>, filtering over string attributes</para></listitem>
<listitem><para>added <ulink url="http://sphinxsearch.com/bugs/view.php?id=1860">#1860</ulink>, <link linkend="sphinxql-set">USERVARs</link> for distributed indexes</para></listitem>
<listitem><para>added <ulink url="http://sphinxsearch.com/bugs/view.php?id=1689">#1689</ulink>, <link linkend="sphinxql-select">GROUP BY JSON</link> attributes</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">FACET</link> keyword</para></listitem>
<listitem><para>added Go MySQL connector support</para></listitem>
<listitem><para>added <link linkend="extended-syntax">IDF boost</link> keyword modifier</para></listitem>
<listitem><para>added <link linkend="extended-syntax">MAYBE</link> fulltext operator</para></listitem>
</itemizedlist>
<bridgehead>Optimizations and removals</bridgehead>
<itemizedlist>
<listitem><para>improved speed of concurrent insertion in RT indexes</para></listitem>
<listitem><para>removed <link linkend="sphinx-deprecations-defaults">max_matches</link> config key</para></listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1946, <link linkend="expr-func-in">IN()</link> function support for string attributes</para></listitem>
<listitem><para>fixed #1942, crash in <link linkend="sphinxql-threads">SHOW THREADS</link> command</para></listitem>
<listitem><para>fixed #1922, crash on snippet generation for queries with duplicated words</para></listitem>
<listitem><para>fixed #1919, <link linkend="xsvpipe">TSV</link> bitcount attributes indexation issue</para></listitem>
<listitem><para>fixed #1916, <link linkend="sphinxql-select">COUNT(*)</link> with empty result set</para></listitem>
<listitem><para>fixed #1910, JSON parsing issue</para></listitem>
<listitem><para>fixed #1906, <link linkend="extended-syntax">ZONE</link> constraints for expanded terms</para></listitem>
<listitem><para>fixed #1904, race condition in RT indexes on saving disk chunk</para></listitem>
<listitem><para>fixed #1899, crash on <link linkend="sphinxql-call-keywords">CALL KEYWORDS</link></para></listitem>
<listitem><para>fixed #1893, <filename>searchd</filename> crashes on expressions like 'a&lt;&lt;(*!b)'</para></listitem>
<listitem><para>fixed #1884, crash with <link linkend="sphinxql-select">SNIPPET()</link> function over distributed index</para></listitem>
<listitem><para>fixed #1883, crash at expanded keyword with hitless index</para></listitem>
<listitem><para>fixed #1870, crash on <link linkend="sphinxql-select">ORDER BY JSON</link> attributes</para></listitem>
<listitem><para>fixed template index removal on rotation</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel222"><title>Version 2.2.2-beta, 11 feb 2014</title>
<bridgehead>New features</bridgehead>
<itemizedlist>
<listitem><para>added #1604, <link linkend="sphinxql-call-keywords">CALL KEYWORDS</link> can now show multiple lemmas for a keyword</para></listitem>
<listitem><para>added <link linkend="sphinxql-attach">ALTER TABLE DROP COLUMN</link></para></listitem>
<listitem><para>added ALTER for JSON/string/MVA attributes</para></listitem>
<listitem><para>added <link linkend="expr-func-remap">REMAP()</link> function which surpasses SetOverride() API</para></listitem>
<listitem><para>added an argument to <link linkend="misc-functions">PACKEDFACTORS()</link> to disable ATC calculation (syntax: PACKEDFACTORS({no_atc=1}))</para></listitem>
<listitem><para>added exact phrase query syntax</para></listitem>
<listitem><para>added flag <option>'--enable-dl'</option> to configure script which works with <filename>libmysqlclient</filename>, <filename>libpostgresql</filename>, <filename>libexpat</filename>, <filename>libunixodbc</filename></para></listitem>
<listitem><para>added new plugin system: <link linkend="sphinxql-create-plugin">CREATE</link>/<link linkend="sphinxql-drop-plugin">DROP PLUGIN</link>, <link linkend="sphinxql-show-plugins">SHOW PLUGINS</link>, <link linkend="conf-plugin-dir">plugin_dir</link> now in common, <link linkend="sphinxql-create-plugin">index/query_token_filter</link> plugins</para></listitem>
<listitem><para>added <link linkend="conf-ondisk-attrs">ondisk_attrs</link> support for RT indexes</para></listitem>
<listitem><para>added position shift operator to phrase operator</para></listitem>
<listitem><para>added possibility to add user-defined rankers (via <link linkend="extending-sphinx">plugins</link>)</para></listitem>
</itemizedlist>
<bridgehead>Optimizations, behavior changes, and removals</bridgehead>
<itemizedlist>
<listitem><para>changed #1797, per-term statistics report (expanded terms fold to their respective substrings)</para></listitem>
<listitem><para>changed default <link linkend="conf-thread-stack">thread_stack</link> value to 1M</para></listitem>
<listitem><para>changed the local directive in a distributed index to now take a list (eg. <option>local=shard1,shard2,shard3</option>)</para></listitem>
<listitem><para>deprecated <link linkend="api-func-setmatchmode">SetMatchMode()</link> API call</para></listitem>
<listitem><para>deprecated <link linkend="api-func-setoverride">SetOverride()</link> API call</para></listitem>
<listitem><para>optimized infix searches for dict=keywords</para></listitem>
<listitem><para>optimized kill lists in plain and RT indexes</para></listitem>
<listitem><para>removed deprecated <option>"address"</option> and <option>"port"</option> config keys</para></listitem>
<listitem><para>removed deprecated CLI <filename>search</filename> and <option>sql_query_info</option></para></listitem>
<listitem><para>removed deprecated <option>charset_type</option> and <option>mssql_unicode</option></para></listitem>
<listitem><para>removed deprecated <option>enable_star</option></para></listitem>
<listitem><para>removed deprecated <option>ondisk_dict</option> and <option>ondisk_dict_default</option></para></listitem>
<listitem><para>removed deprecated <option>str2ordinal</option> attributes</para></listitem>
<listitem><para>removed deprecated <option>str2wordcount</option> attributes</para></listitem>
<listitem><para>removed support for client versions 0.9.6 and below</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel221"><title>Version 2.2.1-beta, 13 nov 2013</title>
<bridgehead>Major new features</bridgehead>
<itemizedlist>
<listitem>added <link linkend="sphinxql-attach">ALTER TABLE</link> that can add attributes to disk and RT indexes on the fly</listitem>
<listitem>added ATTACH support for non-empty RT target indexes</listitem>
<listitem>added Chinese segmentation with <link linkend="conf-morphology">RLP</link> (Rosette Linguistics platform) support</listitem>
<listitem>added English, German <link linkend="conf-morphology">lemmatization</link> support</listitem>
<listitem>added <link linkend="sphinxql-select">HAVING</link> support to SELECT statement, filtering on aggregate values is now possible</listitem>
<listitem>added <link linkend="sphinxql-select">N-best GROUP BY</link> extension to return more than 1 row per group</listitem>
<listitem>added RT index support for <link linkend="conf-index-field-lengths">index_field_lengths=1</link>, bitfield attributes, and multiforms</listitem>
<listitem>added <link linkend="xsvpipe">CSV</link>, <link linkend="xsvpipe">TSV</link> data sources</listitem>
<listitem>added full <link linkend="conf-sql-attr-json">JSON</link> attributes support, arbitrary JSON documents (with subobjects etc) can now be stored</listitem>
<listitem>added in-place JSON updates for scalar values</listitem>
<listitem>added index <link linkend="conf-index-type">type=template</link> directive (allows CALL KEYWORDS, CALL SNIPPETS)</listitem>
<listitem>added <link linkend="conf-ondisk-attrs">ondisk_attrs</link>, <link linkend="conf-ondisk-attrs-default">ondisk_attrs_default</link> directives that keep attributes on disk</listitem>
<listitem>added table functions mechanism, and <link linkend="sphinxql-select">REMOVE_REPEATS()</link> table function</listitem>
<listitem>added support for arbitrary expressions in WHERE for DELETE queries</listitem>
</itemizedlist>
<bridgehead>Ranking related features</bridgehead>
<itemizedlist>
<listitem>added OPTION <link linkend="sphinxql-select">local_df=1</link>, an option to aggregate IDFs over local indexes (shards)</listitem>
<listitem>added <link linkend="sphinx-udfs">UDF</link> XXX_reinit() method to reload UDFs with <option>workers=prefork</option></listitem>
<listitem>added comma-separated syntax to <link linkend="sphinxql-select">OPTION</link> <option> idf</option>, <option>tfidf_unnormalized</option> and <option>tfidf_normalized</option> flags</listitem>
<listitem>added <option>lccs</option>, <option>wlccs</option>, <option>exact_order</option>, <option>min_gaps</option>, and <option>atc </option> <link linkend="field-factors">ranking factors</link></listitem>
<listitem>added <code>sphinx_get_XXX_factors()</code>, a faster interface to access <link linkend="misc-functions">PACKEDFACTORS()</link> in UDFs</listitem>
<listitem>added support for <link linkend="field-factors">exact_hit</link>, <link linkend="field-factors">exact_order</link> field factors when using more than 32 fields (exact_hit, exact_order)</listitem>
</itemizedlist>
<bridgehead>Instrumentation features</bridgehead>
<itemizedlist>
<listitem>added <link linkend="sphinxql-describe">DESCRIBE</link> and <link linkend="ref-indextool">--dumpheader</link> support for tokencount attributes (generated by index_field_lengths=1 directive)</listitem>
<listitem>added RT index query profile, percentages, totals to <link linkend="sphinxql-show-profile">SHOW PROFILE</link></listitem>
<listitem>added <option>predicted_time</option>, <option>dist_predicted_time</option>, <option>fetched_docs</option>, <option>fetched_hits</option> counters to <link linkend="sphinxql-show-meta">SHOW META</link></listitem>
<listitem>added <option>total_tokens</option> and <option>disk_bytes</option> counters to <link linkend="sphinxql-show-index-status">SHOW INDEX STATUS</link> </listitem>
</itemizedlist>
<bridgehead>General features</bridgehead>
<itemizedlist>
<listitem>added <link linkend="expr-func-all">ALL()</link>, <link linkend="expr-func-any">ANY()</link> and <link linkend="expr-func-indexof">INDEXOF()</link> functions for JSON subarrays</listitem>
<listitem>added <link linkend="expr-func-min-top-weight">MIN_TOP_WEIGHT()</link>, <link linkend="expr-func-min-top-sortval">MIN_TOP_SORTVAL()</link> functions</listitem>
<listitem>added <link linkend="factor-aggr-functions">TOP()</link> aggregate function to expression ranker</listitem>
<listitem>added a check for duplicated tail hit positions in <link linkend="ref-indextool">indextool --check</link></listitem>
<listitem>added <link linkend="sphinxql-log-format">compact_in</link> option to <link linkend="conf-query-log-format">query_log_format=sphinxql</link></listitem>
<listitem>added distance units and calculation method options to <link linkend="expr-func-geodist">GEODIST()</link> function, optimized it a lot</listitem>
<listitem>added embedded stopwords/exceptions/wordforms to <option>--dumpheader</option></listitem>
<listitem>added <link linkend="ref-indexer">indexer --nohup</link> and <link linkend="ref-indextool">indextool --rotate</link> switches to check index files before rotating them</listitem>
<listitem>added scientific notation support for JSON attributes (as per <ulink url="http://www.ietf.org/rfc/rfc4627.txt">RFC 4627</ulink>)</listitem>
<listitem>added several SphinxQL statements to fix MySQL Workbench connection issues (LIKE for session variables, etc.)</listitem>
<listitem>added <link linkend="conf-shutdown-timeout">shutdown_timeout</link> directive to <filename>searchd</filename> config section</listitem>
<listitem>added signed values support for <link linkend="expr-func-integer">INTEGER()</link> and <link linkend="expr-func-uint">UINT()</link> function</listitem>
<listitem>added snippet generation options to <link linkend="sphinxql-select">SNIPPET()</link> function</listitem>
<listitem>added string filter support in distributed queries, SphinxAPI, SphinxQL query log</listitem>
<listitem>added support for mixed distributed and local index queries (SELECT * FROM dist1,dist2,local3), and <option>index_weights</option> option for that case</listitem>
</itemizedlist>
<bridgehead>Optimizations, behavior changes, and removals</bridgehead>
<itemizedlist>
<listitem>optimized JSON attributes access (1.12x to 2.0x+ total query speedup depending on the JSON data)</listitem>
<listitem>optimized SELECT (1.02x to 3.5x speedup, depending on index schema size)</listitem>
<listitem>optimized <link linkend="sphinxql-update">UPDATE</link> (up to 3x faster on big updates)</listitem>
<listitem>optimized away internal threads table mutex contention with <option>workers=threads</option> and 1000s of threads</listitem>
<listitem>changed [emptyword -foo] query behavior in cases when emptyword is a stopword or an overshort word, made such queries computable rather than erroneous</listitem>
<listitem>changed post-morphology <link linkend="conf-wordforms">wordforms</link> behavior, now it works as <code>'if ( stem(token)==stem(abc) ) emit(def)'</code></listitem>
<listitem>changed the <link linkend="sphinx-deprecations-defaults">config defaults</link> to <option>id64</option>, <option>dict=keywords</option>, <option>charset_type=utf-8</option>, <option>enable_star=1</option>, <option>workers=threads</option>, <option>mem_limit=128M</option>, <option>rt_mem_limit=128M</option></listitem>
<listitem>changed the default SphinxAPI matching mode to <link linkend="matching-modes">SPH_MATCH_EXTENDED2</link></listitem>
<listitem>disallowed dashes in index names in API requests (just like in SphinxQL)</listitem>
<listitem>removed legacy <option>xmlpipe</option> data source v1, <option>compat_sphinxql_magics</option> directive, <option>SetWeights()</option> SphinxAPI call, and SPH_SORT_CUSTOM SphinxAPI mode</listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem>fixed #1734, unquoted literal in json subscript could cause a crash, returns 'unknown column' now.</listitem>
<listitem>fixed #1683, under certain conditions <link linkend="conf-stopwords">stopwords</link> were not taken into account in RT indexes</listitem>
<listitem>fixed #1648, #1644, when using AOT lemmas with snippet generation, not all the forms got highlighted</listitem>
<listitem>fixed #1549, <link linkend="sphinxql-select">OPTION</link> <option>idf=tfidf_normalized</option> was ignored for distributed queries</listitem>
<listitem>fixed that <link linkend="sphinxql-select">ORDER BY RAND()</link> was not affected by <option>index_weights</option></listitem>
<listitem>fixed that float updates with integer values in SphinxQL mistakenly set the float to 0</listitem>
<listitem>fixed that <option>predicted_time</option> was not accumulated with <link linkend="conf-dist-threads">dist_threads</link></listitem>
<listitem>fixed <link linkend="sphinxql-select">GROUP_CONCAT</link> result length limit (was implicitly limited by 1024 bytes)</listitem>
<listitem>fixed agent query distribution in HA mirroring</listitem>
<listitem>fixed duplicates check for <link linkend="extended-syntax">quorum operator</link>, it works ok now for expanded keywords</listitem>
<listitem>fixed off-by-1 query positions of words in indexes with wordforms and <link linkend="extended-syntax">blended characters</link></listitem>
<listitem>fixed wrong <option>lcs</option> and <link linkend="field-factors">min_best_span_pos</link> ranking factor values when any expansion (<link linkend="conf-expand-keywords">expand_keywords</link> or lemmatize) occurred</listitem>
<listitem>fixed a crash while creating indexes with <link linkend="conf-sql-joined-field">sql_joined_field</link></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel219"><title>Version 2.1.9-release, 03 jul 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1994, parsing of empty JSON arrays</para></listitem>
<listitem><para>fixed #1987, handling of <link linkend="conf-index-exact-words">index_exact_words</link> with AOT morphology and infixes on</para></listitem>
<listitem><para>fixed #1984, teaching HTML parser to handle hex numbers</para></listitem>
<listitem><para>fixed #1983, master and agents networking issue</para></listitem>
<listitem><para>fixed #1977, escaping of characters doesn't work with exceptions</para></listitem>
<listitem><para>fixed #1968, parsing of <link linkend="sphinxql-select">WEIGHT()</link> function (queries to distributed indexes affected)</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel218"><title>Version 2.1.8-release, 28 apr 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1937, crash at <link linkend="extended-syntax">SENTENCE</link> operator</para></listitem>
<listitem><para>fixed #1933, quorum operator worked incorrectly if its number was an exception</para></listitem>
<listitem><para>fixed #1932, daemon index recovery after failed rotation</para></listitem>
<listitem><para>fixed #1923, crash at <filename>indexer</filename> with <option>dict=keywords</option></para></listitem>
<listitem><para>fixed #1918, crash when hitless words are used within fulltext operators which require hits</para></listitem>
<listitem><para>fixed #1878, daemon doesn't reset <link linkend="conf-regexp-filter">regexp_filter</link> after rotation with <link linkend="conf-seamless-rotate">seamless_rotate=0</link></para></listitem>
<listitem><para>fixed #1769, crash after unsuccessful <link linkend="sphinxql-insert">INSERT</link> at RT index</para></listitem>
<listitem><para>fixed #1682, field end modifier doesn't work with words containing blended chars</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel217"><title>Version 2.1.7-release, 30 mar 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1917, field limit propagation outside of group</para></listitem>
<listitem><para>fixed #1915, exact form passes to index skipping stopwords filter</para></listitem>
<listitem><para>fixed #1905, multiple lemmas at the end of a field</para></listitem>
<listitem><para>fixed #1903, <filename>indextool</filename> check mode for hitless indexes and indexes with a large amount of documents</para></listitem>
<listitem><para>fixed #1902, crash on JSON field in the <link linkend="expr-func-in">IN()</link> function</para></listitem>
<listitem><para>fixed #1884, crash at <link linkend="sphinxql-select">SNIPPET()</link> with local indexes at distributed index</para></listitem>
<listitem><para>fixed #1802, loading large keywords dictionary</para></listitem>
<listitem><para>fixed #1786, <filename>indextool</filename> fails to handle indexes with AOT morphology</para></listitem>
<listitem><para>fixed crash of daemon on logging extra large message</para></listitem>
<listitem><para>fixed expression engine: division by zero, log() and sqrt() functions with non-positive arguments</para></listitem>
<listitem><para>fixed LCS and min_best_span_pos computation</para></listitem>
<listitem><para>fixed unnecessary escaping in JSON result set</para></listitem>
<listitem><para>fixed Quick Tour documentation chapter</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel216"><title>Version 2.1.6-release, 24 feb 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1857, crash in arabic stemmer</para></listitem>
<listitem><para>fixed #1875, crash on adding documents with long words to a dict=keywords index with morphology and infixes enabled</para></listitem>
<listitem><para>fixed #1876, crash on words with large codepoints and infix searches</para></listitem>
<listitem><para>fixed #1880, crash on multiquery with one incorrect query</para></listitem>
<listitem><para>fixed #1882, race of periodic and forced FLUSHing on an RT index</para></listitem>
<listitem><para>fixed #1881, quorum syntax with '.' as blended char</para></listitem>
<listitem><para>fixed evaluation of LCS by an expression ranker</para></listitem>
<listitem><para>fixed #1864, <filename>indexer</filename> crash on badly formed JSON, e.g. '[,1,2,3,4,]'</para></listitem>
<listitem><para>fixed #1853, incomplete <link linkend="sphinxql-select">ORDER BY JSON</link> attribute in distributed indexes</para></listitem>
<listitem><para>fixed #1847, broken infix searches in RT indexes</para></listitem>
<listitem><para>fixed #1844, clash of mixed-case attribute and field names at CSV source</para></listitem>
<listitem><para>fixed #1840, filter by <link linkend="sphinxql-set">@uservar</link> in distributed indexes</para></listitem>
<listitem><para>fixed #1832, #1833, #1834, some big-endianness issues</para></listitem>
<listitem><para>fixed #1830, loss of <link linkend="conf-ondisk-attrs">ondisk_attrs</link> after rotation</para></listitem>
<listitem><para>fixed #1762, memory leak in <link linkend="conf-regexp-filter">regexp_filter</link></para></listitem>
<listitem><para>fixed #1759, <filename>indextool</filename> false positives on persistent MVA checking</para></listitem>
<listitem><para>fixed <link linkend="sphinxql-select">GROUP BY</link> id</para></listitem>
<listitem><para>fixed crash on sending empty snippet result</para></listitem>
<listitem><para>fixed index corruption in <link linkend="sphinxql-update">UPDATE</link> queries with non-existent attributes</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel215"><title>Version 2.1.5-release, 22 jan 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1848, infixes and morphology clash</para></listitem>
<listitem><para>fixed #1823, <filename>indextool</filename> fails to handle indexes with lemmatizer morphology</para></listitem>
<listitem><para>fixed #1799, crash in queries to distributed indexes with <link linkend="sphinxql-select">GROUP BY</link> on multiple values</para></listitem>
<listitem><para>fixed #1718, <option>expand_keywords</option> option lost in disk chunks of RT indexes</para></listitem>
<listitem><para>fixed documentation on <link linkend="conf-rt-flush-period">rt_flush_period</link></para></listitem>
<listitem><para>fixed network protocol issue which results in timeouts of <filename>libmysqlclient</filename> for big Sphinx responses</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel214"><title>Version 2.1.4-release, 18 dec 2013</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1778, indexes with more than 255 attributes</para></listitem>
<listitem><para>fixed #1777, <link linkend="sphinxql-select">ORDER BY WEIGHT()</link></para></listitem>
<listitem><para>fixed #1796, missing results in quorum-operator queries against indexes with a lemmatizer</para></listitem>
<listitem><para>fixed #1780, incorrect results while querying indexes with wordforms, a lemmatizer, and enable_star=1</para></listitem>
<listitem><para>fixed, SHOW PROFILE for fullscan queries</para></listitem>
<listitem><para>fixed, --with-re2 check</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel213"><title>Version 2.1.3-release, 12 nov 2013</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1753, path to re2 sources could not be set using <option>--with-re2</option>, options <option>--with-re2-libs</option> and <option>--with-re2-includes</option> added to <filename>configure</filename></para></listitem>
<listitem><para>fixed #1739, erroneous conversion of RAM chunk into disk chunk when loading id32 index with id64 binary</para></listitem>
<listitem><para>fixed #1738, unlinking RAM chunk when converting it to disk chunk</para></listitem>
<listitem><para>fixed #1710, unable to filter by attributes created by index_field_lengths=1</para></listitem>
<listitem><para>fixed #1716, random crash with multiple running threads</para></listitem>
<listitem><para>fixed crash while querying index with lemmatizer and wordforms</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel212"><title>Version 2.1.2-release, 10 oct 2013</title>
<bridgehead>New features</bridgehead>
<itemizedlist>
<listitem><para>added <link linkend="sphinxql-flush-ramchunk">FLUSH RAMCHUNK</link> statement</para></listitem>
<listitem><para>added <link linkend="sphinxql-show-plan">SHOW PLAN</link> statement</para></listitem>
<listitem><para>added support for <link linkend="sphinxql-select">GROUP BY</link> on multiple attributes</para></listitem>
<listitem><para>added <link linkend="expression-ranker">BM25F()</link> function to <code>SELECT</code> expressions (now works with the expression based ranker)</para></listitem>
<listitem><para>added <link linkend="ref-indextool">indextool</link> <option>--fold</option> command and <option>-q</option> switch</para></listitem>
<listitem><para>added JSON debug check for RT index RAM chunk</para></listitem>
<listitem><para>added <link linkend="expr-func-length">LENGTH()</link> function for MVA</para></listitem>
<listitem><para>added missing <link linkend="conf-rt-attr-bool">rt_attr_bool</link> directive</para></listitem>
<listitem><para>added support for selecting over 250 columns via SphinxQL</para></listitem>
<listitem><para>deprecated custom sort mode, and <option>str2ordinal</option> and <option>str2wordcount</option> attribute types</para></listitem>
<listitem><para>optimized <code>SELECT</code>, <code>UPDATE</code> for indexes with many attributes (up to 3.5x speedup in extreme cases)</para></listitem>
<listitem><para>optimized <code>JSON</code> attributes (up to 5-20% faster <code>SELECTs</code> using JSON objects)</para></listitem>
<listitem><para>optimized <link linkend="xmlpipe2">xmlpipe2</link> indexing (up to 9 times faster on some schemas)</para></listitem>
</itemizedlist>
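<para>As a quick, hedged sketch of the SphinxQL additions above (the index and attribute names <code>books</code>, <code>rt_books</code>, <code>author_id</code>, and <code>year</code> are hypothetical):</para>
<programlisting>
# group on two attributes at once
SELECT author_id, year, COUNT(*) FROM books GROUP BY author_id, year;

# enable profiling, run a query, then inspect its transformed plan
SET profiling=1;
SELECT id FROM books WHERE MATCH('raven');
SHOW PLAN;

# forcibly convert the RAM chunk of an RT index into a new disk chunk
FLUSH RAMCHUNK rt_books;
</programlisting>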
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1684, <link linkend="sphinxql-select">COUNT(DISTINCT smth)</link> with implicit <code>GROUP BY</code> returns correct value now</para></listitem>
<listitem><para>fixed #1672, exact token AOT vs lemma (<filename>indexer</filename> skips exact form of token that passed AOT through tokenizer)</para></listitem>
<listitem><para>fixed #1659, fail while loading empty infix dictionary with <link linkend="conf-dict">dict=keywords</link></para></listitem>
<listitem><para>fixed #1638, force explicit JSON type conversion for aggregate functions</para></listitem>
<listitem><para>fixed #1628, <link linkend="sphinxql-select">GROUP_CONCAT()</link> and <link linkend="sphinxql-select">GROUPBY()</link> support for distributed agents</para></listitem>
<listitem><para>fixed #1619, <code>INTEGER()</code> conversion function did not support signed integers</para></listitem>
<listitem><para>fixed #1615, global IDF vs exact term (=term); also fixed global IDF for missing terms, and the SphinxQL <link linkend="conf-global-idf">global_idf=0 option</link></para></listitem>
<listitem><para>fixed #1607, now ignoring binlog when running daemon with <option>--console</option> flag</para></listitem>
<listitem><para>fixed #1606, hard interruption of the daemon by Ctrl+C (SIGINT) signal</para></listitem>
<listitem><para>fixed #1592, duplicates vs expression ranker</para></listitem>
<listitem><para>fixed #1578, <link linkend="sorting-modes">SORT BY</link> string attribute via API <option>attr_asc</option> / <option>attr_desc</option></para></listitem>
<listitem><para>fixed #1575, crash of daemon on MVA receive from agents with <link linkend="conf-dist-threads">dist_threads</link> enabled</para></listitem>
<listitem><para>fixed #1574, agents mistakenly received the kill-list of a distributed index's local indexes</para></listitem>
<listitem><para>fixed #1573, ranker expression vs expanded terms</para></listitem>
<listitem><para>fixed #1572, <code>BM25F</code> vs negative terms</para></listitem>
<listitem><para>fixed #1550, float got cut at full-text part of a query</para></listitem>
<listitem><para>fixed #1541, <code>BM25F</code> expression in distributed indexes</para></listitem>
<listitem><para>fixed #1508, <ulink url="http://sphinxsearch.com/bugs/view.php?id=1522">#1522</ulink>, distributed index queries could last up to <link linkend="conf-agent-connect-timeout">agent_connect_timeout</link> on the epoll path</para></listitem>
<listitem><para>fixed #1508, master failed to connect waiting agents up to <link linkend="conf-agent-connect-timeout">agent_connect_timeout</link> time</para></listitem>
<listitem><para>fixed #1489, filtering by integer field in JSON using floating point precision</para></listitem>
<listitem><para>fixed #1485, <link linkend="conf-index-exact-words">index_exact_words</link> vs keyword dict with infix</para></listitem>
<listitem><para>fixed #1484, <link linkend="sphinxql-insert">INSERT</link> into RT vs no JSON attribute</para></listitem>
<listitem><para>fixed #1478, memory leaks at daemon <link linkend="misc-functions">PACKEDFACTORS()</link> as UDF argument, index query tokenizer, expression ranker SUM()</para></listitem>
<listitem><para>fixed #1470, broken UDF unpack (since r3738 UDF version 2)</para></listitem>
<listitem><para>fixed #1468, multiple conditions in <code>WHERE</code> for JSON attributes</para></listitem>
<listitem><para>fixed #1466, <link linkend="conf-index-field-lengths">index_field_lengths</link> vs XML data source</para></listitem>
<listitem><para>fixed #1463, daemon shutdown vs RT index optimize (added forced terminate of long merging operation)</para></listitem>
<listitem><para>fixed #1460, <link linkend="sphinxql-select">aggregate functions</link> <code>AVG()</code>, <code>MAX()</code>, <code>MIN()</code>, <code>SUM()</code> do not work for JSON attributes</para></listitem>
<listitem><para>fixed #1459, <code>BM25F</code> doesn't work with <link linkend="conf-sql-field-string">field_string</link> fields</para></listitem>
<listitem><para>fixed #1458, factors to copy <code>field_tf</code> at UDF</para></listitem>
<listitem><para>fixed #1450, garbage in JSON fields when selecting them from a RT index</para></listitem>
<listitem><para>fixed #1449, broken build on Mac OS X</para></listitem>
<listitem><para>fixed #1446, <link linkend="weighting">WEIGHT()</link> did not work in <code>SELECT</code> expressions</para></listitem>
<listitem><para>fixed #1445, field-start/field-end modifiers did not work for star-expanded keywords</para></listitem>
<listitem><para>fixed #1443, <link linkend="conf-morphology">morphology=lemmatizer_ru_all</link> now works with <link linkend="conf-index-exact-words">index_exact_words=1</link> (exact forms can be matched)</para></listitem>
<listitem><para>fixed #1442, incorrect <code>COUNT(*)</code> value in queries to distributed indexes with implicit <code>GROUP BY</code></para></listitem>
<listitem><para>fixed #1439, filters on float values in JSON issue, string values quoting issue</para></listitem>
<listitem><para>fixed #1399, filter error message on string attribute</para></listitem>
<listitem><para>fixed #1384, added the ability to define a custom DSN line with <link linkend="confgroup-source">source=mssql</link> (as with <code>source=odbc</code>)</para></listitem>
<listitem><para>fixed <link linkend="sphinxql-attach-index">ATTACH</link> vs wordforms or stopwords; after daemon was restarted this setting was getting lost in RT indexes</para></listitem>
<listitem><para>fixed balancing of agents in HA</para></listitem>
<listitem><para>fixed interaction of <code>index_exact_words</code> and the AOT lemmatizer</para></listitem>
<listitem><para>fixed epoll invocation, and enabled epoll by default</para></listitem>
<listitem><para>fixed incorrect handling of wildcards in tokenizer</para></listitem>
<listitem><para>fixed infix indexing with <option>dict=keywords</option></para></listitem>
<listitem><para>fixed <link linkend="sphinxql-select">max_predicted_time</link> integer overflows</para></listitem>
<listitem><para>fixed memory error in tokenizer</para></listitem>
<listitem><para>fixed several memory leaks</para></listitem>
<listitem><para>fixed <code>PACKEDFACTORS()</code> to work in different <code>GROUP BY</code> queries</para></listitem>
<listitem><para>fixed preprocessor definitions for <link linkend="conf-regexp-filter">RE2</link> in VS solution</para></listitem>
<listitem><para>fixed rotation of global IDF for <option>workers=threads</option> and <option>seamless_rotate=1</option></para></listitem>
<listitem><para>fixed rotation of old indexes</para></listitem>
<listitem><para>fixed RT kill list so that it survives <code>TRUNCATE</code> and works in a newly <code>ATTACH</code>ed index</para></listitem>
<listitem><para>fixed saving id32 RT index with id64 daemon</para></listitem>
<listitem><para>fixed stemmer vs RT index <code>INSERT</code></para></listitem>
<listitem><para>fixed string case error with JSON attributes in select list of a query</para></listitem>
<listitem><para>fixed <code>TOP_COUNT</code> usage in <filename>misc/suggest</filename> and updated to PHP 5.3 and UTF-8</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel211"><title>Version 2.1.1-beta, 20 feb 2013</title>
<bridgehead>Major new features</bridgehead>
<itemizedlist>
<listitem><para>added query profiling (SET PROFILING=1 and <link linkend="sphinxql-show-profile">SHOW PROFILE</link> statements)</para></listitem>
<listitem><para>added AOT-based Russian lemmatizer (<link linkend="conf-morphology">morphology={lemmatize_ru | lemmatize_ru_all}</link>, <link linkend="conf-lemmatizer-base">lemmatizer_base</link>, and <link linkend="conf-lemmatizer-cache">lemmatizer_cache</link> directives)</para></listitem>
<listitem><para>added <link linkend="ref-wordbreaker">wordbreaker</link>, a tool to split compounds into individual words</para></listitem>
<listitem><para>added JSON attributes support (<link linkend="conf-sql-attr-json">sql_attr_json</link>, <link linkend="conf-on-json-attr-error">on_json_attr_error</link>, <link linkend="conf-json-autoconv-numbers">json_autoconv_numbers</link>, <link linkend="conf-json-autoconv-keynames">json_autoconv_keynames</link> directives)</para></listitem>
<listitem><para>added initial subselects support, SELECT * FROM (SELECT ... ORDER BY cond1 LIMIT X) ORDER BY cond2 LIMIT Y (see the sketch after this list)</para></listitem>
<listitem><para>added bigram indexing, and phrase searching with bigrams (<link linkend="conf-bigram-index">bigram_index</link>, <link linkend="conf-bigram-freq-words">bigram_freq_words</link> directives)</para></listitem>
<listitem><para>added HA/LB support, ha_strategy and agent_persistent directives, SHOW AGENT STATUS statement</para></listitem>
<listitem><para>added RT index optimization (<link linkend="sphinxql-optimize-index">OPTIMIZE INDEX</link> statement, <link linkend="conf-rt-merge-iops">rt_merge_iops</link> and <link linkend="conf-rt-merge-maxiosize">rt_merge_maxiosize</link> directives)</para></listitem>
<listitem><para>added wildcards support to <link linkend="conf-dict">dict=keywords</link> (eg. "t?st*")</para></listitem>
<listitem><para>added substring search support (min_infix_len=2 and above) to <link linkend="conf-dict">dict=keywords</link></para></listitem>
</itemizedlist>
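<para>A minimal sketch of the new subselect syntax (the index name <code>products</code> and attribute <code>price</code> are hypothetical):</para>
<programlisting>
# rank the best 1000 matches by relevance first,
# then reorder that subset by price
SELECT * FROM (
    SELECT * FROM products WHERE MATCH('phone')
    ORDER BY WEIGHT() DESC LIMIT 1000
) ORDER BY price ASC LIMIT 20;
</programlisting>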
<bridgehead>New features</bridgehead>
<itemizedlist>
<listitem><para>added --checkconfig switch to <link linkend="ref-indextool">indextool</link> to check config file for correctness (bug #1395)</para></listitem>
<listitem><para>added global IDF support (<link linkend="conf-global-idf">global_idf</link> directive, <link linkend="sphinxql-select">OPTION global_idf</link>)</para></listitem>
<listitem><para>added "term1 term2 term3"/0.5 <link linkend="extended-syntax">quorum fraction syntax</link> (bug #1372)</para></listitem>
<listitem><para>added an option to apply stopwords before morphology, <link linkend="conf-stopwords-unstemmed">stopwords_unstemmed</link> directive</para></listitem>
<listitem><para>added an alternative method to compute keyword IDFs, <link linkend="sphinxql-select">OPTION idf=plain</link></para></listitem>
<listitem><para>added boolean query optimizations, <link linkend="sphinxql-select">OPTION boolean_simplify=1</link> (bug #1294)</para></listitem>
<listitem><para>added stringptr return type support to UDFs, and <link linkend="sphinxql-create-function">CREATE FUNCTION ... RETURNS STRING syntax</link></para></listitem>
<listitem><para>added early query termination by predicted execution time (<link linkend="sphinxql-select">OPTION max_predicted_time</link>, and <link linkend="conf-predicted-time-costs">predicted_time_costs</link> directive)</para></listitem>
<listitem><para>added <link linkend="conf-index-field-lengths">index_field_lengths</link> directive, BM25A() and BM25F() functions to <link linkend="expression-ranker">expression ranker</link></para></listitem>
<listitem><para>added ranker=export, and <link linkend="expr-func-packedfactors">PACKEDFACTORS()</link> function</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">OPTION agent_query_timeout</link></para></listitem>
<listitem><para>added support for attribute files over 4 GB (bug #1274)</para></listitem>
<listitem><para>added addr2line output to crash reports (bug #1265)</para></listitem>
<listitem><para>added <link linkend="sphinxql-update">OPTION ignore_nonexistent_columns</link> to UPDATE, and a respective <link linkend="api-func-updateatttributes">UpdateAttributes()</link> argument</para></listitem>
<listitem><para>added --keep-attrs switch to <link linkend="ref-indexer">indexer</link></para></listitem>
<listitem><para>added --with-static-mysql, --with-static-pgsql switches to configure</para></listitem>
<listitem><para>added double-buffering for RT <link linkend="sphinxql-insert">INSERTs</link> (bug #1200)</para></listitem>
<listitem><para>added --morph and --dumpdict switches to <link linkend="ref-indextool">indextool</link></para></listitem>
<listitem><para>added support for multiple wordforms files, comment syntax, and pre/post-morphology <link linkend="conf-wordforms">wordforms</link></para></listitem>
<listitem><para>added <link linkend="sphinxql-select">ZONESPANLIST()</link><!-- FIXME separate entry --> builtin function</para></listitem>
<listitem><para>added <link linkend="conf-regexp-filter">regexp_filter</link> directive, regexp document/query filtering support (uses RE2)</para></listitem>
<listitem><para>added min_idf, max_idf, sum_idf <link linkend="expression-ranker">ranking factors</link></para></listitem>
<listitem><para>added uservars persistence, and <link linkend="conf-sphinxql-state">sphinxql_state</link> directive (bug #1132)</para></listitem>
<listitem><para>added <link linkend="expr-func-poly2d">POLY2D</link>, <link linkend="expr-func-geopoly2d">GEOPOLY2D</link>, <link linkend="expr-func-contains">CONTAINS</link> functions</para></listitem>
<listitem><para>added <link linkend="extended-syntax">ZONESPAN</link> operator</para></listitem>
<listitem><para>added <link linkend="conf-snippets-file-prefix">snippets_file_prefix</link> directive</para></listitem>
<listitem><para>added Arabic stemmer, <link linkend="conf-morphology">morphology=stem_ar</link> directive (bug #519)</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">OPTION sort_method={pq | kbuffer}</link>, an alternative match sorting method</para></listitem>
<listitem><para>added SPZ (<link linkend="conf-index-sp">sentence, paragraph</link>, <link linkend="conf-index-zones">zone</link>) support to RT indexes</para></listitem>
<listitem><para>added support for up to 255 keywords in <link linkend="extended-syntax">quorum operator</link> (bug #1030)</para></listitem>
<listitem><para>added multi-threaded agent querying (bug #1000)</para></listitem>
</itemizedlist>
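<para>For instance, the quorum fraction syntax and the new OPTION clauses above combine as follows (a sketch; the index name <code>myindex</code> is hypothetical):</para>
<programlisting>
# match documents containing at least half of the three keywords,
# simplify the boolean tree, and cap predicted execution time at 100 msec
SELECT * FROM myindex
WHERE MATCH('"pizza pasta wine"/0.5')
OPTION boolean_simplify=1, max_predicted_time=100;
</programlisting>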
<bridgehead>New SphinxQL features</bridgehead>
<itemizedlist>
<listitem><para>added <link linkend="sphinxql-show-index-status">SHOW INDEX indexname STATUS</link> statement</para></listitem>
<listitem><para>added LIKE clause support to multiple SHOW xxx statements</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">SNIPPET()</link><!-- FIXME! separate entry --> function</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">GROUP_CONCAT()</link> aggregate function</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">GROUPBY()</link> builtin function</para></listitem>
<listitem><para>added iostats and cpustats to <link linkend="sphinxql-show-meta">SHOW META</link></para></listitem>
<listitem><para>added support for <link linkend="sphinxql-delete">DELETE</link> statement over distributed indexes (bug #1104)</para></listitem>
<listitem><para>added <link linkend="sphinxql-select">EXIST('attr_name', default_value)</link><!-- FIXME separate entry --> builtin function (bug #1037)</para></listitem>
<listitem><para>added <link linkend="sphinxql-show-variables">SHOW VARIABLES WHERE variable_name='xxx'</link> syntax</para></listitem>
<listitem><para>added <link linkend="sphinxql-truncate-rtindex">TRUNCATE RTINDEX</link> statement</para></listitem>
</itemizedlist>
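<para>A few of the new SphinxQL statements above in action (the index names <code>rt_books</code> and <code>dist_books</code> and the attribute <code>price</code> are hypothetical):</para>
<programlisting>
# per-index counters; LIKE now also works with SHOW statements
SHOW INDEX rt_books STATUS;

# read the 'price' attribute, falling back to 0 where it does not exist
SELECT id, EXIST('price', 0) AS price FROM dist_books;

# empty an RT index with a single statement
TRUNCATE RTINDEX rt_books;
</programlisting>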
<bridgehead>Major behavior changes and optimizations</bridgehead>
<itemizedlist>
<listitem><para>changed that UDFs are now allowed in fork/prefork modes via the <link linkend="conf-sphinxql-state">sphinxql_state</link> startup script (see the sketch after this list)</para></listitem>
<listitem><para>changed that compat_sphinxql_magics now defaults to 0</para></listitem>
<listitem><para>changed that small enough exceptions, wordforms, stopwords files are now embedded into the index header</para></listitem>
<listitem><para>changed that <link linkend="conf-rt-mem-limit">rt_mem_limit</link> can now be over 2 GB (bug #1059)</para></listitem>
<listitem><para>optimized tokenizer (up to 1.25x indexing and snippets speedup)</para></listitem>
<listitem><para>optimized multi-keyword searching (added skiplists)</para></listitem>
<listitem><para>optimized filtering and scan in several frequent cases (single-value, 2-arg, 3-arg WHERE clauses)</para></listitem>
</itemizedlist>
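<para>With the fork/prefork UDF change above, UDF declarations can simply be placed in the <link linkend="conf-sphinxql-state">sphinxql_state</link> file, which is replayed as regular SphinxQL at startup. A hypothetical example of such a file (the function and library names are made up):</para>
<programlisting>
# declare a UDF from a shared library at startup
CREATE FUNCTION myfunc RETURNS FLOAT SONAME 'udfexample.so';

# user variables set here are also restored on restart
SET GLOBAL @banned = (1, 3, 5);
</programlisting>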
</sect1>
<sect1 id="rel2011"><title>Version 2.0.11-dev, xx xxx xxxx</title>
<bridgehead>Bug fixes</bridgehead>
</sect1>
<sect1 id="rel2010"><title>Version 2.0.10-release, 22 jan 2014</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1778, <link linkend="extended-syntax">SENTENCE and PARAGRAPH</link> operators and infix stars clash</para></listitem>
<listitem><para>fixed #1774, stack overflow on parsing large expressions</para></listitem>
<listitem><para>fixed #1744, daemon failed to write to a log file bigger than 4 GB</para></listitem>
<listitem><para>fixed #1705, expression ranker handling of indexes with more than 32 fields</para></listitem>
<listitem><para>fixed #1700, crash and cutoff in fullscan <link linkend="sphinxql-select">reverse_scan=1</link> queries</para></listitem>
<listitem><para>fixed #1698, proper handling of stopword with blended chars</para></listitem>
<listitem><para>fixed #1682, field end modifier and <link linkend="conf-index-exact-words">index_exact_words</link> clash</para></listitem>
<listitem><para>fixed #1678, memory leak in SUM() function of an expression ranker</para></listitem>
<listitem><para>fixed #1670, updating of MVA attributes in distributed indexes via API</para></listitem>
<listitem><para>fixed #1662, <link linkend="api-func-escapestring">EscapeString()</link> API escapes '<' too now</para></listitem>
<listitem><para>fixed #1520, <link linkend="api-func-setlimits">SetLimits()</link> API documentation</para></listitem>
<listitem><para>fixed #1491, documentation: space character is prohibited in <link linkend="conf-charset-table">charset_table</link></para></listitem>
<listitem><para>fixed memory leak in expressions with max_window_hits</para></listitem>
<listitem><para>fixed <link linkend="conf-rt-flush-period">rt_flush_period</link> - less stricter internal check and more often flushes overall</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel209"><title>Version 2.0.9-release, 26 aug 2013</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1655, special characters like ()?* were not processed correctly by exceptions</para></listitem>
<listitem><para>fixed #1651, <link linkend="sphinxql-create-function">CREATE FUNCTION</link> can now be used with BIGINT return type</para></listitem>
<listitem><para>fixed #1649, incorrect warning message (about statistics mismatch) was returned when mixing wildcards and regular keywords</para></listitem>
<listitem><para>fixed #1603, passing MVA64 arguments to non-MVA functions caused unpredicted behavior and crashes (now explicitly forbidden)</para></listitem>
<listitem><para>fixed #1601, negative numbers in <link linkend="expr-func-in">IN()</link> clause caused a syntax error</para></listitem>
<listitem><para>fixed #1581, <link linkend="conf-dict">dict=keywords</link> and <link linkend="conf-sql-joined-field">sql_joined_field</link> occasionally caused <filename>indexer</filename> to build corrupted indexes</para></listitem>
<listitem><para>fixed #1546, file descriptor leaked on index rotation (that eventually prevented <filename>searchd</filename> from reloading indexes)</para></listitem>
<listitem><para>fixed #1537, <code>COUNT(*)</code> and compat_sphinxql_magics=0 via SphinxAPI caused an incorrect error message</para></listitem>
<listitem><para>fixed #1531, #1589, several matching and highlighting issues when using both <link linkend="conf-blend-chars">blend_chars</link> and multi-wordforms</para></listitem>
<listitem><para>fixed #1521, <filename>indextool --check</filename> did not handle empty RT MVA and gave an incorrect warning</para></listitem>
<listitem><para>fixed #1392, SphinxSE builds with MySQL 5.6 now</para></listitem>
<listitem><para>fixed #1346, <link linkend="extended-syntax">NEAR</link> handles duplicated keywords properly now</para></listitem>
<listitem><para>fixed #757, wordforms shared between multiple indexes with different tokenizer settings failed to load (they now load with a warning)</para></listitem>
<listitem><para>fixed that batch queries did not batch in some cases (because of internal expression alias issues)</para></listitem>
<listitem><para>fixed that <link linkend="sphinxql-call-keywords">CALL KEYWORDS</link> occasionally gave incorrect error messages</para></listitem>
<listitem><para>fixed searchd crashes on <link linkend="sphinxql-attach-index">ATTACHing</link> plain indexes with MVAs</para></listitem>
<listitem><para>fixed several deadlocks and other threading issues</para></listitem>
<listitem><para>fixed incorrect sorting order with <link linkend="collations">utf8_general_ci</link></para></listitem>
<listitem><para>fixed that in some cases incorrect attribute values were returned when using expression aliases</para></listitem>
<listitem><para>optimized <link linkend="xmlpipe2">xmlpipe2</link> indexing</para></listitem>
<listitem><para>added a warning for missed stopwords, exception, wordforms files on index load and in <filename>indextool --check</filename></para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel208"><title>Version 2.0.8-release, 26 apr 2013</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1515, log strings over 2KB were clipped when <link linkend="conf-query-log-format">query_log_format=plain</link></para></listitem>
<listitem><para>fixed #1514, RT index disk chunks lost attribute updates on daemon restart</para></listitem>
<listitem><para>fixed #1512, crash while formatting log messages</para></listitem>
<listitem><para>fixed #1511, crash on indexing PostgreSQL data source with <link linkend="mva">MVA</link> attributes</para></listitem>
<listitem><para>fixed #1509, <link linkend="conf-blend-chars">blend_chars</link> vs incomplete multi-form and overshort</para></listitem>
<listitem><para>fixed #1504, RT binlog replay vs descending tid on update</para></listitem>
<listitem><para>fixed #1499, <option>sql_field_str2wordcount</option> is actually int, not string</para></listitem>
<listitem><para>fixed #1498, exceptions starting with a number now work too</para></listitem>
<listitem><para>fixed #1496, multiple destination keywords in a wordform</para></listitem>
<listitem><para>fixed #1494, lost 'mod' and '%' operations in select list; also corrected a few typos in the documentation</para></listitem>
<listitem><para>fixed #1490, <link linkend="conf-expand-keywords">expand_keywords</link> vs prefix</para></listitem>
<listitem><para>fixed #1487, `id` in expressions</para></listitem>
<listitem><para>fixed #1483, snippets limits</para></listitem>
<listitem><para>fixed #1481, detection of shebang config changes on rotation</para></listitem>
<listitem><para>fixed #1479, port handling in <link linkend="api-reference">PHP Sphinx API</link></para></listitem>
<listitem><para>fixed #1474, daemon crash when a SphinxQL packet overflows <link linkend="conf-max-packet-size">max_packet_size</link></para></listitem>
<listitem><para>fixed #1472, crash on loading index to <filename>indextool</filename> for check</para></listitem>
<listitem><para>fixed #1465, <link linkend="conf-expansion-limit">expansion_limit</link> got lost in index rotation</para></listitem>
<listitem><para>fixed #1427, #1506, utf8 3 and 4-bytes codepoints</para></listitem>
<listitem><para>fixed #1405, BETWEEN with mixed int and float values</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel207"><title>Version 2.0.7-release, 26 mar 2013</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1475, memory leak in the expression parser</para></listitem>
<listitem><para>fixed #1457, error messages over 2KB were clipped</para></listitem>
<listitem><para>fixed #1454, searchd did not display an error message when the binlog path did not exist</para></listitem>
<listitem><para>fixed #1441, SHOW META in a query batch was returning the last non-batch error</para></listitem>
<listitem><para>fixed #1435, typo in the documentation</para></listitem>
<listitem><para>fixed #1430, rt_flush_period now works even with a disabled binlog</para></listitem>
<listitem><para>fixed #1427, overlong 4-byte UTF-8 codes in source text could cause indexer crashes or index corruption</para></listitem>
<listitem><para>fixed #1418, warnings from local index searches were lost with dist_threads>0</para></listitem>
<listitem><para>fixed #1417, crash handler now works on searchd startup stage, too (eg. to report index load time crashes)</para></listitem>
<listitem><para>fixed #1410, bad numerics like '123abc' now result in a proper SphinxQL error message</para></listitem>
<listitem><para>fixed #1404, a tiny memory leak in shared mutex</para></listitem>
<listitem><para>fixed #1394, race in --iostats caused incorrect I/O statistics in threaded modes</para></listitem>
<listitem><para>fixed #1391, QUORUM operator vs docinfo=inline returned wrong attribute values</para></listitem>
<listitem><para>fixed #1389, edge case in the ORDER operator caused occasionally searchd crashes</para></listitem>
<listitem><para>fixed #1382, query parts with field limits but without real keywords (like '@name {') are now simply ignored and no longer cause a query syntax error</para></listitem>
<listitem><para>fixed #1370, Windows indexer builds failed to fetch rows from MSSQL 2012</para></listitem>
<listitem><para>fixed #1368, ORDER BY RAND() did not work in RT indexes</para></listitem>
<listitem><para>fixed #1364, queries with hitless words could occasionally crash searchd</para></listitem>
<listitem><para>fixed #1363, '*' in charset_table was causing query syntax errors with enable_star=1</para></listitem>
<listitem><para>fixed #1353, added filtering by 'id' syntax (in addition to '@id') to SphinxSE</para></listitem>
<listitem><para>fixed #1346, fixed NEAR operator behavior vs duplicated keywords</para></listitem>
<listitem><para>fixed #1345, invalid PROXIMITY operator threshold now causes a query syntax error rather than unexpected search behavior</para></listitem>
<listitem><para>fixed #1343, misconfigured indexes with 0 full text fields are now explicitly forbidden</para></listitem>
<listitem><para>fixed #1342, specific error messages (from the preload stage) went missing when failing to load the indexes</para></listitem>
<listitem><para>fixed #1339, no warning on inconsistent word statistics</para></listitem>
<listitem><para>fixed #1335, typo in searchd help screen</para></listitem>
<listitem><para>fixed #1334, typo in SELECT documentation</para></listitem>
<listitem><para>fixed #1316, PHRASE operator did not match in a rare self-repeating document/query case</para></listitem>
<listitem><para>fixed #1297, letting queries complete gracefully instead of killing them off in seamless_rotate=1, workers=prefork case</para></listitem>
<listitem><para>fixed #1295, mentioned index naming requirements (proper identifier) in the FROM clause docs</para></listitem>
<listitem><para>fixed #1221, incorrect results when using @groupby in select list via SphinxAPI with compat_sphinxql_magics=0</para></listitem>
<listitem><para>fixed #1180, special SPZ chars occasionally leaking into snippets</para></listitem>
<listitem><para>fixed #1171, preforked children did not reload logs on SIGUSR1</para></listitem>
<listitem><para>fixed #1150, added support for `id` syntax in DELETE and parens in WHERE</para></listitem>
<listitem><para>fixed #1135, crashes when using MVA/strings attributes in expression ranker</para></listitem>
<listitem><para>fixed #1124, corrupted attributes after merging with an empty index</para></listitem>
<listitem><para>fixed #1090, SphinxSE snippets UDF updated to support MySQL 5.5</para></listitem>
<listitem><para>fixed #1041, added initial support for MVA updates (and other mutex protected things) on FreeBSD</para></listitem>
<listitem><para>fixed #999, fullscan returned empty result sets in mixed batches of fullscan and fulltext queries</para></listitem>
<listitem><para>fixed #921, document count/bytes 32bit overflow in indexer progress output</para></listitem>
<listitem><para>fixed #539, added processing suffix rules with dots in .affix file to spelldump</para></listitem>
<listitem><para>fixed #481, rotation did not work on Windows with preopen=1</para></listitem>
<listitem><para>fixed #268, added warnings about duplicate elements in xmlpipe2</para></listitem>
<listitem><para>fixed CSphStaticMutex (double initialization issue)</para></listitem>
<listitem><para>fixed documentation typo in SQL data sources</para></listitem>
<listitem><para>fixed too-late mutex initialization in the daemon</para></listitem>
<listitem><para>fixed that an instance of searchd resurrected by watchdog could leak resources and/or crash</para></listitem>
<listitem><para>added a console message about crashes during index loading at startup</para></listitem>
<listitem><para>added more debug info about failed index loading</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel206"><title>Version 2.0.6-release, 22 oct 2012</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1322, J connector was broken in the rel20 branch, but worked in trunk</para></listitem>
<listitem><para>fixed #1321, 'set names utf8' passed, but 'set names utf-8' failed with a syntax error on '-'</para></listitem>
<listitem><para>fixed #1318, unhandled float comparison operators at filter</para></listitem>
<listitem><para>fixed #1317, FD leaks on thread seamless rotation</para></listitem>
<listitem><para>fixed #1313, crash on stopping daemon with incorrect RT index config</para></listitem>
<listitem><para>fixed #1306, 'jolly roger ;)' and '(((((((((9 brackets)' queries crashed <filename>searchd</filename></para></listitem>
<listitem><para>fixed #1304, OS X debug compilation</para></listitem>
<listitem><para>fixed #1302, daemon random crashes on OS X</para></listitem>
<listitem><para>fixed #1301, <filename>indexer</filename> fails to send rotate signal</para></listitem>
<listitem><para>fixed #1300, lost index settings on attach</para></listitem>
<listitem><para>fixed #1299, daemon failed to rotate <link linkend="sphinxql-attach-index">ATTACH</link>ed plain index</para></listitem>
<listitem><para>fixed #1289, <link linkend="extended-syntax">SENTENCE</link> or <link linkend="extended-syntax">PARAGRAPH</link> searching leaked memory</para></listitem>
<listitem><para>fixed #1285, crash on running <filename>searchd</filename> with <filename>syslog</filename> and <filename>watchdog</filename></para></listitem>
<listitem><para>fixed #1279, linking against explicitly disabled iconv; also added <code>--with-libexpat</code> to configure options, which is sometimes required on systems without XML support</para></listitem>
<listitem><para>fixed #1278, broken <link linkend="conf-odbc-dsn">unixODBC</link> detection in configure script</para></listitem>
<listitem><para>fixed #1277, broken build on some toolchains (like uClibc) where <code>LLONG_MIN</code> is not defined; added an <code>ULLONG_MAX</code> definition</para></listitem>
<listitem><para>fixed #1274, large <filename>.spa</filename> files (over 4 GB) failed to load</para></listitem>
<listitem><para>fixed #1269, crash on RT index with <link linkend="mva">MVA</link>s from a previously updated disk chunk</para></listitem>
<listitem><para>fixed #1268, removed a useless warning</para></listitem>
<listitem><para>fixed #1264, string and MVA attributes aliasing works again</para></listitem>
<listitem><para>fixed #1254, it is now possible to add indexes using <link linkend="ref-indexer">--rotate</link></para></listitem>
<listitem><para>fixed #1249, <link linkend="sphinxql-reference">SphinxQL</link> unusable with PHP >= 5.4.5</para></listitem>
<listitem><para>fixed #1246, attributes of 100-character length were not being saved</para></listitem>
<listitem><para>fixed #1234, case sensitive <link linkend="sphinxql-select">GROUP BY</link> attribute</para></listitem>
<listitem><para>fixed #1216, typos, <link linkend="conf-mem-limit">mem_limit</link> default size and <link linkend="rt-indexes">RT documentation</link></para></listitem>
<listitem><para>fixed #1148, RT documentation updated</para></listitem>
<listitem><para>fixed #1140, mem_limit default value</para></listitem>
<listitem><para>fixed #1138, updated documentation on <link linkend="conf-sql-attr-string">sql_attr_string</link></para></listitem>
<listitem><para>fixed #1129, snippets vs empty files and empty filenames</para></listitem>
<listitem><para>fixed #1123, configure compatibility fix</para></listitem>
<listitem><para>fixed #1122, 64bit <link linkend="conf-sql-range-step">sql_range_step</link></para></listitem>
<listitem><para>fixed #1082, crashes and deadlocks on OS X with <code>workers=threads</code>, and a read-write lock leak</para></listitem>
<listitem><para>fixed #1081, selecting only COUNT(DISTINCT attr1) while grouping by attr2</para></listitem>
<listitem><para>fixed #1064, error while working with timestamp functions</para></listitem>
<listitem><para>fixed #1043, inaccurate distinct count when querying many indexes or a distributed index</para></listitem>
<listitem><para>fixed #1042, arithmetic expressions overflow</para></listitem>
<listitem><para>fixed #1007, Russian stemming on big endian systems</para></listitem>
<listitem><para>fixed #986, asserting in <link linkend="api-func-setrankingmode">SetRankingMode</link> (PHP API)</para></listitem>
<listitem><para>fixed #975, incorrect ranking in some rare cases</para></listitem>
<listitem><para>fixed #967, Python API type checking error</para></listitem>
<listitem><para>fixed #934, API vs fullscan vs non-empty query</para></listitem>
<listitem><para>fixed #899, error if using <link linkend="api-func-setfilterrange">SetFilterRange</link> as HAVING from SQL</para></listitem>
<listitem><para>fixed #867, <filename>indexer</filename> accepts index names starting with a digit or _</para></listitem>
<listitem><para>fixed #699, signed vs unsigned 64-bit DocIDs in SphinxQL</para></listitem>
<listitem><para>fixed #668, now ignoring single @ character (incorrect field operator)</para></listitem>
<listitem><para>fixed #611, @! operator vs non-existent field, updated documentation</para></listitem>
<listitem><para>fixed #412, multiple <code>--filter</code> arguments work as they should in search utility</para></listitem>
<listitem><para>fixed #108, support for system libstemmer library; libstemmer sources placed into <filename>libstemmer_c</filename> are preferred, but the system library will be tried if no sources are found</para></listitem>
<listitem><para>fixed <link linkend="sphinxql-select">ORDER BY</link> output at query log with SphinxQL mode</para></listitem>
<listitem><para>fixed documentation entry about <link linkend="conf-sql-joined-field">sql_joined_field</link></para></listitem>
<listitem><para>fixed sample config file</para></listitem>
<listitem><para>fixed x64 configurations for libstemmer</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel205"><title>Version 2.0.5-release, 28 jul 2012</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1258, <code>xmlpipe2</code> refused to index indexes with <code>docinfo=inline</code></para></listitem>
<listitem><para>fixed #1257, legacy groupby modes vs <code>dist_threads</code> could occasionally return wrong search results (race condition)</para></listitem>
<listitem><para>fixed #1253, missing single-word query performance optimization (simplified ranker) vs prefix-expanded keywords vs <code>dict=keywords</code></para></listitem>
<listitem><para>fixed #1252, COUNT(*) vs <link linkend="conf-dist-threads">dist_threads</link> could occasionally crash (race condition)</para></listitem>
<listitem><para>fixed #1251, missing expression support in the <link linkend="expr-func-in">IN()</link> function</para></listitem>
<listitem><para>fixed #1245, <link linkend="api-func-flushattributes">FlushAttributes</link> mistakenly disabled by <link linkend="conf-attr-flush-period">attr_flush_period=0</link> setting</para></listitem>
<listitem><para>fixed #1244, per-API-command (search, update, etc) statistics were not updated by SphinxQL requests</para></listitem>
<listitem><para>fixed #1243, misc issues (broken statistics, weights, checks) with very long keywords having blended parts in RT indexes</para></listitem>
<listitem><para>fixed #1240, embedded <code>xmlpipe2</code> schema with more attributes than the <code>sphinx.conf</code> one caused <filename>indexer</filename> to crash</para></listitem>
<listitem><para>fixed #1239, memory leak when optimizing <code>ABS(const)</code> and other 1-arg functions</para></listitem>
<listitem><para>fixed #1228, #761, #1183, #1190, #1198, misc issues occasionally caused by MVA updates (crash on SaveAttributes; index rotation vs index name and TID; looped MVA updates; persistent MVA removal on rotation)</para></listitem>
<listitem><para>fixed #1227, API queries with <code>SetGeoAnchor()</code> were logged incorrectly in SphinxQL-format query logs (<code>query_log_format=sphinxql</code>)</para></listitem>
<listitem><para>fixed #1214, phrase query parsing issues when <link linkend="conf-blend-chars">blend_chars</link> contained a quote (") symbol</para></listitem>
<listitem><para>fixed #1213, attribute aliases were not recognized by the subsequent <code>SELECT</code> items</para></listitem>
<listitem><para>fixed #1212, <link linkend="ref-indextool"><filename>indextool</filename></link> failed to check hitless keywords</para></listitem>
<listitem><para>fixed #1210, crash when indexing an index with joined fields only (no regular fields)</para></listitem>
<listitem><para>fixed #1209, <code>xmlpipe_fixup_utf8</code> off by a byte on certain (pretty rare) malformed sequences</para></listitem>
<listitem><para>fixed #1202, various issues with <code>CALL KEYWORDS</code> vs RT indexes (crashes vs <code>dict=keywords</code>, missing modifiers in output)</para></listitem>
<listitem><para>fixed #1201, snippets vs <code>query_mode=1</code> vs complex OR-queries could occasionally crash</para></listitem>
<listitem><para>fixed #1197, <filename>indexer</filename> running out of disk space could either crash, or fail to display a proper error message</para></listitem>
<listitem><para>fixed #1185, keywords with wildcards were not handled when highlighting the entire document</para></listitem>
<listitem><para>fixed #1184, <filename>indexer</filename> crash when <link linkend="conf-ngram-chars">ngram_chars</link> was set, but <link linkend="conf-ngram-len">ngram_len=0</link></para></listitem>
<listitem><para>fixed #1182, <filename>indexer</filename> crash on certain combinations of <link linkend="conf-docinfo"><code>docinfo=inline</code></link> vs bitfields</para></listitem>
<listitem><para>fixed #1181, <code>GROUP BY</code> on a MVA64 was truncated at 32 bits</para></listitem>
<listitem><para>fixed #1179, <code>passage_boundary</code> in snippets could get ignored (when highlighting the entire document)</para></listitem>
<listitem><para>fixed #1178, <filename>indexer</filename> could crash when <code>charset_table</code> specified out-of-bounds codes</para></listitem>
<listitem><para>fixed #1177, SPZ queries in snippets erroneously required <link linkend="api-func-buildexcerpts">passage_boundary</link> option to be explicitly set</para></listitem>
<listitem><para>fixed #1176, multi-queries with a <code>GROUP/ORDER BY</code> on a string attribute crashed</para></listitem>
<listitem><para>fixed #1175, connection id mismatch in SphinxQL-format query logs</para></listitem>
<listitem><para>fixed #1167, nested parentheses in a full-text query could mistakenly reset preceding field or zone limit operator</para></listitem>
<listitem><para>fixed #1158, float range filters were not supported in a multi-query batch optimizer</para></listitem>
<listitem><para>fixed #1157, broken gcc-4.7 build</para></listitem>
<listitem><para>fixed #1156, empty result set instead of an error message when querying distributed indexes with compat_sphinxql_magics=1 and hitting an error</para></listitem>
<listitem><para>fixed #1143, dash after a number incorrectly parsed as a <code>NOT</code> operator</para></listitem>
<listitem><para>fixed #1137, <filename>searchd</filename> <link linkend="ref-searchd">--stopwait</link> hung when the running instance crashed during shutdown</para></listitem>
<listitem><para>fixed #1136, high idle CPU load on systems without <code>pthread_timed_lock()</code></para></listitem>
<listitem><para>fixed #1134, issues with <code>prefork</code> workers on systems without <code>pthread_timed_lock()</code></para></listitem>
<listitem><para>fixed #1133, <link linkend="api-func-buildexcerpts"><code>BuildExcerpts()</code></link> on a distributed index with <code>load_files</code> did not distribute the jobs</para></listitem>
<listitem><para>fixed #1126, inaccurate hits sorting progress report on joined field indexing</para></listitem>
<listitem><para>fixed #1121, occasional bad entries (wrong characters or invalid SQL) in SphinxQL-format query log</para></listitem>
<listitem><para>fixed #1118, <code>libsphinxclient</code> requests failed when using <code>SPH_RANK_EXPR</code></para></listitem>
<listitem><para>fixed #1073, improved handling of wordforms/multiforms rules referring to stopwords</para></listitem>
<listitem><para>fixed #1062, bigint filter ranges truncated when searching via <link linkend="sphinxql-reference">SphinxQL</link></para></listitem>
<listitem><para>fixed #1052, SphinxSE range arguments with leading zeroes mistakenly parsed as octal</para></listitem>
<listitem><para>fixed #1011, negative MVA64 values mistakenly converted to positive (on indexing and/or output)</para></listitem>
<listitem><para>fixed #974, crash when logging queries over 2048 bytes with performance counters enabled</para></listitem>
<listitem><para>fixed #909, field-end modifier was ignored when followed by a non-whitespace syntax character (eg quote or bracket)</para></listitem>
<listitem><para>fixed #907, issue with bigint filtering (large positive or negative values)</para></listitem>
<listitem><para>fixed #906, #1074, Mac OS X 10.7.3 builds (conflicting memory allocation routines in Sphinx and external libs)</para></listitem>
<listitem><para>fixed #901, #1066, sending bigger request packets was broken in Python API</para></listitem>
<listitem><para>fixed #879, filters on weight-dependent expressions did not work correctly</para></listitem>
<listitem><para>fixed #553, default/missing port value was not handled properly in <link linkend="api-func-setserver">SetServer()</link> API call</para></listitem>
<listitem><para>fixed that blended vs multiforms vs <link linkend="conf-min-word-len">min_word_len</link> could hang the query parser</para></listitem>
<listitem><para>fixed missing command-line switches documentation</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel204"><title>Version 2.0.4-release, 02 mar 2012</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #605, pack vs mysql compress</para></listitem>
<listitem><para>fixed #783, #862, #917, #985, #990, #1032 documentation bugs</para></listitem>
<listitem><para>fixed #885, bitwise AND/OR were not available via API</para></listitem>
<listitem><para>fixed #984, crash on indexing data with MAGIC_CODE_ZONE symbol</para></listitem>
<listitem><para>fixed #1004, RT index loses words from dictionary on segments merging with <code>id64</code> enabled</para></listitem>
<listitem><para>fixed #1035, daemon did not properly handle FDs when sockets overflowed FD_SETSIZE (*nix, <code>preopen_indexes=0</code>, <code>workers=threads</code>)</para></listitem>
<listitem><para>fixed #1038, quoted string for API select</para></listitem>
<listitem><para>fixed #1046, head SPZ overflow, and snippet generation with SPZ in non-fast mode</para></listitem>
<listitem><para>fixed #1048, distributed index could not sort / filter because of missing attributes</para></listitem>
<listitem><para>fixed #1050, expression ranker vs agents</para></listitem>
<listitem><para>fixed #1051, added <link linkend="mva">MVA64</link> support to <link linkend="sphinx-udfs">UDFs</link></para></listitem>
<listitem><para>fixed #1054, <link linkend="sphinxql-select">max_query_time</link> not handled properly when searching an <link linkend="rt-indexes">RT index</link></para></listitem>
<listitem><para>fixed #1055, <link linkend="conf-expansion-limit">expansion_limit</link> when searching RT disk chunks</para></listitem>
<listitem><para>fixed #1057, daemon crash when generating snippets with 0 documents provided</para></listitem>
<listitem><para>fixed #1060, <link linkend="api-func-buildexcerpts">load_files_scattered</link> did not work</para></listitem>
<listitem><para>fixed #1065, libsphinxclient vs distributed index (agents)</para></listitem>
<listitem><para>fixed #1067, modifiers were not escaped in legacy query emulation</para></listitem>
<listitem><para>fixed #1071, master-agent communication got slower for large queries</para></listitem>
<listitem><para>fixed #1076, #1077 (redundant copying, and a possible mutex leak with uservars)</para></listitem>
<listitem><para>fixed #1078, <code>blended</code> vs FIELD_END</para></listitem>
<listitem><para>fixed #1084, crash / index corruption when loading persistent MVA</para></listitem>
<listitem><para>fixed #1091, RT ATTACH of a plain index with string / MVA attributes placed before regular attributes</para></listitem>
<listitem><para>fixed #1092, updates were binlogged with a wrong TID</para></listitem>
<listitem><para>fixed #1098, crash on creating large expression</para></listitem>
<listitem><para>fixed #1099, cleaning up temporary files on indexing failure</para></listitem>
<listitem><para>fixed #1100, missing <link linkend="conf-xmlpipe-attr-bigint">xmlpipe_attr_bigint</link> config directive</para></listitem>
<listitem><para>fixed #1101, now ignoring dashes within keywords when dash is not in charset_table</para></listitem>
<listitem><para>fixed #1103, <code>ZONE</code> operator worked incorrectly with more than one keyword in a simple zone</para></listitem>
<listitem><para>fixed #1106, optimized <code>WHERE id=value</code>, <code>WHERE id IN (values_list)</code> clauses used in <code>SELECT</code>, <code>UPDATE</code> statements</para></listitem>
<listitem><para>fixed #1112, Sphinx did not work out of the box because of a <code>binlog_path</code> option collision</para></listitem>
<listitem><para>fixed #1116, crash on <code>FLUSH RTINDEX</code> with an unknown index name</para></listitem>
<listitem><para>fixed #1117, occasional RT headers corruption (leading to crashes and/or missing results)</para></listitem>
<listitem><para>fixed #1119, missing expression ranker support in SphinxSE</para></listitem>
<listitem><para>fixed #1120, negative <link linkend="api-funcgroup-querying">total_found</link>, docs, and hits counters on huge indexes</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel203"><title>Version 2.0.3-release, 23 dec 2011</title>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #1031, SphinxQL parsing of MVA syntax in INSERT / REPLACE statements</para></listitem>
<listitem><para>fixed #1027, stalls on attribute update in high-concurrency load</para></listitem>
<listitem><para>fixed #1026, daemon crash on malformed API command</para></listitem>
<listitem><para>fixed #1021, <code>max_children</code> option was ignored with <code>workers=threads</code></para></listitem>
<listitem><para>fixed #1020, crash on large attribute files loading</para></listitem>
<listitem><para>fixed #1014, crash on rotation when an index has been removed from the config file (<code>workers=threads</code>, *nix)</para></listitem>
<listitem><para>fixed #1001, broken MVA files in RT index while saving disk chunk</para></listitem>
<listitem><para>fixed #995, crash on empty MVA updates</para></listitem>
<listitem><para>fixed #994, crash on daemon shutdown with <code>seamless_rotate=0</code> and <code>workers=threads</code></para></listitem>
<listitem><para>fixed #993, #998, crash on replay <code>DELETE</code> statement vs RT index with <code>dict=keywords</code>, fixed sequential <code>INSERT</code> into <code>dict=keywords</code> index right after <code>INSERT</code> into <code>dict=crc</code> index</para></listitem>
<listitem><para>fixed #991, crash on indexing mssql source with <code>mssql_unicode</code> enabled</para></listitem>
<listitem><para>fixed #983, #950, crash on host name lookup (SphinxSE with MySQL 5.5)</para></listitem>
<listitem><para>fixed #981, snippet inconsistency with <code>allow_empty=0</code></para></listitem>
<listitem><para>fixed #980, broken index produced by index merge in rare cases</para></listitem>
<listitem><para>fixed #971, missing error message on master when an agent is "maxed out"</para></listitem>
<listitem><para>fixed #695, #815, #835, #866, malformed warnings in SphinxQL</para></listitem>
<listitem><para>fixed build of SphinxSE with MySQL 5.1</para></listitem>
<listitem><para>fixed crash log for 'fork' and 'prefork' workers</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel202"><title>Version 2.0.2-beta, 15 nov 2011</title>
<bridgehead>Major new features</bridgehead>
<itemizedlist>
<listitem><para>added keywords dictionary (<link linkend="conf-dict"><code>dict=keywords</code></link>) support to RT indexes</para></listitem>
<listitem><para>added <link linkend="conf-rt-attr-multi">MVA</link>, <link linkend="conf-index-exact-words">index_exact_words</link> support to RT indexes (#888)</para></listitem>
<listitem><para>added <link linkend="mva">MVA64</link> (a set of BIGINTs) support to both disk and RT indexes (<link linkend="conf-rt-attr-multi-64">rt_attr_multi_64</link> directive)</para></listitem>
<listitem><para>added an <link linkend="expression-ranker">expression-based ranker</link>, and a number of new ranking factors</para></listitem>
<listitem><para>added <link linkend="sphinxql-attach-index">ATTACH INDEX</link> statement that converts a disk index to RT index</para></listitem>
<listitem><para>added <code>WHERE</code> clause support to <link linkend="sphinxql-update">UPDATE</link> statement</para></listitem>
<listitem><para>added <code>bigint</code>, <code>float</code>, and <code>MVA</code> attribute support to <link linkend="sphinxql-update">UPDATE</link> statement</para></listitem>
</itemizedlist>
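<para>A hedged sketch of the ATTACH INDEX and extended UPDATE statements above (the index and attribute names are hypothetical):</para>
<programlisting>
# convert a plain disk index into an RT index
ATTACH INDEX plain_books TO RTINDEX rt_books;

# UPDATE now supports a WHERE clause, plus bigint, float, and MVA attributes
UPDATE rt_books SET price=9.99, tag_ids=(101,102) WHERE in_stock=1;
</programlisting>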
<bridgehead>New features</bridgehead>
<itemizedlist>
<listitem><para>added support for up to <link linkend="fields">256 searchable fields</link> (was up to 32 before)</para></listitem>
<listitem><para>added <link linkend="expr-func-fibonacci"><code>FIBONACCI()</code></link> function to <link linkend="expressions">expressions</link></para></listitem>
<listitem><para>added <link linkend="api-func-buildexcerpts">load_files_scattered option</link> to snippets</para></listitem>
<listitem><para>added implicit attribute type promotions in multi-index result sets (#939)</para></listitem>
<listitem><para>added index names to <filename>indexer</filename> progress message on merge (#928)</para></listitem>
<listitem><para>added <link linkend="ref-searchd"><option>--replay-flags</option></link> switch to <filename>searchd</filename></para></listitem>
<listitem><para>added string attribute support and a few previously missing <link linkend="sphinxse-snippets">snippets options</link> to SphinxSE</para></listitem>
<listitem><para>added previously missing <link linkend="api-func-status"><code>Status()</code></link>, <link linkend="api-func-setconnecttimeout"><code>SetConnectTimeout()</code></link> API calls to Python API</para></listitem>
<listitem><para>added <code>ORDER BY RAND()</code> support to <link linkend="sphinxql-select">SELECT</link> statement (see the example after this list)</para></listitem>
<listitem><para>added Sphinx version to Windows crash log</para></listitem>
<listitem><para>added RT index support to <link linkend="ref-indextool">indextool</link> <code>--check</code> (checks disk chunks only) (#877)</para></listitem>
<listitem><para>added <link linkend="conf-prefork-rotation-throttle">prefork_rotation_throttle</link> directive (preforked children restart delay, in milliseconds) (#873)</para></listitem>
<listitem><para>added <link linkend="conf-on-file-field-error">on_file_field_error</link> directive (different <code>sql_file_field</code> handling modes)</para></listitem>
<listitem><para>added manpages for all the programs</para></listitem>
<listitem><para>added syslog logging support</para></listitem>
<listitem><para>added sentence, paragraph, and zone support in <code>html_strip_mode=retain</code> mode to snippets</para></listitem>
<listitem><para>optimized search performance with many <code>ZONE</code> operators</para></listitem>
<listitem><para>improved suggestion tool (added Levenshtein limit, removed extra DB fetch)</para></listitem>
<listitem><para>improved <link linkend="conf-index-sp">sentence extraction</link> (handles salutations, starting initials better now)</para></listitem>
<listitem><para>changed <link linkend="conf-max-filter-values">max_filter_values</link> sanity check to 10M values</para></listitem>
</itemizedlist>
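<para>For example, the new ORDER BY RAND() support above works as follows (the index name <code>books</code> is hypothetical):</para>
<programlisting>
# return 10 random matches instead of the best-ranked ones
SELECT id FROM books WHERE MATCH('travel') ORDER BY RAND() LIMIT 10;
</programlisting>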
<bridgehead>New SphinxQL features</bridgehead>
<itemizedlist>
<listitem><para>added <link linkend="sphinxql-flush-rtindex">FLUSH RTINDEX</link> statement</para></listitem>
<listitem><para>added <code>dist_threads</code> directive (parallel processing), <code>load_files</code>, <code>load_files_scattered</code>, batch syntax (multiple documents) support to <link linkend="sphinxql-call-snippets">CALL SNIPPETS</link> statement (see the sketch after this list)</para></listitem>
<listitem><para>added <code>OPTION comment='...'</code> support to <link linkend="sphinxql-select">SELECT</link> statement (#944)</para></listitem>
<listitem><para>added <link linkend="sphinxql-show-variables">SHOW VARIABLES</link> statement</para></listitem>
<listitem><para>added dummy handlers for <link linkend="sphinxql-set-transaction">SET TRANSACTION</link>, <link linkend="sphinxql-set">SET NAMES</link>, <link linkend="sphinxql-select">SELECT @@sysvar</link> statements, and for <code>sql_auto_is_null</code>, <code>sql_mode</code>, and @@-style variables (like @@tx_isolation) in <link linkend="sphinxql-set">SET</link> statement (better MySQL frameworks/connectors support)</para></listitem>
<listitem><para>added complete <link linkend="sphinxql-log-format">SphinxQL error logging</link> (all errors are logged now, not just <code>SELECT</code>s)</para></listitem>
<listitem><para>improved <link linkend="sphinxql-select">SELECT</link> statement syntax, made expressions aliases optional</para></listitem>
</itemizedlist>
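<para>A short sketch of the SphinxQL additions above (the file and index names are hypothetical):</para>
<programlisting>
# flush RT index RAM chunk contents to disk
FLUSH RTINDEX rt_books;

# batch snippets: several documents per call, loaded from files
CALL SNIPPETS(('doc1.txt', 'doc2.txt'), 'rt_books', 'search terms',
    1 AS load_files);

# tag a query so it can be spotted in the query log
SELECT id FROM rt_books WHERE MATCH('test') OPTION comment='nightly-check';
</programlisting>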
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #982, empty binlogs prevented upgraded daemon from starting up</para></listitem>
<listitem><para>fixed #978, libsphinxclient build failed on sparc/sparc64 solaris</para></listitem>
<listitem><para>fixed #977, eliminated (most) compiler warnings</para></listitem>
<listitem><para>fixed #969, broken expression MVA/string argument type check prevented IF(IN(mva..)) and other valid expressions from working</para></listitem>
<listitem><para>fixed #966, NOT IN @global_var syntax was not supported</para></listitem>
<listitem><para>fixed #958, mem_limit over INT_MAX was not clamped</para></listitem>
<listitem><para>fixed #954, UTF-8 snippets could crash on malformed data</para></listitem>
<listitem><para>fixed #951, UTF-8 snippets could hang on malformed data</para></listitem>
<listitem><para>fixed #947, bad float column type was reported via SphinxQL, breaking some clients</para></listitem>
<listitem><para>fixed #940, group-by with a small enough <code>max_matches</code> limit could occasionally crash and/or sort wrongly</para></listitem>
<listitem><para>fixed #932, sending huge queries to agents occasionally failed (mainly on Windows)</para></listitem>
<listitem><para>fixed #926, snippets did not highlight wildcard matches with morphology enabled</para></listitem>
<listitem><para>fixed #918, crash logger did not report a proper query in <code>dist_threads</code> case</para></listitem>
<listitem><para>fixed #916, watchdog caused (endless) respawns if there was a crash during shutdown</para></listitem>
<listitem><para>fixed #904, attribute names were not forcibly case-folded in some API calls (eg. <code>SetGroupDistinct</code>)</para></listitem>
<listitem><para>fixed #902, query parser did not support <code>stopword_step=0</code></para></listitem>
<listitem><para>fixed #897, network sockets dangled (open but unattended) while replaying binlog</para></listitem>
<listitem><para>fixed #855, <code>allow_empty</code> option in snippets did not always work correctly</para></listitem>
<listitem><para>fixed #854, indexing with many <code>bigint</code> attributes and <code>docinfo=inline</code> crashed</para></listitem>
<listitem><para>fixed #838, RT MVA insertion did not sort MVA values, caused matching issues</para></listitem>
<listitem><para>fixed #833, duplicate MVA values were not eliminated on update</para></listitem>
<listitem><para>fixed #832, certain (overshort/incorrect) documents crashed indexing MS SQL Unicode columns</para></listitem>
<listitem><para>fixed #829, query parser did not properly handle numerics with <code>blend_chars</code></para></listitem>
<listitem><para>fixed #814, group-by string attributes in RT indexes did not always work correctly</para></listitem>
<listitem><para>fixed #812, utf8 stemming produced unexpected stems on words with single-byte chars</para></listitem>
<listitem><para>fixed #808, huge queries crashed logging with <code>query_log_format=sphinxql</code></para></listitem>
<listitem><para>fixed #806, stray single-star keyword crashed on querying</para></listitem>
<listitem><para>fixed #798, snippets ignored <code>index_exact_words</code> in query_mode</para></listitem>
<listitem><para>fixed #797, RT klist loader had an occasional off-by-one crash</para></listitem>
<listitem><para>fixed #791, <code>preopen_indexes</code> erroneously defaulted to 0 on Windows</para></listitem>
<listitem><para>fixed #790, huge dictionaries (over 4 GB) did not work</para></listitem>
<listitem><para>fixed #786, <code>inplace_enable</code> could occasionally corrupt the indexes</para></listitem>
<listitem><para>fixed #775, doc had a typo (soundex vs metaphone)</para></listitem>
<listitem><para>fixed #772, snippets duplicated blended chars on a SPZ boundary</para></listitem>
<listitem><para>fixed #762, query parser truncated digit-only keywords over 15 digits</para></listitem>
<listitem><para>fixed #736, query parser did not properly handle blended/special char sequence</para></listitem>
<listitem><para>fixed #726, rotation of an index with a changed attribute count crashed</para></listitem>
<listitem><para>fixed #687, querying multiple indexes with index weights and sort-by expression produced incorrect (unadjusted) weights</para></listitem>
<listitem><para>fixed #585, (unsupported) string ordinals were silently zeroed out with <code>docinfo=inline</code> (instead of failing)</para></listitem>
<listitem><para>fixed #583, certain keywords could occasionally crash multiforms</para></listitem>
<listitem><para>fixed that concurrent MVA updates could crash</para></listitem>
<listitem><para>fixed that query parser did not ignore a pure blended token with a leading modifier</para></listitem>
<listitem><para>fixed that query parser did not properly handle a modifier followed by a dash</para></listitem>
<listitem><para>fixed that substring indexing with <code>dict=crc</code> did not support <code>index_exact_words</code> and <code>zones</code></para></listitem>
<listitem><para>fixed that in a rare edge case common subtree cache could crash</para></listitem>
<listitem><para>fixed that empty result set returned the full schema (rather than <code>SELECT</code>-ed columns)</para></listitem>
<listitem><para>fixed that SphinxQL did not have a sanity check for (currently unsupported) result set schemas over 250 attributes</para></listitem>
<listitem><para>fixed that updates on regular indexes were not binlogged</para></listitem>
<listitem><para>fixed that multi-query optimization check for expressions did not handle multi-index case</para></listitem>
<listitem><para>fixed that SphinxSE did not build vs MySQL 5.5 release</para></listitem>
<listitem><para>fixed that <code>proximity_bm25</code> ranker could yield incorrect weight on duplicated keywords</para></listitem>
<listitem><para>fixed that prefix expansion with <code>dict=keywords</code> occasionally crashed</para></listitem>
<listitem><para>fixed that <code>strip_path</code> did not work on RT disk chunks</para></listitem>
<listitem><para>fixed that exclude filters were not properly logged in <code>query_log_format=sphinxql</code> mode</para></listitem>
<listitem><para>fixed that plain string attribute check in <filename>indextool</filename> <code>--check</code> was broken</para></listitem>
<listitem><para>fixed that Java API did not allow specifying a connection timeout</para></listitem>
<listitem><para>fixed that ordinal and wordcount attributes could not be fetched via SphinxQL</para></listitem>
<listitem><para>fixed that in a rare edge case <code>OR/ORDER</code> would not match properly</para></listitem>
<listitem><para>fixed that sending (huge) query response did not handle <code>EINTR</code> properly</para></listitem>
<listitem><para>fixed that <code>SPH04</code> ranker could yield incorrectly high weight in some cases</para></listitem>
<listitem><para>fixed that C API did not allow zeroing out cutoff, <code>max_matches</code> settings</para></listitem>
<listitem><para>fixed that on a persistent connection there were occasionally issues handling signals while doing network reads/waits</para></listitem>
<listitem><para>fixed that in a rare edge case (field start modifier in a certain complex query) querying crashed</para></listitem>
<listitem><para>fixed that snippets did not support <code>dist_threads</code> with <code>load_files=0</code></para></listitem>
<listitem><para>fixed that in some extremely rare edge cases tiny parts of an index could end up corrupted with <code>dict=keywords</code></para></listitem>
<listitem><para>fixed that field/zone conditions were not propagated to expanded keywords with <code>dict=keywords</code></para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel201"><title>Version 2.0.1-beta, 22 apr 2011</title>
<bridgehead>New general features</bridgehead>
<itemizedlist>
<listitem><para>added remapping support to <link linkend="conf-blend-chars">blend_chars</link> directive</para></listitem>
<listitem><para>added multi-threaded snippet batches support (requires a batch sent via API, <link linkend="conf-dist-threads">dist_threads</link>, and <code>load_files</code>)</para></listitem>
<listitem><para>added collations (<link linkend="conf-collation-server">collation_server</link>, <link linkend="conf-collation-libc-locale">collation_libc_locale</link> directives)</para></listitem>
<listitem><para>added support for sorting and grouping on string attributes (<code>ORDER BY</code>, <code>GROUP BY</code>, <code>WITHIN GROUP ORDER BY</code>)</para></listitem>
<listitem><para>added UDF support (<link linkend="conf-plugin-dir">plugin_dir</link> directive; <link linkend="sphinxql-create-function">CREATE FUNCTION</link>, <link linkend="sphinxql-drop-function">DROP FUNCTION</link> statements)</para></listitem>
<listitem><para>added <link linkend="conf-query-log-format">query_log_format</link> directive, <link linkend="sphinxql-set">SET GLOBAL query_log_format | log_level = ...</link> statements; and connection id tracking</para></listitem>
<listitem><para>added <link linkend="conf-sql-column-buffers">sql_column_buffers</link> directive, fixed out-of-buffer column handling in ODBC/MS SQL sources</para></listitem>
<listitem><para>added <link linkend="conf-blend-mode">blend_mode</link> directive that enables indexing multiple variants of a blended sequence</para></listitem>
<listitem><para>added UNIX socket support to C, Ruby APIs</para></listitem>
<listitem><para>added ranged query support to <link linkend="conf-sql-joined-field">sql_joined_field</link></para></listitem>
<listitem><para>added <link linkend="conf-rt-flush-period">rt_flush_period</link> directive</para></listitem>
<listitem><para>added <link linkend="conf-thread-stack">thread_stack</link> directive</para></listitem>
<listitem><para>added SENTENCE, PARAGRAPH, ZONE operators (and <link linkend="conf-index-sp">index_sp</link>, <link linkend="conf-index-zones">index_zones</link> directives)</para></listitem>
<listitem><para>added keywords dictionary support (and <link linkend="conf-dict">dict</link>, <link linkend="conf-expansion-limit">expansion_limit</link> directives)</para></listitem>
<listitem><para>added <code>passage_boundary</code>, <code>emit_zones</code> options to snippets</para></listitem>
<listitem><para>added <link linkend="conf-watchdog">a watchdog process</link> in threaded mode</para></listitem>
<listitem><para>added persistent MVA updates</para></listitem>
<listitem><para>added crash dumps to <filename>searchd.log</filename>, deprecated <code>crash_log_path</code> directive</para></listitem>
<listitem><para>added id32 index support in id64 binaries (EXPERIMENTAL)</para></listitem>
<listitem><para>added SphinxSE support for DELETE and REPLACE on SphinxQL tables</para></listitem>
</itemizedlist>
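<para>An illustrative configuration fragment covering a few of the directives above; character lists and values are placeholders:</para>
<programlisting>
index idx1
{
    # blended characters; remapping syntax (as in charset_table) is accepted too
    blend_chars = +, &amp;, U+23
    # index several variants of a blended sequence
    blend_mode  = trim_tail, skip_pure
}

searchd
{
    # periodic RT index RAM chunk flushes, in seconds
    rt_flush_period = 3600
}
</programlisting>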
<bridgehead>New SphinxQL features</bridgehead>
<itemizedlist>
<listitem><para>added new, more SQL-compliant SphinxQL syntax, and a <code>compat_sphinxql_magics</code> directive</para></listitem>
<listitem><para>added <link linkend="expr-func-crc32">CRC32()</link>, <link linkend="expr-func-day">DAY()</link>, <link linkend="expr-func-month">MONTH()</link>, <link linkend="expr-func-year">YEAR()</link>, <link linkend="expr-func-yearmonth">YEARMONTH()</link>, <link linkend="expr-func-yearmonthday">YEARMONTHDAY()</link> functions</para></listitem>
<listitem><para>added <link linkend="expr-ari-ops">DIV, MOD, and % operators</link></para></listitem>
<listitem><para>added <link linkend="sphinxql-select">reverse_scan=(0|1)</link> option to SELECT</para></listitem>
<listitem><para>added support for MySQL packets over 16M</para></listitem>
<listitem><para>added dummy SHOW VARIABLES, SHOW COLLATION, and SET character_set_results support (to support handshake with certain client libraries and frameworks)</para></listitem>
<listitem><para>added <link linkend="conf-mysql-version-string">mysql_version_string</link> directive (to workaround picky MySQL client libraries)</para></listitem>
<listitem><para>added support for global filter variables, <link linkend="sphinxql-set">SET GLOBAL @uservar=(int_list)</link> (see the sketch after this list)</para></listitem>
<listitem><para>added <link linkend="sphinxql-delete">DELETE ... IN (id_list)</link> syntax support</para></listitem>
<listitem><para>added C-style comment syntax (for example, <code>SELECT /*!40000 some comment*/ id FROM test</code>)</para></listitem>
<listitem><para>added <link linkend="sphinxql-update">UPDATE ... WHERE id=X</link> syntax support</para></listitem>
<listitem><para>added <link linkend="sphinxql-multi-queries">SphinxQL multi-query support</link></para></listitem>
<listitem><para>added <link linkend="sphinxql-describe">DESCRIBE</link>, <link linkend="sphinxql-show-tables">SHOW TABLES</link> statements</para></listitem>
</itemizedlist>
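<para>A few of the new statements above, sketched in SphinxQL; index, attribute, and variable names are illustrative:</para>
<programlisting>
-- a server-wide user variable, usable in IN() filters
SET GLOBAL @banned = (3, 7, 11);

-- delete several RT index rows at once
DELETE FROM rt1 WHERE id IN (1, 2, 3);

-- update a single document by id
UPDATE idx1 SET group_id = 2 WHERE id = 123;

-- C-style comments are accepted now
SELECT /*!40000 some comment*/ id FROM idx1;
</programlisting>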
<bridgehead>New command-line switches</bridgehead>
<itemizedlist>
<listitem><para>added <code>--print-queries</code> switch to <filename>indexer</filename> that dumps SQL queries it runs (see the usage sketch after this list)</para></listitem>
<listitem><para>added <code>--sighup-each</code> switch to <filename>indexer</filename> that rotates indexes one by one</para></listitem>
<listitem><para>added <code>--strip-path</code> switch to <filename>searchd</filename> that skips file paths embedded in the index(-es)</para></listitem>
<listitem><para>added <code>--dumpconfig</code> switch to <filename>indextool</filename> that dumps an index header in <filename>sphinx.conf</filename> format</para></listitem>
</itemizedlist>
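<para>Illustrative invocations of the new switches; index names and paths are placeholders:</para>
<programlisting>
indexer --print-queries idx1            # dump SQL queries that indexer runs
indexer --rotate --sighup-each --all    # rotate indexes one by one
searchd --strip-path                    # skip file paths embedded in the indexes
indextool --dumpconfig idx1.sph         # dump an index header in sphinx.conf format
</programlisting>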
<bridgehead>Major changes and optimizations</bridgehead>
<itemizedlist>
<listitem><para>changed default preopen_indexes value to 1</para></listitem>
<listitem><para>optimized English stemmer (results in 1.3x faster snippets and indexing with morphology=stem_en)</para></listitem>
<listitem><para>optimized snippets, 1.6x general speedup</para></listitem>
<listitem><para>optimized const-list parsing in SphinxQL</para></listitem>
<listitem><para>optimized full-document highlighting CPU/RAM use</para></listitem>
<listitem><para>optimized binlog replay (improved performance on K-list update)</para></listitem>
</itemizedlist>
<bridgehead>Bug fixes</bridgehead>
<itemizedlist>
<listitem><para>fixed #767, joined fields vs ODBC sources</para></listitem>
<listitem><para>fixed #757, wordforms shared by indexes with different settings</para></listitem>
<listitem><para>fixed #733, loading of indexes in formats prior to v.14</para></listitem>
<listitem><para>fixed #763, occasional snippets failures</para></listitem>
<listitem><para>fixed #648, occasionally missed rotations on multiple SIGHUPs</para></listitem>
<listitem><para>fixed #750, an RT segment merge leading to false positives and/or crashes in some cases</para></listitem>
<listitem><para>fixed #755, zones in snippets output</para></listitem>
<listitem><para>fixed #754, stopwords counting at snippet passage generation</para></listitem>
<listitem><para>fixed #723, fork/prefork index rotation in children processes</para></listitem>
<listitem><para>fixed #696, freeze on zero threshold in quorum operator</para></listitem>
<listitem><para>fixed #732, query escaping in SphinxSE</para></listitem>
<listitem><para>fixed #739, occasional crashes in MT mode on result set send</para></listitem>
<listitem><para>fixed #746, crash with a named list in SphinxQL option</para></listitem>
<listitem><para>fixed #674, AVG vs group order</para></listitem>
<listitem><para>fixed #734, occasional crashes attempting to report NULL errors</para></listitem>
<listitem><para>fixed #829, tail hits within field position modifier</para></listitem>
<listitem><para>fixed #712, missing query_mode, force_all_words snippet option defaults in Java API</para></listitem>
<listitem><para>fixed #721, added dupe removal on RT batch INSERT/REPLACE</para></listitem>
<listitem><para>fixed #720, potential extraneous highlighting after a blended keyword</para></listitem>
<listitem><para>fixed #702, exceptions vs star search</para></listitem>
<listitem><para>fixed #666, ext2 query grouping vs exceptions</para></listitem>
<listitem><para>fixed #688, WITHIN GROUP ORDER BY related crash</para></listitem>
<listitem><para>fixed #660, multi-queue batches vs dist_threads</para></listitem>
<listitem><para>fixed #678, crash on dict=keywords vs xmlpipe vs min_prefix_len</para></listitem>
<listitem><para>fixed #596, ECHILD vs scripted configs</para></listitem>
<listitem><para>fixed #653, dependency in expression, sorting, grouping</para></listitem>
<listitem><para>fixed #661, concurrent distributed searches vs workers=threads</para></listitem>
<listitem><para>fixed #646, crash on status query via UNIX socket</para></listitem>
<listitem><para>fixed #589, libexpat.dll missing from some Win32 build types</para></listitem>
<listitem><para>fixed #574, quorum match order</para></listitem>
<listitem><para>fixed multiple documentation issues (#372, #483, #495, #601, #623, #632, #654)</para></listitem>
<listitem><para>fixed that ondisk_dict did not affect RT indexes</para></listitem>
<listitem><para>fixed that string attributes check in indextool --check was erroneously sensitive to string data order</para></listitem>
<listitem><para>fixed a rare crash when using BEFORE operator</para></listitem>
<listitem><para>fixed an issue with multiforms vs BuildKeywords()</para></listitem>
<listitem><para>fixed an edge case in OR operator (emitted wrong hits order sometimes)</para></listitem>
<listitem><para>fixed aliasing in docinfo accessors that led to very rare crashes and/or missing results</para></listitem>
<listitem><para>fixed a syntax error on a short token at the end of a query</para></listitem>
<listitem><para>fixed id64 filtering and performance degradation with range filters</para></listitem>
<listitem><para>fixed missing rankers in libsphinxclient</para></listitem>
<listitem><para>fixed missing SPH04 ranker in SphinxSE</para></listitem>
<listitem><para>fixed column names in sql_attr_multi sample (works with example.sql now)</para></listitem>
<listitem><para>fixed an issue with distributed local+remote setup vs aggregate functions</para></listitem>
<listitem><para>fixed case-sensitive column names in RT indexes</para></listitem>
<listitem><para>fixed a crash vs strings from multiple indexes in result set</para></listitem>
<listitem><para>fixed blended keywords vs snippets</para></listitem>
<listitem><para>fixed secure_connection vs MySQL protocol vs MySQL.NET connector</para></listitem>
<listitem><para>fixed that Python API did not work with Python 2.3</para></listitem>
<listitem><para>fixed overshort_step vs snippets</para></listitem>
<listitem><para>fixed keyword statistics vs dist_threads searching</para></listitem>
<listitem><para>fixed multiforms vs query parsing (vs quorum)</para></listitem>
<listitem><para>fixed missed quorum words vs RT segments</para></listitem>
<listitem><para>fixed blended keywords occasionally skipping extra character when querying (eg "abc[]")</para></listitem>
<listitem><para>fixed Python API to handle int32 values</para></listitem>
<listitem><para>fixed prefix and infix indexing of joined fields</para></listitem>
<listitem><para>fixed MVA ranged query</para></listitem>
<listitem><para>fixed missing blended state reset on document boundary</para></listitem>
<listitem><para>fixed a crash on missing index while replaying binlog</para></listitem>
<listitem><para>fixed an error message on filter values overrun</para></listitem>
<listitem><para>fixed passage duplication in snippets in weight_order mode</para></listitem>
<listitem><para>fixed select clauses over 1K vs remote agents</para></listitem>
<listitem><para>fixed overshort accounting vs soft-whitespace tokens</para></listitem>
<listitem><para>fixed rotation vs workers=threads</para></listitem>
<listitem><para>fixed schema issues vs distributed indexes</para></listitem>
<listitem><para>fixed blended-escaped sequence parsing issue</para></listitem>
<listitem><para>fixed MySQL IN clause (values order etc)</para></listitem>
<listitem><para>fixed that post_index did not execute when 0 documents were successfully indexed</para></listitem>
<listitem><para>fixed field position limit vs many hits</para></listitem>
<listitem><para>fixed that joined fields missed an end marker at field end</para></listitem>
<listitem><para>fixed that xxx_step settings were missing from .sph index header</para></listitem>
<listitem><para>fixed libsphinxclient missing request cleanup in sphinx_query() (eg after network errors)</para></listitem>
<listitem><para>fixed that index_weights were ignored when grouping</para></listitem>
<listitem><para>fixed multi wordforms vs blend_chars</para></listitem>
<listitem><para>fixed broken MVA output in SphinxQL</para></listitem>
<listitem><para>fixed a few RT leaks</para></listitem>
<listitem><para>fixed an issue with RT string storage going missing</para></listitem>
<listitem><para>fixed an issue with repeated queries vs dist_threads</para></listitem>
<listitem><para>fixed an issue with string attributes vs buffer overrun in SphinxQL</para></listitem>
<listitem><para>fixed unexpected character data warnings within ignored xmlpipe tags</para></listitem>
<listitem><para>fixed a crash in snippets with NEAR syntax query</para></listitem>
<listitem><para>fixed passage duplication in snippets</para></listitem>
<listitem><para>fixed libsphinxclient SIGPIPE handling</para></listitem>
<listitem><para>fixed libsphinxclient vs VS2003 compiler bug</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel110"><title>Version 1.10-beta, 19 jul 2010</title>
<itemizedlist>
<listitem><para>added RT indexes support (<xref linkend="rt-indexes"/>)</para></listitem>
<listitem><para>added prefork and threads support (<link linkend="conf-workers">workers</link> directives)</para></listitem>
<listitem><para>added multi-threaded local searches in distributed indexes (<link linkend="conf-dist-threads">dist_threads</link> directive)</para></listitem>
<listitem><para>added common subquery cache (<link linkend="conf-subtree-docs-cache">subtree_docs_cache</link>,
<link linkend="conf-subtree-hits-cache">subtree_hits_cache</link> directives)</para></listitem>
<listitem><para>added string attributes support (<link linkend="conf-sql-attr-string">sql_attr_string</link>,
<link linkend="conf-sql-field-string">sql_field_string</link>,
<link linkend="conf-xmlpipe-attr-string">xml_attr_string</link>,
<link linkend="conf-xmlpipe-field-string">xml_field_string</link> directives)</para></listitem>
<listitem><para>added indexing-time word counter (<option>sql_attr_str2wordcount</option>,
<option>sql_field_str2wordcount</option> directives)</para></listitem>
<listitem><para>added <link linkend="sphinxql-call-snippets">CALL SNIPPETS()</link>,
<link linkend="sphinxql-call-keywords">CALL KEYWORDS()</link> SphinxQL statements</para></listitem>
<listitem><para>added <option>field_weights</option>, <option>index_weights</option> options to
SphinxQL <link linkend="sphinxql-select">SELECT</link> statement</para></listitem>
<listitem><para>added insert-only SphinxQL-talking tables to SphinxSE (connection='sphinxql://host[:port]/index')</para></listitem>
<listitem><para>added <option>select</option> option to SphinxSE queries</para></listitem>
<listitem><para>added backtrace on crash to <filename>searchd</filename></para></listitem>
<listitem><para>added SQL+FS indexing, aka loading files by names fetched from SQL
(<link linkend="conf-sql-file-field">sql_file_field</link> directive)</para></listitem>
<listitem><para>added a watchdog in threads mode to <filename>searchd</filename></para></listitem>
<listitem><para>added automatic row phantoms elimination to index merge</para></listitem>
<listitem><para>added hitless indexing support (hitless_words directive)</para></listitem>
<listitem><para>added --check, --strip-path, --htmlstrip, --dumphitlist ... --wordid switches to <link linkend="ref-indextool">indextool</link></para></listitem>
<listitem><para>added --stopwait, --logdebug switches to <link linkend="ref-searchd">searchd</link></para></listitem>
<listitem><para>added --dump-rows, --verbose switches to <link linkend="ref-indexer">indexer</link></para></listitem>
<listitem><para>added "blended" characters indexing support (<link linkend="conf-blend-chars">blend_chars</link> directive)</para></listitem>
<listitem><para>added joined/payload field indexing (<link linkend="conf-sql-joined-field">sql_joined_field</link> directive)</para></listitem>
<listitem><para>added <link linkend="api-func-flushattributes">FlushAttributes() API call</link></para></listitem>
<listitem><para>added query_mode, force_all_words, limit_passages, limit_words, start_passage_id, load_files, html_strip_mode,
allow_empty options, and %PASSAGE_ID% macro in before_match, after_match options
to <link linkend="api-func-buildexcerpts">BuildExcerpts()</link> API call</para></listitem>
<listitem><para>added @groupby/@count/@distinct columns support to SELECT (but not to expressions)</para></listitem>
<listitem><para>added query-time keyword expansion support (<link linkend="conf-expand-keywords">expand_keywords</link> directive,
<link linkend="api-func-setrankingmode">SPH_RANK_SPH04</link> ranker)</para></listitem>
<listitem><para>added query batch size limit option (<link linkend="conf-max-batch-queries">max_batch_queries</link> directive; was hardcoded)</para></listitem>
<listitem><para>added SINT() function to expressions</para></listitem>
<listitem><para>improved SphinxQL syntax error reporting</para></listitem>
<listitem><para>improved expression optimizer (better constant handling)</para></listitem>
<listitem><para>improved dash handling within keywords (no longer treated as an operator)</para></listitem>
<listitem><para>improved snippets (better passage selection/trimming, around option now a hard limit)</para></listitem>
<listitem><para>optimized index format that yields ~20-30% smaller indexes</para></listitem>
<listitem><para>optimized sorting code (indexing time 1-5% faster on average; 100x faster in worst case)</para></listitem>
<listitem><para>optimized searchd startup time (moved .spa preindexing to indexer), added a progress bar</para></listitem>
<listitem><para>optimized queries against indexes with many attributes (eliminated redundant copying)</para></listitem>
<listitem><para>optimized 1-keyword queries (performance regression introduced in 0.9.9)</para></listitem>
<listitem><para>optimized SphinxQL protocol overheads, and performance on bigger result sets</para></listitem>
<listitem><para>optimized unbuffered attributes writes on index merge</para></listitem>
<listitem><para>changed attribute handling, duplicate names are strictly forbidden now</para></listitem>
<listitem><para>fixed that SphinxQL sessions could stall shutdown</para></listitem>
<listitem><para>fixed consts with leading minus in SphinxQL</para></listitem>
<listitem><para>fixed AND/OR precedence in expressions</para></listitem>
<listitem><para>fixed #334, AVG() on integers was not computed in floats</para></listitem>
<listitem><para>fixed #371, attribute flush vs 2+ GB files</para></listitem>
<listitem><para>fixed #373, segfault on distributed queries vs certain libc versions</para></listitem>
<listitem><para>fixed #398, stopwords not stopped in prefix/infix indexes</para></listitem>
<listitem><para>fixed #404, erroneous MVA failures in indextool --check</para></listitem>
<listitem><para>fixed #408, segfault on certain query batches (regular scan, plus a scan with MVA groupby)</para></listitem>
<listitem><para>fixed #431, occasional shutdown hangs in preforked workers</para></listitem>
<listitem><para>fixed #436, trunk checkout builds vs Solaris sh</para></listitem>
<listitem><para>fixed #440, escaping vs parentheses declared as valid in charset_table</para></listitem>
<listitem><para>fixed #442, occasional non-aligned free in MVA indexing</para></listitem>
<listitem><para>fixed #447, occasional crashes in MVA indexing</para></listitem>
<listitem><para>fixed #449, pconn busyloop on aborted clients on certain arches</para></listitem>
<listitem><para>fixed #465, build issue on Alpha</para></listitem>
<listitem><para>fixed #468, build issue in libsphinxclient</para></listitem>
<listitem><para>fixed #472, multiple stopword files failing to load</para></listitem>
<listitem><para>fixed #489, buffer overflow in query logging</para></listitem>
<listitem><para>fixed #493, Python API assertion after error returned from Query()</para></listitem>
<listitem><para>fixed #500, malformed MySQL packet when sending MVAs</para></listitem>
<listitem><para>fixed #504, SIGPIPE in libsphinxclient</para></listitem>
<listitem><para>fixed #506, better MySQL protocol commands support in SphinxQL (PING etc)</para></listitem>
<listitem><para>fixed #509, indexing ranged results from stored procedures</para></listitem>
</itemizedlist>
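<para>A minimal SphinxQL sketch of the CALL SNIPPETS() and CALL KEYWORDS() statements mentioned above; document text, index name, and option values are illustrative:</para>
<programlisting>
-- build an excerpt; options are passed as 'value AS option_name' pairs
CALL SNIPPETS('this is my document text', 'idx1', 'document', 5 AS around);

-- tokenize a query against an index; third argument requests per-keyword statistics
CALL KEYWORDS('hello world', 'idx1', 1);
</programlisting>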
</sect1>
<sect1 id="rel099"><title>Version 0.9.9-release, 02 dec 2009</title>
<itemizedlist>
<listitem><para>added Open, Close, Status calls to libsphinxclient (C API)</para></listitem>
<listitem><para>added automatic persistent connection reopening to PHP, Python APIs</para></listitem>
<listitem><para>added 64-bit value/range filters, fullscan mode support to SphinxSE</para></listitem>
<listitem><para>MAJOR CHANGE, our IANA-assigned ports are 9312 and 9306 respectively (goodbye, trusty 3312)</para></listitem>
<listitem><para>MAJOR CHANGE, erroneous filters now fail with an error (were silently ignored before)</para></listitem>
<listitem><para>optimized unbuffered .spa writes on merge</para></listitem>
<listitem><para>optimized 1-keyword queries ranking in extended2 mode</para></listitem>
<listitem><para>fixed #441 (IO race in case of highly concurrent load on a preopened index)</para></listitem>
<listitem><para>fixed #434 (distributed indexes were not searchable via MySQL protocol)</para></listitem>
<listitem><para>fixed #317 (indexer MVA progress counter)</para></listitem>
<listitem><para>fixed #398 (stopwords not removed from search query)</para></listitem>
<listitem><para>fixed #328 (broken cutoff)</para></listitem>
<listitem><para>fixed #250 (now quoting paths w/spaces when installing Windows service)</para></listitem>
<listitem><para>fixed #348 (K-list was not updated on merge)</para></listitem>
<listitem><para>fixed #357 (destination index was not K-list-filtered on merge)</para></listitem>
<listitem><para>fixed #369 (precaching .spi files over 2 GBs)</para></listitem>
<listitem><para>fixed #438 (missing boundary proximity matches)</para></listitem>
<listitem><para>fixed #371 (.spa flush in case of files over 2 GBs)</para></listitem>
<listitem><para>fixed #373 (crashes on distributed queries via mysql proto)</para></listitem>
<listitem><para>fixed critical bugs in hit merging code</para></listitem>
<listitem><para>fixed #424 (ordinals could be misplaced during indexing in case of bitfields etc)</para></listitem>
<listitem><para>fixed #426 (failing SE build on Solaris; thanks to Ben Beecher)</para></listitem>
<listitem><para>fixed #423 (typo in SE caused crash on SHOW STATUS)</para></listitem>
<listitem><para>fixed #363 (handling of read_timeout over 2147 seconds)</para></listitem>
<listitem><para>fixed #376 (minor error message mismatch)</para></listitem>
<listitem><para>fixed #413 (minus in SphinxQL)</para></listitem>
<listitem><para>fixed #417 (floats w/o leading digit in SphinxQL)</para></listitem>
<listitem><para>fixed #403 (typo in SetFieldWeights name in Java API)</para></listitem>
<listitem><para>fixed index rotation vs persistent connections</para></listitem>
<listitem><para>fixed backslash handling in SphinxQL parser</para></listitem>
<listitem><para>fixed uint unpacking vs. PHP 5.2.9 (possibly other versions)</para></listitem>
<listitem><para>fixed #325 (filter settings sent from SphinxSE)</para></listitem>
<listitem><para>fixed #352 (removed mysql wrapper around close() in SphinxSE)</para></listitem>
<listitem><para>fixed #389 (display error messages through SphinxSE status variable)</para></listitem>
<listitem><para>fixed linking with port-installed iconv on OS X</para></listitem>
<listitem><para>fixed negative 64-bit unpacking in PHP API</para></listitem>
<listitem><para>fixed #349 (escaping backslash in query emulation mode)</para></listitem>
<listitem><para>fixed #320 (disabled multi-query route when select items differ)</para></listitem>
<listitem><para>fixed #353 (better quorum counts check)</para></listitem>
<listitem><para>fixed #341 (merging of trailing hits; maybe other ranking issues too)</para></listitem>
<listitem><para>fixed #368 (partially; @field "" caused crashes; now resets field limit)</para></listitem>
<listitem><para>fixed #365 (field mask was leaking on field-limited terms)</para></listitem>
<listitem><para>fixed #339 (updated debug query dumper)</para></listitem>
<listitem><para>fixed #361 (added SetConnectTimeout() to Java API)</para></listitem>
<listitem><para>fixed #338 (added missing fullscan to mode check in Java API)</para></listitem>
<listitem><para>fixed #323 (added floats support to SphinxQL)</para></listitem>
<listitem><para>fixed #340 (support listen=port:proto syntax too)</para></listitem>
<listitem><para>fixed #332 (\r is legal SphinxQL space now)</para></listitem>
<listitem><para>fixed xmlpipe2 K-lists</para></listitem>
<listitem><para>fixed #322 (safety gaps in mysql protocol row buffer)</para></listitem>
<listitem><para>fixed #313 (return keyword stats for empty indexes too)</para></listitem>
<listitem><para>fixed #344 (invalid checkpoints after merge)</para></listitem>
<listitem><para>fixed #326 (missing CLOCK_xxx on FreeBSD)</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel099rc2"><title>Version 0.9.9-rc2, 08 apr 2009</title>
<itemizedlist>
<listitem><para>added IsConnectError(), Open(), Close() calls to Java API (bug #240)</para></listitem>
<listitem><para>added <link linkend="conf-read-buffer">read_buffer</link>, <link linkend="conf-read-unhinted">read_unhinted</link> directives</para></listitem>
<listitem><para>added checks for build options returned by mysql_config (builds on Solaris now)</para></listitem>
<listitem><para>added fixed-RAM index merge (bug #169)</para></listitem>
<listitem><para>added logging chained queries count in case of (optimized) multi-queries</para></listitem>
<listitem><para>added <link linkend="sort-expr">GEODIST()</link> function</para></listitem>
<listitem><para>added <link linkend="ref-searchd">--status switch to searchd</link></para></listitem>
<listitem><para>added MySpell (OpenOffice) affix file support (bug #281)</para></listitem>
<listitem><para>added <link linkend="conf-odbc-dsn">ODBC support</link> (both Windows and UnixODBC)</para></listitem>
<listitem><para>added support for @id in IN() (bug #292)</para></listitem>
<listitem><para>added support for <link linkend="api-func-setselect">aggregate functions</link> in GROUP BY (namely AVG, MAX, MIN, SUM)</para></listitem>
<listitem><para>added <link linkend="sphinxse-snippets">MySQL UDF that builds snippets</link> using searchd</para></listitem>
<listitem><para>added <link linkend="conf-write-buffer">write_buffer</link> directive (defaults to 1M)</para></listitem>
<listitem><para>added <link linkend="conf-xmlpipe-fixup-utf8">xmlpipe_fixup_utf8</link> directive</para></listitem>
<listitem><para>added suggestions sample</para></listitem>
<listitem><para>added microsecond precision int64 timer (bug #282)</para></listitem>
<listitem><para>added <link linkend="conf-listen-backlog">listen_backlog directive</link></para></listitem>
<listitem><para>added <link linkend="conf-max-xmlpipe2-field">max_xmlpipe2_field</link> directive</para></listitem>
<listitem><para>added <link linkend="sphinxql">initial SphinxQL support</link> to mysql41 handler, SELECT .../SHOW WARNINGS/STATUS/META are handled</para></listitem>
<listitem><para>added support for different network protocols, and mysql41 protocol</para></listitem>
<listitem><para>added <link linkend="api-func-setrankingmode">fieldmask ranker</link>, updated SphinxSE list of rankers</para></listitem>
<listitem><para>added <link linkend="conf-mysql-ssl">mysql_ssl_xxx</link> directives</para></listitem>
<listitem><para>added <link linkend="ref-searchd">--cpustats (requires clock_gettime()) and --status switches</link> to searchd</para></listitem>
<listitem><para>added performance counters, <link linkend="api-func-status">Status()</link> API call</para></listitem>
<listitem><para>added <link linkend="conf-overshort-step">overshort_step</link> and <link linkend="conf-stopword-step">stopword_step</link> directives</para></listitem>
<listitem><para>added <link linkend="extended-syntax">strict order operator</link> (aka operator before, eg. "one << two << three")</para></listitem>
<listitem><para>added <link linkend="ref-indextool">indextool</link> utility, moved --dumpheader there, added --debugdocids, --dumphitlist options</para></listitem>
<listitem><para>added own RNG, reseeded on @random sort query (bug #183)</para></listitem>
<listitem><para>added <link linkend="extended-syntax">field-start and field-end modifiers support</link> (syntax is "^hello world$"; field-end requires reindex)</para></listitem>
<listitem><para>added MVA attribute support to IN() function</para></listitem>
<listitem><para>added <link linkend="sort-expr">AND, OR, and NOT support</link> to expressions</para></listitem>
<listitem><para>improved logging of (optimized) multi-queries (now logging chained query count)</para></listitem>
<listitem><para>improved handshake error handling, fixed protocol version byte order (omg)</para></listitem>
<listitem><para>updated SphinxSE to protocol 1.22</para></listitem>
<listitem><para>allowed phrase_boundary_step=-1 (trick to emulate keyword expansion)</para></listitem>
<listitem><para>removed SPH_MAX_QUERY_WORDS limit</para></listitem>
<listitem><para>fixed CLI search vs documents missing from DB (bug #257)</para></listitem>
<listitem><para>fixed libsphinxclient results leak on subsequent sphinx_run_queries call (bug #256)</para></listitem>
<listitem><para>fixed libsphinxclient handling of zero max_matches and cutoff (bug #208)</para></listitem>
<listitem><para>fixed over-64K string reads (eg. big snippets) in Java API (bug #181)</para></listitem>
<listitem><para>fixed Java API 2nd Query() after network error in 1st Query() call (bug #308)</para></listitem>
<listitem><para>fixed typo-class bugs in SetFilterFloatRange (bug #259), SetSortMode (bug #248)</para></listitem>
<listitem><para>fixed missing @@relaxed support (bug #276), fixed missing error on @nosuchfield queries, documented @@relaxed</para></listitem>
<listitem><para>fixed UNIX socket permissions to 0777 (bug #288)</para></listitem>
<listitem><para>fixed xmlpipe2 crash on schemas with no fields, added better document structure checks</para></listitem>
<listitem><para>fixed (and optimized) expr parser vs IN() with huge (10K+) args count</para></listitem>
<listitem><para>fixed double EarlyCalc() in fullscan mode (minor performance impact)</para></listitem>
<listitem><para>fixed phrase boundary handling in some cases (on buffer end, on trailing whitespace)</para></listitem>
<listitem><para>fixed various issues in snippets (aka excerpts) generation</para></listitem>
<listitem><para>fixed inline attrs vs id64 index corruption</para></listitem>
<listitem><para>fixed head searchd crash on config re-parse failure</para></listitem>
<listitem><para>fixed handling of numeric keywords with leading zeroes such as "007" (bug #251)</para></listitem>
<listitem><para>fixed junk in SphinxSE status variables (bug #304)</para></listitem>
<listitem><para>fixed wordlist checkpoints serialization (bug #236)</para></listitem>
<listitem><para>fixed unaligned docinfo id access (bug #230)</para></listitem>
<listitem><para>fixed GetRawBytes() vs oversized blocks (headers with over 32K charset_table should now work, bug #300)</para></listitem>
<listitem><para>fixed buffer overflow caused by too long dest wordform, updated tests</para></listitem>
<listitem><para>fixed IF() return type (was always int, is deduced now)</para></listitem>
<listitem><para>fixed legacy queries vs. special chars vs. multiple indexes</para></listitem>
<listitem><para>fixed write-write-read socket access pattern vs Nagle vs delays vs FreeBSD (oh wow)</para></listitem>
<listitem><para>fixed exceptions vs query-parser issue</para></listitem>
<listitem><para>fixed late calc vs @weight in expressions (bug #285)</para></listitem>
<listitem><para>fixed early lookup/calc vs filters (bug #284)</para></listitem>
<listitem><para>fixed emulated MATCH_ANY queries (empty proximity and phrase queries are allowed now)</para></listitem>
<listitem><para>fixed MATCH_ANY ranker vs fields with no matches</para></listitem>
<listitem><para>fixed index file size vs inplace_enable (bug #245)</para></listitem>
<listitem><para>fixed that old logs were not closed on USR1 (bug #221)</para></listitem>
<listitem><para>fixed handling of '!' alias to NOT operator (bug #237)</para></listitem>
<listitem><para>fixed error handling vs query steps (step failure was not reported)</para></listitem>
<listitem><para>fixed querying vs inline attributes</para></listitem>
<listitem><para>fixed stupid bug in escaping code, fixed EscapeString() and made it static</para></listitem>
<listitem><para>fixed parser vs @field -keyword, foo|@field bar, "" queries (bug #310)</para></listitem>
</itemizedlist>
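<para>A short sketch of two additions above: GEODIST() takes two latitude/longitude pairs (in radians) and returns the distance in meters, and the strict order operator requires keywords to match in the given order; names below are illustrative:</para>
<programlisting>
SELECT id, GEODIST(lat, lon, 0.65929, -2.13491) AS dist
FROM idx1 WHERE MATCH('one << two << three');
</programlisting>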
</sect1>
<sect1 id="rel099rc1"><title>Version 0.9.9-rc1, 17 nov 2008</title>
<itemizedlist>
<listitem><para>added <link linkend="conf-min-stemming-len">min_stemming_len</link> directive</para></listitem>
<listitem><para>added <link linkend="api-func-isconnecterror">IsConnectError()</link> API call (helps distingusih API vs remote errors)</para></listitem>
<listitem><para>added duplicate log messages filter to searchd</para></listitem>
<listitem><para>added --nodetach debugging switch to searchd</para></listitem>
<listitem><para>added blackhole agents support for debugging/testing (<link linkend="conf-agent-blackhole">agent_blackhole</link> directive)</para></listitem>
<listitem><para>added <link linkend="conf-max-filters">max_filters</link>, <link linkend="conf-max-filter-values">max_filter_values</link> directives (were hardcoded before)</para></listitem>
<listitem><para>added int64 expression evaluation path, automatic inference, and BIGINT() enforcer function</para></listitem>
<listitem><para>added crash handler for debugging (<option>crash_log_path</option> directive)</para></listitem>
<listitem><para>added MS SQL (aka SQL Server) source support (Windows only, <link linkend="conf-mssql-winauth">mssql_winauth</link> and mssql_unicode directives)</para></listitem>
<listitem><para>added indexer-side column unpacking feature (<link linkend="conf-unpack-zlib">unpack_zlib</link>, <link linkend="conf-unpack-mysqlcompress">unpack_mysqlcompress</link> directives)</para></listitem>
<listitem><para>added nested brackets and NOTs support to <link linkend="extended-syntax">query language</link>, rewritten query parser</para></listitem>
<listitem><para>added persistent connections support (<link linkend="api-func-open">Open()</link> and <link linkend="api-func-close">Close()</link> API calls)</para></listitem>
<listitem><para>added <link linkend="conf-index-exact-words">index_exact_words</link> feature, and exact form operator to query language ("hello =world")</para></listitem>
<listitem><para>added status variables support to SphinxSE (SHOW STATUS LIKE 'sphinx_%')</para></listitem>
<listitem><para>added <link linkend="conf-max-packet-size">max_packet_size</link> directive (was hardcoded at 8M before)</para></listitem>
<listitem><para>added UNIX socket support, and multi-interface support (<link linkend="conf-listen">listen</link> directive; see the configuration sketch after this list)</para></listitem>
<listitem><para>added star-syntax support to <link linkend="api-func-buildexcerpts">BuildExcerpts()</link> API call</para></listitem>
<listitem><para>added inplace inversion of .spa and .spp (<link linkend="conf-inplace-enable">inplace_enable</link> directive, 1.5-2x less disk space for indexing)</para></listitem>
<listitem><para>added builtin Czech stemmer (morphology=stem_cz)</para></listitem>
<listitem><para>added <link linkend="sort-expr">IDIV(), NOW(), INTERVAL(), IN() functions</link> to expressions</para></listitem>
<listitem><para>added index-level early-reject based on filters</para></listitem>
<listitem><para>added MVA updates feature (<link linkend="conf-mva-updates-pool">mva_updates_pool</link> directive)</para></listitem>
<listitem><para>added select-list feature with computed expressions support (see <link linkend="api-func-setselect">SetSelect()</link> API call, test.php --select switch), protocol 1.22</para></listitem>
<listitem><para>added integer expressions support (2x faster than float)</para></listitem>
<listitem><para>added multiforms support (multiple source words in wordforms file)</para></listitem>
<listitem><para>added <link linkend="api-func-setrankingmode">legacy rankers</link> (MATCH_ALL/MATCH_ANY/etc), removed legacy matching code (everything runs on V2 engine now)</para></listitem>
<listitem><para>added <link linkend="extended-syntax">field position limit</link> modifier to field operator (syntax: @title[50] hello world)</para></listitem>
<listitem><para>added killlist support (<link linkend="conf-sql-query-killlist">sql_query_killlist</link> directive, --merge-killlists switch)</para></listitem>
<listitem><para>added on-disk SPI support (ondisk_dict directive)</para></listitem>
<listitem><para>added indexer IO stats</para></listitem>
<listitem><para>added periodic .spa flush (<link linkend="conf-attr-flush-period">attr_flush_period</link> directive)</para></listitem>
<listitem><para>added config reload on SIGHUP</para></listitem>
<listitem><para>added per-query attribute overrides feature (see <link linkend="api-func-setoverride">SetOverride()</link> API call); protocol 1.21</para></listitem>
<listitem><para>added signed 64bit attrs support (<link linkend="conf-sql-attr-bigint">sql_attr_bigint</link> directive)</para></listitem>
<listitem><para>improved HTML stripper to also skip PIs (&lt;? ... ?&gt;, such as &lt;?php ... ?&gt;)</para></listitem>
<listitem><para>improved excerpts speed (up to 50x faster on big documents)</para></listitem>
<listitem><para>fixed a short window of searchd inaccessibility on startup (started listen()ing too early before)</para></listitem>
<listitem><para>fixed .spa loading on systems where read() is 2GB capped</para></listitem>
<listitem><para>fixed infixes vs morphology issues</para></listitem>
<listitem><para>fixed backslash escaping, added backslash to EscapeString()</para></listitem>
<listitem><para>fixed handling of over-2GB dictionary files (.spi)</para></listitem>
</itemizedlist>
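<para>An illustrative configuration fragment for some of the directives above; ports, paths, and queries are placeholders:</para>
<programlisting>
searchd
{
    # several listen directives may be given: TCP ports and UNIX sockets
    listen = 9312
    listen = localhost:9313
    listen = /var/run/searchd.sock

    # was hardcoded at 8M before
    max_packet_size = 16M
}

source src1
{
    # kill-list: ids that suppress matches coming from other (older) indexes
    sql_query_killlist = SELECT id FROM deleted_documents
}
</programlisting>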
</sect1>
<sect1 id="rel0981"><title>Version 0.9.8.1, 30 oct 2008</title>
<itemizedlist>
<listitem><para>added configure script to libsphinxclient</para></listitem>
<listitem><para>changed proximity/quorum operator syntax to require whitespace after length</para></listitem>
<listitem><para>fixed potential head process crash on SIGPIPE during "maxed out" message</para></listitem>
<listitem><para>fixed handling of incomplete remote replies (caused over-degraded distributed results, in rare cases)</para></listitem>
<listitem><para>fixed sending of big remote requests (caused distributed requests to fail, in rare cases)</para></listitem>
<listitem><para>fixed FD_SET() overflow (caused searchd to crash on startup, in rare cases)</para></listitem>
<listitem><para>fixed MVA vs distributed indexes (caused loss of 1st MVA value in result set)</para></listitem>
<listitem><para>fixed tokenizing of exceptions terminated by specials (eg. "GPS AT&T" in extended mode)</para></listitem>
<listitem><para>fixed buffer overrun in stemmer on overlong tokens occasionally emitted by proximity/quorum operator parser (caused crashes on certain proximity/quorum queries)</para></listitem>
<listitem><para>fixed wordcount ranker (could be dropping hits)</para></listitem>
<listitem><para>fixed --merge feature (numerous different fixes, caused broken indexes)</para></listitem>
<listitem><para>fixed --merge-dst-range performance</para></listitem>
<listitem><para>fixed prefix/infix generation for stopwords</para></listitem>
<listitem><para>fixed ignore_chars vs specials</para></listitem>
<listitem><para>fixed misplaced F_SETLKW check (caused certain build types, eg. RPM build on FC8, to fail)</para></listitem>
<listitem><para>fixed dictionary-defined charsets support in spelldump, added \x-style wordchars support</para></listitem>
<listitem><para>fixed Java API to properly send long strings (over 64K; eg. long document bodies for excerpts)</para></listitem>
<listitem><para>fixed Python API to accept offset/limit of 'long' type</para></listitem>
<listitem><para>fixed default ID range (that filtered out all 64-bit values) in Java and Python APIs</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel098"><title>Version 0.9.8, 14 jul 2008</title>
<bridgehead>Indexing</bridgehead>
<itemizedlist>
<listitem><para>added support for 64-bit document and keyword IDs, --enable-id64 switch to configure</para></listitem>
<listitem><para>added support for floating point attributes</para></listitem>
<listitem><para>added support for bitfields in attributes, <link linkend="conf-sql-attr-bool">sql_attr_bool</link> directive and bit-widths part in <link linkend="conf-sql-attr-uint">sql_attr_uint</link> directive</para></listitem>
<listitem><para>added support for multi-valued attributes (MVA; see the source sketch after this list)</para></listitem>
<listitem><para>added metaphone preprocessor</para></listitem>
<listitem><para>added libstemmer library support, provides stemmers for a number of additional languages</para></listitem>
<listitem><para>added xmlpipe2 source type, that supports arbitrary fields and attributes</para></listitem>
<listitem><para>added word form dictionaries, <link linkend="conf-wordforms">wordforms</link> directive (and spelldump utility)</para></listitem>
<listitem><para>added tokenizing exceptions, <link linkend="conf-exceptions">exceptions</link> directive</para></listitem>
<listitem><para>added an option to fully remove element contents to HTML stripper, <link linkend="conf-html-remove-elements">html_remove_elements</link> directive</para></listitem>
<listitem><para>added HTML entities decoder (with full XHTML1 set support) to HTML stripper</para></listitem>
<listitem><para>added per-index HTML stripping settings, <link linkend="conf-html-strip">html_strip</link>, <link linkend="conf-html-index-attrs">html_index_attrs</link>, and <link linkend="conf-html-remove-elements">html_remove_elements</link> directives</para></listitem>
<listitem><para>added IO load throttling, <link linkend="conf-max-iops">max_iops</link> and <link linkend="conf-max-iosize">max_iosize</link> directives</para></listitem>
<listitem><para>added SQL load throttling, <link linkend="conf-sql-ranged-throttle">sql_ranged_throttle</link> directive</para></listitem>
<listitem><para>added an option to index prefixes/infixes for given fields only, <link linkend="conf-prefix-fields">prefix_fields</link> and <link linkend="conf-infix-fields">infix_fields</link> directives</para></listitem>
<listitem><para>added an option to ignore certain characters (instead of just treating them as whitespace), <link linkend="conf-ignore-chars">ignore_chars</link> directive</para></listitem>
<listitem><para>added an option to increment word position on phrase boundary characters, <link linkend="conf-phrase-boundary">phrase_boundary</link> and <link linkend="conf-phrase-boundary-step">phrase_boundary_step</link> directives</para></listitem>
<listitem><para>added --merge-dst-range switch (and filters) to index merging feature (--merge switch)</para></listitem>
<listitem><para>added <link linkend="conf-mysql-connect-flags">mysql_connect_flags</link> directive (eg. to reduce indexing time MySQL network traffic and/or time)</para></listitem>
<listitem><para>improved ordinals sorting; now runs in fixed RAM</para></listitem>
<listitem><para>improved handling of documents with zero/NULL ids, now skipping them instead of aborting</para></listitem>
</itemizedlist>
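<para>An illustrative source fragment showing the new attribute types; column and table names are placeholders:</para>
<programlisting>
source src1
{
    # bit-width syntax: store forum_id in 9 bits
    sql_attr_uint  = forum_id:9
    sql_attr_bool  = is_deleted
    sql_attr_float = price

    # multi-valued attribute (MVA), fetched by a separate query
    sql_attr_multi = uint tag from query; SELECT id, tag FROM tags
}
</programlisting>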
<bridgehead>Search daemon</bridgehead>
<itemizedlist>
<listitem><para>added an option to unlink old index on successful rotation, <link linkend="conf-unlink-old">unlink_old</link> directive</para></listitem>
<listitem><para>added an option to keep index files open at all times (fixes subtle races on rotation), <link linkend="conf-preopen">preopen</link> and <link linkend="conf-preopen-indexes">preopen_indexes</link> directives</para></listitem>
<listitem><para>added an option to profile searchd disk I/O, --iostats command-line option</para></listitem>
<listitem><para>added an option to rotate index seamlessly (fully avoids query stalls), <link linkend="conf-seamless-rotate">seamless_rotate</link> directive</para></listitem>
<listitem><para>added HTML stripping support to excerpts (uses per-index settings)</para></listitem>
<listitem><para>added 'exact_phrase', 'single_passage', 'use_boundaries', 'weight_order' options to <link linkend="api-func-buildexcerpts">BuildExcerpts()</link> API call</para></listitem>
<listitem><para>added distributed attribute updates propagation</para></listitem>
<listitem><para>added distributed retries on master node side</para></listitem>
<listitem><para>added log reopen on SIGUSR1</para></listitem>
<listitem><para>added --stop switch (sends SIGTERM to running instance)</para></listitem>
<listitem><para>added Windows service mode, and --servicename switch</para></listitem>
<listitem><para>added Windows --rotate support</para></listitem>
<listitem><para>improved log timestamping, now with millisecond precision</para></listitem>
</itemizedlist>
<bridgehead>Querying</bridgehead>
<itemizedlist>
<listitem><para>added extended engine V2 (faster, cleaner, better; SPH_MATCH_EXTENDED2 mode)</para></listitem>
<listitem><para>added ranking modes support (V2 engine only; <link linkend="api-func-setrankingmode">SetRankingMode()</link> API call)</para></listitem>
<listitem><para>added quorum searching support to query language (V2 engine only; example: "any three of all these words"/3)</para></listitem>
<listitem><para>added query escaping support to query language, and <link linkend="api-func-escapestring">EscapeString()</link> API call</para></listitem>
<listitem><para>added multi-field syntax support to query language (example: "@(field1,field2) something"), and @@relaxed field checks option</para></listitem>
<listitem><para>added optional star-syntax ('word*') support in keywords, enable_star directive (for prefix/infix indexes only)</para></listitem>
<listitem><para>added full-scan support (query must be fully empty; can perform block-reject optimization)</para></listitem>
<listitem><para>added COUNT(DISTINCT(attr)) calculation support, <link linkend="api-func-setgroupdistinct">SetGroupDistinct()</link> API call</para></listitem>
<listitem><para>added group-by on MVA support, <link linkend="api-func-setarrayresult">SetArrayResult()</link> PHP API call</para></listitem>
<listitem><para>added per-index weights feature, <link linkend="api-func-setindexweights">SetIndexWeights()</link> API call</para></listitem>
<listitem><para>added geodistance support, <link linkend="api-func-setgeoanchor">SetGeoAnchor()</link> API call</para></listitem>
<listitem><para>added result set sorting by arbitrary expressions in run time (eg. "@weight+log(price)*2.5"), SPH_SORT_EXPR mode</para></listitem>
<listitem><para>added result set sorting by @custom compile-time sorting function (see src/sphinxcustomsort.inl)</para></listitem>
<listitem><para>added result set sorting by @random value</para></listitem>
<listitem><para>added result set merging for indexes with different schemas</para></listitem>
<listitem><para>added query comments support (3rd arg to <link linkend="api-func-query">Query()</link>/<link linkend="api-func-addquery">AddQuery()</link> API calls, copied verbatim to query log)</para></listitem>
<listitem><para>added keyword extraction support, <link linkend="api-func-buildkeywords">BuildKeywords()</link> API call</para></listitem>
<listitem><para>added binding field weights by name, <link linkend="api-func-setfieldweights">SetFieldWeights()</link> API call</para></listitem>
<listitem><para>added optional limit on query time, <link linkend="api-func-setmaxquerytime">SetMaxQueryTime()</link> API call</para></listitem>
<listitem><para>added optional limit on found matches count (4th arg to <link linkend="api-func-setlimits">SetLimits()</link> API call, so-called 'cutoff')</para></listitem>
</itemizedlist>
<bridgehead>APIs and SphinxSE</bridgehead>
<itemizedlist>
<listitem><para>added pure C API (libsphinxclient)</para></listitem>
<listitem><para>added Ruby API (thanks to Dmytro Shteflyuk)</para></listitem>
<listitem><para>added Java API</para></listitem>
<listitem><para>added SphinxSE support for MVAs (use varchar), floats (use float), 64bit docids (use bigint)</para></listitem>
<listitem><para>added SphinxSE options "floatrange", "geoanchor", "fieldweights", "indexweights", "maxquerytime", "comment", "host" and "port"; and support for "expr:CLAUSE"</para></listitem>
<listitem><para>improved SphinxSE max query size (using MySQL condition pushdown), up to 256K now</para></listitem>
</itemizedlist>
<bridgehead>General</bridgehead>
<itemizedlist>
<listitem><para>added scripting (shebang syntax) support to config files (example: #!/usr/bin/php in the first line; see the sketch after this list)</para></listitem>
<listitem><para>added unified config handling and validation to all programs</para></listitem>
<listitem><para>added unified documentation </para></listitem>
<listitem><para>added .spec file for RPM builds</para></listitem>
<listitem><para>added automated testing suite</para></listitem>
<listitem><para>improved index locking, now fcntl()-based instead of buggy file-existence-based</para></listitem>
<listitem><para>fixed unaligned RAM accesses, now works on SPARC and ARM</para></listitem>
</itemizedlist>
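<para>With shebang support, a config file can itself be a script: when the first line names an interpreter, the programs run the file through it and parse the script's output as the actual configuration. A minimal sketch in PHP (paths, source names, and directives are examples only):</para>
<programlisting>
#!/usr/bin/php
&lt;?php
// emit one source section per database; the printed text is
// what indexer and searchd actually parse as the configuration
foreach (array("products", "users") as $name)
{
    print "source $name\n";
    print "{\n";
    print "\ttype = mysql\n";
    print "\tsql_host = localhost\n";
    print "\tsql_db = $name\n";
    print "\tsql_query = SELECT id, title, body FROM documents\n";
    print "}\n\n";
}
?&gt;
</programlisting>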
<bridgehead id="rel098-fixes-since-rc2">Changes and fixes since 0.9.8-rc2</bridgehead>
<itemizedlist>
<listitem><para>added pure C API (libsphinxclient)</para></listitem>
<listitem><para>added Ruby API</para></listitem>
<listitem><para>added SetConnectTimeout() PHP API call</para></listitem>
<listitem><para>added allowed type check to UpdateAttributes() handler (bug #174)</para></listitem>
<listitem><para>added defensive MVA checks on index preload (protection against broken indexes, bug #168)</para></listitem>
<listitem><para>added sphinx-min.conf sample file</para></listitem>
<listitem><para>added --without-iconv switch to configure</para></listitem>
<listitem><para>removed redundant -lz dependency in searchd</para></listitem>
<listitem><para>removed erroneous "xmlpipe2 deprecated" warning</para></listitem>
<listitem><para>fixed EINTR handling in piped read (bug #166)</para></listitem>
<listitem><para>fixed query time to be finalized before logging and sending to the client (bug #153)</para></listitem>
<listitem><para>fixed attribute updates vs full-scan early-reject index (bug #149)</para></listitem>
<listitem><para>fixed gcc warnings (bug #160)</para></listitem>
<listitem><para>fixed mysql connection attempt vs pgsql source type (bug #165)</para></listitem>
<listitem><para>fixed 32-bit wraparound when preloading files over 2 GB</para></listitem>
<listitem><para>fixed "out of memory" message vs over 2 GB allocs (bug #116)</para></listitem>
<listitem><para>fixed unaligned RAM access detection on ARM (where unaligned reads do not crash but produce wrong results)</para></listitem>
<listitem><para>fixed missing full scan results in some cases</para></listitem>
<listitem><para>fixed several bugs in --merge, --merge-dst-range</para></listitem>
<listitem><para>fixed @geodist vs MultiQuery and filters, @expr vs MultiQuery</para></listitem>
<listitem><para>fixed GetTokenEnd() vs 1-grams (was causing crash in excerpts)</para></listitem>
<listitem><para>fixed sql_query_range to handle empty strings in addition to NULL strings (Postgres specific)</para></listitem>
<listitem><para>fixed morphology=none vs infixes</para></listitem>
<listitem><para>fixed case sensitive attributes names in UpdateAttributes()</para></listitem>
<listitem><para>fixed ext2 ranking vs. stopwords (now using atompos from query parser)</para></listitem>
<listitem><para>fixed EscapeString() call</para></listitem>
<listitem><para>fixed escaped specials (now handled as whitespace if not in charset)</para></listitem>
<listitem><para>fixed schema minimizer (now handles type/size mismatches)</para></listitem>
<listitem><para>fixed word stats in extended2; stemmed form is now returned</para></listitem>
<listitem><para>fixed spelldump case folding vs dictionary-defined character sets</para></listitem>
<listitem><para>fixed Postgres BOOLEAN handling </para></listitem>
<listitem><para>fixed enforced "inline" docinfo on empty indexes (normally ok, but index merge was really confused)</para></listitem>
<listitem><para>fixed rare count(distinct) out-of-bounds issue (it occasionally caused too high @distinct values)</para></listitem>
<listitem><para>fixed hangups on documents with id=DOCID_MAX in some cases</para></listitem>
<listitem><para>fixed rare crash in tokenizer (prefixed synonym vs. input stream eof)</para></listitem>
<listitem><para>fixed query parser vs "aaa (bbb ccc)|ddd" queries</para></listitem>
<listitem><para>fixed BuildExcerpts() request in Java API</para></listitem>
<listitem><para>fixed Postgres-specific memory leak</para></listitem>
<listitem><para>fixed handling of overshort keywords (less than min_word_len)</para></listitem>
<listitem><para>fixed HTML stripper (now emits space after indexed attributes)</para></listitem>
<listitem><para>fixed 32-field case in query parser</para></listitem>
<listitem><para>fixed rare count(distinct) vs. querying multiple local indexes vs. reusable sorter issue</para></listitem>
<listitem><para>fixed sorting of negative floats in SPH_SORT_EXTENDED mode</para></listitem>
</itemizedlist>
</sect1>
<sect1 id="rel097"><title>Version 0.9.7, 02 apr 2007</title>
<itemizedlist>
<listitem><para>added support for <option>sql_str2ordinal_column</option></para></listitem>
<listitem><para>added support for up to 5 sort-by attrs (in extended sorting mode; see the sketch after this list)</para></listitem>
<listitem><para>added support for separate groups sorting clause (in group-by mode)</para></listitem>
<listitem><para>added support for on-the-fly attribute updates (PRE-ALPHA; will change heavily; use for preliminary testing ONLY)</para></listitem>
<listitem><para>added support for zero/NULL attributes</para></listitem>
<listitem><para>added support for 0.9.7 features to SphinxSE</para></listitem>
<listitem><para>added support for n-grams (alpha, 1-grams only for now)</para></listitem>
<listitem><para>added support for warnings reported to client</para></listitem>
<listitem><para>added support for exclude-filters</para></listitem>
<listitem><para>added support for prefix and infix indexing (see <option>max_prefix_len</option>, <option>max_infix_len</option>)</para></listitem>
<listitem><para>added <option>@*</option> syntax to the query language (resets the current field, so subsequent keywords match in all fields again)</para></listitem>
<listitem><para>added removal of duplicate entries in query index order</para></listitem>
<listitem><para>added PHP API workarounds for PHP signed/unsigned braindamage</para></listitem>
<listitem><para>added locks to avoid two concurrent indexers working on same index</para></listitem>
<listitem><para>added check for existing attributes vs. <option>docinfo=none</option> case</para></listitem>
<listitem><para>improved groupby code a lot (better precision, and up to 25x faster in extreme cases)</para></listitem>
<listitem><para>improved error handling and reporting</para></listitem>
<listitem><para>improved handling of broken indexes (reports error instead of hanging/crashing)</para></listitem>
<listitem><para>improved <option>mmap()</option> limits for attributes and wordlists (now able to map over 4 GB on x64 and over 2 GB on x32 where possible)</para></listitem>
<listitem><para>improved <option>malloc()</option> pressure in head daemon (search time should not degrade with time any more)</para></listitem>
<listitem><para>improved <filename>test.php</filename> command line options</para></listitem>
<listitem><para>improved error reporting (distributed query, broken index etc issues now reported to client)</para></listitem>
<listitem><para>changed default network packet size to 8M, added extra checks</para></listitem>
<listitem><para>fixed division by zero in BM25 on 1-document collections (in extended matching mode)</para></listitem>
<listitem><para>fixed <filename>.spl</filename> files getting unlinked</para></listitem>
<listitem><para>fixed crash in schema compatibility test</para></listitem>
<listitem><para>fixed UTF-8 Russian stemmer</para></listitem>
<listitem><para>fixed requested matches count when querying distributed agents</para></listitem>
<listitem><para>fixed signed vs. unsigned issues everywhere (ranged queries, CLI search output, and obtaining docid)</para></listitem>
<listitem><para>fixed potential crashes vs. negative query offsets</para></listitem>
<listitem><para>fixed 0-match docs vs. extended mode vs. stats</para></listitem>
<listitem><para>fixed group/timestamp filters being ignored if querying from older clients</para></listitem>
<listitem><para>fixed docs to mention <option>pgsql</option> source type</para></listitem>
<listitem><para>fixed issues with explicit '&amp;' in extended matching mode</para></listitem>
<listitem><para>fixed wrong assertion in SBCS encoder</para></listitem>
<listitem><para>fixed crashes with no-attribute indexes after rotate</para></listitem>
</itemizedlist>
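<para>A minimal PHP sketch of the query-side additions in this release (assuming the bundled <filename>sphinxapi.php</filename> client; index and attribute names are hypothetical):</para>
<programlisting>
require_once "sphinxapi.php";

$cl = new SphinxClient();

// extended sorting mode, with multiple (up to 5) sort-by attributes
$cl->SetSortMode(SPH_SORT_EXTENDED, "@weight DESC, price ASC, @id DESC");

// exclude-filter: true as the 3rd argument inverts the filter
$cl->SetFilter("group_id", array(5, 7, 11), true);

// @* resets field matching back to all fields
$result = $cl->Query("@title hello @* world", "test1");
</programlisting>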
</sect1>
<sect1 id="rel097rc2"><title>Version 0.9.7-rc2, 15 dec 2006</title>
<itemizedlist>
<listitem><para>added support for extended matching mode (query language); example queries appear after this list</para></listitem>
<listitem><para>added support for extended sorting mode (sorting clauses)</para></listitem>
<listitem><para>added support for SBCS excerpts</para></listitem>
<listitem><para>added <option>mmap()ing</option> for attributes and wordlist (improves search time, speeds up <option>fork()</option> greatly)</para></listitem>
<listitem><para>fixed attribute name handling to be case insensitive</para></listitem>
<listitem><para>fixed default compiler options to simplify post-mortem debugging (added <option>-g</option>, removed <option>-fomit-frame-pointer</option>)</para></listitem>
<listitem><para>fixed rare memory leak</para></listitem>
<listitem><para>fixed "hello hello" queries in "match phrase" mode</para></listitem>
<listitem><para>fixed issue with excerpts, texts and overlong queries</para></listitem>
<listitem><para>fixed logging of multiple index names (no longer tokenized)</para></listitem>
<listitem><para>fixed trailing stopword not flushed from tokenizer</para></listitem>
<listitem><para>fixed boolean evaluation</para></listitem>
<listitem><para>fixed pidfile being wrongly <option>unlink()ed</option> on <option>bind()</option> failure</para></listitem>
<listitem><para>fixed <option>--with-mysql-includes/libs</option> (they conflicted with well-known paths)</para></listitem>
<listitem><para>fixes for 64-bit platforms</para></listitem>
</itemizedlist>
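<para>A few annotated examples of the new extended matching mode (field and operator syntax as documented in the query language section; field names are hypothetical):</para>
<programlisting>
hello world                  (implicit AND)
hello | world                (OR)
hello -world                 (NOT)
@title hello @body world     (per-field search)
"hello world"~10             (proximity within a 10-word window)
</programlisting>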
</sect1>
<sect1 id="rel097rc"><title>Version 0.9.7-rc1, 26 oct 2006</title>
<itemizedlist>
<listitem><para>added alpha index merging code</para></listitem>
<listitem><para>added an option to decrease <option>max_matches</option> per query</para></listitem>
<listitem><para>added an option to specify IP address for searchd to listen on</para></listitem>
<listitem><para>added support for an unlimited number of configured sources and indexes</para></listitem>
<listitem><para>added support for group-by queries</para></listitem>
<listitem><para>added support for /2 range modifier in charset_table (see the example after this list)</para></listitem>
<listitem><para>added support for an arbitrary number of document attributes</para></listitem>
<listitem><para>added logging filter count and index name</para></listitem>
<listitem><para>added <option>--with-debug</option> option to configure to compile in debug mode</para></listitem>
<listitem><para>added <option>-DNDEBUG</option> when compiling in default mode</para></listitem>
<listitem><para>improved search time (added doclist size hints, in-memory wordlist cache, and used VLB coding everywhere)</para></listitem>
<listitem><para>improved (refactored) SQL driver code (adding new drivers should be very easy now)</para></listitem>
<listitem><para>improved excerpts generation</para></listitem>
<listitem><para>fixed issue with empty sources and ranged queries</para></listitem>
<listitem><para>fixed querying purely remote distributed indexes</para></listitem>
<listitem><para>fixed suffix length check in English stemmer in some cases</para></listitem>
<listitem><para>fixed UTF-8 decoder for codes over U+20000 (for CJK)</para></listitem>
<listitem><para>fixed UTF-8 encoder for 3-byte sequences (for CJK)</para></listitem>
<listitem><para>fixed overshort (less than <option>min_word_len</option>) words prepended to next field</para></listitem>
<listitem><para>fixed source connection order (indexer does not connect to all sources at once now)</para></listitem>
<listitem><para>fixed line numbering in config parser</para></listitem>
<listitem><para>fixed some issues with index rotation</para></listitem>
</itemizedlist>
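<para>The <option>/2</option> range modifier added in this release is a shorthand for "checkerboard" ranges, in which every pair of adjacent codes maps to the second (typically lowercase) code of the pair; a sketch (the exact range shown is an example):</para>
<programlisting>
# U+100..U+2AF/2 stands for U+100->U+101, U+101->U+101,
# U+102->U+103, U+103->U+103, and so on
charset_table = 0..9, A..Z->a..z, _, a..z, U+100..U+2AF/2
</programlisting>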
</sect1>
<sect1 id="rel096"><title>Version 0.9.6, 24 jul 2006</title>
<itemizedlist>
<listitem><para>added support for empty indexes</para></listitem>
<listitem><para>added support for multiple sql_query_pre/post/post_index (see the snippet after this list)</para></listitem>
<listitem><para>fixed timestamp ranges filter in "match any" mode</para></listitem>
<listitem><para>fixed configure issues with --without-mysql and --with-pgsql options</para></listitem>
<listitem><para>fixed building on Solaris 9</para></listitem>
</itemizedlist>
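<para>Multiple pre-queries are now listed one per line and executed in order before the main fetch query; a minimal sketch (query texts are examples only):</para>
<programlisting>
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type = OFF
sql_query     = SELECT id, title, body FROM documents
</programlisting>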
</sect1>
<sect1 id="rel096rc1"><title>Version 0.9.6-rc1, 26 jun 2006</title>
<itemizedlist>
<listitem><para>added boolean queries support (experimental, beta version)</para></listitem>
<listitem><para>added simple file-based query cache (experimental, beta version)</para></listitem>
<listitem><para>added storage engine for MySQL 5.0 and 5.1 (experimental, beta version)</para></listitem>
<listitem><para>added GNU style <filename>configure</filename> script</para></listitem>
<listitem><para>added new searchd protocol (all binary, and should be backwards compatible)</para></listitem>
<listitem><para>added distributed searching support to searchd (see the config sketch after this list)</para></listitem>
<listitem><para>added PostgreSQL driver</para></listitem>
<listitem><para>added excerpts generation</para></listitem>
<listitem><para>added <option>min_word_len</option> option to index</para></listitem>
<listitem><para>added <option>max_matches</option> option to searchd, removed hardcoded MAX_MATCHES limit</para></listitem>
<listitem><para>added initial documentation, and a working <filename>example.sql</filename></para></listitem>
<listitem><para>added support for multiple sources per index</para></listitem>
<listitem><para>added soundex support</para></listitem>
<listitem><para>added group ID ranges support</para></listitem>
<listitem><para>added <option>--stdin</option> command-line option to search utility</para></listitem>
<listitem><para>added <option>--noprogress</option> option to indexer</para></listitem>
<listitem><para>added <option>--index</option> option to search</para></listitem>
<listitem><para>fixed UTF-8 decoder (3-byte codepoints did not work)</para></listitem>
<listitem><para>fixed PHP API to handle big result sets faster</para></listitem>
<listitem><para>fixed config parser to handle empty values properly</para></listitem>
<listitem><para>fixed redundant <code>time(NULL)</code> calls in time-segments mode</para></listitem>
</itemizedlist>
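<para>Distributed searching is configured through a dedicated index type that fans a query out to local and remote indexes; a minimal sketch (host, port, and index names are hypothetical):</para>
<programlisting>
index dist1
{
    type  = distributed
    local = test1
    agent = searchbox2:3312:test2
}
</programlisting>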
</sect1>
</appendix>
</book>