------------------------
HAProxy Management Guide
------------------------
version 3.3
This document describes how to start, stop, manage, and troubleshoot HAProxy,
as well as some known limitations and traps to avoid. It does not describe how
to configure it (for this please read configuration.txt).
Note to documentation contributors :
This document is formatted with 80 columns per line, with even number of
spaces for indentation and without tabs. Please follow these rules strictly
so that it remains easily printable everywhere. If you add sections, please
update the summary below for easier searching.
Summary
-------
1. Prerequisites
2. Quick reminder about HAProxy's architecture
3. Starting HAProxy
4. Stopping and restarting HAProxy
5. File-descriptor limitations
6. Memory management
7. CPU usage
8. Logging
9. Statistics and monitoring
9.1. CSV format
9.2. Typed output format
9.3. Unix Socket commands
9.4. Master CLI
9.4.1. Master CLI commands
9.5. Stats-file
10. Tricks for easier configuration management
11. Well-known traps to avoid
12. Debugging and performance issues
13. Security considerations
13.1. Linux capabilities support
1. Prerequisites
----------------
In this document it is assumed that the reader has sufficient administration
skills on a UNIX-like operating system, uses the shell on a daily basis and is
familiar with troubleshooting utilities such as strace and tcpdump.
2. Quick reminder about HAProxy's architecture
----------------------------------------------
HAProxy is a multi-threaded, event-driven, non-blocking daemon. This means it
uses event multiplexing to schedule all of its activities instead of relying on
the system to schedule between multiple activities. Most of the time it runs as
a single process, so the output of "ps aux" on a system will report only one
"haproxy" process, unless a soft reload is in progress and an older process is
finishing its job in parallel to the new one. It is thus always easy to trace
its activity using the strace utility. In order to scale with the number of
available processors, by default haproxy will start one worker thread per
processor it is allowed to run on. Unless explicitly configured differently,
the incoming traffic is spread over all these threads, all running the same
event loop. Great care is taken to limit inter-thread dependencies to the
strict minimum, so as to try to achieve near-linear scalability. This has some
impacts, such as the fact that a given connection is served by a single thread.
Thus in order to use all available processing capacity, it is necessary to have
at least as many connections as there are threads, which is almost always
granted.
HAProxy is designed to isolate itself into a chroot jail during startup, where
it cannot perform any file-system access at all. This is also true for the
libraries it depends on (eg: libc, libssl, etc). The immediate effect is that
a running process will not be able to reload a configuration file to apply
changes, instead a new process will be started using the updated configuration
file. Some other less obvious effects are that some timezone files or resolver
files the libc might attempt to access at run time will not be found, though
this should generally not happen as they're not needed after startup. A nice
consequence of this principle is that the HAProxy process is totally stateless,
and no cleanup is needed after it's killed, so any killing method that works
will do the right thing.
HAProxy doesn't write log files, but it relies on the standard syslog protocol
to send logs to a remote server (which is often located on the same system).
HAProxy uses its internal clock to enforce timeouts; it is derived from the
system's time but corrected for unexpected drift. This is done by limiting
the time spent waiting in poll() for an event, and measuring the time it really
took. In practice it never waits more than one second. This explains why, when
running strace over a completely idle process, periodic calls to poll() (or any
of its variants) surrounded by two gettimeofday() calls are noticed. They are
normal, completely harmless and so cheap that the load they imply is totally
undetectable at the system scale, so there's nothing abnormal there. Example :
16:35:40.002320 gettimeofday({1442759740, 2605}, NULL) = 0
16:35:40.002942 epoll_wait(0, {}, 200, 1000) = 0
16:35:41.007542 gettimeofday({1442759741, 7641}, NULL) = 0
16:35:41.007998 gettimeofday({1442759741, 8114}, NULL) = 0
16:35:41.008391 epoll_wait(0, {}, 200, 1000) = 0
16:35:42.011313 gettimeofday({1442759742, 11411}, NULL) = 0
HAProxy is a TCP proxy, not a router. It deals with established connections that
have been validated by the kernel, and not with packets of any form nor with
sockets in other states (eg: no SYN_RECV nor TIME_WAIT), though their existence
may prevent it from binding a port. It relies on the system to accept incoming
connections and to initiate outgoing connections. An immediate effect of this is
that there is no relation between packets observed on the two sides of a
forwarded connection, which can differ in size, number and even address family.
Since a connection may only be accepted from a socket in LISTEN state, all the
sockets it is listening to are necessarily visible using the "netstat" utility
to show listening sockets. Example :
# netstat -ltnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address     State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*           LISTEN   1629/sshd
tcp        0      0 0.0.0.0:80         0.0.0.0:*           LISTEN   2847/haproxy
tcp        0      0 0.0.0.0:443        0.0.0.0:*           LISTEN   2847/haproxy
3. Starting HAProxy
-------------------
HAProxy is started by invoking the "haproxy" program with a number of arguments
passed on the command line. The actual syntax is :
$ haproxy [<options>]*
where [<options>]* is any number of options. An option always starts with '-'
followed by one or more letters, and possibly followed by one or multiple extra
arguments. Without any option, HAProxy displays the help page with a reminder
about supported options. Available options may vary slightly based on the
operating system. A fair number of these options overlap with an equivalent one
in the "global" section. In this case, the command line always has precedence
over the configuration file, so that the command line can be used to quickly
enforce some settings without touching the configuration files. The current
list of options is :
-- <cfgfile>* : all the arguments following "--" are paths to configuration
file/directory to be loaded and processed in the declaration order. It is
mostly useful when relying on the shell to load many files that are
numerically ordered. See also "-f". The difference between "--" and "-f" is
that one "-f" must be placed before each file name, while a single "--" is
needed before all file names. Both options can be used together, the
command line ordering still applies. When more than one file is specified,
each file must start on a section boundary, so the first keyword of each
file must be one of "global", "defaults", "peers", "listen", "frontend",
"backend", and so on. A file cannot contain just a server list for example.
-f <cfgfile|cfgdir> : adds <cfgfile> to the list of configuration files to be
loaded. If <cfgdir> is a directory, all the files (and only files) it
contains are added in lexical order (using LC_COLLATE=C) to the list of
configuration files to be loaded ; only files with ".cfg" extension are
added, only non hidden files (not prefixed with ".") are added.
Configuration files are loaded and processed in their declaration order.
This option may be specified multiple times to load multiple files. See
also "--". The difference between "--" and "-f" is that one "-f" must be
placed before each file name, while a single "--" is needed before all file
names. Both options can be used together, the command line ordering still
applies. When more than one file is specified, each file must start on a
section boundary, so the first keyword of each file must be one of
"global", "defaults", "peers", "listen", "frontend", "backend", and so on.
A file cannot contain just a server list for example.
-C <dir> : changes to directory <dir> before loading configuration
files. This is useful when using relative paths. Beware when using
wildcards after "--", as they are in fact expanded by the shell before
haproxy is started.
-D : start as a daemon. The process detaches from the current terminal after
forking, and errors are not reported anymore in the terminal. It is
equivalent to the "daemon" keyword in the "global" section of the
configuration. It is recommended to always force it in any init script so
that a faulty configuration doesn't prevent the system from booting.
-L <name> : change the local peer name to <name>, which defaults to the local
hostname. This is used only with peers replication. You can use the
variable $HAPROXY_LOCALPEER in the configuration file to reference the
peer name.
-N <limit> : sets the default per-proxy maxconn to <limit> instead of the
builtin default value (usually 2000). Only useful for debugging.
-V : enable verbose mode (disables quiet mode). Reverts the effect of "-q" or
"quiet".
-W : master-worker mode. It is equivalent to the "master-worker" keyword in
the "global" section of the configuration. This mode will launch a "master"
which will monitor the "workers". Using this mode, you can reload HAProxy
directly by sending a SIGUSR2 signal to the master. The master-worker mode
is compatible either with the foreground or daemon mode. It is
recommended to use this mode with multiprocess and systemd.
-Ws : master-worker mode with support of `notify` type of systemd service.
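For example, a systemd service of type "notify" will typically start haproxy
with "-Ws" along these lines (the paths below are only illustrative and depend
on the distribution's packaging) :
Type=notify
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecReload=/bin/kill -USR2 $MAINPID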
-4 : force DNS resolvers to query and accept IPv4 addresses only ("A"
records). This can be used when facing difficulties in certain
environments lacking end-to-end dual-stack connectivity. It overrides
the global "dns-accept-family" directive and forces it to "ipv4".
-c : only performs a check of the configuration files and exits before trying
to bind. The exit status is zero if everything is OK, or non-zero if an
error is encountered. Presence of warnings will be reported if any.
By default this option does not report a success message. Combined with
"-V" this will print the message "Configuration file is valid" upon
success.
Scripts must use the exit status to determine the success of the
command.
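As an illustration, an init or deployment script may rely on this exit status
as follows (the configuration path is only an example) :
if haproxy -c -q -f /etc/haproxy/haproxy.cfg; then
    echo "configuration is valid"
else
    echo "configuration check failed" >&2
    exit 1
fi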
-cc : evaluates a condition as used within a conditional block of the
configuration. The exit status is zero if the condition is true, 1 if the
condition is false or 2 if an error is encountered.
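For example, assuming the "version_atleast" predicate documented for ".if"
configuration blocks, the following illustrative test checks whether the
binary is recent enough before loading a configuration relying on it :
./haproxy -cc 'version_atleast(2.5)' && echo "2.5 or newer"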
-d : enable debug mode. This disables daemon mode, forces the process to stay
in foreground and to show incoming and outgoing events. It must never be
used in an init script.
-dC[key] : dump the configuration file. It is performed after the lines are
tokenized, so comments are stripped and indenting is forced. If a non-zero
key is specified, lines are truncated before sensitive/confidential fields,
and identifiers and addresses are emitted hashed with this key using the
same algorithm as the one used by the anonymized mode on the CLI. This
means that the output may safely be shared with a developer who needs it
to figure what's happening in a dump that was anonymized using the same
key. Please also see the CLI's "set anon" command.
-dD : enable diagnostic mode. This mode will output extra warnings about
suspicious configuration statements. This will never prevent startup even in
"zero-warning" mode nor change the exit status code.
-dF : disable data fast-forward. This mechanism optimizes data forwarding by
passing data directly from one side to the other without waking the stream
up. This option disables the optimization. Note that it also disables any
kernel TCP splicing. This command is not meant for regular use, it will
generally only be suggested by developers during complex debugging sessions.
-dG : disable use of getaddrinfo() to resolve host names into addresses. It
can be used when suspecting that getaddrinfo() doesn't work as expected.
This option was made available because many bogus implementations of
getaddrinfo() exist on various systems and cause anomalies that are
difficult to troubleshoot.
-dI : enable the insecure fork. This is the equivalent of the
"insecure-fork-wanted" in the global section. It can be useful when running
all the reg-tests with ASAN which need to fork addr2line to resolve the
addresses.
-dK<class[,class]*> : dumps the list of registered keywords in each class.
The list of classes is available with "-dKhelp". All classes may be dumped
using "-dKall", otherwise a selection of those shown in the help can be
specified as a comma-delimited list. The output format will vary depending
on what class of keywords is being dumped (e.g. "cfg" will show the known
configuration keywords in a format resembling the config file format while
"smp" will show sample fetch functions prefixed with a compatibility matrix
with each rule set). These may rarely be used as-is by humans but can be of
great help for external tools that try to detect the appearance of new
keywords at certain places to automatically update some documentation,
syntax highlighting files, configuration parsers, API etc. The output
format may evolve a bit over time so it is really recommended to use this
output mostly to detect differences with previous archives. Note that not
all keywords are listed because many keywords have existed long before the
different keyword registration subsystems were created, and they do not
appear there. However since new keywords are only added via the modern
mechanisms, it's reasonably safe to assume that this output may be used to
detect language additions with a good accuracy. The keywords are only
dumped after the configuration is fully parsed, so that even dynamically
created keywords can be dumped. A good way to dump and exit is to run a
silent config check on an existing configuration:
./haproxy -dKall -q -c -f foo.cfg
If no configuration file is available, using "-f /dev/null" will work as
well to dump all default keywords, but then the return status will not be
zero since there will be no listener, and will have to be ignored.
-dL : dumps the list of dynamic shared libraries that are loaded at the end
of the config processing. This will generally also include deep dependencies
such as anything loaded from Lua code for example, as well as the executable
itself. The list is printed in a format that ought to be easy enough to
sanitize to directly produce a tarball of all dependencies. Since it doesn't
stop the program's startup, it is recommended to only use it in combination
with "-c" and "-q" where only the list of loaded objects will be displayed
(or nothing in case of error). In addition, keep in mind that when providing
such a package to help with a core file analysis, most libraries are in fact
symbolic links that need to be dereferenced when creating the archive:
./haproxy -W -q -c -dL -f foo.cfg | tar -T - -hzcf archive.tgz
When started in verbose mode (-V) the shared libraries' address ranges are
also enumerated, unless the quiet mode is in use (-q).
-dM[<byte>[,]][help|options,...] : forces memory poisoning, and/or changes
other memory debugging options. Memory poisoning means that each and every
memory region allocated with malloc() or pool_alloc() will be filled with
<byte> before being passed to the caller. When <byte> is not specified, it
defaults to 0x50 ('P'). While this slightly slows down operations, it is
useful to reliably trigger issues resulting from missing initializations in
the code that cause random crashes. Note that -dM0 has the effect of
turning any malloc() into a calloc(). In any case if a bug appears or
disappears when using this option it means there is a bug in haproxy, so
please report it. A number of other options are available either alone or
after a comma following the byte. The special option "help" will list the
currently supported options and their current value. Each debugging option
may be forced on or off. The most optimal options are usually chosen at
build time based on the operating system and do not need to be adjusted,
unless suggested by a developer. An example combining several of these
options is shown after the list. Supported debugging options include
(set/clear):
- fail / no-fail:
This enables randomly failing memory allocations, in conjunction with
the global "tune.fail-alloc" setting. This is used to detect missing
error checks in the code. Setting the option presets the ratio to 1%
failure rate.
- no-merge / merge:
By default, pools of very similar sizes are merged, resulting in more
efficiency, but this complicates the analysis of certain memory dumps.
This option allows to disable this mechanism, and may slightly increase
the memory usage.
- cold-first / hot-first:
In order to optimize the CPU cache hit ratio, by default the most
recently released objects ("hot") are recycled for new allocations.
But doing so also complicates analysis of memory dumps and may hide
use-after-free bugs. This option allows to instead pick the coldest
objects first, which may result in a slight increase of CPU usage.
- integrity / no-integrity:
When this option is enabled, memory integrity checks are enabled on
the allocated area to verify that it hasn't been modified since it was
last released. This works best with "no-merge", "cold-first" and "tag".
Enabling this option will slightly increase the CPU usage.
- backup / no-backup:
This option performs a copy of each released object at release time,
allowing developers to inspect them. It also performs a comparison at
allocation time to detect if anything changed in between, indicating a
use-after-free condition. This doubles the memory usage and slightly
increases the CPU usage (similar to "integrity"). If combined with
"integrity", it still duplicates the contents but doesn't perform the
comparison (which is performed by "integrity"). Just like "integrity",
it works best with "no-merge", "cold-first" and "tag".
- no-global / global:
Depending on the operating system, a process-wide global memory cache
may be enabled if it is estimated that the standard allocator is too
slow or inefficient with threads. This option allows to forcefully
disable it or enable it. Disabling it may result in a CPU usage
increase with inefficient allocators. Enabling it may result in a
higher memory usage with efficient allocators.
- no-cache / cache:
Each thread uses a very fast local object cache for allocations, which
is always enabled by default. This option allows to disable it. Since
the global cache also passes via the local caches, this will
effectively result in disabling all caches and allocating directly from
the default allocator. This may result in a significant increase of CPU
usage, but may also result in small memory savings on tiny systems.
- caller / no-caller:
Enabling this option reserves some extra space in each allocated object
to store the address of the last caller that allocated or released it.
This helps developers go back in time when analysing memory dumps and
to guess how something unexpected happened.
- tag / no-tag:
Enabling this option reserves some extra space in each allocated object
to store a tag that allows to detect bugs such as double-free, freeing
an invalid object, and buffer overflows. It offers much stronger
reliability guarantees at the expense of 4 or 8 extra bytes per
allocation. It usually is the first step to detect memory corruption.
- poison / no-poison:
Enabling this option will fill allocated objects with a fixed pattern
that will make sure that some accidental values such as 0 will not be
present if a newly added field was mistakenly forgotten in an
initialization routine. Such bugs tend to rarely reproduce, especially
when pools are not merged. This is normally enabled by directly passing
the byte's value to -dM but using this option allows to disable/enable
use of a previously set value.
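For example, the following illustrative command starts haproxy in the
foreground with memory poisoning using byte 0x55 and a few of the options
above enabled (the combination is only a suggestion for a debugging session,
"integrity" working best with "no-merge", "cold-first" and "tag") :
./haproxy -db -dM0x55,no-merge,tag,integrity,cold-first -f foo.cfg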
-dR : disable SO_REUSEPORT socket option on listening ports. It is equivalent
to the "global" section's "noreuseport" keyword. This may be applied in
multi-threading scenarios, when load distribution issues observed among the
haproxy threads (could be monitored with top).
-dS : disable use of the splice() system call. It is equivalent to the
"global" section's "nosplice" keyword. This may be used when splice() is
suspected to behave improperly or to cause performance issues, or when
using strace to see the forwarded data (which do not appear when using
splice()).
-dT : disable the use of ktls. It is equivalent to the "global" section's
keyword "noktls". It is mostly useful when suspecting a bug related to
ktls.
-dV : disable SSL verify on the server side. It is equivalent to having
"ssl-server-verify none" in the "global" section. This is useful when
trying to reproduce production issues out of the production
environment. Never use this in an init script as it degrades SSL security
to the servers.
-dW : if set, haproxy will refuse to start if any warning was emitted while
processing the configuration. This helps detect subtle mistakes and keep the
configuration clean and portable across versions. It is recommended to set
this option in service scripts when configurations are managed by humans,
but it is recommended not to use it with generated configurations, which
tend to emit more warnings. It may be combined with "-c" to cause warnings
in checked configurations to fail. This is equivalent to global option
"zero-warning".
-dZ : disable forwarding of data in "zero-copy" mode. It is equivalent to the
"global" section's "tune.disable-zero-copy-forwarding" keyword. This may be
helpful in case of issues with data loss or data integrity, or when using
strace to see the forwarded data, as it also disables any kernel tcp
splicing.
-db : disable background mode and multi-process mode. The process remains in
foreground. It is mainly used during development or during small tests, as
Ctrl-C is enough to stop the process. Never use it in an init script.
-dc : enable CPU affinity debugging. The list of selected and evicted CPUs as
well as their topology will be reported before starting.
-de : disable the use of the "epoll" poller. It is equivalent to the "global"
section's keyword "noepoll". It is mostly useful when suspecting a bug
related to this poller. On systems supporting epoll, the fallback will
generally be the "poll" poller.
-dk : disable the use of the "kqueue" poller. It is equivalent to the
"global" section's keyword "nokqueue". It is mostly useful when suspecting
a bug related to this poller. On systems supporting kqueue, the fallback
will generally be the "poll" poller.
-dp : disable the use of the "poll" poller. It is equivalent to the "global"
section's keyword "nopoll". It is mostly useful when suspecting a bug
related to this poller. On systems supporting poll, the fallback will
generally be the "select" poller, which cannot be disabled and is limited
to 1024 file descriptors.
-dr : ignore server address resolution failures. It is very common when
validating a configuration out of production not to have access to the same
resolvers and to fail on server address resolution, making it difficult to
test a configuration. This option simply appends the "none" method to the
list of address resolution methods for all servers, ensuring that even if
the libc fails to resolve an address, the startup sequence is not
interrupted.
-dt [<trace_desc>,...] : activates traces on stderr. Without argument, this
enables all trace sources on error level. This can notably be useful to
detect protocol violations from clients or servers. An optional argument
can be used to specify a list of various trace configurations using ',' as
separator. Each element activates one or all trace sources. Additionally,
level and verbosity can be optionally specified on each element using ':'
as inner separator with trace name. When entering an invalid verbosity or
level name, the list of available keywords is presented. For example it can
be convenient to pass 'help' for each field to consult the list first.
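For example, the following illustrative invocation starts haproxy in the
foreground with all trace sources enabled at the default error level, which
is often enough to spot protocol violations :
./haproxy -db -f foo.cfg -dt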
-dv : disable the use of the "evports" poller. It is equivalent to the
"global" section's keyword "noevports". It is mostly useful when suspecting
a bug related to this poller. On systems supporting event ports (SunOS
derived from Solaris 10 and later), the fallback will generally be the
"poll" poller.
-m <limit> : limit the allocatable memory, which is used to keep the
process's data, to <limit> megabytes. This may cause some connection refusals or some
slowdowns depending on the amount of memory needed for normal operations.
This is mostly used to force haproxy process to work in a constrained
resource consumption scenario. It is important to note that the memory is
not shared between haproxy processes and a child process created via fork()
system call inherits its parent's resource limits. So, in a master-worker
mode this memory limit is separately applied to the master and its forked
worker process.
-n <limit> : limits the per-process connection limit to <limit>. This is
equivalent to the global section's keyword "maxconn". It has precedence
over this keyword. This may be used to quickly force lower limits to avoid
a service outage on systems where resource limits are too low.
-p <file> : write all processes' pids into <file> during startup. This is
equivalent to the "global" section's keyword "pidfile". The file is opened
before entering the chroot jail, and after doing the chdir() implied by
"-C". Each pid appears on its own line.
-q : set "quiet" mode. This disables the output messages. It can be used in
combination with "-c" to just check if a configuration file is valid or not.
-S <bind>[,bind_options...]: in master-worker mode, bind a master CLI, which
allows access to every process, running or leaving ones.
For security reasons, it is recommended to bind the master CLI to a local
UNIX socket. The bind options are the same as the keyword "bind" in
the configuration file with words separated by commas instead of spaces.
Note that this socket can't be used to retrieve the listening sockets from
an old process during a seamless reload.
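For example, the following illustrative command starts haproxy in
master-worker mode with a master CLI bound to a local UNIX socket with
restricted permissions (the path and mode are only an example) :
./haproxy -W -S /var/run/haproxy-master.sock,mode,600 -f foo.cfg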
-sf <pid>* : send the "finish" signal (SIGUSR1) to older processes after boot
completion to ask them to finish what they are doing and to leave. <pid>
is a list of pids to signal (one per argument). The list ends on any
option starting with a "-". It is not a problem if the list of pids is
empty, so that it can be built on the fly based on the result of a command
like "pidof" or "pgrep".
-st <pid>* : send the "terminate" signal (SIGTERM) to older processes after
boot completion to terminate them immediately without finishing what they
were doing. <pid> is a list of pids to signal (one per argument). The list
ends on any option starting with a "-". It is not a problem if the list
of pids is empty, so that it can be built on the fly based on the result of
a command like "pidof" or "pgrep".
-v : report the version and build date.
-vv : display the version, build options, libraries versions and usable
pollers. This output is systematically requested when filing a bug report.
-x <unix_socket> : connect to the specified socket and try to retrieve any
listening sockets from the old process, and use them instead of trying to
bind new ones. This is useful to avoid missing any new connection when
reloading the configuration on Linux.
Without master-worker mode, the capability must be enabled on the stats
socket using "expose-fd listeners" in your configuration.
In master-worker mode, it does not need "expose-fd listeners"; the master
will automatically use this option upon a reload with the "sockpair@"
syntax, which allows the master to connect directly to a worker without using
any stats socket declared in the configuration. If you want to disable this,
you can pass -x /dev/null.
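For example, assuming a stats socket declared with "expose-fd listeners" at
the path below (paths are only an example), a reload preserving the listening
sockets may look like :
haproxy -f /etc/haproxy.cfg -D -p /var/run/haproxy.pid \
-sf $(cat /var/run/haproxy.pid) -x /var/run/haproxy.sock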
A safe way to start HAProxy from an init file consists in forcing the daemon
mode, storing existing pids to a pid file and using this pid file to notify
older processes to finish before leaving :
haproxy -f /etc/haproxy.cfg \
-D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
When the configuration is split into a few specific files (eg: tcp vs http),
it is recommended to use the "-f" option :
haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
-f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
-f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
-D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
When an unknown number of files is expected, such as customer-specific files,
it is recommended to assign them a name starting with a fixed-size sequence
number and to use "--" to load them, possibly after loading some defaults :
haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
-f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
-f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
-D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) \
-f /etc/haproxy/default-customers.cfg -- /etc/haproxy/customers/*
Sometimes a failure to start may happen for whatever reason. Then it is
important to verify if the version of HAProxy you are invoking is the expected
version and if it supports the features you are expecting (eg: SSL, PCRE,
compression, Lua, etc). This can be verified using "haproxy -vv". Some
important information such as certain build options, the target system and
the versions of the libraries being used are reported there. It is also what
you will systematically be asked for when posting a bug report :
$ haproxy -vv
HAProxy version 1.6-dev7-a088d3-4 2015/10/08
Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -pg -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement \
-DBUFSIZE=8030 -DMAXREWRITE=1030 -DSO_MARK=36 -DTCP_REPAIR=19
OPTIONS = USE_ZLIB=1 USE_DLMALLOC=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 8030, maxrewrite = 1030, maxpollevents = 200
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.6
Compression algorithms supported : identity("identity"), deflate("deflate"), \
raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
Running on OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.12 2011-01-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
The relevant pieces of information that many non-developer users can verify
here are :
- the version : 1.6-dev7-a088d3-4 above means the code is currently at commit
ID "a088d3" which is the 4th one after after official version "1.6-dev7".
Version 1.6-dev7 would show as "1.6-dev7-8c1ad7". What matters here is in
fact "1.6-dev7". This is the 7th development version of what will become
version 1.6 in the future. A development version is not suitable for use in
production (unless you know exactly what you are doing). A stable version
will show as a 3-numbers version, such as "1.5.14-16f863", indicating the
14th level of fix on top of version 1.5. This is a production-ready version.
- the release date : 2015/10/08. It is represented in the universal
year/month/day format. Here this means October 8th, 2015. Given that stable
releases are issued every few months (1-2 months at the beginning, sometimes
6 months once the product becomes very stable), if you're seeing an old date
here, it means you're probably affected by a number of bugs or security
issues that have since been fixed and that it might be worth checking on the
official site.
- build options : they are relevant to people who build their packages
themselves, they can explain why things are not behaving as expected. For
example the development version above was built for Linux 2.6.28 or later,
targeting a generic CPU (no CPU-specific optimizations), and lacks any
code optimization (-O0), so it will perform poorly.
- libraries versions : zlib version is reported as found in the library
itself. In general zlib is considered a very stable product and upgrades
are almost never needed. OpenSSL reports two versions, the version used at
build time and the one being used, as found on the system. These ones may
differ by the last letter but never by the numbers. The build date is also
reported because most OpenSSL bugs are security issues and need to be taken
seriously, so this library absolutely needs to be kept up to date. Seeing a
4-month old version here is highly suspicious and indeed an update was
missed. PCRE provides very fast regular expressions and is highly
recommended. Some of its extensions such as JIT are not present in all
versions and are still young, so some people prefer not to build with them,
which is why the build status is reported as well. Regarding the Lua
scripting language, HAProxy expects version 5.3, which is quite recent since
it was released only a short time before HAProxy 1.6. It is important to check
on the Lua web site if some fixes are proposed for this branch.
- Available polling systems will affect the process's scalability when
dealing with more than about one thousand concurrent connections. These
ones are only available when the correct system was indicated in the TARGET
variable during the build. The "epoll" mechanism is highly recommended on
Linux, and the kqueue mechanism is highly recommended on BSD. Lacking them
will result in poll() or even select() being used, causing a high CPU usage
when dealing with a lot of connections.
4. Stopping and restarting HAProxy
----------------------------------
HAProxy supports a graceful and a hard stop. The hard stop is simple, when the
SIGTERM signal is sent to the haproxy process, it immediately quits and all
established connections are closed. The graceful stop is triggered when the
SIGUSR1 signal is sent to the haproxy process. It consists in only unbinding
from the listening ports while continuing to process existing connections
until they close. Once the last connection is closed, the process exits.
The hard stop method is used for the "stop" or "restart" actions of the service
management script. The graceful stop is used for the "reload" action which
tries to seamlessly reload a new configuration in a new process.
Both of these signals may be sent by the new haproxy process itself during a
reload or restart, so that they are sent at the latest possible moment and only
if absolutely required. This is what is performed by the "-st" (hard) and "-sf"
(graceful) options respectively.
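For example, assuming the PID file location used in the examples above, the two
stop modes may also be triggered by hand like this (the path is only
illustrative) :
   # graceful stop: unbind from listening ports, finish existing connections
   kill -USR1 $(cat /var/run/haproxy.pid)
   # hard stop: terminate immediately, closing established connections
   kill -TERM $(cat /var/run/haproxy.pid)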
In master-worker mode, it is not needed to start a new haproxy process in
order to reload the configuration. The master process reacts to the SIGUSR2
signal by reexecuting itself with the -sf parameter followed by the PIDs of
the workers. The master will then parse the configuration file and fork new
workers.
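As an illustration, assuming the master's PID is stored in the usual pid file,
a reload in master-worker mode may simply consist in :
   kill -USR2 $(cat /var/run/haproxy.pid)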
To understand better how these signals are used, it is important to understand
the whole restart mechanism.
First, an existing haproxy process is running. The administrator uses a
system-specific command such as "/etc/init.d/haproxy reload" to indicate they
want to
take the new configuration file into effect. What happens then is the following.
First, the service script (/etc/init.d/haproxy or equivalent) will verify that
the configuration file parses correctly using "haproxy -c". After that it will
try to start haproxy with this configuration file, using "-st" or "-sf".
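A minimal equivalent of this sequence, with illustrative file locations, could
look like this :
   haproxy -c -f /etc/haproxy/haproxy.cfg && \
   haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid \
           -sf $(cat /var/run/haproxy.pid)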
Then HAProxy tries to bind to all listening ports. If some fatal errors happen
(eg: address not present on the system, permission denied), the process quits
with an error. If a socket binding fails because a port is already in use, then
the process will first send a SIGTTOU signal to all the pids specified in the
"-st" or "-sf" pid list. This is what is called the "pause" signal. It instructs
all existing haproxy processes to temporarily stop listening to their ports so
that the new process can try to bind again. During this time, the old process
continues to process existing connections. If the binding still fails (because
for example a port is shared with another daemon), then the new process sends a
SIGTTIN signal to the old processes to instruct them to resume operations just
as if nothing happened. The old processes will then restart listening to the
ports and continue to accept connections. Note that this mechanism is system
dependent and some operating systems may not support it in multi-process mode.
If the new process manages to bind correctly to all ports, then it sends either
the SIGTERM (hard stop in case of "-st") or the SIGUSR1 (graceful stop in case
of "-sf") to all processes to notify them that it is now in charge of operations
and that the old processes will have to leave, either immediately or once they
have finished their job.
It is important to note that during this timeframe, there are two small windows
of a few milliseconds each where it is possible that a few connection failures
will be noticed during high loads. Typically observed failure rates are around
1 failure during a reload operation every 10000 new connections per second,
which means that a heavily loaded site running at 30000 new connections per
second may see about 3 failed connections upon every reload. The two situations
where this happens are :
- if the new process fails to bind due to the presence of the old process,
it will first have to go through the SIGTTOU+SIGTTIN sequence, which
typically lasts about one millisecond for a few tens of frontends, and
during which some ports will no longer be bound by the old process and not
yet bound by the new one. HAProxy works around this on systems that support
the SO_REUSEPORT socket option, as it allows the new process to bind without
first asking the old one to unbind. Most BSD systems have supported this
almost forever. Linux supported it in version 2.0 and dropped it around 2.2,
though some patches were floating around by then. It was reintroduced in
kernel 3.9, so if you are observing a connection failure rate above the one
mentioned above, please ensure that your kernel is 3.9 or newer, or that
relevant patches were backported to your kernel (less likely).
- when the old processes close the listening ports, the kernel may not always
redistribute any pending connection that was remaining in the socket's
backlog. Under high loads, a SYN packet may arrive just before the socket
is closed, and will lead to an RST packet being sent to the client. In some
critical environments where even one drop is not acceptable, these drops are
sometimes dealt with using firewall rules to block SYN packets during the
reload, forcing the client to retransmit (a sketch of such a rule is shown
after this list). This is totally system-dependent, as some systems might be
able to visit other listening queues and avoid this RST. A second case
concerns the ACK from the client on a local socket that was in SYN_RECV
state just before the close. This ACK will lead to an RST packet while the
haproxy process is still not aware of it. This one is harder to get rid of,
though the firewall filtering rules mentioned above will work well if
applied one second or so before restarting the process.
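Below is a minimal sketch of the firewall-based workaround mentioned in the
second case above, assuming iptables and a frontend listening on port 80 (the
port and timing are only illustrative) :
   # briefly drop incoming SYNs so clients retransmit instead of getting an RST
   iptables -I INPUT -p tcp --dport 80 --syn -j DROP
   sleep 1
   /etc/init.d/haproxy reload
   iptables -D INPUT -p tcp --dport 80 --syn -j DROP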
For the vast majority of users, such drops will never ever happen since they
don't have enough load to trigger the race conditions. And for most high traffic
users, the failure rate is still well within the noise margin provided that at
least SO_REUSEPORT is properly supported on their systems.
5. File-descriptor limitations
------------------------------
In order to ensure that all incoming connections will successfully be served,
HAProxy computes at load time the total number of file descriptors that will be
needed during the process's life. A regular Unix process is generally granted
1024 file descriptors by default, and a privileged process can raise this limit
itself. This is one reason for starting HAProxy as root and letting it adjust
the limit. The default limit of 1024 file descriptors roughly allows about 500
concurrent connections to be processed. The computation is based on the global
maxconn parameter which limits the total number of connections per process, the
number of listeners, the number of servers which have a health check enabled,
the agent checks, the peers, the loggers and possibly a few other technical
requirements. A rough estimate of this number consists in doubling the
maxconn value and adding a few tens to get the approximate number
of file descriptors needed.
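As a rough illustration of this estimate, a configuration with a hypothetical
global maxconn of 20000 would need slightly more than 40000 file descriptors :
   global
       maxconn 20000
       # rough estimate : 2 * 20000 + a few tens ~= 40050 file descriptors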
Originally HAProxy did not know how to compute this value, and it was necessary
to pass the value using the "ulimit-n" setting in the global section. This
explains why even today a lot of configurations are seen with this setting
present. Unfortunately it was often miscalculated resulting in connection
failures when approaching maxconn instead of throttling incoming connections
while waiting for the needed resources. For this reason it is important to
remove any vestigial "ulimit-n" setting that can remain from very old versions.
Raising the number of file descriptors to accept even moderate loads is
mandatory but comes with some OS-specific adjustments. First, the select()
polling system is limited to 1024 file descriptors. In fact on Linux it used
to be capable of handling more, but since certain operating systems ship with excessively
restrictive SELinux policies forbidding the use of select() with more than
1024 file descriptors, HAProxy now refuses to start in this case in order to
avoid any issue at run time. On all supported operating systems, poll() is
available and will not suffer from this limitation. It is automatically picked
so there is nothing to do to get a working configuration. But poll becomes
very slow when the number of file descriptors increases. While HAProxy does its
best to limit this performance impact (eg: via the use of the internal file
descriptor cache and batched processing), a good rule of thumb is that using
poll() with more than a thousand concurrent connections will use a lot of CPU.
For Linux systems based on kernels 2.6 and above, the epoll() system call will
be used. It's a much more scalable mechanism relying on callbacks in the kernel
that guarantee a constant wake up time regardless of the number of registered
monitored file descriptors. It is automatically used where detected, provided
that HAProxy had been built for one of the Linux flavors. Its presence and
support can be verified using "haproxy -vv".
For BSD systems which support it, kqueue() is available as an alternative. It
is much faster than poll() and even slightly faster than epoll() thanks to its
batched handling of changes. At least FreeBSD and OpenBSD support it. Just like
with Linux's epoll(), its support and availability are reported in the output
of "haproxy -vv".
Having a good poller is one thing, but it is mandatory that the process can
reach the limits. When HAProxy starts, it immediately sets the new process's
file descriptor limits and verifies if it succeeds. In case of failure, it
reports it before forking so that the administrator can see the problem. As
long as the process is started as root, there should be no reason for this
setting to fail. However, it can fail if the process is started by an
unprivileged user. If there is a compelling reason for *not* starting haproxy
as root (eg: started by end users, or by a per-application account), then the
file descriptor limit can be raised by the system administrator for this
specific user. The effectiveness of the setting can be verified by issuing
"ulimit -n" from the user's command line. It should reflect the new limit.
Warning: when an unprivileged user's limits are changed in this user's account,
these values are often only applied when the user logs in, and not in scripts
run at system boot time nor in crontabs. This is totally dependent on the
operating system, so remember to check "ulimit -n" before starting haproxy
when running this way. The general advice is never to
start haproxy as an unprivileged user for production purposes. Another good
reason is that it prevents haproxy from enabling some security protections.
Once it is certain that the system will allow the haproxy process to use the
requested number of file descriptors, two new system-specific limits may be
encountered. The first one is the system-wide file descriptor limit, which is
the total number of file descriptors opened on the system, covering all
processes. When this limit is reached, accept() or socket() will typically
return ENFILE. The second one is the per-process hard limit on the number of
file descriptors; it prevents setrlimit() from raising the limit any higher.
Both are very
dependent on the operating system. On Linux, the system limit is set at boot
based on the amount of memory. It can be changed with the "fs.file-max" sysctl.
And the per-process hard limit is set to 1048576 by default, but it can be
changed using the "fs.nr_open" sysctl.
File descriptor limitations may be observed on a running process when they are
set too low. The strace utility will report that accept() and socket() return
"-1 EMFILE" when the process's limits have been reached. In this case, simply
raising the "ulimit-n" value (or removing it) will solve the problem. If these
system calls return "-1 ENFILE" then it means that the kernel's limits have
been reached and that something must be done on a system-wide parameter. Such
troubles must absolutely be addressed, as they result in high CPU usage (when
accept() fails) and failed connections that are generally visible to the user.
One solution also consists in lowering the global maxconn value to enforce
serialization, and possibly to disable HTTP keep-alive to force connections
to be released and reused faster.
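As an example, such failing system calls can be observed live on a running
process like this (the PID lookup is only illustrative) :
   strace -tt -e trace=network -p $(pidof haproxy)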
6. Memory management
--------------------
HAProxy uses a simple and fast pool-based memory management. Since it relies on
a small number of different object types, it's much more efficient to pick new
objects from a pool which already contains objects of the appropriate size than
to call malloc() for each different size. The pools are organized as a stack or
LIFO, so that newly allocated objects are taken from recently released objects
still hot in the CPU caches. Pools of similar sizes are merged together, in
order to limit memory fragmentation.
By default, since the focus is set on performance, each released object is put
back into the pool it came from, and allocated objects are never freed since
they are expected to be reused very soon.
On the CLI, it is possible to check how memory is being used in pools thanks to
the "show pools" command :
> show pools
Dumping pools usage. Use SIGQUIT to flush them.
- Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccc40=03 [SHARED]
- Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users, @0x9ccac0=00 [SHARED]
- Pool comp_state (48 bytes) : 3 allocated (144 bytes), 3 used, 0 failures, 5 users, @0x9cccc0=04 [SHARED]
- Pool filter (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 3 users, @0x9ccbc0=02 [SHARED]
- Pool vars (80 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccb40=01 [SHARED]
- Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9cd240=15 [SHARED]
- Pool task (144 bytes) : 55 allocated (7920 bytes), 55 used, 0 failures, 1 users, @0x9cd040=11 [SHARED]
- Pool session (160 bytes) : 1 allocated (160 bytes), 1 used, 0 failures, 1 users, @0x9cd140=13 [SHARED]
- Pool h2s (208 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccec0=08 [SHARED]
- Pool h2c (288 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cce40=07 [SHARED]
- Pool spoe_ctx (304 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccf40=09 [SHARED]
- Pool connection (400 bytes) : 2 allocated (800 bytes), 2 used, 0 failures, 1 users, @0x9cd1c0=14 [SHARED]
- Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd340=17 [SHARED]
- Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccdc0=06 [SHARED]
- Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccd40=05 [SHARED]
- Pool stream (960 bytes) : 1 allocated (960 bytes), 1 used, 0 failures, 1 users, @0x9cd0c0=12 [SHARED]
- Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd2c0=16 [SHARED]
- Pool buffer (8030 bytes) : 3 allocated (24090 bytes), 2 used, 0 failures, 1 users, @0x9cd3c0=18 [SHARED]
- Pool trash (8062 bytes) : 1 allocated (8062 bytes), 1 used, 0 failures, 1 users, @0x9cd440=19
Total: 19 pools, 42296 bytes allocated, 34266 used.
The pool name is only indicative, it's the name of the first object type using
this pool. The size in parentheses is the object size for objects in this pool.
Object sizes are always rounded up to the closest multiple of 16 bytes. The
number of objects currently allocated and the equivalent number of bytes is
reported so that it is easy to know which pool is responsible for the highest
memory usage. The number of objects currently in use is reported as well in the
"used" field. The difference between "allocated" and "used" corresponds to the
objects that have been freed and are available for immediate use. The address
at the end of the line is the pool's address, and the following number is the
pool index when it exists, or is reported as -1 if no index was assigned.
It is possible to limit the amount of memory allocated per process using the
"-m" command line option, followed by a number of megabytes. It covers all of
the process's addressable space, so that includes memory used by some libraries
as well as the stack, but it is a reliable limit when building a resource
constrained system. It works the same way as "ulimit -v" on systems which have
it, or "ulimit -d" for the other ones.
If a memory allocation fails due to the memory limit being reached or because
the system doesn't have enough memory, then haproxy will first start to
free all available objects from all pools before attempting to allocate memory
again. This mechanism of releasing unused memory can be triggered by sending
the signal SIGQUIT to the haproxy process.
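For example, assuming the socket and pid file locations used elsewhere in this
document, the flush can be requested and its effect checked like this :
   kill -QUIT $(cat /var/run/haproxy.pid)
   echo "show pools" | socat /var/run/haproxy.sock stdio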
During a reload operation, the process that has switched to the graceful stop
state also automatically performs some flushes after releasing each connection,
so that as much memory as possible is released for the new process.
7. CPU usage
------------
HAProxy normally spends most of its time in the system and a smaller part in
userland. A finely tuned 3.5 GHz CPU can sustain a rate of about 80000 end-to-end
connection setups and closes per second at 100% CPU on a single core. When one
core is saturated, typical figures are :
- 95% system, 5% user for long TCP connections or large HTTP objects
- 85% system and 15% user for short TCP connections or small HTTP objects in
close mode
- 70% system and 30% user for small HTTP objects in keep-alive mode
The amount of rules processing and regular expressions will increase the user
land part. The presence of firewall rules, connection tracking, complex routing
tables in the system will instead increase the system part.
On most systems, the CPU time observed during network transfers can be split into 4
parts :
- the interrupt part, which concerns all the processing performed upon I/O
receipt, before the target process is even known. Typically Rx packets are
accounted for in interrupt. On some systems such as Linux where interrupt
processing may be deferred to a dedicated thread, it can appear as softirq,
and the thread is called ksoftirqd/0 (for CPU 0). The CPU taking care of
this load is generally defined by the hardware settings, though in the case
of softirq it is often possible to remap the processing to another CPU.
This interrupt part will often be perceived as parasitic since it's not
associated with any process, but it actually is some processing being done
to prepare the work for the process.
- the system part, which concerns all the processing done using kernel code
called from userland. System calls are accounted as system for example. All
synchronously delivered Tx packets will be accounted for as system time. If
some packets have to be deferred due to queues filling up, they may then be
processed in interrupt context later (eg: upon receipt of an ACK opening a
TCP window).
- the user part, which exclusively runs application code in userland. HAProxy
runs exclusively in this part, though it makes heavy use of system calls.
Rules processing, regular expressions, compression, encryption all add to
the user portion of CPU consumption.
- the idle part, which is what the CPU does when there is nothing to do. For
example HAProxy waits for an incoming connection, or waits for some data to
leave, meaning the system is waiting for an ACK from the client to push
these data.
In practice regarding HAProxy's activity, it is in general reasonably accurate
(though not strictly exact) to consider that interrupt/softirq are caused by Rx
processing in kernel drivers, that user-land is caused by layer 7 processing
in HAProxy, and that system time is caused by network processing on the Tx
path.
Since HAProxy runs around an event loop, it waits for new events using poll()
(or any alternative) and processes all these events as fast as possible before
going back to poll() waiting for new events. It measures the time spent waiting
in poll() compared to the time spent processing events. The ratio of
polling time vs total time is called the "idle" time, it's the amount of time
spent waiting for something to happen. This ratio is reported in the stats page
on the "idle" line, or "Idle_pct" on the CLI. When it's close to 100%, it means
the load is extremely low. When it's close to 0%, it means that there is
constantly some activity. While it cannot be very accurate on an overloaded
system due to other processes possibly preempting the CPU from the haproxy
process, it still provides a good estimate about how HAProxy considers it is
working : if the load is low and the idle ratio is low as well, it may indicate
that HAProxy has a lot of work to do, possibly due to very expensive rules that
have to be processed. Conversely, if HAProxy indicates the idle is close to
100% while things are slow, it means that it cannot do anything to speed things
up because it is already waiting for incoming data to process. In the example
below, haproxy is completely idle :
$ echo "show info" | socat - /var/run/haproxy.sock | grep ^Idle
Idle_pct: 100
When the idle ratio starts to become very low, it is important to tune the
system and place processes and interrupts correctly to save the most possible
CPU resources for all tasks. If a firewall is present, it may be worth trying
to disable it or to tune it to ensure it is not responsible for a large part
of the performance limitation. It's worth noting that unloading a stateful
firewall generally reduces both the amount of interrupt/softirq and of system
usage since such firewalls act both on the Rx and the Tx paths. On Linux,
unloading the nf_conntrack and ip_conntrack modules will show whether there is
anything to gain. If so, then the module runs with default settings and you'll
have to figure out how to tune it for better performance. In general this consists
in considerably increasing the hash table size. On FreeBSD, "pfctl -d" will
disable the "pf" firewall and its stateful engine at the same time.
If it is observed that a lot of time is spent in interrupt/softirq, it is
important to ensure that they don't run on the same CPU. Most systems tend to
pin the tasks on the CPU where they receive the network traffic because for
certain workloads it improves things. But with heavily network-bound workloads
it is the opposite as the haproxy process will have to fight against its kernel
counterpart. Pinning haproxy to one CPU core and the interrupts to another one,
all sharing the same L3 cache, tends to noticeably increase network performance
because in practice the amount of work for haproxy and the network stack are
quite close, so they can almost fill an entire CPU each. On Linux this is done
using taskset (for haproxy) or using cpu-map (from the haproxy config), and the
interrupts are assigned under /proc/irq. Many network interfaces support
multiple queues and multiple interrupts. In general it helps to spread them
across a small number of CPU cores provided they all share the same L3 cache.
Please always stop irqbalance, which always does the worst possible thing on
such workloads.
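A minimal sketch of such a placement, with purely illustrative core and IRQ
numbers, could be :
   # pin the haproxy process to CPU core 1 (or use "cpu-map 1 1" in the global
   # section of the configuration)
   taskset -pc 1 $(pidof haproxy)
   # pin the network interface's interrupt to core 2 (hexadecimal CPU mask)
   echo 4 > /proc/irq/123/smp_affinity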
For CPU-bound workloads consisting in a lot of SSL traffic or a lot of
compression, it may be worth using multiple processes dedicated to certain
tasks, though there is no universal rule here and experimentation will have to
be performed.
In order to increase the CPU capacity, it is possible to make HAProxy run as
several processes, using the "nbproc" directive in the global section. There
are some limitations though :
- health checks are run per process, so the target servers will get as many
checks as there are running processes ;
- maxconn values and queues are per-process so the correct value must be set
to avoid overloading the servers ;
- outgoing connections should avoid using port ranges to avoid conflicts
- stick-tables are per process and are not shared between processes ;
- each peers section may only run on a single process at a time ;
- the CLI operations will only act on a single process at a time.
With this in mind, it appears that the easiest setup often consists in having
one first layer running on multiple processes and in charge of the heavy
processing, passing the traffic to a second layer running in a single process.
This mechanism is suited to SSL and compression which are the two CPU-heavy
features. Instances can easily be chained over UNIX sockets (which are cheaper
than TCP sockets and which do not waste ports), and the proxy protocol which is
useful to pass client information to the next stage. When doing so, it is
generally a good idea to bind all the single-process tasks to process number 1
and the extra tasks to the next processes, as this will make it easier to generate
similar configurations for different machines.
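A hedged sketch of such a two-layer setup is shown below; all names, paths and
addresses are only illustrative. The SSL layer runs on the extra processes and
passes decrypted traffic with the PROXY protocol over a UNIX socket to the
HTTP layer bound to process 1 :
   global
       nbproc 3
   listen ssl-offload
       mode tcp
       bind :443 ssl crt /etc/haproxy/site.pem process 2
       bind :443 ssl crt /etc/haproxy/site.pem process 3
       server http-layer unix@/var/run/haproxy-http.sock send-proxy
   frontend http-in
       mode http
       bind unix@/var/run/haproxy-http.sock accept-proxy process 1
       default_backend app
   backend app
       mode http
       server s1 192.168.0.10:8080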
On Linux versions 3.9 and above, running HAProxy in multi-process mode is much
more efficient when each process uses a distinct listening socket on the same
IP:port ; this will make the kernel evenly distribute the load across all
processes instead of waking them all up. Please check the "process" option of
the "bind" keyword lines in the configuration manual for more information.
8. Logging
----------
For logging, HAProxy always relies on a syslog server since it does not perform
any file-system access. The standard way of using it is to send logs over UDP
to the log server (by default on port 514). Very commonly this is configured to
127.0.0.1 where the local syslog daemon is running, but it's also used over the
network to log to a central server. The central server provides additional
benefits especially in active-active scenarios where it is desirable to keep
the logs merged in arrival order. HAProxy may also make use of a UNIX socket to
send its logs to the local syslog daemon, but it is not recommended at all,
because if the syslog server is restarted while haproxy runs, the socket will
be replaced and new logs will be lost. Since HAProxy will be isolated inside a
chroot jail, it will not have the ability to reconnect to the new socket. It
has also been observed in the field that the log buffers in use on UNIX sockets
are very small and lead to lost messages even at very light loads. This can be
fine for testing, however.
It is recommended to add the following directive to the "global" section to
make HAProxy log to the local daemon using facility "local0" :
log 127.0.0.1:514 local0
and then to add the following one to each "defaults" section or to each frontend
and backend section :
log global
This way, all logs will be centralized through the global definition of where
the log server is.
Some syslog daemons do not listen to UDP traffic by default, so depending on
the daemon being used, the syntax to enable this will vary :
- on sysklogd, you need to pass argument "-r" on the daemon's command line
so that it listens to a UDP socket for "remote" logs ; note that there is
no way to limit it to address 127.0.0.1 so it will also receive logs from
remote systems ;
- on rsyslogd, the following lines must be added to the configuration file :
$ModLoad imudp
$UDPServerAddress *
$UDPServerRun 514
- on syslog-ng, a new source can be created the following way, it then needs
to be added as a valid source in one of the "log" directives :
source s_udp {
udp(ip(127.0.0.1) port(514));
};
Please consult your syslog daemon's manual for more information. If no logs are
seen in the system's log files, please consider the following tests :
- restart haproxy. Each frontend and backend logs one line indicating it's
starting. If these logs are received, it means logs are working.
- run "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>" and perform some
activity that you expect to be logged. You should see the log messages
being sent using sendmsg() there. If they don't appear, restart using
strace on top of haproxy. If you still see no logs, it definitely means
that something is wrong in your configuration.
- run tcpdump to watch for port 514, for example on the loopback interface if
the traffic is being sent locally : "tcpdump -As0 -ni lo port 514". If the
packets are seen there, it proves they are being sent, and it's the syslog
daemon that needs to be investigated.
While traffic logs are sent from the frontends (where the incoming connections
are accepted), backends also need to be able to send logs in order to report a
server state change consecutive to a health check. Please consult HAProxy's
configuration manual for more information regarding all possible log settings.
It is convenient to choose a facility that is not used by other daemons. HAProxy
examples often suggest "local0" for traffic logs and "local1" for admin logs
because they're rarely used by anything else in the field. A single facility
would be enough as well.
Having separate logs is convenient for log analysis, but it's also important to
remember that logs may sometimes convey confidential information, and as such
they must not be mixed with other logs that may accidentally be handed out to
unauthorized people.
For in-field troubleshooting without impacting the server's capacity too much,
it is recommended to make use of the "halog" utility provided with HAProxy.
This is sort of a grep-like utility designed to process HAProxy log files at
a very fast data rate. Typical figures range between 1 and 2 GB of logs per
second. It is capable of extracting only certain logs (eg: search for some
classes of HTTP status codes, connection termination status, search by response
time ranges, look for errors only), count lines, limit the output to a number
of lines, and perform some more advanced statistics such as sorting servers
by response time or error counts, sorting URLs by time or count, sorting client
addresses by access count, and so on. It is pretty convenient to quickly spot
anomalies such as a bot looping on the site, and block them.
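A few examples of halog invocations are shown below; the exact options may
differ between versions, and "halog -h" lists the ones supported locally :
   # per-server statistics extracted from a log file
   halog -srv < /var/log/haproxy.log
   # per-URL statistics
   halog -u < /var/log/haproxy.log
   # only count the lines corresponding to errors
   halog -e -c < /var/log/haproxy.log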
9. Statistics and monitoring
----------------------------
It is possible to query HAProxy about its status. The most commonly used
mechanism is the HTTP statistics page. This page also exposes an alternative
CSV output format for monitoring tools. The same format is provided on the
Unix socket.
Statistics are grouped into categories labelled as domains, corresponding to the
multiple components of HAProxy. There are two domains available: proxy and
resolvers. If not specified, the proxy domain is selected. Note that only the
proxy statistics are printed on the HTTP page.
9.1. CSV format
---------------
The statistics may be consulted either from the unix socket or from the HTTP
page. Both means provide a CSV format whose fields follow. The first line
begins with a sharp ('#') and has one word per comma-delimited field which
represents the title of the column. All other lines starting at the second one
use a classical CSV format using a comma as the delimiter, and the double quote
('"') as an optional text delimiter, but only if the enclosed text is ambiguous
(if it contains a quote or a comma). The double-quote character ('"') in the
text is doubled ('""'), which is the format that most tools recognize. Please
do not insert any column before these ones in order not to break tools which
use hard-coded column positions.
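As an illustration, the CSV statistics may be retrieved either from the stats
socket described in section 9.3 or by appending ";csv" to the URI of the HTTP
statistics page (the socket path and URL below are only examples) :
   echo "show stat" | socat /var/run/haproxy.sock stdio
   curl "http://127.0.0.1:8404/stats;csv"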
For proxy statistics, after each field name, the types which may have a value
for that field are specified in brackets. The types are L (Listeners), F
(Frontends), B (Backends), and S (Servers). There is a fixed set of static
fields that are always available in the same order. A column containing the
character '-' delimits the end of the static fields, after which the presence
or order of the fields is not guaranteed.
Here is the list of static fields using the proxy statistics domain:
0. pxname [LFBS]: proxy name
1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend,
any name for server/listener)
2. qcur [..BS]: current queued requests. For the backend this reports the
number queued without a server assigned.
3. qmax [..BS]: max value of qcur
4. scur [LFBS]: current sessions
5. smax [LFBS]: max sessions
6. slim [LFBS]: configured session limit
7. stot [LFBS]: cumulative number of sessions
8. bin [LFBS]: bytes in
9. bout [LFBS]: bytes out
10. dreq [LFB.]: requests denied because of security concerns.
- For tcp this is because of a matched tcp-request content rule.
- For http this is because of a matched http-request or tarpit rule.
11. dresp [LFBS]: responses denied because of security concerns.
- For http this is because of a matched http-request rule, or
"option checkcache".
12. ereq [LF..]: request errors. Some of the possible causes are:
- early termination from the client, before the request has been sent.
- read error from the client
- client timeout
- client closed connection
- various bad requests from the client.
- request was tarpitted.
13. econ [..BS]: number of requests that encountered an error trying to
connect to a backend server. The backend stat is the sum of the stat
for all servers of that backend, plus any connection errors not
associated with a particular server (such as the backend having no
active servers).
14. eresp [..BS]: response errors. srv_abrt will be counted here also.
Some other errors are:
- write error on the client socket (won't be counted for the server stat)
- failure applying filters to the response.
15. wretr [..BS]: number of times a connection to a server was retried.
16. wredis [..BS]: number of times a request was redispatched to another
server. The server value counts the number of times that server was
switched away from.
17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)/MAINT(resolution)...)
18. weight [..BS]: total effective weight (backend), effective weight (server)
19. act [..BS]: number of active servers (backend), server is active (server)
20. bck [..BS]: number of backup servers (backend), server is backup (server)
21. chkfail [...S]: number of failed checks. (Only counts checks failed when
the server is up.)
22. chkdown [..BS]: number of UP->DOWN transitions. The backend counter counts
transitions to the whole backend being down, rather than the sum of the
counters for each server.
23. lastchg [..BS]: number of seconds since the last UP<->DOWN transition
24. downtime [..BS]: total downtime (in seconds). The value for the backend
is the downtime for the whole backend, not the sum of the server downtime.
25. qlimit [...S]: configured maxqueue for the server, or nothing if the
value is 0 (default, meaning no limit)
26. pid [LFBS]: process id (0 for first instance, 1 for second, ...)
27. iid [LFBS]: unique proxy id
28. sid [L..S]: server id (unique inside a proxy)
29. throttle [...S]: current throttle percentage for the server, when
slowstart is active, or no value if not in slowstart.
30. lbtot [..BS]: total number of times a server was selected, either for new
sessions, or when re-dispatching. The server counter is the number
of times that server was selected.
31. tracked [...S]: id of proxy/server if tracking is enabled.
32. type [LFBS]: (0=frontend, 1=backend, 2=server, 3=socket/listener)
33. rate [.FBS]: number of sessions per second over last elapsed second
34. rate_lim [.F..]: configured limit on new sessions per second
35. rate_max [.FBS]: max number of new sessions per second
36. check_status [...S]: status of last health check, one of:
UNK -> unknown
INI -> initializing
SOCKERR -> socket error
L4OK -> check passed on layer 4, no upper layers testing enabled
L4TOUT -> layer 1-4 timeout
L4CON -> layer 1-4 connection problem, for example
"Connection refused" (tcp rst) or "No route to host" (icmp)
L6OK -> check passed on layer 6
L6TOUT -> layer 6 (SSL) timeout
L6RSP -> layer 6 invalid response - protocol error
L7OK -> check passed on layer 7
L7OKC -> check conditionally passed on layer 7, for example 404 with
disable-on-404
L7TOUT -> layer 7 (HTTP/SMTP) timeout
L7RSP -> layer 7 invalid response - protocol error
L7STS -> layer 7 response error, for example HTTP 5xx
Notice: If a check is currently running, the last known status will be
reported, prefixed with "* ". e. g. "* L7OK".
37. check_code [...S]: layer5-7 code, if available
38. check_duration [...S]: time in ms taken to finish last health check
39. hrsp_1xx [.FBS]: http responses with 1xx code
40. hrsp_2xx [.FBS]: http responses with 2xx code
41. hrsp_3xx [.FBS]: http responses with 3xx code
42. hrsp_4xx [.FBS]: http responses with 4xx code
43. hrsp_5xx [.FBS]: http responses with 5xx code
44. hrsp_other [.FBS]: http responses with other codes (protocol error)
45. hanafail [...S]: failed health checks details
46. req_rate [.F..]: HTTP requests per second over last elapsed second
47. req_rate_max [.F..]: max number of HTTP requests per second observed
48. req_tot [.FB.]: total number of HTTP requests received
49. cli_abrt [..BS]: number of data transfers aborted by the client
50. srv_abrt [..BS]: number of data transfers aborted by the server
(inc. in eresp)
51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor
52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor
53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor
(CPU/BW limit)
54. comp_rsp [.FB.]: number of HTTP responses that were compressed
55. lastsess [..BS]: number of seconds since last session assigned to
server/backend
56. last_chk [...S]: last health check contents or textual error
57. last_agt [...S]: last agent check contents or textual error
58. qtime [..BS]: the average queue time in ms over the 1024 last requests
59. ctime [..BS]: the average connect time in ms over the 1024 last requests
60. rtime [..BS]: the average response time in ms over the 1024 last requests
(0 for TCP)
61. ttime [..BS]: the average total session time in ms over the 1024 last
requests
62. agent_status [...S]: status of last agent check, one of:
UNK -> unknown
INI -> initializing
SOCKERR -> socket error
L4OK -> check passed on layer 4, no upper layers testing enabled
L4TOUT -> layer 1-4 timeout
L4CON -> layer 1-4 connection problem, for example
"Connection refused" (tcp rst) or "No route to host" (icmp)
L7OK -> agent reported "up"
L7STS -> agent reported "fail", "stop", or "down"
63. agent_code [...S]: numeric code reported by agent if any (unused for now)
64. agent_duration [...S]: time in ms taken to finish last check
65. check_desc [...S]: short human-readable description of check_status
66. agent_desc [...S]: short human-readable description of agent_status
67. check_rise [...S]: server's "rise" parameter used by checks
68. check_fall [...S]: server's "fall" parameter used by checks
69. check_health [...S]: server's health check value between 0 and rise+fall-1
70. agent_rise [...S]: agent's "rise" parameter, normally 1
71. agent_fall [...S]: agent's "fall" parameter, normally 1
72. agent_health [...S]: agent's health parameter, between 0 and rise+fall-1
73. addr [L..S]: address:port or "unix". IPv6 has brackets around the address.
74: cookie [..BS]: server's cookie value or backend's cookie name
75: mode [LFBS]: proxy mode (tcp, http, health, unknown)
76: algo [..B.]: load balancing algorithm
77: conn_rate [.F..]: number of connections over the last elapsed second
78: conn_rate_max [.F..]: highest known conn_rate
79: conn_tot [.F..]: cumulative number of connections
80: intercepted [.FB.]: cum. number of intercepted requests (monitor, stats)
81: dcon [LF..]: requests denied by "tcp-request connection" rules
82: dses [LF..]: requests denied by "tcp-request session" rules
83: wrew [LFBS]: cumulative number of failed header rewriting warnings
84: connect [..BS]: cumulative number of connection establishment attempts
85: reuse [..BS]: cumulative number of connection reuses
86: cache_lookups [.FB.]: cumulative number of cache lookups
87: cache_hits [.FB.]: cumulative number of cache hits
88: srv_icur [...S]: current number of idle connections available for reuse
89: src_ilim [...S]: limit on the number of available idle connections
90. qtime_max [..BS]: the maximum observed queue time in ms
91. ctime_max [..BS]: the maximum observed connect time in ms
92. rtime_max [..BS]: the maximum observed response time in ms (0 for TCP)
93. ttime_max [..BS]: the maximum observed total session time in ms
94. eint [LFBS]: cumulative number of internal errors
95. idle_conn_cur [...S]: current number of unsafe idle connections
96. safe_conn_cur [...S]: current number of safe idle connections
97. used_conn_cur [...S]: current number of connections in use
98. need_conn_est [...S]: estimated needed number of connections
99. uweight [..BS]: total user weight (backend), server user weight (server)
100. agg_server_status [..B.]: backend aggregated gauge of server's status
101. agg_server_status_check [..B.]: (deprecated)
102. agg_check_status [..B.]: backend aggregated gauge of server's state check
status
103. srid [...S]: server id revision
104. sess_other [.F..]: total number of sessions other than HTTP since process
started
105. h1_sess [.F..]: total number of HTTP/1 sessions since process started
106. h2_sess [.F..]: total number of HTTP/2 sessions since process started
107. h3_sess [.F..]: total number of HTTP/3 sessions since process started
108. req_other [.F..]: total number of sessions other than HTTP processed by
this object since the worker process started
109. h1req [.F..]: total number of HTTP/1 sessions processed by this object
since the worker process started
110. h2req [.F..]: total number of HTTP/2 sessions processed by this object
since the worker process started
111. h3req [.F..]: total number of HTTP/3 sessions processed by this object
since the worker process started
112. proto [L...]: protocol
113. priv_idle_cur [...S]: current number of private idle connections
For all other statistics domains, the presence or the order of the fields are
not guaranteed. In this case, the header line should always be used to parse
the CSV data.
9.2. Typed output format
------------------------
Both "show info" and "show stat" support a mode where each output value comes
with its type and sufficient information to know how the value is supposed to
be aggregated between processes and how it evolves.
In all cases, the output consists in having a single value per line with all
the information split into fields delimited by colons (':').
The first column designates the object or metric being dumped. Its format is
specific to the command producing this output and will not be described in this
section. Usually it will consist in a series of identifiers and field names.
The second column contains 4 characters respectively indicating the origin, the
nature, the scope and the persistence state of the value being reported. The
first character (the origin) indicates where the value was extracted from.
Possible characters are :
M The value is a metric. It is valid at one instant and may change depending
on its nature.
S The value is a status. It represents a discrete value which by definition
cannot be aggregated. It may be the status of a server ("UP" or "DOWN"),
the PID of the process, etc.
K The value is a sorting key. It represents an identifier which may be used
to group some values together because it is unique among its class. All
internal identifiers are keys. Some names can be listed as keys if they
are unique (eg: a frontend name is unique). In general keys come from the
configuration, even though some of them may automatically be assigned. For
most purposes keys may be considered as equivalent to configuration.
C The value comes from the configuration. Certain configuration values make
sense on the output, for example a concurrent connection limit or a cookie
name. By definition these values are the same in all processes started
from the same configuration file.
P The value comes from the product itself. There are very few such values,
most common use is to report the product name, version and release date.
These elements are also the same between all processes.
The second character (the nature) indicates the nature of the information
carried by the field in order to let an aggregator decide on what operation to
use to aggregate multiple values. Possible characters are :
A The value represents an age since a last event. This is a bit different
from the duration in that an age is automatically computed based on the
current date. A typical example is how long ago did the last session
happen on a server. Ages are generally aggregated by taking the minimum
value and do not need to be stored.
a The value represents an already averaged value. The average response times
and server weights are of this nature. Averages can typically be averaged
between processes.
C The value represents a cumulative counter. Such measures perpetually
increase until they wrap around. Some monitoring protocols need to tell
the difference between a counter and a gauge to report a different type.
In general counters may simply be summed since they represent events or
volumes. Examples of metrics of this nature are connection counts or byte
counts.
D The value represents a duration for a status. There are a few usages of
this, most of them include the time taken by the last health check and
the time a server has spent down. Durations are generally not summed,
most of the time the maximum will be retained to compute an SLA.
G The value represents a gauge. It's a measure at one instant. The memory
usage or the current number of active connections are of this nature.
Metrics of this type are typically summed during aggregation.
L The value represents a limit (generally a configured one). By nature,
limits are harder to aggregate since they are specific to the point where
they were retrieved. In certain situations they may be summed or be kept
separate.
M The value represents a maximum. In general it will apply to a gauge and
keep the highest known value. An example of such a metric could be the
maximum amount of concurrent connections that was encountered in the
product's life time. To correctly aggregate maxima, you are supposed to
output a range going from the maximum of all maxima and the sum of all
of them. There is indeed no way to know if they were encountered
simultaneously or not.
m The value represents a minimum. In general it will apply to a gauge and
keep the lowest known value. An example of such a metric could be the
minimum amount of free memory pools that was encountered in the product's
life time. To correctly aggregate minima, you are supposed to output a
range going from the minimum of all minima and the sum of all of them.
There is indeed no way to know if they were encountered simultaneously
or not.
N The value represents a name, so it is a string. It is used to report
proxy names, server names and cookie names. Names have configuration or
keys as their origin and are supposed to be the same among all processes.
O The value represents a free text output. Outputs from various commands,
returns from health checks, node descriptions are of such nature.
R The value represents an event rate. It's a measure at one instant. It is
quite similar to a gauge except that the recipient knows that this measure
moves slowly and may decide not to keep all values. An example of such a
metric is the measured amount of connections per second. Metrics of this
type are typically summed during aggregation.
T The value represents a date or time. A field emitting the current date
would be of this type. The method to aggregate such information is left
as an implementation choice. For now no field uses this type.
The third character (the scope) indicates what extent the value reflects. Some
elements may be per process while others may be per configuration or per system.
The distinction is important to know whether or not a single value should be
kept during aggregation or if values have to be aggregated. The following
characters are currently supported :
C The value is valid for a whole cluster of nodes, which is the set of nodes
communicating over the peers protocol. An example could be the amount of
entries present in a stick table that is replicated with other peers. At
the moment no metric uses this scope.
P The value is valid only for the process reporting it. Most metrics use
this scope.
S The value is valid for the whole service, which is the set of processes
started together from the same configuration file. All metrics originating
from the configuration use this scope. Some other metrics may use it as
well for some shared resources (eg: shared SSL cache statistics).
s The value is valid for the whole system, such as the system's hostname,
current date or resource usage. At the moment this scope is not used by
any metric.
The fourth character (the persistence state) indicates whether the value (the
metric) is volatile or persistent across reloads. The following characters are
expected :
V The metric is volatile because it is local to the current process so
the value will be lost when reloading.
P The metric is persistent because it may be shared with other co-processes
so that the value is preserved across reloads.
Consumers of this information will generally find these 4 characters sufficient
to determine how to accurately report aggregated information across multiple
processes.
After this column, the third column indicates the type of the field, among "s32"
(signed 32-bit integer), "s64" (signed 64-bit integer), "u32" (unsigned 32-bit
integer), "u64" (unsigned 64-bit integer), "str" (string). It is important to
know the type before parsing the value in order to properly read it. For example
a string containing only digits is still a string and not an integer (eg: an
error code extracted by a check).
Then the fourth column is the value itself, encoded according to its type.
Strings are dumped as-is immediately after the colon without any leading space.
If a string contains a colon, it will appear normally. This means that the
output should not be exclusively split around colons or some check outputs
or server addresses might be truncated.
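As an illustration, recent versions accept the "typed" keyword after these
commands; each output line then carries the field identifier, the 4-character
descriptor, the type and the value (the socket path is only illustrative) :
   echo "show info typed" | socat /var/run/haproxy.sock stdio
   echo "show stat typed" | socat /var/run/haproxy.sock stdio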
9.3. Unix Socket commands
-------------------------
The stats socket is not enabled by default. In order to enable it, it is
necessary to add one line in the global section of the haproxy configuration.
A second line is recommended to set a larger timeout, always appreciated when
issuing commands by hand :
global
stats socket /var/run/haproxy.sock mode 600 level admin
stats timeout 2m
It is also possible to add multiple instances of the stats socket by repeating
the line, and make them listen to a TCP port instead of a UNIX socket. This is
never done by default because this is dangerous, but can be handy in some
situations :
global
stats socket /var/run/haproxy.sock mode 600 level admin
stats socket ipv4@192.168.0.1:9999 level admin
stats timeout 2m
To access the socket, an external utility such as "socat" is required. Socat is
a swiss-army knife to connect anything to anything. We use it to connect
terminals to the socket, or a couple of stdin/stdout pipes to it for scripts.
The two main syntaxes we'll use are the following :
# socat /var/run/haproxy.sock stdio
# socat /var/run/haproxy.sock readline
The first one is used with scripts. It is possible to send the output of a
script to haproxy, and pass haproxy's output to another script. That's useful
for retrieving counters or attack traces for example.
The second one is only useful for issuing commands by hand. It has the benefit
that the terminal is handled by the readline library which supports line
editing and history, which is very convenient when issuing repeated commands
(eg: watch a counter).
The socket supports three operation modes :
- non-interactive, silent
- interactive, silent
- interactive with prompt
The non-interactive mode is the default when socat connects to the socket. In
this mode, a single line may be sent. It is processed as a whole, responses are
sent back, and the connection closes after the end of the response. This is the
mode that scripts and monitoring tools use. It is possible to send multiple
commands in this mode, they need to be delimited by a semi-colon (';'). For
example :
# echo "show info;show stat;show table" | socat /var/run/haproxy stdio
If a command needs to use a semi-colon or a backslash (eg: in a value), it
must be preceded by a backslash ('\').
The interactive mode allows new commands to be sent after the ones from the
previous lines finish. It exists in two variants, one silent, which works like
the non-interactive mode except that the socket waits for a new command instead
of closing, and one where a prompt is displayed ('>') at the beginning of the
line. The interactive mode is preferred for advanced tools while the prompt
mode is preferred for humans.
The mode can be changed using the "prompt" command. By default, it toggles the
interactive+prompt modes. Entering "prompt" in interactive mode will switch to
prompt mode. The command optionally takes a specific mode among which:
- "n" : non-interactive mode (single command and quits)
- "i" : interactive mode (multiple commands, no prompt)
- "p" : prompt mode (multiple commands with a prompt)
Since the default mode is non-interactive, "prompt" must be used as the first
command in order to switch it, otherwise the previous command will cause the
connection to be closed. Switching to non-interactive mode will result in the
connection being closed after all the commands of the same line complete.
For this reason, when debugging by hand, it's quite common to start with the
"prompt" command :
# socat /var/run/haproxy.sock readline
prompt
> show info
...
>
Interactive tools might prefer starting with "prompt i" to switch to interactive
mode without the prompt.
Optionally the process' uptime may be displayed in the prompt. In order to
enable this, the "prompt timed" command will enable the prompt and toggle the
displaying of the time. The uptime is displayed in format "d:hh:mm:ss" where
"d" is the number of days, and "hh", "mm", "ss" are respectively the number
of hours, minutes and seconds on two digits each:
# socat /var/run/haproxy readline
prompt timed
[23:03:34:39]> show version
2.8-dev9-e5e622-18
[23:03:34:41]> quit
When the timed prompt is set on the master CLI, the prompt will display the
currently selected process' uptime, so this will work for the master, current
worker or an older worker:
master> prompt timed
[0:00:00:50] master> show proc
(...)
[0:00:00:58] master> @!11955 <-- master, switch to current worker
[0:00:01:03] 11955> @!11942 <-- current worker, switch to older worker
[0:00:02:17] 11942> @ <-- older worker, switch back to master
[0:00:01:10] master>
Since multiple commands may be issued at once, haproxy uses the empty line as a
delimiter to mark an end of output for each command, and takes care of ensuring
that no command can emit an empty line on output. A script can thus easily
parse the output even when multiple commands were pipelined on a single line.
Some commands may take an optional payload. To add one to a command, the first
line needs to end with the "<<\n" pattern. The next lines will be treated as
the payload and can contain as many lines as needed. To validate a command with
a payload, it needs to end with an empty line.
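As a minimal illustration of the default pattern, using the same illustrative
map reference "#-1" and keys as the "add map" example further below:
# echo -e "add map #-1 <<\nkey1 value1\nkey2 value2\n" | socat /var/run/haproxy stdio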
The payload pattern can be customized in order to change the way the payload
ends. In order to end a payload with something else than an empty line, a
customized pattern can be set between '<<' and '\n'. Only 7 characters can be
used in addition to '<<', otherwise this won't be considered a payload.
For example, to use a PEM file that contains empty lines and comments:
# echo -e "set ssl cert common.pem <<%EOF%\n$(cat common.pem)\n%EOF%\n" | \
socat /var/run/haproxy.stat -
Limitations do exist: the length of the whole buffer passed to the CLI must
not be greater than tune.bfsize and the pattern "<<" must not be glued to the
last word of the line.
When entering a payload while in interactive mode, the prompt will change from
"> " to "+ ".
It is important to understand that when multiple haproxy processes are started
on the same sockets, any process may pick up the request and will output its
own stats.
The list of commands currently supported on the stats socket is provided below.
If an unknown command is sent, haproxy displays the usage message which reminds
all supported commands. Some commands support a more complex syntax; in that
case haproxy will generally explain what part of the command is invalid.
Some commands require a higher level of privilege to work. If you do not have
enough privilege, you will get an error "Permission denied". Please check
the "level" option of the "bind" keyword lines in the configuration manual
for more information.
abort ssl ca-file <cafile>
Abort and destroy a temporary CA file update transaction.
See also "set ssl ca-file" and "commit ssl ca-file".
abort ssl cert <filename>
Abort and destroy a temporary SSL certificate update transaction.
See also "set ssl cert" and "commit ssl cert".
abort ssl crl-file <crlfile>
Abort and destroy a temporary CRL file update transaction.
See also "set ssl crl-file" and "commit ssl crl-file".
acme renew <certificate>
Starts an ACME certificate generation task with the given certificate name.
The certificate must be linked to an acme section, see section 12.8 "ACME"
of the configuration manual. See also "acme status".
acme status
Show the status of every certificate that was configured with ACME.
This command outputs, separated by a tab:
- The name of the certificate configured in haproxy
- The acme section used in the configuration
- The state of the acme task, either "Running", "Scheduled" or "Stopped"
- The UTC expiration date of the certificate in ISO8601 format
- The relative expiration time (0d if expired)
- The UTC scheduled date of the certificate in ISO8601 format
- The relative schedule time (0d if Running)
Example:
$ echo "@1; acme status" | socat /tmp/master.sock - | column -t -s $'\t'
# certificate section state expiration date (UTC) expires in scheduled date (UTC) scheduled in
ecdsa.pem LE Running 2020-01-18T09:31:12Z 0d 0h00m00s 2020-01-15T21:31:12Z 0d 0h00m00s
foobar.pem.rsa LE Scheduled 2025-08-04T11:50:54Z 89d 23h01m13s 2025-07-27T23:50:55Z 82d 11h01m14s
add acl [@<ver>] <acl> <pattern>
Add an entry into the acl <acl>. <acl> is the #<id> or the <name> returned by
"show acl". This command does not verify if the entry already exists. Entries
are added to the current version of the ACL, unless a specific version is
specified with "@<ver>". This version number must have preliminary been
allocated by "prepare acl", and it will be comprised between the versions
reported in "curr_ver" and "next_ver" on the output of "show acl". Entries
added with a specific version number will not match until a "commit acl"
operation is performed on them. They may however be consulted using the
"show acl @<ver>" command, and cleared using a "clear acl @<ver>" command.
This command cannot be used if the reference <acl> is a name also used with
a map. In this case, the "add map" command must be used instead.
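As an illustrative example (the ACL reference and pattern are hypothetical),
a new entry may be added to the ACL listed as "#0" by "show acl" with:
$ echo "add acl #0 10.0.0.3" | socat stdio /tmp/sock1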
add map [@<ver>] <map> <key> <value>
add map [@<ver>] <map> <payload>
Add an entry into the map <map> to associate the value <value> to the key
<key>. This command does not verify if the entry already exists. It is
mainly used to fill a map after a "clear" or "prepare" operation. Entries
are added to the current version of the map, unless a specific version is
specified with "@<ver>". This version number must have previously been
allocated by "prepare map", and it will be comprised between the versions
reported in "curr_ver" and "next_ver" on the output of "show map". Entries
added with a specific version number will not match until a "commit map"
operation is performed on them. They may however be consulted using the
"show map @<ver>" command, and cleared using a "clear acl @<ver>" command.
If the designated map is also used as an ACL, the ACL will only match the
<key> part and will ignore the <value> part. Using the payload syntax it is
possible to add multiple key/value pairs by entering them on separate lines.
On each new line, the first word is the key and the rest of the line is
considered to be the value which can even contain spaces.
Example:
# socat /tmp/sock1 -
prompt
> add map #-1 <<
+ key1 value1
+ key2 value2 with spaces
+ key3 value3 also with spaces
+ key4 value4
>
add server <backend>/<server> [args]*
Instantiate a new server attached to the backend <backend>.
The <server> name must not be already used in the backend. A special
restriction is put on the backend which must use a dynamic load-balancing
algorithm. A subset of keywords from the server config file statement can be
used to configure the server behavior (see "add server help" to list them).
Also note that no settings will be reused from a hypothetical
'default-server' statement in the same backend.
Currently a dynamic server is statically initialized with the "none"
init-addr method. This means that no resolution will be undertaken if a FQDN
is specified as an address, even though the server creation will still be
validated.
To support the reload operations, it is expected that the server created via
the CLI is also manually inserted in the relevant haproxy configuration file.
A dynamic server not present in the configuration won't be restored after a
reload operation.
A dynamic server may use the "track" keyword to follow the check status of
another server from the configuration. However, it is not possible to track
another dynamic server. This is to ensure that the tracking chain is kept
consistent even in the case of dynamic servers deletion.
Use the "check" keyword to enable health-check support. Note that the
health-check is disabled by default and must be enabled independently from
the server using the "enable health" command. For agent checks, use the
"agent-check" keyword and the "enable agent" command. Note that in this case
the server may be activated via the agent depending on the status reported,
without an explicit "enable server" command. This also means that extra care
is required when removing a dynamic server with agent check. The agent should
be first deactivated via "disable agent" to be able to put the server in the
required maintenance mode before removal.
It may be possible to reach the fd limit when using a large number of dynamic
servers. Please refer to the "ulimit-n" global keyword documentation in this
case.
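A minimal hedged example, assuming a backend named "be_app" configured with a
dynamic load-balancing algorithm (all names and addresses are illustrative);
the server can then be enabled once it is ready to receive traffic:
$ echo "add server be_app/srv3 192.0.2.10:8080 check maxconn 30" | socat stdio /tmp/sock1
$ echo "enable health be_app/srv3; enable server be_app/srv3" | socat stdio /tmp/sock1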
add server help
List the keywords supported for dynamic servers by the current haproxy
version. Keyword syntax is similar to the server line from the configuration
file, please refer to their individual documentation for details.
add ssl ca-file <cafile> <payload>
Add a new certificate to a ca-file. This command is useful when you reached
the buffer size limit on the CLI and want to add multiple certificates.
Instead of doing a "set" with all the certificates you are able to add each
certificate individually. A "set ssl ca-file" will reset the ca-file.
Example:
echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \
socat /var/run/haproxy.stat -
echo -e "add ssl ca-file cafile.pem <<\n$(cat intermediate1.crt)\n" | \
socat /var/run/haproxy.stat -
echo -e "add ssl ca-file cafile.pem <<\n$(cat intermediate2.crt)\n" | \
socat /var/run/haproxy.stat -
echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat -
add ssl crt-list <crtlist> <certificate>
add ssl crt-list <crtlist> <payload>
Add a certificate to a crt-list. It can also be used for directories since
directories are now loaded the same way as the crt-lists. This command allows
you to pass a certificate name as parameter; to use SSL options or filters, a
crt-list line must be sent as a payload instead. Only one crt-list line is
supported in the payload. This command will load the certificate for every
bind line using the crt-list. To push a new certificate to HAProxy the
commands "new ssl cert" and "set ssl cert" must be used.
Example:
$ echo "new ssl cert foobar.pem" | socat /tmp/sock1 -
$ echo -e "set ssl cert foobar.pem <<\n$(cat foobar.pem)\n" | socat
/tmp/sock1 -
$ echo "commit ssl cert foobar.pem" | socat /tmp/sock1 -
$ echo "add ssl crt-list certlist1 foobar.pem" | socat /tmp/sock1 -
$ echo -e 'add ssl crt-list certlist1 <<\nfoobar.pem [allow-0rtt] foo.bar.com
!test1.com\n' | socat /tmp/sock1 -
add ssl ech <bind> <payload>
Add an ECH key to a <bind> line. The payload must be in the PEM for ECH format.
(https://datatracker.ietf.org/doc/html/draft-farrell-tls-pemesni)
The bind line format is <frontend>/@<filename>:<linenum> (Example:
frontend1/@haproxy.conf:19) or <frontend>/<name> if the bind line was named
with the "name" keyword.
Necessitates an OpenSSL version that supports ECH, and HAProxy must be
compiled with USE_ECH=1. This command is only supported on a CLI connection
running in experimental mode (see "experimental-mode on").
See also "show ssl ech" and "ech" in the Section 5.1 of the configuration
manual.
Example:
$ openssl ech -public_name foobar.com -out foobar3.com.ech
$ echo -e "experimental-mode on; add ssl ech frontend1/@haproxy.conf:19 <<%EOF%\n$(cat foobar3.com.ech)\n%EOF%\n" | \
socat /tmp/haproxy.sock -
added a new ECH config to frontend1
add ssl jwt <filename>
Add an already loaded certificate to the list of certificates that can be
used for JWT validation (see "jwt_verify_cert" converter). This command does
not work on ongoing transactions.
See also "del ssl jwt" and "show ssl jwt" commands.
See "jwt" certificate option for more information.
clear counters
Clear the max values of the statistics counters in each proxy (frontend &
backend) and in each server. The accumulated counters are not affected. The
internal activity counters reported by "show activity" are also reset. This
can be used to get clean counters after an incident, without having to
restart nor to clear traffic counters. This command is restricted and can
only be issued on sockets configured for levels "operator" or "admin".
clear counters all
Clear all statistics counters in each proxy (frontend & backend) and in each
server. This has the same effect as restarting. This command is restricted
and can only be issued on sockets configured for level "admin".
clear acl [@<ver>] <acl>
Remove all entries from the acl <acl>. <acl> is the #<id> or the <name>
returned by "show acl". Note that if the reference <acl> is a name and is
shared with a map, this map will be also cleared. By default only the current
version of the ACL is cleared (the one being matched against). However it is
possible to specify another version using '@' followed by this version.
clear map [@<ver>] <map>
Remove all entries from the map <map>. <map> is the #<id> or the <name>
returned by "show map". Note that if the reference <map> is a name and is
shared with an ACL, this ACL will also be cleared. By default only the current
version of the map is cleared (the one being matched against). However it is
possible to specify another version using '@' followed by this version.
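For example, to clear the current version of a map, or only a prepared
version (the map reference and version number are illustrative):
$ echo "clear map #0" | socat stdio /tmp/sock1
$ echo "clear map @2 #0" | socat stdio /tmp/sock1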
clear table <table> [ data.<type> <operator> <value> ] | [ key <key> ] |
[ ptr <ptr> ]
Remove entries from the stick-table <table>.
This is typically used to unblock some users complaining they have been
abusively denied access to a service, but this can also be used to clear some
stickiness entries matching a server that is going to be replaced (see "show
table" below for details). Note that sometimes, removal of an entry will be
refused because it is currently tracked by a session. Retrying a few seconds
later after the session ends is usually enough.
When no optional arguments are given, all entries will be removed.
When the "data." form is used entries matching a filter applied using the
stored data (see "stick-table" in section 4.2) are removed. A stored data
type must be specified in <type>, and this data type must be stored in the
table otherwise an error is reported. The data is compared according to
<operator> with the 64-bit integer <value>. Operators are the same as with
the ACLs :
- eq : match entries whose data is equal to this value
- ne : match entries whose data is not equal to this value
- le : match entries whose data is less than or equal to this value
- ge : match entries whose data is greater than or equal to this value
- lt : match entries whose data is less than this value
- gt : match entries whose data is greater than this value
When the key form is used the entry <key> is removed. The key must be of the
same type as the table, which currently is limited to IPv4, IPv6, integer and
string.
When the ptr form is used the entry <ptr> is removed. <ptr> is written in
the form 0xffff and must correspond to the address returned by a previous
"show table" command. Matching an entry using its pointer may be relevant if
the entry cannot be matched using the key due to empty key or incompatible
characters on the cli.
If data.<type> is an array type, "[]" may be used to access a specific
index in the array, like so: data.gpt[1]
Example :
$ echo "show table http_proxy" | socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:3
>>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \
bytes_out_rate(60000)=187
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
>>> 0x80e6b40: key=127.0.0.3 use=0 exp=3594743 gpc0=2 conn_rate(30000)=10 \
bytes_out_rate(60000)=200
$ echo "clear table http_proxy key 127.0.0.1" | socat stdio /tmp/sock1
$ echo "show table http_proxy" | socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:2
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
>>> 0x80e6b40: key=127.0.0.3 use=0 exp=3594743 gpc0=2 conn_rate(30000)=10 \
bytes_out_rate(60000)=200
$ echo "clear table http_proxy data.gpc0 eq 1" | socat stdio /tmp/sock1
$ echo "show table http_proxy" | socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:1
>>> 0x80e6b40: key=127.0.0.3 use=0 exp=3594743 gpc0=2 conn_rate(30000)=10 \
bytes_out_rate(60000)=200
$ echo "clear table http_proxy ptr 0x80e6b40" | socat stdio /tmp/sock1
$ echo "show table http_proxy" | socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:0
commit acl @<ver> <acl>
Commit all changes made to version <ver> of ACL <acl>, and deletes all past
versions. <acl> is the #<id> or the <name> returned by "show acl". The
version number must be between "curr_ver"+1 and "next_ver" as reported in
"show acl". The contents to be committed to the ACL can be consulted with
"show acl @<ver> <acl>" if desired. The specified version number has normally
been created with the "prepare acl" command. The replacement is atomic. It
consists in atomically updating the current version to the specified version,
which will instantly cause all entries in other versions to become invisible,
and all entries in the new version to become visible. It is also possible to
use this command to perform an atomic removal of all visible entries of an
ACL by calling "prepare acl" first then committing without adding any
entries. This command cannot be used if the reference <acl> is a name also
used as a map. In this case, the "commit map" command must be used instead.
commit map @<ver> <map>
Commit all changes made to version <ver> of map <map>, and deletes all past
versions. <map> is the #<id> or the <name> returned by "show map". The
version number must be between "curr_ver"+1 and "next_ver" as reported in
"show map". The contents to be committed to the map can be consulted with
"show map @<ver> <map>" if desired. The specified version number has normally
been created with the "prepare map" command. The replacement is atomic. It
consists in atomically updating the current version to the specified version,
which will instantly cause all entries in other versions to become invisible,
and all entries in the new version to become visible. It is also possible to
use this command to perform an atomic removal of all visible entries of a
map by calling "prepare map" first then committing without adding any
entries.
commit ssl ca-file <cafile>
Commit a temporary SSL CA file update transaction.
In the case of an existing CA file (in a "Used" state in "show ssl ca-file"),
the new CA file tree entry is inserted in the CA file tree and every instance
that used the CA file entry is rebuilt, along with the SSL contexts it needs.
All the contexts previously used by the rebuilt instances are removed.
Upon success, the previous CA file entry is removed from the tree.
Upon failure, nothing is removed or deleted, and all the original SSL
contexts are kept and used.
Once the temporary transaction is committed, it is destroyed.
In the case of a new CA file (after a "new ssl ca-file" and in a "Unused"
state in "show ssl ca-file"), the CA file will be inserted in the CA file
tree but it won't be used anywhere in HAProxy. To use it and generate SSL
contexts that use it, you will need to add it to a crt-list with "add ssl
crt-list".
See also "new ssl ca-file", "set ssl ca-file", "add ssl ca-file",
"abort ssl ca-file" and "add ssl crt-list".
commit ssl cert <filename>
Commit a temporary SSL certificate update transaction.
In the case of an existing certificate (in a "Used" state in "show ssl
cert"), generate every SSL contexts and SNIs it needs, insert them, and
remove the previous ones. Replace in memory the previous SSL certificates
everywhere the <filename> was used in the configuration. Upon failure it
doesn't remove or insert anything. Once the temporary transaction is
committed, it is destroyed.
In the case of a new certificate (after a "new ssl cert" and in a "Unused"
state in "show ssl cert"), the certificate will be committed in a certificate
storage, but it won't be used anywhere in haproxy. To use it and generate
its SNIs you will need to add it to a crt-list or a directory with "add ssl
crt-list".
See also "new ssl cert", "set ssl cert", "abort ssl cert" and
"add ssl crt-list".
commit ssl crl-file <crlfile>
Commit a temporary SSL CRL file update transaction.
In the case of an existing CRL file (in a "Used" state in "show ssl
crl-file"), the new CRL file entry is inserted in the CA file tree (which
holds both the CA files and the CRL files) and every instance that used the
CRL file entry is rebuilt, along with the SSL contexts it needs.
All the contexts previously used by the rebuilt instances are removed.
Upon success, the previous CRL file entry is removed from the tree.
Upon failure, nothing is removed or deleted, and all the original SSL
contexts are kept and used.
Once the temporary transaction is committed, it is destroyed.
In the case of a new CRL file (after a "new ssl crl-file" and in a "Unused"
state in "show ssl crl-file"), the CRL file will be inserted in the CRL file
tree but it won't be used anywhere in HAProxy. To use it and generate SSL
contexts that use it, you will need to add it to a crt-list with "add ssl
crt-list".
See also "new ssl crl-file", "set ssl crl-file", "abort ssl crl-file" and
"add ssl crt-list".
debug counters [reset|show|on|off|all|bug|chk|cnt|glt|?]*
List internal counters placed in the code, which may vary depending on some
build options. Some of them depend on DEBUG_STRICT, others on DEBUG_COUNTERS.
The command takes a combination of multiple arguments, some defining actions
and others defining filters:
- bug enables listing the counters for BUG_ON() statements
- cnt enables listing the counters for COUNT_IF() statements
- chk enables listing the counters for CHECK_IF() statements
- glt enables listing the counters for COUNT_GLITCH() statements
- all enables showing counters that never triggered (value 0)
- off action: disables updating of the COUNT_IF() counters
- on action: enables updating of the COUNT_IF() counters
- reset action: resets all specified counters
- show action: shows all specified counters
By default, the action is "show" to show counters, and the listed counters
are all types with a non-zero value. The "show" command is implicit when no
other action is specified, and is only present to ease the production of
commands from scripts.
The output starts with an integer counter, followed by the type of the
counter in upper case, then its location in the code (file:line), the
function name, and optionally ": " followed by a description. Please note
that the output format might change between major versions, and new types
and entries might be backported to stable versions for the purpose of
improved debugging capabilities. Any monitoring performed on them should
only be done in a very lenient and permissive way, and preferably not at all.
Normally, end users will not use this command, but they may be invited to do
so by a developer trying to figure the cause of an issue, looking for CNT or
GLT entries. By the way, non-zero "CHK" entries are not expected to happen
and should be reported to developers as they might indicate some incorrect
assumptions in the code.
debug dev <command> [args]*
Call a developer-specific command. Only supported on a CLI connection running
in expert mode (see "expert-mode on"). Such commands are extremely dangerous
and not forgiving, any misuse may result in a crash of the process. They are
intended for experts only, and must really not be used unless told to do so.
Some of them are only available when haproxy is built with DEBUG_DEV defined
because they may have security implications. All of these commands require
admin privileges, and are purposely not documented to avoid encouraging their
use by people who are not at ease with the source code.
del acl <acl> [<key>|#<ref>]
Delete all the acl entries from the acl <acl> corresponding to the key <key>.
<acl> is the #<id> or the <name> returned by "show acl". If the <ref> is used,
this command deletes only the listed reference. The reference can be found by
listing the contents of the ACL. Note that if the reference <acl> is a name and
is shared with a map, the entry will also be deleted in the map.
del map <map> [<key>|#<ref>]
Delete all the map entries from the map <map> corresponding to the key <key>.
<map> is the #<id> or the <name> returned by "show map". If the <ref> is used,
this command deletes only the listed reference. The reference can be found by
listing the contents of the map. Note that if the reference <map> is a name and
is shared with an ACL, the entry will also be deleted in the ACL.
del ssl ca-file <cafile>
Delete a CA file tree entry from HAProxy. The CA file must be unused and
removed from any crt-list. "show ssl ca-file" displays the status of the CA
files. The deletion doesn't work with a certificate referenced directly with
the "ca-file" or "ca-verify-file" directives in the configuration.
del ssl cert <certfile>
Delete a certificate store from HAProxy. The certificate must be unused
(included for JWT validation) and removed from any crt-list or directory.
"show ssl cert" displays the status of the certificate. The deletion doesn't
work with a certificate referenced directly with the "crt" directive in the
configuration.
del ssl crl-file <crlfile>
Delete a CRL file tree entry from HAProxy. The CRL file must be unused and
removed from any crt-list. "show ssl crl-file" displays the status of the CRL
files. The deletion doesn't work with a certificate referenced directly with
the "crl-file" directive in the configuration.
del ssl crt-list <filename> <certfile[:line]>
Delete an entry in a crt-list. This will delete every SNIs used for this
entry in the frontends. If a certificate is used several time in a crt-list,
you will need to provide which line you want to delete. To display the line
numbers, use "show ssl crt-list -n <crtlist>".
del ssl ech <bind>
Delete the ECH keys of a bind line.
The bind line format is <frontend>/@<filename>:<linenum> (Example:
frontend1/@haproxy.conf:19) or <frontend>/<name> if the bind line was named
with the "name" keyword.
Necessitates an OpenSSL version that supports ECH, and HAProxy must be
compiled with USE_ECH=1. This command is only supported on a CLI connection
running in experimental mode (see "experimental-mode on").
See also "show ssl ech", "add ssl ech" and "ech" in the Section 5.1 of the
configuration manual.
Example:
$ echo "experimental-mode on; del ssl ech frontend1/@haproxy.conf:19" | socat /tmp/haproxy.sock -
deleted all ECH configs from frontend1/@haproxy.conf:19
del ssl jwt <filename>
Remove an already loaded certificate from the list of certificates that can be
used for JWT validation (see "jwt_verify_cert" converter). This command does
not work on ongoing transactions.
See also "add ssl jwt" and "show ssl jwt" commands.
See "jwt" certificate option for more information.
del server <backend>/<server>
Delete a removable server attached to the backend <backend>. A removable
server is the server which satisfies all of these conditions :
- not referenced by other configuration elements
- must already be in maintenance (see "disable server")
- must not have any active or idle connections
If any of these conditions is not met, the command will fail.
Active connections are those with at least one ongoing request. It is
possible to speed up their termination using "shutdown sessions server". It
is highly recommended to use "wait srv-removable" before "del server" to
ensure that all active or idle connections are closed and that the command
succeeds.
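A possible removal sequence combining the commands mentioned above, with
hypothetical backend and server names; the "-t5" option simply keeps socat
connected long enough for the "wait" to complete, and the delay is
illustrative:
$ echo "disable server be_app/srv3; shutdown sessions server be_app/srv3; wait 2s srv-removable be_app/srv3; del server be_app/srv3" | socat -t5 stdio /tmp/sock1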
disable agent <backend>/<server>
Mark the auxiliary agent check as temporarily stopped.
In the case where an agent check is being run as an auxiliary check, due
to the agent-check parameter of a server directive, new checks are only
initialized when the agent is in the enabled state. Thus, disable agent will
prevent any new agent checks from being initiated until the agent is
re-enabled using enable agent.
When an agent is disabled the processing of an auxiliary agent check that
was initiated while the agent was set as enabled is as follows: All
results that would alter the weight, specifically "drain" or a weight
returned by the agent, are ignored. The processing of agent check is
otherwise unchanged.
The motivation for this feature is to allow the weight changing effects
of the agent checks to be paused to allow the weight of a server to be
configured using set weight without being overridden by the agent.
This command is restricted and can only be issued on sockets configured for
level "admin".
disable dynamic-cookie backend <backend>
Disable the generation of dynamic cookies for the backend <backend>
disable frontend <frontend>
Mark the frontend as temporarily stopped. This corresponds to the mode which
is used during a soft restart : the frontend releases the port but can be
enabled again if needed. This should be used with care as some non-Linux OSes
are unable to enable it back. This is intended to be used in environments
where stopping a proxy is not even imaginable but a misconfigured proxy must
be fixed. That way it's possible to release the port and bind it into another
process to restore operations. The frontend will appear with status "STOP"
on the stats page.
The frontend may be specified either by its name or by its numeric ID,
prefixed with a sharp ('#').
This command is restricted and can only be issued on sockets configured for
level "admin".
disable health <backend>/<server>
Mark the primary health check as temporarily stopped. This will disable
sending of health checks, and the last health check result will be ignored.
The server will be in unchecked state and considered UP unless an auxiliary
agent check forces it down.
This command is restricted and can only be issued on sockets configured for
level "admin".
disable server <backend>/<server>
Mark the server DOWN for maintenance. In this mode, no more checks will be
performed on the server until it leaves maintenance.
If the server is tracked by other servers, those servers will be set to DOWN
during the maintenance.
In the statistics page, a server DOWN for maintenance will appear with a
"MAINT" status, its tracking servers with the "MAINT(via)" one.
Both the backend and the server may be specified either by their name or by
their numeric ID, prefixed with a sharp ('#').
This command is restricted and can only be issued on sockets configured for
level "admin".
dump ssl cert <certfile>
Dump a certificate loaded into HAProxy memory. This will dump the certificate
in PEM format, the private key, then the leaf certificate and finally the
chain will be dumped. You can also dump a transaction by prefixing the
filename by an asterisk.
This is useful in order to save certificates on the filesystem when they were
updated via the CLI and not on the filesystem.
This command is restricted and can only be issued on sockets configured for
level "admin".
Examples:
$ echo "dump ssl cert cert1.pem" | socat /tmp/sock1 -
$ echo "dump ssl cert cert1.pem" | socat /tmp/sock1 - | openssl storeutl -noout -text /dev/stdin
dump stats-file
Generate a stats-file which can be used to preload haproxy counters values on
startup. See "Stats-file" section for more detail.
echo <text>
Print some text with the CLI. Can be useful to write comments between
commands when dumping the result of multiple commands.
Example:
echo "expert-mode on; echo FDs from fdtab; show fd; echo wild FDs; debug dev fd" | socat /var/run/haproxy.sock -
enable agent <backend>/<server>
Resume auxiliary agent check that was temporarily stopped.
See "disable agent" for details of the effect of temporarily starting
and stopping an auxiliary agent.
This command is restricted and can only be issued on sockets configured for
level "admin".
enable dynamic-cookie backend <backend>
Enable the generation of dynamic cookies for the backend <backend>.
A secret key must also be provided.
enable frontend <frontend>
Resume a frontend which was temporarily stopped. It is possible that some of
the listening ports won't be able to bind anymore (eg: if another process
took them since the 'disable frontend' operation). If this happens, an error
is displayed. Some operating systems might not be able to resume a frontend
which was disabled.
The frontend may be specified either by its name or by its numeric ID,
prefixed with a sharp ('#').
This command is restricted and can only be issued on sockets configured for
level "admin".
enable health <backend>/<server>
Resume a primary health check that was temporarily stopped. This will enable
sending of health checks again. Please see "disable health" for details.
This command is restricted and can only be issued on sockets configured for
level "admin".
enable server <backend>/<server>
If the server was previously marked as DOWN for maintenance, this marks the
server UP and checks are re-enabled.
Both the backend and the server may be specified either by their name or by
their numeric ID, prefixed with a sharp ('#').
This command is restricted and can only be issued on sockets configured for
level "admin".
experimental-mode [on|off]
Without options, this indicates whether the experimental mode is enabled or
disabled on the current connection. When passed "on", it turns the
experimental mode on for the current CLI connection only. With "off" it turns
it off.
The experimental mode is used to access to extra features still in
development. These features are currently not stable and should be used with
care. They may be subject to breaking changes across versions.
When used from the master CLI, this command shouldn't be prefixed, as it will
set the mode for any worker when connecting to its CLI.
Example:
echo "@1; experimental-mode on; <experimental_cmd>..." | socat /var/run/haproxy.master -
echo "experimental-mode on; @1 <experimental_cmd>..." | socat /var/run/haproxy.master -
expert-mode [on|off]
This command is similar to experimental-mode but is used to toggle the
expert mode.
The expert mode enables displaying of expert commands that can be extremely
dangerous for the process and which may occasionally help developers collect
important information about complex bugs. Any misuse of these features will
likely lead to a process crash. Do not use this option without being invited
to do so. Note that this command is purposely not listed in the help message.
This command is only accessible in admin level. Changing to another level
automatically resets the expert mode.
When used from the master CLI, this command shouldn't be prefixed, as it will
set the mode for any worker when connecting to its CLI.
Example:
echo "@1; expert-mode on; debug dev exit 1" | socat /var/run/haproxy.master -
echo "expert-mode on; @1 debug dev exit 1" | socat /var/run/haproxy.master -
get map <map> <value>
get acl <acl> <value>
Lookup the value <value> in the map <map> or in the ACL <acl>. <map> or <acl>
are the #<id> or the <name> returned by "show map" or "show acl". This command
returns all the matching patterns associated with this map. This is useful for
debugging maps and ACLs. The output format is composed of one line per
matching type. Each line is composed by space-delimited series of words.
The first two words are:
<match method>: The match method applied. It can be "found", "bool",
"int", "ip", "bin", "len", "str", "beg", "sub", "dir",
"dom", "end" or "reg".
<match result>: The result. Can be "match" or "no-match".
The following words are returned only if the pattern matches an entry.
<index type>: "tree" or "list". The internal lookup algorithm.
<case>: "case-insensitive" or "case-sensitive". The
interpretation of the case.
<entry matched>: match="<entry>". Return the matched pattern. It is
useful with regular expressions.
The last two words are used to show the returned value and its type. In the
"acl" case, no value exists.
return=nothing: No value is returned because there is no "map".
return="<value>": The value returned in the string format.
return=cannot-display: The value cannot be converted as string.
type="<type>": The type of the returned sample.
get var <name>
Show the existence, type and contents of the process-wide variable 'name'.
Only process-wide variables are readable, so the name must begin with
'proc.' otherwise no variable will be found. This command requires levels
"operator" or "admin".
get weight <backend>/<server>
Report the current weight and the initial weight of server <server> in
backend <backend> or an error if either doesn't exist. The initial weight is
the one that appears in the configuration file. Both are normally equal
unless the current weight has been changed. Both the backend and the server
may be specified either by their name or by their numeric ID, prefixed with a
sharp ('#').
help [<command>]
Print the list of known keywords and their basic usage, or commands matching
the requested one. The same help screen is also displayed for unknown
commands.
httpclient [--htx] <method> <URI>
Launch an HTTP client request and print the response on the CLI. Only
supported on a CLI connection running in expert mode (see "expert-mode on").
It's only meant for debugging. The httpclient is able to resolve a server
name in the URL using the "default" resolvers section, which is populated
with the DNS servers of your /etc/resolv.conf by default. However it won't be
able to resolve a host from /etc/hosts if you don't use a local dns daemon
which can resolve those.
The --htx option allows using the haproxy internal htx representation, dumped
with the htx_dump() function; it is mainly used for debugging.
new ssl ca-file <cafile>
Create a new empty CA file tree entry to be filled with a set of CA
certificates and added to a crt-list. This command should be used in
combination with "set ssl ca-file", "add ssl ca-file" and "add ssl crt-list".
new ssl cert <filename>
Create a new empty SSL certificate store to be filled with a certificate and
added to a directory or a crt-list. This command should be used in
combination with "set ssl cert" and "add ssl crt-list".
new ssl crl-file <crlfile>
Create a new empty CRL file tree entry to be filled with a set of CRLs
and added to a crt-list. This command should be used in combination with "set
ssl crl-file" and "add ssl crt-list".
prepare acl <acl>
Allocate a new version number in ACL <acl> for atomic replacement. <acl> is
the #<id> or the <name> returned by "show acl". The new version number is
shown in response after "New version created:". This number will then be
usable to prepare additions of new entries into the ACL which will then
atomically replace the current ones once committed. It is reported as
"next_ver" in "show acl". There is no impact of allocating new versions, as
unused versions will automatically be removed once a more recent version is
committed. Version numbers are unsigned 32-bit values which wrap at the end,
so care must be taken when comparing them in an external program. This
command cannot be used if the reference <acl> is a name also used as a map.
In this case, the "prepare map" command must be used instead.
prepare map <map>
Allocate a new version number in map <map> for atomic replacement. <map> is
the #<id> or the <name> returned by "show map". The new version number is
shown in response after "New version created:". This number will then be
usable to prepare additions of new entries into the map which will then
atomically replace the current ones once committed. It is reported as
"next_ver" in "show map". There is no impact of allocating new versions, as
unused versions will automatically be removed once a more recent version is
committed. Version numbers are unsigned 32-bit values which wrap at the end,
so care must be taken when comparing them in an external program.
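A hedged end-to-end example of an atomic replacement, assuming the map is
listed as "#0" and the newly created version is "3" (both illustrative):
$ echo "prepare map #0" | socat stdio /tmp/sock1
New version created: 3
$ echo -e "add map @3 #0 <<\nkey1 value1\nkey2 value2\n" | socat stdio /tmp/sock1
$ echo "commit map @3 #0" | socat stdio /tmp/sock1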
prompt [help | n | i | p | timed]*
Changes the behavior of the interactive mode and the prompt displayed at the
beginning of the line in interactive mode:
- "help" : displays the command's usage
- "n" : switches to non-interactive mode
- "i" : switches to interactive mode
- "p" : switches to interactive + prompt mode
- "timed" : toggles displaying the time in the prompt
Without any option, this will cycle through prompt mode then non-interactive
mode. In non-interactive mode, the connection is closed after the last
command of the current line completes. In interactive mode, the connection is
not closed after a command completes, so that a new one can be entered. In
prompt mode, the interactive mode is still in use, and a prompt will appear
at the beginning of the line, indicating to the user that the interpreter is
waiting for a new command. The prompt consists in a right angle bracket
followed by a space "> ".
The prompt mode is more suited to human users, the interactive mode to
advanced scripts, and the non-interactive mode (default) to basic scripts.
Note that the non-interactive mode is not available for the master socket.
quit
Close the connection when in interactive mode.
set anon [on|off] [<key>]
This command enables or disables the "anonymized mode" for the current CLI
session, which replaces certain fields considered sensitive or confidential
in command outputs with hashes that preserve sufficient consistency between
elements to help developers identify relations between elements when trying
to spot bugs, but a low enough bit count (24) to make them non-reversible due
to the high number of possible matches. When turned on, if no key is
specified, the global key will be used (either specified in the configuration
file by "anonkey" or set via the CLI command "set anon global-key"). If no such
key was set, a random one will be generated. Otherwise it's possible to
specify the 32-bit key to be used for the current session, for example, to
reuse the key that was used in a previous dump to help compare outputs.
Developers will never need this key and it's recommended never to share it as
it could allow confirming or refuting guesses about what certain hashes could
be hiding.
set dynamic-cookie-key backend <backend> <value>
Modify the secret key used to generate the dynamic persistent cookies.
This will break the existing sessions.
set anon global-key <key>
This sets the global anonymizing key to <key>, which must be a 32-bit
integer between 0 and 4294967295 (0 disables the global key). This command
requires admin privilege.
set map <map> [<key>|#<ref>] <value>
Modify the value corresponding to each key <key> in a map <map>. <map> is the
#<id> or <name> returned by "show map". If the <ref> is used in place of
<key>, only the entry pointed by <ref> is changed. The new value is <value>.
set maxconn frontend <frontend> <value>
Dynamically change the specified frontend's maxconn setting. Any positive
value is allowed including zero, but setting values larger than the global
maxconn does not make much sense. If the limit is increased and connections
were pending, they will immediately be accepted. If it is lowered to a value
below the current number of connections, acceptance of new connections will
be delayed until the threshold is reached. The frontend might be specified by
either its name or its numeric ID prefixed with a sharp ('#').
set maxconn server <backend/server> <value>
Dynamically change the specified server's maxconn setting. Any positive
value is allowed including zero, but setting values larger than the global
maxconn does not make much sense.
set maxconn global <maxconn>
Dynamically change the global maxconn setting within the range defined by the
initial global maxconn setting. If it is increased and connections were
pending, they will immediately be accepted. If it is lowered to a value below
the current number of connections, acceptance of new connections will be
delayed until the threshold is reached. A value of zero restores the initial
setting.
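For example, to temporarily lower the limit to an illustrative value and then
restore the initial setting:
$ echo "set maxconn global 5000" | socat stdio /tmp/sock1
$ echo "set maxconn global 0" | socat stdio /tmp/sock1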
set profiling { tasks | memory } { auto | on | off }
Enables or disables CPU or memory profiling for the indicated subsystem. This
is equivalent to setting or clearing the "profiling" settings in the "global"
section of the configuration file. Please also see "show profiling". Note
that manually setting the tasks profiling to "on" automatically resets the
scheduler statistics, thus allowing activity to be checked over a given
interval.
The memory profiling is limited to certain operating systems (known to work
on the linux-glibc target), and requires USE_MEMORY_PROFILING to be set at
compile time.
set rate-limit connections global <value>
Change the process-wide connection rate limit, which is set by the global
'maxconnrate' setting. A value of zero disables the limitation. This limit
applies to all frontends and the change has an immediate effect. The value
is passed in number of connections per second.
set rate-limit http-compression global <value>
Change the maximum input compression rate, which is set by the global
'maxcomprate' setting. A value of zero disables the limitation. The value is
passed in number of kilobytes per second. The value is available in the "show
info" on the line "CompressBpsRateLim" in bytes.
set rate-limit sessions global <value>
Change the process-wide session rate limit, which is set by the global
'maxsessrate' setting. A value of zero disables the limitation. This limit
applies to all frontends and the change has an immediate effect. The value
is passed in number of sessions per second.
set rate-limit ssl-sessions global <value>
Change the process-wide SSL session rate limit, which is set by the global
'maxsslrate' setting. A value of zero disables the limitation. This limit
applies to all frontends and the change has an immediate effect. The value
is passed in number of sessions per second sent to the SSL stack. It applies
before the handshake in order to protect the stack against handshake abuses.
set server <backend>/<server> addr <ip4 or ip6 address> [port <port>]
Replace the current IP address of a server by the one provided.
Optionally, the port can be changed using the 'port' parameter.
Note that changing the port also supports switching from/to port mapping
(notation with +X or -Y), but only if a port is configured for the health
check.
set server <backend>/<server> agent [ up | down ]
Force a server's agent to a new state. This can be useful to immediately
switch a server's state regardless of some slow agent checks for example.
Note that the change is propagated to tracking servers if any.
set server <backend>/<server> agent-addr <addr> [port <port>]
Change the address used for the server's agent checks. This allows migrating
the agent checks to another address at runtime. Both an IP address and a
hostname may be specified; a hostname will be resolved.
Optionally, the agent port can be changed as well.
set server <backend>/<server> agent-port <port>
Change the port used for agent checks.
set server <backend>/<server> agent-send <value>
Change the string sent to the agent check target. This allows updating the
string while changing the server address, in order to keep the two consistent.
set server <backend>/<server> health [ up | stopping | down ]
Force a server's health to a new state. This can be useful to immediately
switch a server's state regardless of some slow health checks for example.
Note that the change is propagated to tracking servers if any.
set server <backend>/<server> check-addr <ip4 | ip6> [port <port>]
Change the IP address used for server health checks.
Optionally, change the port used for server health checks.
set server <backend>/<server> check-port <port>
Change the port used for health checking to <port>
set server <backend>/<server> state [ ready | drain | maint ]
Force a server's administrative state to a new state. This can be useful to
disable load balancing and/or any traffic to a server. Setting the state to
"ready" puts the server in normal mode, and the command is the equivalent of
the "enable server" command. Setting the state to "maint" disables any traffic
to the server as well as any health checks. This is the equivalent of the
"disable server" command. Setting the mode to "drain" only removes the server
from load balancing but still allows it to be checked and to accept new
persistent connections. Changes are propagated to tracking servers if any.
set server <backend>/<server> weight <weight>[%]
Change a server's weight to the value passed in argument. This is the exact
equivalent of the "set weight" command below.
set server <backend>/<server> fqdn <FQDN>
Change a server's FQDN to the value passed in argument. This requires the
internal run-time DNS resolver to be configured and enabled for this server.
set server <backend>/<server> ssl [ on | off ] (deprecated)
This option configures SSL ciphering on outgoing connections to the server.
When switched off, all traffic becomes plain text; the health check path is not
changed.
This command is deprecated, create a new server dynamically with or without
SSL instead, using the "add server" command.
set severity-output [ none | number | string ]
Change the severity output format of the stats socket connected to for the
duration of the current session.
set ssl ca-file <cafile> <payload>
This command is part of a transaction system, the "commit ssl ca-file" and
"abort ssl ca-file" commands could be required.
If there is no on-going transaction, it will create a CA file tree entry into
which the certificates contained in the payload will be stored. The CA file
entry will not be stored in the CA file tree and will only be kept in a
temporary transaction. If a transaction with the same filename already exists,
the previous CA file entry will be deleted and replaced by the new one.
Once the modifications are done, you have to commit the transaction through
a "commit ssl ca-file" call. If you want to add multiple certificates
separately, you can use the "add ssl ca-file" command.
Example:
echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \
socat /var/run/haproxy.stat -
echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat -
set ssl cert <filename> <payload>
This command is part of a transaction system, the "commit ssl cert" and
"abort ssl cert" commands could be required.
This whole transaction system works on any certificate displayed by the
"show ssl cert" command, so on any frontend or backend certificate.
If there is no on-going transaction, it will duplicate the certificate
<filename> in memory to a temporary transaction, then update this
transaction with the PEM file in the payload. If a transaction exists with
the same filename, it will update this transaction. It's also possible to
update the files linked to a certificate (.issuer, .sctl, .ocsp etc.)
Once the modification are done, you have to "commit ssl cert" the
transaction.
Injection of files over the CLI must be done with caution since an empty line
is used to notify the end of the payload. It is recommended to inject a PEM
file which has been sanitized. A simple method would be to remove every empty
line and only leave what are in the PEM sections. It could be achieved with a
sed command.
Example:
# With some simple sanitizing
echo -e "set ssl cert localhost.pem <<\n$(sed -n '/^$/d;/-BEGIN/,/-END/p' 127.0.0.1.pem)\n" | \
socat /var/run/haproxy.stat -
# Complete example with commit
echo -e "set ssl cert localhost.pem <<\n$(cat 127.0.0.1.pem)\n" | \
socat /var/run/haproxy.stat -
echo -e \
"set ssl cert localhost.pem.issuer <<\n $(cat 127.0.0.1.pem.issuer)\n" | \
socat /var/run/haproxy.stat -
echo -e \
"set ssl cert localhost.pem.ocsp <<\n$(base64 -w 1000 127.0.0.1.pem.ocsp)\n" | \
socat /var/run/haproxy.stat -
echo "commit ssl cert localhost.pem" | socat /var/run/haproxy.stat -
set ssl crl-file <crlfile> <payload>
This command is part of a transaction system, the "commit ssl crl-file" and
"abort ssl crl-file" commands could be required.
If there is no on-going transaction, it will create a CRL file tree entry into
which the Revocation Lists contained in the payload will be stored. The CRL
file entry will not be stored in the CRL file tree and will only be kept in a
temporary transaction. If a transaction with the same filename already exists,
the previous CRL file entry will be deleted and replaced by the new one.
Once the modifications are done, you have to commit the transaction through
a "commit ssl crl-file" call.
Example:
echo -e "set ssl crl-file crlfile.pem <<\n$(cat rootCRL.pem)\n" | \
socat /var/run/haproxy.stat -
echo "commit ssl crl-file crlfile.pem" | socat /var/run/haproxy.stat -
set ssl ech <bind> <payload>
Replace the ECH keys of a bind line with this one. The payload must be in the
PEM for ECH format.
(https://datatracker.ietf.org/doc/html/draft-farrell-tls-pemesni)
The bind line format is <frontend>/@<filename>:<linenum> (Example:
frontend1/@haproxy.conf:19) or <frontend>/<name> if the bind line was named
with the "name" keyword.
Necessitates an OpenSSL version that supports ECH, and HAProxy must be
compiled with USE_ECH=1. This command is only supported on a CLI connection
running in experimental mode (see "experimental-mode on").
See also "show ssl ech", "add ssl ech" and "ech" in the Section 5.1 of the
configuration manual.
$ openssl ech -public_name foobar.com -out foobar3.com.ech
$ echo -e "experimental-mode on;
set ssl ech frontend1/@haproxy.conf:19 <<%EOF%\n$(cat foobar3.com.ech)\n%EOF%\n" | \
socat /tmp/haproxy.sock -
set new ECH configs for frontend1/@haproxy.conf:19
set ssl ocsp-response <response | payload>
This command is used to update an OCSP Response for a certificate (see "crt"
on "bind" lines). Same controls are performed as during the initial loading of
the response. The <response> must be passed as a base64 encoded string of the
DER encoded response from the OCSP server. This command is not supported with
BoringSSL.
Example:
openssl ocsp -issuer issuer.pem -cert server.pem \
-host ocsp.issuer.com:80 -respout resp.der
echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
socat stdio /var/run/haproxy.stat
using the payload syntax:
echo -e "set ssl ocsp-response <<\n$(base64 resp.der)\n" | \
socat stdio /var/run/haproxy.stat
set ssl tls-key <id> <tlskey>
Set the next TLS key for the <id> listener to <tlskey>. This key becomes the
ultimate key, while the penultimate one is used for encryption (others just
decrypt). The oldest TLS key present is overwritten. <id> is either a numeric
#<id> or <file> returned by "show tls-keys". <tlskey> is a base64 encoded 48
or 80 bytes TLS ticket key (ex. openssl rand 80 | openssl base64 -A).
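A hedged example, assuming the keys file is listed by "show tls-keys" as
"/etc/haproxy/tls_ticket_keys" (the path is illustrative); the key generation
command is the one given above:
$ echo "set ssl tls-key /etc/haproxy/tls_ticket_keys $(openssl rand 80 | openssl base64 -A)" | socat stdio /tmp/sock1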
set table <table> key <key> [data.<data_type> <value>]*
set table <table> ptr <ptr> [data.<data_type> <value>]*
Create or update a stick-table entry in the table. If the key is not present,
an entry is inserted. See stick-table in section 4.2 to find all possible
values for <data_type>. The most likely use consists in dynamically entering
entries for source IP addresses, with a flag in gpc0 to dynamically block an
IP address or affect its quality of service. It is possible to pass multiple
data_types in a single call.
Optional ptr lookup may be used instead of key lookup for an existing entry:
<ptr> is written in the form 0xffff and must correspond to the address
returned by a previous "show table" command. Matching an entry using its
pointer may be relevant if the entry cannot be matched using the key due to
empty key or incompatible characters on the cli.
If data.<data_type> is an array type, "[]" may be used to access a specific
index in the array, like so: data.gpt[1]
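For instance, following the gpc0 blocking use-case described above, with an
illustrative table name and source address:
$ echo "set table http_proxy key 192.0.2.15 data.gpc0 1" | socat stdio /tmp/sock1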
set timeout cli <delay>
Change the CLI interface timeout for current connection. This can be useful
during long debugging sessions where the user needs to constantly inspect
some indicators without being disconnected. The delay is passed in seconds.
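For example, during an interactive session (the one-hour value is
illustrative):
$ socat /tmp/sock1 readline
prompt
> set timeout cli 3600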
set var <name> <expression>
set var <name> expr <expression>
set var <name> fmt <format>
Allows to set or overwrite the process-wide variable 'name' with the result
of expression <expression> or format string <format>. Only process-wide
variables may be used, so the name must begin with 'proc.' otherwise no
variable will be set. The <expression> and <format> may only involve
"internal" sample fetch keywords and converters even though the most likely
useful ones will be str('something'), int(), simple strings or references to
other variables. Note that the command line parser doesn't know about quotes,
so any space in the expression must be preceded by a backslash. This command
requires levels "operator" or "admin". This command is only supported on a
CLI connection running in experimental mode (see "experimental-mode on").
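An illustrative example; the variable name and value are hypothetical, and the
experimental mode is enabled on the same connection as required above:
$ echo "experimental-mode on; set var proc.current_state str(primary)" | socat stdio /tmp/sock1
$ echo "get var proc.current_state" | socat stdio /tmp/sock1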
set weight <backend>/<server> <weight>[%]
Change a server's weight to the value passed in argument. If the value ends
with the '%' sign, then the new weight will be relative to the initially
configured weight. Absolute weights are permitted between 0 and 256.
Relative weights must be positive, and the resulting absolute weight is
capped at 256. Servers which are part of a farm running a static
load-balancing algorithm have stricter limitations because the weight
cannot change once set. Thus for these servers, the only accepted values
are 0 and 100% (or 0 and the initial weight). Changes take effect
immediately, though certain LB algorithms require a certain amount of
requests to consider changes. A typical usage of this command is to
disable a server during an update by setting its weight to zero, then to
enable it again after the update by setting it back to 100%. This command
is restricted and can only be issued on sockets configured for level
"admin". Both the backend and the server may be specified either by their
name or by their numeric ID, prefixed with a sharp ('#').
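Following the typical usage described above, with illustrative backend and
server names:
$ echo "set weight be_app/srv1 0" | socat stdio /tmp/sock1
(perform the update on the server)
$ echo "set weight be_app/srv1 100%" | socat stdio /tmp/sock1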
show acl [[@<ver>] <acl>]
Dump info about acl converters. Without argument, the list of all available
acls is returned. If a <acl> is specified, its contents are dumped. <acl> is
the #<id> or <name>. By default the current version of the ACL is shown (the
version currently being matched against and reported as 'curr_ver' in the ACL
list). It is possible to instead dump other versions by prepending '@<ver>'
before the ACL's identifier. The version works as a filter and non-existing
versions will simply report no result. The dump format is the same as for the
maps even for the sample values. The data returned are not a list of
available ACLs, but the list of all patterns composing any ACL. Many of
these patterns can be shared with maps. The 'entry_cnt' value represents the
count of all the ACL entries, not just the active ones, which means that it
also includes entries currently being added.
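A few illustrative invocations follow; the socket path is an assumption and
"#0" simply refers to the first ACL reported by the listing:
Example :
$ echo "show acl" | socat stdio /tmp/sock1           # list all ACLs with their IDs
$ echo "show acl #0" | socat stdio /tmp/sock1        # dump the patterns of ACL #0
$ echo "show acl @1 #0" | socat stdio /tmp/sock1     # dump prepared version 1 of ACL #0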
show anon
Display the current state of the anonymized mode (enabled or disabled) and
the current session's key.
show backend
Dump the list of backends available in the running process
show cli level
Display the CLI level of the current CLI session. The result could be
'admin', 'operator' or 'user'. See also the 'operator' and 'user' commands.
Example :
$ socat /tmp/sock1 readline
prompt
> operator
> show cli level
operator
> user
> show cli level
user
> operator
Permission denied
operator
Decrease the CLI level of the current CLI session to operator. It can't be
increased. It also drops expert and experimental mode. See also "show cli
level".
user
Decrease the CLI level of the current CLI session to user. It can't be
increased. It also drops expert and experimental mode. See also "show cli
level".
show activity [-1 | 0 | thread_num]
Reports some counters about internal events that will help developers and
more generally people who know haproxy well enough to narrow down the causes
of reports of abnormal behaviours. A typical example would be a properly
running process never sleeping and eating 100% of the CPU. The output fields
will be made of one line per metric, and per-thread counters on the same
line. These counters are 32-bit and will wrap during the process's life, which
is not a problem since calls to this command will typically be performed
twice. The fields are purposely not documented so that their exact meaning is
verified in the code where the counters are fed. These values are also reset
by the "clear counters" command. On multi-threaded deployments, the first
column will indicate the total (or average depending on the nature of the
metric) for all threads, and the list of all threads' values will be
represented between square brackets in the thread order. Optionally the
thread number to be dumped may be specified in argument. The special value
"0" will report the aggregated value (first column), and "-1", which is the
default, will display all the columns. Note that just like in single-threaded
mode, there will be no brackets when a single column is requested.
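A brief illustrative invocation follows (the output is not reproduced here
since the fields are intentionally undocumented); the socket path is an
assumption:
Example :
$ echo "show activity" | socat stdio /tmp/sock1      # all columns
$ echo "show activity 0" | socat stdio /tmp/sock1    # aggregated column only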
show cli sockets
List CLI sockets. The output format is composed of 3 fields separated by
spaces. The first field is the socket address; it can be a UNIX socket, an
IPv4 address:port couple or an IPv6 one. Sockets of other types won't be
dumped. The second field describes the level of the socket: 'admin', 'user'
or 'operator'. The last field lists the processes on which the socket is
bound, separated by commas; it can be numbers or 'all'.
Example :
$ echo 'show cli sockets' | socat stdio /tmp/sock1
# socket lvl processes
/tmp/sock1 admin all
127.0.0.1:9999 user 2,3,4
127.0.0.2:9969 user 2
[::1]:9999 operator 2
show cache
List the configured caches and the objects stored in each cache tree.
$ echo 'show cache' | socat stdio /tmp/sock1
0x7f6ac6c5b03a: foobar (shctx:0x7f6ac6c5b000, available blocks:3918)
1 2 3 4
1. pointer to the cache structure
2. cache name
3. pointer to the mmap area (shctx)
4. number of blocks available for reuse in the shctx
0x7f6ac6c5b4cc hash:286881868 vary:0x0011223344556677 size:39114 (39 blocks), refcount:9, expire:237
1 2 3 4 5 6 7
1. pointer to the cache entry
2. first 32 bits of the hash
3. secondary hash of the entry in case of vary
4. size of the object in bytes
5. number of blocks used for the object
6. number of transactions using the entry
7. expiration time, can be negative if already expired
show dev
This command is meant to centralize some information that HAProxy developers
might need to better understand the causes of a given problem. It generally
does not provide useful information for the user, but this information allows
developers to eliminate certain hypotheses. The format is roughly a series of
sections containing indented lines with one element per line, such as the OS
type and version, the CPU type or the boot-time FD limits for example. Some
fields will be omitted to avoid repetition or output pollution when they do
not add value (e.g. unlimited values). More fields may appear in the future,
and some may change. This output is not meant to be parsed by scripts, and
should not be considered with a high degree of reliability, it's essentially
aimed at saving time for those who can read it.
Technically speaking, this information is taken as-is from an internal
structure that stores it at boot time so that it can also be found in a core
file after a crash. As such, it may happen that developers
ask for an early output on a well behaving process to compare with what is
found in a core dump, or to compare between several reloads (e.g. some limits
might change). If anonymizing is enabled, any possibly sensitive value will
be anonymized as well (e.g. the node name).
Example of output:
$ socat stdio /tmp/sock1 <<< "show dev"
Platform info
machine vendor: To be filled by O.E.M
machine family: Altra
cpu model: Impl 0x41 Arch 8 Part 0xd0c r3p1
virtual machine: no
container: no
OS name: Linux
OS release: 6.2.0-36-generic
OS version: #37~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Oct 9 18:01:07 UTC 2
OS architecture: aarch64
node name: 489aaf
Process info
pid: 1735846
boot uid: 509
boot gid: 1002
fd limit (soft): 1024
fd limit (hard): 1048576
show env [<name>]
Dump one or all environment variables known by the process. Without any
argument, all variables are dumped. With an argument, only the specified
variable is dumped if it exists. Otherwise "Variable not found" is emitted.
Variables are dumped in the same format as they are stored or returned by the
"env" utility, that is, "<name>=<value>". This can be handy when debugging
certain configuration files making heavy use of environment variables to
ensure that they contain the expected values. This command is restricted and
can only be issued on sockets configured for levels "operator" or "admin".
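A minimal illustrative example follows; the socket path and the configuration
file path are assumptions ("HAPROXY_CFGFILES" is set by haproxy at startup):
Example :
$ echo "show env HAPROXY_CFGFILES" | socat stdio /tmp/sock1
HAPROXY_CFGFILES=/etc/haproxy/haproxy.cfg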
show errors [<iid>|<proxy>] [request|response]
Dump last known HTTP/1.x request and response errors collected by frontends
and backends. If <iid> is specified, the dump is limited to errors concerning
the frontend or backend whose ID is <iid>. Proxy ID "-1" will cause all
instances to be dumped. If a proxy name is specified instead, its ID will be
used as the filter. If "request" or "response" is added after the proxy name
or ID, only request or response errors will be dumped. This command is
restricted and can only be issued on sockets configured for levels "operator"
or "admin".
The errors which may be collected are the last request and response errors
caused by protocol violations, often due to invalid characters in header
names. The report precisely indicates what exact character violated the
protocol. Other important information such as the exact date the error was
detected, frontend and backend names, the server name (when known), the
internal transaction ID and the source address which has initiated the
session are reported too.
All characters are returned, and non-printable characters are encoded. The
most common ones (\t = 9, \n = 10, \r = 13 and \e = 27) are encoded as one
letter following a backslash. The backslash itself is encoded as '\\' to
avoid confusion. Other non-printable characters are encoded as '\xNN' where
NN is the two-digit hexadecimal representation of the character's ASCII
code.
Lines are prefixed with the position of their first character, starting at 0
for the beginning of the buffer. At most one input line is printed per line,
and large lines will be broken into multiple consecutive output lines so that
the output never goes beyond 79 characters wide. It is easy to detect if a
line was broken, because it will not end with '\n' and the next line's offset
will be followed by a '+' sign, indicating it is a continuation of the
previous line.
Example :
$ echo "show errors -1 response" | socat stdio /tmp/sock1
>>> [04/Mar/2009:15:46:56.081] backend http-in (#2) : invalid response
src 127.0.0.1, session #54, frontend fe-eth0 (#1), server s2 (#1)
response length 213 bytes, error at position 23:
00000 HTTP/1.0 200 OK\r\n
00017 header/bizarre:blah\r\n
00038 Location: blah\r\n
00054 Long-line: this is a very long line which should b
00104+ e broken into multiple lines on the output buffer,
00154+ otherwise it would be too large to print in a ter
00204+ minal\r\n
00211 \r\n
In the example above, we see that the backend "http-in" which has internal
ID 2 has blocked an invalid response from its server s2 which has internal
ID 1. The request was on transaction 54 (called "session" here) initiated
by source 127.0.0.1 and received by frontend fe-eth0 whose ID is 1. The
total response length was 213 bytes when the error was detected, and the
error was at byte 23. This is the slash ('/') in header name
"header/bizarre", which is not a valid HTTP character for a header name.
show events [<sink>] [-w] [-n] [-0]
With no option, this lists all known event sinks and their types. With an
option, it will dump all available events in the designated sink if it is of
type buffer. If option "-w" is passed after the sink name, then once the end
of the buffer is reached, the command will wait for new events and display
them. It is possible to stop the operation by entering any input (which will
be discarded) or by closing the session. Finally, option "-n" is used to
directly seek to the end of the buffer, which is often convenient when
combined with "-w" to only report new events. For convenience, "-wn" or "-nw"
may be used to enable both options at once. By default, all events are
delimited by a line feed character ('\n' or 10 or 0x0A). It is possible to
change this to the NUL character ('\0' or 0) by passing the "-0" argument.
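A minimal illustrative example follows; it assumes a ring buffer sink named
"buf0" was declared in the configuration, and the socket path is arbitrary:
Example :
$ echo "show events" | socat stdio /tmp/sock1           # list known sinks
$ echo "show events buf0 -nw" | socat stdio /tmp/sock1  # wait for and print new events only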
show fd [-!plcfbsd]* [<fd>]
Dump the list of either all open file descriptors or just the one number <fd>
if specified. A set of flags may optionally be passed to restrict the dump
only to certain FD types or to omit certain FD types. When '-' or '!' are
encountered, the selection is inverted for the following characters in the
same argument. The inversion is reset before each argument word delimited by
white spaces. Selectable FD types include 'p' for pipes, 'l' for listeners,
'c' for connections (any type), 'f' for frontend connections, 'b' for backend
connections (any type), 's' for connections to servers, 'd' for connections
to the "dispatch" address or the backend's transparent address. With this,
'b' is a shortcut for 'sd' and 'c' for 'fb' or 'fsd'. 'c!f' is equivalent to
'b' ("any connections except frontend connections" are indeed backend
connections). This is only aimed at developers who need to observe internal
states in order to debug complex issues such as abnormal CPU usages. One fd
is reported per line, and for each of them, its state in the poller using
upper case letters for enabled flags and lower case for disabled flags, using
"P" for "polled", "R" for "ready", "A" for "active", the events status using
"H" for "hangup", "E" for "error", "O" for "output", "P" for "priority" and
"I" for "input", a few other flags like "N" for "new" (just added into the fd
cache), "U" for "updated" (received an update in the fd cache), "L" for
"linger_risk", "C" for "cloned", then the cached entry position, the pointer
to the internal owner, the pointer to the I/O callback and its name when
known. When the owner is a connection, the connection flags, and the target
are reported (frontend, proxy or server). When the owner is a listener, the
listener's state and its frontend are reported. There is no point in using
this command without a good knowledge of the internals. It's worth noting
that the output format may evolve over time so this output must not be parsed
by tools designed to be durable. Some internal structure states may look
suspicious to the function listing them, in this case the output line will be
suffixed with an exclamation mark ('!'). This may help find a starting point
when trying to diagnose an incident.
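A few illustrative invocations follow; the socket path is an assumption:
Example :
$ echo "show fd l" | socat stdio /tmp/sock1      # listeners only
$ echo "show fd c!f" | socat stdio /tmp/sock1    # backend connections only (equivalent to 'b')
$ echo "show fd 12" | socat stdio /tmp/sock1     # only file descriptor 12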
show info [typed|json] [desc] [float]
Dump info about haproxy status on current process. If "typed" is passed as an
optional argument, field numbers, names and types are emitted as well so that
external monitoring products can easily retrieve, possibly aggregate, then
report information found in fields they don't know. Each field is dumped on
its own line. If "json" is passed as an optional argument then
information provided by "typed" output is provided in JSON format as a
list of JSON objects. By default, the format contains only two columns
delimited by a colon (':'). The left one is the field name and the right
one is the value. It is very important to note that in typed output
format, the dump for a single object is contiguous so that there is no
need for a consumer to store everything at once. If "float" is passed as an
optional argument, some fields usually emitted as integers may switch to
floats for higher accuracy. It is purposely unspecified which ones are
concerned as this might evolve over time. Using this option implies that the
consumer is able to process floats. The output format used is sprintf("%f").
When using the typed output format, each line is made of 4 columns delimited
by colons (':'). The first column is a dot-delimited series of 3 elements. The
first element is the numeric position of the field in the list (starting at
zero). This position shall not change over time, but holes are to be expected,
depending on build options or if some fields are deleted in the future. The
second element is the field name as it appears in the default "show info"
output. The third element is the relative process number starting at 1.
The rest of the line starting after the first colon follows the "typed output
format" described in the section above. In short, the second column (after the
first ':') indicates the origin, nature and scope of the variable. The third
column indicates the type of the field, among "s32", "s64", "u32", "u64" and
"str". Then the fourth column is the value itself, which the consumer knows
how to parse thanks to column 3 and how to process thanks to column 2.
Thus the overall line format in typed mode is :
<field_pos>.<field_name>.<process_num>:<tags>:<type>:<value>
When "desc" is appended to the command, one extra colon followed by a quoted
string is appended with a description for the metric. At the time of writing,
this is only supported for the "typed" and default output formats.
Example :
> show info
Name: HAProxy
Version: 1.7-dev1-de52ea-146
Release_date: 2016/03/11
Nbproc: 1
Process_num: 1
Pid: 28105
Uptime: 0d 0h00m04s
Uptime_sec: 4
Memmax_MB: 0
PoolAlloc_MB: 0
PoolUsed_MB: 0
PoolFailed: 0
(...)
> show info typed
0.Name.1:POSV:str:HAProxy
1.Version.1:POSV:str:3.1-dev0-7c653d-2466
2.Release_date.1:POSV:str:2025/07/01
3.Nbthread.1:CGSV:u32:1
4.Nbproc.1:CGSV:u32:1
5.Process_num.1:KGPV:u32:1
6.Pid.1:SGPV:u32:638069
7.Uptime.1:MDPV:str:0d 0h00m07s
8.Uptime_sec.1:MDPV:u32:7
9.Memmax_MB.1:CLPV:u32:0
10.PoolAlloc_MB.1:MGPV:u32:0
11.PoolUsed_MB.1:MGPV:u32:0
12.PoolFailed.1:MCPV:u32:0
(...)
In the typed format, the presence of the process ID at the end of the
first column makes it very easy to visually aggregate outputs from
multiple processes.
Example :
$ ( echo show info typed | socat /var/run/haproxy.sock1 ; \
echo show info typed | socat /var/run/haproxy.sock2 ) | \
sort -t . -k 1,1n -k 2,2 -k 3,3n
0.Name.1:POS:str:HAProxy
0.Name.2:POS:str:HAProxy
1.Version.1:POS:str:1.7-dev1-868ab3-148
1.Version.2:POS:str:1.7-dev1-868ab3-148
2.Release_date.1:POS:str:2016/03/11
2.Release_date.2:POS:str:2016/03/11
3.Nbproc.1:CGS:u32:2
3.Nbproc.2:CGS:u32:2
4.Process_num.1:KGP:u32:1
4.Process_num.2:KGP:u32:2
5.Pid.1:SGP:u32:30120
5.Pid.2:SGP:u32:30121
6.Uptime.1:MDP:str:0d 0h01m28s
6.Uptime.2:MDP:str:0d 0h01m28s
(...)
The format of JSON output is described in a schema which may be output
using "show schema json".
The JSON output contains no extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful. Example :
$ echo "show info json" | socat /var/run/haproxy.sock stdio | \
python -m json.tool
show libs
Dump the list of loaded shared dynamic libraries and object files, on systems
that support it. When available, for each shared object the range of virtual
addresses will be indicated, the size and the path to the object. This can be
used for example to try to estimate what library provides a function that
appears in a dump. Note that on many systems, addresses will change upon each
restart (address space randomization), so that this list would need to be
retrieved upon startup if it is expected to be used to analyse a core file.
This command may only be issued on sockets configured for levels "operator"
or "admin". Note that the output format may vary between operating systems,
architectures and even haproxy versions, and ought not to be relied on in
scripts.
show map [[@<ver>] <map>]
Dump info about map converters. Without argument, the list of all available
maps is returned. If a <map> is specified, its contents are dumped. <map> is
the #<id> or <name>. By default the current version of the map is shown (the
version currently being matched against and reported as 'curr_ver' in the map
list). It is possible to instead dump other versions by prepending '@<ver>'
before the map's identifier. The version works as a filter and non-existing
versions will simply report no result. The 'entry_cnt' value represents the
count of all the map entries, not just the active ones, which means that it
also includes entries currently being added.
In the output, the first column is a unique entry identifier, which is usable
as a reference for operations "del map" and "set map". The second column is
the pattern and the third column is the sample if available. The data returned
are not directly a list of available maps, but are the list of all patterns
composing any map. Many of these patterns can be shared with ACL.
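A few illustrative invocations follow; the map file path "/etc/haproxy/geo.map"
and the socket path are assumptions:
Example :
$ echo "show map" | socat stdio /tmp/sock1                          # list all maps
$ echo "show map /etc/haproxy/geo.map" | socat stdio /tmp/sock1     # dump the current version
$ echo "show map @2 /etc/haproxy/geo.map" | socat stdio /tmp/sock1  # dump prepared version 2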
show peers [dict|-] [<peers section>]
Dump info about the peers configured in "peers" sections. Without argument,
the list of the peers belonging to all the "peers" sections is listed. If
<peers section> is specified, only the information about the peers belonging
to this "peers" section are dumped. When "dict" is specified before the peers
section name, the entire Tx/Rx dictionary caches will also be dumped (very
large). Passing "-" may be required to dump a peers section called "dict".
Here are two examples of outputs where hostA, hostB and hostC peers belong to
"sharedlb" peers sections. Only hostA and hostB are connected. Only hostA has
sent data to hostB.
$ echo "show peers" | socat - /tmp/hostA
0x55deb0224320: [15/Apr/2019:11:28:01] id=sharedlb state=0 flags=0x3 \
resync_timeout=<PAST> task_calls=45122
0x55deb022b540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \
reconnect=4s confirm=0
flags=0x0
0x55deb022a440: id=hostA(local) addr=127.0.0.10:10000 status=NONE \
reconnect=<NEVER> confirm=0
flags=0x0
0x55deb0227d70: id=hostB(remote) addr=127.0.0.11:10001 status=ESTA
reconnect=2s confirm=0
flags=0x20000200 appctx:0x55deb028fba0 st0=7 st1=0 task_calls=14456 \
state=EST
xprt=RAW src=127.0.0.1:37257 addr=127.0.0.10:10000
remote_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
last_local_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1
shared tables:
0x55deb0224a10 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
last_acked=0 last_pushed=3 last_get=0 teaching_origin=0 update=3
table:0x55deb022d6a0 id=stkt update=3 localupdate=3 \
commitupdate=3 syncing=0
$ echo "show peers" | socat - /tmp/hostB
0x55871b5ab320: [15/Apr/2019:11:28:03] id=sharedlb state=0 flags=0x3 \
resync_timeout=<PAST> task_calls=3
0x55871b5b2540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \
reconnect=3s confirm=0
flags=0x0
0x55871b5b1440: id=hostB(local) addr=127.0.0.11:10001 status=NONE \
reconnect=<NEVER> confirm=0
flags=0x0
0x55871b5aed70: id=hostA(remote) addr=127.0.0.10:10000 status=ESTA \
reconnect=2s confirm=0
flags=0x20000200 appctx:0x7fa46800ee00 st0=7 st1=0 task_calls=62356 \
state=EST
remote_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
last_local_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1
shared tables:
0x55871b5ab960 local_id=1 remote_id=1 flags=0x0 remote_data=0x65
last_acked=3 last_pushed=0 last_get=3 teaching_origin=0 update=0
table:0x55871b5b46a0 id=stkt update=1 localupdate=0 \
commitupdate=0 syncing=0
show pools [byname|bysize|byusage] [detailed] [match <pfx>] [<nb>]
Dump the status of internal memory pools. This is useful to track memory
usage when suspecting a memory leak for example. It does exactly the same
as the SIGQUIT when running in foreground except that it does not flush the
pools. The output is not sorted by default. If "byname" is specified, it is
sorted by pool name; if "bysize" is specified, it is sorted by item size in
reverse order; if "byusage" is specified, it is sorted by total usage in
reverse order, and only used entries are shown. It is also possible to limit
the output to the <nb> first entries (e.g. when sorting by usage). It is
possible to also dump more internal details, including the list of all pools
that were merged together, by specifying "detailed". Finally, if "match"
followed by a prefix is specified, then only pools whose name starts with
this prefix will be shown. The reported total only concerns pools matching
the filtering criteria. Example:
$ socat - /tmp/haproxy.sock <<< "show pools match quic byusage"
Dumping pools usage. Use SIGQUIT to flush them.
- Pool quic_conn_r (65560 bytes) : 1337 allocated (87653720 bytes), ...
- Pool quic_crypto (1048 bytes) : 6685 allocated (7005880 bytes), ...
- Pool quic_conn (4056 bytes) : 1337 allocated (5422872 bytes), ...
- Pool quic_rxbuf (262168 bytes) : 8 allocated (2097344 bytes), ...
- Pool quic_conne (184 bytes) : 9359 allocated (1722056 bytes), ...
- Pool quic_frame (184 bytes) : 7938 allocated (1460592 bytes), ...
- Pool quic_tx_pac (152 bytes) : 6454 allocated (981008 bytes), ...
- Pool quic_tls_ke (56 bytes) : 12033 allocated (673848 bytes), ...
- Pool quic_rx_pac (408 bytes) : 1596 allocated (651168 bytes), ...
- Pool quic_tls_se (88 bytes) : 6685 allocated (588280 bytes), ...
- Pool quic_cstrea (88 bytes) : 4011 allocated (352968 bytes), ...
- Pool quic_tls_iv (24 bytes) : 12033 allocated (288792 bytes), ...
- Pool quic_dgram (344 bytes) : 732 allocated (251808 bytes), ...
- Pool quic_arng (56 bytes) : 4011 allocated (224616 bytes), ...
- Pool quic_conn_c (152 bytes) : 1337 allocated (203224 bytes), ...
Total: 15 pools, 109578176 bytes allocated, 109578176 used ...
show profiling [{all | status | tasks | memory}] [byaddr|bytime|aggr|<max_lines>]*
Dumps the current profiling settings, one per line, as well as the command
needed to change them. When tasks profiling is enabled, some per-function
statistics collected by the scheduler will also be emitted, with a summary
covering the number of calls, total/avg CPU time and total/avg latency. When
memory profiling is enabled, some information such as the number of
allocations/releases and their sizes will be reported. It is possible to
limit the dump to only the profiling status, the tasks, or the memory
profiling by specifying the respective keywords; by default all profiling
information are dumped. It is also possible to limit the number of lines
of output of each category by specifying a numeric limit. It is possible to
request that the output be sorted by address or by total execution time
instead of usage, e.g. to ease comparisons between subsequent calls or to
check what needs to be optimized, and to aggregate task activity by called
function instead of seeing the details. Please note that profiling is
essentially aimed at developers since it gives hints about where CPU cycles
or memory are wasted in the code. There is nothing useful to monitor there.
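A couple of illustrative invocations follow; the socket path is an assumption:
Example :
$ echo "show profiling status" | socat stdio /tmp/sock1           # settings only
$ echo "show profiling tasks bytime 20" | socat stdio /tmp/sock1  # top 20 task entries by CPU time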
show resolvers [<resolvers section id>]
Dump statistics for the given resolvers section, or all resolvers sections
if no section is supplied.
For each name server, the following counters are reported:
sent: number of DNS requests sent to this server
valid: number of DNS valid responses received from this server
update: number of DNS responses used to update the server's IP address
cname: number of CNAME responses
cname_error: CNAME errors encountered with this server
any_err: number of empty responses (i.e. the server does not support ANY type)
nx: number of non-existent domain responses received from this server
timeout: number of times this server did not answer in time
refused: number of requests refused by this server
other: any other DNS errors
invalid: invalid DNS responses (from a protocol point of view)
too_big: responses that were too large
outdated: number of responses that arrived too late (after another name server)
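A minimal illustrative invocation follows; the resolvers section name "mydns"
and the socket path are assumptions. The counters listed above are printed one
per line for each name server of the section:
Example :
$ echo "show resolvers mydns" | socat stdio /tmp/sock1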
show quic [<format>] [<filter>]
Dump information on all active QUIC frontend connections. This command is
restricted and can only be issued on sockets configured for levels "operator"
or "admin".
An optional argument can be specified to control the verbosity. Its value can
be interpreted in different ways. The first possibility is to use predefined
values: "oneline" for the default format, "stream" to list every active
stream and "full" to display all information. Alternatively, a list of
comma-delimited fields can be specified to restrict output. Currently
supported values are "tp", "sock", "pktns", "cc" and "mux". Finally, "help"
in the format will instead show a more detailed help message.
The final argument is used to restrict or extend the connection list. By
default, active frontend connections only are displayed. Use the extra
argument "clo" to list instead closing frontend connections, "be" for backend
connections or "all" for every categories. It's also possible to restrict to
a single connection by specifying its hexadecimal address.
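A few illustrative invocations follow; the socket path and the connection
address are assumptions:
Example :
$ echo "show quic" | socat stdio /tmp/sock1           # one line per active frontend connection
$ echo "show quic full all" | socat stdio /tmp/sock1  # all details, all connection categories
$ echo "show quic cc,mux 0x55e4b33a0010" | socat stdio /tmp/sock1   # selected fields, one connection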
show servers conn [<backend>]
Dump the current and idle connections state of the servers belonging to the
designated backend (or all backends if none specified). A backend name or
identifier may be used.
The output consists of a header line showing the field titles, then one
server per line showing, for each, the backend name and ID, server name and
ID, the address, port and a series of values. The number of fields varies
depending on thread count. The exact format of the output may vary slightly
across versions and depending on the number of threads. One needs to pay
attention to the header line to match columns when extracting output values,
and to the number of threads as the last columns are per-thread:
bkname/svname Backend name '/' server name
bkid/svid Backend ID '/' server ID
addr Server's IP address
port Server's port (or zero if none)
- Unused field, serves as a visual delimiter
purge_delay Interval between connection purges, in milliseconds
served Number of connections currently in use
used_cur Number of connections currently in use
note that this excludes conns attached to a session
used_max Highest value of used_cur since the process started
need_est Floating estimate of total needed connections
idle_sess Number of idle connections flagged as private
unsafe_nb Number of idle connections considered as "unsafe"
safe_nb Number of idle connections considered as "safe"
idle_lim Configured maximum number of idle connections
idle_cur Total of the per-thread currently idle connections
idle_per_thr[NB] Idle conns per thread for each one of the NB threads
HAProxy will kill a portion of <idle_cur> every <purge_delay> when the total
of <idle_cur> + <used_cur> exceeds the estimate <need_est>. This estimate
varies based on connection activity.
Given the threaded nature of idle connections, it's important to understand
that some values may change once read, and that as such, consistency within a
line isn't granted. This output is mostly provided as a debugging tool and is
not relevant to be routinely monitored nor graphed.
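A minimal illustrative invocation follows; the backend name "www" and the
socket path are assumptions. The actual columns should always be taken from
the header line as explained above:
Example :
$ echo "show servers conn www" | socat stdio /tmp/sock1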
show servers state [<backend>]
Dump the state of the servers found in the running configuration. A backend
name or identifier may be provided to limit the output to this backend only.
The dump has the following format:
- first line contains the format version (1 in this specification);
- second line contains the column headers, prefixed by a sharp ('#');
- third line and next ones contain data;
- each line starting by a sharp ('#') is considered as a comment.
Since multiple versions of the output may co-exist, below is the list of
fields and their order per file format version :
1:
be_id: Backend unique id.
be_name: Backend label.
srv_id: Server unique id (in the backend).
srv_name: Server label.
srv_addr: Server IP address.
srv_op_state: Server operational state (UP/DOWN/...).
0 = SRV_ST_STOPPED
The server is down.
1 = SRV_ST_STARTING
The server is warming up (up but
throttled).
2 = SRV_ST_RUNNING
The server is fully up.
3 = SRV_ST_STOPPING
The server is up but soft-stopping
(eg: 404).
srv_admin_state: Server administrative state (MAINT/DRAIN/...).
The state is actually a mask of values :
0x01 = SRV_ADMF_FMAINT
The server was explicitly forced into
maintenance.
0x02 = SRV_ADMF_IMAINT
The server has inherited the maintenance
status from a tracked server.
0x04 = SRV_ADMF_CMAINT
The server is in maintenance because of
the configuration.
0x08 = SRV_ADMF_FDRAIN
The server was explicitly forced into
drain state.
0x10 = SRV_ADMF_IDRAIN
The server has inherited the drain status
from a tracked server.
0x20 = SRV_ADMF_RMAINT
The server is in maintenance because of an
IP address resolution failure.
0x40 = SRV_ADMF_HMAINT
The server FQDN was set from stats socket.
srv_uweight: User visible server's weight.
srv_iweight: Server's initial weight.
srv_time_since_last_change: Time since last operational change.
srv_check_status: Last health check status.
srv_check_result: Last check result (FAILED/PASSED/...).
0 = CHK_RES_UNKNOWN
Initialized to this by default.
1 = CHK_RES_NEUTRAL
Valid check but no status information.
2 = CHK_RES_FAILED
Check failed.
3 = CHK_RES_PASSED
Check succeeded and server is fully up
again.
4 = CHK_RES_CONDPASS
Check reports the server doesn't want new
sessions.
srv_check_health: Checks rise / fall current counter.
srv_check_state: State of the check (ENABLED/PAUSED/...).
The state is actually a mask of values :
0x01 = CHK_ST_INPROGRESS
A check is currently running.
0x02 = CHK_ST_CONFIGURED
This check is configured and may be
enabled.
0x04 = CHK_ST_ENABLED
This check is currently administratively
enabled.
0x08 = CHK_ST_PAUSED
Checks are paused because of maintenance
(health only).
srv_agent_state: State of the agent check (ENABLED/PAUSED/...).
This state uses the same mask values as
"srv_check_state", adding this specific one :
0x10 = CHK_ST_AGENT
Check is an agent check (otherwise it's a
health check).
bk_f_forced_id: Flag to know if the backend ID is forced by
configuration.
srv_f_forced_id: Flag to know if the server's ID is forced by
configuration.
srv_fqdn: Server FQDN.
srv_port: Server port.
srvrecord: DNS SRV record associated to this SRV.
srv_use_ssl: use ssl for server connections.
srv_check_port: Server health check port.
srv_check_addr: Server health check address.
srv_agent_addr: Server health agent address.
srv_agent_port: Server health agent port.
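A minimal illustrative example follows; the backend and server names are
assumptions and both the header and the data line are truncated:
Example :
$ echo "show servers state www" | socat stdio /tmp/sock1
1
# be_id be_name srv_id srv_name srv_addr srv_op_state srv_admin_state (...)
3 www 1 srv1 192.0.2.10 2 0 1 1 (...)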
show sess [<options>*]
Dump all known active streams (formerly called "sessions"). Avoid doing this
on slow connections as this can be huge. This command is restricted and can
only be issued on sockets configured for levels "operator" or "admin". Note
that on machines with quickly recycled connections, it is possible that this
output reports fewer entries than really exist because it will dump all
existing streams up to the last one that was created before the command was
entered; those which die in the mean time will not appear.
For supported options, see below.
show sess [<id> | all | help] [<options>*]
Display a lot of internal information about the matching streams. The command
knows two output formats: a short one, which is the default when not asking
for a specific stream identifier, and an extended one when listing designated
streams. The short format, used by default with "show sess", only dumps one
stream per line with a few info, and the stream identifier at the beginning
of the line in hexadecimal (it corresponds to the pointer to the stream).
In the extended form, used by "show sess <id>" or "show sess all", streams
are dumped with a huge amount of debugging details over multiple lines
(around 20 each), and still start with their identifier. The delimiter
between streams here is the identifier at the beginning of the line; extra
lines belonging to the same stream start with one or multiple spaces (the
stream is dumped indented). Dumping many streams can produce a huge output,
take a lot of time and be CPU intensive, so it's always better to only dump
the minimum needed. This information is useless to most users but may be
used by HAProxy developers to troubleshoot a complex bug. The exact output
format is intentionally not documented so that it can freely evolve depending
on requirements, including in stable branches. This output is meant to be
interpreted while checking function strm_dump_to_buffer() in src/stream.c to
figure the exact meaning of certain fields.
The "help" argument will show the detailed usage of the command instead of
dumping streams.
It is possible to set some options to customize the dump or apply some
filters. Here are the supported options:
- backend <b> only display streams attached to this backend
- frontend <f> only display streams attached to this frontend
- older <age> only display streams older than <age> seconds
- server <b/s> only show streams attached to this backend+server
- show-uri dump the transaction URI, as captured during the request
analysis. It is only displayed if it was captured.
- susp only show streams considered as suspicious by the developers
based on criteria that may evolve in time or vary along versions.
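A couple of illustrative invocations follow; the backend and server names and
the socket path are assumptions:
Example :
$ echo "show sess older 30 backend www" | socat stdio /tmp/sock1   # short format, filtered
$ echo "show sess all server www/srv1" | socat stdio /tmp/sock1    # extended format, filtered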
show stat [domain <resolvers|proxy>] [{<iid>|<proxy>} <type> <sid>] \
[typed|json] [desc] [up|no-maint]
Dump statistics. The domain is used to select which statistics to print;
resolvers and proxy are available for now. By default, the CSV format is used;
you can activate the extended typed output format described in the section
above if "typed" is passed after the other arguments; or in JSON if "json" is
passed after the other arguments. By passing <iid>, <type> and <sid>, it is
possible to dump only selected items :
- <iid> is a proxy ID, -1 to dump everything. Alternatively, a proxy name
<proxy> may be specified. In this case, this proxy's ID will be used as
the ID selector.
- <type> selects the type of dumpable objects : 1 for frontends, 2 for
backends, 4 for servers, -1 for everything. These values can be ORed,
for example:
1 + 2 = 3 -> frontend + backend.
1 + 2 + 4 = 7 -> frontend + backend + server.
- <sid> is a server ID, -1 to dump everything from the selected proxy.
Example :
$ echo "show info;show stat" | socat stdio unix-connect:/tmp/sock1
>>> Name: HAProxy
Version: 1.4-dev2-49
Release_date: 2009/09/23
Nbproc: 1
Process_num: 1
(...)
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq, (...)
stats,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0, (...)
stats,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,250,(...)
(...)
www1,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,250, (...)
$
In this example, two commands have been issued at once. That way it's easy to
find which process the stats apply to in multi-process mode. This is not
needed in the typed output format as the process number is reported on each
line. Notice the empty line after the information output which marks the end
of the first block. A similar empty line appears at the end of the second
block (stats) so that the reader knows the output has not been truncated.
When "typed" is specified, the output format is more suitable to monitoring
tools because it provides numeric positions and indicates the type of each
output field. Each value stands on its own line with process number, element
number, nature, origin and scope. This same format is available via the HTTP
stats by passing ";typed" after the URI. It is very important to note that in
typed output format, the dump for a single object is contiguous so that there
is no need for a consumer to store everything at once.
The "up" modifier will result in listing only servers which reportedly up or
not checked. Those down, unresolved, or in maintenance will not be listed.
This is analogous to the ";up" option on the HTTP stats. Similarly, the
"no-maint" modifier will act like the ";no-maint" HTTP modifier and will
result in disabled servers not to be listed. The difference is that those
which are enabled but down will not be evicted.
When using the typed output format, each line is made of 4 columns delimited
by colons (':'). The first column is a dot-delimited series of 5 elements. The
first element is a letter indicating the type of the object being described.
At the moment the following object types are known : 'F' for a frontend, 'B'
for a backend, 'L' for a listener, and 'S' for a server. The second element
is a positive integer representing the unique identifier of
the proxy the object belongs to. It is equivalent to the "iid" column of the
CSV output and matches the value in front of the optional "id" directive found
in the frontend or backend section. The third element is a positive integer
containing the unique object identifier inside the proxy, and corresponds to
the "sid" column of the CSV output. ID 0 is reported when dumping a frontend
or a backend. For a listener or a server, this corresponds to their respective
ID inside the proxy. The fourth element is the numeric position of the field
in the list (starting at zero). This position shall not change over time, but
holes are to be expected, depending on build options or if some fields are
deleted in the future. The fifth element is the field name as it appears in
the CSV output. The sixth element is a positive integer and is the relative
process number starting at 1.
The rest of the line starting after the first colon follows the "typed output
format" described in the section above. In short, the second column (after the
first ':') indicates the origin, nature, scope and persistence state of the
variable. The third column indicates the field type, among "s32", "s64",
"u32", "u64", "flt' and "str". Then the fourth column is the value itself,
which the consumer knows how to parse thanks to column 3 and how to process
thanks to column 2.
When "desc" is appended to the command, one extra colon followed by a quoted
string is appended with a description for the metric. At the time of writing,
this is only supported for the "typed" output format.
Thus the overall line format in typed mode is :
<obj>.<px_id>.<id>.<fpos>.<fname>.<process_num>:<tags>:<type>:<value>
Here's an example of typed output format :
$ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1
F.2.0.0.pxname.1:KNSV:str:dummy
F.2.0.1.svname.1:KNSV:str:FRONTEND
F.2.0.4.scur.1:MGPV:u32:0
F.2.0.5.smax.1:MMPV:u32:0
F.2.0.6.slim.1:CLPV:u32:524269
F.2.0.7.stot.1:MCPP:u64:0
F.2.0.8.bin.1:MCPP:u64:0
F.2.0.9.bout.1:MCPP:u64:0
F.2.0.10.dreq.1:MCPP:u64:0
F.2.0.11.dresp.1:MCPP:u64:0
F.2.0.12.ereq.1:MCPP:u64:0
F.2.0.17.status.1:SGPV:str:OPEN
F.2.0.26.pid.1:KGPV:u32:1
F.2.0.27.iid.1:KGSV:u32:2
F.2.0.28.sid.1:KGSV:u32:0
F.2.0.32.type.1:CGSV:u32:0
F.2.0.33.rate.1:MRPP:u32:0
F.2.0.34.rate_lim.1:CLPV:u32:0
F.2.0.35.rate_max.1:MMPV:u32:0
F.2.0.46.req_rate.1:MRPP:u32:0
F.2.0.47.req_rate_max.1:MMPV:u32:0
F.2.0.48.req_tot.1:MCPP:u64:0
F.2.0.51.comp_in.1:MCPP:u64:0
F.2.0.52.comp_out.1:MCPP:u64:0
F.2.0.53.comp_byp.1:MCPP:u64:0
F.2.0.54.comp_rsp.1:MCPP:u64:0
(...)
In the typed format, the presence of the process ID at the end of the
first column makes it very easy to visually aggregate outputs from
multiple processes, as show in the example below where each line appears
for each process :
$ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n
B.3.0.0.pxname.1:KNSV:str:private-backend
B.3.0.0.pxname.2:KNSV:str:private-backend
B.3.0.1.svname.1:KNSV:str:BACKEND
B.3.0.1.svname.2:KNSV:str:BACKEND
B.3.0.2.qcur.1:MGPV:u32:0
B.3.0.2.qcur.2:MGPV:u32:0
B.3.0.3.qmax.1:MMPV:u32:0
B.3.0.3.qmax.2:MMPV:u32:0
B.3.0.4.scur.1:MGPV:u32:0
B.3.0.4.scur.2:MGPV:u32:0
B.3.0.5.smax.1:MMPV:u32:0
B.3.0.5.smax.2:MMPV:u32:0
B.3.0.6.slim.1:CLPV:u32:1000
B.3.0.6.slim.2:CLPV:u32:1000
(...)
The format of JSON output is described in a schema which may be output
using "show schema json".
The JSON output contains no extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful. Example :
$ echo "show stat json" | socat /var/run/haproxy.sock stdio | \
python -m json.tool
show ssl ca-file [[*][\]<cafile>[:<index>]]
Display the list of CA files loaded into the process and their respective
certificate counts. The certificates are not used by any frontend or backend
until their status is "Used".
A "@system-ca" entry can appear in the list, it is loaded by the httpclient
by default. It contains the list of trusted CA of your system returned by
OpenSSL.
If a filename is prefixed by an asterisk, it is a transaction which
is not committed yet. If a <cafile> is specified without <index>, it will show
the status of the CA file ("Used"/"Unused") followed by details about all the
certificates contained in the CA file. The details displayed for every
certificate are the same as the ones displayed by a "show ssl cert" command.
If a <cafile> is specified followed by an <index>, it will only display the
details of the certificate having the specified index. Indexes start from 1.
If the index is invalid (too big for instance), nothing will be displayed.
This command can be useful to check if a CA file was properly updated.
You can also display the details of an ongoing transaction by prefixing the
filename by a '*'. If the first character of the filename is a '*', it can be
escaped with '\*'.
Example :
$ echo "show ssl ca-file" | socat /var/run/haproxy.master -
# transaction
*cafile.crt - 2 certificate(s)
# filename
cafile.crt - 1 certificate(s)
$ echo "show ssl ca-file cafile.crt" | socat /var/run/haproxy.master -
Filename: /home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt
Status: Used
Certificate #1:
Serial: 11A4D2200DC84376E7D233CAFF39DF44BF8D1211
notBefore: Apr 1 07:40:53 2021 GMT
notAfter: Aug 17 07:40:53 2048 GMT
Subject Alternative Name:
Algorithm: RSA4096
SHA1 FingerPrint: A111EF0FEFCDE11D47FE3F33ADCA8435EBEA4864
Subject: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA
Issuer: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA
$ echo "show ssl ca-file *cafile.crt:2" | socat /var/run/haproxy.master -
Filename: */home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt
Status: Unused
Certificate #2:
Serial: 587A1CE5ED855040A0C82BF255FF300ADB7C8136
[...]
show ssl cert [[*][\]<filename>]
Display the list of certificates loaded into the process. They are not used
by any frontend or backend until their status is "Used".
If a filename is prefixed by an asterisk, it is a transaction which is not
committed yet. If a filename is specified, it will show details about the
certificate. This command can be useful to check if a certificate was well
updated. You can also display details on a transaction by prefixing the
filename by a '*'. If the first character of the filename is a '*', it can be
escaped with '\*'.
This command can also be used to display the details of a certificate's OCSP
response by suffixing the filename with a ".ocsp" extension. It works for
committed certificates as well as for ongoing transactions. On a committed
certificate, this command is equivalent to calling "show ssl ocsp-response"
with the certificate's corresponding OCSP response ID.
Example :
$ echo "@1 show ssl cert" | socat /var/run/haproxy.master -
# transaction
*test.local.pem
# filename
test.local.pem
$ echo "@1 show ssl cert test.local.pem" | socat /var/run/haproxy.master -
Filename: test.local.pem
Status: Used
Serial: 03ECC19BA54B25E85ABA46EE561B9A10D26F
notBefore: Sep 13 21:20:24 2019 GMT
notAfter: Dec 12 21:20:24 2019 GMT
Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Subject: /CN=test.local
Subject Alternative Name: DNS:test.local, DNS:imap.test.local
Algorithm: RSA2048
SHA1 FingerPrint: 417A11CAE25F607B24F638B4A8AEE51D1E211477
$ echo "@1 show ssl cert *test.local.pem" | socat /var/run/haproxy.master -
Filename: *test.local.pem
Status: Unused
[...]
$ echo "@1 show ssl cert \*.local.pem" | socat /var/run/haproxy.master -
Filename: *.local.pem
Status: Used
[...]
show ssl crl-file [[*][\]<crlfile>[:<index>]]
Display the list of CRL files loaded into the process. They are not used
by any frontend or backend until their status is "Used".
If a filename is prefixed by an asterisk, it is a transaction which is not
committed yet. If a <crlfile> is specified without <index>, it will show the
status of the CRL file ("Used"/"Unused") followed by details about all the
Revocation Lists contained in the CRL file. The details displayed for every
list are based on the output of "openssl crl -text -noout -in <file>".
If a <crlfile> is specified followed by an <index>, it will only display the
details of the list having the specified index. Indexes start from 1.
If the index is invalid (too big for instance), nothing will be displayed.
This command can be useful to check if a CRL file was properly updated.
You can also display the details of an ongoing transaction by prefixing the
filename by a '*'. If the first character of the filename is a '*', it can be
escaped with '\*'.
Example :
$ echo "show ssl crl-file" | socat /var/run/haproxy.master -
# transaction
*crlfile.pem
# filename
crlfile.pem
$ echo "show ssl crl-file crlfile.pem" | socat /var/run/haproxy.master -
Filename: /home/tricot/work/haproxy/reg-tests/ssl/crlfile.pem
Status: Used
Certificate Revocation List #1:
Version 1
Signature Algorithm: sha256WithRSAEncryption
Issuer: /C=FR/O=HAProxy Technologies/CN=Intermediate CA2
Last Update: Apr 23 14:45:39 2021 GMT
Next Update: Sep 8 14:45:39 2048 GMT
Revoked Certificates:
Serial Number: 1008
Revocation Date: Apr 23 14:45:36 2021 GMT
Certificate Revocation List #2:
Version 1
Signature Algorithm: sha256WithRSAEncryption
Issuer: /C=FR/O=HAProxy Technologies/CN=Root CA
Last Update: Apr 23 14:30:44 2021 GMT
Next Update: Sep 8 14:30:44 2048 GMT
No Revoked Certificates.
show ssl crt-list [-n] [<filename>]
Display the list of crt-list and directories used in the HAProxy
configuration. If a filename is specified, dump the content of a crt-list or
a directory. Once dumped the output can be used as a crt-list file.
The '-n' option can be used to display the line number, which is useful when
combined with the 'del ssl crt-list' option when an entry is duplicated. The
output with the '-n' option is not compatible with the crt-list format and
not loadable by haproxy.
Example:
echo "show ssl crt-list -n localhost.crt-list" | socat /tmp/sock1 -
# localhost.crt-list
common.pem:1 !not.test1.com *.test1.com !localhost
common.pem:2
ecdsa.pem:3 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3] localhost !www.test1.com
ecdsa.pem:4 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3]
show ssl ech [<name>]
Display the list of ECH keys loaded in the HAProxy process.
When <name> is specified, displays the keys for a specific bind line. The
bind line format is <frontend>/@<filename>:<linenum> (Example:
frontend1/@haproxy.conf:19) or <frontend>/<name> if the bind line was named
with the "name" keyword.
The 'age' entry represents the time, in seconds, since the key was loaded in
the bind line. This value is reset when HAProxy is started, reloaded, or
restarted.
Necessitates an OpenSSL version that supports ECH, and HAProxy must be
compiled with USE_ECH=1.
This command is only supported on a CLI connection running in experimental
mode (see "experimental-mode on").
See also "ech" in the Section 5.1 of the configuration manual.
Example:
$ echo "experimental-mode on; show ssl ech" | socat /tmp/haproxy.sock -
***
frontend: frontend1
bind: frontend1/@haproxy.conf:19
ECH entry: 0 public_name: example.com age: 557 (has private key)
[fe0d,94,example.com,[0020,0001,0001],c39285b774bf61c071864181c5292a012b30adaf767e39369a566af05573ef2b,00,00]
ECH entry: 1 public_name: example.com age: 557 (has private key)
[fe0d,ee,example.com,[0020,0001,0001],6572191131b5cabba819f8cacf2d2e06fa0b87b30d9b793644daba7b8866d511,00,00]
bind: frontend1/@haproxy.conf:20
ECH entry: 0 public_name: example.com age: 557 (has private key)
[fe0d,94,example.com,[0020,0001,0001],c39285b774bf61c071864181c5292a012b30adaf767e39369a566af05573ef2b,00,00]
ECH entry: 1 public_name: example.com age: 557 (has private key)
[fe0d,ee,example.com,[0020,0001,0001],6572191131b5cabba819f8cacf2d2e06fa0b87b30d9b793644daba7b8866d511,00,00]
$ echo "experimental-mode on; show ssl ech frontend1/@haproxy.conf:19" | socat /tmp/haproxy.sock -
***
ECH for frontend1/@haproxy.conf:19
ECH entry: 0 public_name: example.com age: 786 (has private key)
[fe0d,94,example.com,[0020,0001,0001],c39285b774bf61c071864181c5292a012b30adaf767e39369a566af05573ef2b,00,00]
ECH entry: 1 public_name: example.com age: 786 (has private key)
[fe0d,ee,example.com,[0020,0001,0001],6572191131b5cabba819f8cacf2d2e06fa0b87b30d9b793644daba7b8866d511,00,00]
show ssl jwt
Display the list of certificates that can be used for JWT validation.
See also "add ssl jwt" and "del ssl jwt" commands.
See "jwt" certificate option for more information.
Example:
echo "show ssl jwt" | socat /tmp/sock1 -
#filename
jwt.pem
show ssl ocsp-response [[text|base64] <id|path>]
Display the IDs of the OCSP tree entries corresponding to all the OCSP
responses used in HAProxy, as well as the corresponding frontend
certificate's path, the issuer's name and key hash and the serial number of
the certificate for which the OCSP response was built.
If a valid <id> or the <path> of a valid frontend certificate is provided,
display the contents of the corresponding OCSP response. When an <id> is
provided, it is possible to define the format in which the data is dumped.
The 'text' option is the default one and it allows to display detailed
information about the OCSP response the same way as in an "openssl ocsp
-respin <ocsp-response> -text" call. The 'base64' format allows to dump the
contents of an OCSP response in base64.
Example :
$ echo "show ssl ocsp-response" | socat /var/run/haproxy.master -
# Certificate IDs
Certificate ID key : 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a
Certificate path : /path_to_cert/foo.pem
Certificate ID:
Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A
Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A
Serial Number: 100A
$ echo "show ssl ocsp-response 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a" | socat /var/run/haproxy.master -
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
Version: 1 (0x0)
Responder Id: C = FR, O = HAProxy Technologies, CN = ocsp.haproxy.com
Produced At: May 27 15:43:38 2021 GMT
Responses:
Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A
Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A
Serial Number: 100A
Cert Status: good
This Update: May 27 15:43:38 2021 GMT
Next Update: Oct 12 15:43:38 2048 GMT
[...]
$ echo "show ssl ocsp-response base64 /path_to_cert/foo.pem" | socat /var/run/haproxy.sock -
MIIB8woBAKCCAewwggHoBgkrBgEFBQcwAQEEggHZMIIB1TCBvqE[...]
show ssl ocsp-updates
Display information about the entries concerned by the OCSP update mechanism.
The command will output one line per OCSP response and will contain the
expected update time of the response as well as the time of the last
successful update and counters of successful and failed updates. It will also
give the status of the last update (successful or not) in numerical form as
well as text form. See below for a full list of possible errors. The lines
will be sorted by ascending 'Next Update' time. The lines will also contain a
path to the first frontend certificate that uses the OCSP response.
See "show ssl ocsp-response" command and "ocsp-update" option for more
information on the OCSP auto update.
The update error codes and error strings can be the following:
+----+-------------------------------------+
| ID | message |
+----+-------------------------------------+
| 0 | "Unknown" |
| 1 | "Update successful" |
| 2 | "HTTP error" |
| 3 | "Missing \"ocsp-response\" header" |
| 4 | "OCSP response check failure" |
| 5 | "Error during insertion" |
+----+-------------------------------------+
Example :
$ echo "show ssl ocsp-updates" | socat /tmp/haproxy.sock -
OCSP Certid | Path | Next Update | Last Update | Successes | Failures | Last Update Status | Last Update Status (str)
303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a02021015 | /path_to_cert/cert.pem | 30/Jan/2023:00:08:09 +0000 | - | 0 | 1 | 2 | HTTP error
304b300906052b0e03021a0500041448dac9a0fb2bd32d4ff0de68d2f567b735f9b3c40414142eb317b75856cbae500940e61faf9d8b14c2c6021203e16a7aa01542f291237b454a627fdea9c1 | /path_to_cert/other_cert.pem | 30/Jan/2023:01:07:09 +0000 | 30/Jan/2023:00:07:09 +0000 | 1 | 0 | 1 | Update successful
show ssl providers
Display the names of the providers loaded by OpenSSL during init. Provider
loading can indeed be configured via the OpenSSL configuration file and this
option allows to check that the right providers were loaded. This command is
only available with OpenSSL v3.
Example :
$ echo "show ssl providers" | socat /var/run/haproxy.master -
Loaded providers :
- fips
- base
show ssl sni [-f <frontend>] [-A] [-t <offset>]
Dump every SNI configured for the designated frontend, or all frontends if no
frontend was specified. It allows to see what SNI are offered for a frontend,
and to identify if a SNI is defined multiple times by multiple certificates for
the same frontend.
The -A option allows to filter the list and only display the certificates
that are past the notAfter date, allowing to show only expired certificates.
The -t option takes an offset in seconds, or with a time unit (s, m, h, d),
which is added to the current time, allowing to check which certificates
expired after the offset when combined with -A.
For example if you want to check which certificates would be expired in 30d,
just do "show ssl sni -A -t 30d".
Columns are separated by a single \t, allowing to parse it simply.
The 'Frontend/Bind' column shows the frontend name followed by the bind line
position in the configuration (frontend/file:linenum).
The 'SNI' column shows the SNI, it can be either a CN, a SAN or a filter from a crt-list.
The default certificates of a bind line (which are either declared explicitly
by 'default-crt', or are implicitly the first certificate of a bind line when
no 'strict-sni' is used) show the '*' character in the SNI column.
The 'Negative Filter' column is the list of negative filters associated with a
wildcard; it shows all negative filters that are on the same crt-list line. A
dash character is displayed if there are none.
The 'Type' column shows the encryption algorithm type, it can be "rsa", "ecdsa" or "dsa".
The 'Filename' column can be either a filename from the configuration, or an
alias declared in a crt-store.
The 'NotAfter' and 'NotBefore' columns are directly extracted from the X509
leaf certificate.
Example:
$ echo "@1 show ssl sni -A -t 30d" | socat /var/run/haproxy-master.sock - | column -t -s $'\t'
# Frontend/Bind SNI Negative Filter Type Filename NotAfter NotBefore
li1/haproxy.cfg:10021 *.ex.lan !m1.ex.lan rsa example.lan.pem Jun 13 13:37:21 2024 GMT May 14 13:37:21 2024 GMT
li1/haproxy.cfg:10021 machine10 - ecdsa machine10.pem.ecdsa Jun 13 13:37:21 2024 GMT May 14 13:37:21 2024 GMT
li1/haproxy.cfg:10021 machine10 - rsa machine10.pem.rsa Jun 13 13:37:21 2024 GMT May 14 13:37:21 2024 GMT
li1/haproxy.cfg:10021 machine10 - ecdsa machine10.pem.ecdsa Jun 13 13:37:21 2024 GMT May 14 13:37:21 2024 GMT
li1/haproxy.cfg:10021 localhost - rsa localhost.pem.rsa Jun 13 13:37:11 2024 GMT May 14 13:37:11 2024 GMT
li1/haproxy.cfg:10021 localhost - ecdsa localhost.pem.ecdsa Jun 13 13:37:10 2024 GMT May 14 13:37:10 2024 GMT
li1/haproxy.cfg:10021 * - rsa localhost.pem.rsa Jun 13 13:37:11 2024 GMT May 14 13:37:11 2024 GMT
show startup-logs
Dump all messages emitted during the startup of the current haproxy process;
each startup-logs buffer is unique to its haproxy worker.
This keyword also exists on the master CLI, which shows the latest startup or
reload attempt.
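Example (the socket path below is only an illustration) :
$ echo "show startup-logs" | socat /var/run/haproxy.sock -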
show table
Dump general information on all known stick-tables. Their name is returned
(the name of the proxy which holds them), their type (currently zero, always
IP), their size in maximum possible number of entries, and the number of
entries currently in use.
Example :
$ echo "show table" | socat stdio /tmp/sock1
>>> # table: front_pub, type: ip, size:204800, used:171454
>>> # table: back_rdp, type: ip, size:204800, used:0
show table <name> [ data.<type> <operator> <value> [data.<type> ...]] |
[ key <key> ] | [ ptr <ptr> ]
Dump contents of stick-table <name>. In this mode, a first line of generic
information about the table is reported as with "show table", then all
entries are dumped. Since this can be quite heavy, it is possible to specify
a filter in order to restrict which entries to display.
When the "data." form is used the filter applies to the stored data (see
"stick-table" in section 4.2). A stored data type must be specified
in <type>, and this data type must be stored in the table otherwise an
error is reported. The data is compared according to <operator> with the
64-bit integer <value>. Operators are the same as with the ACLs :
- eq : match entries whose data is equal to this value
- ne : match entries whose data is not equal to this value
- le : match entries whose data is less than or equal to this value
- ge : match entries whose data is greater than or equal to this value
- lt : match entries whose data is less than this value
- gt : match entries whose data is greater than this value
In this form, you can use multiple data filter entries, up to a maximum
defined during build time (4 by default).
When the key form is used the entry <key> is shown. The key must be of the
same type as the table, which currently is limited to IPv4, IPv6, integer,
and string.
When the ptr form is used the entry <ptr> is shown. <ptr> is written in
the form 0xffff and must correspond to the address returned by a previous
"show table" command. Matching an entry using its pointer may be relevant if
the entry cannot be matched using the key, due to an empty key or
incompatible characters on the CLI.
If data.<type> is an array type, "[]" may be used to access a specific
index in the array, like so: data.gpt[1]
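For instance, assuming the table stores the gpt array, a filter on its second
element (index 1) could be written as follows:
$ echo "show table http_proxy data.gpt[1] gt 0" | socat stdio /tmp/sock1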
Example :
$ echo "show table http_proxy" | socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:2
>>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \
bytes_out_rate(60000)=187
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
$ echo "show table http_proxy data.gpc0 gt 0" | socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:2
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
$ echo "show table http_proxy data.conn_rate gt 5" | \
socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:2
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
$ echo "show table http_proxy key 127.0.0.2" | \
socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:2
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
$ echo "show table http_proxy ptr 0x80e6a80" | \
socat stdio /tmp/sock1
>>> # table: http_proxy, type: ip, size:204800, used:2
>>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
bytes_out_rate(60000)=191
When the data criterion applies to a dynamic value dependent on time such as
a bytes rate, the value is dynamically computed during the evaluation of the
entry in order to decide whether it has to be dumped or not. This means that
such a filter could match for some time then not match anymore because as
time goes, the average event rate drops.
It is possible to use this to extract lists of IP addresses abusing the
service, in order to monitor them or even blacklist them in a firewall.
Example :
$ echo "show table http_proxy data.gpc0 gt 0" \
| socat stdio /tmp/sock1 \
| fgrep 'key=' | cut -d' ' -f2 | cut -d= -f2 > abusers-ip.txt
( or | awk '/key/{ print a[split($2,a,"=")]; }' )
When the stick-table is synchronized to a peers section supporting sharding,
the shard number will be displayed for each key (otherwise '0' is reported).
This makes it possible to know which peers will receive this key.
Example:
$ echo "show table http_proxy" | socat stdio /tmp/sock1 | fgrep shard=
0x7f23b0c822a8: key=10.0.0.2 use=0 exp=296398 shard=9 gpc0=0
0x7f23a063f948: key=10.0.0.6 use=0 exp=296075 shard=12 gpc0=0
0x7f23b03920b8: key=10.0.0.8 use=0 exp=296766 shard=1 gpc0=0
0x7f23a43c09e8: key=10.0.0.12 use=0 exp=295368 shard=8 gpc0=0
show tasks
Dumps the number of tasks currently in the run queue, with the number of
occurrences for each function, and their average latency when it's known
(for pure tasks with task profiling enabled). The dump is a snapshot of the
instant it's done, and there may be variations depending on what tasks are
left in the queue at the moment it happens, especially in mono-thread mode
as there's less chance that I/Os can refill the queue (unless the queue is
full). This command takes exclusive access to the process and can cause
minor but measurable latencies when issued on a highly loaded process, so
it must not be abused by monitoring bots.
show threads
Dumps some internal states and structures for each thread, that may be useful
to help developers understand a problem. The output tries to be readable by
showing one block per thread. When haproxy is built with USE_THREAD_DUMP=1,
an advanced dump mechanism involving thread signals is used so that each
thread can dump its own state in turn. Without this option, the thread
processing the command shows all its details but the other ones are less
detailed. A star ('*') is displayed in front of the thread handling the
command. A right angle bracket ('>') may also be displayed in front of
threads which didn't make any progress since last invocation of this command,
indicating a bug in the code which must absolutely be reported. When this
happens between two threads it usually indicates a deadlock. If a thread is
alone, it's a different bug like a corrupted list. In all cases the process
is not fully functional anymore and needs to be restarted.
The output format is purposely not documented so that it can easily evolve as
new needs are identified, without having to maintain any form of backwards
compatibility, and just like with "show activity", the values are meaningless
without the code at hand.
show tls-keys [id|*]
Dump references to all loaded TLS ticket keys. The TLS ticket key reference
ID and the file from which the keys have been loaded are shown. Both of those
can be used to update the TLS keys using "set ssl tls-key". If an ID is
specified as parameter, it will dump the tickets; with '*' it will dump every
key from every reference.
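Example (the reference ID shown is only an illustration) :
$ echo "show tls-keys" | socat /var/run/haproxy.sock -
$ echo "show tls-keys 1" | socat /var/run/haproxy.sock -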
show schema json
Dump the schema used for the output of "show info json" and "show stat json".
The output contains no extra whitespace in order to reduce the volume of output.
For human consumption passing the output through a pretty printer may be
helpful. Example :
$ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
python -m json.tool
The schema follows "JSON Schema" (json-schema.org) and accordingly
verifiers may be used to verify the output of "show info json" and "show
stat json" against the schema.
show trace [<source>]
Show the current trace status. For each source a line is displayed with a
single-character status indicating if the trace is stopped, waiting, or
running. The output sink used by the trace is indicated (or "none" if none
was set), as well as the number of dropped events in this sink, followed by a
brief description of the source. If a source name is specified, a detailed
list of all events supported by the source is displayed, along with their
status for each action (report, start, pause, stop), indicated by a "+" if
they are enabled, or a "-" otherwise. All these events are independent and an
event might trigger a start without being reported, and conversely.
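Example (using the h2 source only as an illustration) :
$ echo "show trace" | socat /var/run/haproxy.sock -
$ echo "show trace h2" | socat /var/run/haproxy.sock -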
show version
Show the version of the current HAProxy process. This is available from
master and workers CLI.
Example:
$ echo "show version" | socat /var/run/haproxy.sock stdio
2.4.9
$ echo "show version" | socat /var/run/haproxy-master.sock stdio
2.5.0
shutdown frontend <frontend>
Completely delete the specified frontend. All the ports it was bound to will
be released. It will not be possible to enable the frontend anymore after
this operation. This is intended to be used in environments where stopping a
proxy is not even imaginable but a misconfigured proxy must be fixed. That
way it's possible to release the port and bind it into another process to
restore operations. The frontend will not appear at all on the stats page
once it is terminated.
The frontend may be specified either by its name or by its numeric ID,
prefixed with a sharp ('#').
This command is restricted and can only be issued on sockets configured for
level "admin".
shutdown session <id>
Immediately terminate the stream matching the specified stream identifier.
This identifier is the first field at the beginning of the lines in the dumps
of "show sess" (it corresponds to the stream pointer). This can be used to
terminate a long-running stream without waiting for a timeout or when an
endless transfer is ongoing. Such terminated streams are reported with a 'K'
flag in the logs.
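A typical sequence is to locate the stream with "show sess" first, then pass
the pointer found in the first field to this command (the pointer value below
is only an illustration) :
$ echo "show sess" | socat stdio /var/run/haproxy.sock
$ echo "shutdown session 0x7f8a2c02d2a0" | socat stdio /var/run/haproxy.sock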
shutdown sessions server <backend>/<server>
Immediately terminate all the streams attached to the specified server. This
can be used to terminate long-running streams after a server is put into
maintenance mode, for instance. Such terminated streams are reported with a
'K' flag in the logs.
Backend connections are left in idle state, unless the server is already in
maintenance mode, in which case they will be immediately scheduled for
deletion.
trace
The "trace" command alone lists the trace sources, their current status, and
their brief descriptions. It is only meant as a menu to enter next levels,
see other "trace" commands below.
trace 0
Immediately stops all traces. This is made to be used as a quick solution
to terminate a debugging session or as an emergency action to be used in case
complex traces were enabled on multiple sources and impact the service.
trace <source> [<args...>]
Configure traces for the source <source>. Without argument, this will list
all supported sub-commands for the given source. Multiple sub-commands can be
chained. The following sub-commands are supported:
event [ [+|-|!]<name> ]
Without argument, this will list all the events supported by the designated
source. They are prefixed with a "-" if they are not enabled, or a "+" if
they are enabled. It is important to note that a single trace may be
labelled with multiple events, and as long as any of the enabled events
matches one of the events labelled on the trace, the event will be passed to
the trace subsystem. For example, receiving an HTTP/2 frame of type HEADERS
may trigger a frame event and a stream event since the frame creates a new
stream. If either the frame event or the stream event are enabled for this
source, the frame will be passed to the trace framework.
With an argument, it is possible to toggle the state of each event and
individually enable or disable them. Two special keywords are supported,
"none", which matches no event, and is used to disable all events at once,
and "any" which matches all events, and is used to enable all events at
once. Other events are specific to the event source. It is possible to
enable one event by specifying its name, optionally prefixed with '+' for
better readability. It is possible to disable one event by specifying its
name prefixed by a '-' or a '!'.
One way to completely disable a trace source is to pass "event none", and
this source will instantly be totally ignored.
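For example, all events of a source may be enabled at once and later disabled
again, using the two special keywords mentioned above (the h1 source is used
here only as an illustration):
trace h1 event any
trace h1 event none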
follow <other_source>
This permits the source <source> to also emit traces when the other source
<other_source> is locked on a criterion and the same criterion matches for
the current source as well. For example, if a source is locked on a session,
following that source from another one will make that other one emit traces
for all events related to this session. This may be used to some extent to
track backend requests along with the associated frontend connections. The
"session" source makes this easier by providing "new" and "end" events
that are usable for lock-on processing. Note that the source <source> does
not need to have its traces enabled in this case, and its tracing state will
not be affected either. It is, however, possible that some events may be
missing if they do not contain information that allows correlating them with
the tracked element. The meta-source "all" may also be used with this
command: in this case, all sources will follow <other_source>.
Example:
trace h1 lock session start sess_new pause sess_end follow session
level [<level>]
Without argument, this will list all trace levels for this source, and the
current one will be indicated by a star ('*') prepended in front of it. With
an argument, this will change the trace level to the specified level. Detail
levels are a form of filters that are applied before reporting the events.
These filters are used to selectively include or exclude events depending on
their level of importance. For example a developer might need to know
precisely where in the code an HTTP header was considered invalid while the
end user may not even care about this header's validity at all. There are
currently 5 distinct levels for a trace :
user this will report information that is suitable for use by a
regular haproxy user who wants to observe his traffic.
Typically some HTTP requests and responses will be reported
without much detail. Most sources will set this as the
default level to ease operations.
proto in addition to what is reported at the "user" level, it also
displays protocol-level updates. This can for example be the
frame types or HTTP headers after decoding.
state in addition to what is reported at the "proto" level, it
will also display state transitions (or failed transitions)
which happen in parsers, so this will show attempts to
perform an operation while the "proto" level only shows
the final operation.
data in addition to what is reported at the "state" level, it
will also include data transfers between the various layers.
developer it reports everything available, which can include advanced
information such as "breaking out of this loop" that are
only relevant to a developer trying to understand a bug that
only happens once in a while in the field. Function names are
only reported at this level.
It is highly recommended to always use the "user" level only and switch to
other levels only if instructed to do so by a developer. Also it is a good
idea to first configure the events before switching to higher levels, as it
may save from dumping many lines if no filter is applied. The meta-source
"all" may also be used with this command: in this case, the level will be
applied to all existing sources at once.
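For example, to explicitly keep the user-oriented level on the h1 source, or
to apply it to every source at once (the source name is only an illustration):
trace h1 level user
trace all level user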
lock [criterion]
Without argument, this will list all the criteria supported by this source
for lock-on processing, and display the current choice by a star ('*') in
front of it. Lock-on means that the source will focus on the first matching
event and only stick to the criterion which triggered this event, and ignore
all other ones until the trace stops. This allows for example to take a
trace on a single connection or on a single stream. The following criteria
are supported by some traces, though not necessarily all, since some of them
might not be available to the source :
backend lock on the backend that started the trace
connection lock on the connection that started the trace
frontend lock on the frontend that started the trace
listener lock on the listener that started the trace
nothing do not lock on anything
server lock on the server that started the trace
session lock on the session that started the trace
thread lock on the thread that started the trace
In addition to this, each source may provide up to 4 specific criteria such
as internal states or connection IDs. For example in HTTP/2 it is possible
to lock on the H2 stream and ignore other streams once a trace starts.
When a criterion is passed in argument, this one is used instead of the
other ones and any existing tracking is immediately terminated so that it
can restart with the new criterion. The special keyword "nothing" is
supported by all sources to permanently disable tracking.
{ pause | start | stop } [ [+|-|!]event]
Without argument, this will list the events enabled to automatically pause,
start, or stop a trace for this source. These events are specific to each
trace source. With an argument, this will either enable the event for the
specified action (if optionally prefixed by a '+') or disable it (if
prefixed by a '-' or '!'). The special keyword "now" is not an event and
requests to take the action immediately. The keywords "none" and "any" are
supported just like in "trace event".
The 3 supported actions are respectively "pause", "start" and "stop". The
"pause" action enumerates events which will cause a running trace to stop
and wait for a new start event to restart it. The "start" action enumerates
the events which switch the trace into the waiting mode until one of the
start events appears. And the "stop" action enumerates the events which
definitely stop the trace until it is manually enabled again. In practice it
makes sense to manually start a trace using "start now" without caring about
events, and to stop it using "stop now". In order to capture more subtle
event sequences, setting "start" to a normal event (like receiving an HTTP
request) and "stop" to a very rare event like emitting a certain error, will
ensure that the last captured events will match the desired criteria. And
the pause event is useful to detect the end of a sequence, disable the
lock-on and wait for another opportunity to take a capture. In this case it
can make sense to enable lock-on to spot only one specific criterion (e.g. a
stream), and have "start" set to anything that starts this criterion
(e.g. all events which create a stream), "stop" set to the expected anomaly,
and "pause" to anything that ends that criterion (e.g. any end of stream
event). In this case the trace log will contain complete sequences of
perfectly clean series affecting a single object, until the last sequence
containing everything from the beginning to the anomaly.
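For example, the simplest way to take a capture is to start and stop it
manually using the special "now" keyword (the h1 source is only an
illustration):
trace h1 start now
trace h1 stop now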
sink [<sink>]
Without argument, this will list all event sinks available for this source,
and the currently configured one will have a star ('*') prepended in front
of it. Sink "none" is always available and means that all events are simply
dropped, though their processing is not ignored (e.g. lock-on does occur).
Other sinks are available depending on configuration and build options, but
typically "stdout" and "stderr" will be usable in debug mode, and in-memory
ring buffers should be available as well. When a name is specified, the
sink instantly changes for the specified source. Events are not changed
during a sink change. In the worst case some may be lost if an invalid sink
is used (or "none"), but operations do continue to a different
destination. The meta-source "all" may also be used with this command: in
this case, the sink will be applied to all existing sources at once.
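For example, assuming a ring named "buf0" is declared in the configuration,
traces of a source may be directed to it with:
trace h1 sink buf0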
verbosity [<level>]
Without argument, this will list all verbosity levels for this source, and
the current one will be indicated by a star ('*') prepended in front of
it. With an argument, this will change the verbosity level to the specified
one.
Verbosity levels indicate how far the trace decoder should go to provide
detailed information. It depends on the trace source, since some sources
will not even provide a specific decoder. Level "quiet" is always available
and disables any decoding. It can be useful when trying to figure what's
happening before trying to understand the details, since it will have a very
low impact on performance and trace size. When no verbosity levels are
declared by a source, level "default" is available and will cause a decoder
to be called when specified in the traces. It is an opportunistic decoding.
When the source declares some verbosity levels, these ones are listed with a
description of what they correspond to. In this case the trace decoder
provided by the source will be as accurate as possible based on the
information available at the trace point. The first level above "quiet" is
set by default.
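For example, to disable decoding on the h1 source while keeping its events:
trace h1 verbosity quiet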
update ssl ocsp-response <certfile>
Create an OCSP request for the specified <certfile> and send it to the OCSP
responder whose URI should be specified in the "Authority Information Access"
section of the certificate. Only the first URI is taken into account. The
OCSP response that we should receive in return is then checked and inserted
in the local OCSP response tree. This command will only work for certificates
that already had a stored OCSP response, either because it was provided
during init or if it was previously set through the "set ssl cert" or "set
ssl ocsp-response" commands.
If the received OCSP response is valid and was properly inserted into the
local tree, its contents will be displayed on the standard output. The format
is the same as the one described in "show ssl ocsp-response".
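Example (reusing the certificate path from the "show ssl ocsp-updates"
example above) :
$ echo "update ssl ocsp-response /path_to_cert/cert.pem" | socat /var/run/haproxy.sock -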
wait { -h | <delay> } [<condition> [<args>...]]
In its simplest form without any condition, this simply waits for the
requested delay before continuing. This can be used to collect metrics around
a specific interval.
With a condition and optional arguments, the command will wait for the
specified condition to be satisfied, to unrecoverably fail, or to remain
unsatisfied for the whole <delay> duration. The supported conditions are:
- srv-removable <proxy>/<server> : this will wait for the specified server to
be removable by the "del server" command, i.e. be in maintenance and no
longer have any connection on it (neither active nor idle). Some conditions
will never be accepted (e.g. not in maintenance) and will cause the report
of a specific error message indicating what condition is not met. The
server might even have been removed in parallel and no longer exist. If
everything is OK before the delay, a success is returned and the operation
is terminated.
The default unit for the delay is milliseconds, though other units are
accepted if suffixed with the usual timer units (us, ms, s, m, h, d). When
used with the 'socat' utility, do not forget to extend socat's close timeout
to cover the wait time. Passing "-h" as the first or second argument provides
the command's usage.
Example:
$ socat -t20 /path/to/socket - <<< "show activity; wait 10s; show activity"
$ socat -t5 /path/to/socket - <<< "
disable server px/srv1
shutdown sessions server px/srv1
wait 2s srv-removable px/srv1
del server px/srv1"
9.4. Master CLI
---------------
The master CLI is a socket bound to the master process in master-worker mode.
This CLI gives access to the unix socket commands in every running or leaving
process and allows a basic supervision of those processes.
The master CLI is configurable only from the haproxy program arguments with
the -S option. This option also takes bind options separated by commas.
Example:
# haproxy -W -S 127.0.0.1:1234 -f test1.cfg
# haproxy -Ws -S /tmp/master-socket,uid,1000,gid,1000,mode,600 -f test1.cfg
# haproxy -W -S /tmp/master-socket,level,user -f test1.cfg
9.4.1. Master CLI commands
--------------------------
@<[!]pid>
The master CLI uses a special prefix notation to access the multiple
processes. This notation is easily identifiable as it begins with a @.
A @ prefix can be followed by a relative process number or by an exclamation
point and a PID (e.g. @1 or @!1271). A @ alone can be used to specify the
master. Leaving processes are only accessible by their PID, as relative
process numbers are only usable with the current processes.
This prefix may be used as a wrapper before a command, indicating that this
command and only this one will be sent to the designated process. In this
case the full command ends at the end of line or semi-colon like any regular
command.
Bugs: the sockpair@ protocol used to implement communication between the
master and the worker is known to not be reliable on macOS because of an
issue in the macOS sendmsg(2) implementation. A command might end up without
response because of that.
Examples:
$ socat /var/run/haproxy-master.sock readline
prompt
master> @1 show info; @2 show info
[...]
Process_num: 1
Pid: 1271
[...]
Process_num: 2
Pid: 1272
[...]
master>
$ echo '@!1271 show info; @!1272 show info' | socat /var/run/haproxy-master.sock -
[...]
The prefix may also be used as a standalone command to switch the default
execution context to the designated process, indicating that all subsequent
commands will be executed in that process, until a new '@' command changes
the execution context again.
Examples:
$ socat /var/run/haproxy-master.sock readline
prompt
master> @1
1271> show info
[...]
1271> show stat
[...]
1271> @
master>
$ echo '@1; show info; show stat; @2; show info; show stat' | socat /var/run/haproxy-master.sock -
[...]
Note about limitations: a few rare commands alter a CLI session's state
(e.g. "set anon", "set timeout") and may not behave exactly similarly once
run from the master CLI due to commands being sent one at a time on their own
CLI session. Similarly, a few rare commands ("show events", "wait") actively
monitor the CLI for input or closure and are immediately interrupted when the
CLI is closed. These commands will not work as expected through the master
CLI because the command's input is closed after each command. For such rare
cases the "@@" variant below might be more suited.
@@<[!]pid> [command...]
This prefix or command is very similar to the "@" prefix documented above
except that it enters the worker process, delivers the whole command line
into it as-is and stays there until the command finishes. Semi-colons are
delivered as well, allowing to execute a full pipelined command in a worker
process. The connection with the worker remains open until the list of commands
completes. Any data sent after the commands will be forwarded to the worker
process' CLI and may be consumed by the commands being executed and will be
lost for the master process' CLI, offering a truly bidirectional connection
with the worker process. As such, users of such commands must be very careful
to wait for the command's completion before sending new commands to the
master CLI.
Instead of executing a single command, it is also possible to open a fully
interactive session on the worker process by not specifying any command
(i.e. "@@1" on its own line). This session can be terminated either by
closing the connection or by quitting the worker process (using the "quit"
command). In this case, the prompt mode of the master socket (interactive,
prompt, timed) is propagated into the worker process.
Bugs: the sockpair@ protocol used to implement communication between the
master and the worker is known to not be reliable on macOS because of an
issue in the macOS sendmsg(2) implementation. A command might end up without
response because of that.
Examples:
# gracefully close connections and delete a server once idle (wait max 10s)
$ socat -t 11 /var/run/haproxy-master.sock - <<< \
"@@1 disable server app2/srv36; \
wait 10000 srv-removable app2/srv36; \
del server app2/srv36"
# forcefully close connections and quickly delete a server
$ socat /var/run/haproxy-master.sock - <<< \
"@@1 disable server app2/srv36; \
shutdown sessions server app2/srv36; \
wait 100 srv-removable app2/srv36; \
del server app2/srv36"
# show messages arriving to this ring in real time ("tail -f" equivalent)
$ (echo "show events buf0 -w"; read) | socat /var/run/haproxy-master.sock -
expert-mode [on|off]
This command activates the "expert-mode" for every worker accessed from the
master CLI. Combined with "mcli-debug-mode" it also activates the command on
the master. Displays the flag "e" in the master CLI prompt.
See also "expert-mode" in Section 9.3 and "mcli-debug-mode" in 9.4.1.
experimental-mode [on|off]
This command activates the "experimental-mode" for every worker accessed from
the master CLI. Combined with "mcli-debug-mode" it also activates the command on
the master. Displays the flag "x" in the master CLI prompt.
See also "experimental-mode" in Section 9.3 and "mcli-debug-mode" in 9.4.1.
hard-reload
This command does the same as the "reload" command over the master CLI with
the exception that it does a hard-stop (-st) instead of a soft-stop (-sf) of
the previous process. This means the previous process does not wait to
achieve anything before exiting, so all connections will be closed.
See also the "reload" command.
mcli-debug-mode [on|off]
This keyword allows a special mode in the master CLI which enables every
keyword that was meant for a worker CLI on the master CLI, allowing to debug
the master process. Once activated, you can list the new available keywords
with "help". Combined with "experimental-mode" or "expert-mode" it enables
even more keywords. Displays the flag "d" in the master CLI prompt.
prompt
When the prompt is enabled (via the "prompt" command), the context the CLI is
working on is displayed in the prompt. The master is identified by the "master"
string, and other processes are identified with their PID. In case the last
reload failed, the master prompt will be changed to "master[ReloadFailed]>" so
that it becomes visible that the process is still running on the previous
configuration and that the new configuration is not operational.
The prompt of the master CLI is able to display several flags which reflect
the enabled modes: "d" for mcli-debug-mode, "e" for expert-mode, "x" for
experimental-mode.
Example:
$ socat /var/run/haproxy-master.sock -
prompt
master> expert-mode on
master(e)> experimental-mode on
master(xe)> mcli-debug-mode on
master(xed)> @1
95191(xed)>
reload
You can also reload the HAProxy master process with the "reload" command which
does the same as a `kill -USR2` on the master process, provided that the user
has at least "operator" or "admin" privileges.
This command performs a synchronous reload: it returns a reload status once
the reload was performed. Be careful with the timeout if a tool is used to
parse it, as the status is only returned once the configuration is parsed and
the new worker is forked. The "socat" command uses a timeout of 0.5s by
default so it will quit before showing the message if the reload takes too
long. "ncat" does not have a timeout by default.
When compiled with USE_SHM_OPEN=1, the reload command is also able to dump
the startup-logs of the master.
Example:
$ echo "reload" | socat -t300 /var/run/haproxy-master.sock stdin
Success=1
--
[NOTICE] (482713) : haproxy version is 2.7-dev7-4827fb-69
[NOTICE] (482713) : path to executable is ./haproxy
[WARNING] (482713) : config : 'http-request' rules ignored for proxy 'frt1' as they require HTTP mode.
[NOTICE] (482713) : New worker (482720) forked
[NOTICE] (482713) : Loading success.
$ echo "reload" | socat -t300 /var/run/haproxy-master.sock stdin
Success=0
--
[NOTICE] (482886) : haproxy version is 2.7-dev7-4827fb-69
[NOTICE] (482886) : path to executable is ./haproxy
[ALERT] (482886) : config : parsing [test3.cfg:1]: unknown keyword 'Aglobal' out of section.
[ALERT] (482886) : config : Fatal errors found in configuration.
[WARNING] (482886) : Loading failure!
$
The reload command is the last one executed on the master CLI; every other
command after it is ignored. Once the reload command returns its status, it
will close the connection to the CLI.
Note that a reload will close all connections to the master CLI.
See also the "hard-reload" command.
show proc [debug]
The master CLI introduces a 'show proc' command to supervise the
processes.
Example:
$ echo 'show proc' | socat /var/run/haproxy-master.sock -
#<PID> <type> <reloads> <uptime> <version>
1162 master 5 [failed: 0] 0d00h02m07s 2.5-dev13
# workers
1271 worker 1 0d00h00m00s 2.5-dev13
# old workers
1233 worker 3 0d00h00m43s 2.0-dev3-6019f6-289
In this example, the master has been reloaded 5 times but one of the old
workers is still running and has survived 3 reloads. You could access the CLI
of this worker to understand what's going on.
The 'debug' parameter is useful to show debug details, it currently shows the
FDs for IPC communication. Note that the debug output is not guaranteed to be
stable between haproxy versions.
show startup-logs
HAProxy needs to be compiled with USE_SHM_OPEN=1 for this command to work
correctly on the master CLI, otherwise all messages won't be visible.
Like its counterpart on the stats socket, this command is able to show the
startup messages of HAProxy. However it does not dump the startup messages
of the current worker, but the startup messages of the latest startup or
reload, which means it is able to dump the parsing messages of a failed
reload.
Those messages are also dumped with the "reload" command.
9.5. Stats-file
---------------
A so-called stats-file can be used to preload internal haproxy counters on
process startup with non-null values. Its main purpose is to preserve
statistics for worker processes across reloads. Only an excerpt of all the
exposed haproxy statistics is present in a stats-file as it only makes sense to
preload metric-type values.
For the moment, only proxy counters are supported in the stats-file. This
allows preloading values for frontends, backends, servers and listeners.
However, only object instances with a non-empty GUID are stored in a
stats-file. This guarantees that values will be preloaded for objects with a
matching type and GUID, even if other parameters differ.
The purpose of the CLI command "dump stats-file" is to generate a stats-file.
The format of the stats-file is internally defined and freely subject to
future changes and extensions. It is designed to be compatible at least
across adjacent haproxy stable branch releases, but may require optional
extra configuration when loading a stats-file into a process running an older
version.
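A minimal sketch of how this can fit together, assuming the global
"stats-file" directive and the per-object "guid" keyword described in the
configuration manual (names and paths below are only illustrations) :
  global
    stats-file /var/lib/haproxy/haproxy.stats
  frontend fe1
    guid fe1
    bind :8080
The file itself would typically be regenerated before each reload, e.g. :
$ echo "dump stats-file" | socat stdio /var/run/haproxy.sock > /var/lib/haproxy/haproxy.stats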
10. Tricks for easier configuration management
----------------------------------------------
It is very common that two HAProxy nodes constituting a cluster share exactly
the same configuration modulo a few addresses. Instead of having to maintain a
duplicate configuration for each node, which will inevitably diverge, it is
possible to include environment variables in the configuration. Thus multiple
configurations may share the exact same file with only a few different system
wide environment variables. This started in version 1.5 where only addresses
were allowed to include environment variables, and 1.6 goes further by
supporting environment variables everywhere. The syntax is the same as in the
UNIX shell, a variable starts with a dollar sign ('$'), followed by an opening
curly brace ('{'), then the variable name followed by the closing brace ('}').
Except for addresses, environment variables are only interpreted in arguments
surrounded with double quotes (this was necessary not to break existing setups
using regular expressions involving the dollar symbol).
Environment variables also make it convenient to write configurations which are
expected to work on various sites where only the address changes. It can also
permit removing passwords from some configs. In the example below, the
"site1.env" file is sourced by the init script upon startup :
$ cat site1.env
LISTEN=192.168.1.1
CACHE_PFX=192.168.11
SERVER_PFX=192.168.22
LOGGER=192.168.33.1
STATSLP=admin:pa$$w0rd
ABUSERS=/etc/haproxy/abuse.lst
TIMEOUT=10s
$ cat haproxy.cfg
global
log "${LOGGER}:514" local0
defaults
mode http
timeout client "${TIMEOUT}"
timeout server "${TIMEOUT}"
timeout connect 5s
frontend public
bind "${LISTEN}:80"
http-request reject if { src -f "${ABUSERS}" }
stats uri /stats
stats auth "${STATSLP}"
use_backend cache if { path_end .jpg .css .ico }
default_backend server
backend cache
server cache1 "${CACHE_PFX}.1:18080" check
server cache2 "${CACHE_PFX}.2:18080" check
backend server
server cache1 "${SERVER_PFX}.1:8080" check
server cache2 "${SERVER_PFX}.2:8080" check
11. Well-known traps to avoid
-----------------------------
Once in a while, someone reports that after a system reboot, the haproxy
service wasn't started, and that once they start it by hand it works. Most
often, these people are running a clustered IP address mechanism such as
keepalived, to assign the service IP address to the master node only, and while
it used to work when they used to bind haproxy to address 0.0.0.0, it stopped
working after they bound it to the virtual IP address. What happens here is
that when the service starts, the virtual IP address is not yet owned by the
local node, so when HAProxy wants to bind to it, the system rejects this
because it is not a local IP address. The fix doesn't consist in delaying the
haproxy service startup (since it wouldn't stand a restart), but instead in
properly configuring the system to allow binding to non-local addresses. This is
easily done on Linux by setting the net.ipv4.ip_nonlocal_bind sysctl to 1. This
is also needed in order to transparently intercept the IP traffic that passes
through HAProxy for a specific target address.
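On Linux this is typically done as follows, and the setting must also be
persisted so that it survives reboots (the sysctl.d file name is only an
illustration) :
# sysctl -w net.ipv4.ip_nonlocal_bind=1
# echo "net.ipv4.ip_nonlocal_bind = 1" > /etc/sysctl.d/90-haproxy.conf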
Multi-process configurations involving source port ranges may apparently seem
to work but they will cause some random failures under high loads because more
than one process may try to use the same source port to connect to the same
server, which is not possible. The system will report an error and a retry will
happen, picking another port. A high value in the "retries" parameter may hide
the effect to a certain extent but this also comes with increased CPU usage and
processing time. Logs will also report a certain number of retries. For this
reason, port ranges should be avoided in multi-process configurations.
Since HAProxy uses SO_REUSEPORT and supports having multiple independent
processes bound to the same IP:port, during troubleshooting it can happen that
an old process was not stopped before a new one was started. This provides
absurd test results which tend to indicate that any change to the configuration
is ignored. The reason is that even if the new process is restarted with a
new configuration, the old one also gets some incoming connections and
processes them, returning unexpected results. When in doubt, just stop the new
process and try again. If it still works, it very likely means that an old
process remains alive and has to be stopped. Linux's "netstat -lntp" is of good
help here.
When adding entries to an ACL from the command line (eg: when blacklisting a
source address), it is important to keep in mind that these entries are not
synchronized to the file and that if someone reloads the configuration, these
updates will be lost. While this is often the desired effect (for blacklisting)
it may not necessarily match expectations when the change was made as a fix for
a problem. See the "add acl" action of the CLI interface.
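For example, to temporarily blacklist a source address using the ACL file from
the configuration example of section 10 (the file name and address are only
illustrations) :
$ echo "add acl /etc/haproxy/abuse.lst 203.0.113.4" | socat stdio /var/run/haproxy.sock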
12. Debugging and performance issues
------------------------------------
When HAProxy is started with the "-d" option, it will stay in the foreground
and will print one line per event, such as an incoming connection, the end of a
connection, and for each request or response header line seen. This debug
output is emitted before the contents are processed, so it doesn't reflect
local modifications. The main use is to show the request and response without
having to run a network sniffer. The output is less readable when multiple
connections are handled in parallel, though the "debug2ansi" and "debug2html"
scripts found in the examples/ directory definitely help here by coloring the
output.
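For example, assuming those scripts read the debug output on their standard
input and are run from the source directory, the output may be colored with
something like :
$ haproxy -d -f haproxy.cfg 2>&1 | examples/debug2ansi | less -R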
If an HTTP/1.x request or response is rejected because HAProxy finds it is
malformed, the best thing to do is to connect to the CLI and issue "show
errors", which will report the last captured faulty HTTP/1.x request and
response for each frontend and backend, with all the necessary information to
indicate precisely the first character of the input stream that was
rejected. This is sometimes needed to prove to customers or to developers that a
bug is present in their code. In this case it is often possible to relax the
checks (but still keep the captures) using "option
accept-unsafe-violations-in-http-request" or its equivalent for responses coming
from the server "option accept-unsafe-violations-in-http-response". Please see
the configuration manual for more details.
Example :
> show errors
Total events captured on [13/Oct/2015:13:43:47.169] : 1
[13/Oct/2015:13:43:40.918] frontend HAProxyLocalStats (#2): invalid request
backend <NONE> (#-1), server <NONE> (#-1), event #0
src 127.0.0.1:51981, session #0, session flags 0x00000080
HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
HTTP chunk len 0 bytes, HTTP body len 0 bytes
buffer flags 0x00808002, out 0 bytes, total 31 bytes
pending 31 bytes, wrapping at 8040, error at position 13:
00000 GET /invalid request HTTP/1.1\r\n
The output of "show info" on the CLI provides a number of useful information
regarding the maximum connection rate ever reached, maximum SSL key rate ever
reached, and in general all information which can help to explain temporary
issues regarding CPU or memory usage. Example :
> show info
Name: HAProxy
Version: 1.6-dev7-e32d18-17
Release_date: 2015/10/12
Nbproc: 1
Process_num: 1
Pid: 7949
Uptime: 0d 0h02m39s
Uptime_sec: 159
Memmax_MB: 0
Ulimit-n: 120032
Maxsock: 120032
Maxconn: 60000
Hard_maxconn: 60000
CurrConns: 0
CumConns: 3
CumReq: 3
MaxSslConns: 0
CurrSslConns: 0
CumSslConns: 0
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 0
ConnRateLimit: 0
MaxConnRate: 1
SessRate: 0
SessRateLimit: 0
MaxSessRate: 1
SslRate: 0
SslRateLimit: 0
MaxSslRate: 0
SslFrontendKeyRate: 0
SslFrontendMaxKeyRate: 0
SslFrontendSessionReuse_pct: 0
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 0
SslCacheMisses: 0
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 5
Run_queue: 1
Idle_pct: 100
node: wtap
description:
When an issue seems to randomly appear on a new version of HAProxy (eg: every
second request is aborted, occasional crash, etc), it is worth trying to enable
memory poisoning so that each call to malloc() is immediately followed by the
filling of the memory area with a configurable byte. By default this byte is
0x50 (ASCII for 'P'), but any other byte can be used, including zero (which
will have the same effect as a calloc() and which may make issues disappear).
Memory poisoning is enabled on the command line using the "-dM" option. It
slightly hurts performance and is not recommended for use in production. If
an issue happens all the time with it or never happens when poisoning uses
byte zero, it clearly means you've found a bug and you definitely need to
report it. Otherwise if there's no clear change, the problem is not related.
When debugging some latency issues, it is important to use both strace and
tcpdump on the local machine, and another tcpdump on the remote system. The
reason for this is that there are delays everywhere in the processing chain and
it is important to know which one is causing latency to know where to act. In
practice, the local tcpdump will indicate when the input data come in. Strace
will indicate when haproxy receives these data (using recv/recvfrom). Warning,
openssl uses read()/write() syscalls instead of recv()/send(). Strace will also
show when haproxy sends the data, and tcpdump will show when the system sends
these data to the interface. Then the external tcpdump will show when the data
sent are really received (since the local one only shows when the packets are
queued). The benefit of sniffing on the local system is that strace and tcpdump
will use the same reference clock. Strace should be used with "-tts200" to get
complete timestamps and report large enough chunks of data to read them.
Tcpdump should be used with "-nvvttSs0" to report full packets, real sequence
numbers and complete timestamps.
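For example (the PID, interface and file names are only illustrations) :
# strace -tts200 -o haproxy.strace -p <haproxy PID>
# tcpdump -nvvttSs0 -i eth0 -w local.pcap port 80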
In practice, received data are almost always immediately received by haproxy
(unless the machine has a saturated CPU or these data are invalid and not
delivered). If these data are received but not sent, it generally is because
the output buffer is saturated (ie: recipient doesn't consume the data fast
enough). This can be confirmed by seeing that the polling doesn't notify of
the ability to write on the output file descriptor for some time (it's often
easier to spot in the strace output when the data finally leave and then roll
back to see when the write event was notified). It generally matches an ACK
received from the recipient, and detected by tcpdump. Once the data are sent,
they may spend some time in the system doing nothing. Here again, the TCP
congestion window may be limited and not allow these data to leave, waiting for
an ACK to open the window. If the traffic is idle and the data take 40 ms or
200 ms to leave, it's a different issue (which is not an issue), it's the fact
that the Nagle algorithm prevents empty packets from leaving immediately, in
hope that they will be merged with subsequent data. HAProxy automatically
disables Nagle in pure TCP mode and in tunnels. However it definitely remains
enabled when forwarding an HTTP body (and this contributes to the performance
improvement there by reducing the number of packets). Some HTTP non-compliant
applications may be sensitive to the latency when delivering incomplete HTTP
response messages. In this case you will have to enable "option http-no-delay"
to disable Nagle in order to work around their design, keeping in mind that any
other proxy in the chain may similarly be impacted. If tcpdump reports that data
leave immediately but the other end doesn't see them quickly, it can mean there
is a congested WAN link, a congested LAN with flow control enabled and
preventing the data from leaving, or more commonly that HAProxy is in fact
running in a virtual machine and that for whatever reason the hypervisor has
decided that the data didn't need to be sent immediately. In virtualized
environments, latency issues are almost always caused by the virtualization
layer, so in order to save time, it's worth first comparing tcpdump in the VM
and on the external components. Any difference has to be credited to the
hypervisor and its accompanying drivers.
When some TCP SACK segments are seen in tcpdump traces (using -vv), it always
means that the side sending them has got the proof of a lost packet. While not
seeing them doesn't mean there are no losses, seeing them definitely means the
network is lossy. Losses are normal on a network, but at a rate where SACKs are
not noticeable to the naked eye. If they appear a lot in the traces, it is
worth investigating exactly what happens and where the packets are lost. HTTP
doesn't cope well with TCP losses, which introduce huge latencies.
The "netstat -i" command will report statistics per interface. An interface
where the Rx-Ovr counter grows indicates that the system doesn't have enough
resources to receive all incoming packets and that they're lost before being
processed by the network driver. Rx-Drp indicates that some received packets
were lost in the network stack because the application doesn't process them
fast enough. This can happen during some attacks as well. Tx-Drp means that
the output queues were full and packets had to be dropped. When using TCP it
should be very rare, but will possibly indicate a saturated outgoing link.
13. Security considerations
---------------------------
HAProxy is designed to run with very limited privileges. The standard way to
use it is to isolate it into a chroot jail and to drop its privileges to a
non-root user without any permissions inside this jail so that if any future
vulnerability were to be discovered, its compromise would not affect the rest
of the system.
In order to perform a chroot, it first needs to be started as a root user. It is
pointless to build hand-made chroots to start the process there; these are
painful to build, are never properly maintained and always contain way more
bugs than the main file-system. And in case of compromise, the intruder can use
the purposely built file-system. Unfortunately many administrators confuse
"start as root" and "run as root", resulting in the uid change to be done prior
to starting haproxy, and reducing the effective security restrictions.
HAProxy will need to be started as root in order to :
- adjust the file descriptor limits
- bind to privileged port numbers
- bind to a specific network interface
- transparently listen to a foreign address
- isolate itself inside the chroot jail
- drop to another non-privileged UID
HAProxy may require to be run as root in order to :
- bind to an interface for outgoing connections
- bind to privileged source ports for outgoing connections
- transparently bind to a foreign address for outgoing connections
Most users will never need the "run as root" case. But the "start as root"
covers most usages.
A safe configuration will have :
- a chroot statement pointing to an empty location without any access
permissions. This can be prepared this way on the UNIX command line :
# mkdir /var/empty && chmod 0 /var/empty || echo "Failed"
and referenced like this in the HAProxy configuration's global section :
chroot /var/empty
- both a uid/user and gid/group statements in the global section :
user haproxy
group haproxy
- a stats socket whose mode, uid and gid are set to match the user and/or
group allowed to access the CLI so that nobody may access it :
stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
13.1. Linux capabilities support
--------------------------------
Since version 2.9 haproxy supports Linux capabilities. If the binary is
compiled with USE_LINUX_CAP=1, it is able to preserve the capabilities given
via the 'setcap' keyword when switching from the root user to a non-root one.
Since version 3.1 haproxy also checks whether the capabilities given via the
'setcap' keyword were set in its binary file's Permitted set by the
administrator (capget syscall). If this is the case, it transitions these
capabilities into its process Effective set (capset syscall), while running
as a non-root user.
This was done to avoid having haproxy start and run as root for use cases
such as transparent proxy mode or binding to privileged ports.
The 'setcap' keyword supports the following network capabilities:
- cap_net_admin: transparent proxying, binding socket to a specific network
interface, using set-mark action;
- cap_net_raw (subset of cap_net_admin): transparent proxying;
- cap_net_bind_service: binding socket to a specific network interface;
- cap_sys_admin: creating socket in a specific network namespace.
Haproxy never transitions these capabilities from its Permitted set to its
Effective set if they are not listed as 'setcap' arguments. See more
information about the 'setcap' keyword and supported capabilities in chapter
3.1 "Process management and security" of the Configuration guide.
The administrator may add the needed capabilities to the haproxy binary
file's Permitted set with the following command:
Example:
# setcap cap_net_admin,cap_net_bind_service=p /usr/local/sbin/haproxy
The added capabilities will be seen in the process Permitted set after it
starts. If the same capabilities are given as arguments of the 'setcap'
keyword, they will also be seen in the process Effective set. This can be
checked with the following command:
Example:
# grep Cap /proc/<haproxy PID>/status
CapInh: 0000000000000000
CapPrm: 0000000000001400
CapEff: 0000000000001400
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
See more details about setcap and capabilities sets in Linux man pages
(capabilities(7)).
In some use cases, like transparent proxying or creating a socket in a
specific network namespace, the configuration file parser detects that
cap_net_raw, cap_sys_admin or some other supported capability is needed.
Then, during the initialization stage, the haproxy process checks whether
these capabilities can be put in its Effective set. If this is not possible
due to a capget or capset syscall failure (restrictions set on syscalls by
security modules such as SELinux or Seccomp), the process emits diagnostic
warnings (start with -dD).
Due to the support of many different platforms with different system
settings, it's impossible for the parser to deduce from the configuration
file whether binding to privileged ports will be done. So, in the case of
insufficient privileges (run as non-root), the process will terminate only
with an alert message like the one below. It's up to the user to recheck the
configuration and the haproxy binary's capabilities set.
Example:
$ haproxy -dD -f haproxy.cfg
...
[ALERT] (96797) : Binding [haproxy.cfg:36] for frontend fe: cannot bind socket (Permission denied) for [0.0.0.0:80]
[ALERT] (96797) : [haproxy.main()] Some protocols failed to start their listeners! Exiting.