<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="LinuxDoc-Tools 0.9.21">
<TITLE>SQUID Frequently Asked Questions: Troubleshooting</TITLE>
<LINK HREF="FAQ-12.html" REL=next>
<LINK HREF="FAQ-10.html" REL=previous>
<LINK HREF="FAQ.html#toc11" REL=contents>
</HEAD>
<BODY>
<A HREF="FAQ-12.html">Next</A>
<A HREF="FAQ-10.html">Previous</A>
<A HREF="FAQ.html#toc11">Contents</A>
<HR>
<H2><A NAME="s11">11.</A> <A HREF="FAQ.html#toc11">Troubleshooting</A></H2>
<H2><A NAME="ss11.1">11.1</A> <A HREF="FAQ.html#toc11.1">Why am I getting ``Proxy Access Denied?''</A>
</H2>
<P>You may need to set up the <EM>http_access</EM> option to allow
requests from your IP addresses. Please see
<A HREF="FAQ-10.html#access-controls">the Access Controls section</A> for information about that.</P>
<P>If <EM>squid</EM> is in httpd-accelerator mode, it will accept normal
HTTP requests and forward them to an HTTP server, but it will not
honor proxy requests. If you want your cache to also accept
proxy-HTTP requests then you must enable this feature:
<PRE>
httpd_accel_with_proxy on
</PRE>
Alternately, you may have misconfigured one of your ACLs. Check the
<EM>access.log</EM> and <EM>squid.conf</EM> files for clues.</P>
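<P>A minimal <EM>http_access</EM> setup for the first case might look like the sketch below; the network address is an example, not a recommendation:</P>

```
# squid.conf sketch -- 192.168.1.0/24 stands in for your client network
acl mynet src 192.168.1.0/24
http_access allow mynet
http_access deny all
```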
<H2><A NAME="ss11.2">11.2</A> <A HREF="FAQ.html#toc11.2">I can't get <CODE>local_domain</CODE> to work; <EM>Squid</EM> is caching the objects from my local servers.</A>
</H2>
<P>The <CODE>local_domain</CODE> directive does not prevent local
objects from being cached. It prevents the use of sibling caches
when fetching local objects. If you want to prevent objects from
being cached, use the <CODE>cache_stoplist</CODE> or <CODE>http_stop</CODE>
configuration options (depending on your version).</P>
<H2><A NAME="ss11.3">11.3</A> <A HREF="FAQ.html#toc11.3">I get <CODE>Connection Refused</CODE> when the cache tries to retrieve an object located on a sibling, even though the sibling thinks it delivered the object to my cache.</A>
</H2>
<P>If the HTTP port number is wrong but the ICP port is correct, you
will send ICP queries correctly, and the ICP replies will fool your
cache into thinking the configuration is correct; large objects
will still fail, though, since you don't have the correct HTTP port for the sibling
in your <EM>squid.conf</EM> file. If your sibling changed their
<CODE>http_port</CODE>, you could have this problem for some time
before noticing.</P>
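<P>It is worth double-checking that your <CODE>cache_peer</CODE> line carries the sibling's current ports; the hostname and port numbers below are examples:</P>

```
# squid.conf sketch -- hostname and ports are examples
# format: cache_peer <hostname> <type> <http_port> <icp_port>
cache_peer sibling.example.com sibling 3128 3130
```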
<H2><A NAME="filedescriptors"></A> <A NAME="ss11.4">11.4</A> <A HREF="FAQ.html#toc11.4">Running out of filedescriptors</A>
</H2>
<P>If you see the <CODE>Too many open files</CODE> error message, you
are most likely running out of file descriptors. This may be due
to running Squid on an operating system with a low filedescriptor
limit. This limit is often configurable in the kernel or with
other system tuning tools. There are two ways to run out of file
descriptors: first, you can hit the per-process limit on file
descriptors. Second, you can hit the system limit on total file
descriptors for all processes.</P>
<H3>Linux</H3>
<P>Linux kernel 2.2.12 and later supports an "unlimited" number of open files without patching. So does most of glibc-2.1.1 and later (all areas touched by Squid are safe from what I can tell, even more so in later glibc releases). But you still need to take some action, as the kernel by default allows processes to use only up to 1024 filedescriptors, and Squid picks up the limit at build time.</P>
<P>
<UL>
<LI>Edit /usr/include/bits/types.h to define __FD_SETSIZE to at least the amount of filedescriptors you'd like to support (Not required for Squid-2.5 and later).</LI>
<LI>Before configuring Squid run "<EM>ulimit -HSn ####</EM>" (where #### is the number of filedescriptors you need to support). Be sure to run "make clean" before configure if you have already run configure as the script might otherwise have cached the prior result.</LI>
<LI>Configure, build and install Squid as usual</LI>
<LI>Make sure your script for starting Squid contains the above <EM>ulimit</EM> command to raise the filedescriptor limit. You may also need to allow a larger port span for outgoing connections (set in /proc/sys/net/ipv4/, like in "<EM>echo 1024 32768 > /proc/sys/net/ipv4/ip_local_port_range</EM>")</LI>
</UL>
</P>
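<P>The steps above can be sketched as a fragment of a Squid start script; 8192 is an example value, and raising the hard limit normally requires root:</P>

```shell
#!/bin/sh
# Raise the filedescriptor limit before starting (or building) Squid.
# 8192 is an example value; raising the hard limit normally requires
# root, so fall back to raising only the soft limit if that fails.
WANTED=8192
ulimit -HSn "$WANTED" 2>/dev/null || ulimit -Sn "$WANTED" 2>/dev/null
echo "filedescriptor limit now: $(ulimit -n)"
```

<P>Remember that Squid only honors a limit this high if its <EM>configure</EM> script also saw it at build time.</P>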
<P>Alternatively you can
<UL>
<LI>Run configure with your needed configure options</LI>
<LI>edit include/autoconf.h and define SQUID_MAXFD to your desired limit. Make sure it is a clean multiple of 64, to avoid various bugs in the libc headers.</LI>
<LI>build and install Squid as usual</LI>
<LI>Set the runtime ulimit as described above when starting Squid.</LI>
</UL>
</P>
<P>If running things as root is not an option, then get your sysadmin to install the needed ulimit command in /etc/initscript (see man initscript), install a patched kernel where INR_OPEN in include/linux/fs.h is raised to at least the amount you need, or have them install a small suid program which sets the limit (see link below).</P>
<P>More information can be found from Henriks
<A HREF="http://squid.sourceforge.net/hno/linux-lfd.html">How to get many filedescriptors on Linux 2.2.X and later</A> page.</P>
<H3>Solaris</H3>
<P>Add the following to your <EM>/etc/system</EM> file and reboot to
increase your maximum file descriptors per process:</P>
<P>
<PRE>
set rlim_fd_max = 4096
</PRE>
</P>
<P>Next you should re-run the <EM>configure</EM> script
in the top directory so that it finds the new value.
If it does not find the new limit, then you might try
editing <EM>include/autoconf.h</EM> and setting
<CODE>#define DEFAULT_FD_SETSIZE</CODE> by hand. Note that
<EM>include/autoconf.h</EM> is created from <EM>autoconf.h.in</EM>
every time you run configure. Thus, if you edit it by
hand, you might lose your changes later on.</P>
<P>
<A HREF="mailto:voeckler at rvs dot uni-hannover dot de">Jens-S. Voeckler</A>
advises that you should NOT change the default soft limit (<EM>rlim_fd_cur</EM>) to anything
larger than 256. It will break other programs, such as the license
manager needed for the SUN workshop compiler. Jens-S. also says that it
should be safe to raise the limit for the Squid process as high as 16,384
except that there may be problems during a reconfigure or logrotate if all of
the lower 256 filedescriptors are in use at the time of the rotate/reconfigure.</P>
<H3>FreeBSD</H3>
<P>by
<A HREF="mailto:torsten.sturm@axis.de">Torsten Sturm</A>
<OL>
<LI>How do I check my maximum filedescriptors?
<P>Do <CODE>sysctl -a</CODE> and look for the value of
<CODE>kern.maxfilesperproc</CODE>.</P>
</LI>
<LI>How do I increase them?
<PRE>
sysctl -w kern.maxfiles=XXXX
sysctl -w kern.maxfilesperproc=XXXX
</PRE>
<B>Warning</B>: You probably want <CODE>maxfiles
> maxfilesperproc</CODE> if you're going to be pushing the
limit.</LI>
<LI>What is the upper limit?
<P>I don't think there is a formal upper limit inside the kernel.
All the data structures are dynamically allocated. In practice
there might be unintended metaphenomena (kernel spending too much
time searching tables, for example).</P>
</LI>
</OL>
</P>
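<P>To make the <CODE>sysctl</CODE> settings shown above survive a reboot, they can go in <EM>/etc/sysctl.conf</EM>; the values below are examples only:</P>

```
# /etc/sysctl.conf (FreeBSD) -- example values; keep maxfiles
# comfortably above maxfilesperproc
kern.maxfiles=8192
kern.maxfilesperproc=4096
```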
<H3>General BSD</H3>
<P>For most BSD-derived systems (SunOS, 4.4BSD, OpenBSD, FreeBSD,
NetBSD, BSD/OS, 386BSD, Ultrix) you can also use the ``brute force''
method to increase these values in the kernel (requires a kernel
rebuild):
<OL>
<LI>How do I check my maximum filedescriptors?
<P>Do <CODE>pstat -T</CODE> and look for the <CODE>files</CODE>
value, typically expressed as a <CODE>current/maximum</CODE> ratio.</P>
</LI>
<LI>How do I increase them the easy way?
<P>One way is to increase the value of the <CODE>maxusers</CODE> variable
in the kernel configuration file and build a new kernel. This method
is quick and easy but also has the effect of increasing a wide variety of
other variables that you may not need or want increased.</P>
</LI>
<LI>Is there a more precise method?
<P>Another way is to find the <EM>param.c</EM> file in your kernel
build area and change the arithmetic behind the relationship between
<CODE>maxusers</CODE> and the maximum number of open files.</P>
</LI>
</OL>
Here are a few examples which should lead you in the right direction:
<OL>
<LI>SunOS
<P>Change the value of <CODE>nfile</CODE> in <CODE>/usr/kvm/sys/conf.common/param.c</CODE> by altering this equation:
<PRE>
int nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;
</PRE>
Where <CODE>NPROC</CODE> is defined by:
<PRE>
#define NPROC (10 + 16 * MAXUSERS)
</PRE>
</P>
</LI>
<LI>FreeBSD (from the 2.1.6 kernel)
<P>Very similar to SunOS, edit <EM>/usr/src/sys/conf/param.c</EM>
and alter the relationship between <CODE>maxusers</CODE> and the
<CODE>maxfiles</CODE> and <CODE>maxfilesperproc</CODE> variables:
<PRE>
int maxfiles = NPROC*2;
int maxfilesperproc = NPROC*2;
</PRE>
Where <CODE>NPROC</CODE> is defined by:
<CODE>#define NPROC (20 + 16 * MAXUSERS)</CODE>
The per-process limit can also be adjusted directly in the kernel
configuration file with the following directive:
<CODE>options OPEN_MAX=128</CODE></P>
</LI>
<LI>BSD/OS (from the 2.1 kernel)
<P>Edit <CODE>/usr/src/sys/conf/param.c</CODE> and adjust the
<CODE>maxfiles</CODE> math here:
<PRE>
int maxfiles = 3 * (NPROC + MAXUSERS) + 80;
</PRE>
Where <CODE>NPROC</CODE> is defined by:
<CODE>#define NPROC (20 + 16 * MAXUSERS)</CODE>
You should also set the <CODE>OPEN_MAX</CODE> value in your kernel
configuration file to change the per-process limit.</P>
</LI>
</OL>
</P>
<H3>Reconfigure afterwards</H3>
<P><B>NOTE:</B> After you rebuild/reconfigure your kernel with more
filedescriptors, you must then recompile Squid. Squid's configure
script determines how many filedescriptors are available, so you
must make sure the configure script runs again as well. For example:
<PRE>
cd squid-1.1.x
make realclean
./configure --prefix=/usr/local/squid
make
</PRE>
</P>
<H2><A NAME="ss11.5">11.5</A> <A HREF="FAQ.html#toc11.5">What are these strange lines about removing objects?</A>
</H2>
<P>For example:
<PRE>
97/01/23 22:31:10| Removed 1 of 9 objects from bucket 3913
97/01/23 22:33:10| Removed 1 of 5 objects from bucket 4315
97/01/23 22:35:40| Removed 1 of 14 objects from bucket 6391
</PRE>
</P>
<P>These log entries are normal, and do not indicate that <EM>squid</EM> has
reached <CODE>cache_swap_high</CODE>.</P>
<P>Consult your cache information page in <EM>cachemgr.cgi</EM> for
a line like this:</P>
<P>
<PRE>
Storage LRU Expiration Age: 364.01 days
</PRE>
</P>
<P>Objects which have not been used for that amount of time are removed as
a part of the regular maintenance. You can set an upper limit on the
<CODE>LRU Expiration Age</CODE> value with <CODE>reference_age</CODE> in the config
file.</P>
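<P>For example, to cap the LRU expiration age at one month (the value is an example, not a recommendation):</P>

```
# squid.conf sketch -- the value is an example
reference_age 1 month
```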
<H2><A NAME="ss11.6">11.6</A> <A HREF="FAQ.html#toc11.6">Can I change a Windows NT FTP server to list directories in Unix format?</A>
</H2>
<P>Why, yes you can! Select the following menus:
<UL>
<LI>Start</LI>
<LI>Programs</LI>
<LI>Microsoft Internet Server (Common)</LI>
<LI>Internet Service Manager</LI>
</UL>
</P>
<P>This will bring up a box with icons for your various services. One of
them should be a little ftp ``folder.'' Double click on this.</P>
<P>You will then have to select the server (there should only be one).
Select it, then choose ``Properties'' from the menu and choose the
``Directories'' tab along the top.</P>
<P>There will be an option at the bottom saying ``Directory listing style.''
Choose the ``Unix'' type, not the ``MS-DOS'' type.</P>
<P>
<BLOCKQUOTE>
--Oskar Pearson <oskar@is.co.za>
</BLOCKQUOTE>
</P>
<H2><A NAME="ss11.7">11.7</A> <A HREF="FAQ.html#toc11.7">Why am I getting ``Ignoring MISS from non-peer x.x.x.x?''</A>
</H2>
<P>You are receiving ICP MISSes (via UDP) from a parent or sibling cache
whose IP address your cache does not know about. This may happen
in two situations.</P>
<P>
<OL>
<LI>If the peer is multihomed, it is sending packets out an interface
which is not advertised in the DNS. Unfortunately, this is a
configuration problem at the peer site. You can tell them to either
add the IP address interface to their DNS, or use Squid's
"udp_outgoing_address" option to force the replies
out a specific interface. For example:
<P><EM>on your parent squid.conf:</EM>
<PRE>
udp_outgoing_address proxy.parent.com
</PRE>
<EM>on your squid.conf:</EM>
<PRE>
cache_peer proxy.parent.com parent 3128 3130
</PRE>
</P>
</LI>
<LI>You can also see this warning when sending ICP queries to
multicast addresses. For security reasons, Squid requires
your configuration to list all other caches listening on the
multicast group address. If an unknown cache listens to that address
and sends replies, your cache will log the warning message. To fix
this situation, either tell the unknown cache to stop listening
on the multicast address, or if they are legitimate, add them
to your configuration file.</LI>
</OL>
</P>
<H2><A NAME="ss11.8">11.8</A> <A HREF="FAQ.html#toc11.8">DNS lookups for domain names with underscores (_) always fail.</A>
</H2>
<P>The standards for naming hosts
(
<A HREF="ftp://ftp.isi.edu/in-notes/rfc952.txt">RFC 952</A>,
<A HREF="ftp://ftp.isi.edu/in-notes/rfc1101.txt">RFC 1101</A>)
do not allow underscores in domain names:
<BLOCKQUOTE>
A "name" (Net, Host, Gateway, or Domain name) is a text string up
to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
sign (-), and period (.).
</BLOCKQUOTE>
The resolver library that ships with recent versions of BIND enforces
this restriction, returning an error for any host with underscore in
the hostname. The best solution is to complain to the hostmaster of the
offending site, and ask them to rename their host.</P>
<P>See also the
<A HREF="http://www.intac.com/~cdp/cptd-faq/section4.html#underscore">comp.protocols.tcp-ip.domains FAQ</A>.</P>
<P>Some people have noticed that
<A HREF="ftp://ftp.isi.edu/in-notes/rfc1033.txt">RFC 1033</A>
implies that underscores <B>are</B> allowed. However, this is an
<EM>informational</EM> RFC with a poorly chosen
example, and not a <EM>standard</EM> by any means.</P>
<H2><A NAME="ss11.9">11.9</A> <A HREF="FAQ.html#toc11.9">Why does Squid say: ``Illegal character in hostname; underscores are not allowed?''</A>
</H2>
<P>See the above question. The underscore character is not
valid for hostnames.</P>
<P>Some DNS resolvers allow the underscore, so yes, the hostname
might work fine when you don't use Squid.</P>
<P>To make Squid allow underscores in hostnames, re-run the
<EM>configure</EM> script with this option:
<PRE>
% ./configure --enable-underscores ...
</PRE>
and then recompile:
<PRE>
% make clean
% make
</PRE>
</P>
<H2><A NAME="ss11.10">11.10</A> <A HREF="FAQ.html#toc11.10">Why am I getting access denied from a sibling cache?</A>
</H2>
<P>The answer to this is somewhat complicated, so please hold on.
<EM>NOTE:</EM> most of this text is taken from
<A HREF="http://www.life-gone-hazy.com/writings/icp-squid.ps.gz">ICP and the Squid Web Cache</A>.</P>
<P>An ICP query does not include any parent or sibling designation,
so the receiver really has no indication of how the peer
cache is configured to use it. This issue becomes important
when a cache is willing to serve cache hits to anyone, but only
handle cache misses for its paying users or customers. In other
words, whether or not to allow the request depends on if the
result is a hit or a miss. To accomplish this,
Squid acquired the <CODE>miss_access</CODE> feature
in October of 1996.</P>
<P>The necessity of ``miss access'' makes life a little bit complicated,
and not only because it was awkward to implement. Miss access
means that the ICP query reply must be an extremely accurate prediction
of the result of a subsequent HTTP request. Ascertaining
this result is actually very hard, if not impossible to
do, since the ICP request cannot convey the
full HTTP request.
Additionally, there are more types of HTTP request results than there
are for ICP. The ICP query reply will either be a hit or miss.
However, the HTTP request might result in a ``<CODE>304 Not Modified</CODE>'' reply
sent from the origin server. Such a reply is not strictly a hit since the peer
needed to forward a conditional request to the source. At the same time,
it's not strictly a miss either, since the local object data is still valid,
and the Not-Modified reply is quite small.</P>
<P>One serious problem for cache hierarchies is mismatched freshness
parameters. Consider a cache <EM>C</EM> using ``strict''
freshness parameters so its users get maximally current data.
<EM>C</EM> has a sibling <EM>S</EM> with less strict freshness parameters.
When an object is requested at <EM>C</EM>, <EM>C</EM> might
find that <EM>S</EM> already has the object via an ICP query and
ICP HIT response. <EM>C</EM> then retrieves the object
from <EM>S</EM>.</P>
<P>In an HTTP/1.0 world, <EM>C</EM> (and <EM>C</EM>'s client)
will receive an object that was never
subject to its local freshness rules. Neither HTTP/1.0 nor ICP provides
any way to ask only for objects less than a certain age. If the
retrieved object is stale by <EM>C</EM>'s rules,
it will be removed from <EM>C</EM>'s cache, but
it will subsequently be fetched from <EM>S</EM> so long as it
remains fresh there. This configuration miscoupling
problem is a significant deterrent to establishing
both parent and sibling relationships.</P>
<P>HTTP/1.1 provides numerous request headers to specify freshness
requirements, which actually introduces
a different problem for cache hierarchies: ICP
still does not include any age information, neither in query nor
reply. So <EM>S</EM> may return an ICP HIT if its
copy of the object is fresh by its configuration
parameters, but the subsequent HTTP request may result
in a cache miss due to any
<CODE>Cache-control:</CODE> headers originated by <EM>C</EM> or by
<EM>C</EM>'s client. Situations now emerge where the ICP reply
no longer matches the HTTP request result.</P>
<P>In the end, the fundamental problem is that the ICP query does not
provide enough information to accurately predict whether
the HTTP request
will be a hit or miss. In fact, the current ICP Internet Draft is very
vague on this subject. What does ICP HIT really mean? Does it mean
``I know a little about that URL and have some copy of the object?'' Or
does it mean ``I have a valid copy of that object and you are allowed to
get it from me?''</P>
<P>So, what can be done about this problem? We really need to change ICP
so that freshness parameters are included. Until that happens, the members
of a cache hierarchy have only two options to totally eliminate the ``access
denied'' messages from sibling caches:
<OL>
<LI>Make sure all members have the same <CODE>refresh_rules</CODE> parameters.</LI>
<LI>Do not use <CODE>miss_access</CODE> at all. Promise your sibling cache
administrator that <EM>your</EM> cache is properly configured and that you
will not abuse their generosity. The sibling cache administrator can
check his log files to make sure you are keeping your word.</LI>
</OL>
If neither of these is realistic, then the sibling relationship should not
exist.</P>
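<P>If you do run <CODE>miss_access</CODE>, a typical sketch looks like the following; the customer network is an example:</P>

```
# squid.conf sketch -- 172.16.0.0/16 stands in for your paying customers
acl customers src 172.16.0.0/16
acl all src 0.0.0.0/0.0.0.0
# anyone may fetch hits, but only customers may cause misses
miss_access allow customers
miss_access deny all
```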
<H2><A NAME="ss11.11">11.11</A> <A HREF="FAQ.html#toc11.11">Cannot bind socket FD NN to *:8080 (125) Address already in use</A>
</H2>
<P>This means that another process is already listening on port 8080
(or whatever you're using). It could mean that you have a Squid process
already running, or it could be from another program. To verify, use
the <EM>netstat</EM> command:
<PRE>
netstat -naf inet | grep LISTEN
</PRE>
That will show all sockets in the LISTEN state. You might also try
<PRE>
netstat -naf inet | grep 8080
</PRE>
If you find that some process has bound to your port, but you're not sure
which process it is, you might be able to use the excellent
<A HREF="ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/">lsof</A>
program. It will show you which processes own every open file descriptor
on your system.</P>
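<P>The checks above can be combined into a small script; port 8080 is an example, and <EM>netstat</EM>'s address format varies between platforms (``*.8080'' on BSD, ``0.0.0.0:8080'' on Linux), hence the loose pattern:</P>

```shell
#!/bin/sh
# Report whether anything is already listening on the port Squid wants.
PORT=8080   # example port
if netstat -na 2>/dev/null | grep LISTEN | grep -q "[.:]${PORT}[[:space:]]"; then
    echo "port ${PORT} is in use"
else
    echo "port ${PORT} appears free"
fi
```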
<H2><A NAME="ss11.12">11.12</A> <A HREF="FAQ.html#toc11.12">icpDetectClientClose: ERROR xxx.xxx.xxx.xxx: (32) Broken pipe</A>
</H2>
<P>This means that the client socket was closed by the client
before Squid was finished sending data to it. Squid detects this
by trying to <CODE>read(2)</CODE> some data from the socket. If the
<CODE>read(2)</CODE> call fails, then Squid knows the socket has been
closed. Normally the <CODE>read(2)</CODE> call returns <EM>ECONNRESET: Connection reset by peer</EM>
and these are NOT logged. Any other error messages (such as
<EM>EPIPE: Broken pipe</EM>) are logged to <EM>cache.log</EM>. See the ``intro'' of
section 2 of your Unix manual for a list of all error codes.</P>
<H2><A NAME="ss11.13">11.13</A> <A HREF="FAQ.html#toc11.13">icpDetectClientClose: FD 135, 255 unexpected bytes</A>
</H2>
<P>These are caused by misbehaving Web clients attempting to use persistent
connections. Squid-1.1 does not support persistent connections.</P>
<H2><A NAME="ss11.14">11.14</A> <A HREF="FAQ.html#toc11.14">Does Squid work with NTLM Authentication?</A>
</H2>
<P>
<A HREF="/Versions/v2/2.5/">Version 2.5</A> will
support Microsoft NTLM authentication. However, there are some
limits on our support: We cannot proxy connections to an origin
server that use NTLM authentication, but we can act as a web
accelerator or proxy server and authenticate the client connection
using NTLM.</P>
<P>We support NT4, Samba, and Windows 2000 Domain Controllers. For
more information see
<A HREF="FAQ-23.html#winbind">winbind</A>
.</P>
<P>Why can we not proxy NTLM even though we can use it?
Quoting from the summary at the end of the browser authentication section in
<A HREF="http://support.microsoft.com/support/kb/articles/Q198/1/16.ASP">this article</A>:
<BLOCKQUOTE>
In summary, Basic authentication does not require an implicit end-to-end
state, and can therefore be used through a proxy server. Windows NT
Challenge/Response authentication requires implicit end-to-end state and
will not work through a proxy server.
</BLOCKQUOTE>
</P>
<P>Squid transparently passes the NTLM request and response headers between
clients and servers. NTLM relies on a single end-to-end connection (possibly
with men-in-the-middle, but a single connection every step of the way). This
implies that for NTLM authentication to work at all with proxy caches, the
proxy would need to tightly link the client-proxy and proxy-server links, as
well as understand the state of the link at any one time. NTLM through a
CONNECT might work, but as far as we know that hasn't been implemented
by anyone, and it would prevent the pages from being cached, removing the
value of the proxy.</P>
<P>NTLM authentication is carried entirely inside the HTTP protocol, but is not
a true HTTP authentication protocol and is different from Basic and Digest
authentication in many ways.</P>
<P>
<OL>
<LI>It depends on a stateful end-to-end connection, which collides with
RFC 2616's allowance for proxy servers to disjoin the client-proxy and
proxy-server connections.
</LI>
<LI>It takes place only once per connection, not per request. Once the
connection is authenticated, all future requests on the same connection
inherit the authentication. The connection must be reestablished to set
up other authentication or to re-identify the user. This too collides with
RFC 2616, where authentication is defined as a property of the HTTP messages,
not of connections.</LI>
</OL>
</P>
<P>The reasons why it is not implemented in Netscape are probably:</P>
<P>
<UL>
<LI> It is very specific for the Windows platform
</LI>
<LI> It is not defined in any RFC or even internet draft.
</LI>
<LI> The protocol has several shortcomings, where the most apparent one is
that it cannot be proxied.
</LI>
<LI> There exists an open internet standard which does mostly the same but
without the shortcomings or platform dependencies:
<A HREF="ftp://ftp.isi.edu/in-notes/rfc2617.txt">digest authentication</A>.</LI>
</UL>
</P>
<H2><A NAME="ss11.15">11.15</A> <A HREF="FAQ.html#toc11.15">The <EM>default</EM> parent option isn't working!</A>
</H2>
<P>This message was received at <EM>squid-bugs</EM>:
<BLOCKQUOTE>
<I>If you have only one parent, configured as:</I>
<PRE>
cache_peer xxxx parent 3128 3130 no-query default
</PRE>
<I>nothing is sent to the parent; neither UDP packets, nor TCP connections.</I>
</BLOCKQUOTE>
</P>
<P>Simply adding <EM>default</EM> to a parent does not force all requests to be sent
to that parent. The term <EM>default</EM> is perhaps a poor choice of words. A <EM>default</EM>
parent is only used as a <B>last resort</B>. If the cache is able to make direct connections,
direct will be preferred over default. If you want to force all requests to your parent
cache(s), use the <EM>never_direct</EM> option:
<PRE>
acl all src 0.0.0.0/0.0.0.0
never_direct allow all
</PRE>
</P>
<H2><A NAME="ss11.16">11.16</A> <A HREF="FAQ.html#toc11.16">``Hot Mail'' complains about: Intrusion Logged. Access denied.</A>
</H2>
<P>``Hot Mail'' is proxy-unfriendly and requires all requests to come from
the same IP address. You can fix this by adding to your
<EM>squid.conf</EM>:
<PRE>
hierarchy_stoplist hotmail.com
</PRE>
</P>
<H2><A NAME="ss11.17">11.17</A> <A HREF="FAQ.html#toc11.17">My Squid becomes very slow after it has been running for some time.</A>
</H2>
<P>This is most likely because Squid is using more memory than it should be
for your system. When the Squid process becomes large, it experiences a lot
of paging. This will very rapidly degrade the performance of Squid.
Memory usage is a complicated problem. There are a number
of things to consider.</P>
<P>First, examine the Cache Manager <EM>Info</EM> output and look at these two lines:
<PRE>
Number of HTTP requests received: 121104
Page faults with physical i/o: 16720
</PRE>
Note, if your system does not have the <EM>getrusage()</EM> function, then you will
not see the page faults line.</P>
<P>Divide the number of page faults by the number of connections. In this
case 16720/121104 = 0.14. Ideally this ratio should be in the 0.0 - 0.1
range. It may be acceptable to be in the 0.1 - 0.2 range. Above that,
however, and you will most likely find that Squid's performance is
unacceptably slow.</P>
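<P>The same arithmetic can be scripted. This is just a sketch plugging in the two counters quoted above:</P>

```shell
# Page-fault ratio from the cache manager Info counters shown above
faults=16720
requests=121104
awk -v f="$faults" -v r="$requests" 'BEGIN { printf "ratio = %.2f\n", f / r }'
# prints: ratio = 0.14
```

<P>A result above roughly 0.2 suggests the Squid process is paging heavily.</P>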
<P>If the ratio is too high, you will need to make some changes to
<A HREF="FAQ-8.html#lower-mem-usage">lower the amount of memory Squid uses</A>.</P>
<P>See also
<A HREF="FAQ-8.html#how-much-ram">How much memory do I need in my Squid server?</A>.</P>
<H2><A NAME="ss11.18">11.18</A> <A HREF="FAQ.html#toc11.18">WARNING: Failed to start 'dnsserver'</A>
</H2>
<P>This could be a permission problem. Does the Squid userid have
permission to execute the <EM>dnsserver</EM> program?</P>
<P>You might also try testing <EM>dnsserver</EM> from the command line:
<PRE>
> echo oceana.nlanr.net | ./dnsserver
</PRE>
Should produce something like:
<PRE>
$name oceana.nlanr.net
$h_name oceana.nlanr.net
$h_len 4
$ipcount 1
132.249.40.200
$aliascount 0
$ttl 82067
$end
</PRE>
</P>
<H2><A NAME="ss11.19">11.19</A> <A HREF="FAQ.html#toc11.19">Sending in Squid bug reports</A>
</H2>
<P>Bug reports for Squid should be registered in our
<A HREF="http://www.squid-cache.org/bugs/">bug database</A>. Any bug report must include
<UL>
<LI>The Squid version</LI>
<LI>Your Operating System type and version</LI>
<LI>A clear description of the bug symptoms.</LI>
<LI>If your Squid crashes the report must include a
<A HREF="#coredumps">stack trace</A> as described below</LI>
</UL>
</P>
<P>Please note that bug reports are only processed if they can be reproduced
or identified in the current STABLE or development versions of Squid. If
you are running an older version of Squid the first response will be
to ask you to upgrade unless the developer who looks at your bug report
immediately can identify that the bug also exists in the current versions.
It should also be noted that any patches provided by the Squid developer
team will be to the current STABLE version even if you run an older version.</P>
<H3><A NAME="coredumps"></A> crashes and core dumps</H3>
<P>There are two conditions under which squid will exit abnormally and
generate a coredump. First, a SIGSEGV or SIGBUS signal will cause Squid
to exit and dump core. Second, many functions include consistency
checks. If one of those checks fail, Squid calls abort() to generate a
core dump.</P>
<P>Many people report that Squid doesn't leave a coredump anywhere. This may be
due to one of the following reasons:
<UL>
<LI>Resource Limits. The shell has limits on the size of a coredump
file. You may need to increase the limit.</LI>
<LI>sysctl options. On FreeBSD, you won't get a coredump from
programs that call setuid() and/or setgid() (like Squid sometimes does)
unless you enable this option:
<PRE>
# sysctl -w kern.sugid_coredump=1
</PRE>
</LI>
<LI>No debugging symbols.
The Squid binary must have debugging symbols in order to get
a meaningful coredump. </LI>
<LI>Threads and Linux. On Linux, threaded applications do not generate
core dumps. When you use the aufs cache_dir type, it uses threads and
you can't get a coredump.</LI>
<LI>It did leave a coredump file, you just can't find it.</LI>
</UL>
</P>
<P><B>Resource Limits</B>:
These limits can usually be changed in
shell scripts. The command to change the resource limits is usually
either <EM>limit</EM> or <EM>limits</EM>. Sometimes it is a shell-builtin function,
and sometimes it is a regular program. Also note that you can set resource
limits in the <EM>/etc/login.conf</EM> file on FreeBSD and maybe other BSD
systems.</P>
<P>To change the coredumpsize limit you might use a command like:
<PRE>
limit coredumpsize unlimited
</PRE>
or
<PRE>
limits coredump unlimited
</PRE>
</P>
<P><B>Debugging Symbols</B>:
To see if your Squid binary has debugging symbols, use this command:
<PRE>
% nm /usr/local/squid/bin/squid | head
</PRE>
The binary has debugging symbols if you see gobbledegook like this:
<PRE>
0812abec B AS_tree_head
080a7540 D AclMatchedName
080a73fc D ActionTable
080908a4 r B_BYTES_STR
080908bc r B_GBYTES_STR
080908ac r B_KBYTES_STR
080908b4 r B_MBYTES_STR
080a7550 D Biggest_FD
08097c0c R CacheDigestHashFuncCount
08098f00 r CcAttrs
</PRE>
There are no debugging symbols if you see this instead:
<PRE>
/usr/local/squid/bin/squid: no symbols
</PRE>
Debugging symbols may have been
removed by your <EM>install</EM> program. If you look at the
squid binary from the source directory, then it might have
the debugging symbols.</P>
<P><B>Coredump Location</B>:
The core dump file will be left in one of the following locations:
<OL>
<LI>The <EM>coredump_dir</EM> directory, if you set that option.</LI>
<LI>The first <EM>cache_dir</EM> directory if you have used the
<EM>cache_effective_user</EM> option.</LI>
<LI>The current directory when Squid was started</LI>
</OL>
Recent versions of Squid report their current directory after
starting, so look there first:
<PRE>
2000/03/14 00:12:36| Set Current Directory to /usr/local/squid/cache
</PRE>
If you cannot find a core file, then either Squid does not have
permission to write in its current directory, or perhaps your shell
limits are preventing the core file from being written.</P>
<P>Often you can get a coredump if you run Squid from the
command line like this (csh shells and clones):
<PRE>
% limit core un
% /usr/local/squid/bin/squid -NCd1
</PRE>
</P>
<P>Once you have located the core dump file, use a debugger such as
<EM>dbx</EM> or <EM>gdb</EM> to generate a stack trace:
<PRE>
tirana-wessels squid/src 270% gdb squid /T2/Cache/core
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15.1 (hppa1.0-hp-hpux10.10), Copyright 1995 Free Software Foundation, Inc...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.
[...]
(gdb) where
#0 0xc01277a8 in _kill ()
#1 0xc00b2944 in _raise ()
#2 0xc007bb08 in abort ()
#3 0x53f5c in __eprintf (string=0x7b037048 "", expression=0x5f <Address 0x5f out of bounds>, line=8, filename=0x6b <Address 0x6b out of bounds>)
#4 0x29828 in fd_open (fd=10918, type=3221514150, desc=0x95e4 "HTTP Request") at fd.c:71
#5 0x24f40 in comm_accept (fd=2063838200, peer=0x7b0390b0, me=0x6b) at comm.c:574
#6 0x23874 in httpAccept (sock=33, notused=0xc00467a6) at client_side.c:1691
#7 0x25510 in comm_select_incoming () at comm.c:784
#8 0x25954 in comm_select (sec=29) at comm.c:1052
#9 0x3b04c in main (argc=1073745368, argv=0x40000dd8) at main.c:671
</PRE>
</P>
<P>If possible, you might keep the coredump file around for a day or
two. It is often helpful if we can ask you to send additional
debugger output, such as the contents of some variables. But please
note that a core file is only useful if paired with the exact same binary
that generated it. If you recompile Squid, then any coredumps from
previous versions will be useless unless you have saved the corresponding
Squid binaries, and any attempt to analyze such coredumps will almost certainly
give misleading information about the cause of the crash.</P>
<P>If you CANNOT get Squid to leave a core file for you, then one of
the following approaches can be used.
<A NAME="nocore"></A> </P>
<P>The first alternative is to start Squid under the control of GDB:</P>
<P>
<PRE>
% gdb /path/to/squid
handle SIGPIPE pass nostop noprint
run -DNYCd3
[wait for crash]
backtrace
quit
</PRE>
</P>
<P>The drawback of the above is that it isn't really suitable for a
production system, as Squid then won't restart automatically if it
crashes. The good news is that it is entirely possible to automate the
process above to capture the stack trace and then restart
Squid. Here is a short automated script that should work:</P>
<P>
<PRE>
#!/bin/sh
trap "rm -f $$.gdb" 0
cat <<EOF >$$.gdb
handle SIGPIPE pass nostop noprint
run -DNYCd3
backtrace
quit
EOF
while sleep 2; do
    gdb -x $$.gdb /path/to/squid 2>&1 | tee -a squid.out
done
</PRE>
</P>
<P>If the above cannot be done, other options are to:</P>
<P>a) Build Squid with the --enable-stacktraces option, if support exists for your OS (it exists for Linux glibc on Intel, and for Solaris with some extra libraries which seem rather impossible to find these days).</P>
<P>b) Run Squid using the "catchsegv" tool (Linux glibc on Intel).</P>
<P>Note that these approaches do not provide nearly as much detail as using
gdb.</P>
<H2><A NAME="debug"></A> <A NAME="ss11.20">11.20</A> <A HREF="FAQ.html#toc11.20">Debugging Squid</A>
</H2>
<P>If you believe you have found a non-fatal bug (such as incorrect HTTP
processing) please send us a section of your cache.log with debugging to
demonstrate the problem. The cache.log file can become very large, so
alternatively, you may want to copy it to an FTP or HTTP server where we
can download it.</P>
<P>It is very simple to
enable full debugging on a running squid process. Simply use the <EM>-k debug</EM>
command line option:
<PRE>
% ./squid -k debug
</PRE>
This causes every <EM>debug()</EM> statement in the source code to write a line
in the <EM>cache.log</EM> file.
Use the same command again to restore Squid to its normal debugging level.</P>
<P>To enable selective debugging (e.g. for one source file only), you
need to edit <EM>squid.conf</EM> and add to the <EM>debug_options</EM> line.
Every Squid source file is assigned a different debugging <EM>section</EM>.
The debugging section assignments can be found by looking at the top
of individual source files, or by reading the file <EM>doc/debug-levels.txt</EM>
(correctly renamed to <EM>debug-sections.txt</EM> for Squid-2).
You also specify the debugging <EM>level</EM> to control the amount of
debugging. Higher levels result in more debugging messages.
For example, to enable full debugging of Access Control functions,
you would use
<PRE>
debug_options ALL,1 28,9
</PRE>
Then you have to restart or reconfigure Squid.</P>
<P>Once you have the debugging captured to <EM>cache.log</EM>, take a look
at it yourself and see if you can make sense of the behaviour which
you see. If not, please feel free to send your debugging output
to the <EM>squid-users</EM> or <EM>squid-bugs</EM> lists.</P>
<H2><A NAME="ss11.21">11.21</A> <A HREF="FAQ.html#toc11.21">FATAL: ipcache_init: DNS name lookup tests failed</A>
</H2>
<P>Squid normally tests your system's DNS configuration before
it starts serving requests. Squid tries to resolve some
common DNS names, as defined in the <EM>dns_testnames</EM> configuration
directive. If Squid cannot resolve these names, it could mean:
<OL>
<LI>your DNS nameserver is unreachable or not running.</LI>
<LI>your <EM>/etc/resolv.conf</EM> file may contain incorrect information.</LI>
<LI>your <EM>/etc/resolv.conf</EM> file may have incorrect permissions, and
may be unreadable by Squid.</LI>
</OL>
</P>
<P>To disable this feature, use the <EM>-D</EM> command line option.</P>
<P>Note, Squid does NOT use the <EM>dnsservers</EM> to test the DNS. The
test is performed internally, before the <EM>dnsservers</EM> start.</P>
<H2><A NAME="ss11.22">11.22</A> <A HREF="FAQ.html#toc11.22">FATAL: Failed to make swap directory /var/spool/cache: (13) Permission denied</A>
</H2>
<P>Starting with version 1.1.15, we have required that you first run
<PRE>
squid -z
</PRE>
to create the swap directories on your filesystem. If you have set the
<EM>cache_effective_user</EM> option, then the Squid process takes on the
given userid before making the directories. If the <EM>cache_dir</EM>
directory (e.g. /var/spool/cache) does not exist, and the Squid userid
does not have permission to create it, then you will get the ``permission
denied'' error. This can be simply fixed by manually creating the
cache directory.
<PRE>
# mkdir /var/spool/cache
# chown <userid> <groupid> /var/spool/cache
# squid -z
</PRE>
</P>
<P>Alternatively, if the directory already exists, then your operating
system may be returning ``Permission Denied'' instead of ``File Exists''
on the mkdir() system call. This
<A HREF="store.c-mkdir.patch">patch</A>
by
<A HREF="mailto:miquels@cistron.nl">Miquel van Smoorenburg</A>
should fix it.</P>
<H2><A NAME="ss11.23">11.23</A> <A HREF="FAQ.html#toc11.23">FATAL: Cannot open HTTP Port</A>
</H2>
<P>Either (1) the Squid userid does not have permission to bind to the port, or
(2) some other process has bound itself to the port.
Remember that root privileges are required to open port numbers
less than 1024. If you see this message when using a high port number,
or even when starting Squid as root, then the port has already been
opened by another process.
Maybe you are running in the HTTP Accelerator mode and there is
already a HTTP server running on port 80? If you're really stuck,
install the way cool
<A HREF="ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/">lsof</A>
utility to show you which process has your port in use.</P>
<H2><A NAME="ss11.24">11.24</A> <A HREF="FAQ.html#toc11.24">FATAL: All redirectors have exited!</A>
</H2>
<P>This is explained in the
<A HREF="FAQ-15.html#redirectors-exit">Redirector section</A>.</P>
<H2><A NAME="ss11.25">11.25</A> <A HREF="FAQ.html#toc11.25">FATAL: file_map_allocate: Exceeded filemap limit</A>
</H2>
<P>See the next question.</P>
<H2><A NAME="ss11.26">11.26</A> <A HREF="FAQ.html#toc11.26">FATAL: You've run out of swap file numbers.</A>
</H2>
<P><EM>Note: The information here applies to version 2.2 and earlier.</EM></P>
<P>Squid keeps an in-memory bitmap of disk files that are
available for use, or are being used. The size of this
bitmap is determined at run time, based on two things:
the size of your cache, and the average (mean) cache object size.</P>
<P>The size of your cache is specified in squid.conf, on the
<EM>cache_dir</EM> lines. The mean object size can also
be specified in squid.conf, with the 'store_avg_object_size'
directive. By default, Squid uses 13 Kbytes as the average size.</P>
<P>When allocating the bitmaps, Squid allocates this many bits:
<PRE>
2 * cache_size / store_avg_object_size
</PRE>
</P>
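<P>As a worked example (a sketch using shell arithmetic), a 1024000 KB cache directory with the default 13 KB mean object size yields the bitmap size reported in the sample output below:</P>

```shell
# 2 * cache_size / store_avg_object_size
cache_size=1024000   # KB, from the cache_dir line
avg_size=13          # KB, the default store_avg_object_size
echo $(( 2 * cache_size / avg_size ))   # prints 157538
```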
<P>So, if you exactly specify the correct average object size,
Squid should have 50% filemap bits free when the cache is full.
You can see how many filemap bits are being used by looking
at the 'storedir' cache manager page. It looks like this:</P>
<P>
<PRE>
Store Directory #0: /usr/local/squid/cache
First level subdirectories: 4
Second level subdirectories: 4
Maximum Size: 1024000 KB
Current Size: 924837 KB
Percent Used: 90.32%
Filemap bits in use: 77308 of 157538 (49%)
Flags:
</PRE>
</P>
<P>Now, if you see the ``You've run out of swap file numbers'' message,
then it means one of two things:
<OL>
<LI>You've found a Squid bug.</LI>
<LI>Your cache's average file size is much smaller
than the 'store_avg_object_size' value.</LI>
</OL>
</P>
<P>To check the average size of objects currently in your
cache, look at the cache manager 'info' page, and you will
find a line like:
<PRE>
Mean Object Size: 11.96 KB
</PRE>
</P>
<P>To make the warning message go away, set 'store_avg_object_size'
to that value (or lower) and then restart Squid.</P>
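<P>For instance, taking the 11.96 KB figure above, a plausible <EM>squid.conf</EM> line would be:</P>

```
store_avg_object_size 11 KB
```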
<H2><A NAME="ss11.27">11.27</A> <A HREF="FAQ.html#toc11.27">I am using up over 95% of the filemap bits?!!</A>
</H2>
<P><EM>Note: The information here is current for version 2.3</EM></P>
<P>Calm down, this is now normal. Squid now dynamically allocates
filemap bits based on the number of objects in your cache.
You won't run out of them, we promise.</P>
<H2><A NAME="ss11.28">11.28</A> <A HREF="FAQ.html#toc11.28">FATAL: Cannot open /usr/local/squid/logs/access.log: (13) Permission denied</A>
</H2>
<P>In Unix, things like <EM>processes</EM> and <EM>files</EM> have an <EM>owner</EM>.
For Squid, the process owner and file owner should be the same. If they
are not the same, you may get messages like ``permission denied.''</P>
<P>To find out who owns a file, use the <EM>ls -l</EM> command:
<PRE>
% ls -l /usr/local/squid/logs/access.log
</PRE>
</P>
<P>A process is normally owned by the user who starts it. However,
Unix sometimes allows a process to change its owner. If you
specified a value for the <EM>cache_effective_user</EM>
option in <EM>squid.conf</EM>, then that will be the process owner.
The files must be owned by this same userid.</P>
<P>If all this is confusing, then you probably should not be
running Squid until you learn some more about Unix.
As a reference, I suggest
<A HREF="http://www.oreilly.com/catalog/lunix4/">Learning the UNIX Operating System, 4th Edition</A>.</P>
<H2><A NAME="ss11.29">11.29</A> <A HREF="FAQ.html#toc11.29">When using a username and password, I can not access some files.</A>
</H2>
<P><I>If I try by way of a test, to access</I>
<PRE>
ftp://username:password@ftpserver/somewhere/foo.tar.gz
</PRE>
<I>I get</I>
<PRE>
somewhere/foo.tar.gz: Not a directory.
</PRE>
</P>
<P>Use this URL instead:
<PRE>
ftp://username:password@ftpserver/%2fsomewhere/foo.tar.gz
</PRE>
</P>
<H2><A NAME="ss11.30">11.30</A> <A HREF="FAQ.html#toc11.30">pingerOpen: icmp_sock: (13) Permission denied</A>
</H2>
<P>This means your <EM>pinger</EM> program does not have root privileges.
You should either do this:
<PRE>
% su
# make install-pinger
</PRE>
or
<PRE>
# chown root /usr/local/squid/bin/pinger
# chmod 4755 /usr/local/squid/bin/pinger
</PRE>
</P>
<H2><A NAME="ss11.31">11.31</A> <A HREF="FAQ.html#toc11.31">What is a forwarding loop?</A>
</H2>
<P>A forwarding loop is when a request passes through one proxy more than
once. You can get a forwarding loop if
<UL>
<LI>a cache forwards requests to itself. This might happen with
interception caching (or server acceleration) configurations.</LI>
<LI>a pair or group of caches forward requests to each other. This can
happen when Squid uses ICP, Cache Digests, or the ICMP RTT database
to select a next-hop cache.</LI>
</UL>
</P>
<P>Forwarding loops are detected by examining the <EM>Via</EM> request header.
Each cache which "touches" a request must add its hostname to the
<EM>Via</EM> header. If a cache notices its own hostname in this header
for an incoming request, it knows there is a forwarding loop somewhere.</P>
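<P>For illustration, a proxy named <EM>cache1.example.com</EM> (a made-up hostname) would flag a loop on receiving a request that already carries a <EM>Via</EM> header such as:</P>

```
Via: 1.0 cache1.example.com:3128 (Squid), 1.0 cache2.example.com:3128 (Squid)
```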
<P>NOTE:
Squid may report a forwarding loop if a request goes through
two caches that have the same <EM>visible_hostname</EM> value.
If you want to have multiple machines with the same
<EM>visible_hostname</EM> then you must give each machine a different
<EM>unique_hostname</EM> so that forwarding loops are correctly detected.</P>
<P>When Squid detects a forwarding loop, it is logged to the <EM>cache.log</EM>
file with the received <EM>Via</EM> header. From this header you can determine
which cache (the last in the list) forwarded the request to you.</P>
<P>One way to reduce forwarding loops is to change a <EM>parent</EM>
relationship to a <EM>sibling</EM> relationship.</P>
<P>Another way is to use <EM>cache_peer_access</EM> rules. For example:
<PRE>
# Our parent caches
cache_peer A.example.com parent 3128 3130
cache_peer B.example.com parent 3128 3130
cache_peer C.example.com parent 3128 3130
# An ACL list
acl PEERS src A.example.com
acl PEERS src B.example.com
acl PEERS src C.example.com
# Prevent forwarding loops
cache_peer_access A.example.com allow !PEERS
cache_peer_access B.example.com allow !PEERS
cache_peer_access C.example.com allow !PEERS
</PRE>
The above configuration instructs squid to NOT forward a request
to parents A, B, or C when a request is received from any one
of those caches.</P>
<H2><A NAME="ss11.32">11.32</A> <A HREF="FAQ.html#toc11.32">accept failure: (71) Protocol error</A>
</H2>
<P>This error message is seen mostly on Solaris systems.
<A HREF="mailto:mtk@ny.ubs.com">Mark Kennedy</A>
gives a great explanation:
<BLOCKQUOTE>
Error 71 [EPROTO] is an obscure way of reporting that clients made it onto your
server's TCP incoming connection queue but the client tore down the
connection before the server could accept it. I.e. your server ignored
its clients for too long. We've seen this happen when we ran out of
file descriptors. I guess it could also happen if something made squid
block for a long time.
</BLOCKQUOTE>
</P>
<H2><A NAME="ss11.33">11.33</A> <A HREF="FAQ.html#toc11.33">storeSwapInFileOpened: ... Size mismatch</A>
</H2>
<P><I>Got these messages in my cache log - I guess it means that the index
contents do not match the contents on disk.</I>
<PRE>
1998/09/23 09:31:30| storeSwapInFileOpened: /var/cache/00/00/00000015: Size mismatch: 776(fstat) != 3785(object)
1998/09/23 09:31:31| storeSwapInFileOpened: /var/cache/00/00/00000017: Size mismatch: 2571(fstat) != 4159(object)
</PRE>
</P>
<P><I>What does Squid do in this case?</I></P>
<P>NOTE, these messages are specific to Squid-2. These happen when Squid
reads an object from disk for a cache hit. After it opens the file,
Squid checks to see if the size is what it expects it should be. If the
size doesn't match, the error is printed. In this case, Squid does not
send the wrong object to the client. It will re-fetch the object from
the source.</P>
<H2><A NAME="ss11.34">11.34</A> <A HREF="FAQ.html#toc11.34">Why do I get <EM>fwdDispatch: Cannot retrieve 'https://www.buy.com/corp/ordertracking.asp'</EM></A>
</H2>
<P>These messages are caused by buggy clients, mostly Netscape Navigator.
What happens is, Netscape sends an HTTPS/SSL request over a persistent HTTP connection.
Normally, when Squid gets an SSL request, it looks like this:
<PRE>
CONNECT www.buy.com:443 HTTP/1.0
</PRE>
Then Squid opens a TCP connection to the destination host and port, and
the <EM>real</EM> request is sent encrypted over this connection. That's the
whole point of SSL: all of the information must be sent encrypted.</P>
<P>With this client bug, however, Squid receives a request like this:
<PRE>
GET https://www.buy.com/corp/ordertracking.asp HTTP/1.0
Accept: */*
User-agent: Netscape ...
...
</PRE>
Now, all of the headers, and the message body have been sent, <EM>unencrypted</EM>
to Squid. There is no way for Squid to somehow turn this into an SSL request.
The only thing we can do is return the error message.</P>
<P>Note, this browser bug does represent a security risk because the browser
is sending sensitive information unencrypted over the network.</P>
<H2><A NAME="ss11.35">11.35</A> <A HREF="FAQ.html#toc11.35">Squid can't access URLs like http://3626046468/ab2/cybercards/moreinfo.html</A>
</H2>
<P>by Dave J Woolley (DJW at bts dot co dot uk)</P>
<P>These are illegal URLs, generally only used by illegal sites;
typically the web site that supports a spammer and is expected to
survive a few hours longer than the spamming account.</P>
<P>Their intention is to:
<UL>
<LI>confuse content filtering rules on proxies, and possibly
some browsers' idea of whether they are trusted sites on
the local intranet;</LI>
<LI>confuse whois (?);</LI>
<LI>disguise the fact that they are really just IP addresses or unknown
domain names, in an attempt to stop people from locating the site
and complaining to the ISP.</LI>
</UL>
</P>
<P>Any browser or proxy that works with them should be considered a
security risk.</P>
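<P>The hostname in such a URL is simply the host's 32-bit IP address written as a single decimal number. This shell sketch decodes the address from the section title back into dotted-quad form:</P>

```shell
# Split a 32-bit decimal "hostname" into its four dotted-quad octets
n=3626046468
echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
# prints 216.33.20.4
```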
<P>
<A HREF="http://www.ietf.org/rfc/rfc1738.txt">RFC 1738</A>
has this to say about the hostname part of a URL:
<BLOCKQUOTE>
The fully qualified domain name of a network host, or its IP
address as a set of four decimal digit groups separated by
".". Fully qualified domain names take the form as described
in Section 3.5 of RFC 1034 [13] and Section 2.1 of RFC 1123
[5]: a sequence of domain labels separated by ".", each domain
label starting and ending with an alphanumerical character and
possibly also containing "-" characters. The rightmost domain
label will never start with a digit, though, which
syntactically distinguishes all domain names from the IP
addresses.
</BLOCKQUOTE>
</P>
<H2><A NAME="ss11.36">11.36</A> <A HREF="FAQ.html#toc11.36">I get a lot of ``URI has whitespace'' error messages in my cache log, what should I do?</A>
</H2>
<P>Whitespace characters (space, tab, newline, carriage return) are
not allowed in URI's and URL's. Unfortunately, a number of Web services
generate URL's with whitespace. Of course your favorite browser silently
accommodates these bad URL's. The servers (or people) that generate
these URL's are in violation of Internet standards. The whitespace
characters should be encoded. </P>
<P>If you want Squid to accept URL's with whitespace, you have to
decide how to handle them. There are four choices that you
can set with the <EM>uri_whitespace</EM> option:
<OL>
<LI>DENY:
The request is denied with an ``Invalid Request'' message.
This is the default.</LI>
<LI>ALLOW:
The request is allowed and the URL remains unchanged.</LI>
<LI>ENCODE:
The whitespace characters are encoded according to
<A HREF="http://www.ietf.org/rfc/rfc1738.txt">RFC 1738</A>. This can be considered a violation
of the HTTP specification.</LI>
<LI>CHOP:
The URL is chopped at the first whitespace character
and then processed normally. This also can be considered
a violation of HTTP.</LI>
</OL>
</P>
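<P>For example, to select the ENCODE behaviour described above, the <EM>squid.conf</EM> line would look like this (the keyword is written in lowercase):</P>

```
uri_whitespace encode
```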
<H2><A NAME="comm-bind-loopback-fail"></A> <A NAME="ss11.37">11.37</A> <A HREF="FAQ.html#toc11.37">commBind: Cannot bind socket FD 5 to 127.0.0.1:0: (49) Can't assign requested address</A>
</H2>
<P>This likely means that your system does not have a loopback network device, or
that device is not properly configured.
All Unix systems should have a network device named <EM>lo0</EM>, and it should
be configured with the address 127.0.0.1. If not, you may get the above
error message.
To check your system, run:
<PRE>
% ifconfig lo0
</PRE>
The result should look something like:
<PRE>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
inet 127.0.0.1 netmask 0xff000000
</PRE>
</P>
<P>If you use FreeBSD, see
<A HREF="FAQ-14.html#freebsd-no-lo0">this</A>.</P>
<H2><A NAME="ss11.38">11.38</A> <A HREF="FAQ.html#toc11.38">Unknown cache_dir type '/var/squid/cache'</A>
</H2>
<P>The format of the <EM>cache_dir</EM> option changed with version
2.3. It now takes a <EM>type</EM> argument. All you need to do
is insert <CODE>ufs</CODE> in the line, like this:
<PRE>
cache_dir ufs /var/squid/cache ...
</PRE>
</P>
<H2><A NAME="ss11.39">11.39</A> <A HREF="FAQ.html#toc11.39">unrecognized: 'cache_dns_program /usr/local/squid/bin/dnsserver'</A>
</H2>
<P>As of Squid 2.3, the default is to use internal DNS lookup code.
The <EM>cache_dns_program</EM> and <EM>dns_children</EM> options are not
known squid.conf directives in this case. Simply comment out
these two options.</P>
<P>If you want to use external DNS lookups, with the <EM>dnsserver</EM>
program, then add this to your configure command:
<PRE>
--disable-internal-dns
</PRE>
</P>
<H2><A NAME="ss11.40">11.40</A> <A HREF="FAQ.html#toc11.40">Is <EM>dns_defnames</EM> broken in Squid-2.3 and later</A>
</H2>
<P>Sort of. As of Squid 2.3, the default is to use internal DNS lookup code.
The <EM>dns_defnames</EM> option is only used with the external <EM>dnsserver</EM>
processes. If you relied on <EM>dns_defnames</EM> before, you have three choices:
<OL>
<LI>See if the <EM>append_domain</EM> option will work for you instead.</LI>
<LI>Configure squid with --disable-internal-dns to use the external
dnsservers.</LI>
<LI>Enhance <EM>src/dns_internal.c</EM> to understand the <CODE>search</CODE>
and <CODE>domain</CODE> lines from <EM>/etc/resolv.conf</EM>.</LI>
</OL>
</P>
<H2><A NAME="ss11.41">11.41</A> <A HREF="FAQ.html#toc11.41">What does <EM>sslReadClient: FD 14: read failure: (104) Connection reset by peer</EM> mean?</A>
</H2>
<P>``Connection reset by peer'' is an error code that Unix operating systems
sometimes return for <EM>read</EM>, <EM>write</EM>, <EM>connect</EM>, and other
system calls.</P>
<P>Connection reset means that the other host, the peer, sent us a RESET
packet on a TCP connection. A host sends a RESET when it receives
an unexpected packet for a nonexistent connection. For example, if
one side sends data at the same time that the other side closes
a connection, when the other side receives the data it may send
a reset back.</P>
<P>The fact that these messages appear in Squid's log might indicate
a problem, such as a broken origin server or parent cache. On
the other hand, they might be ``normal,'' especially since
some applications are known to force connection resets rather
than a proper close.</P>
<P>You probably don't need to worry about them, unless you receive
a lot of user complaints relating to SSL sites.</P>
<P>
<A HREF="mailto:raj at cup dot hp dot com">Rick Jones</A> notes that
if the server is running a Microsoft TCP stack, clients
receive RST segments whenever the listen queue overflows. In other words,
if the server is really busy, new connections receive the reset message.
This is contrary to rational behaviour, but is unlikely to change.</P>
<H2><A NAME="ss11.42">11.42</A> <A HREF="FAQ.html#toc11.42">What does <EM>Connection refused</EM> mean?</A>
</H2>
<P>This is an error message, generated by your operating system,
in response to a <EM>connect()</EM> system call. It happens when
there is no server at the other end listening on the port number
that we tried to connect to.</P>
<P>It's quite easy to generate this error on your own. Simply
telnet to a random, high-numbered port:
<PRE>
% telnet localhost 12345
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
</PRE>
It happens because there is no server listening for connections
on port 12345.</P>
<P>When you see this in response to a URL request, it probably means
the origin server web site is temporarily down. It may also mean
that your parent cache is down, if you have one.</P>
<H2><A NAME="ss11.43">11.43</A> <A HREF="FAQ.html#toc11.43">squid: ERROR: no running copy</A>
</H2>
<P>You may get this message when you run commands like <CODE>squid -krotate</CODE>.</P>
<P>This error message usually means that the <EM>squid.pid</EM> file is
missing. Since the PID file is normally present when squid is running,
the absence of the PID file usually means Squid is not running.
If you accidentally delete the PID file, Squid will continue running, and
you won't be able to send it any signals.</P>
<P>If you accidentally removed the PID file, there are two ways to get it back.
<OL>
<LI>run <CODE>ps</CODE> and find the Squid process id. You'll probably see
two processes, like this:
<PRE>
bender-wessels % ps ax | grep squid
83617 ?? Ss 0:00.00 squid -s
83619 ?? S 0:00.48 (squid) -s (squid)
</PRE>
You want the second process id, 83619 in this case. Create the PID file and put the
process id number there. For example:
<PRE>
echo 83619 > /usr/local/squid/logs/squid.pid
</PRE>
</LI>
<LI>Use the above technique to find the Squid process id. Send the process a HUP
signal, which is the same as <CODE>squid -kreconfigure</CODE>:
<PRE>
kill -HUP 83619
</PRE>
The reconfigure process creates a new PID file automatically.</LI>
</OL>
</P>
<H2><A NAME="ss11.44">11.44</A> <A HREF="FAQ.html#toc11.44">FATAL: getgrnam failed to find groupid for effective group 'nogroup'</A>
</H2>
<P>You are probably starting Squid as root. Squid is trying to find
a group-id, without any special privileges, that it will
run as. The default is <EM>nogroup</EM>, but this may not be defined
on your system. You need to edit <EM>squid.conf</EM> and set
<EM>cache_effective_group</EM> to the name of an unprivileged group
from <EM>/etc/group</EM>. There is a good chance that <EM>nobody</EM>
will work for you.</P>
<H2><A NAME="ss11.45">11.45</A> <A HREF="FAQ.html#toc11.45">``Unsupported Request Method and Protocol'' for <EM>https</EM> URLs.</A>
</H2>
<P><EM>Note: The information here is current for version 2.3.</EM></P>
<P>This is correct. Squid does not know what to do with an <EM>https</EM>
URL. To handle such a URL, Squid would need to speak the SSL
protocol. Unfortunately, it does not (yet).</P>
<P>Normally, when you type an <EM>https</EM> URL into your browser, one of
two things happens.
<OL>
<LI>The browser opens an SSL connection directly to the origin
server.</LI>
<LI>The browser tunnels the request through Squid with the
<EM>CONNECT</EM> request method.</LI>
</OL>
</P>
<P>The <EM>CONNECT</EM> method is a way to tunnel any kind of
connection through an HTTP proxy. The proxy doesn't
understand or interpret the contents. It just passes
bytes back and forth between the client and server.
For the gory details on tunnelling and the CONNECT
method, please see
<A HREF="ftp://ftp.isi.edu/in-notes/rfc2817.txt">RFC 2817</A>
and
<A HREF="http://www.web-cache.com/Writings/Internet-Drafts/draft-luotonen-web-proxy-tunneling-01.txt">Tunneling TCP based protocols through Web proxy servers</A> (expired).</P>
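<P>On the wire, a CONNECT exchange looks roughly like this (a sketch: the hostname is only an example, and real browsers send additional headers):</P>

```
CONNECT www.example.com:443 HTTP/1.0

HTTP/1.0 200 Connection established

(from here on the proxy blindly relays the SSL handshake and the
encrypted traffic between the browser and www.example.com)
```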
<H2><A NAME="ss11.46">11.46</A> <A HREF="FAQ.html#toc11.46">Squid uses 100% CPU</A>
</H2>
<P>There may be many causes for this.</P>
<P>Andrew Doroshenko reports that removing <EM>/dev/null</EM>, or
mounting a filesystem with the <EM>nodev</EM> option, can cause
Squid to use 100% of CPU. His suggested solution is to ``touch /dev/null.''</P>
<H2><A NAME="ss11.47">11.47</A> <A HREF="FAQ.html#toc11.47">Webmin's <EM>cachemgr.cgi</EM> crashes the operating system</A>
</H2>
<P>Mikael Andersson reports that clicking on Webmin's <EM>cachemgr.cgi</EM>
link creates numerous instances of <EM>cachemgr.cgi</EM> that quickly
consume all available memory and bring the system to its knees.</P>
<P>Joe Cooper reports this to be caused by SSL problems in some browsers
(mainly Netscape 6.x/Mozilla) if your Webmin is SSL enabled. Try with
another browser such as Netscape 4.x or Microsoft IE, or disable SSL
encryption in Webmin.</P>
<H2><A NAME="ss11.48">11.48</A> <A HREF="FAQ.html#toc11.48">Segment Violation at startup or upon first request</A>
</H2>
<P>Some versions of GCC (notably 2.95.1 through 2.95.4 at least) have bugs
with compiler optimization. These GCC bugs may cause NULL pointer
accesses in Squid, resulting in a ``FATAL: Received Segment
Violation...dying'' message and a core dump.</P>
<P>You can work around these GCC bugs by disabling compiler
optimization. The best way to do that is start with a clean
source tree and set the CC options specifically:
<PRE>
% cd squid-x.y
% make distclean
% setenv CFLAGS '-g -Wall'
% ./configure ...
</PRE>
</P>
<P>To check that you did it right, you can search for AC_CFLAGS in
<EM>src/Makefile</EM>:
<PRE>
% grep AC_CFLAGS src/Makefile
AC_CFLAGS = -g -Wall
</PRE>
Now when you recompile, GCC won't try to optimize anything:
<PRE>
% make
Making all in lib...
gcc -g -Wall -I../include -I../include -c rfc1123.c
...etc...
</PRE>
</P>
<P>NOTE: some people worry that disabling compiler optimization will
negatively impact Squid's performance. The impact should be
negligible, unless your cache is really busy and already runs
at a high CPU usage. For most people, the compiler optimization
makes little or no difference at all.</P>
<H2><A NAME="ss11.49">11.49</A> <A HREF="FAQ.html#toc11.49">urlParse: Illegal character in hostname 'proxy.mydomain.com:8080proxy.mydomain.com'</A>
</H2>
<P>By Yomler of fnac.net</P>
<P>A combination of a bad configuration of Internet Explorer and any
application which uses the Cydoor DLLs will produce this entry in the log.
See
<A HREF="http://www.cydoor.com/">cydoor.com</A> for a complete list.</P>
<P>The bad configuration of IE is the use of an automatic configuration script
(proxy.pac) together with filled-in proxy settings, whether enabled or not. IE itself
will only use the proxy.pac, but Cydoor apps will use both and will generate the errors.</P>
<P>Disabling the old proxy settings in IE is not enough; you should delete
them completely and, for example, use only the proxy.pac.</P>
<H2><A NAME="ss11.50">11.50</A> <A HREF="FAQ.html#toc11.50">Requests for international domain names do not work</A>
</H2>
<P>By Henrik Nordström</P>
<P>Some people have asked why requests for domain names using national
symbols, as "supported" by certain domain registrars, do not work
in Squid. This is because there is as yet no standard on how to
handle national characters in the current Internet protocols such
as HTTP or DNS. The current Internet standards are very strict
about what is an acceptable hostname, and accept only A-Z a-z 0-9 and -
in Internet hostname labels. Anything outside this is outside
the current Internet standards and will cause interoperability
issues, such as the problems seen with such names and Squid.</P>
<P>When there is a consensus in the DNS and HTTP standardization groups
on how to handle international domain names, Squid will be changed to
support it, if any changes to Squid are required.</P>
<P>If you are interested in the progress of the standardization process
for international domain names, please see the IETF IDN
working group's
<A HREF="http://www.i-d-n.net/">dedicated page</A>.</P>
<H2><A NAME="ss11.51">11.51</A> <A HREF="FAQ.html#toc11.51">Why do I sometimes get ``Zero Sized Reply''?</A>
</H2>
<P>This happens when Squid makes a TCP connection to an origin server, but
for some reason, the connection is closed before Squid reads any data.
Depending on various factors, Squid may be able to retry the request.
If you see the ``Zero Sized Reply'' error message, it means that Squid
was unable to retry, or that all retry attempts also failed.</P>
<P>What causes a connection to close prematurely? It could be a number
of things, including:
<OL>
<LI>An overloaded origin server.</LI>
<LI>TCP implementation/interoperability bugs. See the
<A HREF="FAQ-14.html#sysdeps">System-Dependent Weirdnesses</A> section for details.</LI>
<LI>Race conditions with HTTP persistent connections.</LI>
<LI>Buggy or misconfigured NAT boxes, firewalls, and load-balancers.</LI>
<LI>Denial of service attacks.</LI>
<LI>Utilizing
<A HREF="FAQ-14.html#freebsd-zsr">TCP blackholing on FreeBSD</A>.</LI>
</OL>
</P>
<P>You may be able to use <EM>tcpdump</EM> to track down and observe the
problem.</P>
<P>Some users believe the problem is caused by very large cookies.
One user reports that his Zero Sized Reply problem went away
when he told Internet Explorer to not accept third-party
cookies.</P>
<P>Here are some things you can try to reduce the occurrence of the
Zero Sized Reply error:
<OL>
<LI>Delete or rename your cookie file and configure your
browser to prompt you before accepting any new cookies.</LI>
<LI>Disable HTTP persistent connections with the
<EM>server_persistent_connections</EM> and <EM>client_persistent_connections</EM>
directives.</LI>
<LI>Disable any advanced TCP features on the Squid system. Disable
ECN on Linux with <CODE>echo 0 > /proc/sys/net/ipv4/tcp_ecn</CODE>.</LI>
<LI>Upgrade to Squid-2.5.STABLE4 or later to work around a Host
header related bug in Cisco PIX HTTP inspection. The Cisco PIX firewall wrongly
assumes the Host header can be found in the first packet of the request.</LI>
</OL>
</P>
<P>If this error causes serious problems for you and the above does not help,
Squid developers would be happy to help you uncover the problem. However,
we will require high-quality debugging information from you, such as
<EM>tcpdump</EM> output, server IP addresses, operating system versions,
and <EM>access.log</EM> entries with full HTTP headers.</P>
<P>If you want to make Squid give the Zero Sized Reply error
on demand, you can use the short C program below. Simply compile and
start the program on a system that doesn't already have a server
running on port 80. Then try to connect to this fake server through
Squid:
<PRE>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/types.h&gt;
#include &lt;sys/socket.h&gt;
#include &lt;netinet/in.h&gt;
#include &lt;arpa/inet.h&gt;
#include &lt;assert.h&gt;
int
main(int a, char **b)
{
    struct sockaddr_in S;
    int s, t, x;
    s = socket(PF_INET, SOCK_STREAM, 0);
    assert(s > 0);
    memset(&S, '\0', sizeof(S));
    S.sin_family = AF_INET;
    S.sin_port = htons(80);
    x = bind(s, (struct sockaddr *) &S, sizeof(S));
    assert(x == 0);
    x = listen(s, 10);
    assert(x == 0);
    while (1) {
        struct sockaddr_in F;
        socklen_t fl = sizeof(F);
        t = accept(s, (struct sockaddr *) &F, &fl);
        fprintf(stderr, "accepted FD %d from %s:%d\n",
            t, inet_ntoa(F.sin_addr), (int) ntohs(F.sin_port));
        close(t);
        fprintf(stderr, "closed FD %d\n", t);
    }
    return 0;
}
</PRE>
</P>
<H2><A NAME="ss11.52">11.52</A> <A HREF="FAQ.html#toc11.52">Why do I get "The request or reply is too large" errors?</A>
</H2>
<P>by Grzegorz Janoszka</P>
<P>This error message appears when you try to download a large file using
GET or to upload one using POST/PUT.
There are three directives to look at: <EM>request_body_max_size</EM> and
<EM>reply_body_max_size</EM> (these two are now set to 0 by default, which means
no limit at all; earlier versions of Squid had defaults such as 1 MB for the
request body), and <EM>request_header_max_size</EM>, which defaults to 10 kB
(earlier versions used 4 or even 2 kB). In some rather rare circumstances even
10 kB is too low, so you can increase this value.</P>
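<P>For example, to keep bodies unlimited while allowing larger request headers, you could put something like the following in <EM>squid.conf</EM>. This is a sketch only: the exact syntax and units of these directives vary between Squid versions, so check the comments in your own default squid.conf.</P>

```
# 0 means no limit on request (upload) and reply (download) bodies
request_body_max_size 0 KB
reply_body_max_size 0 allow all
# allow request headers larger than the 10 kB default
request_header_max_size 20 KB
```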
<H2><A NAME="ss11.53">11.53</A> <A HREF="FAQ.html#toc11.53">Negative or very large numbers in Store Directory Statistics, or constant complaints about cache above limit</A>
</H2>
<P>In some situations where swap.state has been corrupted, Squid can be
very confused about how much data it has in the cache. Such corruption
may happen after a power failure or a similar fatal event.
To recover, first stop Squid, then delete the swap.state files from
each cache directory, and then start Squid again. Squid will automatically
rebuild the swap.state index from the cached files reasonably well.</P>
<P>If this does not work, or causes too high a load on your server due to
the reindexing of the cache, then delete the cache contents as explained
in
<A HREF="FAQ-7.html#cleancache">I want to restart Squid with a clean cache</A>.</P>
<H2><A NAME="ss11.54">11.54</A> <A HREF="FAQ.html#toc11.54">Squid problems with WindowsUpdate v5</A>
</H2>
<P>By Janno de Wit</P>
<P>There seem to be some problems with Microsoft Windows accessing the Windows Update website.
This is especially a problem when you block all traffic with a firewall and force your users to go through the Squid cache.</P>
<P>Symptom: Windows Update gives error codes like 0x80072EFD and cannot update; automatic updates aren't working either.</P>
<P>Cause:
In earlier Windows versions, Windows Update takes the proxy settings from Internet Explorer. Since XP SP2 this is no longer guaranteed.
My machine ran Windows XP SP1 without Windows Update problems. When I upgraded to SP2, Windows Update started to give errors
when searching for updates, etc.</P>
<P>The problem was that Windows Update did not go through the proxy and tried to establish direct HTTP connections to the update servers. Even when I
set the proxy in IE again, it didn't help. It isn't Squid's problem that Windows Update doesn't work; the problem is in Windows itself.
The solution is to use the <EM>proxycfg</EM> tool shipped with Windows XP.
With this tool you can set the proxy for WinHTTP.</P>
<P>Commands:
<PRE>
C:\> proxycfg
# gives information about the current connection type. Note: 'Direct Connection' does not force WU to bypass proxy
C:\> proxycfg -d
# Set Direct Connection
C:\> proxycfg -p wu-proxy.lan:8080
# Set Proxy to use with Windows Update to wu-proxy.lan, port 8080
c:\> proxycfg -u
# Set proxy to Internet Explorer settings.
</PRE>
</P>
<HR>
<A HREF="FAQ-12.html">Next</A>
<A HREF="FAQ-10.html">Previous</A>
<A HREF="FAQ.html#toc11">Contents</A>
</BODY>
</HTML>
|