File: FAQ-11.html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Draft//EN">
<HTML>
<HEAD>
<TITLE>SQUID Frequently Asked Questions: Troubleshooting</TITLE>
</HEAD>
<BODY>
<A HREF="FAQ-12.html">Next</A>
<A HREF="FAQ-10.html">Previous</A>
<A HREF="FAQ.html#toc11">Contents</A>
<HR>
<H2><A NAME="s11">11. Troubleshooting</A></H2>

<H2><A NAME="ss11.1">11.1 Why am I getting ``Proxy Access Denied?''</A>
</H2>

<P>You may need to set up the <EM>http_access</EM> option to allow
requests from your IP addresses.    Please see 
<A HREF="FAQ-10.html#access-controls">the Access Controls section</A> for information about that.
<P>If <EM>squid</EM> is in httpd-accelerator mode, it will accept normal
HTTP requests and forward them to an HTTP server, but it will not
honor proxy requests.  If you want your cache to also accept
proxy-HTTP requests then you must enable this feature:
<PRE>
        httpd_accel_with_proxy on
</PRE>

Alternately, you may have misconfigured one of your ACLs.  Check the
<EM>access.log</EM> and <EM>squid.conf</EM> files for clues.
<P>
<H2><A NAME="ss11.2">11.2 I can't get <CODE>local_domain</CODE> to work; <EM>Squid</EM> is caching the objects from my local servers.</A>
</H2>

<P>The <CODE>local_domain</CODE> directive does not prevent local
objects from being cached.  It prevents the use of sibling caches
when fetching local objects.  If you want to prevent objects from
being cached, use the <CODE>cache_stoplist</CODE> or <CODE>http_stop</CODE>
configuration options (depending on your version).
<P>
<H2><A NAME="ss11.3">11.3 I get <CODE>Connection Refused</CODE> when the cache tries to retrieve an object located on a sibling, even though the sibling thinks it delivered the object to my cache.</A>
</H2>

<P>
<P>If the HTTP port number is wrong but the ICP port is correct you
will send ICP queries correctly and the ICP replies will fool your
cache into thinking the configuration is correct but large objects
will fail since you don't have the correct HTTP port for the sibling
in your <EM>squid.conf</EM> file.  If your sibling changed their
<CODE>http_port</CODE>, you could have this problem for some time
before noticing.
<P>
<H2><A NAME="filedescriptors"></A> <A NAME="ss11.4">11.4 Running out of filedescriptors</A>
</H2>

<P>
<P>If you see the <CODE>Too many open files</CODE> error message, you
are most likely running out of file descriptors.  This may be due
to running Squid on an operating system with a low filedescriptor
limit.  This limit is often configurable in the kernel or with
other system tuning tools.  There are two ways to run out of file
descriptors:  first, you can hit the per-process limit on file
descriptors.  Second, you can hit the system limit on total file
descriptors for all processes.
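<P>To see where you stand before tuning anything, you can query both
limits from a shell.  This is only a sketch for a typical Linux
system (it assumes <EM>/proc</EM> is mounted; the exact paths and
commands vary by OS as described below):
<PRE>
        # Per-process limit for the current shell (soft limit)
        ulimit -n
        # System-wide limit on open file handles (Linux 2.2+)
        cat /proc/sys/fs/file-max
</PRE>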
<P>
<H3>Linux</H3>

<P>Start with Dancer's 
<A HREF="http://www2.simegen.com/~dancer/minihowto.html">Mini-'Adding File-descriptors-to-linux for squid' HOWTO</A>, but realize that
this information is specific to the Linux 2.0.36 kernel.
<P>
<P>You also might want to
have a look at
<A HREF="http://www.linux.org.za/oskar/patches/kernel/filehandle/">filehandle patch</A>
by
<A HREF="mailto:michael@metal.iinet.net.au">Michael O'Reilly</A><P>
<P>If your kernel version is 2.2.x or greater, you can read and write
the maximum number of file handles and/or inodes
simply by accessing the special files:
<PRE>
        /proc/sys/fs/file-max
        /proc/sys/fs/inode-max
</PRE>

So, to increase your file descriptor limit:
<PRE>
        echo 3072 > /proc/sys/fs/file-max
</PRE>
<P>
<P>If your kernel version is between 2.0.35 and 2.1.x (?), you can read and write
the maximum number of file handles and/or inodes
simply by accessing the special files:
<PRE>
        /proc/sys/kernel/file-max
        /proc/sys/kernel/inode-max
</PRE>
<P>
<P>While this does increase the current number of file descriptors,
Squid's <EM>configure</EM> script probably won't figure out the
new value unless you also update the include files, specifically
the value of <EM>OPEN_MAX</EM> in
<EM>/usr/include/linux/limits.h</EM>.
<P>
<H3>Solaris</H3>

<P>Add the following to your <EM>/etc/system</EM> file to
increase your maximum file descriptors per process:
<P>
<PRE>
        set rlim_fd_max = 4096
</PRE>
<P>Next you should re-run the <EM>configure</EM> script
in the top directory so that it finds the new value.
If it does not find the new limit, then you might try
editing  <EM>include/autoconf.h</EM> and setting
<CODE>#define DEFAULT_FD_SETSIZE</CODE> by hand.  Note that
<EM>include/autoconf.h</EM> is created from <EM>autoconf.h.in</EM>
every time you run configure.  Thus, if you edit it by
hand, you might lose your changes later on.
<P>
<P>If you have a very old version of Squid (1.1.X), and you
want to use more than 1024 descriptors, then you must
edit <EM>src/Makefile</EM> and enable
<CODE>$(USE_POLL_OPT)</CODE>.  Then recompile <EM>squid</EM>.
<P>
<P>
<A HREF="mailto:voeckler at rvs dot uni-hannover dot de">Jens-S. Voeckler</A>
advises that you should NOT change the soft limit (<EM>rlim_fd_cur</EM>) to anything
larger than 256.  It will break other programs, such as the license
manager needed for the SUN workshop compiler.  Jens-S. also says that it
should be safe to raise the limit as high as 16,384.
<P>
<H3>IRIX</H3>

<P>For some hints, please see SGI's 
<A HREF="http://www.sgi.com/tech/web/irix62.html">Tuning IRIX 6.2 for a Web Server</A> document.
<P>
<H3>FreeBSD</H3>

<P>by 
<A HREF="mailto:torsten.sturm@axis.de">Torsten Sturm</A>
<OL>
<LI>How do I check my maximum filedescriptors?
<P>Do <CODE>sysctl -a</CODE> and look for the value of
<CODE>kern.maxfilesperproc</CODE>.
</LI>
<LI>How do I increase them?
<PRE>
        sysctl -w kern.maxfiles=XXXX
        sysctl -w kern.maxfilesperproc=XXXX
</PRE>

<B>Warning</B>: You probably want <CODE>maxfiles
&gt; maxfilesperproc</CODE> if you're going to be pushing the
limit.</LI>
<LI>What is the upper limit?
<P>I don't think there is a formal upper limit inside the kernel.
All the data structures are dynamically allocated.  In practice
there might be unintended metaphenomena (kernel spending too much
time searching tables, for example).
</LI>
</OL>
<P>
<H3>General BSD</H3>

<P>For most BSD-derived systems (SunOS, 4.4BSD, OpenBSD, FreeBSD,
NetBSD, BSD/OS, 386BSD, Ultrix) you can also use the ``brute force''
method to increase these values in the kernel (requires a kernel
rebuild):
<OL>
<LI>How do I check my maximum filedescriptors?
<P>Do <CODE>pstat -T</CODE> and look for the <CODE>files</CODE>
value, typically expressed as a <CODE>current/maximum</CODE> ratio.
</LI>
<LI>How do I increase them the easy way?
<P>One way is to increase the value of the <CODE>maxusers</CODE> variable
in the kernel configuration file and build a new kernel.  This method
is quick and easy but also has the effect of increasing a wide variety of
other variables that you may not need or want increased.
</LI>
<LI>Is there a more precise method?
<P>Another way is to find the <EM>param.c</EM> file in your kernel
build area and change the arithmetic behind the relationship between
<CODE>maxusers</CODE> and the maximum number of open files.
</LI>
</OL>

Here are a few examples which should lead you in the right direction:
<OL>
<LI>SunOS
<P>Change the value of <CODE>nfile</CODE> in <EM>/usr/kvm/sys/conf.common/param.c</EM> by altering this equation:
<PRE>
        int     nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;
</PRE>

Where <CODE>NPROC</CODE> is defined by:
<PRE>
        #define NPROC (10 + 16 * MAXUSERS)
</PRE>
</LI>
<LI>FreeBSD (from the 2.1.6 kernel)
<P>Very similar to SunOS, edit <EM>/usr/src/sys/conf/param.c</EM>
and alter the relationship between <CODE>maxusers</CODE> and the
<CODE>maxfiles</CODE> and <CODE>maxfilesperproc</CODE> variables:
<PRE>
        int     maxfiles = NPROC*2;
        int     maxfilesperproc = NPROC*2;
</PRE>

Where <CODE>NPROC</CODE> is defined by:
<CODE>#define NPROC (20 + 16 * MAXUSERS)</CODE>
The per-process limit can also be adjusted directly in the kernel
configuration file with the following directive:
<CODE>options OPEN_MAX=128</CODE>
</LI>
<LI>BSD/OS (from the 2.1 kernel)
<P>Edit <CODE>/usr/src/sys/conf/param.c</CODE> and adjust the
<CODE>maxfiles</CODE> math here:
<PRE>
        int     maxfiles = 3 * (NPROC + MAXUSERS) + 80;
</PRE>

Where <CODE>NPROC</CODE> is defined by:
<CODE>#define NPROC (20 + 16 * MAXUSERS)</CODE>
You should also set the <CODE>OPEN_MAX</CODE> value in your kernel
configuration file to change the per-process limit.
</LI>
</OL>
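<P>To get a feel for how these formulas scale, here is a quick
arithmetic sketch of the SunOS equation above, using a hypothetical
<CODE>MAXUSERS</CODE> value of 64 and shell integer arithmetic (with
truncating division, just as the kernel computes it):
<PRE>
        # NPROC = 10 + 16*MAXUSERS;  nfile = 16*(NPROC + 16 + MAXUSERS)/10 + 64
        MAXUSERS=64
        NPROC=$(( 10 + 16 * MAXUSERS ))                       # 1034
        NFILE=$(( 16 * (NPROC + 16 + MAXUSERS) / 10 + 64 ))
        echo $NFILE                                           # prints 1846
</PRE>

So doubling <CODE>maxusers</CODE> roughly doubles the size of the
kernel's open-file table.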
<P>
<H3>Reconfigure afterwards</H3>

<P><B>NOTE:</B> After you rebuild/reconfigure your kernel with more
filedescriptors, you must then recompile Squid.  Squid's configure
script determines how many filedescriptors are available, so you
must make sure the configure script runs again as well.  For example:
<PRE>
        cd squid-1.1.x
        make realclean
        ./configure --prefix=/usr/local/squid
        make
</PRE>
<P>
<H2><A NAME="ss11.5">11.5 What are these strange lines about removing objects?</A>
</H2>

<P>For example:
<PRE>
        97/01/23 22:31:10| Removed 1 of 9 objects from bucket 3913
        97/01/23 22:33:10| Removed 1 of 5 objects from bucket 4315
        97/01/23 22:35:40| Removed 1 of 14 objects from bucket 6391
</PRE>
<P>These log entries are normal, and do not indicate that <EM>squid</EM> has
reached <CODE>cache_swap_high</CODE>.
<P>
<P>Consult your cache information page in <EM>cachemgr.cgi</EM> for
a line like this:
<P>
<PRE>
       Storage LRU Expiration Age:     364.01 days
</PRE>
<P>Objects which have not been used for that amount of time are removed as
a part of the regular maintenance.  You can set an upper limit on the
<CODE>LRU Expiration Age</CODE> value with <CODE>reference_age</CODE> in the config
file.
<P>
<H2><A NAME="ss11.6">11.6 Can I change a Windows NT FTP server to list directories in Unix format?</A>
</H2>

<P>Why, yes you can!  Select the following menus:
<UL>
<LI>Start</LI>
<LI>Programs</LI>
<LI>Microsoft Internet Server (Common)</LI>
<LI>Internet Service Manager</LI>
</UL>
<P>This will bring up a box with icons for your various services. One of
them should be a little ftp ``folder.'' Double click on this.
<P>You will then have to select the server (there should only be one)
Select that and then choose ``Properties'' from the menu and choose the
``directories'' tab along the top.
<P>There will be an option at the bottom saying ``Directory listing style.''
Choose the ``Unix'' type, not the ``MS-DOS'' type.
<P>
<BLOCKQUOTE>
--Oskar Pearson &lt;oskar@is.co.za&gt;
</BLOCKQUOTE>
<P>
<H2><A NAME="ss11.7">11.7 Why am I getting ``Ignoring MISS from non-peer x.x.x.x?''</A>
</H2>

<P>You are receiving ICP MISSes (via UDP) from a parent or sibling cache
whose IP address your cache does not know about.  This may happen
in two situations.
<P>
<P>
<OL>
<LI>If the peer is multihomed, it is sending packets out an interface
which is not advertised in the DNS.  Unfortunately, this is a
configuration problem at the peer site.  You can tell them to either
add the IP address interface to their DNS, or use Squid's
'udp_outgoing_address' option to force the replies
out a specific interface.  For example:
<P><EM>on your parent squid.conf:</EM>
<PRE>
        udp_outgoing_address proxy.parent.com
</PRE>

<EM>on your squid.conf:</EM>
<PRE>
        cache_host proxy.parent.com parent 3128 3130
</PRE>
<P>
</LI>
<LI>You can also see this warning when sending ICP queries to
multicast addresses.  For security reasons, Squid requires
your configuration to list all other caches listening on the
multicast group address.  If an unknown cache listens to that address
and sends replies, your cache will log the warning message.  To fix
this situation, either tell the unknown cache to stop listening
on the multicast address, or if they are legitimate, add them
to your configuration file.</LI>
</OL>
<P>
<H2><A NAME="ss11.8">11.8 DNS lookups for domain names with underscores (_) always fail.</A>
</H2>

<P>The standards for naming hosts
(
<A HREF="http://ds.internic.net/rfc/rfc952.txt">RFC 952</A>,
<A HREF="http://ds.internic.net/rfc/rfc1101.txt">RFC 1101</A>)
do not allow underscores in domain names:
<BLOCKQUOTE>
A "name" (Net, Host, Gateway, or Domain name) is a text string up
to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
sign (-), and period (.).
</BLOCKQUOTE>

The resolver library that ships with recent versions of BIND enforces
this restriction, returning an error for any host with underscore in
the hostname.  The best solution is to complain to the hostmaster of the
offending site, and ask them to rename their host.
<P>
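<P>If you want to check a hostname against the rule quoted above
before complaining to a hostmaster, a one-line <EM>grep</EM> pattern is
enough.  This is just an illustration of the allowed character set,
not a full RFC 952 validator (it ignores length and positional rules):
<PRE>
        # Succeeds only if the name uses just letters, digits, minus and period
        echo "good-host.example.com" | grep -Eq '^[A-Za-z0-9.-]+$' && echo ok
        echo "bad_host.example.com"  | grep -Eq '^[A-Za-z0-9.-]+$' || echo rejected
</PRE>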
<P>See also the
<A HREF="http://www.intac.com/~cdp/cptd-faq/section4.html#underscore">comp.protocols.tcp-ip.domains FAQ</A>.
<P>
<P>Some people have noticed that
<A HREF="http://ds.internic.net/rfc/rfc1033.txt">RFC 1033</A>
implies that underscores <B>are</B> allowed.  However, this is an
<EM>informational</EM> RFC with a poorly chosen
example, and not a <EM>standard</EM> by any means.
<P>
<H2><A NAME="ss11.9">11.9 Why does Squid say: ``Illegal character in hostname; underscores are not allowed?''</A>
</H2>

<P>See the above question.  The underscore character is not
valid for hostnames.
<P>
<P>Some DNS resolvers allow the underscore, so yes, the hostname
might work fine when you don't use Squid.
<P>
<P>To make Squid allow underscores in hostnames, re-run the
<EM>configure</EM> script with this option:
<PRE>
        % ./configure --enable-underscores ...
</PRE>

and then recompile:
<PRE>
        % make clean
        % make
</PRE>
<P>
<H2><A NAME="ss11.10">11.10 Why am I getting access denied from a sibling cache?</A>
</H2>

<P>The answer to this is somewhat complicated, so please hold on.
<EM>NOTE:</EM> most of this text is taken from
<A HREF="http://www.nlanr.net/%7ewessels/Papers/icp-squid.ps.gz">ICP and the Squid Web Cache</A>.
<P>
<P>An ICP query does not include any parent or sibling designation,
so the receiver really has no indication of how the peer
cache is configured to use it.  This issue becomes important
when a cache is willing to serve cache hits to anyone, but only
handle cache misses for its paying users or customers.  In other
words, whether or not to allow the request depends on if the
result is a hit or a miss.  To accomplish this,
Squid acquired the <CODE>miss_access</CODE> feature
in October of 1996.
<P>
<P>The necessity of ``miss access'' makes life a little bit complicated,
and not only because it was awkward to implement.  Miss access
means that the ICP query reply must be an extremely accurate prediction
of the result of a subsequent HTTP request.  Ascertaining
this result is actually very hard, if not impossible to
do, since the ICP request cannot convey the
full HTTP request.
Additionally, there are more types of HTTP request results than there
are for ICP.  The ICP query reply will either be a hit or miss.
However, the HTTP request might result in a ``<CODE>304 Not Modified</CODE>'' reply
sent from the origin server.  Such a reply is not strictly a hit since the peer
needed to forward a conditional request to the source.  At the same time,
it's not strictly a miss either since the local object data is still valid,
and the Not-Modified reply is quite small.
<P>
<P>One serious problem for cache hierarchies is mismatched freshness
parameters.  Consider a cache <EM>C</EM> using ``strict''
freshness parameters so its users get maximally current data.
<EM>C</EM> has a sibling <EM>S</EM> with less strict freshness parameters.
When an object is requested at <EM>C</EM>, <EM>C</EM> might
find that <EM>S</EM> already has the object via an ICP query and
ICP HIT response.  <EM>C</EM> then retrieves the object
from <EM>S</EM>.
<P>
<P>In an HTTP/1.0 world, <EM>C</EM> (and <EM>C</EM>'s client)
will receive an object that was never
subject to its local freshness rules.  Neither HTTP/1.0 nor ICP provides
any way to ask only for objects less than a certain age.  If the
retrieved object is stale by <EM>C</EM>'s rules,
it will be removed from <EM>C</EM>'s cache, but
it will subsequently be fetched from <EM>S</EM> so long as it
remains fresh there.  This configuration miscoupling
problem is a significant deterrent to establishing
both parent and sibling relationships.
<P>
<P>HTTP/1.1 provides numerous request headers to specify freshness
requirements, which actually introduces
a different problem for cache hierarchies:  ICP
still does not include any age information, neither in query nor
reply.  So <EM>S</EM> may return an ICP HIT if its
copy of the object is fresh by its configuration
parameters, but the subsequent HTTP request may result
in a cache miss due to any
<CODE>Cache-control:</CODE> headers originated by <EM>C</EM> or by
<EM>C</EM>'s client.  Situations now emerge where the ICP reply
no longer matches the HTTP request result.
<P>
<P>In the end, the fundamental problem is that the ICP query does not
provide enough information to accurately predict whether
the HTTP request
will be a hit or miss.   In fact, the current ICP Internet Draft is very
vague on this subject.  What does ICP HIT really mean?  Does it mean
``I know a little about that URL and have some copy of the object?''  Or
does it mean ``I have a valid copy of that object and you are allowed to
get it from me?''
<P>
<P>So, what can be done about this problem?  We really need to change ICP
so that freshness parameters are included.  Until that happens, the members
of a cache hierarchy have only two options to totally eliminate the ``access
denied'' messages from sibling caches:
<OL>
<LI>Make sure all members have the same <CODE>refresh_rules</CODE> parameters.</LI>
<LI>Do not use <CODE>miss_access</CODE> at all.  Promise your sibling cache
administrator that <EM>your</EM> cache is properly configured and that you
will not abuse their generosity.  The sibling cache administrator can
check his log files to make sure you are keeping your word.</LI>
</OL>

If neither of these is realistic, then the sibling relationship should not
exist.
<P>
<H2><A NAME="ss11.11">11.11 Cannot bind socket FD NN to *:8080 (125) Address already in use</A>
</H2>

<P>This means that another process is already listening on port 8080
(or whatever you're using).  It could mean that you have a Squid process
already running, or it could be from another program.  To verify, use
the <EM>netstat</EM> command:
<PRE>
        netstat -naf inet | grep LISTEN
</PRE>

That will show all sockets in the LISTEN state.  You might also try
<PRE>
        netstat -naf inet | grep 8080
</PRE>

If you find that some process has bound to your port, but you're not sure
which process it is, you might be able to use the excellent
<A HREF="ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/">lsof</A>
program.  It will show you which processes own every open file descriptor
on your system.
<P>
<H2><A NAME="ss11.12">11.12 icpDetectClientClose: ERROR xxx.xxx.xxx.xxx: (32) Broken pipe</A>
</H2>

<P>This means that the client socket was closed by the client
before Squid was finished sending data to it.  Squid detects this
by trying to <CODE>read(2)</CODE> some data from the socket.  If the
<CODE>read(2)</CODE> call fails, then Squid knows the socket has been
closed.   Normally the <CODE>read(2)</CODE> call returns <EM>ECONNRESET: Connection reset by peer</EM>
and these are NOT logged.  Any other error messages (such as
<EM>EPIPE: Broken pipe</EM>) are logged to <EM>cache.log</EM>.  See the ``intro'' of
section 2 of your Unix manual for a list of all error codes.
<P>
<H2><A NAME="ss11.13">11.13 icpDetectClientClose: FD 135, 255 unexpected bytes</A>
</H2>

<P>These are caused by misbehaving Web clients attempting to use persistent
connections.  Squid-1.1 does not support persistent connections.
<P>
<H2><A NAME="ss11.14">11.14 How come Squid doesn't work with NTLM Authorization.</A>
</H2>

<P>We are not sure. We were unable to find any detailed information on NTLM
(thanks Microsoft!), but 
<A HREF="http://support.microsoft.com/support/kb/articles/Q198/1/16.ASP">here</A> is a reference.
<P>
<P>We quote from the summary at the end of the browser authentication section:
<BLOCKQUOTE>
In summary, Basic authentication does not require an implicit end-to-end
state, and can therefore be used through a proxy server. Windows NT
Challenge/Response authentication requires implicit end-to-end state and  
will not work through a proxy server.
</BLOCKQUOTE>
<P>
<P>
<P>Squid transparently passes the NTLM request and response headers between
clients and servers. NTLM relies on a single end-to-end connection (possibly
with men-in-the-middle, but a single connection every step of the way). This
implies that for NTLM authentication to work at all with proxy caches, the
proxy would need to tightly link the client-proxy and proxy-server links, as
well as understand the state of the link at any one time. NTLM through a
CONNECT might work, but as far as we know that hasn't been implemented
by anyone, and it would prevent the pages being cached, removing the value
of the proxy.
<P>
<P>NTLM authentication is carried entirely inside the HTTP protocol, but is
different from Basic authentication in many ways.
<P>
<OL>
<LI>It depends on a stateful end-to-end connection, which collides with
the RFC 2616 model in which proxy servers may disjoin the client-proxy and
proxy-server connections.
</LI>
<LI>It takes place only once per connection, not once per request. Once the
connection is authenticated, all future requests on the same connection
inherit the authentication. The connection must be re-established to set
up other authentication or to re-identify the user.</LI>
</OL>
<P>
<P>The reasons why it is not implemented in Netscape are probably:
<P>
<UL>
<LI> It is very specific to the Windows platform
</LI>
<LI> It is not defined in any RFC or even Internet Draft.
</LI>
</LI>
<LI> The protocol has several shortcomings, where the most apparent one is
that it cannot be proxied.
</LI>
<LI> There exists an open internet standard which does mostly the same but
without the shortcomings or platform dependencies: 
<A HREF="ftp://ftp.isi.edu/in-notes/rfc2617.txt">digest authentication</A>.</LI>
</UL>
<P>
<P>
<H2><A NAME="ss11.15">11.15 The <EM>default</EM> parent option isn't working!</A>
</H2>

<P>This message was received at <EM>squid-bugs</EM>:
<BLOCKQUOTE>
<I>If you have only one parent, configured as:</I>
<PRE>
        cache_host xxxx parent 3128 3130 no-query default
</PRE>

<I>nothing is sent to the parent; neither UDP packets, nor TCP connections.</I>
</BLOCKQUOTE>
<P>
<P>Simply adding <EM>default</EM> to a parent does not force all requests to be sent
to that parent.  The term <EM>default</EM> is perhaps a poor choice of words.  A <EM>default</EM>
parent is only used as a <B>last resort</B>.  If the cache is able to make direct connections,
direct will be preferred over default.  If you want to force all requests to your parent
cache(s), use the <EM>never_direct</EM> option:
<PRE>
        acl all src 0.0.0.0/0.0.0.0
        never_direct allow all
</PRE>
<P>
<H2><A NAME="ss11.16">11.16 ``Hot Mail'' complains about: Intrusion Logged. Access denied.</A>
</H2>

<P>``Hot Mail'' is proxy-unfriendly and requires all requests to come from
the same IP address.  You can fix this by adding to your
<EM>squid.conf</EM>:
<PRE>
        hierarchy_stoplist hotmail.com
</PRE>
<P>
<H2><A NAME="ss11.17">11.17 My Squid becomes very slow after it has been running for some time.</A>
</H2>

<P>This is most likely because Squid is using more memory than it should be
for your system.  When the Squid process becomes large, it experiences a lot
of paging.  This will very rapidly degrade the performance of Squid.
Memory usage is a complicated problem.  There are a number
of things to consider.
<P>
<P>First, examine the Cache Manager <EM>Info</EM> output and look at these two lines:
<PRE>
        Number of HTTP requests received:  121104
        Page faults with physical i/o:      16720
</PRE>

Note, if your system does not have the <EM>getrusage()</EM> function, then you will
not see the page faults line.
<P>
<P>Divide the number of page faults by the number of connections.  In this
case 16720/121104 = 0.14.  Ideally this ratio should be in the 0.0 - 0.1
range.  It may be acceptable to be in the 0.1 - 0.2 range.  Above that,
however, and you will most likely find that Squid's performance is
unacceptably slow.
<P>
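<P>The division above can be scripted if you want to track the ratio
over time.  A minimal sketch using <EM>awk</EM>, with the example
numbers from the text:
<PRE>
        # page faults / HTTP requests; aim for below 0.1
        echo "16720 121104" | awk '{ printf "%.2f\n", $1 / $2 }'
        # prints 0.14
</PRE>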
<P>If the ratio is too high, you will need to make some changes to
<A HREF="FAQ-8.html#lower-mem-usage">lower the amount of memory Squid uses</A>.
<P>
<H2><A NAME="ss11.18">11.18 WARNING: Failed to start 'dnsserver'</A>
</H2>

<P>This could be a permission problem.  Does the Squid userid have
permission to execute the <EM>dnsserver</EM> program?
<P>
<P>You might also try testing <EM>dnsserver</EM> from the command line:
<PRE>
        > echo oceana.nlanr.net | ./dnsserver
</PRE>

Should produce something like:
<PRE>
        $name oceana.nlanr.net
        $h_name oceana.nlanr.net
        $h_len 4
        $ipcount 1
        132.249.40.200
        $aliascount 0
        $ttl 82067
        $end
</PRE>
<P>
<H2><A NAME="ss11.19">11.19 Sending in Squid bug reports</A>
</H2>

<P>Bug reports for Squid should be sent to the 
<A HREF="mailto:squid-bugs@ircache.net">squid-bugs alias</A>.  Any bug report must include
<UL>
<LI>The Squid version</LI>
<LI>Your Operating System type and version</LI>
</UL>
<P>
<H3><A NAME="coredumps"></A> crashes and core dumps</H3>

<P>There are two conditions under which squid will exit abnormally and
generate a coredump.  First, a SIGSEGV or SIGBUS signal will cause Squid
to exit and dump core.  Second, many functions include consistency
checks.  If one of those checks fails, Squid calls abort() to generate a
core dump.
<P>
<P>Many people report that Squid doesn't leave a coredump anywhere.  This may be
due to one of the following reasons:
<UL>
<LI>Resource Limits.  The shell has limits on the size of a coredump
file.  You may need to increase the limit.</LI>
<LI>No debugging symbols.
The Squid binary must have debugging symbols in order to get
a meaningful coredump. </LI>
<LI>Threads and Linux.  On Linux, threaded applications do not generate
core dumps.  When you use --enable-async-io, it uses threads and
you can't get a coredump.</LI>
<LI>It did leave a coredump file, you just can't find it.</LI>
</UL>
<P>
<P>
<P><B>Resource Limits</B>:
These limits can usually be changed in
shell scripts.  The command to change the resource limits is usually
either <EM>limit</EM> or <EM>limits</EM>.  Sometimes it is a shell-builtin function,
and sometimes it is a regular program.  Also note that you can set resource
limits in the <EM>/etc/login.conf</EM> file on FreeBSD and maybe other BSD
systems.
<P>
<P>To change the coredumpsize limit you might use a command like:
<PRE>
        limit coredumpsize unlimited
</PRE>

or
<PRE>
        limits coredump unlimited
</PRE>
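<P>On Bourne-style shells (sh, bash, ksh) the equivalent built-in is <EM>ulimit</EM>; a sketch:

```shell
# Bourne-shell equivalent of the csh "limit coredumpsize unlimited" above.
# Raising the limit can fail if the hard limit is lower, hence the || true.
ulimit -c unlimited 2>/dev/null || true
ulimit -c    # show the current core-file size limit
```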
<P>
<P><B>Debugging Symbols</B>:
To see if your Squid binary has debugging symbols, use this command:
<PRE>
        % nm /usr/local/squid/bin/squid | head
</PRE>

The binary has debugging symbols if you see gobbledegook like this:
<PRE>
        0812abec B AS_tree_head
        080a7540 D AclMatchedName
        080a73fc D ActionTable
        080908a4 r B_BYTES_STR
        080908bc r B_GBYTES_STR
        080908ac r B_KBYTES_STR
        080908b4 r B_MBYTES_STR
        080a7550 D Biggest_FD
        08097c0c R CacheDigestHashFuncCount
        08098f00 r CcAttrs
</PRE>

There are no debugging symbols if you see this instead:
<PRE>
        /usr/local/squid/bin/squid: no symbols
</PRE>

Debugging symbols may have been
removed by your <EM>install</EM> program.  If you look at the
squid binary from the source directory, then it might have
the debugging symbols.
<P>
<P>
<P><B>Coredump Location</B>:
The core dump file will be left in one of the following locations:
<OL>
<LI>The <EM>coredump_dir</EM> directory, if you set that option.</LI>
<LI>The first <EM>cache_dir</EM> directory if you have used the
<EM>cache_effective_user</EM> option.</LI>
<LI>The current directory when Squid was started</LI>
</OL>

Recent versions of Squid report their current directory after
starting, so look there first:
<PRE>
        2000/03/14 00:12:36| Set Current Directory to /usr/local/squid/cache
</PRE>

If you cannot find a core file, then either Squid does not have
permission to write in its current directory, or perhaps your shell
limits (csh and clones) are preventing the core file from being written.
<P>
<P>Often you can get a coredump if you run Squid from the 
command line like this:
<PRE>
        % limit coredumpsize unlimited
        % /usr/local/squid/bin/squid -NCd1
</PRE>
<P>
<P>
<P>Once you have located the core dump file, use a debugger such as
<EM>dbx</EM> or <EM>gdb</EM> to generate a stack trace:
<PRE>

tirana-wessels squid/src 270% gdb squid /T2/Cache/core
GDB is free software and you are welcome to distribute copies of it
 under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15.1 (hppa1.0-hp-hpux10.10), Copyright 1995 Free Software Foundation, Inc...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.

[...]

(gdb) where
#0  0xc01277a8 in _kill ()
#1  0xc00b2944 in _raise ()
#2  0xc007bb08 in abort ()
#3  0x53f5c in __eprintf (string=0x7b037048 "", expression=0x5f &lt;Address 0x5f out of bounds>, line=8, filename=0x6b &lt;Address 0x6b out of bounds>)
#4  0x29828 in fd_open (fd=10918, type=3221514150, desc=0x95e4 "HTTP Request") at fd.c:71
#5  0x24f40 in comm_accept (fd=2063838200, peer=0x7b0390b0, me=0x6b) at comm.c:574
#6  0x23874 in httpAccept (sock=33, notused=0xc00467a6) at client_side.c:1691
#7  0x25510 in comm_select_incoming () at comm.c:784
#8  0x25954 in comm_select (sec=29) at comm.c:1052
#9  0x3b04c in main (argc=1073745368, argv=0x40000dd8) at main.c:671
</PRE>
<P>
<P>If possible, you might keep the coredump file around for a day or
two.  It is often helpful if we can ask you to send additional
debugger output, such as the contents of some variables.
<P>
<H2><A NAME="ss11.20">11.20 Debugging Squid</A>
</H2>

<P>If you believe you have found a non-fatal bug (such as incorrect HTTP
processing) please send us a section of your cache.log with debugging to
demonstrate the problem.  The cache.log file can become very large, so
alternatively, you may want to copy it to an FTP or HTTP server where we
can download it.
<P>
<P>It is very simple to
enable full debugging on a running squid process.  Simply use the <EM>-k debug</EM>
command line option:
<PRE>
        % ./squid -k debug
</PRE>

This causes every <EM>debug()</EM> statement in the source code to write a line
in the <EM>cache.log</EM> file.
Use the same command again to restore Squid to its normal debugging level.
<P>
<P>To enable selective debugging (e.g. for one source file only), you
need to edit <EM>squid.conf</EM> and add to the <EM>debug_options</EM> line.
Every Squid source file is assigned a different debugging <EM>section</EM>.
The debugging section assignments can be found by looking at the top
of individual source files, or by reading the file <EM>doc/debug-levels.txt</EM>
(correctly renamed to <EM>debug-sections.txt</EM> for Squid-2).
You also specify the debugging <EM>level</EM> to control the amount of
debugging.  Higher levels result in more debugging messages.
For example, to enable full debugging of Access Control functions,
you would use
<PRE>
        debug_options ALL,1 28,9
</PRE>

Then you have to restart or reconfigure Squid.
<P>
<P>Once you have the debugging captured to <EM>cache.log</EM>, take a look
at it yourself and see if you can make sense of the behaviour which
you see.  If not, please feel free to send your debugging output
to the <EM>squid-users</EM> or <EM>squid-bugs</EM> lists.
<P>
<H2><A NAME="ss11.21">11.21 FATAL: ipcache_init: DNS name lookup tests failed</A>
</H2>

<P>Squid normally tests your system's DNS configuration before
it starts serving requests.  Squid tries to resolve some
common DNS names, as defined in the <EM>dns_testnames</EM> configuration
directive.  If Squid cannot resolve these names, it could mean:
<OL>
<LI>your DNS nameserver is unreachable or not running.</LI>
<LI>your <EM>/etc/resolv.conf</EM> file may contain incorrect information.</LI>
<LI>your <EM>/etc/resolv.conf</EM> file may have incorrect permissions, and
may be unreadable by Squid.</LI>
</OL>
<P>
<P>To disable this feature, use the <EM>-D</EM> command line option.
<P>
<P>Note, Squid does NOT use the <EM>dnsservers</EM> to test the DNS.  The
test is performed internally, before the <EM>dnsservers</EM> start.
<P>
<H2><A NAME="ss11.22">11.22 FATAL: Failed to make swap directory /var/spool/cache: (13) Permission denied</A>
</H2>

<P>Starting with version 1.1.15, we have required that you first run
<PRE>
        squid -z
</PRE>

to create the swap directories on your filesystem.  If you have set the
<EM>cache_effective_user</EM> option, then the Squid process takes on the
given userid before making the directories.  If the <EM>cache_dir</EM>
directory (e.g. /var/spool/cache) does not exist, and the Squid userid
does not have permission to create it, then you will get the ``permission
denied'' error.  This can be simply fixed by manually creating the
cache directory.
<PRE>
        # mkdir /var/spool/cache
        # chown &lt;userid> &lt;groupid> /var/spool/cache
        # squid -z
</PRE>
<P>
<P>Alternatively, if the directory already exists, then your operating
system may be returning ``Permission Denied'' instead of ``File Exists''
on the mkdir() system call.  This
<A HREF="store.c-mkdir.patch">patch</A>
by
<A HREF="mailto:miquels@cistron.nl">Miquel van Smoorenburg</A>
should fix it.
<P>
<H2><A NAME="ss11.23">11.23 FATAL: Cannot open HTTP Port</A>
</H2>

<P>Either (1) the Squid userid does not have permission to bind to the port, or
(2) some other process has bound itself to the port.
Remember that root privileges are required to open port numbers
less than 1024.  If you see this message when using a high port number,
or even when starting Squid as root, then the port has already been
opened by another process.
Maybe you are running in the HTTP Accelerator mode and there is
already a HTTP server running on port 80?  If you're really stuck,
install the way cool
<A HREF="ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/">lsof</A>
utility to show you which process has your port in use.
<P>
<H2><A NAME="ss11.24">11.24 FATAL: All redirectors have exited!</A>
</H2>

<P>This is explained in the 
<A HREF="FAQ-15.html#redirectors-exit">Redirector section</A>.
<P>
<H2><A NAME="ss11.25">11.25 FATAL: file_map_allocate: Exceeded filemap limit</A>
</H2>

<P>See the next question.
<P>
<H2><A NAME="ss11.26">11.26 FATAL: You've run out of swap file numbers.</A>
</H2>

<P><EM>Note: The information here applies to version 2.2 and earlier.</EM>
<P>Squid keeps an in-memory bitmap of disk files that are
available for use, or are being used.  The size of this
bitmap is determined at run time, based on two things:
the size of your cache, and the average (mean) cache object size.
<P>The size of your cache is specified in squid.conf, on the
<EM>cache_dir</EM> lines.  The mean object size can also
be specified in squid.conf, with the 'store_avg_object_size'
directive.  By default, Squid uses 13 Kbytes as the average size.
<P>
<P>When allocating the bitmaps, Squid allocates this many bits:
<PRE>
        2 * cache_size / store_avg_object_size
</PRE>
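<P>Plugging in the sample storedir values below (a 1024000 KB cache and the 13 KB default average object size) reproduces the bitmap size shown in the cachemgr output:

```shell
# 2 * cache_size / store_avg_object_size, with the sample values
awk 'BEGIN { printf "%d\n", 2 * 1024000 / 13 }'    # prints 157538
```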
<P>So, if you exactly specify the correct average object size,
Squid should have 50% filemap bits free when the cache is full.
You can see how many filemap bits are being used by looking
at the 'storedir' cache manager page.  It looks like this:
<P>
<PRE>
        Store Directory #0: /usr/local/squid/cache
        First level subdirectories: 4
        Second level subdirectories: 4
        Maximum Size: 1024000 KB
        Current Size: 924837 KB
        Percent Used: 90.32%
        Filemap bits in use: 77308 of 157538 (49%)
        Flags:
</PRE>
<P>
<P>Now, if you see the ``You've run out of swap file numbers'' message,
then it means one of two things:
<OL>
<LI>You've found a Squid bug.</LI>
<LI>Your cache's average file size is much smaller
than the 'store_avg_object_size' value.</LI>
</OL>
<P>To check the average file size of objects currently in your
cache, look at the cache manager 'info' page, and you will
find a line like:
<PRE>
        Mean Object Size:       11.96 KB
</PRE>
<P>
<P>To make the warning message go away, set 'store_avg_object_size'
to that value (or lower) and then restart Squid.
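<P>For example, given the 11.96 KB mean shown above, a squid.conf line like the following (the 12 KB value here is only an example; use your own cachemgr number or lower) would size the bitmap accordingly:

```
store_avg_object_size 12 KB
```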
<P>
<H2><A NAME="ss11.27">11.27 I am using up over 95% of the filemap bits?!!</A>
</H2>

<P><EM>Note: The information here is current for version 2.3</EM>
<P>Calm down, this is now normal.  Squid now dynamically allocates
filemap bits based on the number of objects in your cache.
You won't run out of them, we promise.
<P>
<P>
<H2><A NAME="ss11.28">11.28 FATAL: Cannot open /usr/local/squid/logs/access.log: (13) Permission denied</A>
</H2>

<P>In Unix, things like <EM>processes</EM> and <EM>files</EM> have an <EM>owner</EM>.  
For Squid, the process owner and file owner should be the same.  If they
are not the same, you may get messages like ``permission denied.''
<P>To find out who owns a file, use the <EM>ls -l</EM> command:
<PRE>
        % ls -l /usr/local/squid/logs/access.log
</PRE>
<P>
<P>A process is normally owned by the user who starts it.  However,
Unix sometimes allows a process to change its owner.  If you
specified a value for the <EM>effective_user</EM>
option in <EM>squid.conf</EM>, then that will be the process owner.
The files must be owned by this same userid.
<P>
<P>If all this is confusing, then you probably should not be
running Squid until you learn some more about Unix.
As a reference, I suggest 
<A HREF="http://www.oreilly.com/catalog/lunix4/">Learning the UNIX Operating System, 4th Edition</A>.
<P>
<H2><A NAME="ss11.29">11.29 When using a username and password, I can not access some files.</A>
</H2>

<P><I>If I try by way of a test, to access</I>
<PRE>
        ftp://username:password@ftpserver/somewhere/foo.tar.gz
</PRE>

<I>I get</I>
<PRE>
        somewhere/foo.tar.gz: Not a directory.
</PRE>
<P>
<P>Use this URL instead (<CODE>%2f</CODE> is the URL-encoded slash; it makes the path absolute instead of relative to the FTP login directory):
<PRE>
        ftp://username:password@ftpserver/%2fsomewhere/foo.tar.gz
</PRE>
<P>
<H2><A NAME="ss11.30">11.30 pingerOpen: icmp_sock: (13) Permission denied</A>
</H2>

<P>This means your <EM>pinger</EM> program does not have root privileges.
You should either do this:
<PRE>
        % su
        # make install-pinger
</PRE>

or
<PRE>
        # chown root /usr/local/squid/bin/pinger
        # chmod 4755 /usr/local/squid/bin/pinger
</PRE>
<P>
<H2><A NAME="ss11.31">11.31 What is a forwarding loop?</A>
</H2>

<P>A forwarding loop is when a request passes through one proxy more than
once.  You can get a forwarding loop if
<UL>
<LI>a cache forwards requests to itself.  This might happen with
transparent caching (or server acceleration) configurations.</LI>
<LI>a pair or group of caches forward requests to each other.  This can
happen when Squid uses ICP, Cache Digests, or the ICMP RTT database
to select a next-hop cache.</LI>
</UL>
<P>
<P>Forwarding loops are detected by examining the <EM>Via</EM> request header.
Each cache which "touches" a request must add its hostname to the
<EM>Via</EM> header.  If a cache notices its own hostname in this header
for an incoming request, it knows there is a forwarding loop somewhere.
NOTE:
A pair of caches which have the same <EM>visible_hostname</EM> value
will report forwarding loops.
<P>
<P>When Squid detects a forwarding loop, it is logged to the <EM>cache.log</EM>
file with the received <EM>Via</EM> header.  From this header you can determine
which cache (the last in the list) forwarded the request to you.
<P>
<P>One way to reduce forwarding loops is to change a <EM>parent</EM>
relationship to a <EM>sibling</EM> relationship.
<P>
<P>Another way is to use <EM>cache_peer_access</EM> rules.  For example:
<PRE>
        # Our parent caches
        cache_peer A.example.com parent 3128 3130
        cache_peer B.example.com parent 3128 3130
        cache_peer C.example.com parent 3128 3130

        # An ACL list
        acl PEERS src A.example.com
        acl PEERS src B.example.com
        acl PEERS src C.example.com

        # Prevent forwarding loops
        cache_peer_access A.example.com allow !PEERS
        cache_peer_access B.example.com allow !PEERS
        cache_peer_access C.example.com allow !PEERS
</PRE>

The above configuration instructs squid to NOT forward a request
to parents A, B, or C when a request is received from any one
of those caches.
<P>
<H2><A NAME="ss11.32">11.32 accept failure: (71) Protocol error</A>
</H2>

<P>This error message is seen mostly on Solaris systems.
<A HREF="mailto:mtk@ny.ubs.com">Mark Kennedy</A>
gives a great explanation:
<BLOCKQUOTE>
Error 71 [EPROTO] is an obscure way of reporting that clients made it onto your
server's TCP incoming connection queue but the client tore down the
connection before the server could accept it.  I.e.  your server ignored
its clients for too long.  We've seen this happen when we ran out of
file descriptors.  I guess it could also happen if something made squid
block for a long time.
</BLOCKQUOTE>
<P>
<H2><A NAME="ss11.33">11.33 storeSwapInFileOpened: ... Size mismatch</A>
</H2>

<P><I>Got these messages in my cache log - I guess it means that the index
contents do not match the contents on disk.</I>
<PRE>
1998/09/23 09:31:30| storeSwapInFileOpened: /var/cache/00/00/00000015: Size mismatch: 776(fstat) != 3785(object)
1998/09/23 09:31:31| storeSwapInFileOpened: /var/cache/00/00/00000017: Size mismatch: 2571(fstat) != 4159(object)
</PRE>
<P>
<P><I>What does Squid do in this case?</I>
<P>
<P>NOTE, these messages are specific to Squid-2.  These happen when Squid
reads an object from disk for a cache hit.  After it opens the file,
Squid checks to see if the size is what it expects it should be.  If the
size doesn't match, the error is printed.  In this case, Squid does not
send the wrong object to the client.  It will re-fetch the object from
the source.
<P>
<H2><A NAME="ss11.34">11.34 Why do I get <EM>fwdDispatch: Cannot retrieve 'https://www.buy.com/corp/ordertracking.asp'</EM></A>
</H2>

<P>These messages are caused by buggy clients, mostly Netscape Navigator.
What happens is, Netscape sends an HTTPS/SSL request over a persistent HTTP connection.
Normally, when Squid gets an SSL request, it looks like this:
<PRE>
        CONNECT www.buy.com:443 HTTP/1.0
</PRE>

Then Squid opens a TCP connection to the destination host and port, and
the <EM>real</EM> request is sent encrypted over this connection.  That's the
whole point of SSL: all of the information must be sent encrypted.
<P>
<P>With this client bug, however, Squid receives a request like this:
<PRE>
        GET https://www.buy.com/corp/ordertracking.asp HTTP/1.0
        Accept: */*
        User-agent: Netscape ...
        ...
</PRE>

Now, all of the headers, and the message body have been sent, <EM>unencrypted</EM>
to Squid.  There is no way for Squid to somehow turn this into an SSL request.
The only thing we can do is return the error message.
<P>
<P>Note, this browser bug does represent a security risk because the browser
is sending sensitive information unencrypted over the network.
<P>
<H2><A NAME="ss11.35">11.35 Squid can't access URLs like http://3626046468/ab2/cybercards/moreinfo.html</A>
</H2>

<P>by Dave J Woolley (DJW at bts dot co dot uk)
<P>These are illegal URLs, generally only used by illegal sites;
typically the web site that supports a spammer and is expected to
survive a few hours longer than the spamming account.
<P>Their intention is to:
<UL>
<LI>confuse content filtering rules on proxies, and possibly
some browsers' idea of whether they are trusted sites on
the local intranet;</LI>
<LI>confuse whois (?);</LI>
<LI>make people think they are not IP addresses and unknown
domain names, in an attempt to stop them trying to locate
and complain to the ISP.</LI>
</UL>
<P>Any browser or proxy that works with them should be considered a
security risk.
<P>
<A HREF="http://www.ietf.org/rfc/rfc1738.txt">RFC 1738</A>
has this to say about the hostname part of a URL:
<BLOCKQUOTE>
The fully qualified domain name of a network host, or its IP
address as a set of four decimal digit groups separated by
".". Fully qualified domain names take the form as described
in Section 3.5 of RFC 1034 [13] and Section 2.1 of RFC 1123
[5]: a sequence of domain labels separated by ".", each domain
label starting and ending with an alphanumerical character and
possibly also containing "-" characters. The rightmost domain
label will never start with a digit, though, which
syntactically distinguishes all domain names from the IP
addresses.
</BLOCKQUOTE>
<P>
<H2><A NAME="ss11.36">11.36 I get a lot of ``URI has whitespace'' error messages in my cache log, what should I do?</A>
</H2>

<P>Whitespace characters (space, tab, newline, carriage return) are
not allowed in URIs and URLs.  Unfortunately, a number of Web services
generate URLs with whitespace.  Of course your favorite browser silently
accommodates these bad URLs.  The servers (or people) that generate
these URLs are in violation of Internet standards.  The whitespace
characters should be encoded.  
<P>
<P>If you want Squid to accept URLs with whitespace, you have to
decide how to handle them.  There are four choices that you
can set with the <EM>uri_whitespace</EM> option:
<OL>
<LI>DENY:
The request is denied with an ``Invalid Request'' message.
This is the default.</LI>
<LI>ALLOW:
The request is allowed and the URL remains unchanged.</LI>
<LI>ENCODE:
The whitespace characters are encoded according to
<A HREF="http://www.ietf.org/rfc/rfc1738.txt">RFC 1738</A>.  This can be considered a violation
of the HTTP specification.</LI>
<LI>CHOP:
The URL is chopped at the first whitespace character
and then processed normally.  This also can be considered
a violation of HTTP.</LI>
</OL>
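<P>Whichever behaviour you pick goes on a single squid.conf line; for example, to select ENCODE:

```
uri_whitespace encode
```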
<P>
<H2><A NAME="comm-bind-loopback-fail"></A> <A NAME="ss11.37">11.37 commBind: Cannot bind socket FD 5 to 127.0.0.1:0: (49) Can't assign requested address</A>
</H2>

<P>This likely means that your system does not have a loopback network device, or
that device is not properly configured.
All Unix systems should have a loopback network device, named <EM>lo0</EM> (<EM>lo</EM> on Linux), and it should
be configured with the address 127.0.0.1.  If not, you may get the above
error message.
To check your system, run:
<PRE>
        % ifconfig lo0
</PRE>

The result should look something like:
<PRE>
        lo0: flags=8049&lt;UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
                inet 127.0.0.1 netmask 0xff000000 
</PRE>
<P>
<P>If you use FreeBSD, see 
<A HREF="FAQ-14.html#freebsd-no-lo0">this</A>.
<P>
<H2><A NAME="ss11.38">11.38 Unknown cache_dir type '/var/squid/cache'</A>
</H2>

<P>The format of the <EM>cache_dir</EM> option changed with version
2.3.  It now takes a <EM>type</EM> argument.  All you need to do
is insert <CODE>ufs</CODE> in the line, like this:
<PRE>
        cache_dir ufs /var/squid/cache ...
</PRE>
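<P>The trailing arguments keep their old meanings: the cache size in megabytes, then the number of first- and second-level subdirectories.  A complete example line, using the default 100 MB / 16 / 256 values as placeholders:

```
cache_dir ufs /var/squid/cache 100 16 256
```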
<P>
<H2><A NAME="ss11.39">11.39 unrecognized: 'cache_dns_program /usr/local/squid/bin/dnsserver'</A>
</H2>

<P>As of Squid 2.3, the default is to use internal DNS lookup code.
The <EM>cache_dns_program</EM> and <EM>dns_children</EM> options are not
known squid.conf directives in this case.  Simply comment out
these two options.
<P>If you want to use external DNS lookups, with the <EM>dnsserver</EM>
program, then add this to your configure command:
<PRE>
        --disable-internal-dns
</PRE>
<P>
<H2><A NAME="ss11.40">11.40 Is <EM>dns_defnames</EM> broken in 2.3.STABLE1 and STABLE2?</A>
</H2>

<P>Sort of.   As of Squid 2.3, the default is to use internal DNS lookup code.
The <EM>dns_defnames</EM> option is only used with the external <EM>dnsserver</EM>
processes.  If you relied on <EM>dns_defnames</EM> before, you have three choices:
<OL>
<LI>See if the <EM>append_domain</EM> option will work for you instead.</LI>
<LI>Configure squid with --disable-internal-dns to use the external
dnsservers.</LI>
<LI>Enhance <EM>src/dns_internal.c</EM> to understand the <CODE>search</CODE>
and <CODE>domain</CODE> lines from <EM>/etc/resolv.conf</EM>.</LI>
</OL>
<P>
<H2><A NAME="ss11.41">11.41 What does <EM>sslReadClient: FD 14: read failure: (104) Connection reset by peer</EM> mean?</A>
</H2>

<P>``Connection reset by peer'' is an error code that Unix operating systems
sometimes return for <EM>read</EM>, <EM>write</EM>, <EM>connect</EM>, and other 
system calls.
<P>Connection reset means that the other host, the peer, sent us a RESET
packet on a TCP connection.  A host sends a RESET when it receives
an unexpected packet for a nonexistent connection.  For example, if 
one side sends data at the same time that the other side closes
a connection, when the other side receives the data it may send
a reset back.
<P>The fact that these messages appear in Squid's log might indicate
a problem, such as a broken origin server or parent cache.  On
the other hand, they might be ``normal,'' especially since
some applications are known to force connection resets rather
than a proper close.
<P>You probably don't need to worry about them, unless you receive
a lot of user complaints relating to SSL sites.
<P>
Rick Jones (raj at cup dot hp dot com) notes that
if the server is running a Microsoft TCP stack, clients
receive RST segments whenever the listen queue overflows.  In other words,
if the server is really busy, new connections receive the reset message.
This is contrary to rational behaviour, but is unlikely to change.
<P>
<P>
<H2><A NAME="ss11.42">11.42 What does <EM>Connection refused</EM> mean?</A>
</H2>

<P>This is an error message, generated by your operating system,
in response to a <EM>connect()</EM> system call.  It happens when 
there is no server at the other end listening on the port number
that we tried to connect to.
<P>It's quite easy to generate this error on your own.  Simply
telnet to a random, high numbered port:
<PRE>
% telnet localhost 12345
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
</PRE>

It happens because there is no server listening for connections
on port 12345.
<P>When you see this in response to a URL request, it probably means
the origin server web site is temporarily down.  It may also mean
that your parent cache is down, if you have one.
<P>
<H2><A NAME="ss11.43">11.43 squid: ERROR: no running copy</A>
</H2>

<P>You may get this message when you run commands like <CODE>squid -krotate</CODE>.
<P>This error message usually means that the <EM>squid.pid</EM> file is
missing.  Since the PID file is normally present when squid is running,
the absence of the PID file usually means Squid is not running.
If you accidentally delete the PID file, Squid will continue running, and
you won't be able to send it any signals.
<P>If you accidentally removed the PID file, there are two ways to get it back.
<OL>
<LI>run <CODE>ps</CODE> and find the Squid process id.  You'll probably see
two processes, like this:
<PRE>
bender-wessels % ps ax | grep squid
83617  ??  Ss     0:00.00 squid -s
83619  ??  S      0:00.48 (squid) -s (squid)
</PRE>

You want the second process id, 83619 in this case.   Create the PID file and put the
process id number there.  For example:
<PRE>
echo 83619 > /usr/local/squid/logs/squid.pid
</PRE>
</LI>
<LI>Use the above technique to find the Squid process id.  Send the process a HUP
signal, which is the same as <CODE>squid -kreconfigure</CODE>:
<PRE>
kill -HUP 83619
</PRE>

The reconfigure process creates a new PID file automatically.</LI>
</OL>
<P>
<H2><A NAME="ss11.44">11.44 FATAL: getgrnam failed to find groupid for effective group 'nogroup'</A>
</H2>

<P>You are probably starting Squid as root.  Squid is trying to find
a group-id that doesn't have any special privileges that it will
run as.  The default is <EM>nogroup</EM>, but this may not be defined
on your system.  You need to edit <EM>squid.conf</EM> and set 
<EM>cache_effective_group</EM> to the name of an unprivileged group
from <EM>/etc/group</EM>.  There is a good chance that <EM>nobody</EM>
will work for you.
<P>
<H2><A NAME="ss11.45">11.45 ``Unsupported Request Method and Protocol'' for <EM>https</EM> URLs.</A>
</H2>

<P><EM>Note: The information here is current for version 2.3.</EM>
<P>This is correct.  Squid does not know what to do with an <EM>https</EM>
URL.  To handle such a URL, Squid would need to speak the SSL 
protocol.  Unfortunately, it does not (yet).
<P>Normally, when you type an <EM>https</EM> URL into your browser, one of
two things happens.
<OL>
<LI>The browser opens an SSL connection directly to the origin
server.</LI>
<LI>The browser tunnels the request through Squid with the
<EM>CONNECT</EM> request method.</LI>
</OL>
<P>The <EM>CONNECT</EM> method is a way to tunnel any kind of
connection through an HTTP proxy.  The proxy doesn't 
understand or interpret the contents.  It just passes 
bytes back and forth between the client and server.
For the gory details on tunnelling and the CONNECT
method, please see
<A HREF="ftp://ftp.isi.edu/in-notes/rfc2817.txt">RFC 2817</A>
and 
<A HREF="http://www.web-cache.com/Writings/Internet-Drafts/draft-luotonen-web-proxy-tunneling-01.txt">Tunneling TCP based protocols through Web proxy servers</A> (expired).
<P>
<H2><A NAME="ss11.46">11.46 Squid uses 100% CPU</A>
</H2>

<P>There may be many causes for this.
<P>Andrew Doroshenko reports that removing <EM>/dev/null</EM>, or 
mounting a filesystem with the <EM>nodev</EM> option, can cause
Squid to use 100% of CPU.  His suggested solution is to ``touch /dev/null.''
<P>
<P>
<P>
<HR>
<A HREF="FAQ-12.html">Next</A>
<A HREF="FAQ-10.html">Previous</A>
<A HREF="FAQ.html#toc11">Contents</A>
</BODY>
</HTML>