<HTML>
<HEAD>
<TITLE>SQUID Frequently Asked Questions: Troubleshooting</TITLE>
</HEAD>
<BODY>
<A HREF="FAQ-9.html">Previous</A>
<A HREF="FAQ-11.html">Next</A>
<A HREF="FAQ.html#toc10">Table of Contents</A>
<HR>
<H2><A NAME="s10">10. Troubleshooting</A></H2>

<H2><A NAME="ss10.1">10.1 Why am I getting ``Proxy Access Denied?''</A></H2>

<P>If <EM>squid</EM> is in httpd-accelerator mode, it will accept normal
HTTP requests and forward them to an HTTP server, but it will not
honor proxy requests.  If you want your cache to also accept
proxy-HTTP requests then you must enable this feature:
<PRE>
        http_accel_with_proxy on
</PRE>

Alternatively, you may have misconfigured one of your ACLs.  Check the
<EM>access.log</EM> and <EM>squid.conf</EM> files for clues.</P>
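
<P>A common ACL mistake is ordering: the first matching
<CODE>http_access</CODE> rule wins, so a broad deny placed before the
allow rule for your own network blocks everyone.  A minimal sketch
(the network address is only a placeholder for your own):
<PRE>
        acl mynetwork src 192.168.1.0/255.255.255.0
        http_access allow mynetwork
        http_access deny all
</PRE>
</P>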


<H2><A NAME="ss10.2">10.2 I can't get <CODE>local_domain</CODE> to work; <EM>Squid</EM> is caching the objects from my local servers.</A></H2>

<P>The <CODE>local_domain</CODE> directive does not prevent local
objects from being cached.  It prevents the use of sibling caches
when fetching local objects.  If you want to prevent objects from
being cached, use the <CODE>cache_stoplist</CODE> or <CODE>http_stop</CODE>
configuration options (depending on your version).</P>
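
<P>For example, to keep objects from your local servers out of the
cache, something like this should work (the domain name is a
placeholder; requests whose URLs contain a listed word are not
cached):
<PRE>
        cache_stoplist mydomain.com
</PRE>
</P>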


<H2><A NAME="ss10.3">10.3 I get <CODE>Connection Refused</CODE> when the cache tries to retrieve an object located on a sibling, even though the sibling thinks it delivered the object to my cache.</A></H2>


<P>If the HTTP port number is wrong but the ICP port is correct, you
will send ICP queries correctly, and the ICP replies will fool your
cache into thinking the configuration is correct.  Large objects
will fail, however, since you don't have the correct HTTP port for the
sibling in your <EM>squid.conf</EM> file.  If your sibling changed their
<CODE>http_port</CODE>, you could have this problem for some time
before noticing.</P>
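
<P>When in doubt, compare notes with the sibling's administrator and
verify both port numbers on your <CODE>cache_host</CODE> line; the
third field is the sibling's HTTP port and the fourth its ICP port.
A sketch (the hostname is a placeholder):
<PRE>
        cache_host sibling.example.com sibling 3128 3130
</PRE>
</P>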


<H2><A NAME="ss10.4">10.4 Running out of filedescriptors</A></H2>


<P>If you see the <CODE>Too many open files</CODE> error message, you
are most likely running out of file descriptors.  This may be due
to running Squid on an operating system with a low filedescriptor
limit.  This limit is often configurable in the kernel or with
other system tuning tools.  There are two ways to run out of file
descriptors:  first, you can hit the per-process limit on file
descriptors.  Second, you can hit the system limit on total file
descriptors for all processes.</P>
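
<P>To see the per-process limit your shell currently imposes, you can
ask the shell itself before starting Squid (sh/bash syntax shown;
under csh use <CODE>limit descriptors</CODE>):
<PRE>
        ulimit -n
</PRE>
</P>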

<P>For Linux, have a look at
<A HREF="http://www.linux.org.za/filehandle.patch.linux">filehandle.patch.linux</A>
by
<A HREF="mailto:michael@metal.iinet.net.au">Michael O'Reilly</A></P>

<P>For Solaris, add the following to your <EM>/etc/system</EM> file to
increase your maximum file descriptors per process:</P>
<P>
<PRE>
        set rlim_fd_max = 4096
        set rlim_fd_cur = 1024
</PRE>
</P>
<P>You should also <CODE>#define SQUID_FD_SETSIZE</CODE> in
<EM>include/config.h</EM> to whatever you set
<CODE>rlim_fd_max</CODE> to.  Going beyond 4096 may break things
in the kernel.</P>
<P>Solaris' <CODE>select(2)</CODE> only handles 1024 descriptors, so
if you need more, edit <EM>src/Makefile</EM> and enable
<CODE>$(USE_POLL_OPT)</CODE>.  Then recompile <EM>squid</EM>.</P>

<P>For FreeBSD (by Torsten Sturm &lt;torsten.sturm@axis.de&gt;):
<OL>
<LI>How do I check my maximum filedescriptors?
<P>Do <CODE>sysctl -a</CODE> and look for the value of
<CODE>kern.maxfilesperproc</CODE>.</P>
</LI>
<LI>How do I increase them?
<PRE>
        sysctl -w kern.maxfiles=XXXX
        sysctl -w kern.maxfilesperproc=XXXX
</PRE>

<B>Warning</B>: You probably want <CODE>maxfiles
&gt; maxfilesperproc</CODE> if you're going to be pushing the
limit.</LI>
<LI>What is the upper limit?
<P>I don't think there is a formal upper limit inside the kernel.
All the data structures are dynamically allocated.  In practice
there might be unintended metaphenomena (kernel spending too much
time searching tables, for example).</P>
</LI>
</OL>
</P>

<P>For most BSD-derived systems (SunOS, 4.4BSD, OpenBSD, FreeBSD,
NetBSD, BSD/OS, 386BSD, Ultrix) you can also use the ``brute force''
method to increase these values in the kernel (requires a kernel
rebuild):
<OL>
<LI>How do I check my maximum filedescriptors?
<P>Do <CODE>pstat -T</CODE> and look for the <CODE>files</CODE>
value, typically expressed as the ratio <CODE>current/maximum</CODE>.</P>
</LI>
<LI>How do I increase them the easy way?
<P>One way is to increase the value of the <CODE>maxusers</CODE> variable
in the kernel configuration file and build a new kernel.  This method
is quick and easy but also has the effect of increasing a wide variety of
other variables that you may not need or want increased.</P>
</LI>
<LI>Is there a more precise method?
<P>Another way is to find the <EM>param.c</EM> file in your kernel
build area and change the arithmetic behind the relationship between
<CODE>maxusers</CODE> and the maximum number of open files.</P>
</LI>
</OL>

Here are a few examples which should lead you in the right direction:
<OL>
<LI>SunOS
<P>Change the value of <CODE>nfile</CODE> in <EM>/usr/kvm/sys/conf.common/param.c</EM> by altering this equation:
<PRE>
        int     nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;
</PRE>

Where <CODE>NPROC</CODE> is defined by:
<PRE>
        #define NPROC (10 + 16 * MAXUSERS)
</PRE>
</P>
</LI>
<LI>FreeBSD (from the 2.1.6 kernel)
<P>Very similar to SunOS, edit <EM>/usr/src/sys/conf/param.c</EM>
and alter the relationship between <CODE>maxusers</CODE> and the
<CODE>maxfiles</CODE> and <CODE>maxfilesperproc</CODE> variables:
<PRE>
        int     maxfiles = NPROC*2;
        int     maxfilesperproc = NPROC*2;
</PRE>

Where <CODE>NPROC</CODE> is defined by:
<CODE>#define NPROC (20 + 16 * MAXUSERS)</CODE>
The per-process limit can also be adjusted directly in the kernel
configuration file with the following directive:
<CODE>options OPEN_MAX=128</CODE></P>
</LI>
<LI>BSD/OS (from the 2.1 kernel)
<P>Edit <CODE>/usr/src/sys/conf/param.c</CODE> and adjust the
<CODE>maxfiles</CODE> math here:
<PRE>
        int     maxfiles = 3 * (NPROC + MAXUSERS) + 80;
</PRE>

Where <CODE>NPROC</CODE> is defined by:
<CODE>#define NPROC (20 + 16 * MAXUSERS)</CODE>
You should also set the <CODE>OPEN_MAX</CODE> value in your kernel
configuration file to change the per-process limit.</P>
</LI>
</OL>
</P>
<P><B>NOTE:</B> After you rebuild/reconfigure your kernel with more
filedescriptors, you must then recompile Squid.  Squid's configure
script determines how many filedescriptors are available, so you
must make sure the configure script runs again as well.  For example:
<PRE>
    cd squid-1.1.x
        make realclean
        ./configure --prefix=/usr/local/squid
        make
</PRE>
</P>


<H2><A NAME="malloc-death"></A> <A NAME="ss10.5">10.5 My <EM>squid</EM> dies periodically, and I see log entries complaining about being unable to <CODE>malloc(3)</CODE> more memory, but my system has lots of RAM available!</A></H2>

<P>by 
<A HREF="mailto:hno@hem.passagen.se">Henrik Nordstrom</A></P>

<P>The message "FATAL: xcalloc: Unable to allocate 4096 blocks of 1 bytes!"
is seen when Squid can't allocate more memory, and on most operating systems
(including BSD) there are only two possible reasons:
<OL>
<LI>The machine is out of swap</LI>
<LI>The max data segment size is reached</LI>
</OL>

The first case is detected using the normal swap monitoring tools
available on the platform (<EM>pstat</EM> on SunOS; <EM>pstat</EM> may be
used on BSD as well).</P>
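
<P>For example, either of these commands reports swap usage (the
first on SunOS and most BSDs, the second on Solaris):
<PRE>
        pstat -s
        swap -s
</PRE>
</P>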
<P>To tell if it is the second case, first rule out the first case and then
monitor the size of the Squid process.  If it dies at a certain size with
plenty of swap left, then the maximum data segment size has been reached,
beyond any doubt.</P>
<P>The data segment size can be limited by two factors:
<OL>
<LI>Kernel imposed maximum, which no user can go above</LI>
<LI>The size set with ulimit.</LI>
</OL>
</P>
<P>When Squid starts, it sets the data and file ulimits to the hard level.
If you manually tune ulimit before starting Squid, make sure that you set
the hard limit and not only the soft limit (the default operation of
ulimit is to change only the soft limit).  Only root is allowed to raise
the hard limit.</P>
<P>This command prints the hard limits:
<PRE>
        ulimit -aH
</PRE>
</P>
<P>This command sets the data size to unlimited:
<PRE>
        ulimit -HSd unlimited
</PRE>
</P>


<H3>BSD/OS</H3>

<P>by 
<A HREF="mailto:Arjan.deVet@adv.IAEhv.nl">Arjan de Vet</A></P>
<P>The default kernel limit on BSD/OS for datasize is 64MB (at least on 3.0
which I'm using).</P>

<P>Recompile a kernel with larger datasize settings:</P>
<P>
<PRE>
        maxusers        128
        # Support for large inpcb hash tables, e.g. busy WEB servers.
        options         INET_SERVER
        # support for large routing tables, e.g. gated with full Internet routing:
        options         &quot;KMEMSIZE=\(16*1024*1024\)&quot;
        options         &quot;DFLDSIZ=\(128*1024*1024\)&quot;
        options         &quot;DFLSSIZ=\(8*1024*1024\)&quot;
        options         &quot;SOMAXCONN=128&quot;
        options         &quot;MAXDSIZ=\(256*1024*1024\)&quot;
</PRE>
</P>
<P>See <EM>/usr/share/doc/bsdi/config.n</EM> for more info.</P>

<P>In /etc/login.conf I have this:</P>
<P>
<PRE>
        default:\
                :path=/bin /usr/bin /usr/contrib/bin:\
                :datasize-cur=256M:\
                :openfiles-cur=1024:\
                :openfiles-max=1024:\
                :maxproc-cur=1024:\
                :stacksize-cur=64M:\
                :radius-challenge-styles=activ,crypto,skey,snk,token:\
                :tc=auth-bsdi-defaults:\
                :tc=auth-ftp-bsdi-defaults:
        
        #
        # Settings used by /etc/rc and root
        # This must be set properly for daemons started as root by inetd as well.
        # Be sure reset these values back to system defaults in the default class!
        #
        daemon:\
                :path=/bin /usr/bin /sbin /usr/sbin:\
                :widepasswords:\
                :tc=default:
        #       :datasize-cur=128M:\
        #       :openfiles-cur=256:\
        #       :maxproc-cur=256:\
</PRE>
</P>

<P>This should give enough space for a 256MB squid process.</P>

<H3>FreeBSD (2.2.X)</H3>

<P>by Duane Wessels</P>
<P>The procedure is almost identical to that for BSD/OS above.
Increase the open filedescriptor limit in <EM>/sys/conf/param.c</EM>:
<PRE>
        int     maxfiles = 4096;
        int     maxfilesperproc = 1024;
</PRE>

Increase the maximum and default data segment size in your kernel
config file, e.g. <EM>/sys/conf/i386/CONFIG</EM>:
<PRE>
        options         &quot;MAXDSIZ=(512*1024*1024)&quot;
        options         &quot;DFLDSIZ=(128*1024*1024)&quot;
</PRE>

We also found it necessary to increase the number of mbuf clusters:
<PRE>
        options         &quot;NMBCLUSTERS=10240&quot;
</PRE>

And, if you have more than 256 MB of physical memory, you probably
have to disable BOUNCE_BUFFERS (whatever that is), so comment
out this line:
<PRE>
        #options        BOUNCE_BUFFERS          #include support for DMA bounce buffers
</PRE>
</P>

<P>Also, update limits in <EM>/etc/login.conf</EM>:
<PRE>
        # Settings used by /etc/rc
        #
        daemon:\
                :coredumpsize=infinity:\
                :datasize=infinity:\
                :maxproc=256:\
                :maxproc-cur@:\
                :memoryuse-cur=64M:\
                :memorylocked-cur=64M:\
                :openfiles=4096:\
                :openfiles-cur@:\
                :stacksize=64M:\
                :tc=default:
</PRE>

And don't forget to run ``cap_mkdb /etc/login.conf'' after editing that file.</P>


<H3>OSF, Digital Unix</H3>

<P>by 
<A HREF="mailto:ongbh@zpoprp.zpo.dec.com">Ong Beng Hui</A></P>
<P>To increase the data size for Digital UNIX, edit the file <CODE>/etc/sysconfigtab</CODE>
and add the entry...
<PRE>
        proc:
                per-proc-data-size=1073741824
</PRE>

Or, with csh, use the limit command, such as
<PRE>
        &gt; limit datasize 1024M
</PRE>
</P>

<P>Editing <CODE>/etc/sysconfigtab</CODE> requires a reboot, but the limit command
doesn't.</P>


<H2><A NAME="ss10.6">10.6 What are these strange lines about removing objects?</A></H2>

<P>For example:
<PRE>
        97/01/23 22:31:10| Removed 1 of 9 objects from bucket 3913
        97/01/23 22:33:10| Removed 1 of 5 objects from bucket 4315
        97/01/23 22:35:40| Removed 1 of 14 objects from bucket 6391
</PRE>
</P>
<P>These log entries are normal, and do not indicate that <EM>squid</EM> has
reached <CODE>cache_swap_high</CODE>.</P>

<P>Consult your cache information page in <EM>cachemgr.cgi</EM> for
a line like this:</P>
<P>
<PRE>
       Storage LRU Expiration Age:     364.01 days
</PRE>
</P>
<P>Objects which have not been used for that amount of time are removed as
a part of the regular maintenance.  You can set an upper limit on the
<CODE>LRU Expiration Age</CODE> value with <CODE>reference_age</CODE> in the config
file.</P>
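
<P>For example, to expire objects that have not been referenced for a
month (a sketch; pick a value suited to your cache size):
<PRE>
        reference_age 1 month
</PRE>
</P>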


<H2><A NAME="ss10.7">10.7 Can I change a Windows NT FTP server to list directories in Unix format?</A></H2>

<P>Why, yes you can!  Select the following menus:
<UL>
<LI>Start</LI>
<LI>Programs</LI>
<LI>Microsoft Internet Server (Common)</LI>
<LI>Internet Service Manager</LI>
</UL>
</P>
<P>This will bring up a box with icons for your various services. One of
them should be a little ftp ``folder.'' Double click on this.</P>
<P>You will then have to select the server (there should only be one).
Select it, then choose ``Properties'' from the menu and choose the
``directories'' tab along the top.</P>
<P>There will be an option at the bottom saying ``Directory listing style.''
Choose the ``Unix'' type, not the ``MS-DOS'' type.</P>
<P>
<BLOCKQUOTE>
--Oskar Pearson &lt;oskar@is.co.za&gt;
</BLOCKQUOTE>
</P>


<H2><A NAME="ss10.8">10.8 Why does Squid use so much memory!?</A></H2>

<P>One reason that Squid is fast and able to handle a lot of requests with a 
single process is that it uses a lot of memory.
First, please see these other related FAQ entries:
<UL>
<LI>
<A HREF="FAQ-8.html#huge-memory-pool">The in-memory object pool</A></LI>
<LI>
<A HREF="FAQ-8.html#analyze-memory-usage">Analyzing memory usage</A></LI>
<LI>
<A HREF="#malloc-death">Malloc failures</A></LI>
</UL>
</P>

<P>Many users have found improved performance when linking Squid with an external malloc
library. See 
<A HREF="FAQ-3.html#gnu-malloc">Using GNU malloc</A>.</P>


<H2><A NAME="ss10.9">10.9 Why am I getting ``Ignoring MISS from non-peer x.x.x.x?''</A></H2>

<P>You are receiving ICP MISSes (via UDP) from a parent or sibling cache
whose IP address your cache does not know about.  This may happen
in two situations.</P>

<P>
<OL>
<LI>If the peer is multihomed, it is sending packets out an interface
which is not advertised in the DNS.  Unfortunately, this is a
configuration problem at the peer site.  You can tell them to either
add the IP address interface to their DNS, or use Squid's
'udp_outgoing_address' option to force the replies
out a specific interface.  For example:
<P><EM>on your parent squid.conf:</EM>
<PRE>
        udp_outgoing_address proxy.parent.com
</PRE>

<EM>on your squid.conf:</EM>
<PRE>
        cache_host proxy.parent.com parent 3128 3130
</PRE>
</P>

</LI>
<LI>You can also see this warning when sending ICP queries to 
multicast addresses.  For security reasons, Squid requires
your configuration to list all other caches listening on the
multicast group address.  If an unknown cache listens to that address
and sends replies, your cache will log the warning message.  To fix
this situation, either tell the unknown cache to stop listening
on the multicast address, or, if they are legitimate, add them
to your configuration file, as shown below.</LI>
</OL>
</P>
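
<P>A sketch of a multicast configuration (the group address and
hostnames are placeholders): one <CODE>cache_host</CODE> line sends
queries to the group, and each legitimate responder gets its own line
with the <CODE>multicast-responder</CODE> option:
<PRE>
        cache_host 224.9.9.9 multicast 3128 3130 ttl=64
        cache_host peer1.example.com sibling 3128 3130 multicast-responder
        cache_host peer2.example.com sibling 3128 3130 multicast-responder
</PRE>
</P>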


<H2><A NAME="ss10.10">10.10 DNS lookups for domain names with underscores (_) always fail.</A></H2>

<P>The standards for naming hosts
(
<A HREF="http://ds.internic.net/rfc/rfc952.txt">RFC 952</A>,
<A HREF="http://ds.internic.net/rfc/rfc1101.txt">RFC 1101</A>)
do not allow underscores in domain names:
<BLOCKQUOTE>
A "name" (Net, Host, Gateway, or Domain name) is a text string up
to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
sign (-), and period (.).
</BLOCKQUOTE>

The resolver library that ships with recent versions of BIND enforces
this restriction, returning an error for any host with an underscore in
the hostname.  The best solution is to complain to the hostmaster of the
offending site and ask them to rename their host.</P>


<H2><A NAME="ss10.11">10.11 Why am I getting access denied from a sibling cache?</A></H2>

<P>The answer to this is somewhat complicated, so please hold on.
<EM>NOTE:</EM> most of this text is taken from
<A HREF="http://www.nlanr.net/~wessels/Papers/icp-squid.ps.gz">ICP and the Squid Web Cache</A>.  </P>

<P>An ICP query does not include any parent or sibling designation,
so the receiver really has no indication of how the peer
cache is configured to use it.  This issue becomes important
when a cache is willing to serve cache hits to anyone, but only
handle cache misses for its paying users or customers.  In other
words, whether or not to allow the request depends on whether the
result is a hit or a miss.  To accomplish this,
Squid acquired the <CODE>miss_access</CODE> feature
in October of 1996.</P>

<P>The necessity of ``miss access'' makes life a little bit complicated,
and not only because it was awkward to implement.  Miss access
means that the ICP query reply must be an extremely accurate prediction
of the result of a subsequent HTTP request.  Ascertaining
this result is actually very hard, if not impossible to
do, since the ICP request cannot convey the
full HTTP request.
Additionally, there are more types of HTTP request results than there
are for ICP.  The ICP query reply will either be a hit or miss.
However, the HTTP request might result in a ``<CODE>304 Not Modified</CODE>'' reply
sent from the origin server.  Such a reply is not strictly a hit since the peer
needed to forward a conditional request to the source.  At the same time,
it's not strictly a miss either, since the local object data is still valid,
and the Not-Modified reply is quite small.</P>

<P>One serious problem for cache hierarchies is mismatched freshness
parameters.  Consider a cache <EM>C</EM> using ``strict''
freshness parameters so its users get maximally current data.
<EM>C</EM> has a sibling <EM>S</EM> with less strict freshness parameters.
When an object is requested at <EM>C</EM>, <EM>C</EM> might
find that <EM>S</EM> already has the object via an ICP query and
ICP HIT response.  <EM>C</EM> then retrieves the object
from <EM>S</EM>.</P>

<P>In an HTTP/1.0 world, <EM>C</EM> (and <EM>C</EM>'s client)
will receive an object that was never
subject to its local freshness rules.  Neither HTTP/1.0 nor ICP provides
any way to ask only for objects less than a certain age.  If the
retrieved object is stale by <EM>C</EM>'s rules,
it will be removed from <EM>C</EM>'s cache, but
it will subsequently be fetched from <EM>S</EM> so long as it
remains fresh there.  This configuration miscoupling
problem is a significant deterrent to establishing
both parent and sibling relationships.</P>

<P>HTTP/1.1 provides numerous request headers to specify freshness
requirements, which actually introduces
a different problem for cache hierarchies:  ICP
still does not include any age information, neither in query nor
reply.  So <EM>S</EM> may return an ICP HIT if its
copy of the object is fresh by its configuration
parameters, but the subsequent HTTP request may result
in a cache miss due to any
<CODE>Cache-control:</CODE> headers originated by <EM>C</EM> or by
<EM>C</EM>'s client.  Situations now emerge where the ICP reply
no longer matches the HTTP request result.</P>

<P>In the end, the fundamental problem is that the ICP query does not
provide enough information to accurately predict whether
the HTTP request
will be a hit or miss.   In fact, the current ICP Internet Draft is very
vague on this subject.  What does ICP HIT really mean?  Does it mean
``I know a little about that URL and have some copy of the object?''  Or
does it mean ``I have a valid copy of that object and you are allowed to
get it from me?''</P>

<P>So, what can be done about this problem?  We really need to change ICP
so that freshness parameters are included.  Until that happens, the members
of a cache hierarchy have only two options to totally eliminate the ``access
denied'' messages from sibling caches:
<OL>
<LI>Make sure all members have the same <CODE>refresh_rules</CODE> parameters.</LI>
<LI>Do not use <CODE>miss_access</CODE> at all.  Promise your sibling cache
administrator that <EM>your</EM> cache is properly configured and that you
will not abuse their generosity.  The sibling cache administrator can
check his log files to make sure you are keeping your word.</LI>
</OL>

If neither of these is realistic, then the sibling relationship should not
exist.</P>
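
<P>For reference, a <CODE>miss_access</CODE> setup looks roughly like
this (a sketch; the address range is a placeholder for your paying
customers):
<PRE>
        acl customers src 172.16.0.0/255.240.0.0
        miss_access allow customers
        miss_access deny all
</PRE>
</P>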


<H2><A NAME="ss10.12">10.12 Cannot bind socket FD NN to *:8080 (125) Address already in use</A></H2>

<P>This means that another process is already listening on port 8080 
(or whatever you're using).  It could mean that you have a Squid process
already running, or it could be from another program.  To verify, use
the <EM>netstat</EM> command:
<PRE>
        netstat -naf inet | grep LISTEN
</PRE>

That will show all sockets in the LISTEN state.  You might also try
<PRE>
        netstat -naf inet | grep 8080
</PRE>

If you find that some process has bound to your port, but you're not sure
which process it is, you might be able to use the excellent
<A HREF="ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/">lsof</A>
program.  It will show you which processes own every open file descriptor
on your system.</P>
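
<P>For example, on many systems this invocation shows which process
holds the port:
<PRE>
        lsof -i :8080
</PRE>
</P>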


<H2><A NAME="ss10.13">10.13 icpDetectClientClose: ERROR xxx.xxx.xxx.xxx: (32) Broken pipe</A></H2>

<P>This means that the client socket was closed by the client 
before Squid was finished sending data to it.  Squid detects this
by trying to <CODE>read(2)</CODE> some data from the socket.  If the 
<CODE>read(2)</CODE> call fails, then Squid knows the socket has been
closed.   Normally the <CODE>read(2)</CODE> call returns <EM>ECONNRESET: Connection reset by peer</EM>
and these are NOT logged.  Any other error messages (such as
<EM>EPIPE: Broken pipe</EM>) are logged to <EM>cache.log</EM>.  See the ``intro'' of
section 2 of your Unix manual for a list of all error codes.</P>


<H2><A NAME="ss10.14">10.14 icpDetectClientClose: FD 135, 255 unexpected bytes</A></H2>

<P>These are caused by misbehaving Web clients attempting to use persistent
connections.  Squid&nbsp;1.1 does not support persistent connections.</P>


<H2><A NAME="ss10.15">10.15 How come Squid doesn't work with NTLM Authorization.</A></H2>

<P>We are not sure. We were unable to find any detailed information
on NTLM (thanks Microsoft!), but here is our best guess:</P>

<P>Squid transparently passes the NTLM request and response headers between 
clients and servers.  The encrypted challenge and response strings most likely
encode the IP address of the client.  Because the proxy is passing these
strings and is connected with a different IP address, the authentication
scheme breaks down.
This implies that if NTLM authentication works at all with proxy caches, the proxy
would need to intercept the NTLM headers and process them itself.</P>

<P>If anyone knows more about NTLM and knows the above to be false, please let us know.</P>


<H2><A NAME="ss10.16">10.16 The <EM>default</EM> parent option isn't working!</A></H2>

<P>This message was received at <EM>squid-bugs</EM>:
<BLOCKQUOTE>
<I>If you have only one parent, configured as:</I>
<PRE>
        cache_host xxxx parent 3128 3130 no-query default
</PRE>

<I>nothing is sent to the parent; neither UDP packets, nor TCP connections.</I>
</BLOCKQUOTE>
</P>

<P>Simply adding <EM>default</EM> to a parent does not force all requests to be sent
to that parent.  The term <EM>default</EM> is perhaps a poor choice of words.  A <EM>default</EM>
parent is only used as a <B>last resort</B>.  If the cache is able to make direct connections,
direct will be preferred over default.  If you want to force all requests to your parent
cache(s), use the <EM>inside_firewall</EM> option:
<PRE>
        inside_firewall none
</PRE>
</P>


<H2><A NAME="ss10.17">10.17 ``Hot Mail'' complains about: Intrusion Logged. Access denied.</A></H2>

<P>``Hot Mail'' is proxy-unfriendly and requires all requests to come from
the same IP address.  You can fix this by adding to your
<EM>squid.conf</EM>:
<PRE>
        hierarchy_stoplist hotmail.com
</PRE>
</P>


<H2><A NAME="ss10.18">10.18 My Squid becomes very slow after it has been running for some time.</A></H2>

<P>This is most likely because Squid is using more memory than it should be
for your system.  When the Squid process becomes large, it experiences a lot
of paging.  This will very rapidly degrade the performance of Squid.
Memory usage is a complicated problem.  There are a number
of things to consider.</P>

<P>First, examine the Cache Manager <EM>Info</EM> output and look at these two lines:
<PRE>
        Number of TCP connections:      121104
        Page faults with physical i/o: 16720
</PRE>

Note, if your system does not have the <EM>getrusage()</EM> function, then you will
not see the page faults line.  </P>

<P>Divide the number of page faults by the number of connections.  In this
case 16720/121104 = 0.14.  Ideally this ratio should be in the 0.0 - 0.1
range.  It may be acceptable to be in the 0.1 - 0.2 range.  Above that,
however, and you will most likely find that Squid's performance is
unacceptably slow.</P>

<P>If the ratio is too high, you will need to make some changes to lower the 
amount of memory Squid uses.  There are a number of things to try:
<UL>
<LI>Buy more memory for your system.</LI>
<LI>Try a different malloc library, such as 
<A HREF="FAQ-3.html#gnu-malloc">GNU malloc</A>.</LI>
<LI>Reduce the <EM>cache_mem</EM> parameter in the config file (see the
sketch after this list).</LI>
<LI>Turn the <EM>memory_pools off</EM> in the config file.</LI>
<LI>Reduce the <EM>cache_swap</EM> parameter in your config file.  This will reduce
the number of objects Squid keeps.  Your hit ratio may go down a little, but your
cache will perform better.</LI>
<LI>Reduce the <EM>maximum_object_size</EM> parameter.  You won't be able to
cache the larger objects, and your byte volume hit ratio may go down,
but Squid will perform better overall.</LI>
<LI>Try the ``NOVM'' version of Squid.</LI>
</UL>
</P>
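
<P>As a sketch, the configuration changes from the list above might
look like this (the values are only examples; in Squid&nbsp;1.1,
<EM>cache_mem</EM> and <EM>cache_swap</EM> are in megabytes and
<EM>maximum_object_size</EM> is in kilobytes):
<PRE>
        cache_mem 8
        cache_swap 1000
        memory_pools off
        maximum_object_size 1024
</PRE>
</P>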


<H2><A NAME="ss10.19">10.19 WARNING: Failed to start 'dnsserver'</A></H2>

<P>This could be a permission problem.  Does the Squid userid have
permission to execute the <EM>dnsserver</EM> program?</P>

<P>You might also try testing <EM>dnsserver</EM> from the command line:
<PRE>
        &gt; echo oceana.nlanr.net | ./dnsserver
</PRE>

Should produce something like:
<PRE>
        $name oceana.nlanr.net
        $h_name oceana.nlanr.net
        $h_len 4
        $ipcount 1
        132.249.40.200
        $aliascount 0
        $ttl 82067
        $end
</PRE>
</P>


<H2><A NAME="ss10.20">10.20 Sending in Squid bug reports</A></H2>

<P>Bug reports for Squid should be sent to the 
<A HREF="mailto:squid-bugs@nlanr.net">squid-bugs alias</A>.  Any bug report must include
<UL>
<LI>The Squid version</LI>
<LI>Your Operating System type and version</LI>
</UL>
</P>

<H3>crashes and core dumps</H3>

<P>There are two conditions under which squid will exit abnormally and
generate a coredump.  First, a SIGSEGV or SIGBUS signal will cause Squid
to exit and dump core.  Second, many functions include consistency
checks.  If one of those checks fails, Squid calls abort() to generate a
core dump.</P>

<P>The core dump file will be left in either one of two locations:
<OL>
<LI>The current directory when Squid was started</LI>
<LI>The first <EM>cache_dir</EM> directory if you have used the
<EM>cache_effective_user</EM> option.</LI>
</OL>

If you cannot find a core file, then either Squid does not have
permission to write in its current directory, or perhaps your shell
limits (csh and clones) are preventing the core file from being written.
If you suspect the current directory is not writable, you can add
<PRE>
        cd /tmp
</PRE>

to your script which starts Squid (e.g. RunCache).</P>

<P>Once you have located the core dump file, use a debugger such as
<EM>dbx</EM> or <EM>gdb</EM> to generate a stack trace:
<PRE>

tirana-wessels squid/src 270% gdb squid /T2/Cache/core
GDB is free software and you are welcome to distribute copies of it
 under certain conditions; type &quot;show copying&quot; to see the conditions.
There is absolutely no warranty for GDB; type &quot;show warranty&quot; for details.
GDB 4.15.1 (hppa1.0-hp-hpux10.10), Copyright 1995 Free Software Foundation, Inc...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.

[...]

(gdb) where
#0  0xc01277a8 in _kill ()
#1  0xc00b2944 in _raise ()
#2  0xc007bb08 in abort ()
#3  0x53f5c in __eprintf (string=0x7b037048 &quot;&quot;, expression=0x5f &lt;Address 0x5f out of bounds&gt;, line=8, filename=0x6b &lt;Address 0x6b out of bounds&gt;)
#4  0x29828 in fd_open (fd=10918, type=3221514150, desc=0x95e4 &quot;HTTP Request&quot;) at fd.c:71
#5  0x24f40 in comm_accept (fd=2063838200, peer=0x7b0390b0, me=0x6b) at comm.c:574
#6  0x23874 in httpAccept (sock=33, notused=0xc00467a6) at client_side.c:1691
#7  0x25510 in comm_select_incoming () at comm.c:784
#8  0x25954 in comm_select (sec=29) at comm.c:1052
#9  0x3b04c in main (argc=1073745368, argv=0x40000dd8) at main.c:671
</PRE>
</P>

<P>If possible, you might keep the coredump file around for a day or
two.  It is often helpful if we can ask you to send additional
debugger output, such as the contents of some variables.</P>

<H3>Non-fatal bugs</H3>

<P>If you find a non-fatal bug, such as incorrect HTTP processing, please
send us a section of your cache.log with full debugging to demonstrate
the problem.  The cache.log file can become very large, so alternatively,
you may want to copy it to an FTP or HTTP server where we can download it.</P>

<P>To enable full debugging on a running squid process, use the <EM>-k debug</EM>
command line option:
<PRE>
        % ./squid -k debug
</PRE>

Use the same command to restore Squid to normal debugging.</P>


<H2><A NAME="ss10.21">10.21 fork: (12) Cannot allocate memory</A></H2>

<P>When Squid is reconfigured (SIGHUP) or the logs are rotated (SIGUSR1),
some of the helper processes (ftpget, dnsserver) must be killed and
restarted.  If your system does not have enough virtual memory, 
the Squid process may not be able to fork to start the new helper
processes. 
The best way to fix this is to increase your virtual memory by adding
swap space.  Normally your system uses raw disk partitions for swap
space, but most operating systems also support swapping on regular
files (Digital Unix excepted).  See your system manual pages for
<EM>swap</EM>, <EM>swapon</EM>, and <EM>mkfile</EM>.</P>
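
<P>On Solaris, for example, adding a swap file looks like this (a
sketch; the size and path are placeholders):
<PRE>
        # mkfile 256m /some/filesystem/swapfile
        # swap -a /some/filesystem/swapfile
</PRE>
</P>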


<H2><A NAME="ss10.22">10.22 FATAL: ipcache_init: DNS name lookup tests failed</A></H2>

<P>Squid normally tests your system's DNS configuration before
it starts serving requests.  Squid tries to resolve some 
common DNS names, as defined in the <EM>dns_testnames</EM> configuration
directive.  If Squid cannot resolve these names, it could mean:
<OL>
<LI>your DNS nameserver is unreachable or not running.</LI>
<LI>your <EM>/etc/resolv.conf</EM> file may contain incorrect information.</LI>
<LI>your <EM>/etc/resolv.conf</EM> file may have incorrect permissions, and
may be unreadable by Squid.</LI>
</OL>
</P>
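
<P>If your resolver works but cannot reach the default test hosts
(for example, behind a firewall), you can point the tests at a name
you know resolves (a sketch; the hostname is a placeholder):
<PRE>
        dns_testnames your.internal.host
</PRE>
</P>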

<P>To disable this feature, use the <EM>-D</EM> command line option.</P>

<P>Note, Squid does NOT use the <EM>dnsservers</EM> to test the DNS.  The
test is performed internally, before the <EM>dnsservers</EM> start.</P>


<H2><A NAME="ss10.23">10.23 FATAL: Failed to make swap directory /var/spool/cache: (13) Permission denied</A></H2>

<P>Starting with version 1.1.15, we have required that you first run
<PRE>
        squid -z
</PRE>

to create the swap directories on your filesystem.  If you have set the
<EM>cache_effective_user</EM> option, then the Squid process takes on the 
given userid before making the directories.  If the <EM>cache_dir</EM>
directory (e.g. /var/spool/cache) does not exist, and the Squid userid
does not have permission to create it, then you will get the ``permission
denied'' error.  This can be simply fixed by manually creating the
cache directory.
<PRE>
        # mkdir /var/spool/cache
        # chown &lt;userid&gt; &lt;groupid&gt; /var/spool/cache
        # squid -z
</PRE>
</P>

<P>Alternatively, if the directory already exists, then your operating
system may be returning ``Permission Denied'' instead of ``File Exists''
on the mkdir() system call.  This
<A HREF="store.c-mkdir.patch">patch</A>
by 
<A HREF="mailto:miquels@cistron.nl">Miquel van Smoorenburg</A>
should fix it.</P>


<H2><A NAME="ss10.24">10.24 You need to recompile with a larger value for MAX_SWAP_FILE</A></H2>

<P>This message began appearing in version 1.1.19 due to a change made
by Duane Wessels which was poorly implemented and poorly thought through.</P>

<P>For all versions prior to version 1.1.19, Squid used a fixed size
``filemap.''  This filemap is an array of bits used to indicate
which swap file numbers are in use and which are not.  The fixed
size value was 2,097,152 files.  This worked alright until some
people reported running caches with more than 2 million swap files.</P>

<P>Clearly we needed to change the fixed-size value to a value which would
be calculated at run-time, depending on the cache size.  In store.c
we already had code to estimate the number of objects in the cache.
This estimate was used to build the StoreEntry hash table.  Duane
believed that having the filemap size be 1.5 times the number of 
estimated objects would leave plenty of margin for error.</P>

<P>The calculation for the number of objects in a cache is:
<PRE>
        n_objects = cache_size / avg_object_size
</PRE>

For quite some time, Squid has used 20 KB as the default average
object size.  As it turns out, 20 KB is much too high, almost by
a factor of two.  So the estimated filemap size was not large enough.</P>
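
<P>A worked example: a 4&nbsp;GB cache estimated at the old 20&nbsp;KB
average, versus what it actually holds at 13&nbsp;KB:
<PRE>
        4 GB / 20 KB = ~210,000 estimated objects
        4 GB / 13 KB = ~323,000 actual objects
        filemap size = 1.5 * 210,000 = ~315,000  (too small)
</PRE>
</P>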

<P>In the ensuing confusion, a flurry of messages and complaints were
sent to the
<A HREF="http://squid.nlanr.net/Mail-Archive/squid-users/">squid-users mailing list</A>.
The consensus was that 13 KB is a much more accurate value for the
average object size in most everyone's caches.</P>

<P>For Squid version 1.1.20, the following changes have been made to
hopefully fix this serious problem:
<UL>
<LI>The default 'store_avg_object_size' value is 13 KB</LI>
<LI>The filemap size is double the estimated number of objects</LI>
<LI>Squid will not exit if the filemap limit is reached.  Instead
it will write a warning to cache.log (repeating every hour
if necessary).  No new objects will be written to disk until
other swap files are released.</LI>
</UL>
</P>


<H2><A NAME="ss10.25">10.25 When using a username and password, I can not access some files.</A></H2>

<P><I>If I try, by way of a test, to access</I>
<PRE>
        
        ftp://username:password@ftpserver/somewhere/foo.tar.gz
</PRE>

<I>I get</I>
<PRE>
        somewhere/foo.tar.gz: Not a directory. 
</PRE>
</P>

<P>Use this URL instead:
<PRE>
        
        ftp://username:password@ftpserver/%2fsomewhere/foo.tar.gz
</PRE>
</P>






<HR>
<A HREF="FAQ-9.html">Previous</A>
<A HREF="FAQ-11.html">Next</A>
<A HREF="FAQ.html#toc10">Table of Contents</A>
</BODY>
</HTML>