<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<HTML>
<HEAD>
<META NAME="GENERATOR" CONTENT="LinuxDoc-Tools 0.9.21">
<TITLE>SQUID Frequently Asked Questions: Memory</TITLE>
<LINK HREF="FAQ-9.html" REL=next>
<LINK HREF="FAQ-7.html" REL=previous>
<LINK HREF="FAQ.html#toc8" REL=contents>
</HEAD>
<BODY>
<A HREF="FAQ-9.html">Next</A>
<A HREF="FAQ-7.html">Previous</A>
<A HREF="FAQ.html#toc8">Contents</A>
<HR>
<H2><A NAME="memorye"></A> <A NAME="s8">8.</A> <A HREF="FAQ.html#toc8">Memory</A></H2>
<H2><A NAME="ss8.1">8.1</A> <A HREF="FAQ.html#toc8.1">Why does Squid use so much memory!?</A>
</H2>
<P>Squid uses a lot of memory for performance reasons. It takes much, much
longer to read something from disk than it does to read directly from
memory.</P>
<P>A small amount of metadata for each cached object is kept in memory.
This is the <EM>StoreEntry</EM> data structure. For <EM>Squid-2</EM> this is
56-bytes on "small" pointer architectures (Intel, Sparc, MIPS, etc) and
88-bytes on "large" pointer architectures (Alpha). In addition, There
is a 16-byte cache key (MD5 checksum) associated with each
<EM>StoreEntry</EM>. This means there are 72 or 104 bytes of metadata in
memory for every object in your cache. A cache with 1,000,000
objects therefore requires 72 MB of memory for <EM>metadata only</EM>.
In practice it requires much more than that.</P>
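<P>As a quick illustration, the figures above can be checked with simple
shell arithmetic:
<PRE>
% echo $(( (56 + 16) * 1000000 ))     # metadata bytes for 1,000,000 objects, "small" pointers
72000000
% echo $(( (88 + 16) * 1000000 ))     # the same on "large" pointer (e.g. Alpha) systems
104000000
</PRE>
That is roughly 72 MB or 104 MB of metadata before counting anything else.</P>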
<P>Squid-1.1 also uses a lot of memory to store in-transit objects.
This version stores incoming objects only in memory, until the transfer
is complete. At that point it decides whether or not to store the object
on disk. This means that when users download large files, your memory
usage will increase significantly. The squid.conf parameter <EM>maximum_object_size</EM>
determines how much memory an in-transit object can consume before we
mark it as uncachable. When an object is marked uncachable, there is no
need to keep all of the object in memory, so the memory is freed for
the part of the object which has already been written to the client.
In other words, lowering <EM>maximum_object_size</EM> also lowers Squid-1.1
memory usage.</P>
<P>Other uses of memory by Squid include:
<UL>
<LI>Disk buffers for reading and writing</LI>
<LI>Network I/O buffers</LI>
<LI>IP Cache contents</LI>
<LI>FQDN Cache contents</LI>
<LI>Netdb ICMP measurement database</LI>
<LI>Per-request state information, including full request and
reply headers</LI>
<LI>Miscellaneous statistics collection.</LI>
<LI>``Hot objects'' which are kept entirely in memory.</LI>
</UL>
</P>
<H2><A NAME="ss8.2">8.2</A> <A HREF="FAQ.html#toc8.2">How can I tell how much memory my Squid process is using?</A>
</H2>
<P>One way is to simply look at <EM>ps</EM> output on your system.
For BSD-ish systems, you probably want to use the <EM>-u</EM> option
and look at the <EM>VSZ</EM> and <EM>RSS</EM> fields:
<PRE>
wessels ~ 236% ps -axuhm
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
squid 9631 4.6 26.4 141204 137852 ?? S 10:13PM 78:22.80 squid -NCYs
</PRE>
For SYSV-ish systems, you probably want to use the <EM>-l</EM> option.
When interpreting the <EM>ps</EM> output, be sure to check your <EM>ps</EM>
manual page; it may not be obvious whether the reported numbers are
kbytes or pages (usually 4 KB).</P>
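<P>If you are not sure what your system's page size is, you can usually ask
for it directly (the <EM>getconf</EM> utility is part of POSIX; BSD-derived
systems also have a <EM>pagesize</EM> command). The output is in bytes:
<PRE>
% getconf PAGESIZE
4096
</PRE>
Multiply page counts from <EM>ps</EM> by this value to convert them to bytes.</P>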
<P>A nicer way to check the memory usage is with a program called
<EM>top</EM>:
<PRE>
last pid: 20128; load averages: 0.06, 0.12, 0.11 14:10:58
46 processes: 1 running, 45 sleeping
CPU states: % user, % nice, % system, % interrupt, % idle
Mem: 187M Active, 1884K Inact, 45M Wired, 268M Cache, 8351K Buf, 1296K Free
Swap: 1024M Total, 256K Used, 1024M Free
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
9631 squid 2 0 138M 135M select 78:45 3.93% 3.93% squid
</PRE>
</P>
<P>Finally, you can ask the Squid process to report its own memory
usage. This is available on the Cache Manager <EM>info</EM> page.
Your output may vary depending upon your operating system and
Squid version, but it looks similar to this:
<PRE>
Resource usage for squid:
Maximum Resident Size: 137892 KB
Memory usage for squid via mstats():
Total space in arena: 140144 KB
Total free: 8153 KB 6%
</PRE>
</P>
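<P>You can also fetch this page from the command line. A minimal example,
assuming the small <EM>client</EM> program built with the Squid source is in
your path and your cache is running on the local host with its default port:
<PRE>
% client cache_object://localhost/info
</PRE>
Later Squid versions ship the same tool as <EM>squidclient</EM>, which also
accepts the shorter <CODE>mgr:info</CODE> form.</P>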
<P>If your RSS (Resident Set Size) value is much lower than your
process size, then your cache performance is most likely suffering
due to
<A HREF="FAQ-9.html#paging">paging</A>.</P>
<H2><A NAME="ss8.3">8.3</A> <A HREF="FAQ.html#toc8.3">My Squid process grows without bounds.</A>
</H2>
<P>You might just have your <EM>cache_mem</EM> parameter set too high.
See the ``
<A HREF="#lower-mem-usage">What can I do to reduce Squid's memory usage?</A>''
entry below.</P>
<P>When a process continually grows in size without levelling off
or slowing down, it often indicates a memory leak. A memory leak
occurs when a chunk of memory is allocated but never freed after
the program is done using it.</P>
<P>Memory leaks are a real problem for programs (like Squid) which do all
of their processing within a single process. Historically, Squid has
had real memory leak problems. But as the software has matured, we
believe almost all of Squid's memory leaks have been eliminated, and
new ones are at least easy to identify.</P>
<P>Memory leaks may also be present in your system's libraries, such
as <EM>libc.a</EM> or even <EM>libmalloc.a</EM>. If you experience the ever-growing
process size phenomenon, we suggest you first try an
<A HREF="#alternate-malloc">alternative malloc library</A>.</P>
<H2><A NAME="ss8.4">8.4</A> <A HREF="FAQ.html#toc8.4">I set <EM>cache_mem</EM> to XX, but the process grows beyond that!</A>
</H2>
<P>The <EM>cache_mem</EM> parameter <B>does NOT</B> specify the maximum
size of the process. It only specifies how much memory to use
for caching ``hot'' (very popular) replies. Squid's actual memory
usage depends very strongly on your cache size (disk space) and
your incoming request load. Reducing <EM>cache_mem</EM> will usually
also reduce the process size, but not necessarily, and there are
other ways to reduce Squid's memory usage (see below).</P>
<P>See also
<A HREF="#how-much-ram">How much memory do I need in my Squid server?</A>.</P>
<H2><A NAME="analyze-memory-usage"></A> <A NAME="ss8.5">8.5</A> <A HREF="FAQ.html#toc8.5">How do I analyze memory usage from the cache manger output?</A>
</H2>
<P><I>Note: This information is specific to Squid-1.1 versions</I></P>
<P>Look at your <EM>cachemgr.cgi</EM> <CODE>Cache
Information</CODE> page. For example:
<PRE>
Memory usage for squid via mallinfo():
        Total space in arena:    94687 KB
        Ordinary blocks:         32019 KB 210034 blks
        Small blocks:            44364 KB 569500 blks
        Holding blocks:              0 KB   5695 blks
        Free Small blocks:        6650 KB
        Free Ordinary blocks:    11652 KB
        Total in use:            76384 KB 81%
        Total free:              18302 KB 19%
Meta Data:
        StoreEntry                 246043 x   64 bytes =  15377 KB
        IPCacheEntry                  971 x   88 bytes =     83 KB
        Hash link                       2 x   24 bytes =      0 KB
        URL strings                                    =  11422 KB
        Pool MemObject structures     514 x  144 bytes =     72 KB (    70 free)
        Pool for Request structur     516 x 4380 bytes =   2207 KB (  2121 free)
        Pool for in-memory object    6200 x 4096 bytes =  24800 KB ( 22888 free)
        Pool for disk I/O             242 x 8192 bytes =   1936 KB (  1888 free)
        Miscellaneous                                  =   2600 KB
        total Accounted                                =  58499 KB
</PRE>
</P>
<P>First note that <CODE>mallinfo()</CODE> reports 94M in ``arena.'' This
is pretty close to what <EM>top</EM> says (97M).</P>
<P>Of that 94M, 81% (76M) is actually being used at the moment. The
rest has been freed, or pre-allocated by <CODE>malloc(3)</CODE>
and not yet used.</P>
<P>Of the 76M in use, we can account for 58.5M (76%). There are some
calls to <CODE>malloc(3)</CODE> for which we can't account.</P>
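<P>You can reproduce these percentages from your own <CODE>mallinfo()</CODE>
output with a little shell arithmetic; the figures below are taken from the
sample output above:
<PRE>
% echo "scale=2; 76384 / 94687" | bc        # fraction of the arena in use
.80
% echo "scale=2; 58499 / 76384" | bc        # fraction of in-use memory accounted for
.76
</PRE>
</P>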
<P>The <CODE>Meta Data</CODE> list gives the breakdown of where the
accounted memory has gone. 45% has gone to <CODE>StoreEntry</CODE>
and URL strings. Another 42% has gone to buffering held objects
in VM while they are fetched and relayed to the clients (<CODE>Pool
for in-memory object</CODE>).</P>
<P>The pool sizes are specified by <EM>squid.conf</EM> parameters.
In version 1.0, these pools are somewhat broken: we keep a stack
of unused pages instead of freeing the block. In the <CODE>Pool
for in-memory object</CODE>, the unused stack size is 1/2 of
<CODE>cache_mem</CODE>. The <CODE>Pool for disk I/O</CODE> is
hardcoded at 200. For <CODE>MemObject</CODE> and <CODE>Request</CODE>
it's 1/8 of your system's <CODE>FD_SETSIZE</CODE> value.</P>
<P>If you need to lower your process size, we recommend lowering the
max object sizes in the 'http', 'ftp' and 'gopher' config lines.
You may also want to lower <CODE>cache_mem</CODE> to suit your
needs. But if you make <CODE>cache_mem</CODE> too low, then some
objects may not get saved to disk during high-load periods. Newer
Squid versions allow you to set <CODE>memory_pools off</CODE> to
disable the free memory pools.</P>
<H2><A NAME="ss8.6">8.6</A> <A HREF="FAQ.html#toc8.6">The ``Total memory accounted'' value is less than the size of my Squid process.</A>
</H2>
<P>We are not able to account for <EM>all</EM> memory that Squid uses. This
would require excessive amounts of code to keep track of every last byte.
We do our best to account for the major uses of memory.</P>
<P>Also, note that the <EM>malloc</EM> and <EM>free</EM> functions have
their own overhead. Some additional memory is required to keep
track of which chunks are in use, and which are free. Additionally,
most operating systems do not allow processes to shrink in size.
When a process gives up memory by calling <EM>free</EM>, the total
process size does not shrink. So the process size really
represents the maximum size your Squid process has reached.</P>
<H2><A NAME="malloc-death"></A> <A NAME="ss8.7">8.7</A> <A HREF="FAQ.html#toc8.7">xmalloc: Unable to allocate 4096 bytes!</A>
</H2>
<P>by
<A HREF="mailto:hno@squid-cache.org">Henrik Nordstrom</A></P>
<P>Messages like "FATAL: xcalloc: Unable to allocate 4096 blocks of 1 bytes!"
appear when Squid can't allocate more memory, and on most operating systems
(including BSD) there are only two possible reasons:
<OL>
<LI>The machine is out of swap</LI>
<LI>The process' maximum data segment size has been reached</LI>
</OL>
The first case is detected using the normal swap monitoring tools
available on the platform (<EM>pstat</EM> on SunOS, perhaps <EM>pstat</EM> is
used on BSD as well).</P>
<P>To tell if it is the second case, first rule out the first case and then
monitor the size of the Squid process. If it dies at a certain size with
plenty of swap left, then the maximum data segment size has no doubt
been reached.</P>
<P>The data segment size can be limited by two factors:
<OL>
<LI>Kernel imposed maximum, which no user can go above</LI>
<LI>The size set with ulimit, which the user can control.</LI>
</OL>
</P>
<P>When Squid starts it sets its data and file ulimits to the hard level. If
you manually tune ulimit before starting Squid, make sure that you set
the hard limit and not only the soft limit (the default operation of
ulimit is to change only the soft limit). Only root is allowed to raise
the hard limit.</P>
<P>This command prints the hard limits:
<PRE>
ulimit -aH
</PRE>
</P>
<P>This command sets the data size to unlimited:
<PRE>
ulimit -HSd unlimited
</PRE>
</P>
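<P>Note that resource limits are inherited by child processes, so the
<EM>ulimit</EM> commands must be issued in the same shell that starts Squid,
for example in your startup or <EM>RunCache</EM> script. A minimal sketch,
assuming a Bourne-style shell and the default installation prefix:
<PRE>
#!/bin/sh
# raise the hard and soft data segment limits, then start Squid;
# the new limits are inherited by the Squid process started below
ulimit -HSd unlimited
exec /usr/local/squid/bin/squid -NCYs
</PRE>
The path and command line options are only illustrative; adjust them to
match your own installation.</P>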
<H3>BSD/OS</H3>
<P>by
<A HREF="mailto:Arjan.deVet@adv.IAEhv.nl">Arjan de Vet</A></P>
<P>The default kernel limit on BSD/OS for datasize is 64MB (at least on 3.0
which I'm using).</P>
<P>Recompile a kernel with larger datasize settings:</P>
<P>
<PRE>
maxusers 128
# Support for large inpcb hash tables, e.g. busy WEB servers.
options INET_SERVER
# support for large routing tables, e.g. gated with full Internet routing:
options "KMEMSIZE=\(16*1024*1024\)"
options "DFLDSIZ=\(128*1024*1024\)"
options "DFLSSIZ=\(8*1024*1024\)"
options "SOMAXCONN=128"
options "MAXDSIZ=\(256*1024*1024\)"
</PRE>
</P>
<P>See <EM>/usr/share/doc/bsdi/config.n</EM> for more info.</P>
<P>In /etc/login.conf I have this:</P>
<P>
<PRE>
default:\
:path=/bin /usr/bin /usr/contrib/bin:\
:datasize-cur=256M:\
:openfiles-cur=1024:\
:openfiles-max=1024:\
:maxproc-cur=1024:\
:stacksize-cur=64M:\
:radius-challenge-styles=activ,crypto,skey,snk,token:\
:tc=auth-bsdi-defaults:\
:tc=auth-ftp-bsdi-defaults:
#
# Settings used by /etc/rc and root
# This must be set properly for daemons started as root by inetd as well.
# Be sure reset these values back to system defaults in the default class!
#
daemon:\
:path=/bin /usr/bin /sbin /usr/sbin:\
:widepasswords:\
:tc=default:
# :datasize-cur=128M:\
# :openfiles-cur=256:\
# :maxproc-cur=256:\
</PRE>
</P>
<P>This should give enough space for a 256MB squid process.</P>
<H3>FreeBSD (2.2.X)</H3>
<P>by Duane Wessels</P>
<P>The procedure is almost identical to that for BSD/OS above.
Increase the open filedescriptor limit in <EM>/sys/conf/param.c</EM>:
<PRE>
int maxfiles = 4096;
int maxfilesperproc = 1024;
</PRE>
Increase the maximum and default data segment size in your kernel
config file, e.g. <EM>/sys/i386/conf/CONFIG</EM>:
<PRE>
options "MAXDSIZ=(512*1024*1024)"
options "DFLDSIZ=(128*1024*1024)"
</PRE>
We also found it necessary to increase the number of mbuf clusters:
<PRE>
options "NMBCLUSTERS=10240"
</PRE>
And, if you have more than 256 MB of physical memory, you probably
have to disable BOUNCE_BUFFERS (whatever that is), so comment
out this line:
<PRE>
#options BOUNCE_BUFFERS #include support for DMA bounce buffers
</PRE>
</P>
<P>Also, update limits in <EM>/etc/login.conf</EM>:
<PRE>
# Settings used by /etc/rc
#
daemon:\
:coredumpsize=infinity:\
:datasize=infinity:\
:maxproc=256:\
:maxproc-cur@:\
:memoryuse-cur=64M:\
:memorylocked-cur=64M:\
:openfiles=4096:\
:openfiles-cur@:\
:stacksize=64M:\
:tc=default:
</PRE>
And don't forget to run ``cap_mkdb /etc/login.conf'' after editing that file.</P>
<H3>OSF, Digital Unix</H3>
<P>by
<A HREF="mailto:ongbh@zpoprp.zpo.dec.com">Ong Beng Hui</A></P>
<P>To increase the data size for Digital UNIX, edit the file <CODE>/etc/sysconfigtab</CODE>
and add the entry...
<PRE>
proc:
per-proc-data-size=1073741824
</PRE>
Or, with csh, use the limit command, such as
<PRE>
> limit datasize 1024M
</PRE>
</P>
<P>Editing <CODE>/etc/sysconfigtab</CODE> requires a reboot, but the limit command
doesn't.</P>
<H2><A NAME="ss8.8">8.8</A> <A HREF="FAQ.html#toc8.8">fork: (12) Cannot allocate memory</A>
</H2>
<P>When Squid is reconfigured (SIGHUP) or the logs are rotated (SIGUSR1),
some of the helper processes (dnsserver) must be killed and
restarted. If your system does not have enough virtual memory,
the Squid process may not be able to fork to start the new helper
processes. This is due to the UNIX way of starting child processes:
the fork() system call temporarily duplicates the whole Squid process,
so when many child processes are started in quick succession, such as on
"squid -k rotate", memory usage can temporarily grow to many times the
normal level because of the several temporary copies of the whole
process.</P>
<P>The best way to fix this is to increase your virtual memory by adding
swap space. Normally your system uses raw disk partitions for swap
space, but most operating systems also support swapping on regular
files (Digital Unix excepted). See your system manual pages for
<EM>swap</EM>, <EM>swapon</EM>, and <EM>mkfile</EM>. Alternatively you can use the
sleep_after_fork directive to make Squid sleep a little while invoking
helpers to allow the helper to start up before trying to start the next
one. This can be helpful if you find that Squid sometimes fails to restart
all helpers on "squid -k reconfigure".</P>
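<P>As an illustration only (the exact commands differ between operating
systems, so check the manual pages mentioned above), adding a 512 MB swap
file on a Solaris-style system might look like this:
<PRE>
# mkfile 512m /var/swapfile
# swap -a /var/swapfile
</PRE>
On Linux the equivalent steps use <EM>dd</EM>, <EM>mkswap</EM> and <EM>swapon</EM>.</P>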
<H2><A NAME="lower-mem-usage"></A> <A NAME="ss8.9">8.9</A> <A HREF="FAQ.html#toc8.9">What can I do to reduce Squid's memory usage?</A>
</H2>
<P>If your cache performance is suffering because of memory limitations,
you might consider buying more memory. But if that is not an option,
there are a number of things to try (a sample configuration fragment
follows the list):
<UL>
<LI>Try a
<A HREF="#alternate-malloc">different malloc library</A>.</LI>
<LI>Reduce the <EM>cache_mem</EM> parameter in the config file. This controls
how many ``hot'' objects are kept in memory. Reducing this parameter
will not significantly affect performance, but you may receive
some warnings in <EM>cache.log</EM> if your cache is busy.</LI>
<LI>Set <EM>memory_pools off</EM> in the config file. This causes
Squid to give up unused memory by calling <EM>free()</EM> instead of
holding on to the chunk for potential, future use.</LI>
<LI>Reduce the <EM>cache_swap</EM> parameter in your config file. This will
reduce the number of objects Squid keeps. Your overall hit ratio may go down a
little, but your cache will perform significantly better.</LI>
<LI>Reduce the <EM>maximum_object_size</EM> parameter (Squid-1.1 only).
You won't be able to
cache the larger objects, and your byte volume hit ratio may go down,
but Squid will perform better overall.</LI>
<LI>If you are using Squid-1.1.x, try the ``NOVM'' version.</LI>
</UL>
</P>
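<P>As a rough illustration of the <EM>cache_mem</EM> and <EM>memory_pools</EM>
suggestions above, the corresponding <EM>squid.conf</EM> lines might look like
this (the value is made up for the example, and units and directive
availability vary between versions, so check the <EM>squid.conf.default</EM>
shipped with your Squid):
<PRE>
# keep fewer "hot" objects in memory (Squid-2 syntax shown)
cache_mem 8 MB
# return unused memory to the system instead of pooling it
memory_pools off
</PRE>
</P>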
<H2><A NAME="alternate-malloc"></A> <A NAME="ss8.10">8.10</A> <A HREF="FAQ.html#toc8.10">Using an alternate <EM>malloc</EM> library.</A>
</H2>
<P>Many users have found improved performance and memory utilization when
linking Squid with an external malloc library. We recommend either
GNU malloc, or dlmalloc.</P>
<H3>Using GNU malloc</H3>
<P>To make Squid use GNU malloc follow these simple steps:</P>
<P>
<OL>
<LI>Download the GNU malloc source, available from one of
<A HREF="http://www.gnu.org/order/ftp.html">The GNU FTP Mirror sites</A>.</LI>
<LI>Compile GNU malloc
<PRE>
% gzip -dc malloc.tar.gz | tar xf -
% cd malloc
% vi Makefile # edit as needed
% make
</PRE>
</LI>
<LI>Copy libmalloc.a to your system's library directory and be sure to
name it <EM>libgnumalloc.a</EM>.
<PRE>
% su
# cp malloc.a /usr/lib/libgnumalloc.a
</PRE>
</LI>
<LI>(Optional) Copy the GNU malloc.h to your system's include directory and
be sure to name it <EM>gnumalloc.h</EM>. This step is not required, but if
you do this, then Squid will be able to use the <EM>mstats()</EM> function to
report memory usage statistics on the cachemgr info page.
<PRE>
# cp malloc.h /usr/include/gnumalloc.h
</PRE>
</LI>
<LI>Reconfigure and recompile Squid
<PRE>
% make realclean
% ./configure ...
% make
% make install
</PRE>
Note: in later distributions, 'realclean' has been changed to 'distclean'.
As the configure script runs, watch its output. You should find that
it locates libgnumalloc.a and optionally gnumalloc.h.</LI>
</OL>
</P>
<H3>dlmalloc</H3>
<P>
<A HREF="http://g.oswego.edu/dl/html/malloc.html">dlmalloc</A>
has been written by
<A HREF="mailto:dl@cs.oswego.edu">Doug Lea</A>. According to Doug:
<BLOCKQUOTE>
This is not the fastest, most space-conserving, most portable, or
most tunable malloc ever written. However it is among the fastest
while also being among the most space-conserving, portable and tunable.
</BLOCKQUOTE>
</P>
<P>dlmalloc is included with the <EM>Squid-2</EM> source distribution.
To use this library, you simply give an option to the <EM>configure</EM>
script:
<PRE>
% ./configure --enable-dlmalloc ...
</PRE>
</P>
<H2><A NAME="how-much-ram"></A> <A NAME="ss8.11">8.11</A> <A HREF="FAQ.html#toc8.11">How much memory do I need in my Squid server?</A>
</H2>
<P>As a rule of thumb, Squid uses approximately 10 MB of RAM per GB of the
total of all cache_dirs (more on 64-bit servers such as Alpha), plus your
cache_mem setting and an additional 10-20 MB. It is recommended to
have at least twice this amount of physical RAM available on your Squid
server. For a more detailed discussion on Squid's memory usage see the
sections above.</P>
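<P>For example, applying the rule of thumb to a cache with 16 GB of total
cache_dir space and <EM>cache_mem</EM> set to 32 MB (the numbers here are
only illustrative):
<PRE>
% echo $(( 16 * 10 + 32 + 20 ))          # approximate MB used by Squid itself
212
% echo $(( (16 * 10 + 32 + 20) * 2 ))    # MB of physical RAM recommended
424
</PRE>
</P>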
<P>The recommended extra RAM besides what is used by Squid is used by the
operating system to improve disk I/O performance and by other applications or
services running on the server. This will be true even of a server which
runs Squid as the only TCP service, since there is a minimum level of
memory needed for process management, logging, and other OS level
routines.</P>
<P>If you have a low memory server, and a large disk, then you will not
necessarily be able to use all the disk space, since as the cache fills
the memory available will be insufficient, forcing Squid to swap out
memory and affecting performance. A very large cache_dir total and
insufficient physical RAM + Swap could cause Squid to stop functioning
completely. The solution for larger caches is to get more physical RAM;
allocating more to Squid via cache_mem will not help.</P>
<HR>
<A HREF="FAQ-9.html">Next</A>
<A HREF="FAQ-7.html">Previous</A>
<A HREF="FAQ.html#toc8">Contents</A>
</BODY>
</HTML>