INFORMATION ON USING BAD RAM MODULES
====================================
Introduction
RAM chips keep getting smaller, and as a result they are also getting
more and more vulnerable. This makes hardware manufacturing more
expensive, since an excessive number of RAM chips must be discarded on
account of a single faulty cell. Similarly, a static discharge may
damage a RAM module forever, which is usually remedied by replacing it
entirely.
This is not necessary, as the BadRAM code shows: by informing the Linux
kernel which addresses in a RAM module are damaged, the kernel simply
avoids ever allocating those addresses and makes all the rest
available.
Reasons for this feature
There are many reasons why this kernel feature is useful:
- Chip manufacture is resource intensive; waste less and sleep better
- It's another chance to promote Linux as "the flexible OS"
- Some laptops have their RAM soldered in... and then it fails!
- It's plain cool ;-)
Running example
To run this project, I was given two DIMMs of 32 MB each. One, which
shall serve as the running example in this text, contained 512 faulty
bits, spread over 1/4 of the address range in a regular pattern. Some
tricks with a RAM tester and a few binary calculations were sufficient
to write these faults down as 2 longword numbers.
The kernel recognised the correct number of pages with faults and did
not give them out for allocation. The allocation routines could
therefore proceed as normal, without any adaptation.
So, I gained 30 MB of DIMM which would otherwise have been thrown
away. After booting the kernel, the kernel behaved exactly as it
always had.
Initial checks
If you experience RAM trouble, first read /usr/src/linux/memory.txt
and try out the mem=4M trick to see if at least some initial parts
of your RAM work well. The BadRAM routines halt the kernel in panic
if the reserved area of memory (containing kernel stuff) contains
a faulty address.
Running a RAM checker
The memory checker is not built into the kernel, to avoid delays at
runtime. If you experience problems that may be caused by RAM, run
a good RAM checker, such as
http://reality.sgi.com/cbrady_denver/memtest86
The output of a RAM checker provides addresses that went wrong. In
the 32 MB chip with 512 faulty bits mentioned above, the errors were
found in the 8MB-16MB range (the DIMM was in slot #0) at addresses
xxx42f4
xxx62f4
xxxc2f4
xxxe2f4
and the error was a "sticky 1 bit", a memory bit that stayed "1" no
matter what was written to it. The regularity of this pattern
suggests the death of a buffer at the output stages of a row on one of
the chips. I expect such regularity to be commonplace. Finding this
regularity currently is human effort, but it should not be hard to
alter a RAM checker to capture it in some sort of pattern, possibly
the BadRAM patterns described below.
By the way, if you manage to get hold of memtest86 version 2.3 or
later, you can configure its printing mode to produce BadRAM patterns,
which tell you exactly what to enter on the LILO: commandline (just
leave out the added spacing). That means you can skip the following
step, which saves you a *lot* of work.
Also by the way, if the ISA memory gap in the 15M-16M range cannot be
disabled on your machine, Linux can get in trouble. One way of handling that
situation is by specifying the total memory size to Linux with a boot
parameter mem=... and then to tell it to treat the 15M-16M range as
faulty with an additional boot parameter, for instance:
mem=24M badram=0x00f00000,0xfff00000
if you installed 24MB of RAM in total.
Capturing errors in a pattern
Instead of manually providing all 512 errors to the kernel, it's nicer
to generate a pattern. Since the regularity stems from the chip's
address decoding logic, which takes certain address bits into account
and ignores others, we shall provide a faulty address F together with
a bit mask M that specifies which bits must match F. In C code, an
address A is faulty if and only if
(F & M) == (A & M)
or alternatively (closer to a hardware implementation):
!((F ^ A) & M)
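As a minimal sketch (the helper name is mine, not part of the patch),
this membership test can be written as:

```c
/* An address a is covered by a BadRAM pattern if it agrees with the
 * faulty address f on every bit that the mask m marks as significant. */
static int badram_match(unsigned long a, unsigned long f, unsigned long m)
{
	return (f & m) == (a & m);
}
```

With the example pattern F=0x008042f4, M=0xff805fff, the address
0x008062f4 matches (the differing bit is masked out) while 0x008052f4
does not.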
In the example 32 MB chip, we had the faulty addresses in 8MB-16MB:
xxx42f4 ....0100....
xxx62f4 ....0110....
xxxc2f4 ....1100....
xxxe2f4 ....1110....
The second column shows the varying hex digit in binary form.
Apparently, the first and the second-to-last binary digits can be
anything, so the binary mask for that digit is 0101. The mask for the
part after this digit is 0xfff, and the part before it should select
anything in the range 8MB-16MB, or 0x00800000-0x01000000; this is done
with a bitmask 0xff80xxxx. Combining these partial masks, we get:
F=0x008042f4 M=0xff805fff
That covers everything for this DIMM; for more complicated failing
DIMMs, or for a combination of multiple failing DIMMs, it can be
necessary to set up a number of such F/M pairs.
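Since the derivation above is easy to get wrong, a candidate F/M pair
can be cross-checked against the recorded faulty addresses. The
following sketch (names are illustrative, not from the patch) verifies
that every recorded fault is covered by the pattern:

```c
/* Return 1 if every recorded faulty address matches the candidate
 * F/M pair, 0 otherwise. A fault matches when all bits selected by
 * the mask m agree with the faulty base address f. */
static int pattern_covers(const unsigned long *faults, int n,
                          unsigned long f, unsigned long m)
{
	int i;
	for (i = 0; i < n; i++)
		if ((f & m) != (faults[i] & m))
			return 0;	/* this fault is not matched */
	return 1;
}
```

For the example DIMM, representatives of the four fault groups, such
as 0x008042f4, 0x008062f4, 0x0080c2f4 and 0x0080e2f4, are all covered
by F=0x008042f4, M=0xff805fff.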
Rebooting Linux
Now that these patterns are known (and double-checked: the
calculations are highly error-prone... it would be neat to have the
RAM checker verify them) we simply restart Linux with these F/M pairs
as a parameter.
If you normally boot as follows:
LILO: linux
you should now boot with
LILO: linux badram=0x008042f4,0xff805fff
or perhaps by mentioning more F/M pairs in the order F0,M0,F1,M1,...
If you provide an odd number of arguments to badram, the default mask
0xffffffff (which matches exactly one address) is applied to the last
pattern.
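The completion rule can be sketched as follows (a hypothetical helper,
not the patch's actual parser):

```c
/* Default mask: all bits significant, so the pattern matches
 * exactly one address. */
#define BADRAM_DEFAULT_MASK 0xffffffffUL

/* If the badram argument list has an odd length, pair the lone
 * trailing F with the default mask. Returns the new count, which
 * is always even when there is room in the array. */
static int complete_badram_args(unsigned long *args, int n, int cap)
{
	if (n % 2 == 1 && n < cap)
		args[n++] = BADRAM_DEFAULT_MASK;
	return n;
}
```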
Beware of the commandline length. At least up to LILO version 0.21,
the commandline is cut off after the 78th character; later versions
may go as far as the kernel allows, namely 255 characters. In any
case, no more than 10 numbers can be passed to the badram boot option.
When the kernel now boots, it should not give any trouble with RAM.
Mind you, this assumes that the kernel and its data structures do not
overlap an erroneous part. If they do, and the kernel does not choke
on it right away, it will stop with a panic. You will need RAM whose
initial part, say the first 2MB, is faultless.
Now look up your memory status with
dmesg | grep ^Memory:
which prints a single line with information like
Memory: 158524k/163840k available
(940k kernel code,
412k reserved,
1856k data,
60k init,
0k highmem,
2048k BadRAM)
The last entry, BadRAM, reads 2048k to represent the loss of 2MB of
general-purpose RAM due to the errors. Or, positively rephrased:
instead of throwing out 32MB as useless, you only throw out 2MB.
If the system is stable (try compiling a few kernels, and run a few
finds over / or so) you may add the boot parameter to /etc/lilo.conf
for _all_ the kernel images that must handle this RAM, with a line
append="badram=0x008042f4,0xff805fff"
after which you run "lilo".
Warning: Don't experiment with these settings on your only boot image.
If a BadRAM pattern overlaps kernel code, data, init, or other
reserved memory, the kernel will halt in panic. Try settings on a test
boot image first, and if you get a panic you should change the order
of your DIMMs [which may involve buying a new one just to be able to
change the order].
You are allowed to enter any number of BadRAM patterns in all the
places documented in this file. They will all apply. It is even
possible to mention several BadRAM patterns in a single place. The
completion of an odd number of arguments with the default mask is
done separately for each badram=... option.
Kernel Customisation
Some people prefer to enter their badram patterns in the kernel, and
this is also possible. In mm/page_alloc.c there is an array of unsigned
long integers into which the parameters can be entered, prefixed with
the number of integers (twice the number of patterns). The array is
named badram_custom and it will be added to the BadRAM list whenever an
option 'badram' is provided on the commandline when booting, either
with or without additional patterns.
For the previous example, the code would become
static unsigned long __init badram_custom[] = {
	2,	// Number of longwords that follow, as F/M pairs
	0x008042f4L, 0xff805fffL,
};
Here, too, you may rely on the default mask being filled in when you
enter an odd number of longwords. Set the number of longwords to 0 to
disable this custom BadRAM list.
BadRAM classification
This technique may start a lively market for "dead" RAM. It is important
to realise that some RAMs are more dead than others. So, instead of
just providing a RAM size, it is also important to know the BadRAM
class, which is defined as follows:
A BadRAM class N means that at most 2^N bytes have a problem,
and that all problems with the RAMs are persistent: They
are predictable and always show up.
The DIMM that serves as an example here was of class 9, since 512=2^9
errors were found. Higher classes are worse, "correct" RAM is of class
-1 (or even less, at your choice).
Class N also means that the bitmask for your chip (if there's just one,
that is) counts N bits "0" and it means that (if no faults fall in the
same page) an amount of 2^N*PAGESIZE memory is lost, in the example on
an i386 architecture that would be 2^9*4k=2MB, which accounts for the
initial claim of 30MB RAM gained with this DIMM.
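The worst-case loss for a class N module can be computed directly,
assuming no two faults share a page (a hypothetical helper, mirroring
the arithmetic above):

```c
/* Worst-case RAM loss for a BadRAM class n module: 2^n faulty bytes,
 * each costing a whole page when no two faults share a page.
 * A page_size of 4096 matches the i386 example in the text.
 * For class -1 (or less), i.e. correct RAM, nothing is lost. */
static unsigned long badram_loss(int n, unsigned long page_size)
{
	if (n < 0)
		return 0;
	return (1UL << n) * page_size;
}
```

For the example class-9 DIMM on i386, this gives 512 * 4k = 2MB, in
agreement with the dmesg output shown earlier.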
Note that this scheme has deliberately been defined to be independent
of memory technology and of computer architecture.
Known Bugs
LILO is known to cut off commandlines which are too long. For the
lilo-0.21 distribution, a commandline may not exceed 78 characters,
while actually, 255 would be possible [on i386, kernel 2.2.16].
LILO does _not_ report too-long commandlines, but the error will
show up as either a panic at boot time, stating
panic: BadRAM page in initial area
or the dmesg line starting with Memory: will mention an unpredicted
number of kilobytes. (Note that the latter number only includes
errors in accessed memory.)
Future Possibilities
It would be possible to use even more of the faulty RAMs by employing
them for slabs. The smaller allocation granularity of slabs makes it
possible to throw out just, say, 32 bytes surrounding an error. This
would mean the example DIMM loses only 16kB instead of 2MB.
It might even be possible to allocate the slabs in such a way that,
where possible, the remaining bytes in a slab structure are allocated
around the error, reducing the RAM loss to 0 in the optimal situation!
However, this yield is somewhat faked: It is possible to provide 512
pages of 32-byte slabs, but it is not certain that anyone would use
that many 32-byte slabs at any time.
A better solution might be to alter the page allocation for a slab to
prefer BadRAM pages, and to give those a special treatment. This way,
the BadRAM would be spread over all the slabs, which seems more likely
to be a `true' pay-off. It would add overhead at slab allocation time,
but on the other hand, by the nature of slabs, such allocations are
made as rare as possible, so it might not matter that much. I am
uncertain which way to go.
Many suggestions have been made to insert a RAM checker at boot time;
since this would leave the time to do only very meager checking, it
is not a reasonable option; we already have a BIOS doing that in most
systems!
It would be interesting to integrate this functionality with the
self-verifying nature of ECC RAM. These memories can even distinguish
between recoverable and unrecoverable errors! Such memory has been
handled in older operating systems by `testing' once-failed memory
blocks for a while, by placing only (reloadable) program code in it.
Unfortunately, I possess no faulty ECC modules to work this out.
Names and Places
The home page of this project is on
http://rick.vanrein.org/linux/badram
This page also links to Nico Schmoigl's experimental extensions to
this patch (with debugging and a few other fancy things).
In case you have experiences with the BadRAM software that differ from
the test reports on that site, I hope you will mail me that new
information.
The BadRAM project is an idea and implementation by
Rick van Rein
Binnenes 67
9407 CX Assen
The Netherlands
vanrein@cs.utwente.nl
If you like it, a postcard would be much appreciated ;-)
Enjoy,
-Rick.