<!-- retain these comments for translator revision tracking -->
<!-- $Id$ -->
<sect3 id="mdcfg">
<title>Configuring Multidisk Devices (Software RAID)</title>
<para>
If you have more than one hard drive<footnote><para>
To be honest, you can construct an MD device even from partitions
residing on a single physical drive, but that won't bring any benefit.
</para></footnote> in your computer, you can use
<command>mdcfg</command> to set up your drives for increased
performance and/or better reliability of your data. The result is
called a <firstterm>Multidisk Device</firstterm> (or, after its most
famous variant, <firstterm>software RAID</firstterm>).
</para><para>
MD is basically a bunch of partitions located on different disks that
are combined to form a <emphasis>logical</emphasis> device. This
device can then be used like an ordinary partition (i.e. in
<command>partman</command> you can format it, assign a mountpoint,
etc.).
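Although the installer sets all of this up for you, it may help to see
what such a logical device looks like on a running system: it appears
under a name like <filename>/dev/md0</filename> (the exact name varies)
and <filename>/proc/mdstat</filename> lists its member partitions. The
following output is abridged and purely illustrative, assuming a
two-partition mirror:

<informalexample><screen>
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda2[0] sdb2[1]
      97654784 blocks super 1.2 [2/2] [UU]
</screen></informalexample>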
</para><para>
What benefits this brings depends on the type of MD device you are
creating. Currently supported are:
<variablelist>
<varlistentry>
<term>RAID0</term><listitem><para>
Is mainly aimed at performance. RAID0 splits all incoming data into
<firstterm>stripes</firstterm> and distributes them equally over each
disk in the array. This can increase the speed of read/write
operations, but when one of the disks fails, you will lose
<emphasis>everything</emphasis> (part of the information is still on
the healthy disk(s), the other part <emphasis>was</emphasis> on the
failed disk).
</para><para>
The typical use for RAID0 is a partition for video editing.
</para></listitem>
</varlistentry>
<varlistentry>
<term>RAID1</term><listitem><para>
Is suitable for setups where reliability is the first concern. It
consists of several (usually two) equally-sized partitions where every
partition contains exactly the same data. This essentially means three
things. First, if one of your disks fails, you still have the data
mirrored on the remaining disks. Second, you can use only a fraction
of the available capacity (more precisely, it is the size of the
smallest partition in the RAID). Third, file reads are load-balanced among
the disks, which can improve performance on a server, such as a file
server, that tends to be loaded with more disk reads than writes.
</para><para>
Optionally you can have a spare disk in the array which will take the
place of a disk that fails.
</para></listitem>
</varlistentry>
<varlistentry>
<term>RAID5</term><listitem><para>
Is a good compromise between speed, reliability and data redundancy.
RAID5 splits all incoming data into stripes and distributes them
equally on all but one disk (similar to RAID0). Unlike RAID0, RAID5
also computes <firstterm>parity</firstterm> information, which gets
written on the remaining disk. The parity is not kept on one fixed disk
(that would be called RAID4); instead it rotates from stripe to stripe,
so the parity information is distributed evenly over all disks. When one
of the disks fails, the missing part of the information can be computed
from the remaining data and its parity. RAID5 must consist of at least three
active partitions. Optionally you can have a spare disk in the array
which will take the place of a disk that fails.
</para><para>
As you can see, RAID5 has a similar degree of reliability to RAID1
while achieving less redundancy. On the other hand, it might be a bit
slower on write operations than RAID0 due to the computation of parity
information.
</para></listitem>
</varlistentry>
<varlistentry>
<term>RAID6</term><listitem><para>
Is similar to RAID5 except that it uses two parity devices instead of
one.
</para><para>
A RAID6 array can survive up to two disk failures.
</para></listitem>
</varlistentry>
<varlistentry>
<term>RAID10</term><listitem><para>
RAID10 combines striping (as in RAID0) and mirroring (as in RAID1).
It creates <replaceable>n</replaceable> copies of incoming data and
distributes them across the partitions so that none of the copies of
the same data are on the same device.
The default value of <replaceable>n</replaceable> is 2, but it can be
set to something else in expert mode. The number of partitions used
must be at least <replaceable>n</replaceable>.
RAID10 offers different layouts for distributing the copies. The default is
near copies. Near copies have all of the copies at about the same offset
on all of the disks. Far copies have the copies at different offsets on
the disks. Offset copies duplicate whole stripes, each copy shifted by one
device, rather than duplicating the individual chunks.
</para><para>
RAID10 can be used to achieve reliability and redundancy without the
drawback of having to calculate parity.
</para></listitem>
</varlistentry>
</variablelist>
To sum it up:
<informaltable>
<tgroup cols="5">
<thead>
<row>
<entry>Type</entry>
<entry>Minimum Devices</entry>
<entry>Spare Device</entry>
<entry>Survives disk failure?</entry>
<entry>Available Space</entry>
</row>
</thead>
<tbody>
<row>
<entry>RAID0</entry>
<entry>2</entry>
<entry>no</entry>
<entry>no</entry>
<entry>Size of the smallest partition multiplied by number of devices in RAID</entry>
</row>
<row>
<entry>RAID1</entry>
<entry>2</entry>
<entry>optional</entry>
<entry>yes</entry>
<entry>Size of the smallest partition in RAID</entry>
</row>
<row>
<entry>RAID5</entry>
<entry>3</entry>
<entry>optional</entry>
<entry>yes</entry>
<entry>
Size of the smallest partition multiplied by (number of devices in
RAID minus one)
</entry>
</row>
<row>
<entry>RAID6</entry>
<entry>4</entry>
<entry>optional</entry>
<entry>yes</entry>
<entry>
Size of the smallest partition multiplied by (number of devices in
RAID minus two)
</entry>
</row>
<row>
<entry>RAID10</entry>
<entry>2</entry>
<entry>optional</entry>
<entry>yes</entry>
<entry>
Total of all partitions divided by the number of chunk copies (defaults to two)
</entry>
</row>
</tbody></tgroup></informaltable>
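As a worked example of the last column: with four 100 GB partitions,
RAID0 would give roughly 400 GB of usable space, RAID1 100 GB,
RAID5 300 GB, RAID6 200 GB, and RAID10 (with the default two copies)
200 GB. The exact figures are slightly lower because of metadata
overhead.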
</para><para>
If you want to know more about Software RAID, have a look
at the <ulink url="&url-software-raid-howto;">Software RAID HOWTO</ulink>.
</para><para>
To create an MD device, you first need to mark the partitions that it
should consist of for use in a RAID. (This is done in
<command>partman</command> in the <guimenu>Partition
settings</guimenu> menu where you should select <menuchoice>
<guimenu>Use as:</guimenu> <guimenuitem>physical volume for
RAID</guimenuitem> </menuchoice>.)
</para><note><para>
Make sure that the system can be booted with the partitioning scheme
you are planning. In general it will be necessary to create a separate
file system for <filename>/boot</filename> when using RAID for the root
(<filename>/</filename>) file system.
Most boot loaders <phrase arch="x86">(including lilo and grub)</phrase>
do support mirrored (not striped!) RAID1, so using, for example, RAID5 for
<filename>/</filename> and RAID1 for <filename>/boot</filename> can be
an option.
</para></note><para>
Next, you should choose <guimenuitem>Configure software
RAID</guimenuitem> from the main <command>partman</command> menu.
(The menu will only appear after you mark at least one partition for
use as <guimenuitem>physical volume for RAID</guimenuitem>.)
On the first screen of <command>mdcfg</command> simply select
<guimenuitem>Create MD device</guimenuitem>. You will be presented with
a list of supported types of MD devices, from which you should choose
one (e.g. RAID1). What follows depends on the type of MD you selected.
</para>
<itemizedlist>
<listitem><para>
RAID0 is simple: you will be shown the list of available
RAID partitions and your only task is to select the partitions which
will form the MD.
</para></listitem>
<listitem><para>
RAID1 is a bit more tricky. First, you will be asked to enter the
number of active devices and the number of spare devices which will
form the MD. Next, you need to select from the list of available RAID
partitions those that will be active and then those that will be
spare. The count of selected partitions must be equal to the number
provided earlier. Don't worry. If you make a mistake and
select a different number of partitions, &d-i; won't let you
continue until you correct the issue.
</para></listitem>
<listitem><para>
RAID5 has a setup procedure similar to RAID1 with the exception that you
need to use at least <emphasis>three</emphasis> active partitions.
</para></listitem>
<listitem><para>
RAID6 also has a setup procedure similar to RAID1 except that at least
<emphasis>four</emphasis> active partitions are required.
</para></listitem>
<listitem><para>
RAID10 again has a setup procedure similar to RAID1 except in expert
mode. In expert mode, &d-i; will ask you for the layout.
The layout has two parts. The first part is the layout type. It is either
<literal>n</literal> (for near copies), <literal>f</literal> (for far
copies), or <literal>o</literal> (for offset copies). The second part is
the number of copies to make of the data. There must be at least that
many active devices so that all of the copies can be distributed onto
different disks. (The same layout notation is used in the command-line
sketch just after this list.)
</para></listitem>
</itemizedlist>
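<para>
For reference, the layout names above correspond to the notation used by
the <command>mdadm</command> tool on an installed system. The following
sketch (the device names are only examples) would create a RAID10 array
with far copies, keeping two copies of each block:

<informalexample><screen>
# mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
</screen></informalexample>

Inside the installer you do not need to run <command>mdadm</command>
yourself; <command>mdcfg</command> asks the equivalent questions
interactively.
</para>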
<para>
It is perfectly possible to have several types of MD at once. For
example, if you have three 200 GB hard drives dedicated to MD, each
containing two 100 GB partitions, you can combine the first partitions on
all three disks into a RAID0 (a fast 300 GB partition for video editing)
and use the other three partitions (2 active and 1 spare) for RAID1
(a quite reliable 100 GB partition for <filename>/home</filename>).
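Outside the installer, an equivalent layout could be created with
<command>mdadm</command> roughly as follows (a sketch only; the partition
names are assumptions for this example). The first array is the striped
video-editing partition, the second the mirror with one spare for
<filename>/home</filename>:

<informalexample><screen>
# mdadm --create /dev/md0 --level=0 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 --spare-devices=1 \
        /dev/sda2 /dev/sdb2 /dev/sdc2
</screen></informalexample>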
</para><para>
After you have set up the MD devices to your liking, you can select
<guimenuitem>Finish</guimenuitem> in <command>mdcfg</command> to return
to <command>partman</command> to create filesystems on your new MD
devices and assign them the usual attributes like mountpoints.
</para>
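<para>
Once the installed system is running, you can check the state of your
arrays at any time; for example (the device name is again only an
illustration):

<informalexample><screen>
# cat /proc/mdstat
# mdadm --detail /dev/md0
</screen></informalexample>

<command>mdadm --detail</command> lists the member partitions, the RAID
level and whether the array is degraded.
</para>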
</sect3>