Installing EVMS
These instructions will help you make informed decisions about how to install
and configure EVMS for your system. In order to get the best understanding of
the various options, please read these instructions completely before beginning
the installation process.
These instructions are almost certainly out-of-date. The most up-to-date copy
of these instructions can be found at:
http://evms.sourceforge.net/install/
Contents
========
1. Downloading Packages
2. Patching, Configuring, and Building Your Kernel
3. Building and Installing the EVMS Tools
4. Activating Your EVMS Volumes
5. Configuring Your Boot-Loader For EVMS
6. Mounting Your Root Filesystem Through EVMS
7. Configuring EVMS for a High-Availability Cluster
===============================================================================
1. Downloading Packages
Before building and installing EVMS, you must first download the EVMS package
and a few other related packages.
NOTE: These example commands assume the files will be untarred in the
/usr/src directory. Other directories will work just as well.
1. EVMS
* EVMS Source Package
Download the latest EVMS source package from SourceForge
(http://sf.net/project/showfiles.php?group_id=25076). The current
version is evms-2.5.2.tar.gz. This file contains all of the source code
for the user-space administration tools, as well as some patches for
the Linux kernel. After downloading the file, untar it in an
appropriate place, using the following commands:
cd /usr/src
tar xvzf evms-2.5.2.tar.gz
* EVMS Init-Ramdisk Image
In order to mount your root filesystem through an EVMS volume, you will
need an init-ramdisk. EVMS provides a sample initrd image, which you
can download from the EVMS website
(http://sf.net/project/showfiles.php?group_id=25076). Get the file
evms-2.5.2-initrd.gz and save it in your /boot directory.
* Updated EVMS Patches
Between EVMS releases, updated patches and bug-fixes are uploaded to
the Extra-Patches section of the EVMS website
(http://evms.sf.net/patches/). Each release has its own directory, and
each release directory has an engine subdirectory containing updated
patches for the tools, and a kernel subdirectory containing updated
patches for the kernel. Please check to see if any extra patches are
currently available for the 2.5.2 release, and download them.
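For example, assuming a hypothetical patch named fix-example.patch had been
posted for this release, downloading it might look like this (the actual
file names depend on what is currently in the Extra-Patches directories):
cd /usr/src
wget http://evms.sf.net/patches/2.5.2/engine/fix-example.patch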
2. Linux Kernel
If you don't already have a Linux kernel source tree, download the latest
version from kernel.org (http://www.kernel.org/). The most recent 2.4
kernel is 2.4.29
(ftp://ftp.kernel.org/pub/linux/kernel/v2.4/linux-2.4.29.tar.bz2), and
the most recent 2.6 kernel is 2.6.10
(ftp://ftp.kernel.org/pub/linux/kernel/v2.6/linux-2.6.10.tar.bz2),
along with the 2.6.11-rc5 patch
(ftp://ftp.kernel.org/pub/linux/kernel/v2.6/testing/patch-2.6.11-rc5.bz2).
Untar the kernel package using the following commands:
cd /usr/src
tar xvjf linux-2.6.10.tar.bz2
bunzip2 patch-2.6.11-rc5.bz2
mv linux-2.6.10 linux-2.6.11-rc5
cd linux-2.6.11-rc5
patch -p1 < ../patch-2.6.11-rc5
3. Device-Mapper
Download the latest Device-Mapper package from the Red Hat FTP server
(ftp://sources.redhat.com/pub/dm/). The most recent release
is device-mapper.1.00.21.tgz. Untar the Device-Mapper package using the
following commands:
cd /usr/src
tar xvzf device-mapper.1.00.21.tgz
4. Extra Device-Mapper Patches
The most recent kernel patches for Device-Mapper are on the DM Resources
web page (http://sources.redhat.com/dm/). Download one of these packages,
depending on which kernel version you are using. For 2.4 kernels, there
are currently no extra patches to download. For 2.6 kernels, download
2.6.11-rc3-udm2.tar.bz2. Untar the extra Device-Mapper patches using the
following commands:
cd /usr/src
tar xvjf 2.6.11-rc3-udm2.tar.bz2
cd 2.6.11-rc3-udm2
rm 00001.patch
rm 00002.patch
rm 00014.patch
rm 00016.patch
5. LILO
If you are using LILO (http://freshmeat.net/projects/lilo/) as your
boot-loader, you should download the most recent version of LILO, so
you'll be able to mount your /boot filesystem using an EVMS volume. The
most recent version is lilo-22.6.1
(http://home.san.rr.com/johninsd/pub/linux/lilo/lilo-22.6.1.tar.gz).
Untar the LILO package using the following commands:
cd /usr/src
tar xvzf lilo-22.6.1.tar.gz
If you are using Grub as your boot-loader, the version that you are
currently running will work fine.
With either LILO or Grub, there are limitations on what type of volume
can be used for your /boot filesystem. The boot-loader configuration
section (section 5) provides more details about these limitations.
6. LILO-Devmapper Patch
If you are using LILO as your boot-loader, you will need an extra patch
from Christophe Saout (http://www.saout.de/misc/) to allow LILO to
recognize Device-Mapper devices. The latest patch is
lilo-22.6.1-devmapper.patch. This is a simple patch file, so it does not
need to be untarred.
As with the previous step, if you are using Grub as your boot-loader,
this patch is unnecessary.
7. Linux-HA
If you plan to use EVMS in a high-availability cluster, you will need to
get the Linux-HA software (http://www.linux-ha.org/). The latest version
is heartbeat-1.2.2, which is available on the Linux-HA download page
(http://www.linux-ha.org/download/). You should get the base package,
as well as the pils and stonith sub-packages. These packages are
available as source tarballs, as well as in binary rpm and deb formats.
If you will be using EVMS on a stand-alone system, you won't need any
of the Linux-HA packages.
===============================================================================
2. Patching, Configuring, and Building Your Kernel
These instructions are only intended to cover the steps necessary to patch and
configure your kernel for use with EVMS. For general instructions on
configuring and compiling a Linux kernel, please see the Kernel HOWTO
(http://www.tldp.org/HOWTO/Kernel-HOWTO.html).
The new Device-Mapper driver with the version-4 ioctl interface is now
available. EVMS works with either this new version of Device-Mapper or with
version 3, which was used by EVMS 2.1.0 and earlier releases.
If you already have a 2.4 kernel patched with Device-Mapper and an earlier
version of EVMS, you can continue to use that same kernel with EVMS 2.5.2, and
you can skip to the next section of these instructions. However, we recommend
upgrading to the new version of Device-Mapper if possible.
NOTE: Debian users should first read through the file
/usr/share/doc/kernel-patch-evms/README.Debian for instructions on
automatically patching and building your kernel with make-kpkg.
1. Base Device-Mapper Driver
The Device-Mapper driver is already present in the 2.6 kernel. However,
the 2.4 kernel does not yet include Device-Mapper, so the full driver
must be patched in. The patches are located in the Device-Mapper package
downloaded in the previous section (1.3).
* 2.6 kernels
(Base Device-Mapper driver is already included. Skip to step 2.)
* 2.4.22 through 2.4.29 kernels
cd /usr/src/linux-2.4.29
patch -p1 < /usr/src/device-mapper.1.00.21/patches/linux-2.4.28-pre4-devmapper-ioctl.patch
* 2.4.21 kernel
cd /usr/src/linux-2.4.21
patch -p1 < /usr/src/device-mapper.1.00.21/patches/linux-2.4.21-devmapper-ioctl.patch
2. Extra Device-Mapper Patches
As mentioned in the download section (1.4), some recent patches and
bug-fixes for Device-Mapper are available on the DM Resources web page.
Some of these have not yet made it into the 2.6 kernel tree or into the
official Device-Mapper package.
* 2.6.11-rc5 kernel
You only need to apply these extra patches if you wish to use the
multipath or BBR features in EVMS.
cd /usr/src/linux-2.6.11-rc5/
cat /usr/src/2.6.11-rc3-udm2/*.patch | patch -p1
* 2.4.20 through 2.4.29 kernels
(No extra patches necessary at this time.)
3. EVMS Patches
In addition to the base MD and Device-Mapper support, EVMS requires a few
additional patches for certain EVMS features to work correctly with these
drivers. These patches are provided in the EVMS package, in the
kernel/2.4/ and kernel/2.6/ subdirectories. See the INDEX files in those
directories for descriptions of the patches.
A. Snapshotting
If you will be using the Snapshot plugin in EVMS, apply the following
patches.
* 2.6 kernels
(No extra patches necessary at this time.)
* 2.4.22 through 2.4.29 kernels
patch -p1 < /usr/src/device-mapper.1.00.21/patches/linux-2.4.22-VFS-lock.patch
patch -p1 < /usr/src/evms-2.5.2/kernel/2.4/dm-snapshot.patch
* 2.4.21 kernel
patch -p1 < /usr/src/device-mapper.1.00.21/patches/linux-2.4.21-VFS-lock.patch
patch -p1 < /usr/src/evms-2.5.2/kernel/2.4/dm-snapshot.patch
patch -p1 < /usr/src/evms-2.5.2/kernel/2.4/jfs.patch
* XFS Users
If you are using an XFS-enabled 2.4.21 kernel, apply this extra
patch in addition to the above patches. This patch is not necessary
for 2.4.22 or later kernels.
patch -p1 < /usr/src/evms-2.5.2/kernel/2.4/vfs-lock-xfs.patch
B. Bad-Block-Relocation
If you will be using the BBR plugin in EVMS, apply the following
patches.
* 2.6 kernels
patch -p1 < /usr/src/evms-2.5.2/kernel/2.6/dm-bbr.patch
* 2.4 kernels
patch -p1 < /usr/src/evms-2.5.2/kernel/2.4/dm-bbr.patch
C. Software-RAID
If you will be using the MD RAID-1 or RAID-5 plugins in EVMS, apply
the following patches.
* 2.6 and 2.4.22 through 2.4.29 kernels
(No extra patches necessary at this time.)
* 2.4.21 kernel
patch -p1 < /usr/src/evms-2.5.2/kernel/2.4/md-raid.patch
D. BD-Claim Patch
The 2.6 kernels now prevent multiple "owners" of a block-device. This
means that the stock kernel will not allow you both to mount a
filesystem on one of the kernel's built-in disk partitions and to use
EVMS to activate volumes on that same disk.
More specifically, the kernel has its own partitioning code that runs
when the kernel boots, and provides the traditional partition devices
(e.g. /dev/hda1). When a filesystem mounts one of these partitions, the
filesystem "claims" that partition and no one else can claim it. When
this happens, the kernel's partitioning code (not the filesystem code)
also claims the underlying disk, meaning that disk is only available
for use by the kernel's built-in partitions on that disk. Other
filesystems may mount other partitions on that disk, but the disk
itself is "owned" by the partitioning code.
However, in order to allow easy management of partitions, EVMS does
its own partition detection, and creates devices to represent those
partitions using Device-Mapper (not the kernel's built-in partitioning
code). When DM creates a device, it also attempts to claim the
underlying devices (in this case the disk that holds the partition).
But, if the user has already mounted one of the kernel's built-in
partitions on that same disk, then the disk will already have been
claimed. DM will be unable to claim it, and the DM device activation
will fail.
The end result is that a single disk cannot be used both for EVMS and
for mounting the kernel's built-in partitions.
There are three solutions to this problem.
1. Switch to using EVMS for *all* your volumes and partitions. If none
of the kernel's built-in partitions are mounted, then there won't
be any conflicts when DM tries to claim the disks. This is, of
course, the preferred solution, but also requires some extra work
on your part to convert to mounting your root filesystem using an
EVMS volume. Please see section 6 of this install guide as well
as http://evms.sf.net/convert.html for more details on this option.
2. Tell EVMS to exclude any disks that contain partitions that you are
going to mount using the kernel's built-in partitions. You can do
this by adding the names of these disks to the
"sysfs_devices.exclude" line in your /etc/evms.conf file (see the
example after this list). If you choose this option, EVMS will
completely ignore the specified disks and not discover any of the
partitions or volumes on those disks.
3. Apply this patch, which reverses the patch that prevents
Device-Mapper and the kernel's built-in partitions from using the
same disk at the same time. This patch is not supported by the
kernel community, and in fact removes functionality that they
specifically added. However, it will allow you to share your disks
between EVMS and the kernel's built-in partitioning code, if that's
the choice you wish to make for your system.
patch -p1 < /usr/src/evms-2.5.2/kernel/2.6/bd-claim.patch
This issue does not exist on 2.4 kernels.
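For the second option above, the exclude entry in /etc/evms.conf might
look something like the following sketch, assuming the kernel's built-in
partitions on hda are the ones you want to keep mounting. The exact list
syntax is described in the comments of the installed evms.conf, so treat
this only as a starting point.
sysfs_devices {
	exclude = [ hda ]
}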
E. Update Patches
If you downloaded any EVMS update-patches for the kernel (section 1.1),
they should be applied now, before configuring and building the kernel.
The top of each patch file contains a description of the patch, why it
is necessary, and instructions on how to apply the patch to the kernel
source tree.
4. Configure the Kernel
After patching the kernel, the next step is configuring it with the
required support. To configure the kernel, complete the following steps:
A. Type the following command:
make xconfig
NOTE: You can also use make config or make menuconfig.
B. Select the Main Menu->Code Maturity Level Options menu and enable the
following option:
<y> Prompt for development and/or incomplete code/drivers
C. To enable MD and DM support, select the Main Menu->Multi-Device
Support (RAID and LVM) menu, and select the following options. These
drivers can also be built as modules if desired.
<y> Multiple devices driver support (RAID and LVM)
<y> RAID support
<y> RAID-1 (mirroring) mode
<y> RAID-4/RAID-5 mode
<y> Device mapper support
<y> Crypt target support (only applicable on 2.6 kernels)
<y> Multipath target (only applicable on 2.6 kernels)
<y> Snapshot target (only applicable on 2.6 kernels)
<y> Mirror target
<y> Zero target (only applicable on 2.6 kernels)
<y> Flakey target (only applicable on 2.6 kernels)
<y> Bad Block Relocation Device Target
D. To enable init-ramdisk support, select the Main Menu->Block devices
menu, and select the following options.
<y> Loopback device support
<y> RAM disk support
(4096) Default RAM disk size
<y> Initial RAM disk (initrd) support
Loopback can be built as a module. The remaining options cannot be
built as modules.
E. If you wish to use devfs (the kernel device-filesystem), you should
also configure your kernel to automatically mount devfs on /dev at
boot. This is necessary for the sample init-ramdisk to work properly
when activating your root volume (see later sections (section 6) for
more details).
NOTE: EVMS does not require devfs, and the EVMS team has no specific
recommendations about using it or not using it. If you do not
wish to use devfs, leave both of these options off.
In the Main Menu->File Systems menu, select the following options.
<y> /dev file system support (EXPERIMENTAL)
<y> Automatically mount at boot
Continue configuring your kernel as required for your system and hardware.
When you have finished configuring your kernel, choose Save and Exit to
quit the kernel configuration.
5. Build the Kernel
Once you have configured the kernel, you will need to build the kernel.
A. Type the following command:
* 2.6 kernels
make
make modules_install
* 2.4 kernels
make dep
make bzImage
make modules
make modules_install
B. Copy the new kernel to the appropriate location (usually in /boot).
NOTE: On Intel machines, use arch/i386/boot/bzImage.
C. If you use LILO as your boot-loader, add an appropriate entry to your
/etc/lilo.conf file and run lilo to install the new kernel image.
D. Re-boot your machine to start the new kernel.
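After rebooting, one quick sanity check is to verify that the
Device-Mapper driver registered its control device (if you built
Device-Mapper as a module, load it first with modprobe dm-mod):
grep device-mapper /proc/misc
If the driver is present, this should print a line containing
"device-mapper".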
===============================================================================
3. Building and Installing the EVMS Tools
The EVMS Engine consists of all the user-space administration tools,
libraries, and plug-ins for EVMS.
1. Apply Update Patches
If you downloaded any update-patches for the EVMS tools (section 1.1),
they should be applied now, before configuring and building the tools.
The top of each patch file contains a description of the patch, when and
why it is necessary, and instructions on how to apply the patch to the
source tree.
2. Configure EVMS
cd /usr/src/evms-2.5.2/
./configure [--options]
Select the appropriate options for your configuration. Some of the more
important ones are listed here. If you do not specify any options to the
./configure command, the most appropriate settings for your system will
be used.
--prefix=dir
The default installation path is /.
--libdir=dir
The directory to install the main engine library. The default path is
${prefix}/lib. The EVMS plugin libraries will be installed in the
evms subdirectory of this path.
--sbindir=dir
The directory to install all EVMS user-interface binaries. The default
path is ${prefix}/sbin.
--disable-"plugin-name"
By default, all EVMS plug-ins are compiled (unless a plug-in has
dependencies that are not satisfied on the building machine). This
option allows the user to remove one or more plug-ins from the build.
Acceptable options for "plugin-name" are: bbr, bbr_seg, bsd, csm,
disk, dos, drivelink, ext2, gpt, ha, jfs, lvm, lvm2, mac, md,
multipath, ntfs, ogfs, reiser, replace, rsct, s390, snapshot, swap,
and xfs.
--disable-"interface-name"
By default, all EVMS user interfaces are compiled (unless an interface
has dependencies that are not satisfied on the building machine). This
option allows the user to remove one or more interfaces from the
build. Acceptable options for "interface-name" are: cli, gui,
text-mode, and utils.
--with-debug
Include extra debugging information when building EVMS. This option
is only necessary if you intend to run EVMS within a debugger.
--with-efence
Specify this if the engine should be linked with the ElectricFence
memory-debugging library. You must have libefence installed on your
system for this option to work.
--with-static-glib
Specify this if the text-mode UI should be statically linked against
the glib and panel libraries. This should make it possible to run
evmsn without /usr being mounted.
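As an illustration, a hypothetical configuration that installs under
/usr, skips the GUI, and includes debugging information might look like
this:
cd /usr/src/evms-2.5.2/
./configure --prefix=/usr --disable-gui --with-debug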
3. Build and Install EVMS
After the engine is configured, use the following commands to build and
install the tools.
make
make install
ldconfig
Unless you specified other directories, the following list describes
where files will be installed on your system:
* The core Engine library will be installed in /lib.
* All plug-in libraries will be installed in /lib/evms/2.5.2.
* All user interface binaries will be installed in /sbin.
* The EVMS man pages will be installed in /usr/man/man8.
* The EVMS header files will be installed in /usr/include/evms.
* The EVMS configuration file will be installed in /etc.
* The EVMS failover script will be installed in /etc/ha.d/resource.d
(Only applicable when building the HA plugin).
If you specified your own installation path, you will need to add the
Engine library path to your LD_LIBRARY_PATH environment variable, or to
your /etc/ld.so.conf file. Do not add the plug-in library path because
the Engine will dynamically load these libraries directly.
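For example, if you had configured with a hypothetical --prefix=/opt/evms,
the Engine library would land in /opt/evms/lib, and you could register
that path like this:
echo /opt/evms/lib >> /etc/ld.so.conf
ldconfig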
4. Edit the Configuration File
Use your favorite text editor to examine the EVMS configuration file,
located at /etc/evms.conf. This file contains settings to control how
EVMS operates. For example, the logging level, the location of the engine
log, and the list of disk devices to examine can all be controlled
through settings in the configuration file. The sample file is well
commented, and will advise you of appropriate values for each setting.
The configuration file is normally installed as /etc/evms.conf. However,
if you already have a configuration file from a previous version of EVMS,
the new one will be installed as /etc/evms.conf.sample. You should
examine the new sample to see if your existing file should be updated.
5. Check For Virtual Filesystems
EVMS requires the procfs filesystem to be mounted on /proc in order to
find the Device-Mapper driver. By now, all distributions should
automatically mount /proc at boot-time, so this probably isn't an issue
for you.
However, when running on a 2.6 kernel, EVMS also requires that the sysfs
filesystem be mounted in order to get information about the disks on your
system. Sysfs is the new virtual filesystem that provides information
about the devices and drivers on your system. It is similar to procfs,
and the generally agreed-upon location for mounting sysfs is /sys.
However, most distros do not yet automatically mount this filesystem, so
you will probably want to add a new entry for it to your /etc/fstab file.
The procfs and sysfs entries in /etc/fstab should look something like
this:
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
6. Start EVMS
Now that the tools have been installed (and if you have booted your
Device-Mapper-enabled kernel), you can begin using EVMS by using one of
the following commands.
* evmsgui (to start the graphical interface)
* evmsn (to start the text-mode interface)
* evms (to start the command-line interface)
7. Install the Device-Mapper library and tools
EVMS does not rely on the Device-Mapper user-space library or
command-line tools, but they can come in very handy in debugging
situations. However, the Device-Mapper library *is* required if you
are using LILO as your boot-loader and your /boot directory is on an
EVMS volume (see chapter 5 for more details). Thus, we recommend you
install this library and tool so they are available in the (hopefully
unlikely) event that you need to report a problem with EVMS.
To install libdevmapper and dmsetup, please see the INSTALL file in the
/usr/src/device-mapper.1.00.21/ directory for full instructions. However,
the standard build and install can usually be performed with the
following commands.
cd /usr/src/device-mapper.1.00.21/
./configure
make
make install
./scripts/devmap_mknod.sh
===============================================================================
4. Activating Your EVMS Volumes
In the old EVMS design (releases 1.2.1 and earlier), volume discovery was
performed in the kernel, and all volumes were immediately activated at boot
time. With the new EVMS design, volume discovery is performed in user-space,
and volumes are activated by communicating with the kernel. Thus, in order to
activate your volumes, you must open one of the EVMS user-interfaces and
perform a save, which will activate all inactive volumes.
* For instance, start the GUI by running evmsgui. Initially, the
checkboxes in the "Active" column will all be empty. Press the "Save"
button, which will activate all of the volumes; each of those checkboxes
should then be filled in.
In addition to manually starting one of the EVMS UIs, there is a utility called
evms_activate. This utility simply opens the EVMS engine and issues a save
command. You should add a call to evms_activate to your boot scripts in order
to automatically activate your volumes at boot time. If you have volumes listed
in your /etc/fstab file, you will need to call evms_activate before the fstab
file is processed.
NOTE: As mentioned in the previous section (3.5), EVMS requires /proc (and
/sys on 2.6 kernels) to be mounted before the tools will run correctly.
If you run evms_activate before processing the fstab file, you may
need to manually mount and unmount /proc (and /sys for 2.6) around the
call to evms_activate.
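As a sketch, a boot script fragment that handles this might look like the
following (the sysfs lines apply only to 2.6 kernels; adapt the details
to your distro's init scripts):
mount -t proc proc /proc
mount -t sysfs sysfs /sys
/sbin/evms_activate
umount /sys
umount /proc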
EVMS also provides a sample SysV init script for activating the EVMS volumes
for use on Linux distributions that use SysV init scripts at boot time (most of
the major distros do). This sample script is called init.d_evms and can be
found in the doc directory in the EVMS source package. You should copy this
file to the appropriate directory for your distro (probably /etc/init.d or
/etc/rc.d/init.d), and rename it evms. Then create a symbolic link in the
appropriate runlevel subdirectory, so this script is automatically run at boot
time.
* For example, on a United-Linux-based distro (like SuSE), you might run the
following commands.
cd /usr/src/evms-2.5.2/doc/
cp init.d_evms /etc/init.d/evms
cd /etc/init.d/boot.d/
ln -s ../evms S03evms
Once the volumes are activated, you may mount them in the normal fashion, using
the device-nodes in the /dev/evms/ directory.
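For example, to mount a hypothetical volume named My_Volume:
mount /dev/evms/My_Volume /mnt/data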
===============================================================================
5. Configuring Your Boot-Loader For EVMS
Currently, there are two boot-loaders commonly in use on Linux:
LILO (http://freshmeat.net/projects/lilo/) and
Grub (http://www.gnu.org/software/grub/).
The bootloader you are running will determine how you will access your /boot
filesystem through an EVMS volume. (If /boot is not on its own volume, then
this discussion applies to the root filesystem instead, since that is where
/boot will reside.)
NOTE: There are some limitations on the type of volume that can be used to
hold your /boot filesystem, regardless of which boot-loader you're
using. This volume must be created from a simple disk
segment/partition, or from a raid-1 region on top of simple segments.
Using a volume created from LVM regions to hold /boot is not supported
at this time. The volume itself can be either an EVMS or a
compatibility volume.
* LILO Users
After compiling a new kernel, you run the lilo command to record the
kernel's location in a place that is accessible at boot time. LILO does
this by asking the filesystem for a list of the blocks that make up the
kernel image. It then translates this list of blocks to a list of sectors
on the raw disk. This list of sectors is recorded and accessible to LILO
at boot time. However, LILO can only generate this list when the kernel
image is on a regular partition. For more complex volumes, it must ask the
kernel driver for that volume to translate the sector location within the
volume to a sector location on the disk.
Currently, LILO does not natively support Device-Mapper devices. However,
Christophe Saout has created a patch for LILO to work with Device-Mapper,
in conjunction with libdevmapper. This patch has been tested and works
with a limited set of EVMS volumes (as described in the note above).
To use LILO with EVMS volumes, follow these steps.
1. Be sure you have installed the Device-Mapper library and tools as
described in section 3.7.
2. Apply the lilo-devmapper patch to LILO.
cd /usr/src/lilo-22.6.1/
patch -p1 < /usr/src/lilo-22.6.1-devmapper.patch
3. Build and install LILO.
Please see the README and QuickInst files in the lilo-22.6.1
directory for full instructions on building and installing LILO. For
the most common setup, you can simply use the following commands, which
should preserve your existing /etc/lilo.conf file (but making a backup
copy would also be a wise idea).
cd /usr/src/lilo-22.6.1/
make
make install
* Grub Users
Grub works differently than LILO, in that it contains native support for
partitions and filesystems. At boot time, it finds the partition
containing /boot and looks for its configuration file in the filesystem.
It uses this config file to locate the kernel image, which it then loads
into memory.
Grub is bound by the same limitations on the type of volume used for /boot
as described in the above note.
Now that your boot-loader is properly configured, you can mount your /boot
filesystem using the appropriate EVMS volume device-node (e.g. /dev/evms/hda1
or /dev/evms/Boot).
===============================================================================
6. Mounting Your Root Filesystem Through EVMS
Now that volume discovery and activation are done in user-space, there is an
issue with having your system's root filesystem on an EVMS volume. In order for
the root filesystem's volume to be activated, the EVMS tools must run. But in
order to get to the EVMS tools, the root filesystem must be mounted.
The solution to this dilemma is to use an initial ramdisk (initrd). This is a
ram-based device that acts as a temporary root filesystem at boot time, and
provides the ability to run programs and load modules that are necessary to
activate the true root filesystem.
In order to simplify the process of setting up an initrd, a usable sample
initrd is provided on the EVMS website; you downloaded it earlier
(section 1.1) and saved it in the /boot directory. This compressed initrd image
contains pre-compiled EVMS 2.5.2 tools, and a linuxrc script to run
evms_activate to activate your volumes so the root filesystem can be mounted.
You can use this initrd image without modification if your kernel has the
following features compiled in statically (not as modules):
* Device-Mapper
* MD (if you use Software-RAID in your volumes)
* Drivers for your disk drives (IDE and/or SCSI)
* RAM Disk and Initrd
* Ext2 Filesystem
* The type of filesystem used for your root filesystem.
You must also have an "/initrd" directory in your root filesystem. At bootup,
once the EVMS volumes are activated, the root volume will be mounted within
the initrd. Then the linuxrc script will "pivot" the root filesystem from the
initrd to the real root filesystem and the initrd will end up mounted at
/initrd. If you do not already have this directory, create it now:
mkdir /initrd
This initrd image will work without devfs. It will also work with devfs, as
long as you configure your kernel to automatically mount devfs at boot-time
and you normally run devfsd.
Boot-Loader Setup
To get your kernel to load and run this initrd image and to mount your root
filesystem using your EVMS volumes, you just have to make a small change to
your boot-loader configuration file.
* LILO Users
Edit your /etc/lilo.conf file. If you haven't already, add an image
section for the kernel you will be using with EVMS. The section should
look something like this:
image = /boot/vmlinuz-2.4.29
label = 2.4.29
read-only
initrd = /boot/evms-2.5.2-initrd.gz
append = "ramdisk=8192 root=/dev/evms/Root_Volume"
The "image" and "label" lines specify the kernel image you want to boot,
and what it should be called in the LILO menu. The "initrd" line specifies
where to find the initrd image that you downloaded. The "append" line tells
the kernel how big the initrd image is (in kilobytes, uncompressed), as
well as where to find your root filesystem. You should obviously replace
"/dev/evms/Root_Volume" with the name of the EVMS volume that contains
your root filesystem.
If you need to pass any special mount options to your root filesystem, or
if your root filesystem is one that the kernel cannot auto-detect, you can
also specify the "rootflags=" and "rootfstype=" options on the "append"
line in your /etc/lilo.conf file. The initrd will use these extra options
when initially mounting your root filesystem. See the
Documentation/kernel-parameters.txt file in the kernel source tree for
more details on these extra kernel parameters.
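As an illustration, an append line carrying such extra options might look
like this (the rootfstype and rootflags values here are purely
hypothetical):
append = "ramdisk=8192 root=/dev/evms/Root_Volume rootfstype=reiserfs rootflags=notail"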
Save your /etc/lilo.conf and run lilo.
* Grub Users
Edit your /boot/grub/menu.list file. If you haven't already, add a menu
entry for the kernel you will be using with EVMS. The entry should look
something like this:
title 2.4.29
kernel (hd0,0)/vmlinuz-2.4.29 ramdisk=8192 root=/dev/evms/Root_Volume
initrd (hd0,0)/evms-2.5.2-initrd.gz
The "title" is the label you would like to use for that kernel in the Grub
menu. The "kernel" line specifies where to find the kernel image, where to
find the root filesystem, and the size of the initrd image (in kilobytes,
uncompressed). You should obviously replace "/dev/evms/Root_Volume" with
the name of the EVMS volume that contains your root filesystem. The
"initrd" line specifies where to find the initrd image that you
downloaded. See the Grub documentation
(http://www.gnu.org/software/grub/manual/) for which (hdx,y) value to use
(in this example, the /boot filesystem is on its own partition, which is
the first partition on the first IDE disk).
If you need to pass any special mount options to your root filesystem, or
if your root filesystem is one that the kernel cannot auto-detect, you can
also specify the "rootflags=" and "rootfstype=" options on the "kernel"
line in your /boot/grub/menu.list file. The initrd will use these extra
options when initially mounting your root filesystem. See the
Documentation/kernel-parameters.txt file in the kernel source tree for
more details on these extra kernel parameters.
Save your /boot/grub/menu.list file.
Fstab Setup
In addition to the boot-loader configuration, your /etc/fstab file must also be
modified to indicate you want to mount your root filesystem using an EVMS
volume. Edit your /etc/fstab file, and find the line that specifies your root
filesystem. It will look something like this:
/dev/hda1 / ext3 defaults 1 1
The first item in this line is the root device. Simply change this device name
to the EVMS volume name. In this example, you just change /dev/hda1 to
/dev/evms/hda1.
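With that change, the example line becomes:
/dev/evms/hda1 / ext3 defaults 1 1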
At this point, if your kernel meets the above requirements, you can reboot your
system. After the kernel initializes itself, it will load the init-ramdisk,
which will run evms_activate. When the init-ramdisk finishes, your root
filesystem will be mounted using your EVMS volume.
--------------------------------
Modifying the Init-Ramdisk Image
If your setup doesn't exactly match the above requirements, you might be able
to make some simple modifications to the sample initrd image to make it work
properly on your system. A couple of examples of why you might want to modify
this initrd are:
* You need to load kernel modules from the initrd.
* You have a different devfs setup.
* You have other utilities you need to run from your initrd.
This initrd is 16 MB in size, but only about 1/4 of that is currently used.
This was done to provide you with enough extra space to add things like kernel
modules or other utilities that you need to run. (If you find that 16 MB is not
enough space, you might need to create a new initrd image from scratch.)
1. Mount the initrd
To modify the initrd, you first need to uncompress and mount the image.
Use the following commands:
cd /boot
gunzip evms-2.5.2-initrd.gz
cd /mnt
mkdir initrd
mount -o loop /boot/evms-2.5.2-initrd initrd/
In the /mnt/initrd/ directory you will see the directory layout of the
init-ramdisk. It looks very much like a stripped-down version of your
root filesystem. This init-ramdisk uses Busybox (http://www.busybox.net/),
which acts as a replacement for many of the commonly used Linux utilities,
but takes up a fraction of the space.
Browse through the directory structure to get an idea of the programs and
libraries that are currently available. Of particular interest is the
linuxrc script at the top-level of the init-ramdisk. This is the script
that is run when the initrd is loaded by the kernel. As you add extra
libraries, utilities, and/or kernel modules, you will need to make
modifications to the linuxrc script to make use of your additions.
2. Loading Kernel Modules
If you have compiled kernel modules that need to be loaded while the
initrd is running, those modules need to be copied to the initrd. You will
find your kernel modules in the /lib/modules/x.y.z/ directory tree, where
x.y.z is your kernel version. You will need to create a corresponding
directory on the initrd and copy the necessary modules to this new
directory. For example, if you compiled the MD/Software-RAID and Device-
Mapper drivers as modules, and you're running a 2.4.29 kernel, enter the
following commands.
mkdir -p /mnt/initrd/lib/modules/2.4.29/
cd /lib/modules/2.4.29/kernel/drivers/md/
cp *.o /mnt/initrd/lib/modules/2.4.29/
After copying the kernel modules, you must add the appropriate commands
to the linuxrc script to load the modules from the initrd. Continuing the
previous example, you would add the following commands to load the
MD/Software-RAID modules.
insmod md
insmod linear
insmod raid0
insmod raid1
insmod xor
insmod raid5
insmod multipath
3. Changing the Devfs Setup
There are a variety of ways to use devfs, and the sample initrd is set up
to allow for the use of devfs. If devfs is mounted automatically by the
kernel, the initrd will run devfsd (the devfs daemon). A devfsd.conf file
is included on the initrd that tells devfsd to set up the compatibility
symlinks in /dev. If you don't normally use this setup for devfsd, you
should either modify /mnt/initrd/etc/devfsd.conf, or copy your own
/etc/devfsd.conf to /mnt/initrd/etc/. For example, you might want to do
this if you prefer the /dev/ide/host0/bus0/target0/lun0/disc style names
instead of /dev/hda.
4. Running Other Programs
If you have other programs that need to be run from the init-ramdisk, you
should copy those to the initrd. Most likely, they should be copied into
the /mnt/initrd/bin or /mnt/initrd/sbin directories. You will also need
to add an appropriate call to this program in the linuxrc script.
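For example, to add a hypothetical helper program named my_utility:
cp /sbin/my_utility /mnt/initrd/sbin/
Then add a corresponding call, such as /sbin/my_utility, at the
appropriate point in the linuxrc script.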
5. Unmount the initrd
When you are done with your modifications, you just need to unmount and
compress the initrd image.
cd /boot
umount /mnt/initrd
gzip evms-2.5.2-initrd
If you are using LILO as your boot-loader, you also need to run the lilo
command after compressing the initrd image.
===============================================================================
7. Configuring EVMS for a High-Availability Cluster
1. Installing Linux-HA
In order to run EVMS in a high-availability cluster environment, you must
first have the Linux-HA software installed and configured. If you do not
already have Linux-HA installed, you should have downloaded the necessary
packages in a previous section.
These instructions will not cover the full details of installing and
setting up Linux-HA. Please read through the Linux-HA Getting Started
and FAQ 'n Tips documents for in-depth instructions on installing the
base Linux-HA software.
If you downloaded the RPM packages for Linux-HA, they can easily be
installed with the following command.
rpm -i heartbeat-pils-xxx.arch.rpm heartbeat-stonith-xxx.arch.rpm \
heartbeat-xxx.arch.rpm
where xxx is the version of heartbeat and arch is the architecture for
which the RPMs were built.
If you are running on a two-node cluster, it is important to configure
the STONITH package, which will provide the fail-over capabilities for
the cluster. EVMS will still operate without STONITH configured, but the
integrity of the data on shared volumes may be at risk. EVMS's ability to
reassign ownership of shared disks is limited to the support provided by
the cluster manager (Linux-HA). Please refer to the STONITH device manual
for instructions on how to configure the STONITH device.
2. Configuring EVMS for Linux-HA
Perform the following steps on all the cluster nodes.
A. Configure CCM Services
EVMS depends on CCM services offered by Heartbeat. Execute the
following command to configure CCM.
echo "respawn haclient /usr/lib/heartbeat/ccm" >> /etc/ha.d/ha.cf
B. Create Communication Channels
Linux-HA expects its clients to set up a private communication channel.
Create the following FIFOs.
mkfifo /var/lib/heartbeat/api/evms.req
mkfifo /var/lib/heartbeat/api/evms.rsp
chown root:haclient /var/lib/heartbeat/api/evms.req
chown root:haclient /var/lib/heartbeat/api/evms.rsp
chmod 200 /var/lib/heartbeat/api/evms.req
chmod 600 /var/lib/heartbeat/api/evms.rsp
C. Tell Linux-HA to Activate EVMS
To ensure the EVMS daemon is activated whenever Linux-HA starts,
execute the following command.
echo "respawn root /sbin/evmsd" >> /etc/ha.d/ha.cf
This command assumes that the evmsd daemon is installed in /sbin. If
you installed EVMS in a non-default location, change the above
command to refer to the correct location of evmsd.
3. Configuring EVMS for Fail-Over
NOTE: Ensure that evms_activate is run before heartbeat at system
start-up. If Linux-HA starts before EVMS, the fail-overs may not
work correctly.
NOTE: Linux-HA currently supports fail-over only on two-node clusters.
Clusters with more than two nodes cannot be configured for fail-over.
NOTE: Only private disk-groups (owned by a single node in the cluster)
can be failed over. Shared disk-groups are available to all nodes
in the cluster simultaneously, and thus their ownership will not
change when one node fails.
A. For each private disk-group that you wish to participate in fail-overs,
add an entry to the /etc/ha.d/haresources file.
For example, if disk-groups dg1 and dg2 are currently owned by node n1,
and should be failed-over together when n1 dies, then add the following
entry.
n1 evms_failover::dg1 evms_failover::dg2
For more details about the semantics of Linux-HA resource groups,
please see the Linux-HA Getting Started guide.
B. Validate that the /etc/ha.d/ha.cf and /etc/ha.d/haresources files are
the same on all nodes of the cluster.
C. Restart the Linux-HA cluster manager on all nodes by executing the
following command.
/etc/init.d/heartbeat restart
===============================================================================