EVMS Release 2.5.2
==================
See the INSTALL file for installation instructions. The instructions are also
available at http://evms.sourceforge.net/install/.
See the User-Guide at http://evms.sourceforge.net/user_guide/ for detailed
usage information. The User-Guide is also available in multiple formats on
The Linux Documentation Project web site at http://www.tldp.org/guides.html.
Important notes concerning this release:
1. Init-Ramdisk Changes
The EVMS sample init-ramdisk has gone through some changes to bring it
more up-to-date with common conventions for initrds. In particular, the
change-root method of mounting the root filesystem has been dropped in
favor of the pivot-root method. In addition, the error-handling has been
significantly improved, and will provide directions in the event that
something goes wrong during the activation and mounting of the root volume.
The only change that most users will actually notice is that the EVMS
initrd now requires that a /initrd directory be created on the root
filesystem before rebooting. Other less-noticeable changes include adding
support for detecting the "rootflags" and "rootfstype" kernel parameters.
If you specify these parameters, the EVMS initrd will use them when
mounting the root filesystem.
Thanks to Michel Bouissou and Syrius for their help in developing and
testing these updates to the EVMS initrd.
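As an illustration, a boot-loader entry for an EVMS root volume might pass these parameters as follows. The volume name, filesystem type, mount options, and file paths here are examples only; adjust them for your system:

```
# Illustrative GRUB (legacy) entry -- names and paths are examples.
title Linux on EVMS
        kernel /boot/vmlinuz-2.6.10 root=/dev/evms/root rootfstype=ext3 rootflags=noatime
        initrd /boot/initrd-evms.gz
```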
2. New MD Superblock
This release of EVMS now includes support for the new version of the
MD/Software-RAID superblock. The superblock is the piece of metadata that
EVMS and the MD kernel driver use to identify a device as belonging to a
Software-RAID region. The new superblock format is simpler, while providing
improved flexibility and scalability.
For the most part, the functionality of the RAID regions is not affected
by the superblock format. All the different RAID levels provide the same
options as before. The most noticeable change is that with the new
superblock, the MD kernel driver can have a resync of a RAID-1 or RAID-5
interrupted and later restart that resync from the point it left off.
IMPORTANT: The new superblock is only supported on 2.6.10 and later kernels.
The MD driver in the 2.4 kernel does not understand this format, so any
Software-RAID regions you create using the new superblock will only work
with 2.6 kernels. Also, the superblock format was modified slightly after
2.6.9 was released, so the EVMS support will only work with 2.6.10 and later
versions. And finally, there are a couple of minor MD bugs in 2.6.10 that need
to be fixed for the new superblock format to work correctly. Make sure
you've applied md-fixes.patch from the kernel/2.6/ directory.
3. Metadata Backup and Restore
EVMS now provides the capability of backing up all the metadata that
defines the current volume configuration. This backup information is
stored in a file which can later be used to restore all or parts of that
configuration in the event that the volume metadata is damaged or
corrupted.
EVMS metadata backups do not include any filesystem information. They are
strictly limited to the metadata that defines the volumes, storage-objects,
and containers in the system.
Two new tools are provided for using these backups: evms_metadata_backup
and evms_metadata_restore. Please see the manual pages for these tools,
as well as the corresponding section in the EVMS User-Guide for more
information about the new metadata backup capabilities.
4. LVM2 Mapping-Move
The ability to "move" all or portions of a region has now been added to
the LVM2 plugin. This is similar to the Move-PV and Move-Extent functions
in the LVM1 plugin. The new function in LVM2 is called Move-Mapping. Each
LVM2 region is made of one or more logical mappings, with each mapping
representing a contiguous area on one of the container's PV-objects. This
new function allows you to move a mapping to a different physically-
contiguous area in the container, and automatically copies the data to
that location. This copying can be performed while the region is mounted
and in use.
In addition to Move-Mapping, two other functions have been added to the
LVM2 plugin to assist with moves. The first, called Split-Mapping, allows
you to split a single mapping into two separate mappings at a given offset
within the mapping. This is helpful in situations where you want to move
a mapping but don't have enough contiguous freespace to move it all at
once. The second, called Merge-Mappings, allows you to find all the split
mappings that are actually consecutive on disk and merge them back into
a single logical mapping.
See the LVM2 appendix in the EVMS User-Guide for more details on how to
use the Move-Mapping functionality.
5. BBR Segments
- Metadata Update For All BBR Segments
In EVMS 2.4.0 and earlier versions, the size of BBR segments was always
calculated based on the size of the child object (during volume discovery
and when creating or resizing BBR segments). However, this calculation was
based on the child object's block-size, which is not a fixed value. If the
block-size changes, the BBR segment size and start could change, which
could lead to not properly discovering objects on top of the BBR segment.
This behavior has been seen frequently when switching from a 2.4 kernel to
a 2.6 kernel, since the two kernels provide different default block-sizes
for some disks.
To fix this behavior, we've updated the BBR metadata to include a size and
start field, so these values will not change depending on the underlying
disk's block-size. If your volume configuration contains any BBR segments,
the first time you run EVMS 2.4.1 (or later) it will detect the need for
the metadata update. EVMS will prompt you to update the metadata and save
changes to write the new metadata to disk.
IMPORTANT: Only perform this metadata update if all your volumes have been
discovered and activated correctly. You may want to skip the update
initially so you can check your volumes. If everything looks normal, you
can then restart the EVMS UI and complete the BBR metadata update.
If you notice that any of your volumes have not been discovered properly or
if you have any other configuration problems, please revert to a
version of EVMS and a version of the Linux kernel that are known to work
correctly. When you are back to a working configuration, upgrade to the
latest version of EVMS without changing kernels. Then you can complete the
BBR metadata update.
If you don't use BBR segments, then there is no metadata update for your
system.
6. Selective Activation
EVMS now allows users to specify which volumes and objects should be
activated and which should be left inactive.
There is a new section in the EVMS config file (/etc/evms.conf) called
"activate". This section has two entries, "include" and "exclude", which
work similarly to the entries in the legacy_devices and sysfs_devices
sections. The user can specify exact names of volumes or objects, or provide
a pattern to match multiple volume and object names. Everything in the
include list will be added to the list of volumes and objects to activate,
and then everything in the exclude list will be removed from this activation
list. Thus, if a name matches in both the include and exclude lists, the
exclude list has precedence.
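For example, an "activate" section that activates everything except objects
whose names match a pattern might look like this. The layout shown follows
the style of the other config-file sections, and the pattern is purely
illustrative:

```
# /etc/evms.conf (excerpt) -- the exclude pattern is an example only
activate {
        include = [ * ]
        exclude = [ /dev/evms/scratch* ]
}
```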
Activation and deactivation dependencies are automatically enforced. This
means that for an object to be activated, all of its child objects must
also be activated. Likewise, for an object to be deactivated, all of its
parent objects must also be deactivated. Specifying a volume or object in
the "include" list in the config file's "activate" section implies that all
child objects will also be included, and specifying an object in the
"exclude" list implies that all parents of that object will also be
excluded. (For clarity, volumes are always the highest parents in the stack
and disks are always the lowest children in the stack. See the TERMINOLOGY
file for more details.)
In addition to the new config file section, the EVMS user-interfaces offer
the ability to activate or deactivate a particular volume or object. These
options are available from the "Actions" menu in the GUI and text-mode UIs,
and also on the context pop-up menus for each volume or object. In addition,
the CLI provides new commands called "activate" and "deactivate". Upon
saving, the appropriate volume or object will be activated or deactivated
(along with any activation dependencies as mentioned above). Currently,
however, EVMS does not update the config file following a manual activation
or deactivation in the UIs. If the user does not also add the appropriate
entry to their config file, this activation or deactivation will be
temporary. The next time the user-interface runs and the state is saved,
objects that had been deactivated from the UI will be reactivated.
By default, all volumes and objects are included and none are excluded,
which will activate everything in the system. This matches the previous
behavior of EVMS.
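The include-then-exclude precedence rule can be sketched in a few lines of Python. This is only an illustration of the matching logic described above, not EVMS code; `select_active` and the sample volume names are invented for the example:

```python
import fnmatch

def select_active(names, include, exclude):
    """Sketch of EVMS selective activation: every name matching an
    include pattern is added to the activation list, then every name
    matching an exclude pattern is removed, so exclude wins on conflict."""
    included = {n for n in names
                if any(fnmatch.fnmatch(n, pat) for pat in include)}
    return [n for n in names
            if n in included
            and not any(fnmatch.fnmatch(n, pat) for pat in exclude)]

# Example: include everything, then exclude one volume by pattern.
volumes = ["/dev/evms/home", "/dev/evms/scratch", "/dev/evms/lvm2/group1/vol1"]
print(select_active(volumes, include=["*"], exclude=["*scratch*"]))
# → ['/dev/evms/home', '/dev/evms/lvm2/group1/vol1']
```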
7. LVM2 Volumes
EVMS has a new plugin for recognizing and managing the new volume format
introduced by the LVM2 tools. Just as with the existing LVM plugin, the
LVM2 plugin will discover your LVM2 volume groups as EVMS containers and
your logical volumes as EVMS regions. The regions will also automatically
be made into compatibility volumes the first time you run EVMS. An LVM2 LV
named /dev/group1/vol1 will have a region name of lvm2/group1/vol1 and a
compatibility volume name of /dev/evms/lvm2/group1/vol1.
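The naming convention can be shown with a short Python sketch; `evms_names` is a function invented for this illustration, not part of EVMS:

```python
def evms_names(lv_path):
    """Derive the EVMS region name and compatibility-volume name for an
    LVM2 logical volume, per the convention described above (sketch)."""
    # "/dev/group1/vol1" -> ("group1", "vol1")
    group, vol = lv_path.split("/dev/", 1)[1].split("/", 1)
    region = "lvm2/%s/%s" % (group, vol)
    return region, "/dev/evms/" + region

print(evms_names("/dev/group1/vol1"))
# → ('lvm2/group1/vol1', '/dev/evms/lvm2/group1/vol1')
```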
Some users may experience a problem with this new plugin not discovering
all of their LVM2 PVs. This is most likely due to a size-check inconsistency
between the LVM2 tools and the EVMS LVM2 plugin. If you notice that not all
of your LVM2 PVs are discovered by EVMS, please edit your EVMS config file
(/etc/evms.conf). There is a new "lvm2" section at the end, with an entry
called "device_size_prompt". Set this entry to "yes", and EVMS will then
prompt you when it finds an object that might be a PV, but isn't passing
the size-checks for that object. Answer the questions to proceed with
discovering your LVM2 containers and regions.
On the other hand, if you get these prompts during discovery, and you know
that the specified object is not an LVM2 PV, you can set the
"lvm2.device_size_prompt" entry in your EVMS config file to "no" to prevent
these discovery prompts in the future. You might be in this situation if
you have LVM2 groups/volumes on top of MD software-RAID devices.
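For example, to suppress these prompts permanently, the end of your config
file would contain (excerpt; the brace layout follows the style of the other
config-file sections):

```
# /etc/evms.conf (excerpt)
lvm2 {
        device_size_prompt = no
}
```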
The EVMS LVM2 plugin does not support LVM2 snapshots. EVMS provides its
own snapshot plugin which you can use to create snapshots of your LVM2
volumes or any other volume within EVMS. Please delete any LVM2 snapshots
you have before migrating your setup to EVMS. Any remaining LVM2 snapshot
volumes will be treated as simple regions.
The EVMS LVM2 plugin does not modify any of the files in /etc/lvm/ that are
maintained by the LVM2 tools. If you make modifications to your LVM2 groups
and/or volumes using EVMS and you later decide to use the LVM2 tools again,
you will need to run "vgscan" for the LVM2 tools to detect the changes you
made using EVMS.
The EVMS LVM2 plugin does not yet provide PE-move and PV-move capabilities.
This feature will be added in a future release.
8. Software-RAID
- RAID-0 and RAID-5 Resize
RAID-0 and RAID-5 regions can now be resized by adding new objects to the
region or removing objects from the region. The data in that region will be
"re-striped" to account for the change in number of child objects.
To prevent data corruption, this operation must be performed while the region
is unmounted and deactivated.
Be forewarned, the expand and shrink process can take a *long* time. During
the "re-striping" process, each chunk of data in the RAID region must be
moved from its current location to its new location. During initial tests,
it seems that a larger RAID chunk-size will decrease the time necessary to
complete an expand or shrink. Unfortunately, the chunk-size cannot be
changed after the RAID region is created. If you are creating new RAID
regions that you might want to expand or shrink in the future, you might
want to consider a larger chunk-size.
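To see why nearly every chunk must move, consider plain RAID-0 round-robin striping: logical chunk i lives at stripe i // N on disk i % N, so changing N changes both coordinates for almost every chunk. The following simplified Python sketch ignores chunk-size and per-disk data offsets, and `chunk_location` is invented for the illustration:

```python
def chunk_location(i, ndisks):
    """Round-robin RAID-0 placement: (disk index, stripe index) for
    logical chunk i striped across ndisks member objects."""
    return (i % ndisks, i // ndisks)

# Expanding a 3-disk region to 4 disks: nearly all of the first 1000
# chunks land at a different (disk, stripe) location and must be copied.
moved = sum(chunk_location(i, 3) != chunk_location(i, 4) for i in range(1000))
print(moved)
# → 997 (only chunks 0-2 stay put)
```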
IMPORTANT: Please have a suitable backup available before attempting a
RAID-0 or RAID-5 resize. If the expand or shrink process is interrupted
before it completes (e.g., the EVMS process gets killed, the machine
crashes, or a disk in the RAID region starts returning I/O errors), then
the state of that region cannot be ensured in all situations.
**DO NOT INTERRUPT THE RESIZE PROCESS BEFORE IT FINISHES**.
EVMS will *attempt* to recover following a problem during a RAID resize. The
MD plugin does keep track of the progress of the resize in the MD metadata.
Each time a data chunk is moved, the MD metadata is updated to reflect which
chunk is currently being processed. If EVMS or the machine crashes during a
resize, the next time you run EVMS the MD plugin will try to restore the
state of that region based on the latest metadata information. If an expand
was taking place, the region will be "rolled-back" to its state before the
expand. If a shrink was taking place, the shrink will continue from the
point it stopped. However, this recovery is not always enough to ensure
that the entire volume stack is in the correct state. If the RAID region is
made directly into a volume, then it will likely be restored to the correct
state. On the other hand, if the RAID region is a consumed-object in an
LVM container, or a child-object of another RAID region, then the metadata
for *those* plugins may not always be in the correct state. Thus, the
containers, objects, and volumes built on top of the RAID region may not
reflect the correct size.
ALSO IMPORTANT: Because RAID-resizes can be so long-running, there is the
potential for the EVMS engine log to grow very large if the logging level is
set too high. In one test, the log file grew to the maximum file size for
the underlying filesystem and caused the EVMS engine process to be killed.
When performing a RAID-resize, be sure to set the EVMS logging level to
"default" or lower. This can be done by editing the engine.debug_level entry
in the /etc/evms.conf file or by running the EVMS UI with the "-d" option.
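For example (excerpt; the section layout follows the style of the rest of
/etc/evms.conf):

```
# /etc/evms.conf (excerpt)
engine {
        debug_level = default
}
```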
- Disabling RAID Auto-detect
If you have existing Software-RAID devices that you would like to migrate
to using EVMS, please make sure you are not using RAID auto-detect. EVMS
requires volume discovery to be done in user-space. Having the kernel
auto-detect just the RAID arrays will cause some inconsistencies in the
RAID superblocks.
If you are using auto-detect, you will need to use fdisk to change the
partition types from 0xfd to 0x83.
- For further information about the EVMS MD plugin, please see the newly
rewritten MD appendix of the User-Guide at
http://evms.sourceforge.net/user_guide/#appxmdreg.
9. Snapshots
- Snapshot Activation
Due to the new selective-activation capabilities, there are some minor
changes to when snapshots are activated and deactivated. In previous versions
of EVMS, creating a snapshot object did not activate that snapshot. The
snapshot would only be activated when an EVMS volume was added on top of the
snapshot object. When the EVMS volume was removed, the snapshot would be
deactivated, even if the snapshot object wasn't deleted.
Under the new scheme, snapshot objects will always be activated once they are
created, regardless of whether there are EVMS volumes on top of the snapshot
objects. In order to keep a snapshot object from being activated, users
should add an appropriate entry to the activate.exclude entry in their EVMS
config file.
Any time that a snapshot object is inactive or deactivated while its origin
volume remains active, that snapshot will be forcibly reset. The next time
that snapshot is activated, it will be a new, fresh snapshot of its origin
volume. Not doing this would create an inconsistent snapshot, since the data
flowing through the origin volume would not be subject to the monitoring that
takes place when the snapshot is active.
- Snapshots of Software-RAID volumes.
Snapshots cannot be taken of compatibility or EVMS volumes that are made
directly from MD RAID-1 and RAID-5 regions or full disks. In order to take
a snapshot of a volume, the top object in that volume must be a Device-
Mapper-managed device. This is necessary because that object's mapping must
be modified to include hooks for copy-on-write to the snapshot device. Since
RAID objects are handled by the MD kernel driver, and full disks are managed
by the IDE or SCSI drivers, their "mappings" cannot change.
For now, the snapshot plugin will simply not give the option of taking
snapshots of these types of volumes. Future releases of EVMS will try to
get around this restriction.
- For further information about EVMS snapshots, please see the Snapshot section
of the User-Guide at http://evms.sourceforge.net/user_guide/#evmscreatesnap.
10. Expanding and Shrinking Containers
In previous versions of EVMS, the only method for resizing a container was
to add or remove entire objects from the container. As of EVMS 2.4.0, LVM1
and LVM2 containers also allow expanding and shrinking objects that are
already consumed by the container.
If a container's consumed-object is expandable, then the LVM plugins will
allow that object to expand, and then add the appropriate number of
physical-extents to fill in that new space. If a container's consumed-
object is shrinkable, and that object has physical-extents at the end of
the object which aren't allocated to LVM regions, then the LVM plugin will
allow that object to shrink by the number of unallocated PEs at the end
of the object.
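The shrink rule above amounts to counting unallocated physical-extents at the end of the consumed-object. A small Python sketch of that rule, where `max_shrink_extents` and the boolean extent map are invented for the illustration:

```python
def max_shrink_extents(extent_map):
    """Given a consumed-object's physical-extent map (True = allocated
    to an LVM region, False = free), return how many trailing extents
    the object could give up in a shrink."""
    count = 0
    for allocated in reversed(extent_map):
        if allocated:
            break
        count += 1
    return count

# Example: the last three PEs are unallocated, so the object may
# shrink by at most three extents.
print(max_shrink_extents([True, True, False, True, False, False, False]))
# → 3
```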
This new feature is especially useful in conjunction with the new RAID-0 and
RAID-5 resize capabilities. If an LVM container is created from a RAID-0 or
RAID-5 region, that RAID region can be expanded by adding a new disk, which
in turn will increase the amount of freespace available in the LVM
container. That new freespace can then be used to expand existing LVM
regions or create new LVM regions.