#!/usr/bin/env bash
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
SKIP_WITH_LVMPOLLD=1
. lib/inittest
which mkfs.ext4 || skip
aux have_integrity 1 5 0 || skip
# Avoid 4K ramdisk devices on older kernels
aux kernel_at_least 5 10 || export LVM_TEST_PREFER_BRD=0
mnt="mnt"
mkdir -p $mnt
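# Set up 5 test devices (64MB each)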
aux prepare_devs 5 64
# Use awk instead of the annoyingly long log output from printf
#printf "%0.sA" {1..16384} >> fileA
awk 'BEGIN { while (z++ < 16384) printf "A" }' > fileA
awk 'BEGIN { while (z++ < 16384) printf "B" }' > fileB
awk 'BEGIN { while (z++ < 16384) printf "C" }' > fileC
# generate random data
dd if=/dev/urandom of=randA bs=512K count=2
dd if=/dev/urandom of=randB bs=512K count=3
dd if=/dev/urandom of=randC bs=512K count=4
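# Create a VG from all five test devices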
_prepare_vg() {
	vgcreate $SHARED $vg "$dev1" "$dev2" "$dev3" "$dev4" "$dev5"
	pvs
}
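# Create a filesystem on the LV, mount it, and copy in the generated test data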
_add_new_data_to_mnt() {
	mkfs.ext4 "$DM_DEV_DIR/$vg/$lv1"
	mount "$DM_DEV_DIR/$vg/$lv1" $mnt
	# add original data
	cp randA $mnt
	cp randB $mnt
	cp randC $mnt
	mkdir $mnt/1
	cp fileA $mnt/1
	cp fileB $mnt/1
	cp fileC $mnt/1
	mkdir $mnt/2
	cp fileA $mnt/2
	cp fileB $mnt/2
	cp fileC $mnt/2
}
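# Write additional data to the mounted filesystem after a conversion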
_add_more_data_to_mnt() {
	mkdir $mnt/more
	cp fileA $mnt/more
	cp fileB $mnt/more
	cp fileC $mnt/more
	cp randA $mnt/more
	cp randB $mnt/more
	cp randC $mnt/more
}
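# Compare the files on the mounted filesystem against the originals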
_verify_data_on_mnt() {
	diff randA $mnt/randA
	diff randB $mnt/randB
	diff randC $mnt/randC
	diff fileA $mnt/1/fileA
	diff fileB $mnt/1/fileB
	diff fileC $mnt/1/fileC
	diff fileA $mnt/2/fileA
	diff fileB $mnt/2/fileB
	diff fileC $mnt/2/fileC
}
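# Activate the LV, verify its contents, clean up, then deactivate it again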
_verify_data_on_lv() {
	lvchange -ay $vg/$lv1
	mount "$DM_DEV_DIR/$vg/$lv1" $mnt
	_verify_data_on_mnt
	rm $mnt/randA
	rm $mnt/randB
	rm $mnt/randC
	rm -rf $mnt/1
	rm -rf $mnt/2
	umount $mnt
	lvchange -an $vg/$lv1
}
# lvrename
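# rename an integrity raid LV and check the data is intact under the new name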
_prepare_vg
lvcreate --type raid1 -m1 --raidintegrity y -n $lv1 -l 8 $vg
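# wait for the initial integrity recalculation to finish before using the LV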
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
_add_new_data_to_mnt
umount $mnt
lvrename $vg/$lv1 $vg/$lv2
mount "$DM_DEV_DIR/$vg/$lv2" $mnt
_verify_data_on_mnt
umount $mnt
lvchange -an $vg/$lv2
lvremove $vg/$lv2
vgremove -ff $vg
# lvconvert --replace
# an existing dev is replaced with another dev
# lv must be active
_prepare_vg
lvcreate --type raid1 -m1 --raidintegrity y -n $lv1 -l 8 $vg "$dev1" "$dev2"
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
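# journal is the default raid integrity mode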
lvs -o raidintegritymode $vg/$lv1 | grep journal
_add_new_data_to_mnt
lvconvert --replace "$dev1" $vg/$lv1 "$dev3"
lvs -a -o+devices $vg > out
cat out
grep "$dev2" out
grep "$dev3" out
not grep "$dev1" out
_add_more_data_to_mnt
_verify_data_on_mnt
umount $mnt
lvchange -an $vg/$lv1
_verify_data_on_lv
lvremove $vg/$lv1
vgremove -ff $vg
# lvconvert --replace
# same as the previous test, but with bitmap integrity mode
_prepare_vg
lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n $lv1 -l 8 $vg "$dev1" "$dev2"
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
lvs -o raidintegritymode $vg/$lv1 | grep bitmap
_add_new_data_to_mnt
lvconvert --replace "$dev1" $vg/$lv1 "$dev3"
lvs -a -o+devices $vg > out
cat out
grep "$dev2" out
grep "$dev3" out
not grep "$dev1" out
_add_more_data_to_mnt
_verify_data_on_mnt
umount $mnt
lvchange -an $vg/$lv1
_verify_data_on_lv
lvremove $vg/$lv1
vgremove -ff $vg
# lvconvert --repair
# While the LV is active, a device goes missing (with rimage, rmeta, imeta, iorig).
# lvconvert --repair should replace the missing dev with another dev
# (like lvconvert --replace does for a dev that's not missing).
_prepare_vg
lvcreate --type raid1 -m1 --raidintegrity y -n $lv1 -l 8 $vg "$dev1" "$dev2"
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
_add_new_data_to_mnt
aux disable_dev "$dev2"
lvs -a -o+devices $vg > out
cat out
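# lvs reports the missing device as unknown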
grep unknown out
lvconvert -vvvv -y --repair $vg/$lv1
lvs -a -o+devices $vg > out
cat out
not grep "$dev2" out
not grep unknown out
_add_more_data_to_mnt
_verify_data_on_mnt
umount $mnt
lvchange -an $vg/$lv1
lvremove $vg/$lv1
aux enable_dev "$dev2"
vgremove -ff $vg
# lvchange activationmode
# A device is missing (with rimage, rmeta, imeta, iorig) while the LV
# is already inactive. The LV cannot be activated, whether with
# activationmode degraded or partial, or in any other way,
# until integrity is removed.
_prepare_vg
lvcreate --type raid1 -m1 --raidintegrity y -n $lv1 -l 8 $vg "$dev1" "$dev2"
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
_add_new_data_to_mnt
umount $mnt
lvchange -an $vg/$lv1
aux disable_dev "$dev2"
lvs -a -o+devices $vg
not lvchange -ay $vg/$lv1
not lvchange -ay --activationmode degraded $vg/$lv1
not lvchange -ay --activationmode partial $vg/$lv1
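# once integrity is removed, degraded activation is possible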
lvconvert --raidintegrity n $vg/$lv1
lvchange -ay --activationmode degraded $vg/$lv1
mount "$DM_DEV_DIR/$vg/$lv1" $mnt
_add_more_data_to_mnt
_verify_data_on_mnt
umount $mnt
lvchange -an $vg/$lv1
lvremove $vg/$lv1
aux enable_dev "$dev2"
vgremove -ff $vg
# When disks for raid images are missing, vgreduce --removemissing --force
# should remove the missing images from the raid LV.
_prepare_vg
lvcreate --type raid1 -m2 --raidintegrity y -n $lv1 -l 8 $vg "$dev1" "$dev2" "$dev3"
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/${lv1}_rimage_2
aux wait_recalc $vg/$lv1
_add_new_data_to_mnt
aux disable_dev "$dev2"
lvs -a -o+devices,segtype $vg |tee out
# vgreduce substitutes an error target for the missing image
vgreduce --removemissing --force $vg
lvs -a -o+devices,segtype $vg |tee out
cat out
not grep "$dev2" out
grep error out
_add_more_data_to_mnt
_verify_data_on_mnt
umount $mnt
lvchange -an $vg/$lv1
lvremove $vg/$lv1
aux enable_dev "$dev2"
vgremove -ff $vg