=head1 NAME
guestfs-faq - libguestfs Frequently Asked Questions (FAQ)
=head1 ABOUT LIBGUESTFS
=head2 What is libguestfs?
libguestfs is a way to create, access and modify disk images. You can
look inside disk images, modify the files they contain, create them
from scratch, resize them, and much more. It’s especially useful from
scripts and programs and from the command line.
libguestfs is a C library (hence "lib-"), and a set of tools built on
this library, and bindings for many common programming languages.
For more information about what libguestfs can do read the
introduction on the home page (L<http://libguestfs.org>).
=head2 What are the virt tools?
Virt tools (website: L<http://virt-tools.org>) are a whole set of
virtualization management tools aimed at system administrators. Some
of them come from libguestfs, some from libvirt and many others from
other open source projects. So the virt tools are a superset of
libguestfs, although libguestfs provides many of the most important
tools. See
L<http://libguestfs.org> for a full list.
=head2 Does libguestfs need { libvirt / KVM / Red Hat / Fedora }?
No!
libvirt is not a requirement for libguestfs.
libguestfs works with any disk image, including ones created in
VMware, KVM, qemu, VirtualBox, Xen, and many other hypervisors, and
ones which you have created from scratch.
S<Red Hat> sponsors (ie. pays for) development of libguestfs and a
huge number of other open source projects. But you can run libguestfs
and the virt tools on many different Linux distros and Mac OS X. We
try our best to support all Linux distros as first-class citizens.
Some virt tools have been ported to Windows.
=head2 How does libguestfs compare to other tools?
=over 4
=item I<vs. kpartx>
Libguestfs takes a different approach from kpartx. kpartx needs root,
and mounts filesystems on the host kernel (which can be insecure - see
L<guestfs-security(1)>). Libguestfs isolates your host kernel from
guests, is more flexible, scriptable, supports LVM, doesn't require
root, is isolated from other processes, and cleans up after itself.
Libguestfs is more than just file access because you can use it to
create images from scratch.
=item I<vs. vdfuse>
vdfuse is like kpartx but for VirtualBox images. See the kpartx
comparison above. You can use libguestfs on the partition files
exposed by vdfuse, although it’s not necessary since libguestfs can
access VirtualBox images directly.
=item I<vs. qemu-nbd>
NBD (Network Block Device) is a protocol for exporting block devices
over the network. qemu-nbd is an NBD server which can handle any disk
format supported by qemu (eg. raw, qcow2). You can use libguestfs and
qemu-nbd or nbdkit together to access block devices over the network,
for example: C<guestfish -a nbd://remote>
=item I<vs. mounting filesystems in the host>
Mounting guest filesystems in the host is insecure and should be
avoided completely for untrusted guests. Use libguestfs to provide a
layer of protection against filesystem exploits. See also
L<guestmount(1)>.
=item I<vs. parted>
Libguestfs supports LVM. Libguestfs uses parted and provides most
parted features through the libguestfs API.
=back
=head1 GETTING HELP AND REPORTING BUGS
=head2 How do I know what version I'm using?
The simplest method is:
guestfish --version
Libguestfs development happens along an unstable branch and we
periodically create a stable branch which we backport stable patches
to. To find out more, read L<guestfs(3)/LIBGUESTFS VERSION NUMBERS>.
=head2 How can I get help?
=head2 What mailing lists or chat rooms are available?
If you are a S<Red Hat> customer using Red Hat Enterprise Linux, please
contact S<Red Hat Support>: L<http://redhat.com/support>
There is a mailing list, mainly for development, but users are also
welcome to ask questions about libguestfs and the virt tools:
L<https://lists.libguestfs.org>
You can also talk to us on IRC channel C<#guestfs> on Libera Chat.
We're not always around, so please stay in the channel after asking
your question and someone will get back to you.
For other virt tools (not ones supplied with libguestfs) there is a
general virt tools mailing list:
L<https://www.redhat.com/mailman/listinfo/virt-tools-list>
=head2 How do I report bugs?
Please use the following link to enter a bug in Bugzilla:
L<https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools>
Include as much detail as you can and a way to reproduce the problem.
Include the full output of L<libguestfs-test-tool(1)>.
=head1 COMMON PROBLEMS
See also L<guestfs(3)/LIBGUESTFS GOTCHAS> for some "gotchas" with
using the libguestfs API.
=head2 "Could not allocate dynamic translator buffer"
This obscure error is in fact an SELinux failure. You have to enable
the following SELinux boolean:
setsebool -P virt_use_execmem=on
For more information see
L<https://bugzilla.redhat.com/show_bug.cgi?id=806106>.
=head2 "child process died unexpectedly"
[This error message was changed in libguestfs 1.21.18 to something
more explanatory.]
This error indicates that qemu failed or the host kernel could not boot.
To get further information about the failure, you have to run:
libguestfs-test-tool
If, after using this, you still don’t understand the failure, contact
us (see previous section).
=head2 libguestfs: error: cannot find any suitable libguestfs supermin, fixed or old-style appliance on LIBGUESTFS_PATH
=head2 febootstrap-supermin-helper: ext2: parent directory not found
=head2 supermin-helper: ext2: parent directory not found
[This issue is fixed permanently in libguestfs E<ge> 1.26.]
If you see any of these errors on Debian/Ubuntu, you need to run the
following command:
sudo update-guestfs-appliance
=head2 "Permission denied" when running libguestfs as root
You get a permission denied error when opening a disk image, even
though you are running libguestfs as root.
This is caused by libvirt, and so only happens when using the libvirt
backend. When run as root, libvirt decides to run the qemu appliance
as user C<qemu.qemu>. Unfortunately this usually means that qemu
cannot open disk images, especially if those disk images are owned by
root, or are present in directories which require root access.
There is a bug open against libvirt to fix this:
L<https://bugzilla.redhat.com/show_bug.cgi?id=1045069>
You can work around this by one of the following methods:
=over 4
=item *
Switch to the direct backend:
export LIBGUESTFS_BACKEND=direct
=item *
Don’t run libguestfs as root.
=item *
Chmod the disk image and any parent directories so that the qemu user
can access them.
=item *
(Nasty) Edit F</etc/libvirt/qemu.conf> and change the C<user> setting.
=back
=head2 execl: /init: Permission denied
B<Note:> If this error happens when you are using a distro package of
libguestfs (eg. from Fedora, Debian, etc) then file a bug against the
distro. This is not an error which normal users should ever see if
the distro package has been prepared correctly.
This error happens during the supermin boot phase of starting the
appliance:
supermin: mounting new root on /root
supermin: chroot
execl: /init: Permission denied
supermin: debug: listing directory /
[...followed by a lot of debug output...]
This is a complicated bug related to L<supermin(1)> appliances. The
appliance is constructed by copying files like F</bin/bash> and many
libraries from the host. The file C<hostfiles> lists the files that
should be copied from the host into the appliance. If some files
don't exist on the host then they are missed out, but if these files
are needed in order to (eg) run F</bin/bash> then you'll see the above
error.
Diagnosing the problem involves studying the libraries needed by
F</bin/bash>, ie:
ldd /bin/bash
comparing that with C<hostfiles>, with the files actually available in
the host filesystem, and with the debug output printed in the error
message. Once you've worked out which file is missing, install that
file using your package manager and try again.
You should also check that files like F</init> and F</bin/bash> (in
the appliance) are executable. The debug output shows file modes.
=head1 DOWNLOADING, INSTALLING, COMPILING LIBGUESTFS
=begin html
<!-- old anchor for the next section -->
<a name="binaries"/>
=end html
=head2 Where can I get the latest binaries for ...?
=over 4
=item Fedora E<ge> 11
Use:
yum install '*guestf*'
For the latest builds, see:
L<http://koji.fedoraproject.org/koji/packageinfo?packageID=8391>
=item Red Hat Enterprise Linux
=over 4
=item RHEL 6
=item RHEL 7
It is part of the default install. On RHEL 6 and 7 (only) you have to
install C<libguestfs-winsupport> to get Windows guest support.
=back
=item Debian and Ubuntu
For libguestfs E<lt> 1.26, after installing libguestfs you need to do:
sudo update-guestfs-appliance
(This script has been removed on Debian/Ubuntu with libguestfs E<ge> 1.26
and instead the appliance is built on demand.)
On Ubuntu only:
sudo chmod 0644 /boot/vmlinuz*
You may need to add yourself to the C<kvm> group:
sudo usermod -a -G kvm yourlogin
=over 4
=item Debian Squeeze (6)
Hilko Bengen has built libguestfs in squeeze backports:
L<http://packages.debian.org/search?keywords=guestfs&searchon=names&section=all&suite=squeeze-backports>
=item Debian Wheezy and later (7+)
Hilko Bengen supports libguestfs on Debian. Official
Debian packages are available:
L<http://packages.debian.org/search?keywords=libguestfs>
=item Ubuntu
We don’t have a full time Ubuntu maintainer, and the packages supplied
by Canonical (which are outside our control) are sometimes broken.
Canonical decided to change the permissions on the kernel so that it's
not readable except by root. This is completely stupid, but they
won't change it
(L<https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725>).
So every user should do this:
sudo chmod 0644 /boot/vmlinuz*
=over 4
=item Ubuntu 12.04
libguestfs in this version of Ubuntu works, but you need to update
febootstrap and seabios to the latest versions.
You need febootstrap E<ge> 3.14-2 from:
L<http://packages.ubuntu.com/precise/febootstrap>
After installing or updating febootstrap, rebuild the appliance:
sudo update-guestfs-appliance
You need seabios E<ge> 0.6.2-0ubuntu2.1 or E<ge> 0.6.2-0ubuntu3 from:
L<http://packages.ubuntu.com/precise-updates/seabios>
or
L<http://packages.ubuntu.com/quantal/seabios>
Also you need to do (see above):
sudo chmod 0644 /boot/vmlinuz*
=back
=back
=item Gentoo
Libguestfs was added to Gentoo in 2012-07 by Andreis Vinogradovs
(libguestfs) and Maxim Koltsov (mainly hivex). Do:
emerge libguestfs
=item Mageia
Libguestfs was added to Mageia in 2013-08. Do:
urpmi libguestfs
=item SuSE
Libguestfs was added to SuSE in 2012 by Olaf Hering.
=item ArchLinux
Libguestfs was added to the AUR in 2010.
=item Other Linux distro
Compile from source (next section).
=item Other non-Linux distro
You'll have to compile from source, and port it.
=back
=head2 How can I compile and install libguestfs from source?
You can compile libguestfs from git or a source tarball. Read the
README file before starting.
Git: L<https://github.com/libguestfs/libguestfs>
Source tarballs: L<http://libguestfs.org/download>
Don’t run C<make install>! Use the C<./run> script instead (see README).
=head2 How can I compile and install libguestfs if my distro doesn't
have new enough qemu/supermin/kernel?
Libguestfs needs supermin 5. If supermin 5 hasn't been ported to your
distro, then see the question below.
First compile qemu, supermin and/or the kernel from source. You do
I<not> need to C<make install> them.
In the libguestfs source directory, create two files. C<localconfigure>
should contain:
source localenv
#export PATH=/tmp/qemu/x86_64-softmmu:$PATH
./configure --prefix /usr "$@"
Make C<localconfigure> executable.
C<localenv> should contain:
#export SUPERMIN=/tmp/supermin/src/supermin
#export LIBGUESTFS_HV=/tmp/qemu/x86_64-softmmu/qemu-system-x86_64
#export SUPERMIN_KERNEL=/tmp/linux/arch/x86/boot/bzImage
#export SUPERMIN_KERNEL_VERSION=4.XX.0
#export SUPERMIN_MODULES=/tmp/lib/modules/4.XX.0
Uncomment and adjust these lines as required to use the alternate
programs you have compiled.
Use C<./localconfigure> instead of C<./configure>, but otherwise you
compile libguestfs as usual.
Don’t run C<make install>! Use the C<./run> script instead (see README).
=head2 How can I compile and install libguestfs without supermin?
If supermin 5 supports your distro, but you don’t happen to have a new
enough supermin installed, then see the previous question.
If supermin 5 doesn't support your distro at all, you will need to use
the "fixed appliance method" where you use a pre-compiled binary
appliance. To build libguestfs without supermin, you need to pass
C<--disable-appliance --disable-daemon> to either F<./autogen.sh> or
F<./configure> (depending whether you are building respectively from
git or from tarballs). Then, when using libguestfs, you B<must> set
the C<LIBGUESTFS_PATH> environment variable to the directory of a
pre-compiled appliance, as also described in
L<guestfs-internals(1)/FIXED APPLIANCE>.
For pre-compiled appliances, see also:
L<http://libguestfs.org/download/binaries/appliance/>.
Patches to port supermin to more Linux distros are welcome.
=head2 How can I add support for sVirt?
B<Note for Fedora/RHEL users:> This configuration is the default
starting with S<Fedora 18> and S<RHEL 7>. If you find any problems,
please let us know or file a bug.
L<SVirt|http://selinuxproject.org/page/SVirt> provides a hardened
appliance using SELinux, making it very hard for a rogue disk image to
"escape" from the confinement of libguestfs and damage the host (it's
fair to say that even in standard libguestfs this would be hard, but
sVirt provides an extra layer of protection for the host and more
importantly protects virtual machines on the same host from each
other).
Currently to enable sVirt you will need libvirt E<ge> 0.10.2 (1.0 or
later preferred), libguestfs E<ge> 1.20, and the SELinux policies from
recent Fedora. If you are not running S<Fedora 18+>, you will need to
make changes to your SELinux policy - contact us on the mailing list.
Once you have the requirements, do:
./configure --with-default-backend=libvirt # libguestfs >= 1.22
./configure --with-default-attach-method=libvirt # libguestfs <= 1.20
make
Set SELinux to Enforcing mode, and sVirt should be used automatically.
All, or almost all, features of libguestfs should work under sVirt.
There is one known shortcoming: L<virt-rescue(1)> will not use libvirt
(hence sVirt), but falls back to direct launch of qemu. So you won't
currently get the benefit of sVirt protection when using virt-rescue.
You can check if sVirt is being used by enabling libvirtd logging (see
F</etc/libvirt/libvirtd.log>), killing and restarting libvirtd, and
checking the log files for S<"Setting SELinux context on ..."> messages.
In theory sVirt should support AppArmor, but we have not tried it. It
will almost certainly require patching libvirt and writing an AppArmor
policy.
=head2 Libguestfs has a really long list of dependencies!
The base library doesn't depend on very much, but there are three
causes of the long list of other dependencies:
=over 4
=item 1.
Libguestfs has to be able to read and edit many different disk
formats. For example, XFS support requires XFS tools.
=item 2.
There are language bindings for many different languages, all
requiring their own development tools. All language bindings (except
C) are optional.
=item 3.
There are some optional library features which can be disabled.
=back
Since libguestfs E<ge> 1.26 it is possible to split up the appliance
dependencies (item 1 in the list above) and thus have (eg)
C<libguestfs-xfs> as a separate subpackage for processing XFS disk
images. We encourage downstream packagers to start splitting the base
libguestfs package into smaller subpackages.
=head2 Errors during launch on Fedora E<ge> 18, RHEL E<ge> 7
In Fedora E<ge> 18 and RHEL E<ge> 7, libguestfs uses libvirt to manage
the appliance. In earlier versions (and in upstream) libguestfs runs qemu
directly:
 ┌──────────────────────────────────┐
 │            libguestfs            │
 ├────────────────┬─────────────────┤
 │ direct backend │ libvirt backend │
 └────────────────┴─────────────────┘
         ↓                  ↓
     ┌───────┐         ┌──────────┐
     │ qemu  │         │ libvirtd │
     └───────┘         └──────────┘
                            ↓
                       ┌───────┐
                       │ qemu  │
                       └───────┘
   upstream            Fedora 18+
   non-Fedora          RHEL 7+
   non-RHEL
The libvirt backend is more sophisticated, supporting SELinux/sVirt
(see above) and more. It is, however, more complex and so less
robust.
If you have permissions problems using the libvirt backend, you can
switch to the direct backend by setting this environment variable:
export LIBGUESTFS_BACKEND=direct
before running any libguestfs program or virt tool.
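If you are calling the API from your own C program, you can select the
backend on the handle instead of using the environment variable, via
C<guestfs_set_backend> (libguestfs E<ge> 1.22). A minimal sketch, where
F<disk.img> is a placeholder:
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   /* Equivalent to LIBGUESTFS_BACKEND=direct: bypass libvirt and run
    * the qemu appliance directly.
    */
   if (guestfs_set_backend (g, "direct") == -1)
     exit (EXIT_FAILURE);

   if (guestfs_add_drive (g, "disk.img") == -1 ||
       guestfs_launch (g) == -1)
     exit (EXIT_FAILURE);

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }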
=head2 How can I switch to a fixed / prebuilt appliance?
This may improve the stability and performance of libguestfs on Fedora
and RHEL.
Any time after installing libguestfs, run the following commands as
root:
mkdir -p /usr/local/lib/guestfs/appliance
libguestfs-make-fixed-appliance /usr/local/lib/guestfs/appliance
ls -l /usr/local/lib/guestfs/appliance
Now set the following environment variable before using libguestfs or
any virt tool:
export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance
Of course you can change the path to any directory you want. You can
share the appliance across machines that have the same architecture
(eg. all x86-64), but note that libvirt will prevent you from sharing
the appliance across NFS because of permissions problems (so either
switch to the direct backend or don't use NFS).
=head2 How can I speed up libguestfs builds?
By far the most important thing you can do is to install and properly
configure Squid. Note that the default configuration that ships with
Squid is rubbish, so configuring it is not optional.
A very good place to start with Squid configuration is here:
L<https://fedoraproject.org/wiki/Extras/MockTricks#Using_Squid_to_Speed_Up_Mock_package_downloads>
Make sure Squid is running, and that the environment variables
C<$http_proxy> and C<$ftp_proxy> are pointing to it.
With Squid running and correctly configured, appliance builds should
be reduced to a few minutes.
=head3 How can I speed up libguestfs builds (Debian)?
Hilko Bengen suggests using "approx" which is a Debian archive proxy
(L<http://packages.debian.org/approx>). This tool is documented on
Debian in the approx(8) manual page.
=head1 SPEED, DISK SPACE USED BY LIBGUESTFS
B<Note:> Most of the information in this section has moved:
L<guestfs-performance(1)>.
=head2 Upload or write seem very slow.
If the underlying disk is not fully allocated (eg. sparse raw or
qcow2) then writes can be slow because the host operating system has
to do costly disk allocations while you are writing. The solution is
to use a fully allocated format instead, ie. non-sparse raw, or qcow2
with the C<preallocation=metadata> option.
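If you are creating the disk from the API anyway, you can ask for
metadata preallocation at creation time. A sketch using
C<guestfs_disk_create> (libguestfs E<ge> 1.26); the filename and size
are placeholders:
 #include <stdint.h>
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   /* 10 GB qcow2 with preallocated metadata, so later writes don't
    * force the host to allocate qcow2 clusters on the fly.
    */
   if (guestfs_disk_create (g, "disk.qcow2", "qcow2",
                            INT64_C (10) * 1024 * 1024 * 1024,
                            GUESTFS_DISK_CREATE_PREALLOCATION, "metadata",
                            -1) == -1)
     exit (EXIT_FAILURE);

   guestfs_close (g);
   return 0;
 }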
=head2 Libguestfs uses too much disk space!
libguestfs caches a large-ish appliance in:
/var/tmp/.guestfs-<UID>
If the environment variable C<TMPDIR> is defined, then
F<$TMPDIR/.guestfs-E<lt>UIDE<gt>> is used instead.
It is safe to delete this directory when you are not using libguestfs.
=head2 virt-sparsify seems to make the image grow to the
full size of the virtual disk
If the input to L<virt-sparsify(1)> is raw, then the output will be
raw sparse. Make sure you are measuring the output with a tool which
understands sparseness such as C<du -sh>. It can make a huge difference:
$ ls -lh test1.img
-rw-rw-r--. 1 rjones rjones 100M Aug 8 08:08 test1.img
$ du -sh test1.img
3.6M test1.img
(Compare the apparent size B<100M> vs the actual size B<3.6M>)
If all this confuses you, use a non-sparse output format by specifying
the I<--convert> option, eg:
virt-sparsify --convert qcow2 disk.raw disk.qcow2
=head2 Why doesn't virt-resize work on the disk image in-place?
Resizing a disk image is very tricky -- especially making sure that
you don't lose data or break the bootloader. The current method
effectively creates a new disk image and copies the data plus
bootloader from the old one. If something goes wrong, you can always
go back to the original.
If we were to make virt-resize work in-place then there would have to
be limitations: for example, you wouldn't be allowed to move existing
partitions (because moving data across the same disk is most likely to
corrupt data in the event of a power failure or crash), and LVM would
be very difficult to support (because of the almost arbitrary mapping
between LV content and underlying disk blocks).
Another method we have considered is to place a snapshot over the
original disk image, so that the original data is untouched and only
differences are recorded in the snapshot. You can do this today using
C<qemu-img create> + C<virt-resize>, but qemu currently isn't smart
enough to recognize when the same block is written back to the
snapshot as already exists in the backing disk, so you will find that
this doesn't save you any space or time.
In summary, this is a hard problem, and what we have now mostly works
so we are reluctant to change it.
=head2 Why doesn't virt-sparsify work on the disk image in-place?
In libguestfs E<ge> 1.26, virt-sparsify can now work on disk images in
place. Use:
virt-sparsify --in-place disk.img
But first you should read L<virt-sparsify(1)/IN-PLACE SPARSIFICATION>.
=head1 PROBLEMS OPENING DISK IMAGES
=head2 Remote libvirt guests cannot be opened.
Opening remote libvirt guests is not supported at this time. For
example this won't work:
guestfish -c qemu://remote/system -d Guest
To open remote disks you have to export them somehow, then connect to
the export. For example if you decided to use NBD:
remote$ qemu-nbd -t -p 10809 guest.img
local$ guestfish -a nbd://remote:10809 -i
Other possibilities include ssh (if qemu is recent enough), NFS or
iSCSI. See L<guestfs(3)/REMOTE STORAGE>.
=head2 How can I open this strange disk source?
You have a disk image located inside another system that requires
access via a library / HTTP / REST / proprietary API, or is compressed
or archived in some way. (One example would be remote access to
OpenStack glance images without actually downloading them.)
We have a sister project called nbdkit
(L<https://github.com/libguestfs/nbdkit>). This project lets you turn
any disk source into an NBD server. Libguestfs can access NBD servers
directly, eg:
guestfish -a nbd://remote
nbdkit is liberally licensed, so you can link it to or include it in
proprietary libraries and code. It also has a simple, stable plugin
API so you can easily write plugins against the API which will
continue to work in future.
=head2 Error opening VMDK disks: "uses a vmdk feature which is not supported by this qemu version: VMDK version 3"
Qemu (and hence libguestfs) only supports certain VMDK disk images.
Others won't work, giving this or similar errors.
Ideally someone would fix qemu to support the latest VMDK features,
but in the meantime you have three options:
=over 4
=item 1.
If the guest is hosted on a live, reachable ESX server, then locate
and download the disk image called F<I<somename>-flat.vmdk>. Despite
the name, this is a raw disk image, and can be opened by anything.
If you have a recent enough version of qemu and libguestfs, then you
may be able to access this disk image remotely using either HTTPS or
ssh. See L<guestfs(3)/REMOTE STORAGE>.
=item 2.
Use VMware’s proprietary vdiskmanager tool to convert the image to raw
format.
=item 3.
Use nbdkit with the proprietary VDDK plugin to live export the disk
image as an NBD source. This should allow you to read and write the
VMDK file.
=back
=head2 UFS disks (as used by BSD) cannot be opened.
The UFS filesystem format has many variants, and these are not
self-identifying. The Linux kernel has to be told which variant of
UFS it has to use, which libguestfs cannot know.
You have to pass the right C<ufstype> mount option when mounting these
filesystems.
See L<https://www.kernel.org/doc/Documentation/filesystems/ufs.txt>
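For example, with the C API you would pass the option string to
C<guestfs_mount_options>. This is only a sketch: the disk name is a
placeholder, and C<ufstype=ufs2> (FreeBSD UFS2) may not be the right
variant for your disk; check the kernel documentation linked above.
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   if (guestfs_add_drive_opts (g, "bsd.img",
                               GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                               -1) == -1 ||
       guestfs_launch (g) == -1)
     exit (EXIT_FAILURE);

   /* Tell the appliance kernel which UFS variant this filesystem uses.
    * "ufstype=ufs2" suits FreeBSD UFS2; other variants need other values.
    */
   if (guestfs_mount_options (g, "ufstype=ufs2,ro", "/dev/sda1", "/") == -1)
     exit (EXIT_FAILURE);

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }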
=head2 Windows ReFS
Windows ReFS is Microsoft’s ZFS/Btrfs copy. This filesystem has not
yet been reverse engineered and implemented in the Linux kernel, and
therefore libguestfs doesn't support it. At the moment it seems to be
very rare "in the wild".
=head2 Non-ASCII characters don’t appear on VFAT filesystems.
Typical symptoms of this problem:
=over 4
=item *
You get an error when you create a file where the filename contains
non-ASCII characters, particularly non 8-bit characters from Asian
languages (Chinese, Japanese, etc). The filesystem is VFAT.
=item *
When you list a directory from a VFAT filesystem, filenames appear as
question marks.
=back
This is a design flaw of the GNU/Linux system.
VFAT stores long filenames as UTF-16 characters. When opening or
returning filenames, the Linux kernel has to translate these to some
form of 8 bit string. UTF-8 would be the obvious choice, except for
Linux users who persist in using non-UTF-8 locales (the user’s locale
is not known to the kernel because it’s a function of libc).
Therefore you have to tell the kernel what translation you want done
when you mount the filesystem. The two methods are the C<iocharset>
parameter (which is not relevant to libguestfs) and the C<utf8> flag.
So to use a VFAT filesystem you must add the C<utf8> flag when
mounting. From guestfish, use:
><fs> mount-options utf8 /dev/sda1 /
or on the guestfish command line:
guestfish [...] -m /dev/sda1:/:utf8
or from the API:
guestfs_mount_options (g, "utf8", "/dev/sda1", "/");
The kernel will then translate filenames to and from UTF-8 strings.
We considered adding this mount option transparently, but
unfortunately there are several problems with doing that:
=over 4
=item *
On some Linux systems, the C<utf8> mount option doesn't work. We
don't precisely understand what systems or why, but this was reliably
reported by one user.
=item *
It would prevent you from using the C<iocharset> parameter because it
is incompatible with C<utf8>. It is probably not a good idea to use
this parameter, but we don't want to prevent it.
=back
=head2 Non-ASCII characters appear as underscore (_) on ISO9660 filesystems.
The filesystem was not prepared correctly with mkisofs or genisoimage.
Make sure the filesystem was created using Joliet and/or Rock Ridge
extensions. libguestfs does not require any special mount options to
handle the filesystem.
=head2 Cannot open Windows guests which use NTFS.
You see errors like:
mount: unknown filesystem type 'ntfs'
On Red Hat Enterprise Linux or CentOS E<lt> 7.2, you have to install
the
L<libguestfs-winsupport|https://people.redhat.com/~rjones/libguestfs-winsupport/>
package. In RHEL E<ge> 7.2, C<libguestfs-winsupport> is part of the
base RHEL distribution, but see the next question.
=head2 "mount: unsupported filesystem type" with NTFS in RHEL E<ge> 7.2
In RHEL 7.2 we were able to add C<libguestfs-winsupport> to the base
RHEL distribution, but we had to disable the ability to use it for
opening and editing filesystems. It is only supported when used with
L<virt-v2v(1)>. If you try to use L<guestfish(1)> or L<guestmount(1)>
or some other programs on an NTFS filesystem, you will see the error:
mount: unsupported filesystem type
This is not a supported configuration, and it will not be made to work
in RHEL. Don't bother to open a bug about it, as it will be
immediately C<CLOSED -E<gt> WONTFIX>.
You may
L<compile your own libguestfs removing this restriction|https://www.redhat.com/archives/libguestfs/2016-February/msg00145.html>,
but that won't be endorsed or supported by Red Hat.
=head2 Cannot open or inspect RHEL 7 guests.
=head2 Cannot open Linux guests which use XFS.
RHEL 7 guests, and any other guests that use XFS, can be opened by
libguestfs, but you have to install the C<libguestfs-xfs> package.
=head1 USING LIBGUESTFS IN YOUR OWN PROGRAMS
=head2 The API has hundreds of methods, where do I start?
We recommend you start by reading the API overview:
L<guestfs(3)/API OVERVIEW>.
Although the API overview covers the C API, it is still worth reading
even if you are going to use another programming language, because the
API is the same, just with simple logical changes to the names of the
calls:
 C                  guestfs_ln_sf (g, target, linkname);
 Python             g.ln_sf (target, linkname);
 OCaml              g#ln_sf target linkname;
 Perl               $g->ln_sf (target, linkname);
 Shell (guestfish)  ln-sf target linkname
 PHP                guestfs_ln_sf ($g, $target, $linkname);
Once you're familiar with the API overview, you should look at this
list of starting points for other language bindings:
L<guestfs(3)/USING LIBGUESTFS WITH OTHER PROGRAMMING LANGUAGES>.
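As a concrete starting point, here is a minimal sketch of a C program
performing the C<ln_sf> call from the comparison above. The disk name,
partition and paths are placeholders:
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   /* Add the disk read-write, boot the appliance and mount the
    * filesystem.  Errors are printed to stderr by default.
    */
   if (guestfs_add_drive (g, "disk.img") == -1 ||
       guestfs_launch (g) == -1 ||
       guestfs_mount (g, "/dev/sda1", "/") == -1)
     exit (EXIT_FAILURE);

   /* The same ln_sf call as in the comparison above. */
   if (guestfs_ln_sf (g, "/target", "/linkname") == -1)
     exit (EXIT_FAILURE);

   guestfs_shutdown (g);   /* flushes writes back to disk.img */
   guestfs_close (g);
   return 0;
 }
Compile and link it with C<-lguestfs>.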
=head2 Can I use libguestfs in my proprietary / closed source /
commercial program?
In general, yes. However this is not legal advice - read the license
that comes with libguestfs, and if you have specific questions contact
a lawyer.
In the source tree the license is in the file C<COPYING.LIB> (LGPLv2+
for the library and bindings) and C<COPYING> (GPLv2+ for the
standalone programs).
=begin html
<!-- old anchor for the next section -->
<a name="debug"/>
=end html
=head1 DEBUGGING LIBGUESTFS
=head2 Help, it’s not working!
If no libguestfs program seems to work at all, run the program below
and paste the B<complete, unedited> output into an email to
C<guestfs@lists.libguestfs.org>:
libguestfs-test-tool
If a particular operation fails, supply all the information in this
checklist, in an email to C<guestfs@lists.libguestfs.org>:
=over 4
=item 1.
What are you trying to do?
=item 2.
What exact command(s) did you run?
=item 3.
What was the precise error or output of these commands?
=item 4.
Enable debugging, run the commands again, and capture the B<complete>
output. B<Do not edit the output.>
export LIBGUESTFS_DEBUG=1
export LIBGUESTFS_TRACE=1
=item 5.
Include the version of libguestfs, the operating system version, and
how you installed libguestfs (eg. from source, C<yum install>, etc.)
=back
=head2 How do I debug when using any libguestfs program or tool
(eg. virt-customize or virt-df)?
There are two C<LIBGUESTFS_*> environment variables you can set in
order to get more information from libguestfs.
=over 4
=item C<LIBGUESTFS_TRACE>
Set this to 1 and libguestfs will print out each command / API call in
a format which is similar to guestfish commands.
=item C<LIBGUESTFS_DEBUG>
Set this to 1 in order to enable massive amounts of debug messages.
If you think there is some problem inside the libguestfs appliance,
then you should use this option.
=back
To set these from the shell, do this before running the program:
export LIBGUESTFS_TRACE=1
export LIBGUESTFS_DEBUG=1
For csh/tcsh the equivalent commands would be:
setenv LIBGUESTFS_TRACE 1
setenv LIBGUESTFS_DEBUG 1
For further information, see: L<guestfs(3)/ENVIRONMENT VARIABLES>.
=head2 How do I debug when using guestfish?
You can use the same environment variables above. Alternatively use
the guestfish options I<-x> (to trace commands) or I<-v> (to get the full
debug output), or both.
For further information, see: L<guestfish(1)>.
=head2 How do I debug when using the API?
Call L<guestfs(3)/guestfs_set_trace> to enable command traces, and/or
L<guestfs(3)/guestfs_set_verbose> to enable debug messages.
For best results, call these functions as early as possible, just
after creating the guestfs handle if you can, and definitely before
calling launch.
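For example, in C the calls look like this (a minimal sketch; only the
placement before C<guestfs_launch> matters):
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   /* Enable tracing and debug messages as early as possible, before
    * launch, so that the appliance boot messages are covered too.
    */
   guestfs_set_trace (g, 1);
   guestfs_set_verbose (g, 1);

   if (guestfs_add_drive (g, "disk.img") == -1 ||
       guestfs_launch (g) == -1)
     exit (EXIT_FAILURE);

   /* ... API calls made here are traced on stderr ... */

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }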
=head2 How do I capture debug output and put it into my logging system?
Use the event API. For examples, see:
L<guestfs(3)/SETTING CALLBACKS TO HANDLE EVENTS> and the
F<examples/debug-logging.c> program in the libguestfs sources.
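The general shape of such a callback is sketched below; it simply
forwards appliance and library messages to stderr, and you would
replace the C<fprintf> with a call into your own logging system. The
event bitmask shown is just one reasonable choice:
 #include <stdio.h>
 #include <stdlib.h>
 #include <inttypes.h>
 #include <guestfs.h>

 /* Called for every message selected by the event bitmask below. */
 static void
 log_callback (guestfs_h *g, void *opaque, uint64_t event,
               int event_handle, int flags,
               const char *buf, size_t buf_len,
               const uint64_t *array, size_t array_len)
 {
   /* Replace this with a call into your logging system. */
   fprintf (stderr, "[event 0x%" PRIx64 "] %.*s",
            event, (int) buf_len, buf);
 }

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   guestfs_set_verbose (g, 1);   /* generate the debug messages */

   if (guestfs_set_event_callback (g, log_callback,
                                   GUESTFS_EVENT_APPLIANCE |
                                   GUESTFS_EVENT_LIBRARY |
                                   GUESTFS_EVENT_WARNING,
                                   0, NULL) == -1)
     exit (EXIT_FAILURE);

   if (guestfs_add_drive (g, "disk.img") == -1 ||
       guestfs_launch (g) == -1)
     exit (EXIT_FAILURE);

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }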
=head2 Digging deeper into the appliance boot process.
Enable debugging and then read this documentation on the appliance
boot process: L<guestfs-internals(1)>.
=head2 libguestfs hangs or fails during run/launch.
Enable debugging and look at the full output. If you cannot work out
what is going on, file a bug report, including the I<complete> output
of L<libguestfs-test-tool(1)>.
=head2 Debugging libvirt
If you are using the libvirt backend, and libvirt is failing, then you
can enable debugging by editing F</etc/libvirt/libvirtd.conf>.
If you are running as non-root, then you have to edit a different
file. Create F<~/.config/libvirt/libvirtd.conf> containing:
log_level=1
log_outputs="1:file:/tmp/libvirtd.log"
Kill any session (non-root) libvirtd that is running, and next time
you run the libguestfs command, you should see a large amount of
useful debugging information from libvirtd in F</tmp/libvirtd.log>.
=head2 Broken kernel, or trying a different kernel.
You can choose a different kernel for the appliance by setting some
L<supermin environment variables|supermin(8)/ENVIRONMENT VARIABLES>:
export SUPERMIN_KERNEL_VERSION=4.8.0-1.fc25.x86_64
export SUPERMIN_KERNEL=/boot/vmlinuz-$SUPERMIN_KERNEL_VERSION
export SUPERMIN_MODULES=/lib/modules/$SUPERMIN_KERNEL_VERSION
rm -rf /var/tmp/.guestfs-*
libguestfs-test-tool
=head2 Broken qemu, or trying a different qemu.
You can choose a different qemu by setting the hypervisor
L<environment variable|guestfs(3)/ENVIRONMENT VARIABLES>:
export LIBGUESTFS_HV=/path/to/qemu-system-x86_64
libguestfs-test-tool
=head1 DESIGN/INTERNALS OF LIBGUESTFS
See also L<guestfs-internals(1)>.
=head2 Why don’t you do everything through the FUSE / filesystem
interface?
We offer a command called L<guestmount(1)> which lets you mount guest
filesystems on the host. This is implemented as a FUSE module. Why
don't we just implement the whole of libguestfs using this mechanism,
instead of having the large and rather complicated API?
The reasons are twofold. Firstly, libguestfs offers API calls for
doing things like creating and deleting partitions and logical
volumes, which don't fit into a filesystem model very easily. Or
rather, you could fit them in: for example, creating a partition could
be mapped to C<mkdir /fs/hda1> but then you'd have to specify some
method to choose the size of the partition (maybe C<echo 100M E<gt>
/fs/hda1/.size>), and the partition type, start and end sectors etc.,
but once you've done that the filesystem-based API starts to look more
complicated than the call-based API we currently have.
The second reason is for efficiency. FUSE itself is reasonably
efficient, but it does make lots of small, independent calls into the
FUSE module. In guestmount these have to be translated into messages
to the libguestfs appliance which has a big overhead (in time and
round trips). For example, reading a file in 64 KB chunks is
inefficient because each chunk would turn into a single round trip.
In the libguestfs API it is much more efficient to download an entire
file or directory through one of the streaming calls like
C<guestfs_download> or C<guestfs_tar_out>.
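As an illustration, a whole file is fetched with one streamed call;
this is a sketch with placeholder disk, partition and paths:
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   if (guestfs_add_drive_opts (g, "guest.img",
                               GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                               -1) == -1 ||
       guestfs_launch (g) == -1 ||
       guestfs_mount_ro (g, "/dev/sda1", "/") == -1)
     exit (EXIT_FAILURE);

   /* One streamed round trip replaces many small FUSE-style reads. */
   if (guestfs_download (g, "/var/log/messages", "messages.copy") == -1)
     exit (EXIT_FAILURE);

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }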
=head2 Why don’t you do everything through GVFS?
The problems are similar to the problems with FUSE.
GVFS is a better abstraction than POSIX/FUSE. There is an FTP backend
for GVFS, which is encouraging because FTP is conceptually similar to
the libguestfs API. However the GVFS FTP backend makes multiple
simultaneous connections in order to keep interactivity, which we
can't easily do with libguestfs.
=begin html
<!-- old anchor for the next section -->
<a name="backup"/>
=end html
=head2 Why can I write to the disk, even though I added it read-only?
=head2 Why does C<--ro> appear to have no effect?
When you add a disk read-only, libguestfs places a writable overlay on
top of the underlying disk. Writes go into this overlay, and are
discarded when the handle is closed (or C<guestfish> etc. exits).
There are two reasons for doing it this way: Firstly read-only disks
aren't possible in many cases (eg. IDE simply doesn't support them, so
you couldn't have an IDE-emulated read-only disk, although IDE
emulation is rarely used in real libguestfs installations).
Secondly and more importantly, even if read-only disks were possible,
you wouldn't want them. Mounting any filesystem that has a journal,
even C<mount -o ro>, causes writes to the filesystem because the
journal has to be replayed and metadata updated. If the disk was
truly read-only, you wouldn't be able to mount a dirty filesystem.
To make it usable, we create the overlay as a place to temporarily
store these writes, and then we discard it afterwards. This ensures
that the underlying disk is always untouched.
Note also that there is a regression test for this when building
libguestfs (in C<tests/qemu>). This is one reason why it’s important
for packagers to run the test suite.
=head2 Does C<--ro> make all disks read-only?
I<No!> The C<--ro> option only affects disks added on the command
line, ie. using C<-a> and C<-d> options.
In guestfish, if you use the C<add> command, then the disk is added
read-write (unless you specify the C<readonly:true> flag explicitly
with the command).
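From the C API, the equivalent of that flag is the optional C<readonly>
argument to C<guestfs_add_drive_opts>. A sketch, with a placeholder
filename:
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   /* Ask for the read-only (writable overlay) mode explicitly.
    * Plain guestfs_add_drive adds the disk read-write.
    */
   if (guestfs_add_drive_opts (g, "guest.img",
                               GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                               -1) == -1 ||
       guestfs_launch (g) == -1)
     exit (EXIT_FAILURE);

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }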
=head2 Can I use C<guestfish --ro> as a way to backup my virtual machines?
Usually this is I<not> a good idea. The question is answered in more
detail in this mailing list posting:
L<https://www.redhat.com/archives/libguestfs/2010-August/msg00024.html>
See also the next question.
=head2 Why can’t I run fsck on a live filesystem using C<guestfish --ro>?
This command will usually I<not> work:
guestfish --ro -a /dev/vg/my_root_fs run : fsck /dev/sda
The reason for this is that qemu creates a snapshot over the original
filesystem, but it doesn't create a strict point-in-time snapshot.
Blocks of data on the underlying filesystem are read by qemu at
different times as the fsck operation progresses, with host writes in
between. The result is that fsck sees massive corruption (imaginary,
not real!) and fails.
What you have to do is to create a point-in-time snapshot. If it’s a
logical volume, use an LVM2 snapshot. If the filesystem is located
inside something like a btrfs/ZFS file, use a btrfs/ZFS snapshot, and
then run the fsck on the snapshot. In practice you don't need to use
libguestfs for this -- just run F</sbin/fsck> directly.
Creating point-in-time snapshots of host devices and files is outside
the scope of libguestfs, although libguestfs can operate on them once
they are created.
=head2 What’s the difference between guestfish and virt-rescue?
A lot of people are confused by the two superficially similar tools we
provide:
$ guestfish --ro -a guest.img
><fs> run
><fs> fsck /dev/sda1
$ virt-rescue --ro guest.img
><rescue> /sbin/fsck /dev/sda1
And the related question which then arises is why you can’t type in
full shell commands with all the --options in guestfish (but you can
in L<virt-rescue(1)>).
L<guestfish(1)> is a program providing structured access to the
L<guestfs(3)> API. It happens to be a nice interactive shell too, but
its primary purpose is structured access from shell scripts. Think of
it more like a language binding, like Python and other bindings, but
for shell. The key differentiating factor of guestfish (and the
libguestfs API in general) is the ability to automate changes.
L<virt-rescue(1)> is a free-for-all freeform way to boot the
libguestfs appliance and make arbitrary changes to your VM. It’s not
structured, you can't automate it, but for making quick ad-hoc fixes
to your guests, it can be quite useful.
But libguestfs also has a "backdoor" into the appliance allowing you
to send arbitrary shell commands. It’s not as flexible as
virt-rescue, because you can't interact with the shell commands, but
here it is anyway:
><fs> debug sh "cmd arg1 arg2 ..."
Note that you should B<not> rely on this. It could be removed or
changed in future. If your program needs some operation, please add it
to the libguestfs API instead.
=head2 What’s the deal with C<guestfish -i>?
=head2 Why does virt-cat only work on a real VM image, but virt-df
works on any disk image?
=head2 What does "no root device found in this operating system image"
mean?
These questions are all related at a fundamental level which may not
be immediately obvious.
At the L<guestfs(3)> API level, a "disk image" is just a pile of
partitions and filesystems.
In contrast, when the virtual machine boots, it mounts those
filesystems into a consistent hierarchy such as:
/ (/dev/sda2)
│
├── /boot (/dev/sda1)
│
├── /home (/dev/vg_external/Homes)
│
├── /usr (/dev/vg_os/lv_usr)
│
└── /var (/dev/vg_os/lv_var)
(or drive letters on Windows).
The API first of all sees the disk image at the "pile of filesystems"
level. But it also has a way to inspect the disk image to see if it
contains an operating system, and how the disks are mounted when the
operating system boots: L<guestfs(3)/INSPECTION>.
Users expect some tools (like L<virt-cat(1)>) to work with VM paths:
virt-cat fedora.img /var/log/messages
How does virt-cat know that F</var> is a separate partition? The
trick is that virt-cat performs inspection on the disk image, and uses
that to translate the path correctly.
Some tools (including L<virt-cat(1)>, L<virt-edit(1)>, L<virt-ls(1)>)
use inspection to map VM paths. Other tools, such as L<virt-df(1)>
and L<virt-filesystems(1)> operate entirely at the raw "big pile of
filesystems" level of the libguestfs API, and don't use inspection.
L<guestfish(1)> is in an interesting middle ground. If you use the
I<-a> and I<-m> command line options, then you have to tell guestfish
exactly how to add disk images and where to mount partitions. This is
the raw API level.
If you use the I<-i> option, libguestfs performs inspection and mounts
the filesystems for you.
The error C<no root device found in this operating system image> is
related to this. It means inspection was unable to locate an
operating system within the disk image you gave it. You might see
this from programs like virt-cat if you try to run them on something
which is just a disk image, not a virtual machine disk image.
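If you want to do in your own program what C<guestfish -i> does, the
inspection calls look roughly like the following sketch, which only
prints what it finds (the disk name is a placeholder, and error
handling is minimal):
 #include <stdio.h>
 #include <stdlib.h>
 #include <guestfs.h>

 int
 main (void)
 {
   size_t i, j;
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   if (guestfs_add_drive_opts (g, "fedora.img",
                               GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                               -1) == -1 ||
       guestfs_launch (g) == -1)
     exit (EXIT_FAILURE);

   /* One root device is returned per operating system found.  An
    * empty list is what leads to "no root device found ...".
    */
   char **roots = guestfs_inspect_os (g);
   if (roots == NULL)
     exit (EXIT_FAILURE);

   for (i = 0; roots[i] != NULL; ++i) {
     printf ("root: %s\n", roots[i]);

     /* Flattened list of (mountpoint, device) pairs. */
     char **mps = guestfs_inspect_get_mountpoints (g, roots[i]);
     if (mps == NULL)
       exit (EXIT_FAILURE);
     for (j = 0; mps[j] != NULL; j += 2)
       printf ("  %s -> %s\n", mps[j], mps[j+1]);

     for (j = 0; mps[j] != NULL; ++j)
       free (mps[j]);
     free (mps);
     free (roots[i]);
   }
   free (roots);

   guestfs_shutdown (g);
   guestfs_close (g);
   return 0;
 }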
=head2 What do these C<debug*> and C<internal-*> functions do?
There are some functions which are used for debugging and
internal purposes which are I<not> part of the stable API.
The C<debug*> (or C<guestfs_debug*>) functions, primarily
L<guestfs(3)/guestfs_debug> and a handful of others, are used for
debugging libguestfs. Although they are not part of the stable API
and thus may change or be removed at any time, some programs may want
to call these while waiting for features to be added to libguestfs.
The C<internal-*> (or C<guestfs_internal_*>) functions are purely to
be used by libguestfs itself. There is no reason for programs to call
them, and programs should not try to use them. Using them can cause
bad things to happen, and in any case they are not part of the
documented stable API.
=head1 DEVELOPERS
=head2 Where do I send patches?
Please send patches to the libguestfs mailing list
L<https://lists.libguestfs.org>. You don't have
to be subscribed, but there will be a delay until your posting is
manually approved.
B<Please don’t use github pull requests - they will be ignored>. The
reasons are (a) we want to discuss and dissect patches on the mailing
list, and (b) github pull requests turn into merge commits but we
prefer to have a linear history.
=head2 How do I propose a feature?
Large new features that you intend to contribute should be discussed
on the mailing list first
(L<https://lists.libguestfs.org>). This avoids
disappointment and wasted work if we don't think the feature would fit
into the libguestfs project.
If you want to suggest a useful feature but don’t want to write the
code, you can file a bug (see L</GETTING HELP AND REPORTING BUGS>)
with C<"RFE: "> at the beginning of the Summary line.
=head2 Who can commit to libguestfs git?
About 5 people have commit access to github. Patches should be posted
on the list first and ACKed. The policy for ACKing and pushing
patches is outlined here:
L<https://www.redhat.com/archives/libguestfs/2012-January/msg00023.html>
=head2 Can I fork libguestfs?
Of course you can. Git makes it easy to fork libguestfs. Github
makes it even easier. It’s nice if you tell us on the mailing list
about forks and the reasons for them.
=head1 MISCELLANEOUS QUESTIONS
=head2 Can I monitor the live disk activity of a virtual machine using libguestfs?
A common request is to be able to use libguestfs to monitor the live
disk activity of a guest, for example, to get notified every time a
guest creates a new file. Libguestfs does I<not> work in the way some
people imagine, as you can see from this diagram:
 ┌─────────────────────────────────────┐
 │ monitoring program using libguestfs │
 └─────────────────────────────────────┘
                            ↓
 ┌───────────┐    ┌──────────────────────┐
 │  live VM  │    │ libguestfs appliance │
 ├───────────┤    ├──────────────────────┤
 │ kernel (1)│    │ appliance kernel (2) │
 └───────────┘    └──────────────────────┘
       ↓                     ↓  (r/o connection)
          ┌──────────────────────┐
          │      disk image      │
          └──────────────────────┘
This scenario is safe (as long as you set the C<readonly> flag when
adding the drive). However the libguestfs appliance kernel (2) does
not see all the changes made to the disk image, for two reasons:
=over 4
=item i.
The VM kernel (1) can cache data in memory, so it doesn't appear in
the disk image.
=item ii.
The libguestfs appliance kernel (2) doesn't expect that the disk image
is changing underneath it, so its own cache is not magically updated
even when the VM kernel (1) does update the disk image.
=back
The only supported solution is to restart the entire libguestfs
appliance whenever you want to look at changes in the disk image. At
the API level that corresponds to calling C<guestfs_shutdown> followed
by C<guestfs_launch>, which is a heavyweight operation (see also
L<guestfs-performance(1)>).
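The following is a rough sketch of that cycle in C. It assumes, as the
text above implies, that the drive added to the handle stays configured
across C<guestfs_shutdown>, so only shutdown and launch are repeated;
the polling interval and disk name are arbitrary placeholders:
 #include <stdlib.h>
 #include <unistd.h>
 #include <guestfs.h>

 int
 main (void)
 {
   guestfs_h *g = guestfs_create ();
   if (g == NULL)
     exit (EXIT_FAILURE);

   if (guestfs_add_drive_opts (g, "live-guest.img",
                               GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                               -1) == -1)
     exit (EXIT_FAILURE);

   for (;;) {
     if (guestfs_launch (g) == -1 ||
         guestfs_mount_ro (g, "/dev/sda1", "/") == -1)
       exit (EXIT_FAILURE);

     /* ... examine the filesystem here, eg. with guestfs_ls ... */

     /* Throw away the appliance (and its caches) before looking again. */
     if (guestfs_shutdown (g) == -1)
       exit (EXIT_FAILURE);

     sleep (60);   /* arbitrary polling interval */
   }
 }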
There are some unsupported hacks you can try if relaunching the
appliance is really too costly:
=over 4
=item *
Call C<guestfs_drop_caches (g, 3)>. This causes all cached data held
by the libguestfs appliance kernel (2) to be discarded, so it goes
back to the disk image.
However this on its own is not sufficient, because qemu also caches
some data. You will also need to patch libguestfs to (re-)enable the
C<cache=none> mode. See:
L<https://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-be-selected/>
=item *
Use a tool like L<virt-bmap|http://git.annexia.org/?p=virt-bmap.git>
instead.
=item *
Run an agent inside the guest.
=back
Nothing helps if the guest is making more fundamental changes (eg.
deleting filesystems). For those kinds of things you must relaunch
the appliance.
(Note there is a third problem that you need to use consistent
snapshots to really examine live disk images, but that’s a general
problem with using libguestfs against any live disk image.)
=head1 SEE ALSO
L<guestfish(1)>,
L<guestfs(3)>,
L<http://libguestfs.org/>.
=head1 AUTHORS
Richard W.M. Jones (C<rjones at redhat dot com>)
=head1 COPYRIGHT
Copyright (C) 2012-2023 Red Hat Inc.