File: guestfs-faq.pod


=head1 NAME

guestfs-faq - frequently asked questions about libguestfs

=head1 ABOUT LIBGUESTFS

=head2 What is libguestfs?

libguestfs is a way to create, access and modify disk images.  You can look inside disk images, modify the files they contain, create them from scratch, resize them, and much more.  It’s especially useful from scripts and programs and from the command line.

libguestfs is a C library (hence "lib-"), and a set of tools built on this library, and bindings for many common programming languages.

For a full overview of what libguestfs can do, read the introduction on the home page (L<http://libguestfs.org>).

=head2 What are the virt tools?

Virt tools (website: L<http://virt-tools.org>) are a whole set of virtualization management tools aimed at system administrators.  Some of them come from libguestfs, some from libvirt and many others from other open source projects.  So the virt tools are a superset of libguestfs.  However libguestfs comes with many important tools.  See L<http://libguestfs.org> for a full list.

=head2 Does libguestfs need { libvirt / KVM / Red Hat / Fedora }?

No!

libvirt is not a requirement for libguestfs.

libguestfs works with any disk image, including ones created in VMware, KVM, qemu, VirtualBox, Xen, and many other hypervisors, and ones which you have created from scratch.

S<Red Hat> sponsors (ie. pays for) development of libguestfs and a huge number of other open source projects.  But you can run libguestfs and the virt tools on many different Linux distros and Mac OS X.  We try our best to support all Linux distros as first-class citizens.  Some virt tools have been ported to Windows.

=head2 How does libguestfs compare to other tools?

=over 4

=item I<vs. kpartx>

Libguestfs takes a different approach from kpartx.  kpartx needs root, and mounts filesystems on the host kernel (which can be insecure - see L<guestfs-security(1)>).  Libguestfs isolates your host kernel from guests, is more flexible, scriptable, supports LVM, doesn't require root, is isolated from other processes, and cleans up after itself.  Libguestfs is more than just file access because you can use it to create images from scratch.

=item I<vs. vdfuse>

vdfuse is like kpartx but for VirtualBox images.  See the kpartx comparison above.  You can use libguestfs on the partition files exposed by vdfuse, although it’s not necessary since libguestfs can access VirtualBox images directly.

=item I<vs. qemu-nbd>

NBD (Network Block Device) is a protocol for exporting block devices over the network.  qemu-nbd is an NBD server which can handle any disk format supported by qemu (eg. raw, qcow2).  You can use libguestfs and qemu-nbd or nbdkit together to access block devices over the network, for example: C<guestfish -a nbd://remote>

=item I<vs. mounting filesystems in the host>

Mounting guest filesystems in the host is insecure and should be avoided completely for untrusted guests.  Use libguestfs to provide a layer of protection against filesystem exploits.  See also L<guestmount(1)>.

=item I<vs. parted>

Libguestfs supports LVM.  Libguestfs uses parted and provides most parted features through the libguestfs API.

=back

=head1 GETTING HELP AND REPORTING BUGS

=head2 How do I know what version I'm using?

The simplest way is:

 guestfish --version

Libguestfs development happens along an unstable branch and we periodically create a stable branch which we backport stable patches to.  To find out more, read L<guestfs(3)/LIBGUESTFS VERSION NUMBERS>.

=head2 How can I get help?

=head2 What mailing lists or chat rooms are available?

If you are a S<Red Hat> customer using Red Hat Enterprise Linux, please contact S<Red Hat Support>: L<http://redhat.com/support>

There is a mailing list, mainly for development, but users are also welcome to ask questions about libguestfs and the virt tools: L<https://www.redhat.com/mailman/listinfo/libguestfs>

You can also talk to us on IRC channel C<#guestfs> on Libera Chat.  We're not always around, so please stay in the channel after asking your question and someone will get back to you.

For other virt tools (not ones supplied with libguestfs) there is a general virt tools mailing list: L<https://www.redhat.com/mailman/listinfo/virt-tools-list>

=head2 How do I report bugs?

Please use the following link to enter a bug in Bugzilla:

L<https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools>

Include as much detail as you can and a way to reproduce the problem.

Include the full output of L<libguestfs-test-tool(1)>.

=head1 COMMON PROBLEMS

See also L<guestfs(3)/LIBGUESTFS GOTCHAS> for some "gotchas" with using the libguestfs API.

=head2 "Could not allocate dynamic translator buffer"

This obscure error is in fact an SELinux failure.  You have to enable the following SELinux boolean:

 setsebool -P virt_use_execmem=on

For further information, see L<https://bugzilla.redhat.com/show_bug.cgi?id=806106>.

=head2 "child process died unexpectedly"

[This error message was changed in libguestfs 1.21.18 to something more explanatory.]

This error indicates that qemu failed or the host kernel could not boot.  To get further information about the failure, you have to run:

 libguestfs-test-tool

If, after using this, you still don’t understand the failure, contact us (see previous section).

=head2 libguestfs: error: cannot find any suitable libguestfs supermin, fixed or
old-style appliance on LIBGUESTFS_PATH

=head2 febootstrap-supermin-helper: ext2: parent directory not found

=head2 supermin-helper: ext2: parent directory not found

[This issue is fixed permanently in libguestfs E<ge> 1.26.]

If you see any of these errors on Debian/Ubuntu, you need to run the following command:

 sudo update-guestfs-appliance

=head2 "Permission denied" when running libguestfs as root

You get a permission denied error when opening a disk image, even though you are running libguestfs as root.

This is caused by libvirt, and so only happens when using the libvirt backend.  When run as root, libvirt decides to run the qemu appliance as user C<qemu.qemu>.  Unfortunately this usually means that qemu cannot open disk images, especially if those disk images are owned by root, or are present in directories which require root access.

There is a bug open against libvirt to fix this: L<https://bugzilla.redhat.com/show_bug.cgi?id=1045069>

You can work around this by one of the following methods:

=over 4

=item *

Switch to the direct backend:

 export LIBGUESTFS_BACKEND=direct

=item *

Don’t run libguestfs as root.

=item *

Chmod the disk image and any parent directories so that the qemu user can access them.

=item *

(Nasty) Edit F</etc/libvirt/qemu.conf> and change the C<user> setting.

=back

=head2 execl: /init: Permission denied

B<Note:> If this error happens when you are using a distro package of libguestfs (eg. from Fedora, Debian, etc) then file a bug against the distro.  This is not an error which normal users should ever see if the distro package has been prepared correctly.

This error happens during the supermin boot phase of starting the appliance:

 supermin: mounting new root on /root
 supermin: chroot
 execl: /init: Permission denied
 supermin: debug: listing directory /
 [...followed by a lot of debug output...]

This is a complicated bug related to L<supermin(1)> appliances.  The appliance is constructed by copying files like F</bin/bash> and many libraries from the host.  The file C<hostfiles> lists the files that should be copied from the host into the appliance.  If some files don't exist on the host then they are missed out, but if these files are needed in order to (eg) run F</bin/bash> then you'll see the above error.

Diagnosing the problem involves studying the libraries needed by F</bin/bash>, ie:

 ldd /bin/bash

comparing that with C<hostfiles>, with the files actually available in the host filesystem, and with the debug output printed in the error message. Once you've worked out which file is missing, install that file using your package manager and try again.
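As a rough sketch of the first step (assuming a glibc-style C<ldd> and typical paths; the exact output format varies between distros), you can flag any library that F</bin/bash> needs but which does not exist on the host:

```shell
# List the shared libraries /bin/bash depends on, and flag any
# that do not exist on the host; a library reported MISSING here
# would also be missing from the appliance.
ldd /bin/bash | awk '/=> \// { print $3 }' | while read -r lib; do
    if [ -e "$lib" ]; then
        echo "present: $lib"
    else
        echo "MISSING: $lib"
    fi
done
```

Any C<MISSING> line points at a package you should install before rebuilding the appliance.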

You should also check that files like F</init> and F</bin/bash> (in the appliance) are executable.  The debug output shows file modes.

=head1 DOWNLOADING, INSTALLING, COMPILING LIBGUESTFS

=begin HTML

<!-- old anchor for the next section --> <a name="binaries"/>

=end HTML

=head2 Where can I get the latest binaries ...?

=over 4

=item Fedora E<ge> 11

Do:

 yum install '*guestf*'

For the latest builds, see: L<http://koji.fedoraproject.org/koji/packageinfo?packageID=8391>

=item Red Hat Enterprise Linux

=over 4

=item RHEL 6

=item RHEL 7

It is part of the default install.  On RHEL 6 and 7 (only) you have to install C<libguestfs-winsupport> to get Windows guest support.

=back

=item Debian and Ubuntu

For libguestfs E<lt> 1.26, after installing libguestfs you need to do:

 sudo update-guestfs-appliance

(This script has been removed on Debian/Ubuntu with libguestfs E<ge> 1.26 and instead the appliance is built on demand.)

On Ubuntu only:

 sudo chmod 0644 /boot/vmlinuz*

You may need to add yourself to the C<kvm> group:

 sudo usermod -a -G kvm yourlogin

=over 4

=item Debian Squeeze (6)

Hilko Bengen has built libguestfs in squeeze backports: L<http://packages.debian.org/search?keywords=guestfs&searchon=names&section=all&suite=squeeze-backports>

=item Debian Wheezy and later (7+)

Hilko Bengen supports libguestfs on Debian.  Official Debian packages are available: L<http://packages.debian.org/search?keywords=libguestfs>

=item Ubuntu

We don’t have a full time Ubuntu maintainer, and the packages supplied by Canonical (which are outside our control) are sometimes broken.

Canonical decided to change the permissions on the kernel so that it is not readable by non-root users.  This is completely crazy, but they won't change it (L<https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725>).  Therefore every user needs to do:

 sudo chmod 0644 /boot/vmlinuz*

=over 4

=item Ubuntu 12.04

The libguestfs in this version of Ubuntu works, but you need to update febootstrap and seabios to the latest versions.

You need febootstrap E<ge> 3.14-2 from: L<http://packages.ubuntu.com/precise/febootstrap>

After installing or updating febootstrap, rebuild the appliance:

 sudo update-guestfs-appliance

You need seabios E<ge> 0.6.2-0ubuntu2.1 or E<ge> 0.6.2-0ubuntu3 from: L<http://packages.ubuntu.com/precise-updates/seabios> or L<http://packages.ubuntu.com/quantal/seabios>

You also need to do (see above):

 sudo chmod 0644 /boot/vmlinuz*

=back

=back

=item Gentoo

libguestfs was added to Gentoo in 2012-07 by Andreis Vinogradovs (libguestfs) and Maxim Koltsov (mainly hivex).  Do:

 emerge libguestfs

=item Mageia

Libguestfs was added to Mageia in 2013-08. Do:

 urpmi libguestfs

=item SuSE

libguestfs was added to SuSE in 2012 by Olaf Hering.

=item ArchLinux

libguestfs was added to the AUR in 2010.

=item Other Linux distributions

Compile from source (see the next section).

=item Other non-Linux distributions

You will have to compile from source and port it yourself.

=back

=head2 How can I compile and install libguestfs from source?

You can compile libguestfs from git or a source tarball.  Read the README file before starting.

Git: L<https://github.com/libguestfs/libguestfs> Source tarballs: L<http://libguestfs.org/download>

Don’t run C<make install>! Use the C<./run> script instead (see README).

=head2 How can I compile and install libguestfs if my distro doesn't have new
enough qemu/supermin/kernel?

Libguestfs needs supermin 5.  If supermin 5 hasn't been ported to your distro, then see the question below.

First compile qemu, supermin and/or the kernel from source.  You do I<not> need to C<make install> them.

In the libguestfs source directory, create two files.  C<localconfigure> should contain:

 source localenv
 #export PATH=/tmp/qemu/x86_64-softmmu:$PATH
 ./configure --prefix /usr "$@"

Make C<localconfigure> executable.

C<localenv> should contain:

 #export SUPERMIN=/tmp/supermin/src/supermin
 #export LIBGUESTFS_HV=/tmp/qemu/x86_64-softmmu/qemu-system-x86_64
 #export SUPERMIN_KERNEL=/tmp/linux/arch/x86/boot/bzImage
 #export SUPERMIN_KERNEL_VERSION=4.XX.0
 #export SUPERMIN_MODULES=/tmp/lib/modules/4.XX.0

Uncomment and adjust these lines as required to use the alternate programs you have compiled.

Use C<./localconfigure> instead of C<./configure>, but otherwise you compile libguestfs as usual.

Don’t run C<make install>! Use the C<./run> script instead (see README).

=head2 How can I compile and install libguestfs without supermin?

If supermin 5 supports your distro, but you don’t happen to have a new enough supermin installed, then see the previous question.

If supermin 5 doesn't support your distro at all, you will need to use the "fixed appliance method" where you use a pre-compiled binary appliance.  To build libguestfs without supermin, you need to pass C<--disable-appliance --disable-daemon> to either F<./autogen.sh> or F<./configure> (depending whether you are building respectively from git or from tarballs).  Then, when using libguestfs, you B<must> set the C<LIBGUESTFS_PATH> environment variable to the directory of a pre-compiled appliance, as also described in L<guestfs-internals(1)/FIXED APPLIANCE>.

For pre-compiled appliances, see also: L<http://libguestfs.org/download/binaries/appliance/>.

Patches to port supermin to more Linux distros are welcome.

=head2 How is sVirt supported?

B<Note for Fedora/RHEL users:> This configuration is the default starting with S<Fedora 18> and S<RHEL 7>.  If you find any problems, please let us know or file a bug.

L<SVirt|http://selinuxproject.org/page/SVirt> provides a hardened appliance using SELinux, making it very hard for a rogue disk image to "escape" from the confinement of libguestfs and damage the host (it's fair to say that even in standard libguestfs this would be hard, but sVirt provides an extra layer of protection for the host and more importantly protects virtual machines on the same host from each other).

Currently to enable sVirt you will need libvirt E<ge> 0.10.2 (1.0 or later preferred), libguestfs E<ge> 1.20, and the SELinux policies from recent Fedora.  If you are not running S<Fedora 18+>, you will need to make changes to your SELinux policy - contact us on the mailing list.

Once you have the requirements, do:

 ./configure --with-default-backend=libvirt       # libguestfs >= 1.22
 ./configure --with-default-attach-method=libvirt # libguestfs <= 1.20
 make

Set SELinux to Enforcing mode, and sVirt should be used automatically.

All, or almost all, features of libguestfs should work under sVirt.  There is one known shortcoming: L<virt-rescue(1)> will not use libvirt (hence sVirt), but falls back to direct launch of qemu.  So you won't currently get the benefit of sVirt protection when using virt-rescue.

You can check if sVirt is being used by enabling libvirtd logging (see F</etc/libvirt/libvirtd.log>), killing and restarting libvirtd, and checking the log files for S<"Setting SELinux context on ..."> messages.

In theory sVirt should support AppArmor, but we have not tried it.  It will almost certainly require patching libvirt and writing an AppArmor policy.

=head2 Libguestfs has a really long list of dependencies!

The base library doesn't depend on very much, but there are three causes of the long list of other dependencies:

=over 4

=item 1.

Libguestfs has to be able to read and edit many different disk formats.  For example, XFS support requires XFS tools.

=item 2.

There are language bindings for many different languages, all requiring their own development tools.  All language bindings (except C) are optional.

=item 3.

There are some optional library features which can be disabled.

=back

Since libguestfs E<ge> 1.26 it is possible to split up the appliance dependencies (item 1 in the list above) and thus have (eg) C<libguestfs-xfs> as a separate subpackage for processing XFS disk images. We encourage downstream packagers to start splitting the base libguestfs package into smaller subpackages.

=head2 Errors during launch on Fedora E<ge> 18, RHEL E<ge> 7

In Fedora E<ge> 18 and RHEL E<ge> 7, libguestfs uses libvirt to manage the appliance.  Previously (and still in upstream builds) libguestfs ran qemu directly:

 ┌──────────────────────────────────┐
 │ libguestfs                       │
 ├────────────────┬─────────────────┤
 │ direct backend │ libvirt backend │
 └────────────────┴─────────────────┘
        ↓                  ↓
    ┌───────┐         ┌──────────┐
    │ qemu  │         │ libvirtd │
    └───────┘         └──────────┘
                       ┌───────┐
                       │ qemu  │
                       └───────┘
 
    upstream          Fedora 18+
    non-Fedora         RHEL 7+
    non-RHEL

The libvirt backend is more sophisticated, supporting SELinux/sVirt (see above) and more.  It is, however, more complex and so less robust.

If you have permissions problems using the libvirt backend, you can switch to the direct backend by setting this environment variable:

 export LIBGUESTFS_BACKEND=direct

before running any libguestfs program or virt tool.

=head2 How can I switch to a fixed / prebuilt appliance?

This may improve the stability and performance of libguestfs on Fedora and RHEL.

Any time after installing libguestfs, run the following commands as root:

 mkdir -p /usr/local/lib/guestfs/appliance
 libguestfs-make-fixed-appliance /usr/local/lib/guestfs/appliance
 ls -l /usr/local/lib/guestfs/appliance

Now set the following environment variable before using libguestfs or any virt tool:

 export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance

Of course you can change the path to any directory you want.  You can share the appliance across machines that have the same architecture (eg. all x86-64), but note that libvirt will prevent you from sharing the appliance across NFS because of permissions problems (so either switch to the direct backend or don't use NFS).

=head2 How can I speed up libguestfs builds?

By far the most important thing you can do is to install and properly configure Squid.  Note that the default configuration that ships with Squid is rubbish, so configuring it is not optional.

A very good place to start with Squid configuration is here: L<https://fedoraproject.org/wiki/Extras/MockTricks#Using_Squid_to_Speed_Up_Mock_package_downloads>

Make sure Squid is running, and that the environment variables C<$http_proxy> and C<$ftp_proxy> are pointing to it.
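For example (assuming Squid is listening on its default port, 3128, on the local machine; adjust the host and port to match your F<squid.conf>):

```shell
# Point package downloads at the local Squid cache.  Port 3128 is
# Squid's default; change it if your squid.conf uses another port.
export http_proxy=http://localhost:3128/
export ftp_proxy=http://localhost:3128/
```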

With Squid running and correctly configured, appliance builds should be reduced to a few minutes.

=head3 How can I speed up libguestfs builds (Debian)?

Hilko Bengen suggests using "approx" which is a Debian archive proxy (L<http://packages.debian.org/approx>).  This tool is documented on Debian in the approx(8) manual page.

=head1 SPEED, DISK SPACE USED BY LIBGUESTFS

B<Note:> Most of the information in this section has moved: L<guestfs-performance(1)>.

=head2 Upload or write seem very slow.

If the underlying disk is not fully allocated (eg. sparse raw or qcow2) then writes can be slow because the host operating system has to do costly disk allocations while you are writing. The solution is to use a fully allocated format instead, ie. non-sparse raw, or qcow2 with the C<preallocation=metadata> option.

=head2 Libguestfs uses too much disk space!

libguestfs caches a large-ish appliance in:

 /var/tmp/.guestfs-<UID>

If the environment variable C<TMPDIR> is defined, then F<$TMPDIR/.guestfs-E<lt>UIDE<gt>> is used instead.

You can safely delete this directory when you are not using libguestfs.
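To see whether a cached appliance is present and how much space it is using, a quick check (the path layout follows the description above):

```shell
# Compute the cache directory libguestfs would use, honouring
# $TMPDIR if it is set, then report its size if it exists.
cachedir="${TMPDIR:-/var/tmp}/.guestfs-$(id -u)"
if [ -d "$cachedir" ]; then
    du -sh "$cachedir"
else
    echo "no cached appliance at $cachedir"
fi
```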

=head2 virt-sparsify seems to expand the image to the full size of the virtual disk.

If the input to L<virt-sparsify(1)> is raw, then the output will be raw sparse.  Make sure you are measuring the output with a tool which understands sparseness such as C<du -sh>.  It can make a huge difference:

 $ ls -lh test1.img
 -rw-rw-r--. 1 rjones rjones 100M Aug  8 08:08 test1.img
 $ du -sh test1.img
 3.6M	test1.img

(Compare the apparent size, B<100M>, with the actual size, B<3.6M>.)
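You can reproduce the effect with any sparse file (the 100M size and the F</tmp> path here are just for illustration):

```shell
# Create a 100M sparse file: the apparent size is 100M, but almost
# no blocks are actually allocated on disk.
truncate -s 100M /tmp/sparse-demo.img
ls -lh /tmp/sparse-demo.img    # reports the apparent size (100M)
du -sh /tmp/sparse-demo.img    # reports the allocated size (~0)
rm /tmp/sparse-demo.img
```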

If all this confuses you, use a non-sparse output format by specifying the I<--convert> option, eg:

 virt-sparsify --convert qcow2 disk.raw disk.qcow2

=head2 Why doesn't virt-resize work on the disk image in-place?

Resizing a disk image is very tricky -- especially making sure that you don't lose data or break the bootloader.  The current method effectively creates a new disk image and copies the data plus bootloader from the old one.  If something goes wrong, you can always go back to the original.

If we were to make virt-resize work in-place then there would have to be limitations: for example, you wouldn't be allowed to move existing partitions (because moving data across the same disk is most likely to corrupt data in the event of a power failure or crash), and LVM would be very difficult to support (because of the almost arbitrary mapping between LV content and underlying disk blocks).

Another method we have considered is to place a snapshot over the original disk image, so that the original data is untouched and only differences are recorded in the snapshot.  You can do this today using C<qemu-img create> + C<virt-resize>, but qemu currently isn't smart enough to recognize when the same block is written back to the snapshot as already exists in the backing disk, so you will find that this doesn't save you any space or time.

In summary, this is a hard problem, and what we have now mostly works so we are reluctant to change it.

=head2 Why doesn't virt-sparsify work on the disk image in-place?

In libguestfs E<ge> 1.26, virt-sparsify can now work on disk images in place.  Use:

 virt-sparsify --in-place disk.img

But first you should read L<virt-sparsify(1)/IN-PLACE SPARSIFICATION>.

=head1 PROBLEMS OPENING DISK IMAGES

=head2 Remote libvirt guests cannot be opened.

Opening remote libvirt guests is not supported at this time.  For example this won't work:

 guestfish -c qemu://remote/system -d Guest

To open remote disks you have to export them somehow, then connect to the export.  For example if you decided to use NBD:

 remote$ qemu-nbd -t -p 10809 guest.img
  local$ guestfish -a nbd://remote:10809 -i

Other possibilities include ssh (if qemu is recent enough), NFS or iSCSI. See L<guestfs(3)/REMOTE STORAGE>.

=head2 How can I open this strange disk source?

You have a disk image located inside another system that requires access via a library / HTTP / REST / proprietary API, or is compressed or archived in some way.  (One example would be remote access to OpenStack glance images without actually downloading them.)

We have a sister project called nbdkit (L<https://github.com/libguestfs/nbdkit>).  This project lets you turn any disk source into an NBD server.  Libguestfs can access NBD servers directly, eg:

 guestfish -a nbd://remote

nbdkit is liberally licensed, so you can link it to or include it in proprietary libraries and code.  It also has a simple, stable plugin API so you can easily write plugins against the API which will continue to work in future.

=head2 Error opening VMDK disks: "uses a vmdk feature which is not supported by
this qemu version: VMDK version 3"

Qemu (and hence libguestfs) only supports certain VMDK disk images.  Others won't work, giving this or similar errors.

Ideally someone would fix qemu to support the latest VMDK features, but in the meantime you have three options:

=over 4

=item 1.

If the guest is hosted on a live, reachable ESX server, then locate and download the disk image called F<I<somename>-flat.vmdk>.  Despite the name, this is a raw disk image, and can be opened by anything.

If you have a recent enough version of qemu and libguestfs, then you may be able to access this disk image remotely using either HTTPS or ssh.  See L<guestfs(3)/REMOTE STORAGE>.

=item 2.

Use VMware’s proprietary vdiskmanager tool to convert the image to raw format.

=item 3.

Use nbdkit with the proprietary VDDK plugin to live export the disk image as an NBD source.  This should allow you to read and write the VMDK file.

=back

=head2 UFS disks (as used by BSD) cannot be opened.

The UFS filesystem format has many variants, and these are not self-identifying.  The Linux kernel has to be told which variant of UFS it has to use, which libguestfs cannot know.

You have to pass the right C<ufstype> mount option when mounting these filesystems.

See L<https://www.kernel.org/doc/Documentation/filesystems/ufs.txt>

=head2 Windows ReFS

Windows ReFS is Microsoft’s copy of ZFS/Btrfs.  This filesystem has not yet been reverse engineered and implemented in the Linux kernel, and therefore libguestfs doesn't support it.  At the moment it seems to be very rare "in the wild".

=head2 Non-ASCII characters don’t appear on VFAT filesystems.

Typical symptoms of this problem:

=over 4

=item *

You get an error when you create a file where the filename contains non-ASCII characters, particularly non 8-bit characters from Asian languages (Chinese, Japanese, etc).  The filesystem is VFAT.

=item *

When you list a directory from a VFAT filesystem, filenames appear as question marks.

=back

This is a design flaw of the GNU/Linux system.

VFAT stores long filenames as UTF-16 characters.  When opening or returning filenames, the Linux kernel has to translate these to some form of 8 bit string.  UTF-8 would be the obvious choice, except for Linux users who persist in using non-UTF-8 locales (the user’s locale is not known to the kernel because it’s a function of libc).

Therefore you have to tell the kernel what translation you want done when you mount the filesystem.  The two methods are the C<iocharset> parameter (which is not relevant to libguestfs) and the C<utf8> flag.

Therefore, when you use a VFAT filesystem, you have to add the C<utf8> flag at mount time.  From guestfish, use:

 ><fs> mount-options utf8 /dev/sda1 /

or on the guestfish command line:

 guestfish [...] -m /dev/sda1:/:utf8

or from the API:

 guestfs_mount_options (g, "utf8", "/dev/sda1", "/");

The kernel will then translate filenames to and from UTF-8 strings.

We considered adding this mount option transparently, but unfortunately there are several problems with doing that:

=over 4

=item *

On some Linux systems, the C<utf8> mount option doesn't work.  We don't precisely understand what systems or why, but this was reliably reported by one user.

=item *

It would prevent you from using the C<iocharset> parameter because it is incompatible with C<utf8>.  It is probably not a good idea to use this parameter, but we don't want to prevent it.

=back

=head2 Non-ASCII characters appear as underscore (_) on ISO9660 filesystems.

The filesystem was not prepared correctly with mkisofs or genisoimage.  Make sure the filesystem was created using Joliet and/or Rock Ridge extensions. libguestfs does not require any special mount options to handle the filesystem.

=head2 Cannot open Windows guests which use NTFS.

You see errors like:

 mount: unknown filesystem type 'ntfs'

On Red Hat Enterprise Linux or CentOS E<lt> 7.2, you have to install the L<libguestfs-winsupport|https://people.redhat.com/~rjones/libguestfs-winsupport/> package.  In RHEL E<ge> 7.2, C<libguestfs-winsupport> is part of the base RHEL distribution, but see the next question.

=head2 "mount: unsupported filesystem type" with NTFS in RHEL E<ge> 7.2

In RHEL 7.2 we were able to add C<libguestfs-winsupport> to the base RHEL distribution, but we had to disable the ability to use it for opening and editing filesystems.  It is only supported when used with L<virt-v2v(1)>. If you try to use L<guestfish(1)> or L<guestmount(1)> or some other programs on an NTFS filesystem, you will see the error:

 mount: unsupported filesystem type

This is not a supported configuration, and it will not be made to work in RHEL.  Don't bother to open a bug about it, as it will be immediately C<CLOSED -E<gt> WONTFIX>.

You may L<compile your own libguestfs removing this restriction|https://www.redhat.com/archives/libguestfs/2016-February/msg00145.html>, but that won't be endorsed or supported by Red Hat.

=head2 Cannot open or inspect RHEL 7 guests.

=head2 Cannot open Linux guests which use XFS.

RHEL 7 guests, and any other guests that use XFS, can be opened by libguestfs, but you have to install the C<libguestfs-xfs> package.

=head1 USING LIBGUESTFS IN YOUR OWN PROGRAMS

=head2 The API has hundreds of methods, where do I start?

We recommend you start by reading the API overview: L<guestfs(3)/API OVERVIEW>.

Although the API overview covers the C API, it is still worth reading even if you are going to use another programming language, because the API is the same, just with simple logical changes to the names of the calls:

                  C  guestfs_ln_sf (g, target, linkname);
             Python  g.ln_sf (target, linkname);
              OCaml  g#ln_sf target linkname;
               Perl  $g->ln_sf (target, linkname);
  Shell (guestfish)  ln-sf target linkname
                PHP  guestfs_ln_sf ($g, $target, $linkname);

Once you're familiar with the API overview, you should look at this list of starting points for other language bindings: L<guestfs(3)/USING LIBGUESTFS WITH OTHER PROGRAMMING LANGUAGES>.

=head2 Can I use libguestfs in my proprietary / closed source / commercial program?

In general, yes.  However this is not legal advice - read the license that comes with libguestfs, and if you have specific questions contact a lawyer.

In the source tree the license is in the file C<COPYING.LIB> (LGPLv2+ for the library and bindings) and C<COPYING> (GPLv2+ for the standalone programs).

=begin HTML

<!-- old anchor for the next section --> <a name="debug"/>

=end HTML

=head1 DEBUGGING LIBGUESTFS

=head2 Help, it’s not working!

If no libguestfs program seems to work at all, run the program below and paste the B<complete, unedited> output into an email to C<libguestfs> @ C<redhat.com>:

 libguestfs-test-tool

If a particular operation fails, supply all the information in this checklist, in an email to C<libguestfs> @ C<redhat.com>:

=over 4

=item 1.

What are you trying to do?

=item 2.

What exact command(s) did you run?

=item 3.

What was the precise error or output of these commands?

=item 4.

Enable debugging, run the commands again, and capture the B<complete> output.  B<Do not edit the output.>

 export LIBGUESTFS_DEBUG=1
 export LIBGUESTFS_TRACE=1

=item 5.

Include the version of libguestfs, the operating system version, and how you installed libguestfs (eg. from source, C<yum install>, etc.)

=back

=head2 How do I debug when using any libguestfs program or tool (eg. virt-customize
or virt-df)?

There are two C<LIBGUESTFS_*> environment variables you can set in order to get more information from libguestfs.

=over 4

=item C<LIBGUESTFS_TRACE>

Set this to 1 and libguestfs will print out each command / API call in a format which is similar to guestfish commands.

=item C<LIBGUESTFS_DEBUG>

Set this to 1 in order to enable massive amounts of debug messages.  If you think there is some problem inside the libguestfs appliance, then you should use this option.

=back

To set these from the shell, do this before running the program:

 export LIBGUESTFS_TRACE=1
 export LIBGUESTFS_DEBUG=1

For csh/tcsh the equivalent commands would be:

 setenv LIBGUESTFS_TRACE 1
 setenv LIBGUESTFS_DEBUG 1

For further information, see L<guestfs(3)/ENVIRONMENT VARIABLES>.

=head2 How do I debug when using guestfish?

You can use the same environment variables above.  Alternatively use the guestfish options -x (to trace commands) or -v (to get the full debug output), or both.
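For example, both options can be combined in a single run.  This is a sketch only: the disk image name F<disk.img> is a placeholder for your own image.

```shell
# -x traces each guestfish command; -v prints the full appliance debug
# output.  --ro ensures the image is not modified while debugging.
guestfish -x -v --ro -a disk.img run
```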

For further information, see L<guestfish(1)>.

=head2 How do I debug when using the API?

Call L<guestfs(3)/guestfs_set_trace> to enable command traces, and/or L<guestfs(3)/guestfs_set_verbose> to enable debug messages.

For best results, call these functions as early as possible, just after creating the guestfs handle if you can, and definitely before calling launch.

=head2 How do I capture debug output and put it into my logging system?

Use the event API.  For examples, see: L<guestfs(3)/SETTING CALLBACKS TO HANDLE EVENTS> and the F<examples/debug-logging.c> program in the libguestfs sources.

=head2 Digging deeper into the appliance boot process.

Enable debugging and then read this documentation on the appliance boot process: L<guestfs-internals(1)>.

=head2 libguestfs hangs or fails during run/launch.

Enable debugging and look at the full output.  If you cannot work out what is going on, file a bug report, including the I<complete> output of L<libguestfs-test-tool(1)>.

=head2 Debugging libvirt

If you are using the libvirt backend, and libvirt is failing, then you can enable debugging by editing F</etc/libvirt/libvirtd.conf>.

If you are running as non-root, then you have to edit a different file. Create F<~/.config/libvirt/libvirtd.conf> containing:

 log_level=1
 log_outputs="1:file:/tmp/libvirtd.log"

Kill any session (non-root) libvirtd that is running, and next time you run the libguestfs command, you should see a large amount of useful debugging information from libvirtd in F</tmp/libvirtd.log>.

=head2 Broken kernel, or trying a different kernel.

You can choose a different kernel for the appliance by setting some L<supermin environment variables|supermin(8)/ENVIRONMENT VARIABLES>:

 export SUPERMIN_KERNEL_VERSION=4.8.0-1.fc25.x86_64
 export SUPERMIN_KERNEL=/boot/vmlinuz-$SUPERMIN_KERNEL_VERSION
 export SUPERMIN_MODULES=/lib/modules/$SUPERMIN_KERNEL_VERSION
 rm -rf /var/tmp/.guestfs-*
 libguestfs-test-tool

=head2 Broken qemu, or trying a different qemu.

You can choose a different qemu by setting the hypervisor L<environment variable|guestfs(3)/ENVIRONMENT VARIABLES>:

 export LIBGUESTFS_HV=/path/to/qemu-system-x86_64
 libguestfs-test-tool

=head1 DESIGN/INTERNALS OF LIBGUESTFS

See also L<guestfs-internals(1)>.

=head2 Why don’t you do everything through the FUSE / filesystem interface?

We offer a command called L<guestmount(1)> which lets you mount guest filesystems on the host.  This is implemented as a FUSE module.  Why don't we just implement the whole of libguestfs using this mechanism, instead of having the large and rather complicated API?

The reasons are twofold.  Firstly, libguestfs offers API calls for doing things like creating and deleting partitions and logical volumes, which don't fit into a filesystem model very easily.  Or rather, you could fit them in: for example, creating a partition could be mapped to C<mkdir /fs/hda1> but then you'd have to specify some method to choose the size of the partition (maybe C<echo 100M E<gt> /fs/hda1/.size>), and the partition type, start and end sectors etc., but once you've done that the filesystem-based API starts to look more complicated than the call-based API we currently have.

The second reason is for efficiency.  FUSE itself is reasonably efficient, but it does make lots of small, independent calls into the FUSE module.  In guestmount these have to be translated into messages to the libguestfs appliance which has a big overhead (in time and round trips).  For example, reading a file in 64 KB chunks is inefficient because each chunk would turn into a single round trip.  In the libguestfs API it is much more efficient to download an entire file or directory through one of the streaming calls like C<guestfs_download> or C<guestfs_tar_out>.

=head2 Why don’t you do everything through GVFS?

The problems are similar to the problems with FUSE.

GVFS is a better abstraction than POSIX/FUSE.  There is an FTP backend for GVFS, which is encouraging because FTP is conceptually similar to the libguestfs API.  However the GVFS FTP backend makes multiple simultaneous connections in order to keep interactivity, which we can't easily do with libguestfs.

=begin HTML

<!-- old anchor for the next section --> <a name="backup"/>

=end HTML

=head2 Why can I write to the disk, even though I added it read-only?

=head2 Why does C<--ro> appear to have no effect?

When you add a disk read-only, libguestfs places a writable overlay on top of the underlying disk.  Writes go into this overlay, and are discarded when the handle is closed (or C<guestfish> etc. exits).

There are two reasons for doing it this way.  Firstly, read-only disks aren't possible in many cases: IDE, for example, simply doesn't support them, so you couldn't have an IDE-emulated read-only disk (although IDE emulation is not common in real libguestfs installations).

Secondly and more importantly, even if read-only disks were possible, you wouldn't want them.  Mounting any filesystem that has a journal, even C<mount -o ro>, causes writes to the filesystem because the journal has to be replayed and metadata updated.  If the disk was truly read-only, you wouldn't be able to mount a dirty filesystem.

To make it usable, we create the overlay as a place to temporarily store these writes, and then we discard it afterwards.  This ensures that the underlying disk is always untouched.

Note also that there is a regression test for this when building libguestfs (in C<tests/qemu>).  This is one reason why it’s important for packagers to run the test suite.

=head2 Does C<--ro> make all disks read-only?

I<No!>  The C<--ro> option only affects disks added on the command line, ie. using the C<-a> and C<-d> options.

In guestfish, if you use the C<add> command, then the disk is added read-write (unless you specify the C<readonly:true> flag explicitly with the command).
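For example, here is a sketch of adding a disk with the explicit read-only flag from within a guestfish script (F<disk.img> is a placeholder for your own image):

```shell
guestfish <<'EOF'
add disk.img readonly:true
run
list-filesystems
EOF
```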

=head2 Can I use C<guestfish --ro> as a way to backup my virtual machines?

Usually this is I<not> a good idea.  The question is answered in more detail in this mailing list posting: L<https://www.redhat.com/archives/libguestfs/2010-August/msg00024.html>

See also the next question.

=head2 Why can’t I run fsck on a live filesystem using C<guestfish --ro>?

This command will usually I<not> work:

 guestfish --ro -a /dev/vg/my_root_fs run : fsck /dev/sda

The reason for this is that qemu creates a snapshot over the original filesystem, but it doesn't create a strict point-in-time snapshot.  Blocks of data on the underlying filesystem are read by qemu at different times as the fsck operation progresses, with host writes in between.  The result is that fsck sees massive corruption (imaginary, not real!) and fails.

What you have to do is to create a point-in-time snapshot.  If it’s a logical volume, use an LVM2 snapshot.  If the filesystem is located inside a host file stored on something like btrfs or ZFS, use a btrfs/ZFS snapshot, and then run the fsck on the snapshot.  In practice you don't need to use libguestfs for this -- just run F</sbin/fsck> directly.
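As a sketch, for the logical volume from the example above (the snapshot name and the 1G copy-on-write size are assumptions for illustration):

```shell
# Take a point-in-time LVM2 snapshot, check it without modifying it,
# then remove the snapshot again.
lvcreate --snapshot --size 1G --name my_root_fs_snap /dev/vg/my_root_fs
fsck -n /dev/vg/my_root_fs_snap
lvremove -f /dev/vg/my_root_fs_snap
```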

Creating point-in-time snapshots of host devices and files is outside the scope of libguestfs, although libguestfs can operate on them once they are created.

=head2 What’s the difference between guestfish and virt-rescue?

Many people are confused by the two similar tools we provide:

 $ guestfish --ro -a guest.img
 ><fs> run
 ><fs> fsck /dev/sda1

 $ virt-rescue --ro guest.img
 ><rescue> /sbin/fsck /dev/sda1

And the related question which then arises is why you can’t type in full shell commands with all the --options in guestfish (but you can in L<virt-rescue(1)>).

L<guestfish(1)> is a program providing structured access to the L<guestfs(3)> API.  It happens to be a nice interactive shell too, but its primary purpose is structured access from shell scripts.  Think of it more like a language binding, like Python and other bindings, but for shell.  The key differentiating factor of guestfish (and the libguestfs API in general) is the ability to automate changes.
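For example, here is a minimal sketch of driving guestfish non-interactively from a shell script (F<disk.img> is a placeholder, and the I<-i> option assumes the image contains an inspectable operating system):

```shell
#!/bin/sh
# Inspect the image, mount its filesystems read-only, and read a file.
guestfish --ro -a disk.img -i <<'EOF'
cat /etc/hostname
EOF
```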

L<virt-rescue(1)> is a free-form way to boot the libguestfs appliance and make arbitrary changes to your VM.  It’s not structured, you can't automate it, but for making quick ad-hoc fixes to your guests, it can be quite useful.

But, libguestfs also has a "backdoor" into the appliance allowing you to send arbitrary shell commands.  It’s not as flexible as virt-rescue, because you can't interact with the shell commands, but here it is anyway:

 ><fs> debug sh "cmd arg1 arg2 ..."

Note that you should B<not> rely on this.  It could be removed or changed in future. If your program needs some operation, please add it to the libguestfs API instead.

=head2 What’s the deal with C<guestfish -i>?

=head2 Why does virt-cat only work on a real VM image, but virt-df works on any
disk image?

=head2 What does "no root device found in this operating system image" mean?

These questions are all related at a fundamental level which may not be immediately obvious.

At the L<guestfs(3)> API level, a "disk image" is just a pile of partitions and filesystems.

In contrast, when the virtual machine boots, it mounts those filesystems into a consistent hierarchy such as:

 /          (/dev/sda2)
 ├── /boot  (/dev/sda1)
 ├── /home  (/dev/vg_external/Homes)
 ├── /usr   (/dev/vg_os/lv_usr)
 └── /var   (/dev/vg_os/lv_var)

(or drive letters, in the case of Windows).

The API first of all sees the disk image at the "pile of filesystems" level.  But it also has a way to inspect the disk image to see if it contains an operating system, and how the disks are mounted when the operating system boots: L<guestfs(3)/INSPECTION>.

Users expect some tools (like L<virt-cat(1)>) to work with VM paths:

 virt-cat fedora.img /var/log/messages

How does virt-cat know that F</var> is a separate partition? The trick is that virt-cat performs inspection on the disk image, and uses that to translate the path correctly.

Some tools (including L<virt-cat(1)>, L<virt-edit(1)>, L<virt-ls(1)>)  use inspection to map VM paths.  Other tools, such as L<virt-df(1)> and L<virt-filesystems(1)> operate entirely at the raw "big pile of filesystems" level of the libguestfs API, and don't use inspection.

L<guestfish(1)> is in an interesting middle ground.  If you use the I<-a> and I<-m> command line options, then you have to tell guestfish exactly how to add disk images and where to mount partitions. This is the raw API level.

If you use the I<-i> option, libguestfs performs inspection and mounts the filesystems for you.

The error C<no root device found in this operating system image> is related to this.  It means inspection was unable to locate an operating system within the disk image you gave it.  You might see this from programs like virt-cat if you try to run them on something which is just a disk image, not a virtual machine disk image.
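The difference between the two levels can be seen directly in guestfish.  This is a sketch which assumes F<fedora.img> has its root filesystem on F</dev/sda2> and F</boot> on F</dev/sda1> (both assumptions):

```shell
# Raw API level: you state exactly what to add and where to mount it.
guestfish --ro -a fedora.img -m /dev/sda2:/ -m /dev/sda1:/boot <<'EOF'
cat /etc/fstab
EOF

# Inspection level: libguestfs works out the mountpoints itself.
guestfish --ro -i -a fedora.img <<'EOF'
cat /etc/fstab
EOF
```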

=head2 What do these C<debug*> and C<internal-*> functions do?

There are some functions which are used for debugging and internal purposes which are I<not> part of the stable API.

The C<debug*> (or C<guestfs_debug*>) functions, primarily L<guestfs(3)/guestfs_debug> and a handful of others, are used for debugging libguestfs.  Although they are not part of the stable API and thus may change or be removed at any time, some programs may want to call these while waiting for features to be added to libguestfs.

The C<internal-*> (or C<guestfs_internal_*>) functions are purely to be used by libguestfs itself.  There is no reason for programs to call them, and programs should not try to use them.  Using them will often cause bad things to happen, as well as not being part of the documented stable API.

=head1 DEVELOPERS

=head2 Where do I send patches?

Please send patches to the libguestfs mailing list L<https://www.redhat.com/mailman/listinfo/libguestfs>.  You don't have to be subscribed, but there will be a delay until your posting is manually approved.

B<Please don’t use github pull requests - they will be ignored>.  The reasons are (a) we want to discuss and dissect patches on the mailing list, and (b) github pull requests turn into merge commits but we prefer to have a linear history.

=head2 How do I propose a feature?

Large new features that you intend to contribute should be discussed on the mailing list first (L<https://www.redhat.com/mailman/listinfo/libguestfs>). This avoids disappointment and wasted work if we don't think the feature would fit into the libguestfs project.

If you want to suggest a useful feature but don’t want to write the code, you can file a bug (see L</GETTING HELP AND REPORTING BUGS>)  with C<"RFE: "> at the beginning of the Summary line.

=head2 Who can commit to libguestfs git?

About 5 people have commit access to github.  Patches should be posted on the list first and ACKed.  The policy for ACKing and pushing patches is outlined here:

L<https://www.redhat.com/archives/libguestfs/2012-January/msg00023.html>

=head2 Can I fork libguestfs?

Of course you can.  Git makes it easy to fork libguestfs.  Github makes it even easier.  It’s nice if you tell us on the mailing list about forks and the reasons for them.

=head1 MISCELLANEOUS QUESTIONS

=head2 Can I monitor the live disk activity of a virtual machine using libguestfs?

A common request is to be able to use libguestfs to monitor the live disk activity of a guest, for example, to get notified every time a guest creates a new file.  Libguestfs does I<not> work in the way some people imagine, as you can see from this diagram:

            ┌─────────────────────────────────────┐
            │ monitoring program using libguestfs │
            └─────────────────────────────────────┘
 ┌───────────┐    ┌──────────────────────┐
 │ live VM   │    │ libguestfs appliance │
 ├───────────┤    ├──────────────────────┤
 │ kernel (1)│    │ appliance kernel (2) │
 └───────────┘    └──────────────────────┘
      ↓                      ↓ (r/o connection)
      ┌──────────────────────┐
      │      disk image      │
      └──────────────────────┘

This scenario is safe (as long as you set the C<readonly> flag when adding the drive).  However the libguestfs appliance kernel (2) does not see all the changes made to the disk image, for two reasons:

=over 4

=item i.

The VM kernel (1) can cache data in memory, so it doesn't appear in the disk image.

=item ii.

The libguestfs appliance kernel (2) doesn't expect that the disk image is changing underneath it, so its own cache is not magically updated even when the VM kernel (1) does update the disk image.

=back

The only supported solution is to restart the entire libguestfs appliance whenever you want to look at changes in the disk image.  At the API level that corresponds to calling C<guestfs_shutdown> followed by C<guestfs_launch>, which is a heavyweight operation (see also L<guestfs-performance(3)>).
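In guestfish terms, the relaunch looks like the following sketch (F<disk.img> and the filesystem device are placeholders, and this assumes your version of guestfish exposes the C<shutdown> command):

```shell
guestfish --ro -a disk.img <<'EOF'
run
mount-ro /dev/sda1 /
# ... examine the old view of the disk image here ...
shutdown
launch
mount-ro /dev/sda1 /
# ... examine the updated view here ...
EOF
```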

There are some unsupported hacks you can try if relaunching the appliance is really too costly:

=over 4

=item *

Call C<guestfs_drop_caches (g, 3)>.  This causes all cached data held by the libguestfs appliance kernel (2) to be discarded, so it goes back to the disk image.

However this on its own is not sufficient, because qemu also caches some data.  You will also need to patch libguestfs to (re-)enable the C<cache=none> mode.  See: L<https://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-be-selected/>

=item *

Use a tool like L<virt-bmap|http://git.annexia.org/?p=virt-bmap.git> instead.

=item *

Run an agent inside the guest.

=back

Nothing helps if the guest is making more fundamental changes (eg.  deleting filesystems).  For those kinds of things you must relaunch the appliance.

(Note there is a third problem that you need to use consistent snapshots to really examine live disk images, but that’s a general problem with using libguestfs against any live disk image.)

=head1 SEE ALSO

L<guestfish(1)>, L<guestfs(3)>, L<http://libguestfs.org/>.

=head1 AUTHORS

Richard W.M. Jones (C<rjones at redhat dot com>)

=head1 COPYRIGHT

Copyright (C) 2012-2020 Red Hat Inc.