File: guestfs-performance.1

.\" Automatically generated by Podwrapper::Man 1.40.2 (Pod::Simple 3.35)
.\"
.\" Standard preamble:
.\" ========================================================================
.de Sp \" Vertical space (when we can't use .PP)
.if t .sp .5v
.if n .sp
..
.de Vb \" Begin verbatim text
.ft CW
.nf
.ne \\$1
..
.de Ve \" End verbatim text
.ft R
.fi
..
.\" Set up some character translations and predefined strings.  \*(-- will
.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left
.\" double quote, and \*(R" will give a right double quote.  \*(C+ will
.\" give a nicer C++.  Capital omega is used to do unbreakable dashes and
.\" therefore won't be available.  \*(C` and \*(C' expand to `' in nroff,
.\" nothing in troff, for use with C<>.
.tr \(*W-
.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p'
.ie n \{\
.    ds -- \(*W-
.    ds PI pi
.    if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch
.    if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\"  diablo 12 pitch
.    ds L" ""
.    ds R" ""
.    ds C` ""
.    ds C' ""
'br\}
.el\{\
.    ds -- \|\(em\|
.    ds PI \(*p
.    ds L" ``
.    ds R" ''
.    ds C`
.    ds C'
'br\}
.\"
.\" Escape single quotes in literal strings from groff's Unicode transform.
.ie \n(.g .ds Aq \(aq
.el       .ds Aq '
.\"
.\" If the F register is >0, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
.\" entries marked with X<> in POD.  Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
.\"
.\" Avoid warning from groff about undefined register 'F'.
.de IX
..
.nr rF 0
.if \n(.g .if rF .nr rF 1
.if (\n(rF:(\n(.g==0)) \{\
.    if \nF \{\
.        de IX
.        tm Index:\\$1\t\\n%\t"\\$2"
..
.        if !\nF==2 \{\
.            nr % 0
.            nr F 2
.        \}
.    \}
.\}
.rr rF
.\" ========================================================================
.\"
.IX Title "guestfs-performance 1"
.TH guestfs-performance 1 "2019-02-07" "libguestfs-1.40.2" "Virtualization Support"
.\" For nroff, turn off justification.  Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
.nh
.SH "名前"
.IX Header "名前"
guestfs-performance \- engineering libguestfs for greatest performance
.SH "説明"
.IX Header "説明"
This page documents how to get the greatest performance out of libguestfs,
especially when you expect to use libguestfs to manipulate thousands of
virtual machines or disk images.
.PP
Three main areas are covered.  Libguestfs runs an appliance (a small Linux
distribution) inside qemu/KVM.  The first two areas are minimizing the time
taken to start this appliance, and reducing the number of times the
appliance has to be started.  The third area is shortening the time taken
to inspect VMs.
.SH "BASELINE MEASUREMENTS"
.IX Header "BASELINE MEASUREMENTS"
Before making changes to how you use libguestfs, take baseline measurements.
.SS "Baseline: Starting the appliance"
.IX Subsection "Baseline: Starting the appliance"
On an unloaded machine, time how long it takes to start up the appliance:
.PP
.Vb 1
\& time guestfish \-a /dev/null run
.Ve
.PP
Run this command several times in a row and discard the first few runs, so
that you are measuring a typical \*(L"hot cache\*(R" case.
.PP
\&\fISide note for developers:\fR If you are compiling libguestfs from source,
there is a program called \fIutils/boot\-benchmark/boot\-benchmark\fR which does
the same thing, but performs multiple runs and prints the mean and standard
deviation.  To run it, do:
.PP
.Vb 2
\& make
\& ./run utils/boot\-benchmark/boot\-benchmark
.Ve
.PP
There is a manual page \fIutils/boot\-benchmark/boot\-benchmark.1\fR.
.PP
\fIDescription\fR
.IX Subsection "Description"
.PP
The guestfish command above starts up the libguestfs appliance on a null
disk, and then immediately shuts it down.  The first time you run the
command, it will create an appliance and cache it (usually under
\&\fI/var/tmp/.guestfs\-*\fR).  Subsequent runs should reuse the cached appliance.
.PP
\fIExpected results\fR
.IX Subsection "Expected results"
.PP
You should expect times under 6 seconds.  If the times you see
on an unloaded machine are above this, then see the section
\&\*(L"\s-1TROUBLESHOOTING POOR PERFORMANCE\*(R"\s0 below.
.SS "Baseline: Performing inspection of a guest"
.IX Subsection "Baseline: Performing inspection of a guest"
For this test you will need an unloaded machine and at least one real guest
or disk image.  If you are planning to use libguestfs against only X guests
(eg. X = Windows), then using an X guest here would be most appropriate.  If
you are planning to run libguestfs against a mix of guests, then use a mix
of guests for testing here.
.PP
Time how long it takes to perform inspection and mount the disks of the
guest.  Use the first command if you will be using disk images, and the
second command if you will be using libvirt.
.PP
.Vb 1
\& time guestfish \-\-ro \-a disk.img \-i exit
\&
\& time guestfish \-\-ro \-d GuestName \-i exit
.Ve
.PP
Run the command several times in a row and discard the first few runs, so
that you are measuring a typical \*(L"hot cache\*(R" case.
.PP
\fIDescription\fR
.IX Subsection "Description"
.PP
This command starts up the libguestfs appliance on the named disk image or
libvirt guest, performs libguestfs inspection on it (see
\&\*(L"\s-1INSPECTION\*(R"\s0 in \fBguestfs\fR\|(3)), mounts the guest’s disks, then discards all these
results and shuts down.
.PP
The first time you run the command, it will create an appliance and cache it
(usually under \fI/var/tmp/.guestfs\-*\fR).  Subsequent runs should reuse the
cached appliance.
.PP
\fIExpected results\fR
.IX Subsection "Expected results"
.PP
You should expect times which are ≤ 5 seconds greater than measured in
the first baseline test above.  (For example, if the first baseline test ran
in 5 seconds, then this test should run in ≤ 10 seconds).
.SH "UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED"
.IX Header "UNDERSTANDING THE APPLIANCE AND WHEN IT IS BUILT/CACHED"
The first time you use libguestfs, it will build and cache an appliance.
This is usually in \fI/var/tmp/.guestfs\-*\fR, unless you have set \f(CW$TMPDIR\fR or
\&\f(CW$LIBGUESTFS_CACHEDIR\fR in which case it will be under that temporary
directory.
.PP
For more information about how the appliance is constructed, see
\&\*(L"\s-1SUPERMIN APPLIANCES\*(R"\s0 in \fBsupermin\fR\|(1).
.PP
Every time libguestfs runs it will check that no host files used by the
appliance have changed.  If any have, then the appliance is rebuilt.  This
usually happens when a package is installed or updated on the host
(eg. using programs like \f(CW\*(C`yum\*(C'\fR or \f(CW\*(C`apt\-get\*(C'\fR).  The reason for
reconstructing the appliance is security: the new program that has been
installed might contain a security fix, and so we want to include the fixed
program in the appliance automatically.
.PP
These are the performance implications:
.IP "\(bu" 4
The process of building (or rebuilding) the cached appliance is slow, and
you can avoid this happening by using a fixed appliance (see below).
.IP "\(bu" 4
If not using a fixed appliance, be aware that updating software on the host
will cause a one time rebuild of the appliance.
.IP "\(bu" 4
\&\fI/var/tmp\fR (or \f(CW$TMPDIR\fR, \f(CW$LIBGUESTFS_CACHEDIR\fR) should be on a fast
disk, and have plenty of space for the appliance.
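.PP
For example, to put the cache on a fast scratch disk, a minimal sketch (the
directory name here is illustrative):
.PP
.Vb 3
\& mkdir \-p /fastdisk/guestfs\-cache
\& export LIBGUESTFS_CACHEDIR=/fastdisk/guestfs\-cache
\& time guestfish \-a /dev/null run
.Ve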
.SH "固定アプライアンスの使用法"
.IX Header "固定アプライアンスの使用法"
To fully control when the appliance is built, you can build a fixed
appliance.  This appliance should be stored on a fast local disk.
.PP
To build the fixed appliance, run the following command:
.PP
.Vb 1
\& libguestfs\-make\-fixed\-appliance <directory>
.Ve
.PP
replacing \f(CW\*(C`<directory>\*(C'\fR with the name of a directory where the
appliance will be stored (normally you would name a subdirectory, for
example: \fI/usr/local/lib/guestfs/appliance\fR or \fI/dev/shm/appliance\fR).
.PP
Then set \f(CW$LIBGUESTFS_PATH\fR (and ensure this environment variable is set in
your libguestfs program), or modify your program so it calls
\&\f(CW\*(C`guestfs_set_path\*(C'\fR.  For example:
.PP
.Vb 1
\& export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance
.Ve
.PP
Now you can run libguestfs programs, virt tools, guestfish etc. as normal.
The programs will use your fixed appliance, and will never build, rebuild,
or cache their own appliance.
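.PP
To sanity-check that the fixed appliance is really being picked up, you can
run \fBlibguestfs\-test\-tool\fR\|(1) with the path set and look for the
appliance directory in its diagnostic output:
.PP
.Vb 2
\& export LIBGUESTFS_PATH=/usr/local/lib/guestfs/appliance
\& libguestfs\-test\-tool
.Ve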
.PP
(See \fBlibguestfs\-make\-fixed\-appliance\fR\|(1) for further details on this topic.)
.SS "Performance of the fixed appliance"
.IX Subsection "Performance of the fixed appliance"
In our testing we did not find that using a fixed appliance gave any
measurable performance benefit, even when the appliance was located in
memory (ie. on \fI/dev/shm\fR).  However there are two points to consider:
.IP "1." 4
Using a fixed appliance stops libguestfs from ever rebuilding the appliance,
meaning that libguestfs will have more predictable start-up times.
.IP "2." 4
The appliance is loaded on demand.  A simple test such as:
.Sp
.Vb 1
\& time guestfish \-a /dev/null run
.Ve
.Sp
does not load very much of the appliance.  A real libguestfs program using
complicated \s-1API\s0 calls would demand-load a lot more of the appliance.  Being
able to store the appliance in a specified location makes the performance
more predictable.
.SH "REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED"
.IX Header "REDUCING THE NUMBER OF TIMES THE APPLIANCE IS LAUNCHED"
By far the most effective, though not always the simplest, way to get good
performance is to ensure that the appliance is launched the minimum number
of times.  This will probably involve changing your libguestfs application.
.PP
Try to call \f(CW\*(C`guestfs_launch\*(C'\fR at most once per target virtual machine or
disk image.
.PP
Instead of using a separate instance of \fBguestfish\fR\|(1) to make a series of
changes to the same guest, use a single instance of guestfish and/or use the
guestfish \fI\-\-listen\fR option.
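.PP
For example, with \fI\-\-listen\fR a single appliance instance can serve a
whole series of commands (a sketch; the disk image name and mountpoint are
illustrative):
.PP
.Vb 6
\& eval "$(guestfish \-\-listen)"
\& guestfish \-\-remote \-\- add disk.img
\& guestfish \-\-remote \-\- run
\& guestfish \-\-remote \-\- mount /dev/sda1 /
\& guestfish \-\-remote \-\- touch /tmp/example
\& guestfish \-\-remote \-\- exit
.Ve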
.PP
Consider writing your program as a daemon which holds a guest open while
making a series of changes.  Or marshal all the operations you want to
perform before opening the guest.
.PP
You can also try adding disks from multiple guests to a single appliance.
Before trying this, note the following points:
.IP "1." 4
Adding multiple guests to one appliance is a security problem because it may
allow one guest to interfere with the disks of another guest.  Only do it if
you trust all the guests, or if you can group guests by trust.
.IP "2." 4
There is a hard limit to the number of disks you can add to a single
appliance.  Call \*(L"guestfs_max_disks\*(R" in \fBguestfs\fR\|(3) to get this limit.  For
further information see \*(L"\s-1LIMITS\*(R"\s0 in \fBguestfs\fR\|(3).
.IP "3." 4
Using libguestfs this way is complicated.  Disks can have unexpected
interactions: for example, if two guests use the same \s-1UUID\s0 for a filesystem
(because they were cloned), or have volume groups with the same name (but
see \f(CW\*(C`guestfs_lvm_set_filter\*(C'\fR).
.PP
\&\fBvirt\-df\fR\|(1) adds multiple disks by default, so the source code for this
program would be a good place to start.
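.PP
As a minimal sketch (the disk image names are illustrative), two guests can
be added to a single appliance like this:
.PP
.Vb 4
\& guestfish \-\-ro \-a guest1.img \-a guest2.img <<\*(AqEOF\*(Aq
\& run
\& list\-filesystems
\& EOF
.Ve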
.SH "SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs"
.IX Header "SHORTENING THE TIME TAKEN FOR INSPECTION OF VMs"
The main advice is obvious: Do not perform inspection (which is expensive)
unless you need the results.
.PP
If you previously performed inspection on the guest, then it may be safe to
cache and reuse the results from last time.
.PP
Some disks don’t need to be inspected at all: for example, if you are
creating a disk image, or if the disk image is not a \s-1VM,\s0 or if the disk
image has a known layout.
.PP
Even when basic inspection (\f(CW\*(C`guestfs_inspect_os\*(C'\fR) is required, auxiliary
inspection operations may be avoided (see the example after this list):
.IP "\(bu" 4
Mounting disks is only necessary to get further filesystem information.
.IP "\(bu" 4
Listing applications (\f(CW\*(C`guestfs_inspect_list_applications\*(C'\fR) is an expensive
operation on Linux, but almost free on Windows.
.IP "\(bu" 4
Generating a guest icon (\f(CW\*(C`guestfs_inspect_get_icon\*(C'\fR) is cheap on Linux but
expensive on Windows.
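.PP
For example, performing basic inspection on its own \*(-- without mounting
disks, listing applications or fetching icons \*(-- looks like this in
guestfish:
.PP
.Vb 1
\& guestfish \-\-ro \-a disk.img run : inspect\-os
.Ve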
.SH "PARALLEL APPLIANCES"
.IX Header "PARALLEL APPLIANCES"
Libguestfs appliances are mostly I/O bound and you can launch multiple
appliances in parallel.  Provided there is enough free memory, there should
be little difference in launching 1 appliance vs N appliances in parallel.
.PP
On a 2\-core (4\-thread) laptop with 16 \s-1GB\s0 of \s-1RAM,\s0 using the (not especially
realistic) test Perl script below, the following plot shows excellent
scalability when running between 1 and 20 appliances in parallel:
.PP
.Vb 10
\&  12 ++\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-++
\&     +    +    +    +    +     +    +    +    +    +    *
\&     |                                                  |
\&     |                                               *  |
\&  11 ++                                                ++
\&     |                                                  |
\&     |                                                  |
\&     |                                          *  *    |
\&  10 ++                                                ++
\&     |                                        *         |
\&     |                                                  |
\& s   |                                                  |
\&   9 ++                                                ++
\& e   |                                                  |
\&     |                                     *            |
\& c   |                                                  |
\&   8 ++                                  *             ++
\& o   |                                *                 |
\&     |                                                  |
\& n 7 ++                                                ++
\&     |                              *                   |
\& d   |                           *                      |
\&     |                                                  |
\& s 6 ++                                                ++
\&     |                      *  *                        |
\&     |                   *                              |
\&     |                                                  |
\&   5 ++                                                ++
\&     |                                                  |
\&     |                 *                                |
\&     |            * *                                   |
\&   4 ++                                                ++
\&     |                                                  |
\&     |                                                  |
\&     +    *  * *    +    +     +    +    +    +    +    +
\&   3 ++\-*\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-\-+\-\-\-++
\&     0    2    4    6    8     10   12   14   16   18   20
\&               number of parallel appliances
.Ve
.PP
It is possible to run many more than 20 appliances in parallel, but if you
are using the libvirt backend then you should be aware that out of the box
libvirt limits the number of client connections to 20.
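.PP
If you hit that limit, it can usually be raised in the libvirt daemon
configuration.  This is only a sketch \*(-- the exact file and the default
value depend on your distribution and libvirt version:
.PP
.Vb 2
\& # /etc/libvirt/libvirtd.conf
\& max_clients = 50
.Ve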
.PP
The simple Perl script below was used to collect the data for the plot
above, but there is much more information on this subject, including more
advanced test scripts and graphs, available in the following blog postings:
.PP
http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-1/
http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-2/
http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-3/
http://rwmj.wordpress.com/2013/02/25/multiple\-libguestfs\-appliances\-in\-parallel\-part\-4/
.PP
.Vb 1
\& #!/usr/bin/env perl
\& 
\& use strict;
\& use threads;
\& use warnings;
\& use Sys::Guestfs;
\& use Time::HiRes qw(time);
\& 
\& sub test {
\&     my $g = Sys::Guestfs\->new;
\&     $g\->add_drive_ro ("/dev/null");
\&     $g\->launch ();
\&     
\&     # You could add some work for libguestfs to do here.
\&     
\&     $g\->close ();
\& }
\& 
\& # Get everything into cache.
\& test (); test (); test ();
\& 
\& for my $nr_threads (1..20) {
\&     my $start_t = time ();
\&     my @threads;
\&     foreach (1..$nr_threads) {
\&         push @threads, threads\->create (\e&test)
\&     }
\&     foreach (@threads) {
\&         $_\->join ();
\&         if (my $err = $_\->error ()) {
\&             die "launch failed with $nr_threads threads: $err"
\&         }
\&     }
\&     my $end_t = time ();
\&     printf ("%d %.2f\en", $nr_threads, $end_t \- $start_t);
\& }
.Ve
.SH "USING USER-MODE LINUX"
.IX Header "USING USER-MODE LINUX"
Since libguestfs 1.24, it has been possible to use the User-Mode Linux (\s-1UML\s0)
backend instead of \s-1KVM\s0 (see \*(L"USER-MODE \s-1LINUX BACKEND\*(R"\s0 in \fBguestfs\fR\|(3)).  This
section makes some general remarks about this backend, but it is \fBhighly
advisable\fR to measure your own workload under \s-1UML\s0 rather than trusting
comments or intuition.
.IP "\(bu" 4
\&\s-1UML\s0 usually performs the same or slightly slower than \s-1KVM,\s0 on baremetal.
.IP "\(bu" 4
However \s-1UML\s0 often performs the same under virtualization as it does on
baremetal, whereas \s-1KVM\s0 can run much slower under virtualization (since
hardware virt acceleration is not available).
.IP "\(bu" 4
Upload and download is as much as 10 times slower on \s-1UML\s0 than \s-1KVM.\s0
Libguestfs sends this data over the \s-1UML\s0 emulated serial port, which is far
less efficient than KVM’s virtio-serial.
.IP "\(bu" 4
\&\s-1UML\s0 lacks some features (eg. qcow2 support), so it may not be applicable at
all.
.PP
For some actual figures, see:
http://rwmj.wordpress.com/2013/08/14/performance\-of\-user\-mode\-linux\-as\-a\-libguestfs\-backend/#content
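.PP
To measure your own workload under \s-1UML,\s0 select the backend with an
environment variable and repeat the baseline tests:
.PP
.Vb 2
\& export LIBGUESTFS_BACKEND=uml
\& time guestfish \-a /dev/null run
.Ve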
.SH "性能劣化のトラブルシューティング"
.IX Header "性能劣化のトラブルシューティング"
.SS "Ensure hardware virtualization is available"
.IX Subsection "Ensure hardware virtualization is available"
Use \fI/proc/cpuinfo\fR to ensure that hardware virtualization is available.
Note that you may need to enable it in your \s-1BIOS.\s0
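.PP
For example, on x86 hosts the relevant \s-1CPU\s0 flags are \f(CW\*(C`vmx\*(C'\fR
(Intel) and \f(CW\*(C`svm\*(C'\fR (\s-1AMD\s0); a non-zero count here means hardware
virtualization is available:
.PP
.Vb 1
\& egrep \-c \*(Aq(vmx|svm)\*(Aq /proc/cpuinfo
.Ve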
.PP
Hardware virtualization is generally not available inside a virtual machine,
so libguestfs running inside another virtual machine will be much slower.  In
our experience nested virtualization does not work well, so it is rarely an
adequate substitute for running libguestfs on baremetal.
.SS "Ensure \s-1KVM\s0 is available"
.IX Subsection "Ensure KVM is available"
Ensure that \s-1KVM\s0 is enabled and available to the user that will run
libguestfs.  It should be safe to set 0666 permissions on \fI/dev/kvm\fR and
most distributions now do this.
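.PP
A quick check that the device exists and that the current user can access it:
.PP
.Vb 2
\& ls \-l /dev/kvm
\& test \-w /dev/kvm && echo ok
.Ve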
.SS "Processors to avoid"
.IX Subsection "Processors to avoid"
Avoid processors that don’t have hardware virtualization, and some
processors which are simply very slow (\s-1AMD\s0 Geode being a great example).
.SS "Xen dom0"
.IX Subsection "Xen dom0"
In Xen, dom0 is a virtual machine, and so hardware virtualization is not
available.
.SS "Use libguestfs ≥ 1.34 and qemu ≥ 2.7"
.IX Subsection "Use libguestfs ≥ 1.34 and qemu ≥ 2.7"
During the libguestfs 1.33 development cycle, we spent a large amount of
time concentrating on boot performance, and added some patches to
libguestfs, qemu and Linux which in some cases can reduce boot times to well
under 1 second.  You may therefore get much better performance by moving to
the versions of libguestfs or qemu mentioned in the heading.
.SH "DETAILED ANALYSIS"
.IX Header "DETAILED ANALYSIS"
.SS "Boot analysis"
.IX Subsection "Boot analysis"
In the libguestfs source directory, in \fIutils/boot\-analysis\fR is a program
called \f(CW\*(C`boot\-analysis\*(C'\fR.  This program is able to produce a very detailed
breakdown of the boot steps (eg. qemu, \s-1BIOS,\s0 kernel, libguestfs init
script), and can measure how long it takes to perform each step.
.PP
To run this program, do:
.PP
.Vb 2
\& make
\& ./run utils/boot\-analysis/boot\-analysis
.Ve
.PP
There is a manual page \fIutils/boot\-analysis/boot\-analysis.1\fR.
.SS "Detailed timings using ts"
.IX Subsection "Detailed timings using ts"
Use the \fBts\fR\|(1) command (from moreutils) to show detailed timings:
.PP
.Vb 10
\& $ guestfish \-a /dev/null run \-v |& ts \-i \*(Aq%.s\*(Aq
\& 0.000022 libguestfs: launch: program=guestfish
\& 0.000134 libguestfs: launch: version=1.29.31fedora=23,release=2.fc23,libvirt
\& 0.000044 libguestfs: launch: backend registered: unix
\& 0.000035 libguestfs: launch: backend registered: uml
\& 0.000035 libguestfs: launch: backend registered: libvirt
\& 0.000032 libguestfs: launch: backend registered: direct
\& 0.000030 libguestfs: launch: backend=libvirt
\& 0.000031 libguestfs: launch: tmpdir=/tmp/libguestfsw18rBQ
\& 0.000029 libguestfs: launch: umask=0002
\& 0.000031 libguestfs: launch: euid=1000
\& 0.000030 libguestfs: libvirt version = 1002012 (1.2.12)
\& [etc]
.Ve
.PP
The timestamps are in seconds (the increment since the previous line).
.SS "Detailed timings using SystemTap"
.IX Subsection "Detailed timings using SystemTap"
You can use SystemTap (\fBstap\fR\|(1)) to get detailed timings from a libguestfs program.
.PP
Save the following script as \fItime.stap\fR:
.PP
.Vb 1
\& global last;
\& 
\& function display_time () {
\&       now = gettimeofday_us ();
\&       delta = 0;
\&       if (last > 0)
\&             delta = now \- last;
\&       last = now;
\& 
\&       printf ("%d (+%d):", now, delta);
\& }
\& 
\& probe begin {
\&       last = 0;
\&       printf ("ready\en");
\& }
\& 
\& /* Display all calls to static markers. */
\& probe process("/usr/lib*/libguestfs.so.0")
\&           .provider("guestfs").mark("*") ? {
\&       display_time();
\&       printf ("\et%s %s\en", $$name, $$parms);
\& }
\& 
\& /* すべての guestfs_* 関数の呼び出しを一覧表示します。 */
\& probe process("/usr/lib*/libguestfs.so.0")
\&           .function("guestfs_[a\-z]*") ? {
\&       display_time();
\&       printf ("\et%s %s\en", probefunc(), $$parms);
\& }
.Ve
.PP
Run it as root in one window:
.PP
.Vb 2
\& # stap time.stap
\& ready
.Ve
.PP
It prints \*(L"ready\*(R" when SystemTap has loaded the program.  Run your
libguestfs program, guestfish or a virt tool in another window.  For
example:
.PP
.Vb 1
\& $ guestfish \-a /dev/null run
.Ve
.PP
In the stap window you will see a large amount of output, with the time
taken for each step shown (microseconds in parentheses).  For example:
.PP
.Vb 9
\& xxxx (+0):     guestfs_create 
\& xxxx (+29):    guestfs_set_pgroup g=0x17a9de0 pgroup=0x1
\& xxxx (+9):     guestfs_add_drive_opts_argv g=0x17a9de0 [...]
\& xxxx (+8):     guestfs_int_safe_strdup g=0x17a9de0 str=0x7f8a153bed5d
\& xxxx (+19):    guestfs_int_safe_malloc g=0x17a9de0 nbytes=0x38
\& xxxx (+5):     guestfs_int_safe_strdup g=0x17a9de0 str=0x17a9f60
\& xxxx (+10):    guestfs_launch g=0x17a9de0
\& xxxx (+4):     launch_start 
\& [etc]
.Ve
.PP
You will need to consult, and even modify, the source to libguestfs to fully
understand the output.
.SS "Detailed debugging using gdb"
.IX Subsection "Detailed debugging using gdb"
You can attach to the appliance \s-1BIOS\s0 or kernel using gdb.  If you know what
you are doing, this can be a useful way to diagnose boot regressions.
.PP
Firstly, you have to change qemu so it runs with the \f(CW\*(C`\-S\*(C'\fR and \f(CW\*(C`\-s\*(C'\fR
options.  These options cause qemu to pause at boot and allow you to attach
a debugger.  Read \fBqemu\fR\|(1) for further information.  Libguestfs invokes
qemu several times (to scan the help output and so on) and you only want the
final invocation of qemu to use these options, so use a qemu wrapper script
like this:
.PP
.Vb 1
\& #!/bin/bash \-
\& 
\& # 実際の QEMU バイナリーを指し示すようこれを設定してください。
\& qemu=/usr/bin/qemu\-kvm
\& 
\& if [ "$1" != "\-global" ]; then
\&     # ヘルプの出力などを解析します。
\&     exec $qemu "$@"
\& else 
\&     # Really running qemu.
\&     exec $qemu \-S \-s "$@"
\& fi
.Ve
.PP
Now run guestfish or another libguestfs tool with the qemu wrapper (see
\&\*(L"\s-1QEMU WRAPPERS\*(R"\s0 in \fBguestfs\fR\|(3) to understand what this is doing):
.PP
.Vb 1
\& LIBGUESTFS_HV=/path/to/qemu\-wrapper guestfish \-a /dev/null \-v run
.Ve
.PP
qemu will now pause just after starting.  In another window, attach to it using gdb:
.PP
.Vb 7
\& $ gdb
\& (gdb) set architecture i8086
\& The target architecture is assumed to be i8086
\& (gdb) target remote :1234
\& Remote debugging using :1234
\& 0x0000fff0 in ?? ()
\& (gdb) cont
.Ve
.PP
At this point you can use standard gdb techniques, eg. hitting \f(CW\*(C`^C\*(C'\fR to
interrupt the boot and \f(CW\*(C`bt\*(C'\fR to get a stack trace, setting breakpoints, etc.
Note that when you are past the \s-1BIOS\s0 and into the Linux kernel, you'll want
to change the architecture back to 32 or 64 bit.
.SH "PERFORMANCE REGRESSIONS IN OTHER PROGRAMS"
.IX Header "PERFORMANCE REGRESSIONS IN OTHER PROGRAMS"
Sometimes performance regressions happen in other programs (eg. qemu, the
kernel) that cause problems for libguestfs.
.PP
In the libguestfs source, \fIutils/boot\-benchmark/boot\-benchmark\-range.pl\fR is
a script which can be used to benchmark libguestfs across a range of git
commits in another project to find out if any commit is causing a slowdown
(or speedup).
.PP
To find out how to use this script, consult the manual:
.PP
.Vb 1
\& ./utils/boot\-benchmark/boot\-benchmark\-range.pl \-\-man
.Ve
.SH "関連項目"
.IX Header "関連項目"
\&\fBsupermin\fR\|(1), \fBguestfish\fR\|(1), \fBguestfs\fR\|(3), \fBguestfs\-examples\fR\|(3),
\&\fBguestfs\-internals\fR\|(1), \fBlibguestfs\-make\-fixed\-appliance\fR\|(1), \fBstap\fR\|(1),
\&\fBqemu\fR\|(1), \fBgdb\fR\|(1), http://libguestfs.org/.
.SH "著者"
.IX Header "著者"
Richard W.M. Jones (\f(CW\*(C`rjones at redhat dot com\*(C'\fR)
.SH "COPYRIGHT"
.IX Header "COPYRIGHT"
Copyright (C) 2012\-2019 Red Hat Inc.
.SH "LICENSE"
.IX Header "LICENSE"
.SH "BUGS"
.IX Header "BUGS"
To get a list of bugs against libguestfs, use this link:
https://bugzilla.redhat.com/buglist.cgi?component=libguestfs&product=Virtualization+Tools
.PP
To report a new bug against libguestfs, use this link:
https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools
.PP
When reporting a bug, please supply:
.IP "\(bu" 4
The version of libguestfs.
.IP "\(bu" 4
Where you got libguestfs (eg. which Linux distro, compiled from source, etc.)
.IP "\(bu" 4
Describe the bug accurately and give a way to reproduce it.
.IP "\(bu" 4
Run \fBlibguestfs\-test\-tool\fR\|(1) and paste the \fBcomplete, unedited\fR
output into the bug report.