File: virtiofs.rst

===========================
Sharing files with Virtiofs
===========================

.. contents::

Virtiofs
========

Virtiofs is a shared file system that lets virtual machines access
a directory tree on the host. Unlike existing approaches, it
is designed to offer local file system semantics and performance.

See https://virtio-fs.gitlab.io/

*Note:* Older versions of ``virtiofsd`` (prior to ``1.11``) do not support
migration, so operations such as migration, save/managed-save, or snapshots with
memory may not be supported if a VM has a virtiofs filesystem connected.

Additionally, snapshot operations managed by libvirt do not capture the state
of the files shared via ``virtiofs``, and thus reverting to an earlier state is
not recommended.
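
To check which version of ``virtiofsd`` is installed, you can ask the daemon
binary directly. The path below is the usual location on many distributions,
but may differ on your system:

::

  $ /usr/libexec/virtiofsd --version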

Sharing a host directory with a guest
=====================================

#. Add the following domain XML elements to share the host directory ``/path``
   with the guest

   ::

     <domain>
       ...
       <memoryBacking>
         <source type='memfd'/>
         <access mode='shared'/>
       </memoryBacking>
       ...
       <devices>
         ...
         <filesystem type='mount' accessmode='passthrough'>
           <driver type='virtiofs' queue='1024'/>
           <source dir='/path'/>
           <target dir='mount_tag'/>
         </filesystem>
         ...
       </devices>
     </domain>

   Don't forget the ``<memoryBacking>`` elements. They are necessary for the
   vhost-user connection with the ``virtiofsd`` daemon.

   Note that despite its name, the ``target dir`` is an arbitrary string called
   a mount tag that is used inside the guest to identify the shared file system
   to be mounted. It does not have to correspond to the desired mount point in the
   guest.

#. Boot the guest and mount the filesystem

   ::

      guest# mount -t virtiofs mount_tag /mnt/mount/path

   Note: this requires virtiofs support in the guest kernel (Linux v5.4 or later).
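
   To mount the share automatically at boot, an entry can be added to the
   guest's ``/etc/fstab``, for example (reusing the mount tag and mount point
   from the command above):

   ::

      mount_tag /mnt/mount/path virtiofs defaults 0 0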

Running unprivileged
====================

In unprivileged mode (``qemu:///session``), mapping user/group IDs is available
(since libvirt 10.0.0). The root user (ID 0) in the guest will be mapped
to the current user on the host.

The rest of the IDs will be mapped to the subordinate user and group IDs
specified in ``/etc/subuid`` and ``/etc/subgid``:

::

  $ cat /etc/subuid
  jtomko:100000:65536
  $ cat /etc/subgid
  jtomko:100000:65536

To manually tweak the user ID mapping, the ``idmap`` element can be used.
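
For example, the following sketch maps guest IDs 0-65535 to the subordinate
ID range shown above; the exact ``start``, ``target`` and ``count`` values are
illustrative and should match your ``/etc/subuid`` and ``/etc/subgid`` entries:

::

  <filesystem type='mount' accessmode='passthrough'>
    ...
    <idmap>
      <uid start='0' target='100000' count='65536'/>
      <gid start='0' target='100000' count='65536'/>
    </idmap>
  </filesystem>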

Optional parameters
===================

More optional elements can be specified

::

  <filesystem type='mount' accessmode='passthrough'>
    <driver type='virtiofs' queue='1024'/>
    ...
    <binary path='/usr/libexec/virtiofsd' xattr='on'>
      <cache mode='always'/>
      <lock posix='on' flock='on'/>
    </binary>
  </filesystem>
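
Depending on the libvirt version, further tunables may be available under the
``<binary>`` element, for example a sandbox mode and the size of the daemon's
thread pool. The sketch below assumes a libvirt release that supports them;
consult the domain XML documentation of your release for the exact set:

::

  <binary path='/usr/libexec/virtiofsd' xattr='on'>
    <cache mode='always'/>
    <sandbox mode='chroot'/>
    <lock posix='on' flock='on'/>
    <thread_pool size='16'/>
  </binary>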

Externally-launched virtiofsd
=============================

Libvirtd can also connect the ``vhost-user-fs`` device to a ``virtiofsd``
daemon launched outside of libvirtd. In that case, the socket permissions,
the mount tag and all the virtiofsd options are out of libvirtd's
control and need to be set by the application running virtiofsd.

::

  <filesystem type='mount'>
    <driver type='virtiofs' queue='1024'/>
    <source socket='/var/virtiofsd.sock'/>
    <target dir='tag'/>
  </filesystem>
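
As an illustration, a manual invocation of the Rust ``virtiofsd`` matching the
XML above could look like the following; the binary path is an assumption and
option names can differ between virtiofsd implementations and versions:

::

  # /usr/libexec/virtiofsd --socket-path=/var/virtiofsd.sock --shared-dir=/path

The socket has to exist and be accessible to QEMU before the guest is started.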

Other options for vhost-user memory setup
=========================================

The following information is necessary if you are using older versions of QEMU
and libvirt or have special memory backend requirements.

Almost all virtio devices (all that use virtqueues) require access to
at least certain portions of guest RAM (possibly policed by DMA). In
the case of virtiofsd, much like with other vhost-user virtio devices
(see https://www.qemu.org/docs/master/interop/vhost-user.html) that are
realized by a userspace process, this in practice means that QEMU needs
to allocate the backing memory for all the guest RAM as shared memory.
As of QEMU 4.2, it is possible to explicitly specify a memory backend
when specifying the NUMA topology. This method is, however, only viable
for machine types that support NUMA. As of QEMU 5.0.0 and libvirt 6.9.0,
it is possible to specify the memory backend without NUMA (using the
so-called memobject interface).

#. Set up the memory backend

   * Use memfd memory

     No host setup is required when using the Linux memfd memory backend.

   * Use file-backed memory

     Configure the directory where the files backing the memory will be stored
     with the ``memory_backing_dir`` option in ``/etc/libvirt/qemu.conf``

     ::

       # This directory is used for memoryBacking source if configured as file.
       # NOTE: big files will be stored here
       memory_backing_dir = "/dev/shm/"

   * Use hugepage-backed memory

     Make sure there are enough huge pages allocated for the requested guest memory.
     For example, for one guest with 2 GiB of RAM backed by 2 MiB hugepages:

     ::

       # virsh allocpages 2M 1024
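
     (2 GiB of guest RAM divided by the 2 MiB page size gives the 1024 pages
     requested above.) The resulting allocation can then be verified, for
     example, with:

     ::

       # virsh freepages --all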

#. Specify the NUMA topology (this step is only required for the NUMA case)
   in the domain XML of the guest.

   For the simplest one-node topology for a guest with 2 GiB of RAM and 8 vCPUs:

   ::

      <domain>
        ...
        <cpu ...>
          <numa>
            <cell id='0' cpus='0-7' memory='2' unit='GiB' memAccess='shared'/>
          </numa>
        </cpu>
        ...
      </domain>

   Note that a ``<cpu>`` element might already be specified and only one is
   allowed, so the ``<numa>`` element should be added to the existing one.

#. Specify the memory backend

   One of the following:

   * memfd memory

     ::

        <domain>
          ...
          <memoryBacking>
            <source type='memfd'/>
            <access mode='shared'/>
          </memoryBacking>
          ...
        </domain>

   * File-backed memory

     ::

        <domain>
          ...
          <memoryBacking>
            <access mode='shared'/>
          </memoryBacking>
          ...
        </domain>

     This will create a file in the directory specified in ``qemu.conf``.

   * Hugepage-backed memory

     ::

        <domain>
          ...
          <memoryBacking>
            <hugepages>
              <page size='2' unit='M'/>
            </hugepages>
            <access mode='shared'/>
          </memoryBacking>
          ...
        </domain>
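
Once the memory backend and the ``<filesystem>`` device are both defined, the
resulting configuration can be double-checked before starting the guest
(``guestname`` below is a placeholder for your domain name):

::

  # virsh dumpxml guestname | grep -A 4 '<filesystem'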