===========
VM handling
===========
Configuration
=============
Before you can start installing or managing machines, you need to create
``~/.config/lcitool/config.yml``, ideally by copying the
``config.yml`` template, and set at least the options marked as
"(mandatory)" depending on the flavor (``test``, ``gitlab``) you wish to
use with your machines.
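For illustration only, a ``test``-flavor configuration could look roughly like
the snippet below; the exact key names and nesting are defined by the shipped
``config.yml`` template, so double-check against its comments rather than
copying this sketch verbatim.
::
# ~/.config/lcitool/config.yml -- illustrative sketch, verify the key
# names against the shipped config.yml template before use
install:
  flavor: test              # or "gitlab"
  root_password: CHANGE_ME  # password lcitool will set for the guest's root account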
If you manage VMs installed locally with libvirt, you can use the
`libvirt NSS plugin <https://libvirt.org/nss.html>`_ for convenience: after
installing and enabling the plugin on the host, you can refer to your machines
by their name in the Ansible inventory.
As for the plugin settings, you'll mainly be interested in the ``libvirt_guest``
variant of the plugin.
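Enabling the plugin typically boils down to adding the ``libvirt_guest`` module
to the ``hosts`` line of ``/etc/nsswitch.conf`` on the virtualization host,
roughly as follows (see the plugin documentation linked above for details)
::
# /etc/nsswitch.conf on the virtualization host
hosts: files libvirt_guest dns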
Ansible inventory
-----------------
In addition to creating a configuration file as described in `Configuration`_,
you may also need to provide an Ansible inventory depending on whether
you want to manage external hosts (e.g. machines hosted in a public cloud) with
lcitool. The inventory will then have to be placed under the
``~/.config/lcitool`` directory and must be named ``inventory``. It can either
be a single file or a directory containing multiple inventory sources just like
Ansible would allow. You can use any format Ansible recognizes for inventories
- it can even be a dynamic one, i.e. a script conforming to Ansible's
requirements.
There is one requirement, however, that any inventory source **must** comply
with to be usable with lcitool: every single host must be a member of a group
corresponding to one of our supported target OS platforms
(see `More VM examples`_ for how to obtain the list of targets).
Please avoid giving hosts and inventory groups identical names, otherwise
Ansible will issue a warning about it which may in turn result in unexpected
behaviour.
Managed hosts
~~~~~~~~~~~~~
Since hosts may come from a public cloud environment, we don't execute all the
Ansible tasks which set up the VM environment by default because some of the
tasks could render such hosts unusable. However, for hosts that are going to
be installed as local VMs, we do recommend adding ``fully_managed=True`` as
an inventory variable because it is safe to run all the Ansible tasks in this
case.
An example of a simple INI inventory:
::
[centos-stream-8]
centos-stream-8-1
centos-stream-8-2
some-other-centos-stream-8
[fedora-35]
fedora-test-1
fedora-test-2 fully_managed=True
[debian-10]
192.168.1.30
Installing local VMs
====================
In order to install a local VM with lcitool, run the following:
::
lcitool install $host --target $target_os
where ``$host`` is the name for the VM and ``$target_os`` is one of the
supported target OS platforms (see `More VM examples`_ below).
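For example, a concrete invocation could look like this (both the VM name and
the chosen target are purely illustrative)
::
$ lcitool install my-new-vm --target fedora-35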
Another way to install guests with lcitool is to add a managed host entry to
the Ansible inventory, in which case lcitool's invocation would look like
this:
::
lcitool install $host
Refer to the `Ansible inventory`_ and `Managed hosts`_ sections for how to use
an inventory with lcitool. Note that not all guests can be installed in the
ways described above, e.g. FreeBSD or Alpine guests. See
`Installing FreeBSD VMs`_ for how to add such a host in that case.
Installing FreeBSD VMs
----------------------
Installation of FreeBSD guests must be performed manually; alternatively,
the official qcow2 images can be used to quickly bring up such guests.
::
$ MAJOR=12
$ MINOR=1
$ VER=$MAJOR.$MINOR-RELEASE
$ sudo wget -O /var/lib/libvirt/images/libvirt-freebsd-$MAJOR.qcow2.xz \
https://download.freebsd.org/ftp/releases/VM-IMAGES/$VER/amd64/Latest/FreeBSD-$VER-amd64.qcow2.xz
$ sudo unxz /var/lib/libvirt/images/libvirt-freebsd-$MAJOR.qcow2.xz
$ virt-install \
--import \
--name libvirt-freebsd-$MAJOR \
--vcpus 2 \
--graphics vnc \
--noautoconsole \
--console pty \
--sound none \
--rng device=/dev/urandom,model=virtio \
--memory 2048 \
--os-variant freebsd$MAJOR.0 \
--disk /var/lib/libvirt/images/libvirt-freebsd-$MAJOR.qcow2
The default qcow2 images are too small to be usable. To enlarge them, run
::
$ virsh blockresize libvirt-freebsd-$MAJOR \
/var/lib/libvirt/images/libvirt-freebsd-$MAJOR.qcow2 15G
Inside the guest, FreeBSD should then detect the enlarged volume and
automatically increase the size of the ``vtbd0`` partition. All that is
required is to accept the changes and then resize the filesystem.
::
# gpart commit vtbd0
# service growfs onestart
Some manual tweaking will be needed (a possible command sequence is sketched
after this list), in particular:
* ``/etc/ssh/sshd_config`` must contain the ``PermitRootLogin yes`` directive;
* ``/etc/rc.conf`` must contain the ``sshd_enable="YES"`` setting;
* the root password must be manually set to "root" (without quotes).
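A possible way of applying these tweaks from the guest's root shell is sketched
below; treat it as an illustration rather than an exact recipe.
::
# echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
# sysrc sshd_enable="YES"
# service sshd restart
# passwd root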
Once these steps have been performed, FreeBSD guests can be managed just
like all other guests.
Updating VMs with a given project's dependencies
================================================
So you've installed your VM with lcitool. What's next? The VM now needs to go
through all the post-installation configuration steps required to make the
newly-added machine ready for building a project. This includes resetting the
root password to the one you set in ``$HOME/.config/lcitool/config.yml``,
uploading your SSH key, updating the system, etc. In addition, the machine
needs to be set up (in other words updated) with a given project's package
dependencies so that the respective project can later be built on it. To get
the list of supported projects, run
::
$ lcitool projects
You can then run the update on the VM with
::
# the syntax is 'lcitool update $guest $project'
$ lcitool update my_vm_name libvirt
Multiple hosts (external bare metal hosts are supported as well) can be
updated with multiple projects at the same time
::
$ lcitool update my_vm_name,my_bare_metal_host libvirt,qemu
For maintenance purposes, it is also recommended to run the same command
periodically to ensure the machine configuration is sane and all installed
packages are up to date. This is where the special keyword **all** might come
in handy, as you can go as far as putting the following in your crontab
::
0 0 * * * lcitool update all all
Cloud-init
==========
If you intend to use the generated images as templates to be instantiated in
a cloud environment like OpenStack, then you want to set the
``install.cloud_init`` key to ``true`` in ``~/.config/lcitool/config.yml``. This will
install the necessary cloud-init packages and enable the corresponding services
at boot time. However, there are still a few manual steps involved to create a
generic template. You'll need to install the ``libguestfs-tools`` package for that.
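For reference, the relevant excerpt of the configuration file would look
roughly like this (assuming the same nesting as the other ``install`` options;
the shipped template is authoritative)
::
# ~/.config/lcitool/config.yml -- illustrative excerpt
install:
  cloud_init: true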
Once ``libguestfs-tools`` is installed, shut the machine down gracefully. First,
we're going to "unconfigure" the machine so that clones can be made out of it.
::
$ virt-sysprep -a libvirt-<machine_distro>.qcow2
Then, we sparsify and compress the image in order to shrink the disk to the
smallest size possible
::
$ virt-sparsify --compress --format qcow2 <indisk> <outdisk>
Now you're ready to upload the image to your cloud provider, e.g. OpenStack
::
$ glance image-create --name <image_name> --disk-format qcow2 --file <outdisk>
FreeBSD is tricky with regard to cloud-init, so have a look at the
`Cloud-init with FreeBSD`_ section instead.
Cloud-init with FreeBSD
-----------------------
FreeBSD doesn't fully support cloud-init, so in order to make use of it, there
are a bunch of manual steps involved. First, you want to install the base OS
manually rather than use the official qcow2 images, in contrast to the
suggestion above, because cloud-init requires a specific disk partitioning scheme.
The best you can do is to look at the official
`OpenStack guide <https://docs.openstack.org/image-guide/freebsd-image.html>`_
and follow only the installation part of it (along with the ``virt-install``
steps outlined above).
Now that you have an OS installed and booted, set the ``install.cloud_init``
key to ``true`` in ``~/.config/lcitool/config.yml`` and update the guest with
the desired project.
The sysprep phase is completely manual, as ``virt-sysprep`` cannot work with
FreeBSD's UFS filesystem (because the Linux kernel can only mount it read-only).
Compressing and uploading the image works the same way as described in the
earlier sections
::
$ virt-sparsify --compress --format qcow2 <indisk> <outdisk>
$ glance image-create --name <image_name> --disk-format qcow2 --file <outdisk>
More VM examples
================
This section provides more usage examples once you have a VM installed and
updated.
To get a list of known target platforms run:
::
$ lcitool targets
If you're interested in the list of hosts currently provided through the
inventory sources, run:
::
$ lcitool hosts
To see the list of supported projects that can be built from source with
lcitool, run:
::
$ lcitool projects
You can run operations involving multiple guests and projects in a single
execution as well, since both the host and project specifications support shell
globbing. Using the above inventory as an example, running
::
$ lcitool update '*fedora*' '*osinfo*'
will update all Fedora guests and get them ready to build libosinfo and related
projects. Once hosts have been prepared following the steps above, you can use
``lcitool`` to perform builds as well: for example, running
::
$ lcitool build '*debian*' libvirt-python
will fetch libvirt-python's ``master`` branch from the upstream repository
and build it on all Debian hosts.
You can add more git repositories by tweaking the ``git_urls`` dictionary
defined in ``playbooks/build/jobs/defaults.yml`` and then build arbitrary
branches out of those with
::
$ lcitool build -g github/cool-feature all libvirt
Note that, unlike other lcitool commands which take projects as input, the
``build`` command doesn't accept the project list specified either as ``all``
or with a wildcard.
Useful tips
===========
If you are a developer trying to reproduce a bug on some OS you don't
have easy access to, you can use these tools to create a suitable test
environment.
The ``test`` flavor is used by default, so you don't need to do anything
special in order to use it: just follow the steps outlined above. Once
a guest has been prepared, you'll be able to log in as ``test`` either
via SSH (your public key will have been authorized) or on the serial
console (password: ``test``).
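For instance, reaching the serial console of a local guest could look like
this (the guest name is illustrative)
::
$ virsh console libvirt-fedora-35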
Once logged in, you'll be able to perform administrative tasks using
``sudo``. Regular root access will still be available, either through
SSH or on the serial console.
Since guests created for this purpose are probably not going to be
long-lived or contain valuable information, you can configure your
SSH client to skip some of the usual verification steps and thus
prompt you less frequently; moreover, you can have the username
selected automatically for you to avoid having to type it in every
single time you want to connect. Just add
::
Host libvirt-*
User test
GSSAPIAuthentication no
StrictHostKeyChecking no
CheckHostIP no
UserKnownHostsFile /dev/null
to your ``~/.ssh/config`` file to achieve all of the above.
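With the above in place (and the libvirt NSS plugin mentioned in
`Configuration`_ enabled for local guests), connecting is as simple as running,
with an illustrative guest name
::
$ ssh libvirt-fedora-35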