.. _multi_backend:

====================================
Configure multiple-storage back ends
====================================

When you configure multiple-storage back ends, you can create several
back-end storage solutions that serve the same OpenStack Compute
configuration, and one ``cinder-volume`` service is launched for each
back-end storage or back-end storage pool.

In a multiple-storage back-end configuration, each back end has a name
(``volume_backend_name``). Several back ends can have the same name. In
that case, the scheduler decides which back end the volume is created
in.

The name of the back end is declared as an extra-specification of a
volume type (for example, ``volume_backend_name=LVM``). When a volume
is created, the scheduler chooses an appropriate back end to handle the
request, according to the volume type specified by the user.

Enable multiple-storage back ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable multiple-storage back ends, you must set the
``enabled_backends`` flag in the ``cinder.conf`` file.
This flag defines the names (separated by a comma) of the configuration
groups for the different back ends: one name is associated with one
configuration group for a back end (such as ``[lvmdriver-1]``).

.. note::

   The configuration group name is not related to the
   ``volume_backend_name``.

.. note::

   After setting the ``enabled_backends`` flag on an existing cinder
   service and restarting the Block Storage services, the original
   ``host`` service is replaced with a new host service. The new service
   appears with a name like ``host@backend``. Use:

   .. code-block:: console

      $ cinder-manage volume update_host --currenthost CURRENTHOST --newhost CURRENTHOST@BACKEND

   to convert current block devices to the new host name.

The options for a configuration group must be defined in the group
(or default options are used). All the standard Block Storage
configuration options (``volume_group``, ``volume_driver``, and so on)
can be used in a configuration group. Configuration values in
the ``[DEFAULT]`` configuration group are not used.

These examples show three back ends:

.. code-block:: ini

   enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3

   [lvmdriver-1]
   volume_group=cinder-volumes-1
   volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name=LVM

   [lvmdriver-2]
   volume_group=cinder-volumes-2
   volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name=LVM

   [lvmdriver-3]
   volume_group=cinder-volumes-3
   volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_backend_name=LVM_b

In this configuration, ``lvmdriver-1`` and ``lvmdriver-2`` have the same
``volume_backend_name``. If a volume creation request specifies the
``LVM`` back-end name, the scheduler uses the capacity filter scheduler
to choose the most suitable driver, which is either ``lvmdriver-1`` or
``lvmdriver-2``. The capacity filter scheduler is enabled by default.
The next section provides more information. In addition, this example
presents an ``lvmdriver-3`` back end.
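The pooling above can be sketched in Python. This is a minimal
illustration (not part of cinder) of how the configuration groups listed
in ``enabled_backends`` group together under a shared
``volume_backend_name``; the ``backends_by_name`` helper is hypothetical:

```python
from configparser import ConfigParser

# The example cinder.conf from above, with enabled_backends placed in
# [DEFAULT] (where it lives in a real cinder.conf).
CONF = """
[DEFAULT]
enabled_backends = lvmdriver-1,lvmdriver-2,lvmdriver-3

[lvmdriver-1]
volume_group = cinder-volumes-1
volume_backend_name = LVM

[lvmdriver-2]
volume_group = cinder-volumes-2
volume_backend_name = LVM

[lvmdriver-3]
volume_group = cinder-volumes-3
volume_backend_name = LVM_b
"""

def backends_by_name(text):
    """Map each volume_backend_name to the config groups declaring it."""
    cp = ConfigParser()
    cp.read_string(text)
    pools = {}
    for group in cp.get("DEFAULT", "enabled_backends").split(","):
        group = group.strip()
        pools.setdefault(cp.get(group, "volume_backend_name"), []).append(group)
    return pools

print(backends_by_name(CONF))
# Both lvmdriver-1 and lvmdriver-2 serve the "LVM" back-end name, so the
# scheduler must pick between them; lvmdriver-3 is alone under "LVM_b".
```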

.. note::

   For Fibre Channel drivers that support multipath, the configuration
   group requires the ``use_multipath_for_image_xfer=true`` option. In
   the example below, you can see details for HPE 3PAR and EMC Fibre
   Channel drivers.

   .. code-block:: ini

      [3par]
      use_multipath_for_image_xfer = true
      volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
      volume_backend_name = 3parfc

      [emc]
      use_multipath_for_image_xfer = true
      volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
      volume_backend_name = emcfc

Configure shared volume driver backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When configuring multiple volume back ends, common configuration
parameters can be shared by using the ``[backend_defaults]`` section.
For example:

.. code-block:: ini

   [DEFAULT]
   enabled_backends=backend1,backend2,backend3

   [backend_defaults]
   image_volume_cache_enabled = True
   volume_clear = none
   target_helper = tgtadm
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

   [backend1]
   volume_group = cinder-volume-1
   image_volume_cache_enabled = False

   [backend2]
   volume_group = cinder-volume-2

   [backend3]
   volume_group = cinder-volume-3

In this configuration, ``backend2`` and ``backend3`` have the same
``image_volume_cache_enabled`` value, because it is inherited from the
``backend_defaults`` section. In other words, ``backend2`` and
``backend3`` have the image cache feature enabled. In ``backend1``,
``image_volume_cache_enabled`` is ``False``: a value set in a volume
back-end section overrides the value in ``backend_defaults``.
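This precedence can be sketched with Python's standard ``configparser``.
The ``effective_option`` helper below is a hypothetical illustration of
the lookup order, not cinder's actual option loading:

```python
from configparser import ConfigParser

# The shared-defaults example from above, trimmed to the relevant options.
CONF = """
[backend_defaults]
image_volume_cache_enabled = True
volume_clear = none

[backend1]
volume_group = cinder-volume-1
image_volume_cache_enabled = False

[backend2]
volume_group = cinder-volume-2
"""

cp = ConfigParser()
cp.read_string(CONF)

def effective_option(cp, backend, option):
    """A value in the back end's own section wins; otherwise fall back
    to the [backend_defaults] section."""
    if option in cp[backend]:
        return cp[backend][option]
    return cp["backend_defaults"].get(option)

print(effective_option(cp, "backend1", "image_volume_cache_enabled"))  # False
print(effective_option(cp, "backend2", "image_volume_cache_enabled"))  # True
```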

.. note::

   The ``backend_defaults`` section should be configured according to
   your cloud environment or your back-end driver information.

Configure Block Storage scheduler multi back end
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You must enable the ``filter_scheduler`` option to use
multiple-storage back ends. The filter scheduler:

#. Filters the available back ends. By default, ``AvailabilityZoneFilter``,
   ``CapacityFilter``, and ``CapabilitiesFilter`` are enabled.

#. Weighs the previously filtered back ends. By default, the
   ``CapacityWeigher`` option is enabled. When this option is
   enabled, the filter scheduler assigns the highest weight to back
   ends with the most available capacity.

The scheduler uses filters and weights to pick the best back end to
handle the request. The scheduler uses volume types to explicitly create
volumes on specific back ends. For more information about filtering and
weighing, see :ref:`filter_weigh_scheduler`.
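The filter-then-weigh flow can be sketched as follows. This is a
simplified stand-in for the real scheduler: the capacities are made up,
and the filtering step collapses ``CapabilitiesFilter`` and
``CapacityFilter`` into two simple checks:

```python
# Hypothetical per-back-end stats; the real scheduler collects these
# from periodic driver capability reports.
backends = [
    {"name": "lvmdriver-1", "backend_name": "LVM", "free_gb": 120},
    {"name": "lvmdriver-2", "backend_name": "LVM", "free_gb": 300},
    {"name": "lvmdriver-3", "backend_name": "LVM_b", "free_gb": 500},
]

def schedule(backends, requested_backend_name, size_gb):
    """Filter back ends, then weigh the survivors by free capacity."""
    # Filtering: keep back ends that match the requested back-end name
    # and can fit the volume (stand-ins for CapabilitiesFilter and
    # CapacityFilter).
    candidates = [b for b in backends
                  if b["backend_name"] == requested_backend_name
                  and b["free_gb"] >= size_gb]
    if not candidates:
        raise RuntimeError("No valid host was found")
    # Weighing: CapacityWeigher-style, the most free capacity wins.
    return max(candidates, key=lambda b: b["free_gb"])

print(schedule(backends, "LVM", 100)["name"])  # lvmdriver-2 (300 GB free)
```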

Volume type
~~~~~~~~~~~

Before it can be used, a volume type has to be declared to Block
Storage. This can be done with the following command:

.. code-block:: console

   $ openstack --os-username admin --os-tenant-name admin volume type create lvm

Then, an extra-specification has to be created to link the volume
type to a back-end name. Run this command:

.. code-block:: console

   $ openstack --os-username admin --os-tenant-name admin volume type set lvm \
     --property volume_backend_name=LVM

This example creates a ``lvm`` volume type with
``volume_backend_name=LVM`` as its extra-specification, matching the
``lvmdriver-1`` and ``lvmdriver-2`` back ends defined earlier.

Create another volume type:

.. code-block:: console

   $ openstack --os-username admin --os-tenant-name admin volume type create lvm_gold
   $ openstack --os-username admin --os-tenant-name admin volume type set lvm_gold \
     --property volume_backend_name=LVM_b

This second volume type is named ``lvm_gold`` and has ``LVM_b`` as its
back-end name, matching the ``lvmdriver-3`` back end.

.. note::

   To list the extra-specifications, use this command:

   .. code-block:: console

      $ openstack --os-username admin --os-tenant-name admin volume type list --long

.. note::

   If a volume type points to a ``volume_backend_name`` that does not
   exist in the Block Storage configuration, the ``filter_scheduler``
   returns an error that it cannot find a valid host with the suitable
   back end.
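A sanity check for this failure mode can be sketched in Python.
``invalid_volume_types`` is a hypothetical helper that compares each
volume type's ``volume_backend_name`` extra-spec against the names
declared in ``cinder.conf``; it is not a cinder API:

```python
def invalid_volume_types(volume_types, configured_backend_names):
    """Return the volume types whose volume_backend_name extra-spec
    matches no back end configured in cinder.conf."""
    return sorted(t for t, name in volume_types.items()
                  if name not in configured_backend_names)

# Extra-specs keyed by volume type; the "broken" type deliberately
# points at a name that no configuration group declares.
volume_types = {"lvm": "LVM", "lvm_gold": "LVM_b", "broken": "LVM_bogus"}
configured = {"LVM", "LVM_b"}

print(invalid_volume_types(volume_types, configured))  # ['broken']
```

A volume created with the ``broken`` type would fail scheduling with a
"no valid host" error.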

Usage
~~~~~

When you create a volume, you must specify the volume type.
The extra-specifications of the volume type are used to determine which
back end has to be used.

.. code-block:: console

   $ openstack volume create --size 1 --type lvm test_multi_backend

Considering the ``cinder.conf`` described previously, the scheduler
creates this volume on ``lvmdriver-1`` or ``lvmdriver-2``.

.. code-block:: console

   $ openstack volume create --size 1 --type lvm_gold test_multi_backend

This second volume is created on ``lvmdriver-3``.