Package: nova / 2014.1.3-11

Metadata

Package    Version        Patches format
nova       2014.1.3-11    3.0 (quilt)

Patch series

Patch File delta Description
path-to-the-xenhost.conf-fixup.patch

plugins/xenserver/xenapi/etc/xapi.d/plugins/xenhost | 2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)

 fixes the path to the xenhost.conf file
install-missing-files.patch

MANIFEST.in | 93 93 + 0 - 0 !
1 file changed, 93 insertions(+)

 install some missing files
fix-docs-build-without-network.patch

doc/source/conf.py | 1 0 + 1 - 0 !
1 file changed, 1 deletion(-)

 build docs without network access.
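 A note on how such fixes usually look: Sphinx's intersphinx extension fetches
 object inventories over HTTP at build time, so dropping it is the typical
 one-line change. A hedged sketch (assuming the deleted line enabled
 intersphinx; the actual Nova conf.py may differ):

    # doc/source/conf.py -- illustrative, not the literal Nova file
    extensions = [
        'sphinx.ext.autodoc',
        # 'sphinx.ext.intersphinx',  # removed: downloads the inventories
        #                            # listed in intersphinx_mapping, which
        #                            # fails without network access
    ]
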
0001_CEPH_remove_redundant_copy_of_test_cache_base_dir_exists.patch

nova/tests/virt/libvirt/test_imagebackend.py | 14 0 + 14 - 0 !
1 file changed, 14 deletions(-)

 remove redundant copy of test_cache_base_dir_exists
 A second copy of RbdTestCase.test_cache_base_dir_exists was accidentally
 introduced in https://review.openstack.org/82840.
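 The removal is safe because a Python class body silently lets a later def
 replace an earlier one with the same name, so only one of the two copies
 ever ran. A minimal illustration:

    class RbdTestCase(object):
        def test_cache_base_dir_exists(self):
            return 'first copy'

        def test_cache_base_dir_exists(self):  # silently shadows the first
            return 'second copy'

    assert RbdTestCase().test_cache_base_dir_exists() == 'second copy'
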
0002_CEPH_Revert-Address-the-comments-of-the-merged-image-hand.patch

nova/virt/imagehandler/__init__.py | 176 0 + 176 - 0 !
1 file changed, 176 deletions(-)

 revert "address the comments of the merged image handler patch"
 This reverts commit 9e8513e6fe4bf8a8759ad0c1d71594f952d920ad.
0003_CEPH_Improve-shared-storage-checks-for-live-migration.patch

nova/compute/manager.py | 80 54 + 26 - 0 !
nova/compute/rpcapi.py | 35 22 + 13 - 0 !
nova/exception.py | 12 9 + 3 - 0 !
nova/tests/compute/test_compute.py | 12 7 + 5 - 0 !
nova/tests/compute/test_rpcapi.py | 31 21 + 10 - 0 !
nova/tests/virt/libvirt/test_libvirt.py | 218 116 + 102 - 0 !
nova/virt/baremetal/driver.py | 5 3 + 2 - 0 !
nova/virt/driver.py | 14 10 + 4 - 0 !
nova/virt/fake.py | 4 2 + 2 - 0 !
nova/virt/hyperv/driver.py | 8 5 + 3 - 0 !
nova/virt/libvirt/driver.py | 87 64 + 23 - 0 !
nova/virt/libvirt/imagebackend.py | 10 10 + 0 - 0 !
nova/virt/vmwareapi/driver.py | 10 6 + 4 - 0 !
nova/virt/xenapi/driver.py | 8 5 + 3 - 0 !
14 files changed, 334 insertions(+), 200 deletions(-)

 improve shared storage checks for live migration
 Due to an assumption that libvirt live migrations work only when both the
 instance path and the disk data are shared between source and destination
 hosts (e.g. the libvirt instances directory is on NFS), instance disks are
 removed from shared storage when the instance path is not shared (e.g. when
 the Ceph RBD backend is enabled).
 .
 Distinguish the cases that require a shared instance drive from those that
 require a shared libvirt instance directory. Reflect the fact that
 RBD-backed instances have a shared instance drive (and no shared libvirt
 instance directory) in the relevant conditionals.
 .
 UpgradeImpact: Live migrations from or to a compute host running a
 version of Nova pre-dating this commit are disabled in order to
 eliminate the possibility of data loss. Upgrade Nova on both the source and
 the target node before attempting a live migration.
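 A sketch of the distinction the patch introduces, with hypothetical helper
 names (the real checks live in nova/virt/libvirt/driver.py and
 nova/compute/manager.py):

    def migration_cleanup_action(shared_instance_dir, shared_block_storage):
        # Hypothetical helper mirroring the described logic.
        if shared_instance_dir:
            # NFS-style setup: directory and disks both live on shared
            # storage, so the source host must not delete anything.
            return 'keep everything'
        if shared_block_storage:
            # Ceph RBD: the disks are shared even though the libvirt
            # instance directory is local -- remove only the directory.
            return 'remove local instance directory only'
        # Nothing shared: block migration copied the disks, so the local
        # copies can go.
        return 'remove instance directory and disks'

    assert migration_cleanup_action(False, True) == \
        'remove local instance directory only'
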
0004_CEPH_Move_libvirt_RBD_utilities_to_a_new_file.patch

nova/tests/virt/libvirt/test_imagebackend.py | 34 9 + 25 - 0 !
nova/tests/virt/libvirt/test_rbd.py | 170 170 + 0 - 0 !
nova/virt/libvirt/imagebackend.py | 129 14 + 115 - 0 !
nova/virt/libvirt/rbd.py | 146 146 + 0 - 0 !
4 files changed, 339 insertions(+), 140 deletions(-)

 move libvirt rbd utilities to a new file
 This will make it easier to share rbd-related code with cinder and glance.
 Port the applicable unit tests over from cinder.
0005_CEPH_Use_library_instead_of_CLI_to_cleanup_RBD_volumes.patch

nova/tests/virt/libvirt/fake_libvirt_utils.py | 16 0 + 16 - 0 !
nova/tests/virt/libvirt/test_imagebackend.py | 10 7 + 3 - 0 !
nova/tests/virt/libvirt/test_libvirt.py | 37 8 + 29 - 0 !
nova/tests/virt/libvirt/test_libvirt_utils.py | 42 0 + 42 - 0 !
nova/tests/virt/libvirt/test_rbd.py | 15 15 + 0 - 0 !
nova/virt/libvirt/driver.py | 17 6 + 11 - 0 !
nova/virt/libvirt/imagebackend.py | 9 1 + 8 - 0 !
nova/virt/libvirt/rbd.py | 44 44 + 0 - 0 !
nova/virt/libvirt/utils.py | 40 0 + 40 - 0 !
9 files changed, 81 insertions(+), 149 deletions(-)

 use library instead of cli to cleanup rbd volumes
 The 'rbd list' CLI returns an error code when there are no rbd volumes, which
 causes problems during live migration of VMs with RBD-backed ephemeral
 volumes. It is safer to use the library, which only raises an exception in
 case of a real problem.
 .
 The only case where the rbd CLI is still justified is import, which is needed
 to correctly import sparse image files.
 .
 All code related to the cleanup of RBD volumes is moved to rbd_utils.py.
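 A minimal sketch of the library-based cleanup, using the standard rados/rbd
 Python bindings (the pool name and volume prefix are illustrative):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # illustrative pool name
        try:
            # An empty pool yields an empty list here, whereas the
            # 'rbd list' CLI exits non-zero when no images exist.
            for name in rbd.RBD().list(ioctx):
                if name.startswith('instance-0000002a'):  # hypothetical prefix
                    rbd.RBD().remove(ioctx, name)  # raises only on real errors
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
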
0006_CEPH_Enable_cloning_for_rbd-backed_ephemeral_disks.patch

nova/tests/virt/libvirt/test_imagebackend.py | 128 120 + 8 - 0 !
nova/tests/virt/libvirt/test_libvirt.py | 2 1 + 1 - 0 !
nova/tests/virt/libvirt/test_rbd.py | 103 101 + 2 - 0 !
nova/tests/virt/test_images.py | 34 34 + 0 - 0 !
nova/virt/images.py | 28 27 + 1 - 0 !
nova/virt/libvirt/driver.py | 16 9 + 7 - 0 !
nova/virt/libvirt/imagebackend.py | 59 46 + 13 - 0 !
nova/virt/libvirt/rbd.py | 94 85 + 9 - 0 !
nova/virt/libvirt/utils.py | 5 3 + 2 - 0 !
9 files changed, 426 insertions(+), 43 deletions(-)

 enable cloning for rbd-backed ephemeral disks
 Currently when using rbd as an image backend, nova downloads the
 glance image to local disk and then copies it again into rbd. This
 can be very slow for large images, and wastes bandwidth as well as
 disk space.
 .
 When the glance image is stored in the same ceph cluster, the data is
 being pulled out and pushed back in unnecessarily. Instead, create a
 copy-on-write clone of the image. This is fast, and does not depend
 on the size of the image. Instead of taking minutes, booting takes
 seconds, and is not limited by the disk copy.
 .
 Add some rbd utility functions from cinder to support cloning and
 let the rbd imagebackend rely on librbd instead of the rbd
 command line tool for checking image existence.
 .
 Add a direct_fetch() method to the image backend, so backends like rbd
 can make optimizations like this. Try to use direct_fetch() for the root
 disk when it comes from an image, but fall back to fetch_to_raw() if
 direct_fetch() fails.
 .
 Instead of calling disk.get_disk_size() directly from
 verify_base_size(), which assumes the disk is stored locally, add a new
 method that is overridden by the Rbd subclass to get the disk size.
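 The core of the optimization is librbd's copy-on-write clone, which writes
 only metadata and is therefore O(1) in the image size. A hedged sketch
 (helper and pool names are illustrative; glance must have published the
 image as a protected snapshot):

    import rbd

    def clone_image(cluster, src_pool, image, snap, dst_pool, dst_name):
        # cluster is a connected rados.Rados handle
        src = cluster.open_ioctx(src_pool)
        dst = cluster.open_ioctx(dst_pool)
        try:
            # features=1 enables layering, which clones require
            rbd.RBD().clone(src, image, snap, dst, dst_name, features=1)
        finally:
            src.close()
            dst.close()
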
0007_CEPH_Use_Ceph_cluster_stats_to_report_disk_info_on_RBD.patch

nova/virt/libvirt/driver.py | 11 8 + 3 - 0 !
nova/virt/libvirt/rbd.py | 8 8 + 0 - 0 !
2 files changed, 16 insertions(+), 3 deletions(-)

 use ceph cluster stats to report disk info on rbd
 Local disk statistics on compute nodes are irrelevant when ephemeral
 disks are stored in RBD. With RBD, local disk space is not consumed when
 instances are started on a compute node, yet it is possible for the
 scheduler to refuse to schedule an instance when the combined disk usage of
 instances already running on the node exceeds the total disk capacity
 reported by the hypervisor driver.
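 With RBD the meaningful capacity numbers come from the cluster, not the
 local filesystem; the rados binding exposes them directly. A sketch:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()
        # keys: 'kb', 'kb_used', 'kb_avail', 'num_objects'
        total_gb = stats['kb'] // (1024 * 1024)
        free_gb = stats['kb_avail'] // (1024 * 1024)
    finally:
        cluster.shutdown()
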
0008_CEPH_Live_migration_is_broken_for_NFS_shared_storage.patch

nova/tests/virt/libvirt/test_libvirt.py | 24 24 + 0 - 0 !
nova/virt/libvirt/driver.py | 2 1 + 1 - 0 !
2 files changed, 25 insertions(+), 1 deletion(-)

 live migration is broken for nfs shared storage
 One of the new checks, introduced in
 I2755c59b4db736151000dae351fd776d3c15ca39,
 is missing a check against file-based shared storage,
 leading to live migration being broken.
Bug-Ubuntu: https://launchpad.net/bugs/1346385
0009_CEPH_count_image_files_on_NFS_as_shared_block_storage.patch

nova/virt/libvirt/driver.py | 10 8 + 2 - 0 !
nova/virt/libvirt/imagebackend.py | 13 13 + 0 - 0 !
2 files changed, 21 insertions(+), 2 deletions(-)

 count image files on nfs as shared block storage
 If the instance path is shared between compute nodes (e.g. over NFS) and the
 image backend uses files placed in the instance directory (e.g. Raw, Qcow2),
 live migration should consider the instances' block storage shared.
0010_CEPH_Rename_rbb.py_to_rbd_utils.py_in_libvirt_driver_directory.patch

nova/tests/virt/libvirt/test_imagebackend.py | 16 8 + 8 - 0 !
nova/tests/virt/libvirt/test_libvirt.py | 4 2 + 2 - 0 !
nova/tests/virt/libvirt/test_rbd.py | 63 31 + 32 - 0 !
nova/virt/libvirt/driver.py | 4 2 + 2 - 0 !
nova/virt/libvirt/imagebackend.py | 4 2 + 2 - 0 !
nova/virt/libvirt/rbd.py | 274 0 + 274 - 0 !
nova/virt/libvirt/rbd_utils.py | 274 274 + 0 - 0 !
7 files changed, 319 insertions(+), 320 deletions(-)

 rename rbd.py to rbd_utils.py in libvirt driver directory
 In the libvirt driver directory, rbd.py conflicts with the global rbd library
 that is imported within rbd.py, so we rename rbd.py to rbd_utils.py.
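 The conflict is ordinary Python 2 module shadowing: under implicit relative
 imports, 'import rbd' inside the nova.virt.libvirt package resolves to the
 local rbd.py rather than the Ceph binding, so the module effectively imports
 itself. In sketch form:

    # nova/virt/libvirt/rbd.py (before the rename)
    import rbd   # Python 2's implicit relative import finds THIS file,
                 # not the global Ceph 'rbd' module: a self-import
    #
    # Fix 1: force absolute imports (must be the first statement in a file):
    #     from __future__ import absolute_import
    # Fix 2 (what this patch does): rename the local module to rbd_utils.py,
    # so 'import rbd' can only mean the global library.
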
0011_CEPH_fix_live_migration_with_configdrive.patch

nova/exception.py | 5 0 + 5 - 0 !
nova/tests/virt/libvirt/test_libvirt.py | 21 0 + 21 - 0 !
nova/virt/libvirt/driver.py | 27 19 + 8 - 0 !
3 files changed, 19 insertions(+), 34 deletions(-)

 libvirt: support live migrations of instances with config
 drives
 In the shared storage case, an instance with a configdrive can only be
 migrated if the configdrive is stored in the same backend as the other
 disks.
0013_CEPH_uses_correct_imagebackend_for_configdrive.patch

nova/virt/libvirt/driver.py | 28 17 + 11 - 0 !
1 file changed, 17 insertions(+), 11 deletions(-)

 libvirt: uses correct imagebackend for configdrive
 When the configdrive file is created, it must be moved to the
 configured imagebackend, not always the raw one.
 .
 Otherwise nova can't boot an instance with a configdrive attached
 when the rbd backend is configured.
0014_CEPH_reworks_configdrive_creation.patch

nova/virt/libvirt/driver.py | 62 32 + 30 - 0 !
1 file changed, 32 insertions(+), 30 deletions(-)

 libvirt: reworks configdrive creation
 This refactors the creation of the configdrive to use the same code
 path as the local/ephemeral/swap disks.
 .
 Now it is the imagebackend that creates the configdrive, like any other
 disk attached to the VM. This ensures that the configdrive file is created
 where the imagebackend expects it.
 .
 This also removes the assumption that the configdrive is always created
 in the right place, which was not true for rbd and lvm.
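 In outline, the reworked flow builds the ISO locally and then pushes it
 through the configured image backend, the same way ephemeral and swap disks
 are created. A hedged sketch with hypothetical names (the real code is in
 nova/virt/libvirt/driver.py; import_file is an assumed backend method):

    import tempfile

    def create_configdrive(backend_image, build_iso):
        # backend_image: the rbd/lvm/raw/qcow2 backend object for
        # 'disk.config'; build_iso: callable writing the ISO to a path
        with tempfile.NamedTemporaryFile(suffix='.iso') as tmp:
            build_iso(tmp.name)
            backend_image.import_file(tmp.name)  # assumed backend method
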
Update_websocketproxy_to_work_with_websockify_0.6.patch

nova/cmd/novncproxy.py | 32 16 + 16 - 0 !
nova/cmd/spicehtml5proxy.py | 29 15 + 14 - 0 !
nova/console/websocketproxy.py | 75 62 + 13 - 0 !
3 files changed, 93 insertions(+), 43 deletions(-)

 update websocketproxy to work with websockify 0.6
 Websockify version 0.6 brings with it several bugfixes and new features that
 affect Nova (including a fix for novncproxy zombies hanging around and better
 support for the Python logging framework). However, it also broke backwards
 compatibility due to a refactor which brought it in line with other Python
 socket server libraries.
 .
 This patch updates the websockify code to function with websockify version 0.6
 as well as websockify version 0.5.x.
 .
 This is a backport of: https://review.openstack.org/#/c/91663/ to Icehouse.
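 The dual support amounts to feature detection at import time. A sketch of
 the pattern (the hasattr probe is an assumption about how the split is
 expressed, not the literal Nova code):

    import websockify

    if hasattr(websockify, 'ProxyRequestHandler'):
        # websockify >= 0.6: per-connection logic lives in a handler class
        class NovaProxyRequestHandler(websockify.ProxyRequestHandler):
            def new_websocket_client(self):
                pass  # token validation, then proxy to the compute node
    else:
        # websockify 0.5.x: logic lives on the proxy object itself
        class NovaWebSocketProxy(websockify.WebSocketProxy):
            def new_client(self):
                pass
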
spiceproxy_config_startup.patch

nova/cmd/spicehtml5proxy.py | 4 2 + 2 - 0 !
1 file changed, 2 insertions(+), 2 deletions(-)

 reverts spice html5 proxy configuration file syntax used for icehouse
 The patch for websockify 0.6 compatibility introduced in version 2014.1.1-5
 includes an unrelated change to the configuration file syntax that is only
 scheduled for OpenStack Juno. Please revert this part of the change.
 .
 For Juno the spicehtml5proxy_{host,port} parameters in the DEFAULT section
 have been moved to html5proxy_{host,port} in the spice section. See upstream
 commit fe02cc830f9c9e1dac234164bc1f0caa0e2072d7 for the details.
 .
 The attached patch fixes the problem and reverts to the configuration file
 syntax used for Icehouse.
 .
 Without this patch the spice proxy refuses to start at all because it tries to
 access options that are not registered in the CONF object.
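 The crash comes from oslo.config's strict registry: reading an option that
 was never registered raises NoSuchOptError. A sketch of the two syntaxes
 (Icehouse-era 'oslo.config' namespace):

    from oslo.config import cfg

    CONF = cfg.CONF
    # Icehouse: options registered in [DEFAULT]
    CONF.register_opts([
        cfg.StrOpt('spicehtml5proxy_host', default='0.0.0.0'),
        cfg.IntOpt('spicehtml5proxy_port', default=6082),
    ])
    host = CONF.spicehtml5proxy_host   # fine

    # Juno moves these to [spice] as html5proxy_host/html5proxy_port;
    # reading CONF.spice.html5proxy_host against a tree that only
    # registered the DEFAULT variants raises cfg.NoSuchOptError, which
    # is why the proxy refused to start.
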
fix-provider-networks-regression.patch

etc/nova/policy.json | 3 2 + 1 - 0 !
nova/network/neutronv2/api.py | 8 7 + 1 - 0 !
nova/tests/fake_policy.py | 3 2 + 1 - 0 !
3 files changed, 11 insertions(+), 3 deletions(-)

 [patch] allow attaching external networks based on configurable
 policy

Commit da66d50010d5b1ba1d7fc9c3d59d81b6c01bb0b0 restricted
attaching external networks to admin clients. This patch changes
it to a policy-based check instead, with the default setting being
admin only. This allows operators to more precisely configure whom
they wish to allow to attach external networks without having to
give them admin access.

Bug: #1352102
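The knob is an entry in etc/nova/policy.json; assuming the rule name used by
the upstream change, the default keeps the old admin-only behaviour:

    "network:attach_external_network": "rule:admin_api"

Operators can point the rule at another role (e.g. "role:netadmin") without
granting full admin access.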

fix-live-migraton-nfs.patch

nova/compute/manager.py | 5 4 + 1 - 0 !
nova/tests/compute/test_compute_mgr.py | 8 7 + 1 - 0 !
nova/tests/virt/libvirt/test_libvirt.py | 4 2 + 2 - 0 !
nova/virt/driver.py | 2 1 + 1 - 0 !
nova/virt/fake.py | 2 1 + 1 - 0 !
nova/virt/libvirt/driver.py | 10 6 + 4 - 0 !
nova/virt/xenapi/driver.py | 2 1 + 1 - 0 !
7 files changed, 22 insertions(+), 11 deletions(-)

 make sure volumes are well detected during block migration
 Currently, _assert_dest_node_has_enough_disk() calls
 self.get_instance_disk_info(instance['name']), which means that
 get_instance_disk_info() has a block_device_info parameter equal to None, and
 _get_instance_disk_info() as well. In the end, block_device_info_get_mapping()
 returns an empty list, and the 'volume_devices' variable is an empty set.
 Ultimately, this prevents volume devices from being properly detected in
 _get_instance_disk_info(), and Nova tries to migrate them as well, even though
 they should not be migrated.
 .
 Fix this issue by passing 'block_device_info' to
 check_can_live_migrate_source() and having it propagated all the way to
 _get_instance_disk_info().
Bug-Ubuntu: https://launchpad.net/bugs/1356552
Author: Cyril Roelandt <cyril.roelandt@enovance.com>
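A reduced illustration of the failure, mirroring nova.virt.driver's
block_device_info_get_mapping helper:

    def block_device_info_get_mapping(block_device_info):
        # as in nova.virt.driver: None collapses to an empty mapping
        return (block_device_info or {}).get('block_device_mapping', [])

    # Before the fix the source-side disk check passed None, so no device
    # was recognised as a volume and volumes were migrated as local disks:
    volume_devices = set(bdm['mount_device'].rpartition('/')[2]
                         for bdm in block_device_info_get_mapping(None))
    assert volume_devices == set()
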
9990_update_german_programm_messages.patch

nova/locale/de/LC_MESSAGES/nova.po | 17174 7098 + 10076 - 0 !
1 file changed, 7098 insertions(+), 10076 deletions(-)

 update german translation for upstream .po file
CVE-2014-7230_CVE-2014-7231_Sync_process_utils_from_oslo.patch

nova/openstack/common/processutils.py | 14 9 + 5 - 0 !
1 file changed, 9 insertions(+), 5 deletions(-)

 sync process utils from oslo
 This patch backports the missing change to fix the ssh_execute password leak.
 The sync pulls in the following changes:
 .
 105169f8 - Mask passwords in exceptions and error messages (SSH)
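 The substance of the sync is scrubbing secrets before command lines reach
 exception text or logs. A simplified, regex-based sketch (not the exact
 oslo implementation):

    import re

    _SECRET_RE = re.compile(r'(password\s*[:=]\s*)\S+', re.IGNORECASE)

    def mask_password(message, secret='***'):
        # applied to messages before they are embedded in the
        # ProcessExecutionError raised by ssh_execute
        return _SECRET_RE.sub(r'\1' + secret, message)

    print(mask_password('ssh mysql --password=s3cret --host=db'))
    # ssh mysql --password=*** --host=db
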
CVE-2014-3708_Fixes_DOS_issue_in_instance_list_ip_filter.patch

nova/compute/api.py | 30 22 + 8 - 0 !
nova/tests/compute/test_compute.py | 75 59 + 16 - 0 !
2 files changed, 81 insertions(+), 24 deletions(-)

 fixes dos issue in instance list ip filter
 Converts the IP filtering to filter the list locally, based on the network
 info cache, instead of making an extremely expensive call over to
 nova-network, where it attempts to retrieve a list of every instance in the
 system.
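 In outline, the fixed filter evaluates the requested regex against each
 instance's locally cached IPs instead of fanning out to nova-network. A
 simplified sketch (the cache layout shown is illustrative):

    import re

    def filter_by_ip(instances, ip_regex):
        pattern = re.compile(ip_regex)
        return [inst for inst in instances
                if any(pattern.match(ip)
                       for ip in inst.get('cached_ips', []))]

    vms = [{'name': 'a', 'cached_ips': ['10.0.0.5']},
           {'name': 'b', 'cached_ips': ['192.168.1.7']}]
    assert [i['name'] for i in filter_by_ip(vms, r'10\.0\.0\.\d+')] == ['a']
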
CVE-2014-8333_Fix_VM_leak_when_deletion_of_VM_during_resizing.patch

nova/tests/virt/vmwareapi/test_driver_api.py | 40 40 + 0 - 0 !
nova/virt/vmwareapi/vmops.py | 15 15 + 0 - 0 !
2 files changed, 55 insertions(+)

 cve-2014-8333: vmware: fix vm leak when deletion of vm during resizing
 During a VM resize, before the VM reaches the RESIZED state, the driver's
 migrate_disk_and_power_off will initially rename the original VM 'uuid' to
 'uuid-orig' and clone a new VM named 'uuid'. When a VM deletion is triggered
 in this time window, the 'uuid-orig' VM cannot be deleted in vCenter, which
 causes a VM leak. As the VM task state will be set to 'deleting' and cannot
 be used to determine the resize migrating/migrated state, this fix attempts
 to delete the orig VM within the destroy phase.
 .
  NOTE: the aforementioned patch broke Minesweeper. The fix was also
  cherry-picked from commit e464bc518e8590d59c2741948466777982ca3319. This was
  to do two things:
  1. Solve the actual bug
  2. Ensure that the unit tests and Minesweeper passed
avoid_changing_UUID_when_redefining_nwfilters.patch

nova/tests/virt/libvirt/test_libvirt.py | 41 38 + 3 - 0 !
nova/virt/libvirt/firewall.py | 47 34 + 13 - 0 !
2 files changed, 72 insertions(+), 16 deletions(-)

 libvirt: avoid changing uuid when redefining nwfilters
X-Git-Tag: 2014.2.rc1~50^2
CVE-2015-0259_Websocket_Proxy_should_verify_Origin_header_icehouse-debian.patch

nova/console/websocketproxy.py | 39 39 + 0 - 0 !
1 file changed, 39 insertions(+)

 websocket proxy should verify origin header
 If the Origin HTTP header passed in the WebSocket handshake does not match
 the host, this could indicate an attempt at a cross-site attack. This commit
 adds a check to verify that the origin matches the host.
 .
 Note from maintainer: the final patch is a mix of both this one:
  https://review.openstack.org/#/c/163035/ (for Icehouse)
 and this one:
  https://review.openstack.org/#/c/163034/
 as Nova Icehouse in Debian is patched to work with Websockify 0.6.
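 The verification itself is small: parse the Origin header and compare it to
 the expected host and scheme. A hedged sketch of the idea (Python 2
 urlparse, matching the Icehouse code base; the helper name is illustrative):

    from urlparse import urlparse  # Python 2; urllib.parse on Python 3

    def origin_is_valid(headers, expected_host, expected_scheme='https'):
        origin = headers.get('Origin')
        if not origin:
            return True   # non-browser clients may omit the header
        parsed = urlparse(origin)
        return (parsed.hostname == expected_host and
                parsed.scheme == expected_scheme)

    assert origin_is_valid({'Origin': 'https://nova.example.com'},
                           'nova.example.com')
    assert not origin_is_valid({'Origin': 'http://evil.example.org'},
                               'nova.example.com')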