Package: nova / 2014.1.3-11
Metadata

| Package | Version | Patches format |
|---|---|---|
| nova | 2014.1.3-11 | 3.0 (quilt) |

Patch series

Each entry below gives the patch name, the file delta reported for it (file touched, lines added/removed), and the patch description.

**path to the xenhost.conf fixup.patch** (plugins/xenserver/xenapi/etc/xapi.d/plugins/xenhost, +1/-1)
Fixes the path to the xenhost.conf file.

**install missing files.patch** (MANIFEST.in, +93/-0)
Install some missing files.

**fix docs build without network.patch** (doc/source/conf.py, +0/-1)
Build docs without network access.

**0001_CEPH_remove_redundant_copy_of_test_cache_base_dir_exists.patch** (nova/tests/virt/libvirt/test_imagebackend.py, +0/-14)
Remove redundant copy of test_cache_base_dir_exists. A second copy of RbdTestCase.test_cache_base_dir_exists was accidentally introduced in https://review.openstack.org/82840.

**0002_CEPH_Revert Address the comments of the merged image hand.patch** (nova/virt/imagehandler/__init__.py, +0/-176)
Revert "Address the comments of the merged image handler patch". This reverts commit 9e8513e6fe4bf8a8759ad0c1d71594f952d920ad.

**0003_CEPH_Improve shared storage checks for live migration.patch** (nova/compute/manager.py, +54/-26)
Improve shared storage checks for live migration. Due to an assumption that libvirt live migrations work only when both the instance path and the disk data are shared between source and destination hosts (e.g. the libvirt instances directory is on NFS), instance disks are removed from shared storage when the instance path is not shared (e.g. when the Ceph RBD backend is enabled).

Distinguish between cases that require a shared instance disk and those that require a shared libvirt instance directory, and reflect in the relevant conditionals the fact that RBD-backed instances have a shared instance disk but no shared libvirt instance directory.

UpgradeImpact: live migrations from or to a compute host running a version of Nova pre-dating this commit are disabled in order to eliminate the possibility of data loss. Upgrade Nova on both the source and the target node before attempting a live migration.

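To make the distinction concrete, here is a small, purely illustrative sketch of the cleanup decision the source host faces after a live migration; the function and flag names are assumptions for this example, not Nova's actual API.

```python
def source_cleanup_plan(shared_instance_path, shared_block_storage):
    """Return (remove_local_disks, remove_instance_dir) for the source host."""
    if shared_instance_path:
        # The libvirt instance directory (and any file-based disks in it)
        # is shared, e.g. over NFS: the destination now uses those very
        # files, so the source must not delete anything.
        return False, False
    if shared_block_storage:
        # e.g. RBD-backed disks: the data lives in the Ceph cluster and is
        # already visible from the destination, so only the stale local
        # instance directory (console log, domain XML, ...) goes away.
        return False, True
    # Nothing is shared: the disks were block-migrated (copied), so both
    # the local disks and the instance directory can be removed.
    return True, True
```
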
**0004_CEPH_Move_libvirt_RBD_utilities_to_a_new_file.patch** (nova/tests/virt/libvirt/test_imagebackend.py, +9/-25)
Move libvirt RBD utilities to a new file. This will make it easier to share rbd-related code with Cinder and Glance. Port the applicable unit tests over from Cinder.

**0005_CEPH_Use_library_instead_of_CLI_to_cleanup_RBD_volumes.patch** (nova/tests/virt/libvirt/fake_libvirt_utils.py, +0/-16)
Use library instead of CLI to clean up RBD volumes. The 'rbd list' CLI returns an error code when there are no rbd volumes, which causes problems during live migration of VMs with RBD-backed ephemeral volumes. It is safer to use the library, which only raises an exception in case of a real problem.

The only case where the rbd CLI is still justified is import, which is needed to correctly import sparse image files.

All code related to cleanup of RBD volumes is moved to rbd_utils.py.

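As an illustration of the library-based approach, here is a minimal sketch using the python rados/rbd bindings; the pool name, image-name prefix and conffile path are assumptions for the example, not Nova's configuration or rbd_utils code.

```python
import rados
import rbd


def cleanup_rbd_volumes(prefix, pool='vms', conffile='/etc/ceph/ceph.conf'):
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            # Unlike "rbd list", RBD().list() returns an empty list when the
            # pool holds no images instead of exiting with an error code.
            for name in rbd.RBD().list(ioctx):
                if not name.startswith(prefix):
                    continue
                try:
                    rbd.RBD().remove(ioctx, name)
                except rbd.ImageNotFound:
                    pass  # already gone, nothing to clean up
                # Other librbd errors (e.g. rbd.ImageBusy) indicate a real
                # problem and propagate to the caller.
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```
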
**0006_CEPH_Enable_cloning_for_rbd backed_ephemeral_disks.patch** (nova/tests/virt/libvirt/test_imagebackend.py, +120/-8)
Enable cloning for rbd-backed ephemeral disks. Currently, when using rbd as an image backend, Nova downloads the Glance image to local disk and then copies it again into rbd. This can be very slow for large images, and wastes bandwidth as well as disk space.

When the Glance image is stored in the same Ceph cluster, the data is being pulled out and pushed back in unnecessarily. Instead, create a copy-on-write clone of the image. This is fast and does not depend on the size of the image: instead of taking minutes, booting takes seconds and is not limited by the disk copy.

Add some rbd utility functions from Cinder to support cloning, and let the rbd imagebackend rely on librbd instead of the rbd command line tool for checking image existence.

Add a direct_fetch() method to the image backend, so backends like rbd can make optimizations like this. Try to use direct_fetch() for the root disk when it comes from an image, but fall back to fetch_to_raw() if direct_fetch() fails.

Instead of calling disk.get_disk_size() directly from verify_base_size(), which assumes the disk is stored locally, add a new method that is overridden by the Rbd subclass to get the disk size.

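The copy-on-write clone at the heart of this change boils down to a single librbd call. The sketch below shows the idea with the python rbd binding; the pool names, snapshot name and helper shape are illustrative assumptions, not Nova's imagebackend code.

```python
import rbd


def clone_image(cluster, glance_pool, image_name, snap_name,
                vms_pool, dest_name):
    """Clone a protected snapshot of a Glance RBD image into the vms pool.

    `cluster` is assumed to be an already-connected rados.Rados client.
    """
    src_ioctx = cluster.open_ioctx(glance_pool)
    dst_ioctx = cluster.open_ioctx(vms_pool)
    try:
        # Layering must be enabled on the clone for copy-on-write to work,
        # and the parent snapshot must already be protected.
        rbd.RBD().clone(src_ioctx, image_name, snap_name,
                        dst_ioctx, dest_name,
                        features=rbd.RBD_FEATURE_LAYERING)
    finally:
        dst_ioctx.close()
        src_ioctx.close()
```

Because the clone only references the parent snapshot's data, creating it takes roughly constant time regardless of the image size, which is why booting drops from minutes to seconds.
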
**0007_CEPH_Use_Ceph_cluster_stats_to_report_disk_info_on_RBD.patch** (nova/virt/libvirt/driver.py, +8/-3)
Use Ceph cluster stats to report disk info on RBD. Local disk statistics on compute nodes are irrelevant when ephemeral disks are stored in RBD. With RBD, local disk space is not consumed when instances are started on a compute node, yet the scheduler may refuse to schedule an instance when the combined disk usage of instances already running on the node exceeds the total disk capacity reported by the hypervisor driver.

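A minimal sketch of what reporting Ceph cluster stats can look like, using the python rados binding's get_cluster_stats(), which returns sizes in kilobytes; the hard-coded conffile and the returned key names are assumptions for the example, not the driver's exact resource fields.

```python
import rados


def rbd_disk_info(conffile='/etc/ceph/ceph.conf'):
    """Report disk capacity of the Ceph cluster instead of the local disk."""
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()
    finally:
        cluster.shutdown()
    kb_per_gb = 1024 * 1024
    return {
        'total_gb': stats['kb'] // kb_per_gb,
        'used_gb': stats['kb_used'] // kb_per_gb,
        'free_gb': stats['kb_avail'] // kb_per_gb,
    }
```
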
**0008_CEPH_Live_migration_is_broken_for_NFS_shared_storage.patch** (nova/tests/virt/libvirt/test_libvirt.py, +24/-0)
Live migration is broken for NFS shared storage. One of the new checks introduced in I2755c59b4db736151000dae351fd776d3c15ca39 is missing a check against file-based shared storage, leaving live migration broken. Bug-Ubuntu: https://launchpad.net/bugs/1346385

**0009_CEPH_count_image_files_on_NFS_as_shared_block_storage.patch** (nova/virt/libvirt/driver.py, +8/-2)
Count image files on NFS as shared block storage. If the instance path is shared between compute nodes (e.g. over NFS) and the image backend uses files placed in the instance directory (e.g. Raw, Qcow2), live migration should consider the block storage of the instances shared.

**0010_CEPH_Rename_rbb.py_to_rbd_utils.py_in_libvirt_driver_directory.patch** (nova/tests/virt/libvirt/test_imagebackend.py, +8/-8)
Rename rbd.py to rbd_utils.py in the libvirt driver directory. In the libvirt driver directory, rbd.py conflicts with the global rbd library that it imports, so rename rbd.py to rbd_utils.py.

**0011_CEPH_fix_live_migration_with_configdrive.patch** (nova/exception.py, +0/-5)
libvirt: support live migrations of instances with config drives. In the shared storage case, an instance with a config drive can only be live-migrated if the config drive is stored in the same backend as the other disks.

**0013_CEPH_uses_correct_imagebackend_for_configdrive.patch** (nova/virt/libvirt/driver.py, +17/-11)
libvirt: use the correct imagebackend for the config drive. When the config drive file is created, it must be moved to the configured imagebackend, not always the raw one. Otherwise Nova cannot boot an instance with a config drive attached when the rbd backend is configured.

**0014_CEPH_reworks_configdrive_creation.patch** (nova/virt/libvirt/driver.py, +32/-30)
libvirt: rework config drive creation. This refactors the creation of the config drive to use the same code scheme as local/ephemeral/swap disks. The imagebackend now creates the config drive like any other disk attached to the VM, which ensures that the config drive file is created where the imagebackend expects it. This also removes the assumption that the config drive was always created in the right place, which was not true for rbd and lvm.

**Update_websocketproxy_to_work_with_websockify_0.6.patch** (nova/cmd/novncproxy.py, +16/-16)
Update websocketproxy to work with websockify 0.6. Websockify version 0.6 brings with it several bugfixes and new features that affect Nova (including a fix for novncproxy zombies hanging around and better support for the Python logging framework). However, it also broke backwards compatibility due to a refactor that brought it in line with other Python socket server libraries.

This patch updates the websocket proxy code to function with websockify version 0.6 as well as websockify version 0.5.x.

This is a backport of https://review.openstack.org/#/c/91663/ to Icehouse.

**spiceproxy_config_startup.patch** (nova/cmd/spicehtml5proxy.py, +2/-2)
Revert the Spice HTML5 proxy configuration file syntax to that used for Icehouse. The patch for websockify 0.6 compatibility introduced in version 2014.1.1-5 includes an unrelated change to the configuration file syntax that is only scheduled for OpenStack Juno; this patch reverts that part of the change.

For Juno, the spicehtml5proxy_{host,port} parameters in the DEFAULT section have been moved to html5proxy_{host,port} in the spice section; see upstream commit fe02cc830f9c9e1dac234164bc1f0caa0e2072d7 for the details.

Without this patch the spice proxy refuses to start at all, because it tries to access options that are not registered in the CONF object.

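For reference, the two syntaxes differ as follows; the host and port values are just example defaults.

```ini
# Icehouse syntax, which this patch keeps working:
[DEFAULT]
spicehtml5proxy_host = 0.0.0.0
spicehtml5proxy_port = 6082

# Juno syntax that the reverted hunk expected instead:
[spice]
html5proxy_host = 0.0.0.0
html5proxy_port = 6082
```
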
**fix provider networks regression.patch** (etc/nova/policy.json, +2/-1)
Allow attaching external networks based on configurable policy. Commit da66d50010d5b1ba1d7fc9c3d59d81b6c01bb0b0 restricted attaching external networks to admin clients. This patch changes it to a policy-based check instead, with the default setting being admin only. This allows operators to configure more precisely whom they wish to allow to attach external networks, without having to give them admin access. Bug: #1352102

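In policy.json terms the change amounts to a rule along these lines (excerpt only; the rule name is believed to match the upstream change, and the default shown assumes the stock admin_api rule):

```json
{
    "network:attach_external_network": "rule:admin_api"
}
```

An operator who wants to delegate this without handing out admin access can point the rule at something else, for example "role:network_admin" (an illustrative role name).
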
**fix live migraton nfs.patch** (nova/compute/manager.py, +4/-1)
Make sure volumes are properly detected during block migration. Currently, _assert_dest_node_has_enough_disk() calls self.get_instance_disk_info(instance['name']), which means that get_instance_disk_info() has a block_device_info parameter equal to None, and so does _get_instance_disk_info(). In the end, block_device_info_get_mapping() returns an empty list, and the 'volume_devices' variable is an empty set. Ultimately, this prevents volume devices from being properly detected in _get_instance_disk_info(), and Nova tries to migrate them as well, even though they should not be migrated.

Fix this issue by passing 'block_device_info' to check_can_live_migrate_source() and having it propagated all the way to _get_instance_disk_info(). Bug-Ubuntu: https://launchpad.net/bugs/1356552. Author: Cyril Roelandt <cyril.roelandt@enovance.com>

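A rough sketch of the failure mode, with block_device_info_get_mapping() mirroring the helper named above; the dictionary shape is an assumption for illustration, not Nova's exact block device mapping format.

```python
def block_device_info_get_mapping(block_device_info):
    block_device_info = block_device_info or {}
    return block_device_info.get('block_device_mapping') or []


def volume_device_names(block_device_info):
    # When the caller passes block_device_info=None, the mapping above is
    # empty, so this set stays empty and attached volumes end up being
    # treated as local disks that must be block-migrated.
    return set(vol['mount_device'].rpartition('/')[2]
               for vol in block_device_info_get_mapping(block_device_info))
```

Propagating the real block_device_info down the call chain populates this set, so the attached volumes are correctly excluded.
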
**9990_update_german_programm_messages.patch** (nova/locale/de/LC_MESSAGES/nova.po, +7098/-10076)
Update the German translation in the upstream .po file.

**CVE-2014-7230_CVE-2014-7231_Sync_process_utils_from_oslo.patch** (nova/openstack/common/processutils.py, +9/-5)
Sync process utils from oslo. This patch backports the missing change to fix the ssh_execute password leak. The sync pulls in the following change: 105169f8 - Mask passwords in exceptions and error messages (SSH).

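The gist of the masking, sketched without the oslo code: scrub anything that looks like a password from a command's output before it is embedded in an exception message. The regex and helpers below are illustrative, not the oslo implementation.

```python
import re

_PASSWORD_RE = re.compile(r"(password\s*[=:]\s*)\S+", re.IGNORECASE)


def mask_password(message, secret='***'):
    """Replace whatever follows 'password=' or 'password:' with secret."""
    return _PASSWORD_RE.sub(r'\g<1>' + secret, message)


def describe_failure(cmd, exit_code, stdout, stderr):
    # Build the error text first, then mask it, so a password passed on the
    # command line or echoed by the remote side never reaches logs or
    # exception messages.
    return mask_password('Command %r failed (exit %d): stdout=%r stderr=%r'
                         % (cmd, exit_code, stdout, stderr))
```
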
**CVE-2014-3708_Fixes_DOS_issue_in_instance_list_ip_filter.patch** (nova/compute/api.py, +22/-8)
Fixes DoS issue in the instance list IP filter. Converts the IP filtering to filter the list locally based on the network info cache, instead of making an extremely expensive call over to nova-network where it attempts to retrieve a list of every instance in the system.

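A sketch of the local-filtering idea; the shape of the cached network information (a plain list of IP strings per instance) is an assumption for the example, not Nova's actual instance info cache model.

```python
import re


def filter_instances_by_ip(instances, ip_regex):
    """Keep only instances whose cached fixed IPs match ip_regex."""
    pattern = re.compile(ip_regex)
    matched = []
    for instance in instances:
        # instance['fixed_ips'] stands in for the per-instance network info
        # cache that the API node already has locally, so no call to
        # nova-network (and no system-wide instance listing) is needed.
        if any(pattern.match(ip) for ip in instance.get('fixed_ips', [])):
            matched.append(instance)
    return matched
```
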
**CVE-2014-8333_Fix_VM_leak_when_deletion_of_VM_during_resizing.patch** (nova/tests/virt/vmwareapi/test_driver_api.py, +40/-0)
CVE-2014-8333: VMware: fix VM leak when a VM is deleted during resizing. During VM resizing, before the VM reaches the RESIZED state, the driver's migrate_disk_and_power_off initially renames the original VM 'uuid' to 'uuid-orig' and clones a new VM named 'uuid'. When a VM deletion is triggered in this time window, the 'uuid-orig' VM in vCenter cannot be deleted, causing a VM leak. As the VM task state will be set to 'deleting' and cannot be used to determine the resize migrating/migrated state, this fix attempts to delete the original VM within the destroy phase.

NOTE: the aforementioned patch broke Minesweeper. The fix was therefore also cherry-picked from commit e464bc518e8590d59c2741948466777982ca3319, which does two things: 1. solve the actual bug; 2. ensure that the unit tests and Minesweeper pass.

**avoid_changing_UUID_when_redefining_nwfilters.patch** (nova/tests/virt/libvirt/test_libvirt.py, +38/-3)
libvirt: avoid changing the UUID when redefining nwfilters. X-Git-Tag: 2014.2.rc1~50^2

**CVE-2015-0259_Websocket_Proxy_should_verify_Origin_header_icehouse-debian.patch** (nova/console/websocketproxy.py, +39/-0)
The websocket proxy should verify the Origin header. If the Origin HTTP header passed in the WebSocket handshake does not match the host, this could indicate an attempt at a cross-site attack. This commit adds a check to verify that the origin matches the host.

Note from the maintainer: the final patch is a combination of https://review.openstack.org/#/c/163035/ (for Icehouse) and https://review.openstack.org/#/c/163034/, as Nova Icehouse in Debian is patched to work with Websockify 0.6.

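The check itself is small; the standard-library sketch below (Python 2 era, hence urlparse) shows the idea of comparing the Origin host against the expected host, and is not the exact Nova/websockify handler code.

```python
from urlparse import urlparse


def origin_is_allowed(origin_header, host_header,
                      allowed_schemes=('http', 'https')):
    """Reject WebSocket handshakes whose Origin does not match the host."""
    if not origin_header:
        # Non-browser clients may omit Origin entirely; the real check has
        # to decide separately how to treat that case.
        return True
    origin = urlparse(origin_header)
    if origin.scheme not in allowed_schemes:
        return False
    # host_header may carry a port, e.g. "novnc.example.com:6080".
    expected_host = host_header.split(':')[0] if host_header else ''
    return origin.hostname == expected_host
```
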