Package: nova / 2:31.0.0-7
Metadata
Package | Version | Patches format |
---|---|---|
nova | 2:31.0.0-7 | 3.0 (quilt) |
Patch series
Each entry below lists the patch file, the changed file with its delta (total lines changed: + added, - removed, ! modified), and the patch description.
Install missed files.patch (download)
MANIFEST.in | 11 lines: +11, -0, !0
Install missed files.

remove svg converter from doc conf.py.patch (download)
doc/source/conf.py | 1 line: +0, -1, !0
Remove sphinxcontrib.rsvgconverter from doc/source/conf.py.

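For illustration only, a hedged sketch of the kind of edit this implies in doc/source/conf.py, assuming a typical Sphinx extensions list; the other entry shown is a placeholder, not taken from nova's actual conf.py.

```python
# Hypothetical excerpt of a Sphinx conf.py extensions list; the patch removes
# the SVG converter so the docs build no longer depends on rsvg-convert.
extensions = [
    'sphinx.ext.autodoc',             # placeholder entry
    # 'sphinxcontrib.rsvgconverter',  # dropped by this patch
]
```
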
Add a healtcheck url.patch (download)
etc/nova/api-paste.ini | 15 lines: +10, -5, !0
Add a /healthcheck URL. This is useful for operators to configure HAProxy and for monitoring.

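A minimal sketch of an operator-side probe against the endpoint this patch adds, assuming nova-api answers on its default port 8774 and that /healthcheck returns HTTP 200 when healthy; the host name, port and timeout are illustrative, not part of the patch.

```python
import urllib.request


def nova_api_healthy(base_url="http://controller:8774", timeout=5):
    """Return True if the /healthcheck endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthcheck",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection failures, timeouts and HTTP errors all count as unhealthy.
        return False
```

The same URL is what an HAProxy `option httpchk GET /healthcheck` health check would poll.
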
fix exception.NovaException.patch (download)
nova/virt/disk/api.py | 4 lines: +2, -2, !0
Fix exception.NovaException.

Add context switch chance to other thread during get_available_resources.patch (download)
nova/virt/libvirt/driver.py | 24 lines: +15, -9, !0
Add a context-switch chance for other threads during get_available_resources. The get_available_resources method checks the host's resource usage by connecting to libvirt. The libvirt connection uses the libvirt Python bindings, and the connection handling is implemented in C, so the eventlet greenthread cannot notice the network activity and does not trigger a thread context switch while nova-compute talks to libvirt. If one hypervisor hosts 50 or more instances and libvirt is slow for any reason, the lack of context switches causes nova-compute to be reported as down and leads to other failures, since other tasks get no chance to run.

This commit adds greenthread.sleep(0) in the middle of the for loop that forms the long-running, no-context-switch section. This sleep(0) lets other tasks run even when the resource check takes a long time; the forced context switch prevents heartbeat operations from being starved.

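A minimal sketch of the cooperative-yield pattern described above, assuming an eventlet-based loop over libvirt guests; the function and variable names are illustrative, not the actual nova/virt/libvirt/driver.py code.

```python
from eventlet import greenthread


def collect_guest_stats(guests):
    """Gather per-guest data while letting other greenthreads run."""
    stats = []
    for guest in guests:
        # Each libvirt call blocks inside the C bindings, so eventlet cannot
        # preempt it on its own.
        stats.append(guest.get_info())
        # Yield explicitly so heartbeats and other periodic tasks get a turn
        # between guests instead of waiting for the whole loop to finish.
        greenthread.sleep(0)
    return stats
```
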
Fix neutron client dict grabbing.patch (download)
nova/network/neutron.py | 6 lines: +4, -2, !0
Fix neutron client dict grabbing. Due to a bug in Python 3.13 [1], the following code leads to the GC emptying a dict even though we still hold a reference to it:

```python
import gc


class A:
    def __init__(self, client):
        self.__dict__ = client.__dict__
        self.client = client


class B:
    def __init__(self):
        self.test_attr = "foo"


a = A(B())
print(a.__dict__)
print(a.client.__dict__)
gc.collect()
print("## After gc.collect()")
print(a.__dict__)
print(a.client.__dict__)
```

```
# Output with Python 3.13
{'test_attr': 'foo', 'client': <__main__.B object at 0x73ea355a8590>}
{'test_attr': 'foo', 'client': <__main__.B object at 0x73ea355a8590>}
## After gc.collect()
{'test_attr': 'foo', 'client': <__main__.B object at 0x73ea355a8590>}
{}

# Output with Python 3.12
{'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
{'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
## After gc.collect()
{'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
{'test_attr': 'foo', 'client': <__main__.B object at 0x79c86f355400>}
```

Our neutron client has this kind of code and therefore fails on Python 3.13. This patch adds __getattr__ instead of trying to hold a direct reference to the __dict__; this seems to work around the problem.

[1] https://github.com/python/cpython/issues/130327

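A minimal sketch of the __getattr__ delegation mentioned in the paragraph above, assuming a thin wrapper that only needs to forward attribute access to the wrapped client; the class name is illustrative, not the actual nova/network/neutron.py code.

```python
class ClientWrapper:
    """Delegate attribute access instead of aliasing the client's __dict__."""

    def __init__(self, client):
        self.client = client

    def __getattr__(self, name):
        # Invoked only when normal lookup on the wrapper fails, so wrapper
        # attributes such as self.client resolve as usual.
        return getattr(self.client, name)
```

Because the wrapper no longer shares the client's __dict__, the Python 3.13 garbage-collector behaviour shown above has nothing to empty out from under it.
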
OSSN 0094_restrict_swap_volume_to_cinder.patch (download)
api-ref/source/os-volume-attachments.inc | 20 lines: +10, -10, !0
Restrict swap volume to cinder. This change tightens the validation around the attachment update API to ensure that it can only be called if the source volume has a non-empty migration status. That means a request to swap the volume is only accepted if it is the result of a cinder volume migration.

This change is being made to prevent the instance domain XML from getting out of sync with the nova BDM records and the cinder connection info. In the future, support for direct swap volume actions can be re-added if and only if the nova libvirt driver is updated to correctly modify the domain. The libvirt driver is the only driver that supported this API outside of a cinder-orchestrated swap volume.

If the domain XML and BDMs are allowed to get out of sync and an admin later live-migrates the VM, the host path will not be updated for the destination host. Normally this results in a live migration failure, which often prompts the admin to cold migrate instead; however, if the source device path exists on the destination, the migration will proceed. This can lead to two VMs using the same host block device. At best this will cause a crash or data corruption; at worst it will allow one guest to access the data of another.

Prior to this change there was an explicit warning in the nova API reference stating that humans should never call this API because it can lead to this situation. Now it is considered a hard error due to the security implications.

Bug: https://launchpad.net/bugs/2112187
Depends-on: https://review.opendev.org/c/openstack/tempest/+/957753

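A minimal sketch of the validation rule described above, assuming the source volume record exposes a migration_status field as stated; the function name and exception type are illustrative, not the actual nova API code.

```python
def validate_swap_volume_request(source_volume):
    """Allow an attachment update only as part of a cinder volume migration."""
    migration_status = source_volume.get('migration_status')
    if not migration_status:
        # Direct, human-initiated swap volume requests are now rejected.
        raise ValueError(
            "swap volume is only permitted while cinder is migrating the "
            "source volume (its migration_status is empty)")
    return migration_status
```
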