Package: nova / 2:14.0.0-4+deb9u1

CVE-2017-17051_Refined_fix_for_validating_image_on_rebuild.patch
Author: Dan Smith <dansmith@redhat.com>
Date: Fri, 17 Nov 2017 12:27:34 -0800
Description: CVE-2017-17051 Refined fix for validating image on rebuild
 This aims to fix the issue described in bug 1664931 where a rebuild
 fails to validate the existing host with the scheduler when a new
 image is provided. The previous attempt to do this could cause rebuilds
 to fail unnecessarily because we ran _all_ of the filters during a
 rebuild, which could cause usage/resource filters to prevent an otherwise
 valid rebuild from succeeding.
 .
 This aims to classify filters as useful for rebuild or not, and only apply
 the former during a rebuild scheduler check. We do this by using an internal
 scheduler hint, indicating our intent. This should (a) filter out
 all hosts other than the one we're running on and (b) be detectable by
 the filtering infrastructure as an internally-generated scheduling request
 in order to trigger the correct filtering behavior.
 .
 Conflicts:
      nova/scheduler/utils.py
      nova/tests/unit/compute/test_compute_api.py
 .
 NOTE(mriedem): The conflicts are due to not having
 7d0381c91a6ba8a45ae6527f046f382166eb158d or
 4a7502a5c9e84a8c8cef7f355d72425b26b8c379 in Newton.
 .
 (cherry picked from commit f7c688b8ef88a7390f5b09719a2b3e80368438c0)
 (cherry picked from commit b29a461a8bc05c9b171c0574abb2e7e5b62a2ed7)
 (cherry picked from commit bbfc4230efe3299fa51f9451f54062f32590ed3d)
Bug-Ubuntu: https://bugs.launchpad.net/nova/+bug/1664931
Change-Id: I1a46ef1503be2febcd20f4594f44344d05525446
Origin: upstream, https://review.openstack.org/523434
Last-Update: 2017-12-06
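Comment: The interaction between the internal scheduler hint and the
 per-filter RUN_ON_REBUILD flag can be sketched roughly as follows. This
 is a self-contained illustration, not the patched code: the FakeSpec
 class, the example filters, and their host attributes are invented for
 the sketch; only the '_nova_check_type' == ['rebuild'] convention and
 the skip-unless-RUN_ON_REBUILD logic come from the patch itself.

```python
class FakeSpec:
    """Stand-in for objects.RequestSpec; only carries scheduler_hints."""
    def __init__(self, hints=None):
        self.scheduler_hints = hints or {}


def request_is_rebuild(spec):
    # Mirrors the helper added to nova/scheduler/utils.py: a request is
    # an internal rebuild check iff the hint is exactly ['rebuild'].
    if spec is None or not spec.scheduler_hints:
        return False
    return spec.scheduler_hints.get('_nova_check_type') == ['rebuild']


class BaseFilter:
    # Filters default to being skipped on rebuild, as in the patched
    # BaseHostFilter; a skipped filter passes the host unconditionally.
    RUN_ON_REBUILD = False

    def filter_one(self, host, spec):
        if not self.RUN_ON_REBUILD and request_is_rebuild(spec):
            return True
        return self.host_passes(host, spec)


class PolicyFilter(BaseFilter):
    # Policy-style filter (cf. ImagePropertiesFilter): still enforced
    # on rebuild, since the new image may be invalid for the host.
    RUN_ON_REBUILD = True

    def host_passes(self, host, spec):
        return host.get('arch') == 'x86_64'


class UsageFilter(BaseFilter):
    # Usage-style filter (cf. RamFilter): skipped on rebuild, since a
    # rebuild claims no new resources on the existing host.
    def host_passes(self, host, spec):
        return host.get('free_ram_mb', 0) >= 512


rebuild_spec = FakeSpec({'_nova_check_type': ['rebuild']})
busy_host = {'arch': 'x86_64', 'free_ram_mb': 0}

print(UsageFilter().filter_one(busy_host, rebuild_spec))   # True (skipped)
print(PolicyFilter().filter_one(busy_host, rebuild_spec))  # True (enforced)
print(UsageFilter().filter_one(busy_host, FakeSpec()))     # False (normal build)
```

 This is why a rebuild on a host that is full (or disabled) no longer
 fails spuriously, while an image that the host genuinely cannot run is
 still rejected.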

Index: nova/nova/compute/api.py
===================================================================
--- nova.orig/nova/compute/api.py
+++ nova/nova/compute/api.py
@@ -2809,16 +2809,27 @@ class API(base.Base):
             # through the scheduler again, but we want the instance to be
             # rebuilt on the same host it's already on.
             if orig_image_ref != image_href:
-                request_spec.requested_destination = objects.Destination(
-                    host=instance.host,
-                    node=instance.node)
                 # We have to modify the request spec that goes to the scheduler
                 # to contain the new image. We persist this since we've already
                 # changed the instance.image_ref above so we're being
                 # consistent.
                 request_spec.image = objects.ImageMeta.from_dict(image)
                 request_spec.save()
-                host = None     # This tells conductor to call the scheduler.
+                if 'scheduler_hints' not in request_spec:
+                    request_spec.scheduler_hints = {}
+                # Nuke the id on this so we can't accidentally save
+                # this hint hack later
+                del request_spec.id
+
+                # NOTE(danms): Passing host=None tells conductor to
+                # call the scheduler. The _nova_check_type hint
+                # requires that the scheduler returns only the same
+                # host that we are currently on and only checks
+                # rebuild-related filters.
+                request_spec.scheduler_hints['_nova_check_type'] = ['rebuild']
+                request_spec.force_hosts = [instance.host]
+                request_spec.force_nodes = [instance.node]
+                host = None
         except exception.RequestSpecNotFound:
             # Some old instances can still have no RequestSpec object attached
             # to them, we need to support the old way
Index: nova/nova/scheduler/filters/__init__.py
===================================================================
--- nova.orig/nova/scheduler/filters/__init__.py
+++ nova/nova/scheduler/filters/__init__.py
@@ -21,9 +21,27 @@ from nova import filters
 
 class BaseHostFilter(filters.BaseFilter):
     """Base class for host filters."""
-    def _filter_one(self, obj, filter_properties):
+
+    # This is set to True if this filter should be run for rebuild.
+    # For example, with rebuild, we need to ask the scheduler if the
+    # existing host is still legit for a rebuild with the new image and
+    # other parameters. We care about running policy filters (i.e.
+    # ImagePropertiesFilter) but not things that check usage on the
+    # existing compute node, etc.
+    RUN_ON_REBUILD = False
+
+    def _filter_one(self, obj, spec):
         """Return True if the object passes the filter, otherwise False."""
-        return self.host_passes(obj, filter_properties)
+        # Do this here so we don't get scheduler.filters.utils
+        from nova.scheduler import utils
+        if not self.RUN_ON_REBUILD and utils.request_is_rebuild(spec):
+            # If we don't filter, default to passing the host.
+            return True
+        else:
+            # We are either a rebuild filter, in which case we always run,
+            # or this request is not rebuild in which case all filters
+            # should run.
+            return self.host_passes(obj, spec)
 
     def host_passes(self, host_state, filter_properties):
         """Return True if the HostState passes the filter, otherwise False.
Index: nova/nova/scheduler/filters/affinity_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/affinity_filter.py
+++ nova/nova/scheduler/filters/affinity_filter.py
@@ -29,6 +29,8 @@ class DifferentHostFilter(filters.BaseHo
     # The hosts the instances are running on doesn't change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         affinity_uuids = spec_obj.get_scheduler_hint('different_host')
         if affinity_uuids:
@@ -45,6 +47,8 @@ class SameHostFilter(filters.BaseHostFil
     # The hosts the instances are running on doesn't change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         affinity_uuids = spec_obj.get_scheduler_hint('same_host')
         if affinity_uuids:
@@ -59,6 +63,8 @@ class SimpleCIDRAffinityFilter(filters.B
     # The address of a host doesn't change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         affinity_cidr = spec_obj.get_scheduler_hint('cidr', '/24')
         affinity_host_addr = spec_obj.get_scheduler_hint('build_near_host_ip')
@@ -77,6 +83,9 @@ class _GroupAntiAffinityFilter(filters.B
     """Schedule the instance on a different host from a set of group
     hosts.
     """
+
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         # Only invoke the filter if 'anti-affinity' is configured
         policies = (spec_obj.instance_group.policies
@@ -110,6 +119,9 @@ class ServerGroupAntiAffinityFilter(_Gro
 class _GroupAffinityFilter(filters.BaseHostFilter):
     """Schedule the instance on to host from a set of group hosts.
     """
+
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         # Only invoke the filter if 'affinity' is configured
         policies = (spec_obj.instance_group.policies
Index: nova/nova/scheduler/filters/aggregate_image_properties_isolation.py
===================================================================
--- nova.orig/nova/scheduler/filters/aggregate_image_properties_isolation.py
+++ nova/nova/scheduler/filters/aggregate_image_properties_isolation.py
@@ -32,6 +32,8 @@ class AggregateImagePropertiesIsolation(
     # Aggregate data and instance type does not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = True
+
     def host_passes(self, host_state, spec_obj):
         """Checks a host in an aggregate that metadata key/value match
         with image properties.
Index: nova/nova/scheduler/filters/aggregate_instance_extra_specs.py
===================================================================
--- nova.orig/nova/scheduler/filters/aggregate_instance_extra_specs.py
+++ nova/nova/scheduler/filters/aggregate_instance_extra_specs.py
@@ -33,6 +33,8 @@ class AggregateInstanceExtraSpecsFilter(
     # Aggregate data and instance type does not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Return a list of hosts that can create instance_type
 
Index: nova/nova/scheduler/filters/aggregate_multitenancy_isolation.py
===================================================================
--- nova.orig/nova/scheduler/filters/aggregate_multitenancy_isolation.py
+++ nova/nova/scheduler/filters/aggregate_multitenancy_isolation.py
@@ -28,6 +28,8 @@ class AggregateMultiTenancyIsolation(fil
     # Aggregate data and tenant do not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """If a host is in an aggregate that has the metadata key
         "filter_tenant_id" it can only create instances from that tenant(s).
Index: nova/nova/scheduler/filters/all_hosts_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/all_hosts_filter.py
+++ nova/nova/scheduler/filters/all_hosts_filter.py
@@ -23,5 +23,7 @@ class AllHostsFilter(filters.BaseHostFil
     # list of hosts doesn't change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         return True
Index: nova/nova/scheduler/filters/availability_zone_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/availability_zone_filter.py
+++ nova/nova/scheduler/filters/availability_zone_filter.py
@@ -35,6 +35,8 @@ class AvailabilityZoneFilter(filters.Bas
     # Availability zones do not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         availability_zone = spec_obj.availability_zone
 
Index: nova/nova/scheduler/filters/compute_capabilities_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/compute_capabilities_filter.py
+++ nova/nova/scheduler/filters/compute_capabilities_filter.py
@@ -30,6 +30,8 @@ class ComputeCapabilitiesFilter(filters.
     # Instance type and host capabilities do not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def _get_capabilities(self, host_state, scope):
         cap = host_state
         for index in range(0, len(scope)):
Index: nova/nova/scheduler/filters/compute_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/compute_filter.py
+++ nova/nova/scheduler/filters/compute_filter.py
@@ -25,6 +25,8 @@ LOG = logging.getLogger(__name__)
 class ComputeFilter(filters.BaseHostFilter):
     """Filter on active Compute nodes."""
 
+    RUN_ON_REBUILD = False
+
     def __init__(self):
         self.servicegroup_api = servicegroup.API()
 
Index: nova/nova/scheduler/filters/core_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/core_filter.py
+++ nova/nova/scheduler/filters/core_filter.py
@@ -26,6 +26,8 @@ LOG = logging.getLogger(__name__)
 
 class BaseCoreFilter(filters.BaseHostFilter):
 
+    RUN_ON_REBUILD = False
+
     def _get_cpu_allocation_ratio(self, host_state, spec_obj):
         raise NotImplementedError
 
Index: nova/nova/scheduler/filters/disk_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/disk_filter.py
+++ nova/nova/scheduler/filters/disk_filter.py
@@ -28,6 +28,8 @@ CONF = nova.conf.CONF
 class DiskFilter(filters.BaseHostFilter):
     """Disk Filter with over subscription flag."""
 
+    RUN_ON_REBUILD = False
+
     def _get_disk_allocation_ratio(self, host_state, spec_obj):
         return host_state.disk_allocation_ratio
 
@@ -80,6 +82,8 @@ class AggregateDiskFilter(DiskFilter):
     found.
     """
 
+    RUN_ON_REBUILD = False
+
     def _get_disk_allocation_ratio(self, host_state, spec_obj):
         aggregate_vals = utils.aggregate_values_from_key(
             host_state,
Index: nova/nova/scheduler/filters/exact_core_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/exact_core_filter.py
+++ nova/nova/scheduler/filters/exact_core_filter.py
@@ -25,6 +25,8 @@ LOG = logging.getLogger(__name__)
 class ExactCoreFilter(filters.BaseHostFilter):
     """Exact Core Filter."""
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Return True if host has the exact number of CPU cores."""
         if not host_state.vcpus_total:
Index: nova/nova/scheduler/filters/exact_disk_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/exact_disk_filter.py
+++ nova/nova/scheduler/filters/exact_disk_filter.py
@@ -23,6 +23,8 @@ LOG = logging.getLogger(__name__)
 class ExactDiskFilter(filters.BaseHostFilter):
     """Exact Disk Filter."""
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Return True if host has the exact amount of disk available."""
         requested_disk = (1024 * (spec_obj.root_gb +
Index: nova/nova/scheduler/filters/exact_ram_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/exact_ram_filter.py
+++ nova/nova/scheduler/filters/exact_ram_filter.py
@@ -23,6 +23,8 @@ LOG = logging.getLogger(__name__)
 class ExactRamFilter(filters.BaseHostFilter):
     """Exact RAM Filter."""
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Return True if host has the exact amount of RAM available."""
         requested_ram = spec_obj.memory_mb
Index: nova/nova/scheduler/filters/image_props_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/image_props_filter.py
+++ nova/nova/scheduler/filters/image_props_filter.py
@@ -37,6 +37,8 @@ class ImagePropertiesFilter(filters.Base
     contained in the image dictionary in the request_spec.
     """
 
+    RUN_ON_REBUILD = True
+
     # Image Properties and Compute Capabilities do not change within
     # a request
     run_filter_once_per_request = True
Index: nova/nova/scheduler/filters/io_ops_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/io_ops_filter.py
+++ nova/nova/scheduler/filters/io_ops_filter.py
@@ -28,6 +28,8 @@ CONF = nova.conf.CONF
 class IoOpsFilter(filters.BaseHostFilter):
     """Filter out hosts with too many concurrent I/O operations."""
 
+    RUN_ON_REBUILD = False
+
     def _get_max_io_ops_per_host(self, host_state, spec_obj):
         return CONF.max_io_ops_per_host
 
Index: nova/nova/scheduler/filters/isolated_hosts_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/isolated_hosts_filter.py
+++ nova/nova/scheduler/filters/isolated_hosts_filter.py
@@ -25,6 +25,8 @@ class IsolatedHostsFilter(filters.BaseHo
     # The configuration values do not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = True
+
     def host_passes(self, host_state, spec_obj):
         """Result Matrix with 'restrict_isolated_hosts_to_isolated_images' set
         to True::
Index: nova/nova/scheduler/filters/json_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/json_filter.py
+++ nova/nova/scheduler/filters/json_filter.py
@@ -26,6 +26,9 @@ class JsonFilter(filters.BaseHostFilter)
     """Host Filter to allow simple JSON-based grammar for
     selecting hosts.
     """
+
+    RUN_ON_REBUILD = False
+
     def _op_compare(self, args, op):
         """Returns True if the specified operator can successfully
         compare the first item in the args with all the rest. Will
Index: nova/nova/scheduler/filters/metrics_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/metrics_filter.py
+++ nova/nova/scheduler/filters/metrics_filter.py
@@ -32,6 +32,8 @@ class MetricsFilter(filters.BaseHostFilt
     these hosts.
     """
 
+    RUN_ON_REBUILD = False
+
     def __init__(self):
         super(MetricsFilter, self).__init__()
         opts = utils.parse_options(CONF.metrics.weight_setting,
Index: nova/nova/scheduler/filters/num_instances_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/num_instances_filter.py
+++ nova/nova/scheduler/filters/num_instances_filter.py
@@ -28,6 +28,8 @@ CONF = nova.conf.CONF
 class NumInstancesFilter(filters.BaseHostFilter):
     """Filter out hosts with too many instances."""
 
+    RUN_ON_REBUILD = False
+
     def _get_max_instances_per_host(self, host_state, spec_obj):
         return CONF.max_instances_per_host
 
Index: nova/nova/scheduler/filters/numa_topology_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/numa_topology_filter.py
+++ nova/nova/scheduler/filters/numa_topology_filter.py
@@ -23,6 +23,8 @@ LOG = logging.getLogger(__name__)
 class NUMATopologyFilter(filters.BaseHostFilter):
     """Filter on requested NUMA topology."""
 
+    RUN_ON_REBUILD = True
+
     def _satisfies_cpu_policy(self, host_state, extra_specs, image_props):
         """Check that the host_state provided satisfies any available
         CPU policy requirements.
Index: nova/nova/scheduler/filters/pci_passthrough_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/pci_passthrough_filter.py
+++ nova/nova/scheduler/filters/pci_passthrough_filter.py
@@ -40,6 +40,8 @@ class PciPassthroughFilter(filters.BaseH
 
     """
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Return true if the host has the required PCI devices."""
         pci_requests = spec_obj.pci_requests
Index: nova/nova/scheduler/filters/ram_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/ram_filter.py
+++ nova/nova/scheduler/filters/ram_filter.py
@@ -25,6 +25,8 @@ LOG = logging.getLogger(__name__)
 
 class BaseRamFilter(filters.BaseHostFilter):
 
+    RUN_ON_REBUILD = False
+
     def _get_ram_allocation_ratio(self, host_state, spec_obj):
         raise NotImplementedError
 
Index: nova/nova/scheduler/filters/retry_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/retry_filter.py
+++ nova/nova/scheduler/filters/retry_filter.py
@@ -26,6 +26,10 @@ class RetryFilter(filters.BaseHostFilter
     purposes
     """
 
+    # NOTE(danms): This does not affect _where_ an instance lands, so not
+    # related to rebuild.
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Skip nodes that have already been attempted."""
         retry = spec_obj.retry
Index: nova/nova/scheduler/filters/trusted_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/trusted_filter.py
+++ nova/nova/scheduler/filters/trusted_filter.py
@@ -229,6 +229,8 @@ class ComputeAttestation(object):
 class TrustedFilter(filters.BaseHostFilter):
     """Trusted filter to support Trusted Compute Pools."""
 
+    RUN_ON_REBUILD = False
+
     def __init__(self):
         self.compute_attestation = ComputeAttestation()
         LOG.warning(_LW('The TrustedFilter is considered experimental '
Index: nova/nova/scheduler/filters/type_filter.py
===================================================================
--- nova.orig/nova/scheduler/filters/type_filter.py
+++ nova/nova/scheduler/filters/type_filter.py
@@ -25,6 +25,8 @@ class TypeAffinityFilter(filters.BaseHos
     (spread) set to 1 (default).
     """
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         """Dynamically limits hosts to one instance type
 
@@ -48,6 +50,8 @@ class AggregateTypeAffinityFilter(filter
     # Aggregate data does not change within a request
     run_filter_once_per_request = True
 
+    RUN_ON_REBUILD = False
+
     def host_passes(self, host_state, spec_obj):
         instance_type = spec_obj.flavor
 
Index: nova/nova/scheduler/host_manager.py
===================================================================
--- nova.orig/nova/scheduler/host_manager.py
+++ nova/nova/scheduler/host_manager.py
@@ -552,8 +552,13 @@ class HostManager(object):
                 _match_forced_hosts(name_to_cls_map, force_hosts)
             if force_nodes:
                 _match_forced_nodes(name_to_cls_map, force_nodes)
-            if force_hosts or force_nodes:
-                # NOTE(deva): Skip filters when forcing host or node
+            check_type = ('scheduler_hints' in spec_obj and
+                          spec_obj.scheduler_hints.get('_nova_check_type'))
+            if not check_type and (force_hosts or force_nodes):
+                # NOTE(deva,dansmith): Skip filters when forcing host or node
+                # unless we've declared the internal check type flag, in which
+                # case we're asking for a specific host and for filtering to
+                # be done.
                 if name_to_cls_map:
                     return name_to_cls_map.values()
                 else:
Index: nova/nova/scheduler/utils.py
===================================================================
--- nova.orig/nova/scheduler/utils.py
+++ nova/nova/scheduler/utils.py
@@ -382,3 +382,16 @@ def retry_on_timeout(retries=1):
     return outer
 
 retry_select_destinations = retry_on_timeout(CONF.scheduler_max_attempts - 1)
+
+
+def request_is_rebuild(spec_obj):
+    """Returns True if request is for a rebuild.
+
+    :param spec_obj: An objects.RequestSpec to examine (or None).
+    """
+    if not spec_obj:
+        return False
+    if 'scheduler_hints' not in spec_obj:
+        return False
+    check_type = spec_obj.scheduler_hints.get('_nova_check_type')
+    return check_type == ['rebuild']
Index: nova/nova/tests/functional/test_servers.py
===================================================================
--- nova.orig/nova/tests/functional/test_servers.py
+++ nova/nova/tests/functional/test_servers.py
@@ -35,6 +35,7 @@ from nova.tests.unit.api.openstack impor
 from nova.tests.unit import fake_block_device
 from nova.tests.unit import fake_network
 import nova.tests.unit.image.fake
+from nova.virt import fake
 from nova import volume
 
 
@@ -840,6 +841,16 @@ class ServerRebuildTestCase(integrated_h
         self.flags(scheduler_default_filters=['ImagePropertiesFilter'])
         return self.start_service('scheduler')
 
+    def _disable_compute_for(self, server):
+        # Refresh to get its host
+        server = self.api.get_server(server['id'])
+        host = server['OS-EXT-SRV-ATTR:host']
+
+        # Disable the service it is on
+        self.api_fixture.admin_api.api_put('/os-services/disable',
+                                           {'host': host,
+                                            'binary': 'nova-compute'})
+
     def test_rebuild_with_image_novalidhost(self):
         """Creates a server with an image that is valid for the single compute
         that we have. Then rebuilds the server, passing in an image with
@@ -847,6 +858,12 @@ class ServerRebuildTestCase(integrated_h
         a NoValidHost error. The ImagePropertiesFilter filter is enabled by
         default so that should filter out the host based on the image meta.
         """
+
+        fake.set_nodes(['host2'])
+        self.addCleanup(fake.restore_nodes)
+        self.flags(host='host2')
+        self.compute2 = self.start_service('compute', host='host2')
+
         server_req_body = {
             'server': {
                 # We hard-code from a fake image since we can't get images
@@ -861,6 +878,11 @@ class ServerRebuildTestCase(integrated_h
         }
         server = self.api.post_server(server_req_body)
         self._wait_for_state_change(self.api, server, 'ACTIVE')
+
+        # Disable the host we're on so ComputeFilter would have ruled it out
+        # normally
+        self._disable_compute_for(server)
+
         # Now update the image metadata to be something that won't work with
         # the fake compute driver we're using since the fake driver has an
         # "x86_64" architecture.
Index: nova/nova/tests/unit/compute/test_compute_api.py
===================================================================
--- nova.orig/nova/tests/unit/compute/test_compute_api.py
+++ nova/nova/tests/unit/compute/test_compute_api.py
@@ -3012,6 +3012,7 @@ class _ComputeAPIUnitTestMixIn(object):
                 system_metadata=orig_system_metadata,
                 expected_attrs=['system_metadata'],
                 image_ref=orig_image_href,
+                node='node',
                 vm_mode=vm_mode.HVM)
         flavor = instance.get_flavor()
 
@@ -3023,7 +3024,7 @@ class _ComputeAPIUnitTestMixIn(object):
         _get_image.side_effect = get_image
         bdm_get_by_instance_uuid.return_value = bdms
 
-        fake_spec = objects.RequestSpec()
+        fake_spec = objects.RequestSpec(id=1)
         req_spec_get_by_inst_uuid.return_value = fake_spec
 
         with mock.patch.object(self.compute_api.compute_task_api,
@@ -3041,10 +3042,9 @@ class _ComputeAPIUnitTestMixIn(object):
             # assert the request spec was modified so the scheduler picks
             # the existing instance host/node
             req_spec_save.assert_called_once_with()
-            self.assertIn('requested_destination', fake_spec)
-            requested_destination = fake_spec.requested_destination
-            self.assertEqual(instance.host, requested_destination.host)
-            self.assertEqual(instance.node, requested_destination.node)
+            self.assertIn('_nova_check_type', fake_spec.scheduler_hints)
+            self.assertEqual('rebuild',
+                             fake_spec.scheduler_hints['_nova_check_type'][0])
 
         _check_auto_disk_config.assert_called_once_with(image=new_image)
         _checks_for_create_and_rebuild.assert_called_once_with(self.context,
Index: nova/releasenotes/notes/bug-1664931-refine-validate-image-rebuild-6d730042438eec10.yaml
===================================================================
--- /dev/null
+++ nova/releasenotes/notes/bug-1664931-refine-validate-image-rebuild-6d730042438eec10.yaml
@@ -0,0 +1,20 @@
+---
+fixes:
+  - |
+    The fix for `OSSA-2017-005`_ (CVE-2017-16239) was too far-reaching in that
+    rebuilds can now fail based on scheduling filters that should not apply
+    to rebuild. For example, a rebuild of an instance on a disabled compute
+    host could fail whereas it would not before the fix for CVE-2017-16239.
+    Similarly, rebuilding an instance on a host that is at capacity for vcpu,
+    memory or disk could fail since the scheduler filters would treat it as a
+    new build request even though the rebuild is not claiming *new* resources.
+
+    Therefore this release contains a fix for those regressions in scheduling
+    behavior on rebuild while maintaining the original fix for CVE-2017-16239.
+
+    .. note:: The fix relies on a ``RUN_ON_REBUILD`` variable which is checked
+              for all scheduler filters during a rebuild. The reasoning behind
+              the value for that variable depends on each filter. If you have
+              out-of-tree scheduler filters, you will likely need to assess
+              whether or not they need to override the default value (False)
+              for the new variable.