File: subclassing.rst

.. _nddata_subclassing:

Subclassing
===========

`~astropy.nddata.NDData`
------------------------

This class serves as the base for subclasses that use a `numpy.ndarray` (or
something that presents a numpy-like interface) as the ``data`` attribute.

.. note::
  Each attribute is stored in a private attribute with one leading underscore;
  for example, ``data`` is stored as ``_data`` and ``mask`` as ``_mask``, and
  so on.

Adding another property
^^^^^^^^^^^^^^^^^^^^^^^

    >>> from astropy.nddata import NDData

    >>> class NDDataWithFlags(NDData):
    ...     def __init__(self, *args, **kwargs):
    ...         # Remove flags attribute if given and pass it to the setter.
    ...         self.flags = kwargs.pop('flags') if 'flags' in kwargs else None
    ...         super(NDDataWithFlags, self).__init__(*args, **kwargs)
    ...
    ...     @property
    ...     def flags(self):
    ...         return self._flags
    ...
    ...     @flags.setter
    ...     def flags(self, value):
    ...         self._flags = value

    >>> ndd = NDDataWithFlags([1,2,3])
    >>> ndd.flags is None
    True

    >>> ndd = NDDataWithFlags([1,2,3], flags=[0, 0.2, 0.3])
    >>> ndd.flags
    [0, 0.2, 0.3]

.. note::
  To simplify subclassing, each setter (except for ``data``) is called during
  ``__init__``, so restrictions placed inside a setter also apply during
  instance creation.
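The mechanism behind this can be illustrated without astropy. The sketch below
uses a hypothetical ``MiniNDData`` stand-in (not the real
`~astropy.nddata.NDData`) whose ``__init__`` assigns through the public
setter; a restriction added in a subclass setter therefore already fires at
construction time:

```python
class MiniNDData:
    # Hypothetical stand-in for NDData: __init__ assigns via the
    # public setter, so overridden setters run at creation time too.
    def __init__(self, data, mask=None):
        self._data = data
        self.mask = mask  # goes through the (possibly overridden) setter

    @property
    def mask(self):
        return self._mask

    @mask.setter
    def mask(self, value):
        self._mask = value


class BoolMaskOnly(MiniNDData):
    @MiniNDData.mask.setter
    def mask(self, value):
        # The restriction lives only in the setter ...
        if value is not None and not isinstance(value, bool):
            raise TypeError('mask must be a bool or None')
        self._mask = value


print(BoolMaskOnly([1, 2, 3], mask=True).mask)   # True
try:
    BoolMaskOnly([1, 2, 3], mask='yes')          # ... yet rejects at init
except TypeError as exc:
    print(exc)                                   # mask must be a bool or None
```

The same effect is what makes the ``NDDataMaskBoolNumpy`` example below
validate masks passed to its constructor, not only later assignments.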

Customize the setter for a property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    >>> import numpy as np

    >>> class NDDataMaskBoolNumpy(NDData):
    ...
    ...     @NDData.mask.setter
    ...     def mask(self, value):
    ...         # Convert mask to boolean numpy array.
    ...         self._mask = np.array(value, dtype=np.bool_)

    >>> ndd = NDDataMaskBoolNumpy([1,2,3])
    >>> ndd.mask = True
    >>> ndd.mask
    array(True, dtype=bool)

Extend the setter for a property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

``unit``, ``meta`` and ``uncertainty`` implement some additional logic in
their setters, so a subclass can perform its own checks first and then call
the superclass property to do the actual assignment::

    >>> import numpy as np

    >>> class NDDataUncertaintyShapeChecker(NDData):
    ...
    ...     @NDData.uncertainty.setter
    ...     def uncertainty(self, value):
    ...         value = np.asarray(value)
    ...         if value.shape != self.data.shape:
    ...             raise ValueError('uncertainty must have the same shape as the data.')
    ...         # Call the setter of the super class in case it might contain some
    ...         # important logic (only True for meta, unit and uncertainty)
    ...         super(NDDataUncertaintyShapeChecker, self.__class__).uncertainty.fset(self, value)

    >>> ndd = NDDataUncertaintyShapeChecker([1,2,3], uncertainty=[2,3,4])
    INFO: uncertainty should have attribute uncertainty_type. [astropy.nddata.nddata]
    >>> ndd.uncertainty
    UnknownUncertainty([2, 3, 4])

Having a setter for the data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    >>> class NDDataWithDataSetter(NDData):
    ...
    ...     @NDData.data.setter
    ...     def data(self, value):
    ...         self._data = np.asarray(value)

    >>> ndd = NDDataWithDataSetter([1,2,3])
    >>> ndd.data = [3,2,1]
    >>> ndd.data
    array([3, 2, 1])

`~astropy.nddata.NDDataRef`
---------------------------

`~astropy.nddata.NDDataRef` itself inherits from `~astropy.nddata.NDData` so
everything described there also applies to ``NDDataRef``. ``NDDataRef``
additionally inherits from the mixins:

- `~astropy.nddata.NDSlicingMixin`
- `~astropy.nddata.NDArithmeticMixin`
- `~astropy.nddata.NDIOMixin`

which allow additional operations.

Add another arithmetic operation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Adding another operation is quite easy provided the ``data`` and ``unit``
allow it within the framework of `~astropy.units.Quantity`.

For example adding a power function::

    >>> from astropy.nddata import NDDataRef
    >>> import numpy as np
    >>> from astropy.utils import sharedmethod

    >>> class NDDataPower(NDDataRef):
    ...     @sharedmethod # sharedmethod to allow it also as classmethod
    ...     def pow(self, operand, operand2=None, **kwargs):
    ...         # the uncertainty doesn't allow propagation so set it to None
    ...         kwargs['propagate_uncertainties'] = None
    ...         # Call the _prepare_then_do_arithmetic function with the
    ...         # numpy.power ufunc.
    ...         return self._prepare_then_do_arithmetic(np.power, operand,
    ...                                                 operand2, **kwargs)

This can be used like the other arithmetic methods, such as
:meth:`~astropy.nddata.NDArithmeticMixin.add`, and works both when called on
the class and on an instance::

    >>> ndd = NDDataPower([1,2,3])

    >>> # using it on the instance with one operand
    >>> ndd.pow(3)
    NDDataPower([ 1,  8, 27])

    >>> # using it on the instance with two operands
    >>> ndd.pow([1,2,3], [3,4,5])
    NDDataPower([  1,  16, 243])

    >>> # or using it as classmethod
    >>> NDDataPower.pow(6, [1,2,3])
    NDDataPower([  6,  36, 216])

To allow propagation also with ``uncertainty`` see subclassing
`~astropy.nddata.NDUncertainty`.

``_prepare_then_do_arithmetic`` checks whether it was called on the class or
on an instance and whether one or two operands were given, and converts the
operands, if necessary, to the appropriate classes. Overriding
``_prepare_then_do_arithmetic`` in subclasses should be avoided if possible.


Arithmetic on an existing property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Customizing how an existing property is handled during arithmetic is possible
through arguments to methods like
:meth:`~astropy.nddata.NDArithmeticMixin.add`, but the behaviour can also be
hardcoded. The actual operation on each attribute (except for ``unit``) is
done in a method ``_arithmetic_*``, where ``*`` is the name of the property.

For example, to customize how the ``meta`` is affected during arithmetic::

    >>> from astropy.nddata import NDDataRef

    >>> from copy import deepcopy
    >>> class NDDataWithMetaArithmetics(NDDataRef):
    ...
    ...     def _arithmetic_meta(self, operation, operand, handle_mask, **kwds):
    ...         # the function must take the arguments:
    ...         # operation (numpy-ufunc like np.add, np.subtract, ...)
    ...         # operand (the other NDData-like object, already wrapped as NDData)
    ...         # handle_mask (see description for "add")
    ...
    ...         # The meta is dict-like, but we want the 'exposure' keyword to change.
    ...         # One or both operands might have no meta; take the first that does.
    ...         result_meta = deepcopy(self.meta) if self.meta else deepcopy(operand.meta)
    ...         # Do the operation on the keyword if the keyword exists
    ...         if result_meta and 'exposure' in result_meta:
    ...             result_meta['exposure'] = operation(result_meta['exposure'], operand.data)
    ...         return result_meta # return it

To trigger this method, the ``handle_meta`` argument to the arithmetic
methods can be anything except ``None`` or ``"first_found"``::

    >>> ndd = NDDataWithMetaArithmetics([1,2,3], meta={'exposure': 10})
    >>> ndd2 = ndd.add(10, handle_meta='')
    >>> ndd2.meta
    {'exposure': 20}

    >>> ndd3 = ndd.multiply(0.5, handle_meta='')
    >>> ndd3.meta
    {'exposure': 5.0}

.. warning::
  To use these internal ``_arithmetic_*`` methods there are some restrictions
  on the arguments when calling the operation:

  - ``mask``: ``handle_mask`` must not be ``None``, ``"ff"`` or ``"first_found"``.
  - ``wcs``: the ``compare_wcs`` argument has the same restrictions as for mask.
  - ``meta``: the ``handle_meta`` argument has the same restrictions as for mask.
  - ``uncertainty``: ``propagate_uncertainties`` must be ``None`` or evaluate
    to ``False``. ``_arithmetic_uncertainty`` must also accept different
    arguments: ``operation, operand, result, correlation, **kwargs``.

Changing default argument for arithmetic operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the goal is to change the default value of an existing parameter for the
arithmetic methods, perhaps because explicitly specifying it on every call is
too much effort, you can override ``_arithmetic`` and inject the new
default::

    >>> from astropy.nddata import NDDataRef
    >>> import numpy as np

    >>> class NDDDiffAritDefaults(NDDataRef):
    ...     def _arithmetic(self, *args, **kwargs):
    ...         # Changing the default of handle_mask to None
    ...         if 'handle_mask' not in kwargs:
    ...             kwargs['handle_mask'] = None
    ...         # Call the original with the updated kwargs
    ...         return super(NDDDiffAritDefaults, self)._arithmetic(*args, **kwargs)

    >>> ndd1 = NDDDiffAritDefaults(1, mask=False)
    >>> ndd2 = NDDDiffAritDefaults(1, mask=True)
    >>> ndd1.add(ndd2).mask is None  # it will be None
    True

    >>> # But giving other values is still possible:
    >>> ndd1.add(ndd2, handle_mask=np.logical_or).mask
    True

    >>> ndd1.add(ndd2, handle_mask="ff").mask
    False

The parameters controlling how properties are handled are all keyword-only,
so the ``*args, **kwargs`` approach lets you alter a single default without
worrying about the positional order of arguments. Note that
``def _arithmetic(self, *args, handle_mask=None, **kwargs)`` would be cleaner
but is not valid syntax in Python 2.
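The override-and-forward pattern itself is plain Python and can be checked in
isolation. ``Base`` and ``combine`` below are hypothetical names standing in
for the mixin and its ``_arithmetic``; the point is only that ``setdefault``
changes the default while an explicitly passed value still wins:

```python
class Base:
    def combine(self, operand, handle_mask="first_found", handle_meta=None):
        # Stand-in for the real _arithmetic: just report what it received.
        return {"handle_mask": handle_mask, "handle_meta": handle_meta}


class NewDefaults(Base):
    def combine(self, *args, **kwargs):
        # Inject a different default; explicit keyword arguments are untouched.
        kwargs.setdefault("handle_mask", None)
        return super(NewDefaults, self).combine(*args, **kwargs)


print(Base().combine("op")["handle_mask"])                           # first_found
print(NewDefaults().combine("op")["handle_mask"])                    # None
print(NewDefaults().combine("op", handle_mask="ff")["handle_mask"])  # ff
```

Because only keywords are inspected, the other parameters keep their original
defaults and positions untouched.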


Arithmetic with an additional property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This also requires overriding the ``_arithmetic`` method. Suppose we have a
``flags`` attribute again::

    >>> from copy import deepcopy
    >>> import numpy as np

    >>> class NDDataWithFlags(NDDataRef):
    ...     def __init__(self, *args, **kwargs):
    ...         # Remove flags attribute if given and pass it to the setter.
    ...         self.flags = kwargs.pop('flags') if 'flags' in kwargs else None
    ...         super(NDDataWithFlags, self).__init__(*args, **kwargs)
    ...
    ...     @property
    ...     def flags(self):
    ...         return self._flags
    ...
    ...     @flags.setter
    ...     def flags(self, value):
    ...         self._flags = value
    ...
    ...     def _arithmetic(self, operation, operand, *args, **kwargs):
    ...         # take all args and kwargs to allow arithmetic on the other properties
    ...         # to work like before.
    ...
    ...         # do the arithmetic on the flags (pop the relevant kwargs, if any)
    ...         if self.flags is not None and operand.flags is not None:
    ...             result_flags = np.logical_or(self.flags, operand.flags)
    ...             # np.logical_or is just an example; combine them however you like
    ...         else:
    ...             if self.flags is not None:
    ...                 result_flags = deepcopy(self.flags)
    ...             else:
    ...                 result_flags = deepcopy(operand.flags)
    ...
    ...         # Let the superclass handle all the other attributes. Note that
    ...         # this returns the result and a dictionary of keyword arguments.
    ...         result, kwargs = super(NDDataWithFlags, self)._arithmetic(operation, operand, *args, **kwargs)
    ...         # The arguments for creating a new instance are saved in kwargs
    ...         # so we need to add another keyword "flags" and add the processed flags
    ...         kwargs['flags'] = result_flags
    ...         return result, kwargs # these must be returned

    >>> ndd1 = NDDataWithFlags([1,2,3], flags=np.array([1,0,1], dtype=bool))
    >>> ndd2 = NDDataWithFlags([1,2,3], flags=np.array([0,0,1], dtype=bool))
    >>> ndd3 = ndd1.add(ndd2)
    >>> ndd3.flags
    array([ True, False,  True], dtype=bool)


Slicing an existing property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Suppose you have a class expecting a two-dimensional ``data`` but the mask is
only 1D. This would lead to problems if one were to slice in two dimensions.

    >>> from astropy.nddata import NDDataRef
    >>> import numpy as np

    >>> class NDDataMask1D(NDDataRef):
    ...     def _slice_mask(self, item):
    ...         # Multidimensional slices are represented by tuples:
    ...         if isinstance(item, tuple):
    ...             # only use the first dimension of the slice
    ...             return self.mask[item[0]]
    ...         # Let the superclass deal with the other cases
    ...         return super(NDDataMask1D, self)._slice_mask(item)

    >>> ndd = NDDataMask1D(np.ones((3,3)), mask=np.ones(3, dtype=bool))
    >>> nddsliced = ndd[1:3,1:3]
    >>> nddsliced.mask
    array([ True,  True], dtype=bool)

.. note::
  The methods doing the slicing of the attributes are prefixed with
  ``_slice_``, followed by ``mask``, ``uncertainty`` or ``wcs``. So simply
  overriding them is the easiest way to customize how they are sliced.

.. note::
  If slicing should affect the ``unit`` or ``meta`` see the next example.


Slicing an additional property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Building on the added property ``flags`` we want them to be sliceable:

    >>> class NDDataWithFlags(NDDataRef):
    ...     def __init__(self, *args, **kwargs):
    ...         # Remove flags attribute if given and pass it to the setter.
    ...         self.flags = kwargs.pop('flags') if 'flags' in kwargs else None
    ...         super(NDDataWithFlags, self).__init__(*args, **kwargs)
    ...
    ...     @property
    ...     def flags(self):
    ...         return self._flags
    ...
    ...     @flags.setter
    ...     def flags(self, value):
    ...         self._flags = value
    ...
    ...     def _slice(self, item):
    ...         # slice all normal attributes
    ...         kwargs = super(NDDataWithFlags, self)._slice(item)
    ...         # The arguments for creating a new instance are saved in kwargs
    ...         # so we need to add another keyword "flags" and add the sliced flags
    ...         kwargs['flags'] = self.flags[item]
    ...         return kwargs # these must be returned

    >>> ndd = NDDataWithFlags([1,2,3], flags=[0, 0.2, 0.3])
    >>> ndd2 = ndd[1:3]
    >>> ndd2.flags
    [0.2, 0.3]

If you wanted to keep just the original ``flags`` instead of the sliced ones
you could use ``kwargs['flags'] = self.flags`` and omit the ``[item]``.

`~astropy.nddata.NDDataBase`
----------------------------

The class `~astropy.nddata.NDDataBase` is an abstract base class -- when
subclassing it, all properties of `~astropy.nddata.NDDataBase` *must* be
overridden in the subclass.

Subclassing from `~astropy.nddata.NDDataBase` gives you complete flexibility
in how you implement data storage and the other properties. If your data is
stored in a numpy array (or something that behaves like a numpy array), it may
be more straightforward to subclass `~astropy.nddata.NDData` instead of
`~astropy.nddata.NDDataBase`.

Implementing the NDDataBase interface
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For example, to create a read-only container::

    >>> from astropy.nddata import NDDataBase

    >>> class NDDataReadOnlyNoRestrictions(NDDataBase):
    ...     def __init__(self, data, unit, mask, uncertainty, meta, wcs):
    ...         self._data = data
    ...         self._unit = unit
    ...         self._mask = mask
    ...         self._uncertainty = uncertainty
    ...         self._meta = meta
    ...         self._wcs = wcs
    ...
    ...     @property
    ...     def data(self):
    ...         return self._data
    ...
    ...     @property
    ...     def unit(self):
    ...         return self._unit
    ...
    ...     @property
    ...     def mask(self):
    ...         return self._mask
    ...
    ...     @property
    ...     def uncertainty(self):
    ...         return self._uncertainty
    ...
    ...     @property
    ...     def meta(self):
    ...         return self._meta
    ...
    ...     @property
    ...     def wcs(self):
    ...         return self._wcs

    >>> # A meaningless test to show that creating this class is possible:
    >>> NDDataReadOnlyNoRestrictions(1,2,3,4,5,6) is not None
    True

.. note::
  Defining an ``__init__`` is not actually necessary and the properties could
  return arbitrary values, but every property **must** be defined.
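The enforcement itself comes from the standard library's :mod:`abc`
machinery and can be sketched without astropy. ``MiniBase`` below is a
hypothetical two-property analogue, not the real
`~astropy.nddata.NDDataBase`:

```python
import abc


class MiniBase(metaclass=abc.ABCMeta):
    # Every property is abstract, so subclasses must override all of them.
    @property
    @abc.abstractmethod
    def data(self):
        pass

    @property
    @abc.abstractmethod
    def mask(self):
        pass


class Incomplete(MiniBase):
    # Overrides only one of the two abstract properties.
    @property
    def data(self):
        return 1


class Complete(MiniBase):
    # No __init__ needed; the properties may return anything.
    @property
    def data(self):
        return 1

    @property
    def mask(self):
        return None


print(Complete().data)       # 1
try:
    Incomplete()             # still abstract: 'mask' was not overridden
except TypeError as exc:
    print(exc)
```

Instantiation fails with a ``TypeError`` as long as any abstract property is
missing, which is exactly why every property must be defined.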

Subclassing `~astropy.nddata.NDUncertainty`
-------------------------------------------
.. warning::
    The internal interface of NDUncertainty and subclasses is experimental and
    might change in future versions.

Subclasses deriving from `~astropy.nddata.NDUncertainty` need to implement:

- a property ``uncertainty_type`` that returns a string describing the
  uncertainty, for example ``"ivar"`` for inverse variance.
- methods for propagation: ``_propagate_*`` where ``*`` is the name of the
  ufunc that is used on the ``NDData`` parent.

Creating an uncertainty without propagation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

`~astropy.nddata.UnknownUncertainty` is a minimal working implementation
without error propagation. So let's create an uncertainty just storing
systematic uncertainties::

    >>> from astropy.nddata import NDUncertainty

    >>> class SystematicUncertainty(NDUncertainty):
    ...     @property
    ...     def uncertainty_type(self):
    ...         return 'systematic'
    ...
    ...     def _propagate_add(self, other_uncert, *args, **kwargs):
    ...         return None
    ...
    ...     def _propagate_subtract(self, other_uncert, *args, **kwargs):
    ...         return None
    ...
    ...     def _propagate_multiply(self, other_uncert, *args, **kwargs):
    ...         return None
    ...
    ...     def _propagate_divide(self, other_uncert, *args, **kwargs):
    ...         return None

    >>> SystematicUncertainty([10])
    SystematicUncertainty([10])

Subclassing `~astropy.nddata.StdDevUncertainty`
-----------------------------------------------

Creating a variance uncertainty
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

`~astropy.nddata.StdDevUncertainty` already implements propagation based on
Gaussian standard deviation, so it can serve as the starting point for an
uncertainty that reuses these propagations:

    >>> from astropy.nddata import StdDevUncertainty
    >>> import numpy as np
    >>> import weakref

    >>> class VarianceUncertainty(StdDevUncertainty):
    ...     @property
    ...     def uncertainty_type(self):
    ...         return 'variance'
    ...
    ...     def _propagate_add(self, other_uncert, *args, **kwargs):
    ...         # Neglect the unit and assume both are variance uncertainties.
    ...         this = StdDevUncertainty(np.sqrt(self.array))
    ...         other = StdDevUncertainty(np.sqrt(other_uncert.array))
    ...
    ...         # We need to set the parent_nddata attribute, otherwise it
    ...         # would fail for multiplication and division where the data,
    ...         # not only the uncertainty, matters.
    ...         this.parent_nddata = weakref.ref(self.parent_nddata)
    ...         other.parent_nddata = weakref.ref(other_uncert.parent_nddata)
    ...
    ...         # Call propagation:
    ...         result = this._propagate_add(other, *args, **kwargs)
    ...
    ...         # Return the square of it
    ...         return np.square(result)

    >>> from astropy.nddata import NDDataRef

    >>> ndd1 = NDDataRef([1,2,3], unit='m', uncertainty=VarianceUncertainty([1,4,9]))
    >>> ndd2 = NDDataRef([1,2,3], unit='m', uncertainty=VarianceUncertainty([1,4,9]))
    >>> ndd = ndd1.add(ndd2)
    >>> ndd.uncertainty
    VarianceUncertainty([  2.,   8.,  18.])

This approach works if both operands carry variance uncertainties, but to
also allow the second operand to carry a standard deviation one can override
the ``_convert_uncertainty`` method as well::

    >>> class VarianceUncertainty2(VarianceUncertainty):
    ...     def _convert_uncertainty(self, other_uncert):
    ...         if isinstance(other_uncert, VarianceUncertainty):
    ...             return other_uncert
    ...         elif isinstance(other_uncert, StdDevUncertainty):
    ...             converted = VarianceUncertainty(np.square(other_uncert.array))
    ...             converted.parent_nddata = weakref.ref(other_uncert.parent_nddata)
    ...             return converted
    ...         raise ValueError('not compatible uncertainties.')

    >>> ndd1 = NDDataRef([1,2,3], uncertainty=VarianceUncertainty2([1,4,9]))
    >>> ndd2 = NDDataRef([1,2,3], uncertainty=StdDevUncertainty([1,2,3]))
    >>> ndd = ndd1.add(ndd2)
    >>> ndd.uncertainty
    VarianceUncertainty2([  2.,   8.,  18.])

.. warning::
    This will only allow the **second** operand to have a
    `~astropy.nddata.StdDevUncertainty` uncertainty. It will fail if the first
    operand is standard deviation and the second operand a variance.

.. note::
    Creating a variance uncertainty like this might require more work to
    treat the unit of the uncertainty properly, and of course the
    ``_propagate_*`` methods for subtraction, division and multiplication
    would have to be implemented as well.