File: preprocess.rst

.. py:currentmodule:: Orange.preprocess

###################################
Data Preprocessing (``preprocess``)
###################################

.. index:: preprocessing

.. index::
   single: data; preprocessing

The preprocessing module contains data processing utilities such as
discretization, continuization, imputation and transformation.

Impute
======

Imputation replaces missing values with new values (or omits such features).

.. literalinclude:: code/imputation-default.py

There are several imputation methods one can use.

.. literalinclude:: code/imputation-average.py
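
A minimal inline version along the lines of the included examples (assuming
the bundled ``heart_disease`` dataset, which contains missing values, and the
``Average`` method from the ``impute`` submodule)::

    import Orange
    from Orange.preprocess import Impute, impute

    # Replace each missing value with the average of the feature
    # (for discrete features, the most frequent value is assumed to be used)
    data = Orange.data.Table("heart_disease")
    imputer = Impute(method=impute.Average())
    imputed_data = imputer(data)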


.. autoclass:: Orange.preprocess.Impute

.. index:: discretize data

.. index::
   single: feature; discretize

Discretization
==============

Discretization replaces continuous features with the corresponding categorical
features:

.. literalinclude:: code/discretization-table.py

The variables in the new data table indicate the bins to which the original
values belong. ::

    Original dataset:
    [5.1, 3.5, 1.4, 0.2 | Iris-setosa]
    [4.9, 3.0, 1.4, 0.2 | Iris-setosa]
    [4.7, 3.2, 1.3, 0.2 | Iris-setosa]
    Discretized dataset:
    [<5.5, >=3.2, <2.5, <0.8 | Iris-setosa]
    [<5.5, [2.8, 3.2), <2.5, <0.8 | Iris-setosa]
    [<5.5, >=3.2, <2.5, <0.8 | Iris-setosa]


The default discretization method (four bins with approximately equal numbers
of data instances) can be replaced with other methods.

.. literalinclude:: code/discretization-table-method.py
    :lines: 3-5
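
For instance, a minimal sketch that bins all continuous features into three
equal-width intervals (``EqualWidth`` is documented below)::

    import Orange

    iris = Orange.data.Table("iris")
    discretizer = Orange.preprocess.Discretize(
        method=Orange.preprocess.discretize.EqualWidth(n=3))
    iris_binned = discretizer(iris)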

.. autoclass:: Orange.preprocess.Discretize

..
    Transformation procedure
    ------------------------

    `Discretization Algorithms`_ return a discretized variable (with fixed
    parameters) that can transform either the learning or the testing data.
    Parameter learning is separate from the transformation, as in machine
    learning only the training set should be used to induce parameters.

    To obtain discretized features, call a discretization algorithm with
    the data and the feature to discretize. The feature can be given
    either as an index, a name or an :obj:`Orange.data.Variable`. The following
    example creates a discretized feature::

        import Orange
        data = Orange.data.Table("iris.tab")
        disc = Orange.preprocess.discretize.EqualFreq(n=4)
        disc_var = disc(data, 0)

    The values of the first attribute will be discretized when the data is
    transformed to an :obj:`Orange.data.Domain` that includes
    ``disc_var``. In the example below we add the discretized first attribute
    to the original domain::

      ndomain = Orange.data.Domain([disc_var] + list(data.domain.attributes),
          data.domain.class_vars)
      ndata = Orange.data.Table(ndomain, data)
      print(ndata)

    The printout::

      [[<5.150000, 5.1, 3.5, 1.4, 0.2 | Iris-setosa],
       [<5.150000, 4.9, 3.0, 1.4, 0.2 | Iris-setosa],
       [<5.150000, 4.7, 3.2, 1.3, 0.2 | Iris-setosa],
       [<5.150000, 4.6, 3.1, 1.5, 0.2 | Iris-setosa],
       [<5.150000, 5.0, 3.6, 1.4, 0.2 | Iris-setosa],
       ...
      ]

_`Discretization Algorithms`
----------------------------

.. autoclass:: Orange.preprocess.discretize.EqualWidth

.. autoclass:: Orange.preprocess.discretize.EqualFreq

.. autoclass:: Orange.preprocess.discretize.EntropyMDL

To add a new discretization method, derive it from ``Discretization``, as sketched below.

.. autoclass:: Orange.preprocess.discretize.Discretization
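
For illustration, the sketch below splits a continuous variable into two bins
at its median. It assumes the ``Discretizer.create_discretized_var`` helper
used by the built-in methods and the ``Table.get_column`` accessor; both may
differ between Orange versions::

    import numpy as np

    from Orange.preprocess.discretize import Discretization, Discretizer


    class MedianSplit(Discretization):
        """Discretize a continuous variable into two bins, split at the median."""

        def __call__(self, data, attribute):
            column = data.get_column(attribute)        # assumed accessor
            cut = float(np.nanmedian(column))
            # Build the categorical variable from the cut points, as the
            # built-in methods do (assumed helper; API may vary by version)
            return Discretizer.create_discretized_var(
                data.domain[attribute], [cut])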

Continuization
==============

.. class:: Orange.preprocess.Continuize

    Given a data table, return a new table in which discrete attributes are
    replaced with continuous attributes or removed.

    * binary variables are transformed into 0.0/1.0 or -1.0/1.0
      indicator variables, depending upon the argument ``zero_based``.

    * multinomial variables are treated according to the argument
      ``multinomial_treatment``.

    * discrete attributes with only one possible value are removed.

    ::

        import Orange
        titanic = Orange.data.Table("titanic")
        continuizer = Orange.preprocess.Continuize()
        titanic1 = continuizer(titanic)

    The class has a number of attributes that can be set either in the
    constructor or later, as attributes.
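
    For example, the treatment of multinomial variables (see below) can be
    chosen either way; a minimal sketch, assuming the ``AsOrdinal`` constant
    documented below::

        continuizer = Orange.preprocess.Continuize(
            multinomial_treatment=Orange.preprocess.Continuize.AsOrdinal)

        # or, equivalently, set the attribute after construction
        continuizer = Orange.preprocess.Continuize()
        continuizer.multinomial_treatment = Orange.preprocess.Continuize.AsOrdinal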

    .. attribute:: zero_based

        Determines the value used as the "low" value of the variable. When
        binary variables are transformed into continuous or when a multivalued
        variable is transformed into multiple variables, the transformed
        variable can either have values 0.0 and 1.0 (default,
        ``zero_based=True``) or -1.0 and 1.0 (``zero_based=False``).

    .. attribute:: multinomial_treatment

       Defines the treatment of multinomial variables.

       ``Continuize.Indicators``

           The variable is replaced by indicator variables, each
           corresponding to one value of the original variable.
           For each value of the original attribute, only the
           corresponding new attribute will have a value of one and others
           will be zero. This is the default behaviour.

           Note that these variables are not independent, so they cannot be
           used (directly) in, for instance, linear or logistic regression.

           For example, dataset "titanic" has feature "status" with
           values "crew", "first", "second" and "third", in that order. Its
           value for the 15th row is "first". Continuization replaces the
           variable with variables "status=crew", "status=first",
           "status=second" and "status=third". After ::

               continuizer = Orange.preprocess.Continuize()
               titanic1 = continuizer(titanic)

           we have ::

               >>> titanic.domain
               [status, age, sex | survived]
               >>> titanic1.domain
               [status=crew, status=first, status=second, status=third,
                age=adult, age=child, sex=female, sex=male | survived]

           For the 15th row, the variable "status=first" has value 1 and the
           values of the other three variables are 0::

               >>> print(titanic[15])
               [first, adult, male | yes]
               >>> print(titanic1[15])
               [0.000, 1.000, 0.000, 0.000, 1.000, 0.000, 0.000, 1.000 | yes]


       ``Continuize.FirstAsBase``
           Similar to the above, except that it creates indicators for all
           values except the first one, according to the order in the variable's
           :obj:`~Orange.data.DiscreteVariable.values` attribute. If all
           indicators in the transformed data instance are 0, the original
           instance had the first value of the corresponding variable.

           Continuizing the variable "status" with this setting gives variables
           "status=first", "status=second" and "status=third". If all of them
           were 0, the status of the original data instance was "crew".

               >>> continuizer.multinomial_treatment = continuizer.FirstAsBase
               >>> continuizer(titanic).domain
               [status=first, status=second, status=third, age=child, sex=male | survived]

       ``Continuize.FrequentAsBase``
           Like above, except that the most frequent value is used as the
           base. If there are multiple most frequent values, the
           one with the lowest index in
           :obj:`~Orange.data.DiscreteVariable.values` is used. The frequency
           of values is extracted from data, so this option does not work if
           only the domain is given.

           Continuizing the Titanic data in this way differs from the above
           in the attribute sex: instead of "sex=male" it constructs
           "sex=female", since males were the majority on the Titanic. ::

                >>> continuizer.multinomial_treatment = continuizer.FrequentAsBase
                >>> continuizer(titanic).domain
                [status=first, status=second, status=third, age=child, sex=female | survived]

       ``Continuize.Remove``
           Discrete variables are removed. ::

               >>> continuizer.multinomial_treatment = continuizer.Remove
               >>> continuizer(titanic).domain
               [ | survived]

       ``Continuize.RemoveMultinomial``
           Discrete variables with more than two values are removed. Binary
           variables are treated the same as in ``FirstAsBase``.

               >>> continuizer.multinomial_treatment = continuizer.RemoveMultinomial
               >>> continuizer(titanic).domain
               [age=child, sex=male | survived]

       ``Continuize.ReportError``
           Raise an error if there are any multinomial variables in the data.

       ``Continuize.AsOrdinal``
           Multinomial variables are treated as ordinal and replaced by
           continuous variables with indices within
           :obj:`~Orange.data.DiscreteVariable.values`, e.g. 0, 1, 2, 3...

                >>> continuizer.multinomial_treatment = continuizer.AsOrdinal
                >>> titanic1 = continuizer(titanic)
                >>> titanic[700]
                [third, adult, male | no]
                >>> titanic1[700]
                [3.000, 0.000, 1.000 | no]

       ``Continuize.AsNormalizedOrdinal``
           As above, except that the resulting continuous value will be from
           range 0 to 1, e.g. 0, 0.333, 0.667, 1 for a four-valued variable::

                >>> continuizer.multinomial_treatment = continuizer.AsNormalizedOrdinal
                >>> titanic1 = continuizer(titanic)
                >>> titanic1[700]
                [1.000, 0.000, 1.000 | no]
                >>> titanic1[15]
                [0.333, 0.000, 1.000 | yes]

    .. attribute:: transform_class

        If ``True``, the class variable is replaced by continuous
        attributes or normalized as well. Multiclass problems are thus
        transformed to multitarget ones. (Default: ``False``)



.. class:: Orange.preprocess.DomainContinuizer

    Construct a domain in which discrete attributes are replaced by
    continuous ones. ::

        domain_continuizer = Orange.preprocess.DomainContinuizer()
        domain1 = domain_continuizer(titanic)

    :obj:`Orange.preprocess.Continuize` calls `DomainContinuizer` to construct
    the domain.

    Domain continuizers can be given either a dataset or a domain, and return
    a new domain. Since value frequencies can only be computed from data, the
    ``FrequentAsBase`` treatment cannot be used when only the domain is given.

    By default, the class does not change continuous and class attributes;
    discrete attributes are replaced with N indicator attributes
    (``Indicators``) with values 0 and 1.

Normalization
=============

.. autoclass:: Orange.preprocess.Normalize
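
For example, a minimal sketch that standardizes continuous features (assuming
the bundled ``iris`` dataset and the ``NormalizeBySD`` constant)::

    import Orange

    # Transform continuous features to zero mean and unit variance
    data = Orange.data.Table("iris")
    normalizer = Orange.preprocess.Normalize(
        norm_type=Orange.preprocess.Normalize.NormalizeBySD)
    normalized_data = normalizer(data)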


Randomization
=============

.. autoclass:: Orange.preprocess.Randomize
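
For example, a minimal sketch that shuffles the class column to obtain a
permuted ("null") dataset (the ``RandomizeClasses`` constant is assumed)::

    import Orange

    data = Orange.data.Table("iris")
    randomizer = Orange.preprocess.Randomize(
        rand_type=Orange.preprocess.Randomize.RandomizeClasses)
    shuffled_data = randomizer(data)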


Remove
======

.. autoclass:: Orange.preprocess.Remove
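
For example, a minimal sketch that drops constant features and unused values
of discrete features (the ``RemoveConstant`` and ``RemoveUnusedValues`` flags
are assumed)::

    import Orange

    data = Orange.data.Table("zoo")
    remover = Orange.preprocess.Remove(
        attr_flags=Orange.preprocess.Remove.RemoveConstant
                   | Orange.preprocess.Remove.RemoveUnusedValues)
    cleaned_data = remover(data)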

Feature selection
=================

Feature scoring
---------------

Feature scoring is an assessment of the usefulness of features for
prediction of the dependent (class) variable. Orange provides classes
that compute the common feature scores for classification and regression.

The code below computes the information gain of feature "tear_rate"
in the Lenses dataset:

    >>> data = Orange.data.Table("lenses")
    >>> Orange.preprocess.score.InfoGain(data, "tear_rate")
    0.54879494069539858

An alternative way of invoking the scorers is to construct the scoring
object and calculate the scores for all the features at once, as in the
following example:

    >>> gain = Orange.preprocess.score.InfoGain()
    >>> scores = gain(data)
    >>> for attr, score in zip(data.domain.attributes, scores):
    ...     print('%.3f' % score, attr.name)
    0.039 age
    0.040 prescription
    0.377 astigmatic
    0.549 tear_rate

Feature scoring methods work on different feature types (continuous or
discrete) and different types of target variables (i.e. classification or
regression problems).
Refer to a method's ``feature_type`` and ``class_type`` attributes for the
intended types, or employ preprocessing methods (e.g. discretization) for
conversion between data types.
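
For instance, ``Chi2`` scores discrete features, so continuous data can be
discretized first (a sketch assuming the bundled ``iris`` dataset)::

    import Orange

    iris = Orange.data.Table("iris")
    disc_iris = Orange.preprocess.Discretize()(iris)
    chi2_scores = Orange.preprocess.score.Chi2()(disc_iris)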

.. autoclass:: Orange.preprocess.score.ANOVA
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.Chi2
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.GainRatio
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.Gini
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.InfoGain
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.UnivariateLinearRegression
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.FCBF
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.ReliefF
   :members: feature_type, class_type

.. autoclass:: Orange.preprocess.score.RReliefF
   :members: feature_type, class_type

Additionally, you can use the ``score_data()`` method of some learners (\
:obj:`Orange.regression.LinearRegressionLearner`,
:obj:`Orange.classification.LogisticRegressionLearner`,
:obj:`Orange.classification.RandomForestLearner`, and
:obj:`Orange.regression.RandomForestRegressionLearner`)
to obtain the feature scores as calculated by these learners. For example:

    >>> learner = Orange.classification.LogisticRegressionLearner()
    >>> learner.score_data(data)
    [0.31571299907366146,
     0.28286199971877485,
     0.67496525667835794,
     0.99930286901257692]


Feature selection
-----------------

We can use feature selection to limit the analysis to only the most relevant
or informative features in the dataset.

Feature selection with a scoring method that works on continuous features will
retain all discrete features and vice versa.

The code below constructs a new dataset consisting of the two best features
according to the ANOVA method:

    >>> data = Orange.data.Table("wine")
    >>> anova = Orange.preprocess.score.ANOVA()
    >>> selector = Orange.preprocess.SelectBestFeatures(method=anova, k=2)
    >>> data2 = selector(data)
    >>> data2.domain
    [Flavanoids, Proline | Wine]

.. autoclass:: Orange.preprocess.SelectBestFeatures

Preprocessors
=============