File: gbrt.py

Package: scikit-optimize 0.10.2-4
from sklearn.utils import check_random_state

from ..utils import cook_estimator
from .base import base_minimize


def gbrt_minimize(
    func,
    dimensions,
    base_estimator=None,
    n_calls=100,
    n_random_starts=None,
    n_initial_points=10,
    initial_point_generator="random",
    acq_func="LCB",
    acq_optimizer="auto",
    x0=None,
    y0=None,
    random_state=None,
    verbose=False,
    callback=None,
    n_points=10000,
    xi=0.01,
    kappa=1.96,
    n_jobs=1,
    model_queue_size=None,
    space_constraint=None,
):
    """Sequential optimization using gradient boosted trees.

    Gradient boosted regression trees are used to model the (very)
    expensive to evaluate function `func`. The model is improved
    by sequentially evaluating the expensive function at the next
    best point, thereby finding the minimum of `func` with as
    few evaluations as possible.

    The total number of evaluations, `n_calls`, is spent as follows.
    If `x0` is provided but not `y0`, then the elements of `x0`
    are first evaluated, followed by `n_initial_points` evaluations.
    Finally, `n_calls - len(x0) - n_initial_points` evaluations are
    made guided by the surrogate model. If `x0` and `y0` are both
    provided, then `n_initial_points` evaluations are first made, then
    `n_calls - n_initial_points` subsequent evaluations are made
    guided by the surrogate model.

    The first `n_initial_points` are generated by the
    `initial_point_generator`.

    Parameters
    ----------
    func : callable
        Function to minimize. Should take a single list of parameters
        and return the objective value.

        If you have a search-space where all dimensions have names,
        then you can use `skopt.utils.use_named_args` as a decorator
        on your objective function, in order to call it directly
        with the named arguments. See `use_named_args` for an example.

    dimensions : list, shape (n_dims,)
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, "prior")` tuple (for `Real`
          dimensions),
        - a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

    base_estimator : `GradientBoostingQuantileRegressor`
        The regressor to use as the surrogate model.

    n_calls : int, default: 100
        Number of calls to `func`.

    n_random_starts : int, default: None
        Number of evaluations of `func` with random points before
        approximating it with `base_estimator`.

        .. deprecated:: 0.8
            use `n_initial_points` instead.

    n_initial_points : int, default: 10
        Number of evaluations of `func` with initialization points
        before approximating it with `base_estimator`. Initial point
        generator can be changed by setting `initial_point_generator`.

    initial_point_generator : str, InitialPointGenerator instance, \
            default: `"random"`
        Sets the initial point generator. Can be either

        - `"random"` for uniform random numbers,
        - `"sobol"` for a Sobol' sequence,
        - `"halton"` for a Halton sequence,
        - `"hammersly"` for a Hammersley sequence,
        - `"lhs"` for a Latin hypercube sequence,
        - `"grid"` for a uniform grid sequence

    acq_func : string, default: `"LCB"`
        Function to minimize over the surrogate posterior. Can be either

        - `"LCB"` for lower confidence bound.
        - `"EI"` for negated expected improvement.
        - `"PI"` for negated probability of improvement.
        - `"EIps"` for negated expected improvement per second to take into
          account the function compute time. Then, the objective function is
          assumed to return two values, the first being the objective value and
          the second being the time taken.
        - `"PIps"` for negated probability of improvement per second.

    x0 : list, list of lists or `None`
        Initial input points.

        - If it is a list of lists, use it as a list of input points.
        - If it is a list, use it as a single initial input point.
        - If it is `None`, no initial input points are used.

    y0 : list, scalar or `None`
        Evaluation of initial input points.

        - If it is a list, then it corresponds to evaluations of the function
          at each element of `x0`: the i-th element of `y0` corresponds
          to the function evaluated at the i-th element of `x0`.
        - If it is a scalar, then it corresponds to the evaluation of the
          function at `x0`.
        - If it is None and `x0` is provided, then the function is evaluated
          at each element of `x0`.

    random_state : int, RandomState instance, or None (default)
        Set random state to something other than None for reproducible
        results.

    verbose : boolean, default: False
        Controls the verbosity. It is advised to set this to True
        for long optimization runs.

    callback : callable, optional
        If provided, then `callback(res)` is called after each call to `func`.

    n_points : int, default: 10000
        Number of points to sample when minimizing the acquisition function.

    xi : float, default: 0.01
        Controls how much improvement one wants over the previous best
        values. Used when the acquisition is either `"EI"` or `"PI"`.

    kappa : float, default: 1.96
        Controls how much of the variance in the predicted values should be
        taken into account. If set to be very high, then we are favouring
        exploration over exploitation and vice versa.
        Used when the acquisition is `"LCB"`.

    n_jobs : int, default: 1
        The number of jobs to run in parallel for `fit` and `predict`.
        If -1, then the number of jobs is set to the number of cores.

    model_queue_size : int or None, default: None
        Keeps the list of surrogate models at most this long. If None,
        the length of the list is not capped.

    space_constraint : callable or None, default: None
        Constraint function. Should take a single list of parameters
        (i.e. a point in space) and return True if the point satisfies
        the constraints.
        If None, the space is not conditionally constrained.

    Returns
    -------
    res : `OptimizeResult`, scipy object
        The optimization result returned as an `OptimizeResult` object.
        Important attributes are:

        - `x` [list]: location of the minimum.
        - `fun` [float]: function value at the minimum.
        - `models`: surrogate models used for each iteration.
        - `x_iters` [list of lists]: location of function evaluation for each
          iteration.
        - `func_vals` [array]: function value for each iteration.
        - `space` [Space]: the optimization space.
        - `specs` [dict]: the call specifications.
        - `rng` [RandomState instance]: State of the random state
          at the end of minimization.

        For more details related to the OptimizeResult object, refer to
        http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html

    .. seealso:: functions :func:`skopt.gp_minimize`,
        :func:`skopt.dummy_minimize`, :func:`skopt.forest_minimize`
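
    Examples
    --------
    A minimal, illustrative sketch: the toy quadratic objective, the
    bounds and the evaluation budget below are arbitrary choices made
    for demonstration, not recommendations.

    >>> from skopt import gbrt_minimize
    >>> res = gbrt_minimize(
    ...     lambda x: (x[0] - 2.0) ** 2,  # toy 1-D objective
    ...     [(-5.0, 5.0)],                # one Real dimension as a bounds tuple
    ...     acq_func="EI",                # negated expected improvement
    ...     n_calls=20,
    ...     random_state=0,
    ... )
    >>> res.x, res.fun  # doctest: +SKIP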
    """
    # Check params
    rng = check_random_state(random_state)

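    # If no surrogate is supplied, fall back to the default gradient
    # boosted quantile regressor built by `cook_estimator`.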
    if base_estimator is None:
        base_estimator = cook_estimator("GBRT", random_state=rng, n_jobs=n_jobs)
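    # Tree-based surrogates provide no analytic gradients, so the
    # acquisition function is always minimized by sampling `n_points`
    # candidates (acq_optimizer is fixed to "sampling" below).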
    return base_minimize(
        func,
        dimensions,
        base_estimator,
        n_calls=n_calls,
        n_points=n_points,
        n_random_starts=n_random_starts,
        n_initial_points=n_initial_points,
        initial_point_generator=initial_point_generator,
        x0=x0,
        y0=y0,
        random_state=random_state,
        xi=xi,
        kappa=kappa,
        acq_func=acq_func,
        verbose=verbose,
        callback=callback,
        acq_optimizer="sampling",
        space_constraint=space_constraint,
        model_queue_size=model_queue_size,
        n_jobs=n_jobs,
    )