.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples\parallel-optimization.py"
.. LINE NUMBERS ARE GIVEN BELOW.
.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_parallel-optimization.py>`
        to download the full example code or to run this example in your browser via Binder.
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_auto_examples_parallel-optimization.py:
=====================
Parallel optimization
=====================
Iaroslav Shcherbatyi, May 2017.
Reviewed by Manoj Kumar and Tim Head.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Introduction
============
For many practical black-box optimization problems, the expensive objective can be
evaluated in parallel at multiple points. This makes it possible to obtain more
objective evaluations per unit of time, which reduces the time needed to reach good
objective values when appropriate optimization algorithms are used; see, for
example, the results in [1]_ and the references therein.
One example of such a task is selecting the number of neurons and the activation
function of a neural network so as to obtain the highest accuracy for some machine
learning problem. For such a task, multiple neural networks with different
combinations of neuron count and activation function type can be evaluated at the
same time, in parallel, on different CPU cores or computational nodes.
The "ask and tell" API of scikit-optimize exposes functionality that makes it
possible to obtain multiple points for evaluation in parallel. The intended usage of
this interface is as follows (a minimal sketch of the loop is shown after the list):
1. Initialize an instance of the `Optimizer` class from skopt.
2. Obtain n points for evaluation in parallel by calling the `ask` method of the optimizer instance with the `n_points` argument set to n > 0.
3. Evaluate the points.
4. Provide the points and the corresponding objective values via the `tell` method of the optimizer instance.
5. Continue from step 2 until, e.g., the maximum number of evaluations is reached.
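
As a minimal sketch of this loop (with sequential evaluation and a toy
one-dimensional quadratic objective chosen only for illustration; the joblib-based
parallel version is given in the Example section below):

.. code-block:: Python

    from skopt import Optimizer
    from skopt.space import Real

    # a toy objective; in practice this would be an expensive black-box function
    def objective(params):
        (x,) = params
        return (x - 2.0) ** 2

    opt = Optimizer(dimensions=[Real(-5.0, 5.0)], random_state=0)   # step 1

    for _ in range(5):                               # step 5: repeat until the budget is exhausted
        points = opt.ask(n_points=2)                 # step 2: ask for several points at once
        values = [objective(p) for p in points]      # step 3: evaluate (sequentially here)
        opt.tell(points, values)                     # step 4: report points and objectives back

    print(min(opt.yi))                               # best objective value observed so far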
.. GENERATED FROM PYTHON SOURCE LINES 38-41
.. code-block:: Python

    print(__doc__)
.. GENERATED FROM PYTHON SOURCE LINES 42-47
Example
=======
A minimal example that uses joblib to parallelize the evaluation of the
objective function is given below.
.. GENERATED FROM PYTHON SOURCE LINES 47-68
.. code-block:: Python

    from joblib import Parallel, delayed

    from skopt import Optimizer

    # example objective taken from skopt
    from skopt.benchmarks import branin
    from skopt.space import Real

    optimizer = Optimizer(
        dimensions=[Real(-5.0, 10.0), Real(0.0, 15.0)], random_state=1, base_estimator='gp'
    )

    for i in range(10):
        x = optimizer.ask(n_points=4)  # x is a list of n_points points
        y = Parallel(n_jobs=4)(delayed(branin)(v) for v in x)  # evaluate points in parallel
        optimizer.tell(x, y)

    # takes ~ 20 sec to get here
    print(min(optimizer.yi))  # print the best objective found
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    0.40654213510989656
.. GENERATED FROM PYTHON SOURCE LINES 69-82
Note that if `n_points` is set to some integer n > 0 for the `ask` method, the
result will be a list of points, even for `n_points=1`. If the argument is
set to `None` (the default value), a single point (not a list of points) is
returned.
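
For instance, continuing with the `optimizer` instance from the example above
(the variable names here are only illustrative):

.. code-block:: Python

    single_point = optimizer.ask()           # one point, e.g. [2.1, 7.4]
    list_of_one = optimizer.ask(n_points=1)  # a list containing one point, e.g. [[2.1, 7.4]]
    several = optimizer.ask(n_points=4)      # a list containing four points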
The default "minimum constant liar" [1]_ parallelization strategy is used in
the example, which allows to obtain multiple points for evaluation with a
single call to the `ask` method with any surrogate or acquisition function.
Parallelization strategy can be set using the "strategy" argument of `ask`.
For supported parallelization strategies see the documentation of
scikit-optimize.
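
As a sketch, again reusing the `optimizer` instance from the example above:
``"cl_min"`` is the default strategy, and ``"cl_mean"`` / ``"cl_max"`` are assumed
here to be the other constant-liar variants; check the `ask` documentation for the
exact set of supported values.

.. code-block:: Python

    # explicitly request the default "minimum constant liar" strategy ...
    x = optimizer.ask(n_points=4, strategy="cl_min")

    # ... or one of the other constant-liar variants
    x = optimizer.ask(n_points=4, strategy="cl_mean")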
.. [1] `<https://hal.archives-ouvertes.fr/hal-00732512/document>`_
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 17.513 seconds)
.. _sphx_glr_download_auto_examples_parallel-optimization.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/holgern/scikit-optimize/master?urlpath=lab/tree/notebooks/auto_examples/parallel-optimization.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: parallel-optimization.ipynb <parallel-optimization.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: parallel-optimization.py <parallel-optimization.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_