File: optimization_algorithm.rst

.. _optimization_algorithm:

Optimization Algorithms
-----------------------

The method is used in the following context:
:math:`\vect{x}= \left( x^1,\ldots,x^{n_X} \right)` is a vector of
unknown variables, :math:`\vect{d}` is a vector considered to be well
known or whose uncertainty is negligible, and
:math:`y = h(\vect{x},\vect{d})` is the scalar variable of interest.
The objective is to determine the extreme (minimum and maximum)
values of :math:`y` when :math:`\vect{x}` varies.
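
This setting can be reproduced with the library, for instance as in the
minimal sketch below, where the function :math:`h`, the nominal value of
:math:`\vect{d}` and the bounds are hypothetical placeholders:

.. code-block:: python

    import openturns as ot

    # Hypothetical scalar variable of interest y = h(x, d); d is frozen at a
    # nominal value since its uncertainty is considered negligible.
    h = ot.SymbolicFunction(["x1", "x2", "d"], ["d + (x1 - 1)^2 + 2 * (x2 + 0.5)^2"])
    objective = ot.ParametricFunction(h, [2], [0.5])  # fix d = 0.5

    # Bounded search domain for x; a minimization problem here, use
    # problem.setMinimization(False) to search for the maximum instead.
    problem = ot.OptimizationProblem(objective)
    problem.setBounds(ot.Interval([-2.0, -2.0], [2.0, 2.0]))
    problem.setMinimization(True)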

| Several optimization algorithms can be used. We present the
  principle of the TNC (Truncated Newton Constrained) algorithm, which
  minimizes a function of variables subject to bounds, using gradient
  information.
| Truncated-Newton methods are a family of methods suitable for solving
  large nonlinear optimization problems. At each iteration, the current
  estimate of the solution is updated by approximately solving the
  Newton equations using an iterative algorithm. This results in a
  doubly iterative method: an outer iteration for the nonlinear
  optimization problem and an inner iteration for the Newton equations.
  The inner iteration is typically stopped or *truncated* before the
  solution to the Newton equations is obtained.
| The TNC algorithm solves:
  :math:`\min_{\vect{x} \in [\vect{a},\vect{b}] \subset \overline{\Rset}^n} f(\vect{x})`
  and searches, assuming sufficient regularity of the objective
  function :math:`f`, for a point such that:

  .. math::

     \begin{aligned}
         \left\{
         \begin{array}{l}
           \vect{\nabla f}(\vect{x}) =\vect{0}  \\
           \mat{\nabla_2} f(\vect{x}) \mbox{ is positive definite}
         \end{array}
         \right.
       \end{aligned}
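
In the library, this bound-constrained minimization is carried out by the
:class:`~openturns.TNC` algorithm. The sketch below is a minimal example on
a simple quadratic objective chosen purely for illustration:

.. code-block:: python

    import openturns as ot

    # Smooth objective with a bounded search domain.
    objective = ot.SymbolicFunction(["x1", "x2"], ["(x1 - 1)^2 + 2 * (x2 + 0.5)^2"])
    problem = ot.OptimizationProblem(objective)
    problem.setBounds(ot.Interval([-2.0, -2.0], [2.0, 2.0]))

    # Solve with TNC from a given starting point.
    algo = ot.TNC(problem)
    algo.setStartingPoint([0.0, 0.0])
    algo.run()
    result = algo.getResult()
    print(result.getOptimalPoint(), result.getOptimalValue())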

The second-order Taylor expansion of :math:`f` around
:math:`\vect{x}_k` leads to the determination of the iterate
:math:`\vect{x}_{k+1}` such that:

.. math::
  :label: linearSystem

    \left\{
    \begin{array}{lcl}
      \vect{\Delta}_k & = & \vect{x}_{k+1} - \vect{x}_k  \\
      \mat{\nabla_2} f(\vect{x}_k)\vect{\Delta}_k & = & -\vect{\nabla f}(\vect{x}_k)
    \end{array}
    \right.

The equation :eq:`linearSystem` is truncated: the iterative search for
:math:`\vect{\Delta}_k` is stopped as soon as :math:`\vect{\Delta}_k`
satisfies:

.. math::

    || \mat{\nabla_2} f(\vect{x}_k)\vect{\Delta}_k + \vect{\nabla f}(\vect{x}_k) || \leq \eta ||\vect{\nabla f}(\vect{x}_k) ||
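
To make the role of the truncation concrete, the minimal sketch below solves
the Newton equations :eq:`linearSystem` exactly on the quadratic objective
used above; TNC instead only approximates this step with its truncated inner
iteration:

.. code-block:: python

    import openturns as ot

    f = ot.SymbolicFunction(["x1", "x2"], ["(x1 - 1)^2 + 2 * (x2 + 0.5)^2"])
    xk = ot.Point([0.0, 0.0])

    # Gradient (an n x 1 matrix for a scalar output) and Hessian at x_k.
    grad = f.gradient(xk)
    g = ot.Point([grad[i, 0] for i in range(xk.getDimension())])
    hess = f.hessian(xk).getSheet(0)

    # Exact Newton step: solve nabla_2 f(x_k) Delta_k = -nabla f(x_k).
    delta = hess.solveLinearSystem(g * (-1.0))
    print(delta)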

Finally, the iterate :math:`\vect{x}_{k+1}` is defined by:

.. math::

    \vect{x}_{k+1} = \vect{x}_k + \alpha_k \vect{\Delta}_k

where :math:`\alpha_k` is the parameter *stepmx*.
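
Both the truncation severity :math:`\eta` and *stepmx* are exposed as
parameters of the :class:`~openturns.TNC` class; the values below are
arbitrary placeholders:

.. code-block:: python

    import openturns as ot

    algo = ot.TNC()
    algo.setEta(0.25)     # severity of the truncation test of the inner iteration
    algo.setStepmx(10.0)  # maximum step length for the update of the iterate
    print(algo.getEta(), algo.getStepmx())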


.. topic:: API:

    - See :class:`~openturns.TNC`
    - See the available :ref:`optimization algorithms <optimization_algorithms>`.

.. topic:: Examples:

    - See :doc:`/auto_numerical_methods/optimization/plot_optimization_rastrigin`
    - See :doc:`/auto_numerical_methods/optimization/plot_optimization_rosenbrock`
    - See :doc:`/auto_numerical_methods/optimization/plot_optimization_constraints`
    - See :doc:`/auto_numerical_methods/optimization/plot_optimization_nlopt`

.. topic:: References:

    - [nash1999]_