<html lang="en">
<head>
<title>Multimin Algorithms - GNU Scientific Library -- Reference Manual</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="GNU Scientific Library -- Reference Manual">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Multidimensional-Minimization.html" title="Multidimensional Minimization">
<link rel="prev" href="Multimin-Stopping-Criteria.html" title="Multimin Stopping Criteria">
<link rel="next" href="Multimin-Examples.html" title="Multimin Examples">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007 The GSL Team.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``GNU General Public License'' and ``Free Software
Needs Free Documentation'', the Front-Cover text being ``A GNU Manual'',
and with the Back-Cover Text being (a) (see below).  A copy of the
license is included in the section entitled ``GNU Free Documentation
License''.

(a) The Back-Cover Text is: ``You have freedom to copy and modify this
GNU Manual, like GNU software.''-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
  pre.display { font-family:inherit }
  pre.format  { font-family:inherit }
  pre.smalldisplay { font-family:inherit; font-size:smaller }
  pre.smallformat  { font-family:inherit; font-size:smaller }
  pre.smallexample { font-size:smaller }
  pre.smalllisp    { font-size:smaller }
  span.sc    { font-variant:small-caps }
  span.roman { font-family:serif; font-weight:normal; } 
  span.sansserif { font-family:sans-serif; font-weight:normal; } 
--></style>
</head>
<body>
<div class="node">
<p>
<a name="Multimin-Algorithms"></a>
Next:&nbsp;<a rel="next" accesskey="n" href="Multimin-Examples.html">Multimin Examples</a>,
Previous:&nbsp;<a rel="previous" accesskey="p" href="Multimin-Stopping-Criteria.html">Multimin Stopping Criteria</a>,
Up:&nbsp;<a rel="up" accesskey="u" href="Multidimensional-Minimization.html">Multidimensional Minimization</a>
<hr>
</div>

<h3 class="section">35.7 Algorithms</h3>

<p>There are several minimization methods available. The best choice of
algorithm depends on the problem.  All of the algorithms use the value
of the function and its gradient at each evaluation point, except for
the Simplex algorithm, which uses function values only.

<div class="defun">
&mdash; Minimizer: <b>gsl_multimin_fdfminimizer_conjugate_fr</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fconjugate_005ffr-2348"></a></var><br>
<blockquote><p><a name="index-Fletcher_002dReeves-conjugate-gradient-algorithm_002c-minimization-2349"></a><a name="index-Conjugate-gradient-algorithm_002c-minimization-2350"></a><a name="index-minimization_002c-conjugate-gradient-algorithm-2351"></a>This is the Fletcher-Reeves conjugate gradient algorithm. The conjugate
gradient algorithm proceeds as a succession of line minimizations. The
sequence of search directions is used to build up an approximation to the
curvature of the function in the neighborhood of the minimum.

        <p>An initial search direction <var>p</var> is chosen using the gradient, and line
minimization is carried out in that direction.  The accuracy of the line
minimization is specified by the parameter <var>tol</var>.  The minimum
along this line occurs when the function gradient <var>g</var> and the search direction
<var>p</var> are orthogonal.  The line minimization terminates when
<!-- {$p\cdot g < tol |p| |g|$} -->
dot(p,g) &lt; tol |p| |g|.  The
search direction is updated using the Fletcher-Reeves formula
p' = g' - \beta p, where \beta = -|g'|^2/|g|^2, and
the line minimization is then repeated for the new search
direction. 
</p></blockquote></div>

<div class="defun">
&mdash; Minimizer: <b>gsl_multimin_fdfminimizer_conjugate_pr</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fconjugate_005fpr-2352"></a></var><br>
<blockquote><p><a name="index-Polak_002dRibiere-algorithm_002c-minimization-2353"></a><a name="index-minimization_002c-Polak_002dRibiere-algorithm-2354"></a>This is the Polak-Ribiere conjugate gradient algorithm.  It is similar
to the Fletcher-Reeves method, differing only in the choice of the
coefficient \beta, which for Polak-Ribiere is \beta = g'.(g' - g)/|g|^2.
Both methods work well when the evaluation
point is close enough to the minimum of the objective function that it
is well approximated by a quadratic hypersurface. 
</p></blockquote></div>

<div class="defun">
&mdash; Minimizer: <b>gsl_multimin_fdfminimizer_vector_bfgs2</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fvector_005fbfgs2-2355"></a></var><br>
&mdash; Minimizer: <b>gsl_multimin_fdfminimizer_vector_bfgs</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fvector_005fbfgs-2356"></a></var><br>
<blockquote><p><a name="index-BFGS-algorithm_002c-minimization-2357"></a><a name="index-minimization_002c-BFGS-algorithm-2358"></a>These methods use the vector Broyden-Fletcher-Goldfarb-Shanno (BFGS)
algorithm.  This is a quasi-Newton method which builds up an approximation
to the second derivatives of the function f using the difference
between successive gradient vectors.  By combining the first and second
derivatives the algorithm is able to take Newton-type steps towards the
function minimum, assuming quadratic behavior in that region.

        <p>The <code>bfgs2</code> version of this minimizer is the most efficient
version available, and is a faithful implementation of the line
minimization scheme described in Fletcher's <cite>Practical Methods of
Optimization</cite>, Algorithms 2.6.2 and 2.6.4.  It supersedes the original
<code>bfgs</code> routine and requires substantially fewer function and
gradient evaluations.  The user-supplied tolerance <var>tol</var>
corresponds to the parameter \sigma used by Fletcher.  A value
of 0.1 is recommended for typical use (larger values correspond to
less accurate line searches).

        </blockquote></div>

<div class="defun">
&mdash; Minimizer: <b>gsl_multimin_fdfminimizer_steepest_descent</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fsteepest_005fdescent-2359"></a></var><br>
<blockquote><p><a name="index-steepest-descent-algorithm_002c-minimization-2360"></a><a name="index-minimization_002c-steepest-descent-algorithm-2361"></a>The steepest descent algorithm follows the downhill gradient of the
function at each step. When a downhill step is successful the step-size
is increased by a factor of two.  If the downhill step leads to a higher
function value then the algorithm backtracks and the step size is
decreased using the parameter <var>tol</var>.  A suitable value of <var>tol</var>
for most applications is 0.1.  The steepest descent method is
inefficient and is included only for demonstration purposes. 
</p></blockquote></div>

<div class="defun">
&mdash; Minimizer: <b>gsl_multimin_fminimizer_nmsimplex</b><var><a name="index-gsl_005fmultimin_005ffminimizer_005fnmsimplex-2362"></a></var><br>
<blockquote><p><a name="index-Nelder_002dMead-simplex-algorithm-for-minimization-2363"></a><a name="index-simplex-algorithm_002c-minimization-2364"></a><a name="index-minimization_002c-simplex-algorithm-2365"></a>This is the Simplex algorithm of Nelder and Mead. It constructs
n+1 vectors p_i from the
starting vector <var>x</var> and the vector <var>step_size</var> as follows:

     <pre class="example">          p_0 = (x_0, x_1, ... , x_n)
          p_1 = (x_0 + step_size_0, x_1, ... , x_n)
          p_2 = (x_0, x_1 + step_size_1, ... , x_n)
          ... = ...
          p_n = (x_0, x_1, ... , x_n+step_size_n)
</pre>
        <p class="noindent">These vectors form the n+1 vertices of a simplex in n
dimensions.  On each iteration the algorithm tries to improve
the parameter vector p_i corresponding to the highest
function value by simple geometrical transformations.  These
are reflection, reflection followed by expansion, contraction and multiple
contraction. Using these transformations the simplex moves through
the parameter space towards the minimum, where it contracts itself.

        <p>After each iteration, the best vertex is returned.  Note that, due to
the nature of the algorithm, not every step improves the current
best parameter vector.  Usually several iterations are required.

        <p>The routine calculates the minimizer-specific characteristic size as the
average distance from the geometrical center of the simplex to all its
vertices.  This size can be used as a stopping criterion, as the simplex
contracts itself near the minimum. The size is returned by the function
<code>gsl_multimin_fminimizer_size</code>. 
</p></blockquote></div>

<hr>The GNU Scientific Library - a free numerical library licensed under the GNU GPL<br>Back to the <a href="/software/gsl/">GNU Scientific Library Homepage</a></body></html>