<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html;CHARSET=iso-8859-1">
<META NAME="VPSiteProject" CONTENT="file:///E|/euler/docs/Euler.vpp">
<META NAME="GENERATOR" Content="Visual Page 2.0 for Windows">
<TITLE>Special Problem Solvers</TITLE>
<BASE target="_self">
<LINK REL="stylesheet" TYPE="text/css" HREF="euler.css">
</HEAD>
<BODY>
<H1 ALIGN="CENTER">Problem Solvers</H1>
<P>This section describes some problem solvers. These functions are either built in or read from a file at program
start. We explain</P>
<UL>
<LI><A HREF="#Solving Equations">how to solve equations numerically</A>,
<LI><A HREF="#Optimization">how to find minima and maxima of a function in one or several variables</A>,
<LI><A HREF="#Linear Optimization">linear optimization with the Simplex method</A>,
<LI><A HREF="#Integration">numerical integration</A>,
<LI><A HREF="#Differentiation">numerical differentiation</A>,
<LI><A HREF="#Splines">splines</A>,
<LI><A HREF="#Regression Analysis">regression analysis and best fit</A>,
<LI><A HREF="#Iteration and Fixed Points">how to do iteration and find fixed points</A>,
<LI><A HREF="#Differential Equations">numerical solution of differential equations</A>.
</UL>
<H2 ALIGN="CENTER"><A NAME="Solving Equations"></A>Solving Equations</H2>
<P>In UTIL some functions are defined which are of general use. First there are some functions for solving
equations.</P>
<PRE> >bisect("f",a,b,...)
</PRE>
<P>uses the bisection method to solve f(x,...)=0. f must be a function of the real variable x which may have additional
parameters "...". These parameters can be passed to bisect and are in turn passed to f. You can use the
syntax</P>
<PRE> >bisect("f",a,b;v)
</PRE>
<P>to pass the additional parameter v to f(x,v). This syntax applies to all functions below, which accept additional
parameters to be passed to a function.</P>
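<P>As a short sketch of this syntax (assuming the function below is defined in the notebook), the root of x^2-3
in [1,2] can be found by passing 3 as the extra parameter a; the result should be close to sqrt(3), about 1.732:</P>
<PRE> >function f(x,a)
$  return x^2-a;
$endfunction
 >bisect("f",1,2;3)
</PRE>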
<P>The method uses the internal epsilon to stop the iteration. You can specify your own epsilon with</P>
<PRE> >bisect("f",a,b,eps=0.1)
</PRE>
<P>Note that parameters with assigned values must come last in the parameter list. The eps parameter can be applied to many
functions in this section.</P>
<P>It is possible to pass an expression to bisect. This expression must be a string containing a valid EULER expression
for f(x). The name of the unknown variable must be x.</P>
<PRE> >bisect("x^2-2",1,2);
</PRE>
<P>All functions in this section will work with expressions in x.</P>
<PRE> >secant("f",a,b,...)
</PRE>
<P>uses the secant method for the same purpose. "f" may again be an expression containing the variable
x.</P>
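<P>For example (a sketch), the secant method applied to the expression x^2-2 on [1,2] should return a value close
to sqrt(2):</P>
<PRE> >secant("x^2-2",1,2)
</PRE>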
<P>If you want to find the root of an expression containing one or more variables, you can use</P>
<PRE> >root("expr",x)
</PRE>
<P>expr must be any valid EULER expression. All variables in this expression must be global variables of type real.
This function should not be used inside other functions, unless that function sets the global variables first! x must
be the name of the variable for which you want to solve expr==0. The new value of x solves the expression. E.g.,</P>
<PRE> >a=2; x=1; root("x^2-a",x)
</PRE>
<P>will set x equal to sqrt(2).</P>
<P>If you need to find roots of a function of several variables, you can use Newton's method or Broyden's
method.</P>
<PRE> >broyden("f",x)
</PRE>
<P>or</P>
<PRE> >broyden("f",x,J,...)
</PRE>
<P>uses the Broyden method to find the roots of f(x,...)=0. This time, f is a function of a vector x, and x is the
starting point of the iteration. J is an approximation of the Jacobian at x. If J==0 or J is missing, then the function
computes J itself. Again, additional parameters are passed to f.</P>
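<P>As a sketch, the following solves the system x+y=3, x*y=2 with Broyden's method; starting from [1,2.5] it should
converge to one of the solutions (1,2) or (2,1):</P>
<PRE> >function f(x)
$  return [x[1]+x[2]-3,x[1]*x[2]-2];
$endfunction
 >broyden("f",[1,2.5])
</PRE>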
<PRE> >newton("f","f1",x,...)
</PRE>
<P>finds a zero of f using the Newton method for a function of one variable. You must provide the derivative
of f in f1. Similarly,</P>
<PRE> >newton2("f","Df",x,...)
</PRE>
<P>is used for several variables. f(x) must compute a 1xn vector y from the 1xn vector x. Df(x) must compute
the nxn derivative matrix (the Jacobian) at x.</P>
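<P>A small sketch for newton2: solving the system x1^2+x2^2=1, x1=x2 with its Jacobian should converge from the
starting point [1,0.5] to approximately [0.707,0.707]:</P>
<PRE> >function f(x)
$  return [x[1]^2+x[2]^2-1,x[1]-x[2]];
$endfunction
 >function Df(x)
$  return [2*x[1],2*x[2];1,-1];
$endfunction
 >newton2("f","Df",[1,0.5])
</PRE>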
<H2 ALIGN="CENTER"><A NAME="Optimization"></A>Optimization</H2>
<P>The minimum of a convex function (maximum of a concave function) can be computed with fmin (fmax). E.g.</P>
<PRE> >fmin("x^3-x",0,1)
</PRE>
<P>If f is a function, you may also use fmin("f",0,1). As always, additional parameters are passed to
f.</P>
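<P>For instance (a sketch), the maximum of the concave function x-x^2 on [0,1] lies at 0.5:</P>
<PRE> >fmax("x-x^2",0,1)
</PRE>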
<P>The function fmin uses the Golden Ratio method. However, Brent's method may be faster and is available with</P>
<PRE> >brentmin("x^3-x",0.5)
</PRE>
<P>where the parameters are an initial guess, and optionally a step size for the search and a stopping epsilon.
The basic built-in function is brent. All minima and maxima of a function or expression (in x) are computed
with</P>
<PRE> >fextrema("x^3-x",a,b)
</PRE>
<P>where a and b are the interval ends. There may be an optional n parameter to determine the number of subintervals
to be investigated. The minimum of a function in several variables can be found with the Nelder-Mead simplex algorithm.</P>
<PRE> >neldermin("f",v)
</PRE>
<P>This time, f must take a 1xn vector as an argument and return a real number. Or you may pass an expression in
x, which evaluates to a real for a 1xn vector x. v is the starting point for the search. The basic built-in function
is nelder. Optionally, you may pass a delta as the size of the first simplex and a stopping epsilon.</P>
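<P>A sketch for neldermin: minimizing the sum of squares (x1-1)^2+(x2-2)^2 from the starting point [0,0] should
converge to a point close to [1,2]:</P>
<PRE> >function f(x)
$  return (x[1]-1)^2+(x[2]-2)^2;
$endfunction
 >neldermin("f",[0,0])
</PRE>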
<H2 ALIGN="CENTER"><A NAME="Linear Optimization"></A>Linear Optimization</H2>
<P>EULER has a built-in Simplex algorithm, which minimizes c'*x among all x satisfying x&gt;=0 and A*x&lt;=b.</P>
<PRE> >simplex(A,b,c)
</PRE>
<P>will compute the solution x. To check for feasibility and boundedness, use</P>
<PRE> >{x,r}=simplex(A,b,c)
</PRE>
<P>r will be 0 if x is a correct minimum. If r is -1, the problem is not feasible. If r is 1, the problem is unbounded;
in this case, x contains a feasible point. In any case A must be a real mxn matrix, b an mx1 matrix (a column vector),
and c a 1xn matrix (a row vector). x will be an nx1 column vector.</P>
<P>Note that the internal epsilon is used for the numerical computations.</P>
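<P>As a sketch, the following minimizes -x1-x2 subject to x1+2*x2&lt;=4, 3*x1+x2&lt;=6 and x&gt;=0; the minimizer
should be approximately (1.6,1.2) with r equal to 0:</P>
<PRE> >A=[1,2;3,1]; b=[4;6]; c=[-1,-1];
 >{x,r}=simplex(A,b,c)
</PRE>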
<H2 ALIGN="CENTER"><A NAME="Integration"></A>Integration</H2>
<PRE> >simpson("f",a,b)
</PRE>
<P>or</P>
<PRE> >simpson("f",a,b,n,...)
</PRE>
<P>computes the Simpson integral of f over [a,b]. n is the discretization, and additional parameters are passed
to f(x,...). f must be able to handle vector input for x. Again, "f" may be an expression in x.</P>
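<P>For example (a sketch), the Simpson rule applied to x^2 on [0,1] should return a value close to 1/3:</P>
<PRE> >simpson("x^2",0,1)
</PRE>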
<PRE> >gauss("f",a,b)
</PRE>
<P>or</P>
<PRE> >gauss("f",a,b,n,...)
</PRE>
<P>performs Gauss integration with 10 knots. If n is specified, then the interval is subdivided into n subintervals.
f(x,...) must be able to handle vector input for x. "f" may be an expression in x.</P>
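<P>A sketch: Gauss integration of sin(x) over [0,pi] should give a value very close to 2:</P>
<PRE> >gauss("sin(x)",0,pi)
</PRE>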
<PRE> >romberg("f",a,b)
</PRE>
<P>or</P>
<PRE> >romberg("f",a,b,n,...)
</PRE>
<P>uses the Romberg method for integration. n is the starting discretization. All other parameters are as in "simpson".
Again, "f" may be an expression containing the variable x.</P>
<P>There is also an adaptive integrator</P>
<PRE> >adaptiveint("f",a,b,eps)
</PRE>
<P>where eps is the desired accuracy. Optionally, you may pass the number of initial subdivisions. Further parameters
are passed to f.</P>
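<P>For example (a sketch), integrating 1/(1+x^2) over [0,1] with accuracy 1e-8 should give a value close to pi/4:</P>
<PRE> >adaptiveint("1/(1+x^2)",0,1,1e-8)
</PRE>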
<H2 ALIGN="CENTER"><A NAME="Differentiation"></A>Differentiation</H2>
<P>To take the derivative, one may use the dif function, as in</P>
<PRE> >dif("f",x)
>dif("f",x,n)
</PRE>
<P>(the latter for the n-th derivative). Again, "f" may be an expression in "x". There are also
some approximation tools; polynomial interpolation has been discussed above.</P>
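<P>For dif itself, a short sketch: the first and second derivative of x^3 at x=1 should come out close to the exact
values 3 and 6:</P>
<PRE> >dif("x^3",1)
 >dif("x^3",1,2)
</PRE>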
<H2 ALIGN="CENTER"><A NAME="Splines"></A>Splines</H2>
<P>There is also spline interpolation</P>
<PRE> >sp=spline(t,s)
</PRE>
<P>which returns the second derivatives of the natural cubic spline through (t[i],s[i]). To evaluate this spline
at x,</P>
<PRE> >splineval(t,s,sp,x)
</PRE>
<P>is available.</P>
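<P>A short sketch: a natural spline through the points (0,0), (1,1), (2,4), (3,9) evaluated at 1.5 should give a
value near 2.25:</P>
<PRE> >t=[0,1,2,3]; s=[0,1,4,9];
 >sp=spline(t,s);
 >splineval(t,s,sp,1.5)
</PRE>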
<H2 ALIGN="CENTER"><A NAME="Regression Analysis"></A>Regression Analysis</H2>
<PRE> >polyfit(t,s,n)
</PRE>
<P>fits a polynomial of n-th degree to (t[i],s[i]) in the least-squares sense. This is an application of</P>
<PRE> >fit(A,b)
</PRE>
<P>which computes x such that the L_2-norm of Ax-b is minimal.</P>
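<P>A sketch for polyfit: fitting a polynomial of degree 1 to slightly noisy data on the line 1+2*t should return
coefficients close to 1 and 2:</P>
<PRE> >t=[0,1,2,3]; s=[1,2.9,5.1,7];
 >polyfit(t,s,1)
</PRE>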
<H2 ALIGN="CENTER"><A NAME="Iteration and Fixed Points"></A>Iteration and Fixed Points</H2>
<PRE> >iterate("f",x,...)
</PRE>
<P>seeks a fixed point of f by iterating the function from x. If you provide additional arguments ..., these are
passed to f. Of course, f must be a function of type</P>
<PRE> >function f(x)
$...
$endfunction
</PRE>
<P>If you want to see the iterations, use the following variant.</P>
<PRE> >niterate("f",x,n,...)
</PRE>
<P>iterates f starting at x n times and returns the vector of iterated values.</P>
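<P>A sketch: iterating the cosine function from 1 should converge to its fixed point near 0.739, and niterate returns
the first iterates:</P>
<PRE> >function f(x)
$  return cos(x);
$endfunction
 >iterate("f",1)
 >niterate("f",1,10)
</PRE>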
<PRE> >steffenson("f",x);
</PRE>
<P>seeks a fixed point starting at x using the Steffenson operator. nsteffenson is similar to niterate.</P>
<PRE> >map("f",x,...)
</PRE>
<P>evaluates the function f(x,...) at all elements of x. Use it when f(x,...) does not work directly because the
function f does not accept a vector x.</P>
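<P>A sketch, assuming the usual if/then/endif syntax in functions: the function below branches on its argument and
therefore cannot take a vector, but map evaluates it element by element:</P>
<PRE> >function f(x)
$  if x>0 then return 1; else return 0; endif;
$endfunction
 >map("f",[-2,0,3])
</PRE>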
<H2 ALIGN="CENTER"><A NAME="Differential Equations"></A>Differential Equations</H2>
<P>One of the differential equation solvers is the Heun method. To use it, write the function f, which defines
the differential equation y'=f(t,y). y can be a 1xN vector. Then call</P>
<PRE> >heun("f",t,y0,...)
</PRE>
<P>The extra parameters ... are passed to f. y0 is the starting value y(t[1]) and t must be a vector of points.
The function returns the values of the solution at t. The accuracy depends on the distances from t[i] to t[i+1].
Heun is implemented via a UTIL function. A faster solver, implemented as a built-in function, is the Runge-Kutta
method and its adaptive variant.</P>
<PRE> >runge("f",t,y0)
</PRE>
<P>does the same as heun. Optionally, a number of steps to take between the points of t can be passed. Thus you
have to pass additional arguments for f as</P>
<PRE> >runge("f",t,y0;...)
</PRE>
<P>The adaptive Runge-Kutta method is called adaptiverunge, and takes an epsilon and an initial step size as optional
arguments. Internally, these functions use the built-in runge1 and runge2 respectively.</P>
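<P>A sketch of the common calling convention: for the equation y'=-y with y(0)=1, both solvers should return values
close to exp(-t) at the points of t:</P>
<PRE> >function f(t,y)
$  return -y;
$endfunction
 >t=0:0.1:1;
 >heun("f",t,1)
 >runge("f",t,1)
</PRE>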
</BODY>
</HTML>