<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html;CHARSET=iso-8859-1">
<META NAME="VPSiteProject" CONTENT="file:///E|/euler/docs/Euler.vpp">
<META NAME="GENERATOR" Content="Visual Page 2.0 for Windows">
<TITLE>Polynomials</TITLE>
<BASE target="_self">
<LINK REL="stylesheet" TYPE="text/css" HREF="euler.css">
</HEAD>
<BODY>
<H1 ALIGN="CENTER">Polynomials</H1>
<P>Shows you
<UL>
<LI><A HREF="#Basics">how Euler stores polynomials and how you can evaluate them</A>,
<LI><A HREF="#Interpolation">how to interpolate</A>,
<LI><A HREF="#FFT">how to use the Fast Fourier Transform (FFT)</A>.
</UL>
<H2 ALIGN="CENTER"><A NAME="Basics"></A>Basics</H2>
<P>EULER stores a polynomial a+bx+...+cx^n in the form [a,b,...,c]. Note that this is the reverse of the Matlab convention,
which lists the highest coefficient first. A polynomial is evaluated with Horner's scheme by</P>
<PRE> >polyval(p,x)
</PRE>
<P>where x can be a matrix, of course. If you want a more exact answer in case of a badly conditioned polynomial,
you may use</P>
<PRE> >xpolyval(p,x)
</PRE>
<P>You may pass an additional argument giving the maximal number of iterations. This function uses the exact scalar product for a residual
iteration. Of course, it is slower than Horner's scheme.</P>
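<P>As an illustrative sketch (the printed output is not shown verbatim): the polynomial 1+2x+3x^2 is stored as [1,2,3], so</P>
<PRE> >polyval([1,2,3],2)
</PRE>
<P>should return 1+2*2+3*4=17. With a vector or matrix for x, every element is evaluated at once.</P>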
<P>One can multiply polynomials with</P>
<PRE> >polymult(p,q)
</PRE>
<P>or add them with</P>
<PRE> >polyadd(p,q)
</PRE>
<P>The polynomials need not have the same degree.</P>
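<P>For instance, multiplying 1+x by itself should give 1+2x+x^2, and adding 1+x and x^2 should give 1+x+x^2 (a sketch of the calls, not verbatim output):</P>
<PRE> >polymult([1,1],[1,1])
 >polyadd([1,1],[0,0,1])
</PRE>
<P>The expected results are [1,2,1] and [1,1,1].</P>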
<PRE> >polydiv(p,q)
</PRE>
<P>divides p by q and returns the multiple values {result,remainder}.</P>
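<P>As a sketch, dividing x^2-1 (stored as [-1,0,1]) by x-1 (stored as [-1,1]) should give the quotient x+1 with remainder 0:</P>
<PRE> >{result,remainder}=polydiv([-1,0,1],[-1,1])
</PRE>
<P>so result should be [1,1] and remainder 0 (the exact printed form may differ).</P>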
<PRE> >polytrunc(p)
</PRE>
<P>truncates a polynomial to its true degree (using <A HREF="#345">epsilon</A>). In UTIL</P>
<PRE> >polydif(p)
</PRE>
<P>is defined. It differentiates the polynomial once. To construct a polynomial with prescribed zeros z=[z1,...,zn]</P>
<PRE> >polycons(z)
</PRE>
<P>is used. The reverse is obtained with</P>
<PRE> >polysolve(p)
</PRE>
<P>This function uses the Bauhuber method, which converges very stably to the zeros. However, multiple zeros always
degrade the attainable accuracy (though not the speed of convergence).</P>
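<P>A small sketch of how these functions fit together: the polynomial with zeros 1 and 2 is (x-1)(x-2)=2-3x+x^2, so (output not shown verbatim)</P>
<PRE> >p=polycons([1,2])
 >polysolve(p)
 >polydif(p)
</PRE>
<P>should give p=[2,-3,1], recover the zeros 1 and 2 (up to rounding), and return [-3,2], the coefficients of the derivative -3+2x.</P>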
<H2 ALIGN="CENTER"><A NAME="Interpolation"></A>Interpolation</H2>
<P>Polynomial interpolation can be done with</P>
<PRE> >d=interp(t,s)
</PRE>
<P>where t and s are vectors. The result is a polynomial in divided differences (Newton) form, and can be evaluated
by</P>
<PRE> >interpval(t,d,x)
</PRE>
<P>at x. To transform the Newton form to the usual polynomial form</P>
<PRE> >polytrans(t,d)
</PRE>
<P>may be used.
<H2 ALIGN="CENTER"><A NAME="FFT"></A>Fast Fourier Transform (FFT)</H2>
<P><IMG SRC="fft3.gif" WIDTH="298" HEIGHT="252" ALIGN="RIGHT" BORDER="0"></P>
<P>Interpolation in the roots of unity can be done with the fast Fourier transform</P>
<PRE> >p=ifft(s)
</PRE>
<P>Then p(exp(2*pi*i*k/n))=s[k+1] for 0<=k<=n-1. For maximal speed, n should be a power of 2, or at least have
many low order factors. The reverse function evaluates a polynomial at the roots of unity simultaneously</P>
<PRE> >s=fft(p)
</PRE>
<P>Note that polynomials have the lowest coefficient first. Both functions are most often used in computations
of trigonometric sums. The example file contains a demonstration.</P>
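<P>As an illustrative sketch with n=4: the values [1,0,1,0] at the 4th roots of unity 1, i, -1, -i belong to the polynomial 0.5+0.5x^2, so</P>
<PRE> >p=ifft([1,0,1,0])
 >fft(p)
</PRE>
<P>should give the coefficients [0.5,0,0.5,0] (up to rounding and complex formatting) and then recover [1,0,1,0].</P>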
<P>If A is a matrix</P>
<PRE> >fft(A)
</PRE>
<P>will compute the two-dimensional Fast Fourier Transform of A. In this case, the numbers of columns and rows of A
must be powers of 2. ifft(A) returns the inverse.</P>
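<P>A minimal round-trip sketch, assuming the usual random(n,m) generator is available (8 is a power of 2):</P>
<PRE> >A=random(8,8); ifft(fft(A))
</PRE>
<P>should reproduce A up to rounding errors.</P>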
</BODY>
</HTML>