File: algo-basic.chapt.txt

		Adaptive function plotting

Here we consider plotting of functions $y=f(x)$.

There are two tasks related to preparation of plots of functions: first, to produce the numbers required for a plot, and second, to draw a plot with axes, symbols, a legend, perhaps additional illustrations and so on.
Here we only concern ourselves with the first task, that of preparation of the numerical data for a plot.
There are many plotting programs that can read a file with numbers and plot it in any desired manner.

Generating data for plots of functions generally does not require
high-precision calculations.
However, we need an algorithm that can be adjusted to produce data to different levels of precision.
In some particularly ill-behaved cases, a precise plot will not be possible at all, and we would not want to waste time producing data that is more accurate than the plot can actually convey.

*A plotting!adaptive algorithms
*A plotting!two-dimensional

*A {Plot2D'adaptive}

A simple approach to plotting would be to divide the interval into many equal subintervals and to evaluate the function on the resulting grid.
Precision of the plot can be adjusted by choosing a larger or a smaller number of points.

However, this approach is not optimal. Sometimes a function changes rapidly near one point but slowly everywhere else.
For example, $f(x)=1/x$ changes very quickly at small $x$.
Suppose we need to plot this function between $0$ and $100$.
It would be wasteful to use the same subdivision interval everywhere: a finer grid is only required over a small portion of the plotting range near $x=0$.

The adaptive plotting routine {Plot2D'adaptive} uses a simple algorithm
to select the optimal grid to approximate a function of one argument $f(x)$.
The algorithm repeatedly subdivides the grid intervals near points where the existing grid does not represent the function well enough.
A similar algorithm
for adaptive grid refinement could be used for numerical integration. The
idea is that plotting and numerical integration require the same kind of
detailed knowledge about the behavior of the function.

The algorithm first splits the interval into a specified initial number of
equal subintervals, and then repeatedly splits each subinterval in half
until the function is well enough approximated by the resulting grid. The
integer parameter {depth} gives the maximum number of binary splittings for
a given initial interval; thus, at most $2^depth$ additional grid points
will be generated. The function {Plot2D'adaptive} should return a list of
pairs of points {{{x1,y1}, {x2,y2}, ...}} to be used directly for plotting.

The adaptive plotting algorithm works like this:

*	 1.  Given an interval ($a$, $c$), we split
it in half, $b:=(a+c)/2$ and first compute $f(x)$ at five grid
points $a$, $a[1]:=(a+b)/2$, $b$, $b[1]:=(b+c)/2$, $c$. 
*	 2. If currently $depth <= 0$, return this list of five points and
values because we cannot refine the grid any more.
*	 3. Otherwise, check that the function does not oscillate too
rapidly on the interval [$a$, $c$]. 
The formal criterion is that the five values are all finite and do not make a "zigzag" pattern
such as (1,3,2,3,1).
More formally, we use the following procedure:
For each three consecutive values, write "1" if the middle value is larger
than the other two, or if it is smaller than the other two,
or if one of them is not a number (e.g. {Infinity} or {Undefined}).
If at most two "1"s were written, then we consider the change of values to be
"slow enough" and go to step 4.
Otherwise the change is not "slow enough"; the grid needs to be refined, so go to step 5.
*	 4. Check that the function values are smooth enough through the
interval. Smoothness is controlled by a parameter $epsilon$. The
meaning of the parameter $epsilon$ is the (relative) error of the
numerical approximation of the integral of $f(x)$ by the grid. A good heuristic
value of $epsilon$ is 1/(the number of pixels on the screen)
because it means that no pixels will be missing in the area under
the graph. For this to work we need to make sure that we are actually computing the area <i>under</i> the graph; so we define $g(x):=f(x)-f[0]$ where $f[0]$ is the minimum of the values of $f(x)$ on the five grid points $a$, $a[1]$, $b$, $b[1]$, and $c$; the function $g(x)$ is nonnegative and has the minimum value 0.
Then we compute two different Newton-Cotes quadratures
for $ Integrate(x,b,b[1]) g(x) $ using these five points. (Asymmetric
quadratures are chosen to avoid running into an accidental symmetry of the
function; the first quadrature uses points $a$, $a[1]$, $b$, $b[1]$ and the second
quadrature uses $b$, $b[1]$, $c$.) If the
absolute value of the difference between these quadratures is less
than $epsilon$ * (value of the second quadrature), then we
are done and we return the list of these five points and values.
*	 5. Otherwise, we need to refine the grid. We compute
{Plot2D'adaptive} recursively for the two halves of the interval,
that is, for ($a$, $b$) and ($b$, $c$).
We also decrease {depth} by 1 and multiply $epsilon$ by 2 because we need to maintain a constant <i>absolute</i> precision and this means that the relative error for the two subintervals can be twice as large.
The resulting two lists for the two subintervals are
concatenated (excluding the double value at point $b$) and
returned.

This algorithm works well if the initial number of points and the {depth}
parameter are large enough.
These parameters can be adjusted to balance the available computing time and the desired level of detail in the resulting plot.
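
To make the steps above concrete, here is a minimal Python sketch of the same recursive refinement. It is an illustration only, not the actual {Plot2D'adaptive} code; the function names, the treatment of singular values and the default parameter values are choices made for this example.

	import math
	
	def safe_eval(f, x):
	    """Evaluate f(x), returning None for singular or non-finite values."""
	    try:
	        y = f(x)
	        return y if math.isfinite(y) else None
	    except (ZeroDivisionError, OverflowError, ValueError):
	        return None
	
	def plot2d_adaptive(f, a, c, depth=5, eps=1e-4):
	    """Return a list of (x, y) pairs approximating y = f(x) on [a, c]."""
	    b = (a + c) / 2
	    xs = [a, (a + b) / 2, b, (b + c) / 2, c]
	    ys = [safe_eval(f, x) for x in xs]
	    if depth <= 0:                      # step 2: no more refinement allowed
	        return list(zip(xs, ys))
	    # step 3: count local extrema ("zigzag" test); a non-number also counts as a "1"
	    ones = 0
	    for y0, y1, y2 in zip(ys, ys[1:], ys[2:]):
	        if any(v is None for v in (y0, y1, y2)) or (y1 - y0) * (y2 - y1) < 0:
	            ones += 1
	    if ones <= 2 and all(v is not None for v in ys):
	        # step 4: compare two asymmetric Newton-Cotes quadratures for Integrate(x,b,b1) g(x)
	        g = [y - min(ys) for y in ys]
	        h = (c - a) / 4
	        q1 = h * (g[0] / 24 - 5 * g[1] / 24 + 19 * g[2] / 24 + 3 * g[3] / 8)  # uses a, a1, b, b1
	        q2 = h * (5 * g[2] / 12 + 2 * g[3] / 3 - g[4] / 12)                   # uses b, b1, c
	        if abs(q1 - q2) <= eps * abs(q2):
	            return list(zip(xs, ys))
	    # step 5: refine both halves with depth-1 and 2*eps; drop the duplicate point at b
	    left = plot2d_adaptive(f, a, b, depth - 1, 2 * eps)
	    return left + plot2d_adaptive(f, b, c, depth - 1, 2 * eps)[1:]

With these conventions, calling {plot2d_adaptive} on the function $1/x$ over (0, 100) concentrates its grid points near $x=0$, as discussed above.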

Singularities in the function are handled by step 3.
Namely, the change in the sequence $a$, $a[1]$, $b$, $b[1]$, $c$ is always considered to be "too rapid" if one of these values is a non-number (e.g. {Infinity} or {Undefined}).
Thus, the interval immediately adjacent to a singularity will be plotted at the highest allowed refinement level. When preparing the plotting data, the singular points are simply not printed to the data file, so that
a plotting program does not encounter any problems.

	    Newton-Cotes quadratures

*A Newton-Cotes quadratures

The meaning of Newton-Cotes quadrature coefficients is that an integral of a function $f(x)$ is approximated by a sum,
$$(Integrate(x,a[0],a[n]) f(x)) <=> h*Sum(k,0,n,c[k]*f(a[k]))$$,
where $a[k]$ are the grid points, $h:=a[1]-a[0]$ is the grid step, and
$c[k]$ are the quadrature coefficients.
It may seem surprising, but these coefficients $c[k]$ are independent
of the function $f(x)$ and can be precomputed
in advance for a given grid $a[k]$.
[The quadrature coefficients do depend on the relative separations of the grid.
Here we assume a uniform grid with a constant step $h=a[k]-a[k-1]$.
Quadrature coefficients can also be found for non-uniform grids.]

The coefficients $c[k]$ for
grids with a constant step $h$ can be found, for example, by solving the following system of equations,
$$Sum(k, 0, n, c[k]*k^p) = n^(p+1)/(p+1)$$
for $p=0$, 1, ..., $n$. This system of equations means that the quadrature correctly gives the integrals of $p+1$ functions $f(x)=x^p$, $p=0$, 1, ..., $n$, over the interval (0, $n$).
The solution of this system always exists and gives quadrature coefficients as rational numbers. For example, the well-known Simpson quadrature $c[0]=1/3$, $c[1]=4/3$, $c[2]=1/3$ is obtained with $n=2$.
An example of using this quadrature is the approximation
$$ (Integrate(x,0,2)f(x))<=> (f(0)+f(2))/3+4/3*f(1) $$.
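
The system of equations above is small and can be solved exactly. For illustration, here is a short Python sketch (not part of Yacas) that computes the coefficients $c[k]$ as exact rational numbers by Gaussian elimination:

	from fractions import Fraction
	
	def newton_cotes(n):
	    """Solve Sum(k,0,n, c[k]*k^p) = n^(p+1)/(p+1) for p = 0..n over the rationals."""
	    A = [[Fraction(k) ** p for k in range(n + 1)] for p in range(n + 1)]
	    rhs = [Fraction(n) ** (p + 1) / (p + 1) for p in range(n + 1)]
	    for i in range(n + 1):              # forward elimination with exact arithmetic
	        piv = next(r for r in range(i, n + 1) if A[r][i] != 0)
	        A[i], A[piv], rhs[i], rhs[piv] = A[piv], A[i], rhs[piv], rhs[i]
	        for r in range(i + 1, n + 1):
	            m = A[r][i] / A[i][i]
	            rhs[r] -= m * rhs[i]
	            for j in range(i, n + 1):
	                A[r][j] -= m * A[i][j]
	    c = [Fraction(0)] * (n + 1)         # back substitution
	    for i in range(n, -1, -1):
	        c[i] = (rhs[i] - sum(A[i][j] * c[j] for j in range(i + 1, n + 1))) / A[i][i]
	    return c
	
	print(newton_cotes(2))                  # Simpson's rule: 1/3, 4/3, 1/3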

*A Newton-Cotes quadratures!for partial intervals

In the same way it is possible to find quadratures for the integral over a subinterval rather than over the whole interval of $x$. In the current implementation of the adaptive plotting algorithm, two quadratures are used: the 3-point quadrature ($n=2$) and the 4-point quadrature ($n=3$) for the integral over the first subinterval, $Integrate(x,a[0],a[1]) f(x)$. Their coefficients are ($5/12$, $2/3$, $-1/12$) and ($3/8$, $19/24$, $-5/24$, $1/24$).
An example of using the first of these subinterval quadratures would be the approximation
$$(Integrate(x,0,1) f(x)) <=> 5/12*f(0)+2/3*f(1)-1/12*f(2)$$.
These quadratures are intentionally chosen to be asymmetric to avoid an accidental cancellation when the function $f(x)$ itself is symmetric.
(Otherwise the error estimate could accidentally become exactly zero.)

		Surface plotting

Here we consider plotting of functions $z=f(x,y)$.

*A plotting!of surfaces
*A plotting!three-dimensional

The task of surface plotting is to obtain a picture of a two-dimensional surface as if it were a solid object in three dimensions.
A graphical representation of a surface is a complicated task.
Sometimes it is required to use particular coordinates or projections, to colorize the surface, to remove hidden lines and so on.
We shall only be concerned with the task of obtaining the data for a plot from a given function of two variables $f(x,y)$.
Specialized programs can take a text file with the data and let the user interactively produce a variety of surface plots.

The currently implemented algorithm in the function {Plot3DS} is very similar to the adaptive plotting algorithm for two-dimensional plots.
A given rectangular plotting region $a[1]<=x<=a[2]$, $b[1]<=y<=b[2]$ is subdivided to produce an equally spaced rectangular grid of points.
This is the initial grid which will be adaptively refined where necessary.
The refinement algorithm will divide a given rectangle in four quarters if the available function values indicate that the function does not change smoothly enough on that rectangle.

The criterion of a "smooth enough" change is very similar to the procedure outlined in the previous section.
The change is "smooth enough" if all points are finite, nonsingular values, and if the integral of the function over the rectangle is sufficiently well approximated by a certain low-order "cubature" formula.

The two-dimensional integral of the function is estimated using the following 5-point Newton-Cotes cubature:

	1/12   0   1/12
	
	 0    2/3   0
	
	1/12   0   1/12

An example of using this cubature would be the approximation
$$ (Integrate(y,0,1)Integrate(x,0,1)f(x,y)) <=> (f(0,0)+f(0,1)+f(1,0)+f(1,1))/12$$
$$+2/3*f(1/2,1/2)$$.

Similarly, an 8-point cubature with zero sum is used to estimate the error:

	-1/3   2/3   1/6
	
	-1/6  -2/3  -1/2
	
	 1/2    0    1/3

This set of coefficients was intentionally chosen to be asymmetric to avoid possible exact cancellations when the function itself is symmetric.
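
A sketch of this smoothness test for a single rectangle of the grid might look as follows (Python, for illustration only; in particular, the orientation of the 8-point coefficient table relative to the grid is an assumption made here):

	def cubature_check(f, x0, x1, y0, y1, eps):
	    """Decide whether f changes smoothly enough on the rectangle [x0,x1] x [y0,y1]."""
	    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
	    v = [[f(x, y) for x in (x0, xm, x1)] for y in (y0, ym, y1)]   # 3x3 grid of values
	    area = (x1 - x0) * (y1 - y0)
	    # 5-point cubature for the integral of f over the rectangle
	    integral = area * ((v[0][0] + v[0][2] + v[2][0] + v[2][2]) / 12 + 2 * v[1][1] / 3)
	    # 8-point zero-sum cubature used as an error indicator
	    w = [[-1/3, 2/3, 1/6], [-1/6, -2/3, -1/2], [1/2, 0, 1/3]]
	    err = area * sum(w[i][j] * v[i][j] for i in range(3) for j in range(3))
	    return abs(err) <= eps * abs(integral)

(As in the one-dimensional case, a real implementation would also check that all nine values are finite before applying this test.)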

One minor problem with adaptive surface plotting is that the resulting set of points may not correspond to a rectangular grid in the parameter space ($x$,$y$).
This is because some rectangles from the initial grid will need to be bisected more times than others.
So, unless adaptive refinement is disabled, the function {Plot3DS} produces a somewhat disordered set of points.
However, most surface plotting programs require that the set of data points be a rectangular grid in the parameter space.
So a smoothing and interpolation procedure is necessary to convert a non-gridded set of data points ("scattered" data) to a gridded set.

		Parametric plots

*A plotting!parametric

Currently, parametric plots are not directly implemented in Yacas.
However, it is possible to use Yacas to obtain numerical data for such plots.
One can then use external programs to produce actual graphics.

A two-dimensional parametric plot is a line in a two-dimensional space, defined by two equations such as $x=f(t)$, $y=g(t)$.
Two functions $f$, $g$ and a range of the independent variable $t$, for example, $t[1]<=t<=t[2]$, need to be specified.

*A plotting!non-Euclidean coordinates

Parametric plots can be used to represent plots of functions in non-Euclidean coordinates.
For example, to plot the function $rho=Cos(4*phi)^2$ in polar coordinates ($rho$,$phi$), one can express the Euclidean coordinates through the polar coordinates,
$ x = rho * Cos(phi) $,
$ y = rho * Sin(phi) $,
and use the equivalent parametric plot with $phi$ as the parameter:
$ x = Cos(4*phi)^2*Cos(phi) $,
$ y = Cos(4*phi)^2*Sin(phi) $.
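
For illustration, the data for this particular polar plot could be tabulated with a few lines of Python (the file name {polar.dat}, the range of $phi$ and the number of grid points are arbitrary choices):

	import math
	
	def polar_plot_data(n=400):
	    """Tabulate x(phi), y(phi) for rho = Cos(4*phi)^2, 0 <= phi <= 2*Pi."""
	    points = []
	    for i in range(n + 1):
	        phi = 2 * math.pi * i / n
	        rho = math.cos(4 * phi) ** 2
	        points.append((rho * math.cos(phi), rho * math.sin(phi)))
	    return points
	
	with open("polar.dat", "w") as out:     # two columns, readable by most plotting programs
	    for x, y in polar_plot_data():
	        out.write("%g %g\n" % (x, y))

The adaptive plotting algorithm described earlier could, of course, be used instead of the uniform grid in $phi$.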

*A plotting!three-dimensional

Sometimes higher-dimensional parametric plots are required.
A line plot in three dimensions is defined by three functions of one variable, for example, $x=f(t)$, $y=g(t)$, $z=h(t)$, and a range of the parameter $t$.
A surface plot in three dimensions is defined by three functions of two variables each, for example, $x=f(u,v)$, $y=g(u,v)$, $z=h(u,v)$, and a rectangular domain in the ($u$,$v$) space.

The data for parametric plots can be generated separately using the same adaptive plotting algorithms as for ordinary function plots,
as if all functions such as $f(t)$ or $g(u,v)$ were unrelated functions.
The result would be several separate data sets for the $x$, $y$, ... coordinates.
These data sets could then be combined using an interactive plotting program.


		The cost of arbitrary-precision computations

A computer algebra system absolutely needs to be able to perform
computations with very large <i>integer</i>  numbers. Without this
capability, many symbolic computations (such as exact GCD of
polynomials or exact solution of polynomial equations) would be
impossible.

*A arbitrary-precision computation

A different question is whether a CAS really needs to be able to
evaluate, say, 10,000 digits of the value of a Bessel function of some
10,000-digit complex argument.
It seems unlikely that any applied problem in the natural sciences would need floating-point computations of special functions with such high precision. However, arbitrary-precision
computations are certainly useful in some mathematical applications;
e.g. some mathematical identities can be first guessed by a floating-point
computation with many digits and then proved.

Very high precision computations of special functions <i>might</i> be useful in the future.
But it is already quite clear that computations with moderately high
precision (say, 50 or 100 decimal digits) are useful for applied problems.
For example, to obtain the leading asymptotic behavior of an analytic function, we could expand it in a series and take the first term.
But we need to check that the coefficient of what we think is the leading term of the series does not vanish.
This coefficient could be a certain "exact" number such as $(Cos(355)+1)^2$.
This number is "exact" in the sense that it is made of integers and elementary functions.
But we cannot say <i>a priori</i> that this number is nonzero.
The problem of "zero determination" (finding out whether a certain "exact" number is zero)
is known to be algorithmically unsolvable if we allow transcendental functions.
The only practical general approach seems to be to compute the number in question with many digits.
Usually a few digits are enough, but occasionally several hundred digits are needed.

Implementing an efficient algorithm that computes 100 digits of
$Sin(3/7)$ already involves many of the issues that would also be
relevant for a 10,000 digit computation. 
Modern algorithms allow evaluations of all elementary functions in time
that is asymptotically logarithmic in the number of digits $P$ and
linear in the cost of long multiplication (usually denoted $M(P)$).
Almost all special functions can be evaluated in time that is asymptotically linear in $P$ and in $M(P)$.
(However, this asymptotic cost sometimes applies only to very high precision, e.g., $P>1000$, and different algorithms need to be implemented for calculations in lower precision.)

In {Yacas} we strive to implement all numerical functions to arbitrary precision.
All integer or rational functions return exact
results, and all floating-point functions return their value with $P$
correct decimal digits (assuming sufficient precision of the arguments).
The current value of $P$ is accessed as
{Builtin'Precision'Get()} and may be changed by {Builtin'Precision'Set(...)}.

*A arbitrary-precision computation!requirements

Implementing an arbitrary-precision floating-point computation of a
function $f(x)$, such as $f(x)=Exp(x)$, typically needs the following:

*	An algorithm that will compute $f(x)$ for a given value $x$ to a
user-specified precision of $P$ (decimal) digits. Often, several
algorithms must be implemented for different subdomains of the
($x$,$P$) space.
*	An estimate of the computational cost of the algorithm(s), as a function of $x$ and $P$. This is needed to select the best algorithm for given $x$, $P$.
*	An estimate of the round-off error.
This is needed to select the "working precision" which will typically be somewhat higher than the precision of the final result.

In calculations with machine precision where the number of digits is fixed, the problem of round-off errors is quite prominent.
Every arithmetic operation causes a small loss of precision;
as a result, the last few digits of the final value are usually incorrect.
But if we have an arbitrary precision capability, we can always increase precision by a few more digits during intermediate computations and thus eliminate all round-off error in the final result.
We should, of course, take care not to increase the working precision unnecessarily, because any increase of precision means slower calculations.
Taking twice as many digits as needed and hoping that the result is precise is not a good solution.

Selecting algorithms for computations is the most non-trivial part of
the implementation.
We want to achieve arbitrarily high precision, so
we need to find either a series, or a continued fraction, or a
sequence given by an explicit formula, that converges to the function in a
controlled way.
It is not enough to use a table of precomputed values
or a fixed approximation formula that has a limited precision.

Over the last 30 years, interest in arbitrary-precision
computations has grown, and many efficient algorithms for elementary and
special functions have been published.
Most algorithms are iterative.
Almost always it is very important to know in advance how many iterations
are needed for given $x$, $P$.
This knowledge allows us to estimate the computational cost, in terms of
the required precision $P$ and the cost of long multiplication
$M(P)$, and to choose the best algorithm.

Typically all operations will fall into one of the following categories
(sorted by increasing cost):

*A arbitrary-precision computation!speed estimates

*	addition, subtraction: linear in $P$;
*	multiplication, division, integer power, integer root: linear in $M(P)$;
*	elementary functions: $Exp(x)$, $Ln(x)$, $Sin(x)$, $ArcTan(x)$ etc.: $M(P)*Ln(P)$ or slower by some powers of $Ln(P)$;
*	transcendental functions: $Erf(x)$, $Gamma(x)$ etc.: typically $P*M(P)$ or slower.

The cost of long multiplication $M(P)$ is between $O(P^2)$ for low precision and $O(P*Ln(P))$ for very high precision.
In some cases, a different algorithm should be chosen if the precision is high enough to allow $M(P)$ faster than $O(P^2)$.

Some algorithms also need storage space
(e.g. an efficient algorithm for summation of the Taylor series uses $O(Ln(P))$ temporary $P$-digit numbers).

Below we shall normally denote by $P$ the required number of decimal digits.
The formulae frequently contain conspicuous factors of $Ln(10)$, so it will be clear how to obtain analogous expressions for another base.
(Most implementations use a binary base rather than a decimal base since it is more convenient for many calculations.)

		Estimating convergence of a series

*A Taylor series!required number of terms

Analyzing convergence of a power series is usually not difficult.
Here is a worked-out example of how we could estimate the required number of terms in the power series
$$ Exp(x)=1+x+x^2/2! +...+x^n/n! + O(x^(n+1))$$
if we need $P$ decimal digits of precision in the result.
To be specific, assume that
$Abs(x)<1$. (A similar calculation can be done for any other bound on $x$.)

Suppose we truncate the series after the $n$-th term and
the series converges "well enough" after that term. Then the error will be
approximately equal to the first term we dropped. (This is what we
really mean by "converges well enough" and this will generally be the case in all applications, because we would not want to use a series that does not converge well enough.)

The term we dropped is $x^(n+1)/(n+1)!$.
To estimate $n!$ for large $n$, one can use the inequality
$$ e^(e-1)*(n/e)^n < n! < (n/e)^(n+1)$$
(valid for all $n>=47$) which provides tight bounds for the growth of
the factorial, or a weaker inequality which is somewhat easier to use,
$$ (n/e)^n < n! < ((n+1)/e)^(n+1) $$
(valid for all $n>=6$). The latter inequality is sufficient for most purposes.

If we use the upper bound on $n!$ from this estimate, we find that the term we dropped is bounded by
$$ x^(n+1)/(n+1)! < (e/(n+2))^(n+2) $$.
We need this number to be smaller than $10^(-P)$. This leads to an inequality
$$ (e/(n+2))^(n+2) < 10^(-P) $$,
which we now need to solve for $n$. The left hand side decreases with growing $n$. So it is clear that the inequality will hold for large enough $n$, say for $n>=n0$ where $n0$ is an unknown (integer) value. We can take a logarithm of both sides, replace $n$ with $n0$ and obtain the following equation for $n0$:
$$ (n0+2)*Ln((n0+2)/e) = P*Ln(10) $$.
This equation cannot be solved exactly in terms of elementary functions; this is a typical situation in such estimates. However, we do not really need a very precise solution for $n0$; all we need is an estimate of its integer part.
This is also a typical situation.
It is acceptable if our approximate value of $n0$ comes out a couple of units higher than necessary, because a couple of extra terms of the Taylor series will not significantly slow down the algorithm (but it is important that we do not underestimate $n0$).
Finally, we are mostly interested in having a good enough answer for
large values of $P$.

We can try to guess the result.
The largest term on the LHS
grows as $n0*Ln(n0)$ and it should be approximately equal to
$P*Ln(10)$; but $Ln(n0)$ grows very slowly, so this gives us a hint that
$n0$ is proportional to $P*Ln(10)$.
As a first try, we set $n0=P*Ln(10)-2$
and compare the RHS with the LHS; we find that we have overshot by a
factor $Ln(P)-1+Ln(Ln(10))$, which is not a large factor. We
can now compensate and divide $n0$ by this factor, so our second try is
$$ n0 = (P*Ln(10))/(Ln(P)-1+Ln(Ln(10)))-2 $$.
(This approximation procedure is equivalent to solving the equation
$$ x = (P*Ln(10))/(Ln(x)-1) $$
by direct iteration, starting from $x=P*Ln(10)$.)
If we substitute our second try for $n0$ into the equation, we shall find that we undershot a little bit (i.e. the LHS is a little smaller than the RHS), but our $n0$ is now smaller than it should be by a quantity that is smaller than 1 for large enough $P$.
So we should stop at this point and simply add 1 to this approximate answer. We should also replace $Ln(Ln(10))-1$ by 0 for simplicity (this is safe because it will slightly increase $n0$.)

Our final result is that it is enough to take
$$ n=(P*Ln(10))/Ln(P)-1 $$
terms in the Taylor series to compute $Exp(x)$ for $Abs(x)<1$ to $P$
decimal digits. (Of course, if $x$ is much smaller than 1, many fewer
terms will suffice.)
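
In an implementation, the required number of terms can also be found by a direct search instead of (or as a check on) the closed-form estimate. A minimal Python sketch, assuming the worst case $Abs(x)=1$ (the function name is an arbitrary choice):

	from math import lgamma, log
	
	def exp_series_terms(P, x_bound=1.0):
	    """Smallest n such that the dropped term x^(n+1)/(n+1)! is below 10^(-P)
	    for Abs(x) <= x_bound.  Note that lgamma(n+2) = Ln((n+1)!)."""
	    n = 0
	    while (n + 1) * log(x_bound) - lgamma(n + 2) >= -P * log(10):
	        n += 1
	    return n
	
	print(exp_series_terms(50))             # number of terms for P = 50 digits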

		Estimating the round-off error

*A arbitrary-precision computation!round-off error estimates

	    Unavoidable round-off errors

As the required precision $P$ grows, an arbitrary-precision algorithm
will need more iterations or more terms of the series. So the round-off
error introduced by every floating-point operation will increase. When
doing arbitrary-precision computations, we can always perform all
calculations with a few more digits and compensate for round-off error.
It is however imperative to know in advance how many more digits we
need to take for our "working precision". We should also take that
increase into account when estimating the total cost of the method.
(In most cases this increase is small.)

Here is a simple estimate of the normal round-off error in a computation of
$n$ terms of a power series.
Suppose that the sum of the series is of order $1$, that the terms monotonically decrease in magnitude, and that adding one term requires two
multiplications and one addition. If all calculations are performed
with absolute precision $epsilon=10^(-P)$, then the total accumulated
round-off error is $3*n*epsilon$. Since the sum is of order $1$, the relative error is also about $3*n*epsilon$, which means that our
answer is something like $a*(1+3*n*epsilon)$ where $a$ is the correct
answer. We can see that out of the total $P$ digits of this answer,
only the first $k$ decimal digits are correct, where
$k= -Ln(3*n*epsilon)/Ln(10)$. In other words, we have lost
$$P-k=Ln(3*n)/Ln(10)$$
digits because of accumulated round-off error. So we found that we need
$Ln(3*n)/Ln(10)$ extra decimal digits to compensate for this
round-off error.

This estimate assumes several things about the series (basically, that the series is "well-behaved").
These assumptions must be verified in each particular case.
For example, if the series begins with some large
terms but converges to a very small value, this estimate is wrong (see the next
subsection).

In the previous exercise we found the number of terms $n$ for $Exp(x)$. So now we know how many extra digits of working precision we need for this particular case.

Below we shall have to perform similar estimates of the required number
of terms and of the accumulated round-off error in our analysis of the
algorithms.

	    Catastrophic round-off error

*A arbitrary-precision computation!catastrophic round-off error

Sometimes the round-off error of a particular method of computation becomes so large that the method becomes highly inefficient.

Consider the computation of $Sin(x)$ by the truncated Taylor series
$$ Sin(x) <=>Sum(k,0,N-1,(-1)^k*(x)^(2*k+1)/(2*k+1)!) $$,
when $x$ is large.
We know that this series converges for all $x$, no matter how large.
Assume that $x=10^M$ with $M>=1$, and that we need $P$ decimal digits of precision in the result.

First, we determine the necessary number of terms $N$.
The magnitude of the sum is never larger than $1$.
Therefore we need the $N$-th term of the series to be smaller than $10^(-P)$.
The inequality is
$ (2*N+1)! > 10^(P+M*(2*N+1)) $.
We obtain that $2*N+2>e*10^M$ is a necessary condition, and if $P$ is large, we find approximately
$$ 2*N+2 <=> ((P-M)*Ln(10)) / (Ln(P-M)-1-M*Ln(10)) $$.

However, taking enough terms does not yet guarantee a good result.
The terms of the series grow at first and then start to decrease.
The sum of these terms is, however, small.
Therefore there is some cancellation and we need to increase the working precision to avoid the round-off.
Let us estimate the required working precision.

We need to find the magnitude of the largest term of the series.
The ratio of the absolute values of two successive terms is $x^2/(2*k*(2*k+1))$, and therefore the terms grow until this ratio becomes equal to $1$, i.e. until $2*k<=>x$.
Therefore the largest term is of order $x^x/x! <=> Exp(x)/Sqrt(2*Pi*x)$, and so we need about $x/Ln(10)$ decimal digits before the decimal point to represent this term.
But we also need to keep at least $P$ digits after the decimal point, or else the round-off error will erase the significant digits of the result.
In addition, we will have unavoidable round-off error due to $O(P)$ arithmetic operations.
So we should increase precision again by $P+Ln(P)/Ln(10)$ digits plus a few guard digits.

As an example, to compute $Sin(10)$ to $P=50$ decimal digits with this method, we need a working precision of about $60$ digits, while to compute $Sin(10000)$ we need to work with about $4400$ digits.
This shows how inefficient the Taylor series for $Sin(x)$ becomes for large arguments $x$.
A simple transformation $x=2*Pi*n+x'$ would have reduced $x$ to at most 7, and the unnecessary computations with thousands of digits would be avoided.
The main cause of this inefficiency is that we have to add and subtract extremely large numbers to get a relatively small result of order $1$.

We find that the method of Taylor series for $Sin(x)$ at large $x$ is highly inefficient because of round-off error and should be complemented by other methods.
This situation seems to be typical for Taylor series.
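
The working-precision estimate above is easy to automate. Here is a minimal Python sketch that locates the largest term of the series numerically (the number of guard digits is an arbitrary choice):

	from math import lgamma, log10, log
	
	def sin_taylor_working_digits(x, P, guard=10):
	    """Rough number of working decimal digits needed to sum the Taylor series
	    of Sin(x) directly: digits of the largest term, plus P, plus guard digits."""
	    def term_digits(k):                 # log10 of the term x^(2k+1)/(2k+1)!
	        return (2 * k + 1) * log10(x) - lgamma(2 * k + 2) / log(10)
	    largest = max(term_digits(k) for k in range(int(x) + 2))   # maximum near 2*k = x
	    return max(int(largest), 0) + P + guard
	
	print(sin_taylor_working_digits(10, 50))      # about 60 digits
	print(sin_taylor_working_digits(10000, 50))   # several thousand digits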



		Basic arbitrary-precision arithmetic 

*A arbitrary-precision computation!Yacas internal math

Yacas uses an internal math library (the {yacasnumbers} library) which comes with the source
code. This reduces the dependencies of the Yacas system and improves portability.
The internal math library is simple and does not necessarily use the fastest algorithms.

If $P$ is
the number of digits of precision, then multiplication and division take
$M(P)=O(P^2)$ operations in the internal math. (Of course, multiplication and division by a short integer takes time linear in $P$.)
Much faster algorithms (Karatsuba, Toom-Cook, FFT multiplication, Newton-Raphson division etc.) are
implemented in {gmp}, {CLN} and some other libraries.
The asymptotic cost of multiplication for very large precision is $M(P)<=>O(P^1.6)$ for the Karatsuba method and $M(P)=O(P*Ln(P)*Ln(Ln(P)))$ for the FFT method.
In the estimates of computation cost in this book we shall assume that $M(P)$ is at
least linear in $P$ and maybe a bit slower.

The costs of multiplication may be different in various arbitrary-precision arithmetic libraries and on different computer platforms.
As a rough guide, one can assume that the straightforward $O(P^2)$ multiplication is good until 100-200 decimal digits,
the asymptotically fastest method of FFT multiplication is good at the precision of about 5,000 or more decimal digits,
and the Karatsuba multiplication is best in the middle range.

Warning: calculations with internal Yacas math using precision exceeding 10,000 digits are currently impractically slow.


*A continued fraction approximation!estimating $Ln(2)$

In some algorithms it is necessary to compute the integer parts of expressions such as $a*Ln(b)/Ln(10)$ or $a*Ln(10)/Ln(2)$, where $a$, $b$ are short integers of order $O(P)$. Such expressions are frequently needed to estimate the number of terms in the Taylor series or similar parameters of the algorithms. In these cases, it is important that the result is not underestimated. However, it would be wasteful to compute $1000*Ln(10)/Ln(2)$ in great precision only to discard most of that information by taking the integer part of that number.
It is more efficient to approximate such constants from above by short rational numbers, for example, $Ln(10)/Ln(2) < 28738/8651$ and $Ln(2) < 7050/10171$. The error of such an approximation will be small enough for practical purposes. The function {BracketRational} can be used to find optimal rational approximations.

The function {IntLog} (see below) efficiently computes the integer part of a logarithm (for an integer base, not a natural logarithm). If more precision is desired in calculating $Ln(a)/Ln(b)$ for integer $a$, $b$, one can compute $IntLog(a^k,b)$ for some integer $k$ and then divide by $k$.
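
For illustration, a rational upper bound of this kind can be obtained from the continued fraction of the constant. The following Python sketch is not the Yacas {BracketRational} routine, and the denominator limit and safety margin are arbitrary choices:

	from fractions import Fraction
	import math
	
	def upper_rational(x, max_den=10**5, margin=1e-12):
	    """Return a fraction p/q > x with denominator at most max_den.
	    The value x is padded by a small relative margin so that the bound
	    also holds for the exact constant that x approximates in double precision."""
	    target = x * (1 + margin)
	    a = target                            # tail of the continued fraction
	    h_prev, k_prev = 1, 0
	    h, k = int(math.floor(a)), 1          # first convergent h/k (it lies below target)
	    best = Fraction(int(math.ceil(target)), 1)
	    while k <= max_den:
	        cand = Fraction(h, k)
	        if cand >= target:                # keep the convergents that lie above the value
	            best = cand
	        frac = a - math.floor(a)
	        if frac == 0:
	            break
	        a = 1 / frac
	        q = int(math.floor(a))            # next partial quotient
	        h, h_prev = q * h + h_prev, h
	        k, k_prev = q * k + k_prev, k
	    return best
	
	print(upper_rational(math.log(10) / math.log(2)))   # a simple fraction slightly above Ln(10)/Ln(2)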

		How many digits of $Sin(Exp(Exp(1000)))$ do we need?

*A arbitrary-precision computation!very large numbers

Arbitrary-precision math is not omnipotent against overflow.
Consider the problem of representing very large numbers such as $x=Exp(Exp(1000))$.
Suppose we need a floating-point representation of the number $x$ with $P$ decimal digits of precision.
In other words, we need to express
$ x <=> M*10^E $,
where the mantissa $1<M<10$ is a floating-point number and the exponent
$E$ is an integer, chosen so that the relative precision is $10^(-P)$.
How much effort is needed to find $M$ and $E$?

The exponent $E$ is easy to obtain:
$$ E = Floor(Ln(x)/Ln(10)) = Floor(Exp(1000)/Ln(10)) <=> 8.55 * 10^433$$.
To compute the integer part $Floor(y)$ of a number $y$ exactly, we need to approximate $y$ with at least $Ln(y)/Ln(10)$ floating-point digits.
In our example, we find that we need 434 decimal digits to represent $E$.

Once we have found $E$, we can write $x=10^(E+m)$ where $m=Exp(1000)/Ln(10)-E$ is a floating-point number, $0<m<1$.
Then $M=10^m$.
To find $M$ with $P$ (decimal) digits, we need $m$ with also at least $P$ digits.
Therefore, we actually need to evaluate $Exp(1000)/Ln(10)$ with $434+P$ decimal digits before we can find $P$ digits of the mantissa of $x$.
We ran into a perhaps surprising situation: one needs a high-precision calculation even to find the first digit of $x$, because it is necessary to find the exponent $E$ exactly as an integer, and $E$ is a rather large integer.
A normal double-precision numerical calculation would give an overflow error at this point.
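
As a cross-check (using the Python {mpmath} library here purely for illustration), the exponent $E$ can be computed directly once the working precision is set slightly above 434 digits:

	from mpmath import mp, exp, log, floor
	
	mp.dps = 450                          # a little more than the 434 digits of E
	E = int(floor(exp(1000) / log(10)))   # E = Floor(Exp(1000)/Ln(10))
	print(len(str(E)))                    # 434 decimal digits, as estimated above
	print(str(E)[:4] + "...")             # leading digits of E (E is about 8.55*10^433)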

Suppose we have found the number $x=Exp(Exp(1000))$ with some precision.
What about finding $Sin(x)$?
Now, this is extremely difficult, because to find even the first digit of $Sin(x)$ we have to evaluate $x$ with <i>absolute</i> error of at most $0.5$.
We know, however, that the number $x$ has approximately $10^434$ digits <i>before</i> the decimal point.
Therefore, we would need to calculate $x$ with at least that many digits.
Computations with $10^434$ digits are clearly far beyond the capability of modern computers.
It seems unlikely that even the sign of $Sin(Exp(Exp(1000)))$ will be obtained in the near future.
*FOOT It seems even less likely that the sign of $Sin(Exp(Exp(1000)))$ would be of any use to anybody even if it could be computed.

Suppose that $N$ is the largest integer that our arbitrary-precision facility can reasonably handle.
(For the Yacas internal math library, $N$ is about $10^10000$.)
Then it follows that numbers $x$ of order $10^N$ can be calculated with at most one (1) digit of floating-point precision,
while larger numbers cannot be calculated with any precision at all.

It seems that very large numbers can be obtained in practice only through exponentiation or powers.
It is unlikely that such numbers will arise from sums or products of reasonably-sized numbers in some formula.
*FOOT A factorial function can produce rapidly growing results, but exact factorials $n!$ for large $n$ are well represented by the Stirling formula which involves powers and exponentials.
For example, suppose a program operates with numbers $x$ of size $N$ or smaller;
a number such as $10^N$ can be obtained only by multiplying $O(N)$ numbers $x$ together.
But since $N$ is the largest representable number, it is certainly not feasible to perform $O(N)$ sequential operations on a computer.
However, it is feasible to obtain the $N$-th power of a small number, since it requires only $O(Ln(N))$ operations.

*A exponentially large numbers

If numbers larger than $10^N$  are created only by exponentiation operations, then special exponential notation could be used to represent them.
For example, a very large number $z$ could be stored and manipulated as an unevaluated exponential $z=Exp(M*10^E)$ where $M>=1$ is a floating-point number with $P$ digits of mantissa and $E$ is an integer, $Ln(N)<E<N$.
Let us call such objects "exponentially large numbers" or "exp-numbers" for short.

In practice, we should decide on a threshold value $N$ and promote a number to an exp-number when its logarithm exceeds $N$.

Note that an exp-number $z$ might be positive or negative, e.g.
$z= -Exp(M*10^E)$.

Arithmetic operations can be applied to the exp-numbers.
However, exp-large arithmetic is of limited use because of an almost certainly critical loss of precision.
The power and logarithm operations can be meaningfully performed on exp-numbers $z$.
For example, if $z=Exp(M*10^E)$ and $p$ is a normal floating-point number, then $z^p=Exp(p*M*10^E)$ and $Ln(z)=M*10^E$.
We can also multiply or divide two exp-numbers.
But it makes no sense to multiply an exp-number $z$ by a normal number because we cannot represent the difference between $z$ and say $2.52*z$.
Similarly, adding $z$ to anything else would result in a total underflow, since we do not actually know a single digit of the decimal representation of $z$.
So if $z1$ and $z2$ are exp-numbers, then $z1+z2$ is simply equal to either $z1$ or $z2$ depending on which of them is larger.
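
To make these rules concrete, here is a toy Python sketch of exp-number arithmetic (purely illustrative; as the footnote at the end of this section notes, Yacas itself does not implement exp-numbers):

	from math import log10, floor
	
	class ExpNumber:
	    """Toy model of a positive exp-number z = Exp(M * 10^E); a sign flag
	    could be carried along in the same way."""
	    def __init__(self, M, E):
	        self.M, self.E = M, E              # mantissa M >= 1 and integer exponent E
	    def ln(self):
	        return self.M * 10.0 ** self.E     # Ln(z) is an ordinary number again
	    def __mul__(self, other):              # multiply exp-numbers by adding logarithms
	        return exp_number_from_log(self.ln() + other.ln())
	    def __pow__(self, p):                  # z^p for an ordinary number p > 0
	        return exp_number_from_log(p * self.ln())
	    def __add__(self, other):              # total underflow: the larger operand wins
	        return self if self.ln() >= other.ln() else other
	
	def exp_number_from_log(L):
	    """Build an exp-number whose logarithm is the ordinary positive number L."""
	    E = int(floor(log10(L)))
	    return ExpNumber(L / 10.0 ** E, E)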

We find that an exp-number $z$ acts as an effective "infinity" compared with normal numbers.
But exp-numbers cannot be used as a device for computing limits: the unavoidable underflow will almost certainly produce wrong results.
For example, trying to verify 
$$ (Limit(x,0) (Exp(x)-1)/x)  = 1 $$
by substituting $x=1/z$ with some exp-number $z$ gives 0 instead of 1.

Taking a logarithm of an exp-number brings it back to the realm of normal, representable numbers.
However, taking an exponential of an exp-number results in a number which is not representable even as an exp-number.
This is because an exp-number $z$ needs to have its exponent $E$ represented exactly as an integer,
but $Exp(z)$ has an exponent of order $O(z)$ which is not a representable number.
The monstrous number $Exp(z)$ could be only written as $Exp(Exp(M*10^E))$, i.e. as a "doubly exponentially large" number, or "2-exp-number" for short.
Thus we obtain a hierarchy of iterated exp-numbers.
Each layer is "unrepresentably larger" than the previous one.

*A arbitrary-precision computation!very small numbers

*A exponentially small numbers

The same considerations apply to very small numbers of the order $10^(-N)$ or smaller.
Such numbers can be manipulated as "exponentially small numbers", i.e. expressions of the form $Exp(-M*10^E)$ with floating-point mantissa $M>=1$ and integer $E$ satisfying $Ln(N)<E<N$.
Exponentially small numbers act as an effective zero compared with normal numbers.

Taking a logarithm of an exp-small number makes it again a normal representable number.
However, taking an exponential of an exp-small number produces 1 because of underflow.
To obtain a "doubly exponentially small" number, we need to take a reciprocal of a doubly exponentially large number, or take the exponent of an exponentially large negative power.
In other words, $Exp(-M*10^E)$ is exp-small, while $Exp(-Exp(M*10^E))$ is 2-exp-small.

The practical significance of exp-numbers is rather limited. 
We cannot obtain even a single significant digit of an exp-number.
A "computation" with exp-numbers is essentially a floating-point computation with logarithms of these exp-numbers.
A practical problem that needs numbers of this magnitude can probably be restated in terms of more manageable logarithms of such numbers.
In practice, exp-numbers could be useful not as a means to get a numerical answer,
but as a warning sign of critical overflow or underflow.
*FOOT Yacas currently does not implement exp-numbers or any other guards against overflow and underflow. If a decimal exponential becomes too large, an incorrect answer may result.