\function{ad_ktest}
\synopsis{k-sample Anderson-Darling test}
\usage{p = ad_ktest ({X1, X2, ...} [,&statistic] [;qualifiers])}
\description
The \sfun{ad_ktest} function performs a k-sample Anderson-Darling
test, which may be used to test the hypothesis that two or more
statistical samples come from the same underlying parent population.
The function returns the p-value representing the probability that
the samples are consistent with a common parent distribution. If
the last parameter is a reference, then the variable that it
references will be set to the value of the statistic upon return.
The paper that this test is based upon presents two statistical
tests: one for continuous data where ties are improbable, and one
for data where ties can occur. This function returns the p-value
and statistic for the latter case. A qualifier may be used to
obtain the p-value and statistic for the continuous case.
\qualifiers
\qualifier{pval2=&var}{Set the variable \exmp{var} to the p-value for the continuous case.}
\qualifier{stat2=&var}{Set the variable \exmp{var} to the statistic for the continuous case.}
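\example
Suppose that \exmp{X1}, \exmp{X2}, and \exmp{X3} are arrays holding
samples from three measurements of the same quantity.  Then the
hypothesis that they share a common parent distribution may be tested
using
#v+
   pval = ad_ktest ({X1, X2, X3}, &stat);
#v-
The p-value and statistic for the continuous (no ties) form of the
test may be obtained through the qualifiers, e.g.,
#v+
   pval = ad_ktest ({X1, X2, X3} ;pval2=&p2, stat2=&s2);
#v-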
\notes
The k-sample test was implemented from the equations found in
Scholz F.W. and Stephens M.A., "K-Sample Anderson-Darling Tests",
Journal of the American Statistical Association, Vol 82, 399 (1987).
\seealso{ks_test2, ad_test}
\done
\function{ad_test}
\synopsis{Anderson-Darling test for normality}
\usage{pval = ad_test (X [,&statistic] [;qualifiers])}
\description
The \sfun{ad_test} function may be used to test the hypothesis that
random samples \exmp{X} come from a normal distribution. It returns
the p-value representing the probability of obtaining such a dataset
under the assumption that the data represent random samples of the
underlying distribution. If the optional second parameter is
present, then it must be a reference to a variable that will be set
to the value of the statistic upon return.
\qualifiers
\qualifier{mean=value}{Specifies the known mean of the normal distribution.}
\qualifier{stddev=value}{Specifies the known standard deviation of the normal distribution}
\qualifier{cdf}{If present, the data will be interpreted as CDFs of a known, but unspecified, distribution.}
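\example
Suppose that \exmp{X} is an array of values obtained from repeated
measurements.  Then the hypothesis that the values were drawn from a
normal distribution may be tested, and the statistic retrieved, using
#v+
   pval = ad_test (X, &stat);
#v-
As described in the notes below, the test may also be applied to a
known, not necessarily normal, distribution by passing CDF values
together with the \exmp{cdf} qualifier.  For instance, assuming a
unit-rate exponential parent distribution, whose CDF is
\exmp{1-exp(-X)},
#v+
   pval = ad_test (1.0 - exp(-X) ;cdf);
#v-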
\notes
For testing the hypothesis that a dataset is sampled from a known,
not necessarily normal, distribution, convert the random samples
into CDFs and pass those as the value of X to the \sfun{ad_test}
function. Also use the \exmp{cdf} qualifier to let the function
know that the values are CDFs and not random samples. When this is
done, the values of the CDFs will range from 0 to 1, and the p-value
returned by the function will be computed using an algorithm by
Marsaglia and Marsaglia: Evaluating the Anderson-Darling
Distribution, Journal of Statistical Software, Vol. 9, Issue 2, Feb
2004.
\seealso{ad_ktest, ks_test, t_test, z_test, normal_cdf}
\done
\function{cumulant}
\synopsis{Compute the first n cumulants of an array}
\usage{[k1,...,kn] = cumulant (X, n)}
\description
This function returns unbiased estimates of the first \exmp{n}
cumulants from an array \exmp{X} of samples of a probability
distribution. The cumulants are returned as an array of size
\exmp{n}.
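\example
Assuming \exmp{X} is an array of samples, the first two k-statistics
correspond to the familiar sample mean and the unbiased (N-1) sample
variance:
#v+
   k = cumulant (X, 2);
   % k[0] is the sample mean, k[1] the unbiased sample variance
#v-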
\notes
The implementation currently restricts the value of n to 1, 2, 3, or 4.
The estimator of the nth cumulant is also known as the nth k-statistic.
\seealso{mean, stddev, kurtosis, skewness}
\done
\function{median}
\synopsis{Compute the median of an array of values}
\usage{m = median (a [,i])}
\description
This function computes the median of an array of values.  The median
is defined to be the value such that half of the array values are
less than or equal to it and the other half are greater than or equal
to it.  If the array has an even number of values, then the median
value will be the smallest value that is greater than or equal to
half the values in the array.
If called with a second argument, then the optional argument
specifies the dimension of the array over which the median is to be
taken. In this case, an array of one less dimension than the input
array will be returned.
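\example
Following the same convention as the \sfun{mean} function, the
optional dimension argument may be used with multi-dimensional
arrays.  Assuming \exmp{a} is a two-dimensional MxN array,
#v+
   m1 = median (a, 1);
#v-
will assign to \exmp{m1} an array of M elements whose jth element is
the median of \exmp{a[j,*]}.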
\notes
This function makes a copy of the input array and then partially
sorts the copy. For large arrays, it may be undesirable to allocate
a separate copy. If memory use is to be minimized, the
\ifun{median_nc} function should be used.
\seealso{median_nc, mean}
\done
\function{median_nc}
\synopsis{Compute the median of an array}
\usage{m = median_nc (a [,i])}
\description
This function computes the median of an array. Unlike the
\ifun{median} function, it does not make a temporary copy of the
array and, as such, is more memory efficient at the expense of
increased run-time. See the \ifun{median} function for more
information.
\seealso{median, mean}
\done
\function{mean}
\synopsis{Compute the mean of the values in an array}
\usage{m = mean (a [,i])}
\description
This function computes the arithmetic mean of the values in an
array. The optional parameter \exmp{i} may be used to specify the
dimension over which the mean is to be taken.  The default is to
compute the mean of all the elements.
\example
Suppose that \exmp{a} is a two-dimensional MxN array. Then
#v+
m = mean (a);
#v-
will assign the mean of all the elements of \exmp{a} to \exmp{m}.
In contrast,
#v+
m0 = mean(a,0);
m1 = mean(a,1);
#v-
will assign an N element array to \exmp{m0}, and an array of
M elements to \exmp{m1}. Here, the jth element of \exmp{m0} is
given by \exmp{mean(a[*,j])}, and the jth element of \exmp{m1} is
given by \exmp{mean(a[j,*])}.
\seealso{stddev, median, kurtosis, skewness}
\done
\function{stddev}
\synopsis{Compute the standard deviation of an array of values}
\usage{s = stddev (a [,i])}
\description
This function computes the standard deviation of the values in the
specified array. The optional parameter \exmp{i} may be used to
specify the dimension over which the standard deviation is to be
taken. The default is to compute the standard deviation of all the
elements.
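\example
A minimal sketch illustrating the N-1 normalization described in the
notes below, assuming \exmp{a} is a one-dimensional array with at
least two elements:
#v+
   s = stddev (a);
   v = sum ((a - mean(a))^2) / (length(a) - 1.0);
   % s^2 and v should agree to within rounding error
#v-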
\notes
This function returns the unbiased N-1 form of the sample standard
deviation.
\seealso{mean, median, kurtosis, skewness}
\done
\function{skewness}
\synopsis{Compute the skewness of an array of values}
\usage{s = skewness (a)}
\description
This function computes the skewness (g1) of the array \exmp{a}.
\seealso{mean, stddev, kurtosis}
\done
\function{kurtosis}
\synopsis{Compute the kurtosis of an array of values}
\usage{s = kurtosis (a)}
\description
This function computes the kurtosis (g2) of the array \exmp{a}.
\notes
This function is defined such that the kurtosis of the normal
distribution is 0, and is also known as the ``excess-kurtosis''.
\seealso{mean, stddev, skewness}
\done
\function{binomial}
\synopsis{Compute binomial coefficients}
\usage{c = binomial (n [,m])}
\description
This function computes the binomial coefficients (n m) where (n m)
is given by n!/(m!(n-m)!). If \exmp{m} is not provided, then an
array of coefficients for m=0 to n will be returned.
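\example
For example, the coefficients in the expansion of \exmp{(1+x)^4} are
given by
#v+
   c = binomial (4);      % ==> [1, 4, 6, 4, 1]
   c2 = binomial (4, 2);  % ==> 6
#v-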
\done
\function{chisqr_cdf}
\synopsis{Compute the Chisqr CDF}
\usage{cdf = chisqr_cdf (Int_Type n, Double_Type d)}
\description
This function returns the probability that a random number
distributed according to the chi-squared distribution for \exmp{n}
degrees of freedom will be less than the non-negative value \exmp{d}.
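\example
For example, the probability that a chi-squared distributed random
variable with 3 degrees of freedom exceeds an observed value of 7.8
is given by the complement of the CDF:
#v+
   p = 1.0 - chisqr_cdf (3, 7.8);
#v-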
\notes
The importance of this distribution arises from the fact that if
\exmp{n} independent random variables \exmp{X_1,...X_n} are
distributed according to a gaussian distribution with a mean of 0 and
a variance of 1, then the sum
#v+
X_1^2 + X_2^2 + ... + X_n^2
#v-
follows the chi-squared distribution with \exmp{n} degrees of freedom.
\seealso{chisqr_test, poisson_cdf}
\done
\function{poisson_cdf}
\synopsis{Compute the Poisson CDF}
\usage{cdf = poisson_cdf (Double_Type lambda, Int_Type k)}
\description
This function computes the CDF for the Poisson probability
distribution parameterized by the value \exmp{lambda}. For values of
\exmp{lambda>100} and \exmp{abs(lambda-k)<sqrt(lambda)}, the Wilson and Hilferty
asymptotic approximation is used.
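\example
For example, the probability of observing 5 or fewer counts when the
expected mean number of counts is 8.2 may be computed using
#v+
   p = poisson_cdf (8.2, 5);
#v-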
\seealso{chisqr_cdf}
\done
\function{smirnov_cdf}
\synopsis{Compute the Kolmogorov CDF using Smirnov's asymptotic form}
\usage{cdf = smirnov_cdf (x)}
\description
This function computes the CDF for the Kolmogorov distribution using
Smirnov's asymptotic form. In particular, the implementation is based
upon equation 1.4 from W. Feller, "On the Kolmogorov-Smirnov limit
theorems for empirical distributions", Annals of Math. Stat, Vol 19
(1948), pp. 177-190.
\seealso{ks_test, ks_test2, normal_cdf}
\done
\function{normal_cdf}
\synopsis{Compute the CDF for the Normal distribution}
\usage{cdf = normal_cdf (x [,mean, stddev])}
\description
This function computes the CDF (integrated probability) for the
normal distribution.  When called with a single argument, a standard
normal distribution (mean 0, standard deviation 1) is assumed;
otherwise the mean and standard deviation may be passed as the second
and third arguments, as illustrated in the \sfun{ks_test} example.
\seealso{smirnov_cdf, mann_whitney_cdf, poisson_cdf}
\done
\function{mann_whitney_cdf}
\synopsis{Compute the Mann-Whitney CDF}
\usage{cdf = mann_whitney_cdf (Int_Type m, Int_Type n, Int_Type s)}
\description
This function computes the exact CDF P(X<=s) for the Mann-Whitney
distribution. It is used by the \exmp{mw_test} function to compute
p-values for small values of \exmp{m} and \exmp{n}.
\seealso{mw_test, ks_test, normal_cdf}
\done
\function{kim_jennrich_cdf}
\synopsis{Compute the 2-sample KS CDF using the Kim-Jennrich Algorithm}
\usage{p = kim_jennrich_cdf (UInt_Type m, UInt_Type n, UInt_Type c)}
\description
This function returns the exact two-sample Kolmogorov-Smirnov
probability that \exmp{D_mn <= c/(mn)}, where \exmp{D_mn} is
the two-sample Kolmogorov-Smirnov statistic computed from samples of
sizes \exmp{m} and \exmp{n}.
The algorithm used is that of Kim and Jennrich. The run-time scales
as m*n.  As such, it is recommended that the asymptotic form given by
the \ifun{smirnov_cdf} function be used for large values of m*n.
\notes
For more information about the Kim-Jennrich algorithm, see:
Kim, P.J., and R.I. Jennrich (1973), Tables of the exact sampling
distribution of the two sample Kolmogorov-Smirnov criterion Dmn(m<n),
in Selected Tables in Mathematical Statistics, Volume 1, (edited
by H. L. Harter and D.B. Owen), American Mathematical Society,
Providence, Rhode Island.
\seealso{smirnov_cdf, ks_test2}
\done
\function{f_cdf}
\synopsis{Compute the CDF for the F distribution}
\usage{cdf = f_cdf (t, nu1, nu2)}
\description
This function computes the CDF for the F distribution with \exmp{nu1}
and \exmp{nu2} degrees of freedom and returns its value.
\seealso{f_test2}
\done
#d opt-3-parm#1 If the optional parameter is passed to the function, then \
\__newline__ it must be a reference to a variable that, upon return, will be \
\__newline__ set to the value of the $1.
\function{ks_test}
\synopsis{One sample Kolmogorov test}
\usage{p = ks_test (CDF [,&D])}
\description
This function applies the Kolmogorov test to the data represented by
\exmp{CDF} and returns the p-value representing the probability that
the data values are ``consistent'' with the underlying distribution
function. \opt-3-parm{Kolmogorov statistic}
The \exmp{CDF} array that is passed to this function must be computed
from the assumed probability distribution function. For example, if
the data are constrained to lie between 0 and 1, and the null
hypothesis is that they follow a uniform distribution, then the CDF
will be equal to the data.  If the data are assumed to be normally
(Gaussian) distributed, then the \ifun{normal_cdf} function can be
used to compute the CDF.
\example
Suppose that X is an array of values obtained from repeated
measurements of some quantity.  The values are assumed to follow
a normal distribution with a mean of 20 and a standard deviation of
3.  The \sfun{ks_test} function may be used to test this hypothesis
using:
#v+
pval = ks_test (normal_cdf(X, 20, 3));
#v-
\seealso{ks_test2, ad_test, kuiper_test, t_test, z_test}
\done
\function{ks_test2}
\synopsis{Two-Sample Kolmogorov-Smirnov test}
\usage{prob = ks_test2 (X, Y [,&d])}
\description
This function applies the 2-sample Kolmogorov-Smirnov test to two
datasets \exmp{X} and \exmp{Y} and returns the p-value for the null
hypothesis that they share the same underlying distribution.
\opt-3-parm{statistic}
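\example
For example, to compare two samples \exmp{X} and \exmp{Y} and also
retrieve the statistic:
#v+
   pval = ks_test2 (X, Y, &d);
#v-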
\notes
If \exmp{length(X)*length(Y)<=10000}, the \ifun{kim_jennrich_cdf}
function will be used to compute the exact probability. Otherwise an
asymptotic form will be used.
\seealso{ks_test, ad_ktest, kuiper_test, kim_jennrich_cdf}
\done
\function{kuiper_test}
\synopsis{Perform a 1-sample Kuiper test}
\usage{pval = kuiper_test (CDF [,&D])}
\description
This function applies the Kuiper test to the data represented by
\exmp{CDF} and returns the p-value representing the probability that
the data values are ``consistent'' with the underlying distribution
function. \opt-3-parm{Kuiper statistic}
The \exmp{CDF} array that is passed to this function must be computed
from the assumed probability distribution function. For example, if
the data are constrained to lie between 0 and 1, and the null
hypothesis is that they follow a uniform distribution, then the CDF
will be equal to the data.  If the data are assumed to be normally
(Gaussian) distributed, then the \ifun{normal_cdf} function can be
used to compute the CDF.
\example
Suppose that X is an array of values obtained from repeated
measurements of some quantity.  The values are assumed to follow
a normal distribution with a mean of 20 and a standard deviation of
3.  The \sfun{kuiper_test} function may be used to test this
hypothesis using:
#v+
pval = kuiper_test (normal_cdf(X, 20, 3));
#v-
\seealso{kuiper_test2, ks_test, t_test}
\done
\function{kuiper_test2}
\synopsis{Perform a 2-sample Kuiper test}
\usage{pval = kuiper_test2 (X, Y [,&D])}
\description
This function applies the 2-sample Kuiper test to two
datasets \exmp{X} and \exmp{Y} and returns the p-value for the null
hypothesis that they share the same underlying distribution.
\opt-3-parm{Kuiper statistic}
\notes
The p-value is computed from an asymptotic formula suggested by
Stephens, M.A., Journal of the American Statistical Association, Vol
69, No 347, 1974, pp 730-737.
\seealso{ks_test2, kuiper_test}
\done
\function{chisqr_test}
\synopsis{Apply the Chi-square test to two or more datasets}
\usage{prob = chisqr_test (X_1[], X_2[], ..., X_N [,&t])}
\description
This function applies the Chi-square test to the N datasets
\exmp{X_1}, \exmp{X_2}, ..., \exmp{X_N}, and returns the probability
that the datasets were all drawn from the same underlying
distribution. Each of the arrays \exmp{X_k} must be the same length.
If the last parameter is a reference to a variable, then upon return
the variable will be set to the value of the statistic.
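\example
For example, assuming \exmp{c1} and \exmp{c2} are equal-length arrays
of binned counts from two experiments, the hypothesis that the counts
were drawn from the same underlying distribution may be tested using
#v+
   pval = chisqr_test (c1, c2, &t);
#v-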
\seealso{chisqr_cdf, ks_test2, mw_test}
\done
\function{mw_test}
\synopsis{Apply the Two-sample Wilcoxon-Mann-Whitney test}
\usage{p = mw_test(X, Y [,&w])}
\description
This function performs a Wilcoxon-Mann-Whitney test and returns the
p-value for the null hypothesis that there is no difference between
the distributions represented by the datasets \exmp{X} and \exmp{Y}.
If a third argument is given, it must be a reference to a variable
whose value upon return will be set to the rank-sum of \exmp{X}.
\qualifiers
The function makes use of the following qualifiers:
#v+
side=">" : H0: P(X<Y) >= 1/2 (right-tail)
side="<" : H0: P(X<Y) <= 1/2 (left-tail)
#v-
The default null hypothesis is that \exmp{P(X<Y)=1/2}.
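\example
A minimal sketch of a left-tailed test that also retrieves the
rank-sum of \exmp{X}, assuming \exmp{X} and \exmp{Y} hold the two
samples:
#v+
   pval = mw_test (X, Y, &w ;side="<");
#v-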
\notes
There are a number of definitions of this test. While the exact
definition of the statistic varies, the p-values are the same.
If \exmp{length(X)<50}, \exmp{length(Y)<50}, and ties are not
present, then the exact p-value is computed using the
\ifun{mann_whitney_cdf} function. Otherwise a normal distribution is
used.
This test is often referred to as the non-parametric generalization
of the Student t-test.
\seealso{mann_whitney_cdf, ks_test2, chisqr_test, t_test}
\done
\function{student_t_cdf}
\synopsis{Compute the Student-t CDF}
\usage{cdf = student_t_cdf (t, n)}
\description
This function computes the CDF for the Student-t distribution for n
degrees of freedom.
\seealso{t_test, normal_cdf}
\done
\function{f_test2}
\synopsis{Apply the Two-sample F test}
\usage{p = f_test2 (X, Y [,&F])}
\description
This function computes the two-sample F statistic and its p-value
for the data in the \exmp{X} and \exmp{Y} arrays. This test is used
to compare the variances of two normally-distributed data sets, with
the null hypothesis that the variances are equal. The return value
is the p-value, which is computed using the module's \ifun{f_cdf}
function.
\qualifiers
The function makes use of the following qualifiers:
#v+
side=">" : H0: Var[X] >= Var[Y] (right-tail)
side="<" : H0: Var[X] <= Var[Y] (left-tail)
#v-
\seealso{f_cdf, ks_test2, chisqr_test}
\done
\function{t_test}
\synopsis{Perform a Student t-test}
\usage{pval = t_test (X, mu [,&t])}
\description
The one-sample t-test may be used to test that the population mean has a
specified value under the null hypothesis. Here, \exmp{X} represents a
random sample drawn from the population and \exmp{mu} is the
specified mean of the population.  This function computes Student's
t-statistic and returns the p-value for the null hypothesis that the
data \exmp{X} were randomly sampled from a population with the
specified mean.
\opt-3-parm{statistic}
\qualifiers
The following qualifiers may be used to specify a 1-sided test:
#v+
side="<" Perform a left-tailed test
side=">" Perform a right-tailed test
#v-
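\example
For example, to test whether the values in \exmp{X} are consistent
with a population mean of 20 (two-sided by default), and also
retrieve the statistic:
#v+
   pval = t_test (X, 20, &t);
#v-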
\notes
While the parent population need not be normal, the test assumes
that random samples drawn from this distribution have means that
are normally distributed.
Strictly speaking, this test should only be used if the variance of
the data is equal to that of the assumed parent distribution.  Use
the Mann-Whitney-Wilcoxon (\exmp{mw_test}) if the underlying
distribution is non-normal.
\seealso{mw_test, t_test2}
\done
\function{t_test2}
\synopsis{Perform a 2-sample Student t-test}
\usage{pval = t_test2 (X, Y [,&t])}
\description
This function compares two data sets \exmp{X} and \exmp{Y} using the
Student t-statistic.  It is assumed that the parent populations
are normally distributed with equal variance, but with possibly
different means. The test is one that looks for differences in the
means.
\notes
The \exmp{welch_t_test2} function may be used if it is not known that
the parent populations have the same variance.
\seealso{t_test, welch_t_test2, mw_test}
\done
\function{welch_t_test2}
\synopsis{Perform Welch's t-test}
\usage{pval = welch_t_test2 (X, Y [,&t])}
\description
This function applies Welch's t-test to the 2 datasets \exmp{X} and
\exmp{Y} and returns the p-value that the underlying populations have
the same mean. The parent populations are assumed to be normally
distributed, but need not have the same variance.
\opt-3-parm{statistic}
\qualifiers
The following qualifiers may be used to specify a 1-sided test:
#v+
side="<" Perform a left-tailed test
side=">" Perform a right-tailed test
#v-
\seealso{t_test2}
\done
\function{z_test}
\synopsis{Perform a Z test}
\usage{pval = z_test (X, mu, sigma [,&z])}
\description
This function applies a Z test to the data \exmp{X} and returns the
p-value that the data are consistent with a normally-distributed
parent population with a mean of \exmp{mu} and a standard-deviation
of \exmp{sigma}. \opt-3-parm{Z statistic}
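\example
For example, if repeated measurements \exmp{X} are expected to
scatter about a true value of 100 with a known standard deviation of
15, then
#v+
   pval = z_test (X, 100, 15, &z);
#v-
returns the p-value for that hypothesis and sets \exmp{z} to the Z
statistic.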
\seealso{t_test, mw_test}
\done
\function{kendall_tau}
\synopsis{Kendall's tau Correlation Test}
\usage{pval = kendall_tau (x, y [,&tau])}
\description
This function computes Kendall's tau statistic for the paired data
values (x,y), which may or may not have ties. It returns the
double-sided p-value associated with the statistic.
\notes
The implementation is based upon Knight's O(nlogn) algorithm
described in "A computer method for calculating Kendall’s tau with
ungrouped data", Journal of the American Statistical Association, 61,
436-439.
In the case of no ties, the exact p-value is computed when length(x)
is less than 30 using algorithm 71 of Applied Statistics (1974) by
Best and Gipps.  If ties are present, the p-value is computed
based upon the normal distribution and a continuity correction.
\qualifiers
The following qualifiers may be used to specify a 1-sided test:
#v+
side="<" Perform a left-tailed test
side=">" Perform a right-tailed test
#v-
\seealso{spearman_r, pearson_r, mann_kendall}
\done
\function{mann_kendall}
\synopsis{Mann-Kendall trend test}
\usage{pval = mann_kendall (y [,&tau])}
\description
The Mann-Kendall test is a non-parametric test that may be used to
identify a trend in a set of serial data values. It is closely
related to Kendall's tau correlation test.
The \ifun{mann_kendall} function returns the double-sided p-value
that may be used as a basis for rejecting the null hypothesis
that there is no trend in the data.
\qualifiers
The following qualifiers may be used to specify a 1-sided test:
#v+
side="<" Perform a left-tailed test
side=">" Perform a right-tailed test
#v-
\seealso{kendall_tau, spearman_r, pearson_r}
\done
\function{pearson_r}
\synopsis{Compute Pearson's Correlation Coefficient}
\usage{pval = pearson_r (X, Y [,&r])}
\description
This function computes Pearson's r correlation coefficient of the two
datasets \exmp{X} and \exmp{Y}.  It returns the p-value that
\exmp{X} and \exmp{Y} are mutually independent.
\opt-3-parm{correlation coefficient}
\qualifiers
The following qualifiers may be used to specify a 1-sided test:
#v+
side="<" Perform a left-tailed test
side=">" Perform a right-tailed test
#v-
\seealso{kendall_tau, spearman_r}
\done
\function{spearman_r}
\synopsis{Spearman's Rank Correlation test}
\usage{pval = spearman_r(x, y [,&r])}
\description
This function computes the Spearman rank correlation coefficient (r)
and returns the p-value that \exmp{x} and \exmp{y} are mutually
independent.
\opt-3-parm{correlation coefficient}
\qualifiers
The following qualifiers may be used to specify a 1-sided test:
#v+
side="<" Perform a left-tailed test
side=">" Perform a right-tailed test
#v-
\seealso{kendall_tau, pearson_r}
\done
\function{correlation}
\synopsis{Compute the sample correlation between two datasets}
\usage{c = correlation (x, y)}
\description
This function computes Pearson's sample correlation coefficient
between 2 arrays. It is assumed that the standard deviation of each
array is finite and non-zero. The returned value falls in the
range -1 to 1, with -1 indicating that the data are anti-correlated,
and +1 indicating that the data are completely correlated.
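\example
A minimal sketch, assuming \exmp{x} and \exmp{y} are equal-length
arrays of paired measurements:
#v+
   c = correlation (x, y);
   vmessage ("correlation coefficient = %g", c);
#v-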
\seealso{covariance, stddev}
\done