<HTML>
<HEAD>
<TITLE>TSP (libtsp/SP) - SPcorXpc</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFACD">
<H2>SPcorXpc</H2>
<HR>
<H4>Routine</H4>
<DL>
<DT>
double SPcorXpc (const float rxx[], float pc[], int Np)
</DL>
<H4>Purpose</H4>
<DL>
<DT>
Find predictor coefficients from autocorrelation values
</DL>
<H4>Description</H4>
This routine calculates the coefficients for a linear predictor which
minimizes the resulting mean square error given the autocorrelation of the
input signal. Consider a linear predictor with Np coefficients,
<PRE>
         Np
  y(k) = SUM p(i) x(k-i) ,
         i=1
</PRE>
where x(k) is the input signal. The prediction error is
<P>
<PRE>
e(k) = x(k) - y(k) .
</PRE>
<P>
To minimize the mean-square prediction error, solve
<P>
<PRE>
R p = r,
</PRE>
<P>
where R is a symmetric positive definite covariance matrix, p is a vector
of predictor coefficients and r is a vector of correlation values. The
matrix R and the vector r are defined as follows
<P>
<PRE>
  R(i,j) = E[x(k-i) x(k-j)],  for 1 &lt;= i,j &lt;= Np,
  r(i)   = E[x(k) x(k-i)],    for 1 &lt;= i &lt;= Np.
</PRE>
<P>
The resulting mean-square prediction error can be expressed as
<P>
<PRE>
perr = Ex - 2 p'r + p'R p
= Ex - p'r ,
</PRE>
<P>
where Ex is the mean-square value of the input signal,
<P>
<PRE>
Ex = E[x(k)^2].
</PRE>
<P>
For this routine, the matrix R must be symmetric and Toeplitz. Then
<P>
<PRE>
R(i,j) = rxx(|i-j|)
r(i) = rxx(i)
</PRE>
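In practice the rxx values are sample autocorrelation estimates computed from a
frame of the input signal. A minimal sketch of the biased estimator is shown
below; the helper name autocorr_est is hypothetical and is not part of libtsp.
<PRE>
/* Biased autocorrelation estimate (hypothetical helper, not part of
   libtsp):  rxx[i] = (1/N) SUM x(k) x(k-i),  for lags i = 0, ..., Np */
static void
autocorr_est (const float x[], int N, float rxx[], int Np)
{
  int i, k;
  for (i = 0; i &lt;= Np; ++i) {
    double s = 0.0;
    for (k = i; k &lt; N; ++k)
      s += (double) x[k] * x[k-i];
    rxx[i] = (float) (s / N);
  }
}
</PRE>
The biased estimator (dividing by N rather than N-i) keeps the resulting
Toeplitz matrix R positive semidefinite, which the recursion described below
relies on.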
<P>
If the correlation matrix is numerically not positive definite, or if the
prediction error energy becomes negative at some stage in the calculation,
the remaining predictor coefficients are set to zero. This is equivalent to
truncating the autocorrelation coefficient vector at the point at which it
is positive definite.
<P>
This subroutine solves for the predictor coefficients using
Durbin's recursion.
This algorithm requires
<PRE>
Np divides,
Np*Np multiplies, and
Np*Np adds.
</PRE>
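The recursion can be sketched in C as follows. This is an illustrative
stand-in following the conventions on this page (the function name durbin_pc
is hypothetical), not the library source: rxx[] holds Np+1 autocorrelation
values, pc[] receives the Np predictor coefficients, and the return value is
the mean-square prediction error.
<PRE>
/* Sketch of Durbin's recursion (hypothetical stand-in, not the libtsp
   source).  Returns the mean-square prediction error. */
static double
durbin_pc (const float rxx[], float pc[], int Np)
{
  int i, j;
  double E = rxx[0];            /* prediction error energy, order 0 */
  for (i = 0; i &lt; Np; ++i) {
    double acc, k, t;
    if (E &lt;= 0.0) {             /* not positive definite: zero the rest */
      for (j = i; j &lt; Np; ++j)
        pc[j] = 0.0F;
      break;
    }
    /* Reflection coefficient for order i+1 */
    acc = rxx[i+1];
    for (j = 0; j &lt; i; ++j)
      acc -= pc[j] * rxx[i-j];
    k = acc / E;
    /* Symmetric in-place update: pc[j] -= k * pc[i-1-j] */
    for (j = 0; j &lt; i/2; ++j) {
      t = pc[j] - k * pc[i-1-j];
      pc[i-1-j] -= (float) (k * pc[j]);
      pc[j] = (float) t;
    }
    if (i &amp; 1)
      pc[i/2] -= (float) (k * pc[i/2]);
    pc[i] = (float) k;
    E *= (1.0 - k * k);         /* error energy for the new order */
  }
  return E;
}
</PRE>
For example, with rxx = {1, 0.5, 0.25} and Np = 2 this gives pc = {0.5, 0}
and returns 0.75, consistent with perr = Ex - p'r = 1 - 0.5*0.5.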
<P>
Predictor coefficients are usually expressed algebraically as vectors with
1-offset indexing. The correspondence to the 0-offset C-arrays is as
follows.
<PRE>
p(1) &lt;==&gt; pc[0]     predictor coefficient corresponding to lag 1
p(i) &lt;==&gt; pc[i-1]   1 &lt;= i &lt;= Np
</PRE>
<H4>Parameters</H4>
<DL>
<DT>
&lt;- double SPcorXpc
<DD>
Resultant mean-square prediction error
<DT>
-> const float rxx[]
<DD>
Np+1 element vector of autocorrelation values. Element rxx[i] is the
autocorrelation at lag i.
<DT>
&lt;- float pc[]
<DD>
Np element vector of predictor coefficients. Coefficient pc[i] is the
predictor coefficient corresponding to lag i+1.
<DT>
-> int Np
<DD>
Number of predictor coefficients
</DL>
<H4>Author / revision</H4>
P. Kabal Copyright (C) 1997
/ Revision 1.16 1997/10/10
<H4>See Also</H4>
<A HREF="SPcovXpc.html">SPcovXpc</A>,
<A HREF="SPpcXec.html">SPpcXec</A>
<P>
<HR>
Main Index <A HREF="../libtsp.html">libtsp</A>
</BODY>
</HTML>