.. _least_squares:

Numerical methods for least squares problems
------------------------------------------------
| This section presents numerical methods for solving least squares
  problems, which arise when building a *response surface* (i.e. a
  meta-model) or when performing a statistical regression.
| Given a matrix :math:`\matr{\Psi}~\in~\Rset^{N\times P}`, :math:`N>P`,
and a vector :math:`\underline{y}~\in~\Rset^{N}`, we want to find a
vector :math:`\underline{a}~\in \Rset^{P}` such that
:math:`\matr{\Psi}\: \underline{a}` is the best approximation to
:math:`\underline{y}` in the least squares sense. Mathematically
speaking, we want to solve the following minimization problem:
.. math::
\min_{\underline{a}} \, \, \left\| \; \matr{\Psi} \, \underline{a} \, - \, \underline{y} \; \right\|_2
In the following, it is assumed that the rank of matrix
:math:`\matr{\Psi}` is equal to :math:`P`.
| Several algorithms can be applied to compute the least squares
solution, as shown in the sequel.
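
Throughout this section, short Python sketches illustrate each method. They rely
on NumPy and SciPy, which are assumed here purely for illustration (the algorithms
themselves are library-independent), and use a randomly generated
:math:`\matr{\Psi}` and :math:`\underline{y}`. A reference solution of the
minimization problem can be obtained directly, for instance with
``numpy.linalg.lstsq``:

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5
    Psi = rng.normal(size=(N, P))   # design matrix, assumed to have full rank P
    y = rng.normal(size=N)          # observations

    # Reference least squares solution (lstsq relies on an SVD-based LAPACK routine)
    a_ref, residuals, rank, sing_vals = np.linalg.lstsq(Psi, y, rcond=None)
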
**Method of normal equations**
| It can be shown that the solution of the least squares problem satisfies
  the so-called *normal equations*, which read, in matrix notation:
.. math::
\matr{\Psi}^{\mbox{\scriptsize \textsf{T}}} \; \matr{\Psi} \; \underline{a} \, \, = \, \, \matr{\Psi}^{\mbox{\scriptsize \textsf{T}}} \; \underline{y}
| The matrix
:math:`\matr{C} \equiv \matr{\Psi}^{\mbox{\scriptsize \textsf{T}}} \; \matr{\Psi}`
is symmetric and positive definite. The system can be solved using the
following Cholesky factorization:
.. math::
\matr{C} \, \, = \, \, \matr{R}^{\mbox{\scriptsize \textsf{T}}} \; \matr{R}
| where :math:`\matr{R}` is an upper triangular matrix with positive
  diagonal entries. Solving the normal equations then amounts to solving
  the following two triangular systems, the first by forward substitution
  and the second by backward substitution:
.. math::
\begin{aligned}
\matr{R}^{\mbox{\scriptsize \textsf{T}}} \; \underline{z} \, \, = \, \, \matr{\Psi}^{\mbox{\scriptsize \textsf{T}}} \; \underline{y}
\qquad , \qquad \matr{R} \; \underline{a} \, \, = \, \, \underline{z}
\end{aligned}
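
As a minimal sketch of this procedure (SciPy's Cholesky and triangular solvers are
assumed here for illustration):

.. code-block:: python

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    rng = np.random.default_rng(0)
    N, P = 100, 5
    Psi = rng.normal(size=(N, P))
    y = rng.normal(size=N)

    C = Psi.T @ Psi                                # normal matrix, symmetric positive definite
    R = cholesky(C)                                # C = R^T R, with R upper triangular
    z = solve_triangular(R, Psi.T @ y, trans='T')  # forward substitution: R^T z = Psi^T y
    a = solve_triangular(R, z)                     # backward substitution: R a = z
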
| Note however that this approach is seldom used in practice: the
  resulting least squares solution is quite sensitive to small changes in
  the data (i.e. in :math:`\underline{y}` and :math:`\matr{\Psi}`). More
  precisely, the normal equations are always worse conditioned than the
  original overdetermined system, since their condition number is the
  square of that of the original problem:
.. math::
\kappa(\matr{\Psi}^{\mbox{\scriptsize \textsf{T}}} \; \matr{\Psi}) \, \, = \, \, \kappa(\matr{\Psi})^2
As a consequence more robust numerical methods should be adopted.
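
This squaring of the condition number is easy to observe numerically (NumPy is
assumed here purely for illustration):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    Psi = rng.normal(size=(100, 5))
    print(np.linalg.cond(Psi) ** 2)      # condition number of Psi, squared
    print(np.linalg.cond(Psi.T @ Psi))   # condition number of the normal matrix: the same value
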
| **Method based on QR factorization**
| It can be shown that the matrix :math:`\matr{\Psi}` can be factorized as
  follows:
.. math::
\matr{\Psi} \, \, = \, \, \matr{Q} \; \matr{R}
| where :math:`\matr{Q}` is an :math:`N`-by-:math:`P` matrix with
  *orthonormal* columns and :math:`\matr{R}` is a
  :math:`P`-by-:math:`P` upper triangular matrix. Such a *QR
  decomposition* may be constructed using several schemes, such as
  *Gram-Schmidt orthogonalization*, *Householder reflections* or *Givens
  rotations*.
| In this setup the least squares problem is equivalent to solving:
.. math::
\matr{R} \; \underline{a} \, \, = \, \, \matr{Q}^{\mbox{\scriptsize \textsf{T}}} \; \underline{y}
| This upper triangular system can be solved using backward
  substitution.
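
A sketch of this QR-based solution (NumPy/SciPy are assumed here for illustration;
``numpy.linalg.qr`` uses Householder reflections internally):

.. code-block:: python

    import numpy as np
    from scipy.linalg import solve_triangular

    rng = np.random.default_rng(0)
    N, P = 100, 5
    Psi = rng.normal(size=(N, P))
    y = rng.normal(size=N)

    Q, R = np.linalg.qr(Psi)            # thin QR: Q is N x P with orthonormal columns, R is P x P
    a = solve_triangular(R, Q.T @ y)    # backward substitution: R a = Q^T y
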
| The solution scheme based on the Householder QR factorization leads to a
  relative error that is proportional to:
.. math::
\kappa(\matr{\Psi}) + \|\underline{r}\|_2 \kappa(\matr{\Psi})^2
| where
:math:`\underline{r} = \matr{\Psi} \, \underline{a} \, - \, \underline{y}`.
Thus this error is expected to be much smaller than the one associated
with the normal equations provided that the residual
:math:`\underline{r}` is “small”.
| **Method based on singular value decomposition**
| The so-called *singular value decomposition* (SVD) of matrix
:math:`\matr{\Psi}` reads:
.. math::
\matr{\Psi} \quad = \quad \matr{U} \; \matr{S} \; \matr{V}^{\mbox{\scriptsize \textsf{T}}}
| where :math:`\matr{U}~\in \Rset^{N \times N}` and
:math:`\matr{V}~\in \Rset^{P \times P}` are orthogonal matrices, and
:math:`\matr{S}~\in \Rset^{N \times P}` can be cast as:
.. math::
\begin{aligned}
\matr{S} \quad = \quad \left(
\begin{array}{c}
\matr{S}_1 \\
\matr{0} \\
\end{array}
\right)
\end{aligned}
| In the previous equation, :math:`\matr{S}_1~\in \Rset^{P \times P}`
is a diagonal matrix containing the singular values
:math:`\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_P > 0` of
:math:`\matr{\Psi}`.
| It can be shown that the least squares solution is equal to:
.. math::
\begin{aligned}
\widehat{\underline{a}} \quad = \quad \matr{V} \; \left( \begin{array}{cc}
\matr{S}_1^{-1} & \matr{0} \\
\end{array}\right)
\; \matr{U}^{\mbox{\scriptsize \textsf{T}}} \; \underline{y}
\end{aligned}
| In practice it is seldom necessary to compute the “full” SVD shown
  above. It is often sufficient, and more economical in time and memory,
  to compute a *reduced* SVD, which reads:
.. math::
\matr{\Psi} \quad = \quad \matr{U}_1 \; \matr{S}_1 \; \matr{V}^{\mbox{\scriptsize \textsf{T}}}
| where :math:`\matr{U}_1` is obtained by extracting the :math:`P`
first columns of :math:`\matr{U}`.
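
A sketch of the SVD-based solution using the reduced decomposition (NumPy is
assumed here for illustration):

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5
    Psi = rng.normal(size=(N, P))
    y = rng.normal(size=N)

    U1, s, Vt = np.linalg.svd(Psi, full_matrices=False)  # reduced SVD: U1 is N x P, s holds the P singular values
    a = Vt.T @ ((U1.T @ y) / s)                           # a = V S1^{-1} U1^T y
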
| Note that the SVD approach also applies to a rank-deficient matrix
  :math:`\matr{\Psi}`. In this case the resulting vector
  :math:`\widehat{\underline{a}}` is the *minimum norm* least squares
  solution.
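
For instance, ``numpy.linalg.lstsq`` (used here purely for illustration) relies on
an SVD and returns this minimum norm solution when :math:`\matr{\Psi}` is
rank-deficient:

.. code-block:: python

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5
    Psi = rng.normal(size=(N, P))
    Psi[:, -1] = Psi[:, 0] + Psi[:, 1]   # make Psi rank-deficient (rank 4)
    y = rng.normal(size=N)

    a, _, rank, sing_vals = np.linalg.lstsq(Psi, y, rcond=None)
    print(rank)                          # 4: the rank detected from the singular values
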
| The computational cost of the method is proportional to
  :math:`(NP^2 + P^3)`, with a prefactor ranging from 4 to 10 depending on
  the numerical scheme used to compute the SVD. This cost is higher than
  that of the normal equations or of the QR factorization. However, the
  SVD remains attractive because it provides valuable additional
  information, namely the singular values of the matrix
  :math:`\matr{\Psi}`.
**Comparison of the methods**
Several conclusions may be drawn concerning the various methods
considered so far:
- If :math:`N \approx P`, the normal equations and the Householder QR
  factorization require about the same amount of computational work. If
  :math:`N \gg P`, the QR approach requires about twice as much work as
  the normal equations.
- However, QR is more accurate than the normal equations, so it should
  almost always be preferred in practice (a small numerical illustration
  is given below).
- SVD is also robust, but it is the most computationally expensive
  scheme. Nonetheless the singular values are obtained as a by-product,
  which may be particularly useful for analytical and computational
  purposes.
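
The accuracy gap between the normal equations and the QR approach can be observed
on an ill-conditioned design matrix. The sketch below uses NumPy/SciPy purely for
illustration; the Vandermonde matrix is simply an arbitrary example of an
ill-conditioned :math:`\matr{\Psi}`:

.. code-block:: python

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    # An ill-conditioned design matrix: a Vandermonde matrix on [0, 1]
    x = np.linspace(0.0, 1.0, 50)
    Psi = np.vander(x, 8, increasing=True)
    a_true = np.ones(8)
    y = Psi @ a_true

    # Normal equations (Cholesky factorization of Psi^T Psi)
    R = cholesky(Psi.T @ Psi)
    a_ne = solve_triangular(R, solve_triangular(R, Psi.T @ y, trans='T'))

    # Householder QR factorization
    Q, Rq = np.linalg.qr(Psi)
    a_qr = solve_triangular(Rq, Q.T @ y)

    # The QR error is typically several orders of magnitude smaller
    print(np.linalg.norm(a_ne - a_true), np.linalg.norm(a_qr - a_true))
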
.. topic:: API:
- See the available :ref:`least squares methods <least_squares_methods>`.
- See :class:`~openturns.PenalizedLeastSquaresAlgorithm`
.. topic:: References:
- A. Bjorck, 1996, "Numerical methods for least squares problems", SIAM Press, Philadelphia, PA.