@cindex fitting
@cindex least squares fit
@cindex regression, least squares
@cindex weighted linear fits
@cindex unweighted linear fits
This chapter describes routines for performing least squares fits to
experimental data using linear combinations of functions. The data
may be weighted or unweighted, i.e. with known or unknown errors. For
weighted data the functions compute the best fit parameters and their
associated covariance matrix. For unweighted data the covariance
matrix is estimated from the scatter of the points, giving a
variance-covariance matrix.
The functions are divided into separate versions for simple one- or
two-parameter regression and multiple-parameter fits.
@menu
* Fitting Overview::
* Linear regression::
* Multi-parameter regression::
* Regularized regression::
* Robust linear regression::
* Large Dense Linear Systems::
* Troubleshooting::
* Fitting Examples::
* Fitting References and Further Reading::
@end menu
@node Fitting Overview
@section Overview
Least-squares fits are found by minimizing @math{\chi^2}
(chi-squared), the weighted sum of squared residuals over @math{n}
experimental datapoints @math{(x_i, y_i)} for the model @math{Y(c,x)},
@tex
\beforedisplay
$$
\chi^2 = \sum_i w_i (y_i - Y(c, x_i))^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = \sum_i w_i (y_i - Y(c, x_i))^2
@end example
@end ifinfo
@noindent
The @math{p} parameters of the model are @c{$c = \{c_0, c_1, \dots\}$}
@math{c = @{c_0, c_1, @dots{}@}}. The
weight factors @math{w_i} are given by @math{w_i = 1/\sigma_i^2},
where @math{\sigma_i} is the experimental error on the data-point
@math{y_i}. The errors are assumed to be
Gaussian and uncorrelated.
For unweighted data the chi-squared sum is computed without any weight factors.
The fitting routines return the best-fit parameters @math{c} and their
@math{p \times p} covariance matrix. The covariance matrix measures the
statistical errors on the best-fit parameters resulting from the
errors on the data, @math{\sigma_i}, and is defined
@cindex covariance matrix, linear fits
as @c{$C_{ab} = \langle \delta c_a \delta c_b \rangle$}
@math{C_@{ab@} = <\delta c_a \delta c_b>} where @c{$\langle \, \rangle$}
@math{< >} denotes an average over the Gaussian error distributions of the underlying datapoints.
The covariance matrix is calculated by error propagation from the data
errors @math{\sigma_i}. The change in a fitted parameter @math{\delta
c_a} caused by a small change in the data @math{\delta y_i} is given
by
@tex
\beforedisplay
$$
\delta c_a = \sum_i {\partial c_a \over \partial y_i} \delta y_i
$$
\afterdisplay
@end tex
@ifinfo
@example
\delta c_a = \sum_i (dc_a/dy_i) \delta y_i
@end example
@end ifinfo
@noindent
allowing the covariance matrix to be written in terms of the errors on the data,
@tex
\beforedisplay
$$
C_{ab} = \sum_{i,j} {\partial c_a \over \partial y_i}
{\partial c_b \over \partial y_j}
\langle \delta y_i \delta y_j \rangle
$$
\afterdisplay
@end tex
@ifinfo
@example
C_@{ab@} = \sum_@{i,j@} (dc_a/dy_i) (dc_b/dy_j) <\delta y_i \delta y_j>
@end example
@end ifinfo
@noindent
For uncorrelated data the fluctuations of the underlying datapoints satisfy
@c{$\langle \delta y_i \delta y_j \rangle = \sigma_i^2 \delta_{ij}$}
@math{<\delta y_i \delta y_j> = \sigma_i^2 \delta_@{ij@}}, giving a
corresponding parameter covariance matrix of
@tex
\beforedisplay
$$
C_{ab} = \sum_{i} {1 \over w_i} {\partial c_a \over \partial y_i} {\partial c_b \over \partial y_i}
$$
\afterdisplay
@end tex
@ifinfo
@example
C_@{ab@} = \sum_i (1/w_i) (dc_a/dy_i) (dc_b/dy_i)
@end example
@end ifinfo
@noindent
When computing the covariance matrix for unweighted data, i.e. data with unknown errors,
the weight factors @math{w_i} in this sum are replaced by the single estimate @math{w =
1/\sigma^2}, where @math{\sigma^2} is the computed variance of the
residuals about the best-fit model, @math{\sigma^2 = \sum (y_i - Y(c,x_i))^2 / (n-p)}.
This is referred to as the @dfn{variance-covariance matrix}.
@cindex variance-covariance matrix, linear fits
The standard deviations of the best-fit parameters are given by the
square root of the corresponding diagonal elements of
the covariance matrix, @c{$\sigma_{c_a} = \sqrt{C_{aa}}$}
@math{\sigma_@{c_a@} = \sqrt@{C_@{aa@}@}}.
The correlation coefficient of the fit parameters @math{c_a} and @math{c_b}
is given by @c{$\rho_{ab} = C_{ab} / \sqrt{C_{aa} C_{bb}}$}
@math{\rho_@{ab@} = C_@{ab@} / \sqrt@{C_@{aa@} C_@{bb@}@}}.
@node Linear regression
@section Linear regression
@cindex linear regression
The functions in this section are used to fit simple one or two
parameter linear regression models. The functions are declared in
the header file @file{gsl_fit.h}.
@menu
* Linear regression with a constant term::
* Linear regression without a constant term::
@end menu
@node Linear regression with a constant term
@subsection Linear regression with a constant term
The functions described in this section can be used to perform
least-squares fits to a straight line model, @math{Y(c,x) = c_0 + c_1 x}.
@cindex covariance matrix, from linear regression
@deftypefun int gsl_fit_linear (const double * @var{x}, const size_t @var{xstride}, const double * @var{y}, const size_t @var{ystride}, size_t @var{n}, double * @var{c0}, double * @var{c1}, double * @var{cov00}, double * @var{cov01}, double * @var{cov11}, double * @var{sumsq})
This function computes the best-fit linear regression coefficients
(@var{c0},@var{c1}) of the model @math{Y = c_0 + c_1 X} for the dataset
(@var{x}, @var{y}), two vectors of length @var{n} with strides
@var{xstride} and @var{ystride}. The errors on @var{y} are assumed unknown so
the variance-covariance matrix for the
parameters (@var{c0}, @var{c1}) is estimated from the scatter of the
points around the best-fit line and returned via the parameters
(@var{cov00}, @var{cov01}, @var{cov11}).
The sum of squares of the residuals from the best-fit line is returned
in @var{sumsq}. Note: the correlation coefficient of the data can be computed using @code{gsl_stats_correlation} (@pxref{Correlation}); it does not depend on the fit.
@end deftypefun
@deftypefun int gsl_fit_wlinear (const double * @var{x}, const size_t @var{xstride}, const double * @var{w}, const size_t @var{wstride}, const double * @var{y}, const size_t @var{ystride}, size_t @var{n}, double * @var{c0}, double * @var{c1}, double * @var{cov00}, double * @var{cov01}, double * @var{cov11}, double * @var{chisq})
This function computes the best-fit linear regression coefficients
(@var{c0},@var{c1}) of the model @math{Y = c_0 + c_1 X} for the weighted
dataset (@var{x}, @var{y}), two vectors of length @var{n} with strides
@var{xstride} and @var{ystride}. The vector @var{w}, of length @var{n}
and stride @var{wstride}, specifies the weight of each datapoint. The
weight is the reciprocal of the variance for each datapoint in @var{y}.
The covariance matrix for the parameters (@var{c0}, @var{c1}) is
computed using the weights and returned via the parameters
(@var{cov00}, @var{cov01}, @var{cov11}). The weighted sum of squares
of the residuals from the best-fit line, @math{\chi^2}, is returned in
@var{chisq}.
@end deftypefun
@deftypefun int gsl_fit_linear_est (double @var{x}, double @var{c0}, double @var{c1}, double @var{cov00}, double @var{cov01}, double @var{cov11}, double * @var{y}, double * @var{y_err})
This function uses the best-fit linear regression coefficients
@var{c0}, @var{c1} and their covariance
@var{cov00}, @var{cov01}, @var{cov11} to compute the fitted function
@var{y} and its standard deviation @var{y_err} for the model @math{Y =
c_0 + c_1 X} at the point @var{x}.
@end deftypefun
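For illustration, the following fragment sketches a straight-line fit to five
data points followed by an evaluation of the fitted line at a new point.  The
data values and variable names are invented for the example; the fit is
unweighted, so the parameter errors are estimated from the scatter of the
points.

@example
#include <stdio.h>
#include <gsl/gsl_fit.h>

int
main (void)
@{
  /* hypothetical data, stored with unit stride */
  double x[5] = @{ 1970.0, 1980.0, 1990.0, 2000.0, 2010.0 @};
  double y[5] = @{ 12.0, 11.0, 14.0, 13.0, 18.0 @};
  double c0, c1, cov00, cov01, cov11, sumsq;
  double yf, yf_err;

  gsl_fit_linear (x, 1, y, 1, 5,
                  &c0, &c1, &cov00, &cov01, &cov11, &sumsq);

  /* fitted value and its standard deviation at x = 2020 */
  gsl_fit_linear_est (2020.0, c0, c1, cov00, cov01, cov11,
                      &yf, &yf_err);

  printf ("best fit: Y = %g + %g X\n", c0, c1);
  printf ("Y(2020) = %g +/- %g\n", yf, yf_err);

  return 0;
@}
@end example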
@node Linear regression without a constant term
@subsection Linear regression without a constant term
The functions described in this section can be used to perform
least-squares fits to a straight line model without a constant term,
@math{Y = c_1 X}.
@deftypefun int gsl_fit_mul (const double * @var{x}, const size_t @var{xstride}, const double * @var{y}, const size_t @var{ystride}, size_t @var{n}, double * @var{c1}, double * @var{cov11}, double * @var{sumsq})
This function computes the best-fit linear regression coefficient
@var{c1} of the model @math{Y = c_1 X} for the datasets (@var{x},
@var{y}), two vectors of length @var{n} with strides @var{xstride} and
@var{ystride}. The errors on @var{y} are assumed unknown so the
variance of the parameter @var{c1} is estimated from
the scatter of the points around the best-fit line and returned via the
parameter @var{cov11}. The sum of squares of the residuals from the
best-fit line is returned in @var{sumsq}.
@end deftypefun
@deftypefun int gsl_fit_wmul (const double * @var{x}, const size_t @var{xstride}, const double * @var{w}, const size_t @var{wstride}, const double * @var{y}, const size_t @var{ystride}, size_t @var{n}, double * @var{c1}, double * @var{cov11}, double * @var{sumsq})
This function computes the best-fit linear regression coefficient
@var{c1} of the model @math{Y = c_1 X} for the weighted datasets
(@var{x}, @var{y}), two vectors of length @var{n} with strides
@var{xstride} and @var{ystride}. The vector @var{w}, of length @var{n}
and stride @var{wstride}, specifies the weight of each datapoint. The
weight is the reciprocal of the variance for each datapoint in @var{y}.
The variance of the parameter @var{c1} is computed using the weights
and returned via the parameter @var{cov11}. The weighted sum of
squares of the residuals from the best-fit line, @math{\chi^2}, is
returned in @var{sumsq}.
@end deftypefun
@deftypefun int gsl_fit_mul_est (double @var{x}, double @var{c1}, double @var{cov11}, double * @var{y}, double * @var{y_err})
This function uses the best-fit linear regression coefficient @var{c1}
and its covariance @var{cov11} to compute the fitted function
@var{y} and its standard deviation @var{y_err} for the model @math{Y =
c_1 X} at the point @var{x}.
@end deftypefun
@node Multi-parameter regression
@section Multi-parameter regression
@cindex multi-parameter regression
@cindex fits, multi-parameter linear
This section describes routines which perform least squares fits
to a linear model by minimizing the cost function
@tex
\beforedisplay
$$
\chi^2 = \sum_i w_i (y_i - \sum_j X_{ij} c_j)^2 = || y - Xc ||_W^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = \sum_i w_i (y_i - \sum_j X_ij c_j)^2 = || y - Xc ||_W^2
@end example
@end ifinfo
where @math{y} is a vector of @math{n} observations, @math{X} is an
@math{n}-by-@math{p} matrix of predictor variables, @math{c}
is a vector of the @math{p} unknown best-fit parameters to be estimated,
and @math{||r||_W^2 = r^T W r}.
The matrix @math{W = } diag@math{(w_1,w_2,...,w_n)}
defines the weights or uncertainties of the observation vector.
This formulation can be used for fits to any number of functions and/or
variables by preparing the @math{n}-by-@math{p} matrix @math{X}
appropriately. For example, to fit to a @math{p}-th order polynomial in
@var{x}, use the following matrix,
@tex
\beforedisplay
$$
X_{ij} = x_i^j
$$
\afterdisplay
@end tex
@ifinfo
@example
X_@{ij@} = x_i^j
@end example
@end ifinfo
@noindent
where the index @math{i} runs over the observations and the index
@math{j} runs from 0 to @math{p-1}.
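For example, the design matrix for a cubic polynomial fit (@math{p = 4}) could
be filled as in the following sketch, where @code{x[]} is assumed to hold the
@math{n} values of the predictor variable (the names are illustrative only),

@example
gsl_matrix *X = gsl_matrix_alloc (n, 4);
size_t i, j;

for (i = 0; i < n; i++)
  @{
    for (j = 0; j < 4; j++)
      @{
        /* X_ij = x_i^j */
        gsl_matrix_set (X, i, j, pow (x[i], (double) j));
      @}
  @}
@end example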
To fit to a set of @math{p} sinusoidal functions with fixed frequencies
@math{\omega_1}, @math{\omega_2}, @dots{}, @math{\omega_p}, use,
@tex
\beforedisplay
$$
X_{ij} = \sin(\omega_j x_i)
$$
\afterdisplay
@end tex
@ifinfo
@example
X_@{ij@} = sin(\omega_j x_i)
@end example
@end ifinfo
@noindent
To fit to @math{p} independent variables @math{x_1}, @math{x_2}, @dots{},
@math{x_p}, use,
@tex
\beforedisplay
$$
X_{ij} = x_j(i)
$$
\afterdisplay
@end tex
@ifinfo
@example
X_@{ij@} = x_j(i)
@end example
@end ifinfo
@noindent
where @math{x_j(i)} is the @math{i}-th value of the predictor variable
@math{x_j}.
The solution of the general linear least-squares system requires an
additional working space for intermediate results, such as the singular
value decomposition of the matrix @math{X}.
These functions are declared in the header file @file{gsl_multifit.h}.
@deftypefun {gsl_multifit_linear_workspace *} gsl_multifit_linear_alloc (const size_t @var{n}, const size_t @var{p})
@tindex gsl_multifit_linear_workspace
This function allocates a workspace for fitting a model to a maximum of @var{n}
observations using a maximum of @var{p} parameters. The user may later supply
a smaller least squares system if desired. The size of the workspace is
@math{O(np + p^2)}.
@end deftypefun
@deftypefun void gsl_multifit_linear_free (gsl_multifit_linear_workspace * @var{work})
This function frees the memory associated with the workspace @var{work}.
@end deftypefun
@deftypefun int gsl_multifit_linear_svd (const gsl_matrix * @var{X}, gsl_multifit_linear_workspace * @var{work})
This function performs a singular value decomposition of the
matrix @var{X} and stores the SVD factors internally in @var{work}.
@end deftypefun
@deftypefun int gsl_multifit_linear_bsvd (const gsl_matrix * @var{X}, gsl_multifit_linear_workspace * @var{work})
This function performs a singular value decomposition of the
matrix @var{X} and stores the SVD factors internally in @var{work}.
The matrix @var{X} is first balanced by applying column scaling
factors to improve the accuracy of the singular values.
@end deftypefun
@deftypefun int gsl_multifit_linear (const gsl_matrix * @var{X}, const gsl_vector * @var{y}, gsl_vector * @var{c}, gsl_matrix * @var{cov}, double * @var{chisq}, gsl_multifit_linear_workspace * @var{work})
This function computes the best-fit parameters @var{c} of the model
@math{y = X c} for the observations @var{y} and the matrix of
predictor variables @var{X}, using the preallocated workspace provided
in @var{work}. The @math{p}-by-@math{p} variance-covariance matrix of the model parameters
@var{cov} is set to @math{\sigma^2 (X^T X)^{-1}}, where @math{\sigma} is
the standard deviation of the fit residuals.
The sum of squares of the residuals from the best-fit,
@math{\chi^2}, is returned in @var{chisq}. If the coefficient of
determination is desired, it can be computed from the expression
@math{R^2 = 1 - \chi^2 / TSS}, where the total sum of squares (TSS) of
the observations @var{y} may be computed from @code{gsl_stats_tss}.
The best-fit is found by singular value decomposition of the matrix
@var{X} using the modified Golub-Reinsch SVD algorithm, with column
scaling to improve the accuracy of the singular values. Any components
which have zero singular value (to machine precision) are discarded
from the fit.
@end deftypefun
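Once the matrix @math{X} and vector @math{y} have been prepared, an
unweighted fit might be carried out as in the following sketch (error
checking omitted),

@example
gsl_multifit_linear_workspace *work
  = gsl_multifit_linear_alloc (n, p);
gsl_vector *c = gsl_vector_alloc (p);
gsl_matrix *cov = gsl_matrix_alloc (p, p);
double chisq;

gsl_multifit_linear (X, y, c, cov, &chisq, work);

gsl_multifit_linear_free (work);
@end example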
@deftypefun int gsl_multifit_linear_tsvd (const gsl_matrix * @var{X}, const gsl_vector * @var{y}, const double @var{tol}, gsl_vector * @var{c}, gsl_matrix * @var{cov}, double * @var{chisq}, size_t * @var{rank}, gsl_multifit_linear_workspace * @var{work})
This function computes the best-fit parameters @var{c} of the model
@math{y = X c} for the observations @var{y} and the matrix of
predictor variables @var{X}, using a truncated SVD expansion.
Singular values which satisfy @math{s_i \le tol \times s_0}
are discarded from the fit, where @math{s_0} is the largest singular value.
The @math{p}-by-@math{p} variance-covariance matrix of the model parameters
@var{cov} is set to @math{\sigma^2 (X^T X)^{-1}}, where @math{\sigma} is
the standard deviation of the fit residuals.
The sum of squares of the residuals from the best-fit,
@math{\chi^2}, is returned in @var{chisq}. The effective rank
(number of singular values used in solution) is returned in @var{rank}.
If the coefficient of
determination is desired, it can be computed from the expression
@math{R^2 = 1 - \chi^2 / TSS}, where the total sum of squares (TSS) of
the observations @var{y} may be computed from @code{gsl_stats_tss}.
@end deftypefun
@deftypefun int gsl_multifit_wlinear (const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, gsl_vector * @var{c}, gsl_matrix * @var{cov}, double * @var{chisq}, gsl_multifit_linear_workspace * @var{work})
This function computes the best-fit parameters @var{c} of the weighted
model @math{y = X c} for the observations @var{y} with weights @var{w}
and the matrix of predictor variables @var{X}, using the preallocated
workspace provided in @var{work}. The @math{p}-by-@math{p} covariance matrix of the model
parameters @var{cov} is computed as @math{(X^T W X)^{-1}}. The weighted
sum of squares of the residuals from the best-fit, @math{\chi^2}, is
returned in @var{chisq}. If the coefficient of determination is
desired, it can be computed from the expression @math{R^2 = 1 - \chi^2
/ WTSS}, where the weighted total sum of squares (WTSS) of the
observations @var{y} may be computed from @code{gsl_stats_wtss}.
@end deftypefun
@deftypefun int gsl_multifit_wlinear_tsvd (const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, const double @var{tol}, gsl_vector * @var{c}, gsl_matrix * @var{cov}, double * @var{chisq}, size_t * @var{rank}, gsl_multifit_linear_workspace * @var{work})
This function computes the best-fit parameters @var{c} of the weighted
model @math{y = X c} for the observations @var{y} with weights @var{w}
and the matrix of predictor variables @var{X}, using a truncated SVD expansion.
Singular values which satisfy @math{s_i \le tol \times s_0}
are discarded from the fit, where @math{s_0} is the largest singular value.
The @math{p}-by-@math{p} covariance matrix of the model
parameters @var{cov} is computed as @math{(X^T W X)^{-1}}. The weighted
sum of squares of the residuals from the best-fit, @math{\chi^2}, is
returned in @var{chisq}. The effective rank of the system (number of
singular values used in the solution) is returned in @var{rank}.
If the coefficient of determination is
desired, it can be computed from the expression @math{R^2 = 1 - \chi^2
/ WTSS}, where the weighted total sum of squares (WTSS) of the
observations @var{y} may be computed from @code{gsl_stats_wtss}.
@end deftypefun
@deftypefun int gsl_multifit_linear_est (const gsl_vector * @var{x}, const gsl_vector * @var{c}, const gsl_matrix * @var{cov}, double * @var{y}, double * @var{y_err})
This function uses the best-fit multilinear regression coefficients
@var{c} and their covariance matrix
@var{cov} to compute the fitted function value
@var{y} and its standard deviation @var{y_err} for the model @math{y = x.c}
at the point @var{x}.
@end deftypefun
@deftypefun int gsl_multifit_linear_residuals (const gsl_matrix * @var{X}, const gsl_vector * @var{y}, const gsl_vector * @var{c}, gsl_vector * @var{r})
This function computes the vector of residuals @math{r = y - X c} for
the observations @var{y}, coefficients @var{c} and matrix of predictor
variables @var{X}.
@end deftypefun
@deftypefun size_t gsl_multifit_linear_rank (const double @var{tol}, const gsl_multifit_linear_workspace * @var{work})
This function returns the rank of the matrix @math{X} which must first have its
singular value decomposition computed. The rank is computed by counting the number
of singular values @math{\sigma_j} which satisfy @math{\sigma_j > tol \times \sigma_0},
where @math{\sigma_0} is the largest singular value.
@end deftypefun
@node Regularized regression
@section Regularized regression
@cindex ridge regression
@cindex Tikhonov regression
@cindex regression, ridge
@cindex regression, Tikhonov
@cindex least squares, regularized
Ordinary weighted least squares models seek a solution vector @math{c}
which minimizes the residual
@tex
\beforedisplay
$$
\chi^2 = || y - Xc ||_W^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = || y - Xc ||_W^2
@end example
@end ifinfo
where @math{y} is the @math{n}-by-@math{1} observation vector,
@math{X} is the @math{n}-by-@math{p} design matrix, @math{c} is
the @math{p}-by-@math{1} solution vector,
@math{W =} diag@math{(w_1,...,w_n)} is the data weighting matrix,
and @math{||r||_W^2 = r^T W r}.
In cases where the least squares matrix @math{X} is ill-conditioned,
small perturbations (ie: noise) in the observation vector could lead to
widely different solution vectors @math{c}.
One way of dealing with ill-conditioned matrices is to use a ``truncated SVD''
in which small singular values, below some given tolerance, are discarded
from the solution. The truncated SVD method is available using the functions
@code{gsl_multifit_linear_tsvd} and @code{gsl_multifit_wlinear_tsvd}. Another way
to help solve ill-posed problems is to include a regularization term in the least squares
minimization
@tex
\beforedisplay
$$
\chi^2 = || y - Xc ||_W^2 + \lambda^2 || L c ||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = || y - Xc ||_W^2 + \lambda^2 || L c ||^2
@end example
@end ifinfo
for a suitably chosen regularization parameter @math{\lambda} and
matrix @math{L}. This type of regularization is known as Tikhonov, or ridge,
regression. In some applications, @math{L} is chosen as the identity matrix, giving
preference to solution vectors @math{c} with smaller norms.
Including this regularization term leads to the explicit ``normal equations'' solution
@tex
\beforedisplay
$$
c = \left( X^T W X + \lambda^2 L^T L \right)^{-1} X^T W y
$$
\afterdisplay
@end tex
@ifinfo
@example
c = ( X^T W X + \lambda^2 L^T L )^-1 X^T W y
@end example
@end ifinfo
which reduces to the ordinary least squares solution when @math{L = 0}.
In practice, it is often advantageous to transform a regularized least
squares system into the form
@tex
\beforedisplay
$$
\chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = || y~ - X~ c~ ||^2 + \lambda^2 || c~ ||^2
@end example
@end ifinfo
This is known as the Tikhonov ``standard form'' and has the normal equations solution
@math{\tilde{c} = \left( \tilde{X}^T \tilde{X} + \lambda^2 I \right)^{-1} \tilde{X}^T \tilde{y}}.
For an @math{m}-by-@math{p} matrix @math{L} which is full rank and has @math{m \ge p} (ie: @math{L} is
square or has more rows than columns), we can calculate the ``thin'' QR decomposition of @math{L}, and
note that @math{||L c||} = @math{||R c||} since the @math{Q} factor will not change the norm. Since
@math{R} is @math{p}-by-@math{p}, we can then use the transformation
@tex
\beforedisplay
$$
\eqalign{
\tilde{X} &= W^{1 \over 2} X R^{-1} \cr
\tilde{y} &= W^{1 \over 2} y \cr
\tilde{c} &= R c
}
$$
\afterdisplay
@end tex
@ifinfo
@example
X~ = sqrt(W) X R^-1
y~ = sqrt(W) y
c~ = R c
@end example
@end ifinfo
to achieve the standard form. For a rectangular matrix @math{L} with @math{m < p},
a more sophisticated approach is needed (see Hansen 1998, chapter 2.3).
In practice, the normal equations solution above is not desirable due to
numerical instabilities, and so the system is solved using the
singular value decomposition of the matrix @math{\tilde{X}}.
The matrix @math{L} is often chosen as the identity matrix, or as a first
or second finite difference operator, to ensure a smoothly varying
coefficient vector @math{c}, or as a diagonal matrix to selectively damp
each model parameter differently. If @math{L \ne I}, the user must first
convert the least squares problem to standard form using
@code{gsl_multifit_linear_stdform1} or @code{gsl_multifit_linear_stdform2},
solve the system, and then backtransform the solution vector to recover
the solution of the original problem (see
@code{gsl_multifit_linear_genform1} and @code{gsl_multifit_linear_genform2}).
In many regularization problems, care must be taken when choosing
the regularization parameter @math{\lambda}. Since both the
residual norm @math{||y - X c||} and solution norm @math{||L c||}
are being minimized, the parameter @math{\lambda} represents
a tradeoff between minimizing either the residuals or the
solution vector. A common tool for visualizing the compromise between
the minimization of these two quantities is known as the L-curve.
The L-curve is a log-log plot of the residual norm @math{||y - X c||}
on the horizontal axis and the solution norm @math{||L c||} on the
vertical axis. This curve nearly always has an @math{L}-shaped
appearance, with a distinct corner separating the horizontal
and vertical sections of the curve. The regularization parameter
corresponding to this corner is often chosen as the optimal
value. GSL provides routines to calculate the L-curve for all
relevant regularization parameters, as well as to locate its corner.
Another method of choosing the regularization parameter is known
as Generalized Cross Validation (GCV). This method is based on
the idea that if an arbitrary element @math{y_i} is left out of the
right hand side, the resulting regularized solution should predict this element
accurately. This leads to choosing the parameter @math{\lambda}
which minimizes the GCV function
@tex
\beforedisplay
$$
G(\lambda) = {||y - X c_{\lambda}||^2 \over \textrm{Tr}(I_n - X X_{\lambda}^I)^2}
$$
\afterdisplay
@end tex
@ifinfo
@example
G(\lambda) = (||y - X c_@{\lambda@}||^2) / Tr(I_n - X X_@{\lambda@}^I)^2
@end example
@end ifinfo
where @math{X_{\lambda}^I} is the matrix which relates the solution @math{c_{\lambda}}
to the right hand side @math{y}, ie: @math{c_{\lambda} = X_{\lambda}^I y}. GSL
provides routines to compute the GCV curve and its minimum.
@noindent
For most applications, the steps required to solve a regularized least
squares problem are as follows:
@enumerate
@item Construct the least squares system (@math{X}, @math{y}, @math{W}, @math{L})
@item Transform the system to standard form (@math{\tilde{X}},@math{\tilde{y}}). This
step can be skipped if @math{L = I_p} and @math{W = I_n}.
@item Calculate the SVD of @math{\tilde{X}}.
@item Determine an appropriate regularization parameter @math{\lambda} (using for example
L-curve or GCV analysis).
@item Solve the standard form system using the chosen @math{\lambda} and the SVD of @math{\tilde{X}}.
@item Backtransform the standard form solution @math{\tilde{c}} to recover the
original solution vector @math{c}.
@end enumerate
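For the common case of a diagonal regularization matrix @math{L}, these steps
might translate into calls such as the following sketch, in which the vectors
and matrices (@code{Xs}, @code{ys}, @code{cs}, @code{reg_param}, @code{rho},
@code{eta}, @dots{}) are assumed to have been allocated with appropriate sizes
and the number of L-curve points is chosen by the user,

@example
/* step 1: X, y and the diagonal of L are assumed prepared */

/* step 2: transform to standard form */
gsl_multifit_linear_stdform1 (L, X, y, Xs, ys, work);

/* step 3: SVD of the standard form matrix */
gsl_multifit_linear_svd (Xs, work);

/* step 4: choose lambda from the corner of the L-curve */
gsl_multifit_linear_lcurve (ys, reg_param, rho, eta, work);
gsl_multifit_linear_lcorner (rho, eta, &idx);
lambda = gsl_vector_get (reg_param, idx);

/* step 5: solve the standard form system */
gsl_multifit_linear_solve (lambda, Xs, ys, cs,
                           &rnorm, &snorm, work);

/* step 6: recover the solution of the original problem */
gsl_multifit_linear_genform1 (L, cs, c, work);
@end example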
@deftypefun int gsl_multifit_linear_stdform1 (const gsl_vector * @var{L}, const gsl_matrix * @var{X}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_multifit_linear_workspace * @var{work})
@deftypefunx int gsl_multifit_linear_wstdform1 (const gsl_vector * @var{L}, const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_multifit_linear_workspace * @var{work})
These functions define a regularization matrix
@math{L =} diag@math{(l_0,l_1,...,l_{p-1})}.
The diagonal matrix element @math{l_i} is provided by the
@math{i}th element of the input vector @var{L}.
The @math{n}-by-@math{p} least squares matrix @var{X} and
vector @var{y} of length @math{n} are then
converted to standard form as described above and the parameters
(@math{\tilde{X}},@math{\tilde{y}}) are stored in @var{Xs} and @var{ys}
on output. @var{Xs} and @var{ys} have the same dimensions as
@var{X} and @var{y}. Optional data weights may be supplied in the
vector @var{w} of length @math{n}. In order to apply this transformation,
@math{L^{-1}} must exist and so none of the @math{l_i}
may be zero. After the standard form system has been solved,
use @code{gsl_multifit_linear_genform1} to recover the original solution vector.
It is allowed to have @var{X} = @var{Xs} and @var{y} = @var{ys} for an in-place transform.
In order to perform a weighted regularized fit with @math{L = I}, the user may
call @code{gsl_multifit_linear_applyW} to convert to standard form.
@end deftypefun
@deftypefun int gsl_multifit_linear_L_decomp (gsl_matrix * @var{L}, gsl_vector * @var{tau})
This function factors the @math{m}-by-@math{p} regularization matrix
@var{L} into a form needed for the later transformation to standard form. @var{L}
may have any number of rows @math{m}. If @math{m \ge p} the QR decomposition of
@var{L} is computed and stored in @var{L} on output. If @math{m < p}, the QR decomposition
of @math{L^T} is computed and stored in @var{L} on output. On output,
the Householder scalars are stored in the vector @var{tau} of size @math{MIN(m,p)}.
These outputs will be used by @code{gsl_multifit_linear_wstdform2} to complete the
transformation to standard form.
@end deftypefun
@deftypefun int gsl_multifit_linear_stdform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_matrix * @var{X}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_matrix * @var{M}, gsl_multifit_linear_workspace * @var{work})
@deftypefunx int gsl_multifit_linear_wstdform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_matrix * @var{M}, gsl_multifit_linear_workspace * @var{work})
These functions convert the least squares system (@var{X},@var{y},@var{W},@math{L}) to standard
form (@math{\tilde{X}},@math{\tilde{y}}) which are stored in @var{Xs} and @var{ys}
respectively. The @math{m}-by-@math{p} regularization matrix @var{L} is specified by the inputs
@var{LQR} and @var{Ltau}, which are outputs from @code{gsl_multifit_linear_L_decomp}.
The dimensions of the standard form parameters (@math{\tilde{X}},@math{\tilde{y}})
depend on whether @math{m} is larger or less than @math{p}. For @math{m \ge p},
@var{Xs} is @math{n}-by-@math{p}, @var{ys} is @math{n}-by-1, and @var{M} is
not used. For @math{m < p}, @var{Xs} is @math{(n - p + m)}-by-@math{m},
@var{ys} is @math{(n - p + m)}-by-1, and @var{M} is additional @math{n}-by-@math{p} workspace,
which is required to recover the original solution vector after the system has been
solved (see @code{gsl_multifit_linear_genform2}). Optional data weights may be supplied in the
vector @var{w} of length @math{n}, where @math{W =} diag(w).
@end deftypefun
@deftypefun int gsl_multifit_linear_solve (const double @var{lambda}, const gsl_matrix * @var{Xs}, const gsl_vector * @var{ys}, gsl_vector * @var{cs}, double * @var{rnorm}, double * @var{snorm}, gsl_multifit_linear_workspace * @var{work})
This function computes the regularized best-fit parameters @math{\tilde{c}}
which minimize the cost function
@math{\chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2} which is
in standard form. The least squares system must therefore be converted
to standard form prior to calling this function.
The observation vector @math{\tilde{y}} is provided in @var{ys} and the matrix of
predictor variables @math{\tilde{X}} in @var{Xs}. The solution vector @math{\tilde{c}} is
returned in @var{cs}, which has length min(@math{m,p}). The SVD of @var{Xs} must be computed prior
to calling this function, using @code{gsl_multifit_linear_svd}.
The regularization parameter @math{\lambda} is provided in @var{lambda}.
The residual norm @math{|| \tilde{y} - \tilde{X} \tilde{c} || = ||y - X c||_W} is returned in @var{rnorm}.
The solution norm @math{|| \tilde{c} || = ||L c||} is returned in
@var{snorm}.
@end deftypefun
@deftypefun int gsl_multifit_linear_genform1 (const gsl_vector * @var{L}, const gsl_vector * @var{cs}, gsl_vector * @var{c}, gsl_multifit_linear_workspace * @var{work})
After a regularized system has been solved with
@math{L =} diag@math{(l_0,l_1,...,l_{p-1})},
this function backtransforms the standard form solution vector @var{cs}
to recover the solution vector of the original problem @var{c}. The
diagonal matrix elements @math{l_i} are provided in
the vector @var{L}. It is allowed to have @var{c} = @var{cs} for an
in-place transform.
@end deftypefun
@deftypefun int gsl_multifit_linear_genform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_matrix * @var{X}, const gsl_vector * @var{y}, const gsl_vector * @var{cs}, const gsl_matrix * @var{M}, gsl_vector * @var{c}, gsl_multifit_linear_workspace * @var{work})
@deftypefunx int gsl_multifit_linear_wgenform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, const gsl_vector * @var{cs}, const gsl_matrix * @var{M}, gsl_vector * @var{c}, gsl_multifit_linear_workspace * @var{work})
After a regularized system has been solved with a general rectangular matrix @math{L},
specified by (@var{LQR},@var{Ltau}), this function backtransforms the standard form solution @var{cs}
to recover the solution vector of the original problem, which is stored in @var{c},
of length @math{p}. The original least squares matrix and observation vector are provided in
@var{X} and @var{y} respectively. @var{M} is the matrix computed by
@code{gsl_multifit_linear_stdform2}. For weighted fits, the weight vector
@var{w} must also be supplied.
@end deftypefun
@deftypefun int gsl_multifit_linear_applyW (const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, gsl_matrix * @var{WX}, gsl_vector * @var{Wy})
For weighted least squares systems with @math{L = I}, this function may be used to
convert the system to standard form by applying the weight matrix @math{W =} diag(@var{w})
to the least squares matrix @var{X} and observation vector @var{y}. On output, @var{WX}
is equal to @math{W^{1/2} X} and @var{Wy} is equal to @math{W^{1/2} y}. It is allowed
for @var{WX} = @var{X} and @var{Wy} = @var{y} for an in-place transform.
@end deftypefun
@deftypefun int gsl_multifit_linear_lcurve (const gsl_vector * @var{y}, gsl_vector * @var{reg_param}, gsl_vector * @var{rho}, gsl_vector * @var{eta}, gsl_multifit_linear_workspace * @var{work})
This function computes the L-curve for a least squares system
using the right hand side vector @var{y} and the SVD decomposition
of the least squares matrix @var{X}, which must be provided
to @code{gsl_multifit_linear_svd} prior to
calling this function. The output vectors @var{reg_param},
@var{rho}, and @var{eta} must all be the same size, and will
contain the regularization parameters @math{\lambda_i}, residual norms
@math{||y - X c_i||}, and solution norms @math{|| L c_i ||}
which compose the L-curve, where @math{c_i} is the regularized
solution vector corresponding to @math{\lambda_i}.
The user may determine the number of points on the L-curve by
adjusting the size of these input arrays. The regularization
parameters @math{\lambda_i} are estimated from the singular values
of @var{X}, and chosen to represent the most relevant portion of
the L-curve.
@end deftypefun
@deftypefun int gsl_multifit_linear_lcorner (const gsl_vector * @var{rho}, const gsl_vector * @var{eta}, size_t * @var{idx})
This function attempts to locate the corner of the L-curve
@math{(||y - X c||, ||L c||)} defined by the @var{rho} and @var{eta}
input arrays respectively. The corner is defined as the point of maximum
curvature of the L-curve in log-log scale. The @var{rho} and @var{eta}
arrays can be outputs of @code{gsl_multifit_linear_lcurve}. The
algorithm used simply fits a circle to 3 consecutive points on the L-curve
and uses the circle's radius to determine the curvature at
the middle point. Therefore, the input array sizes must be
@math{\ge 3}. With more points provided for the L-curve, a better
estimate of the curvature can be obtained. The array index
corresponding to maximum curvature (ie: the corner) is returned
in @var{idx}. If the input arrays contain collinear points,
this function could fail and return @code{GSL_EINVAL}.
@end deftypefun
@deftypefun int gsl_multifit_linear_lcorner2 (const gsl_vector * @var{reg_param}, const gsl_vector * @var{eta}, size_t * @var{idx})
This function attempts to locate the corner of an alternate L-curve
@math{(\lambda^2, ||L c||^2)} studied by Rezghi and Hosseini, 2009.
This alternate L-curve can provide better estimates of the
regularization parameter for smooth solution vectors. The regularization
parameters @math{\lambda} and solution norms @math{||L c||} are provided
in the @var{reg_param} and @var{eta} input arrays respectively. The
corner is defined as the point of maximum curvature of this
alternate L-curve in linear scale. The @var{reg_param} and @var{eta}
arrays can be outputs of @code{gsl_multifit_linear_lcurve}. The
algorithm used simply fits a circle to 3 consecutive points on the L-curve
and uses the circle's radius to determine the curvature at
the middle point. Therefore, the input array sizes must be
@math{\ge 3}. With more points provided for the L-curve, a better
estimate of the curvature can be obtained. The array index
corresponding to maximum curvature (ie: the corner) is returned
in @var{idx}. If the input arrays contain collinear points,
this function could fail and return @code{GSL_EINVAL}.
@end deftypefun
@deftypefun int gsl_multifit_linear_gcv_init(const gsl_vector * @var{y}, gsl_vector * @var{reg_param}, gsl_vector * @var{UTy}, double * @var{delta0}, gsl_multifit_linear_workspace * @var{work})
This function performs some initialization in preparation for computing
the GCV curve and its minimum. The right hand side vector is provided
in @var{y}. On output, @var{reg_param} is set to a vector of regularization
parameters in decreasing order and may be of any size. The vector
@var{UTy} of size @math{p} is set to @math{U^T y}. The parameter
@var{delta0} is needed for subsequent steps of the GCV calculation.
@end deftypefun
@deftypefun int gsl_multifit_linear_gcv_curve(const gsl_vector * @var{reg_param}, const gsl_vector * @var{UTy}, const double @var{delta0}, gsl_vector * @var{G}, gsl_multifit_linear_workspace * @var{work})
This function calculates the GCV curve @math{G(\lambda)} and stores it in
@var{G} on output, which must be the same size as @var{reg_param}. The
inputs @var{reg_param}, @var{UTy} and @var{delta0} are computed in
@code{gsl_multifit_linear_gcv_init}.
@end deftypefun
@deftypefun int gsl_multifit_linear_gcv_min(const gsl_vector * @var{reg_param}, const gsl_vector * @var{UTy}, const gsl_vector * @var{G}, const double @var{delta0}, double * @var{lambda}, gsl_multifit_linear_workspace * @var{work})
This function computes the value of the regularization parameter
which minimizes the GCV curve @math{G(\lambda)} and stores it in
@var{lambda}. The input @var{G} is calculated by
@code{gsl_multifit_linear_gcv_curve} and the inputs
@var{reg_param}, @var{UTy} and @var{delta0} are computed by
@code{gsl_multifit_linear_gcv_init}.
@end deftypefun
@deftypefun double gsl_multifit_linear_gcv_calc(const double @var{lambda}, const gsl_vector * @var{UTy}, const double @var{delta0}, gsl_multifit_linear_workspace * @var{work})
This function returns the value of the GCV curve @math{G(\lambda)} corresponding
to the input @var{lambda}.
@end deftypefun
@deftypefun int gsl_multifit_linear_gcv(const gsl_vector * @var{y}, gsl_vector * @var{reg_param}, gsl_vector * @var{G}, double * @var{lambda}, double * @var{G_lambda}, gsl_multifit_linear_workspace * @var{work})
This function combines the steps @code{gcv_init}, @code{gcv_curve},
and @code{gcv_min} defined above into a single function. The input
@var{y} is the right hand side vector. On output, @var{reg_param} and
@var{G}, which must be the same size, are set to vectors of
@math{\lambda} and @math{G(\lambda)} values respectively. The
output @var{lambda} is set to the optimal value of @math{\lambda}
which minimizes the GCV curve. The minimum value of the GCV curve is
returned in @var{G_lambda}.
@end deftypefun
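Alternatively, with the standard form system and its SVD already computed as
in the earlier sketch, a GCV-based choice of the regularization parameter
might look like the following fragment (vector sizes chosen by the user),

@example
gsl_multifit_linear_gcv (ys, reg_param, G, &lambda, &G_lambda, work);
gsl_multifit_linear_solve (lambda, Xs, ys, cs,
                           &rnorm, &snorm, work);
gsl_multifit_linear_genform1 (L, cs, c, work);
@end example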
@deftypefun int gsl_multifit_linear_Lk (const size_t @var{p}, const size_t @var{k}, gsl_matrix * @var{L})
This function computes the discrete approximation to the derivative operator @math{L_k} of
order @var{k} on a regular grid of @var{p} points and stores it in @var{L}. The dimensions of @var{L} are
@math{(p-k)}-by-@math{p}.
@end deftypefun
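For example, a second-derivative (@math{k = 2}) smoothing operator for a
model with @math{p} coefficients might be constructed and prepared for the
standard form transformation as follows (a sketch only; the dimensions follow
from the description above),

@example
gsl_matrix *L = gsl_matrix_alloc (p - 2, p);
gsl_vector *tau = gsl_vector_alloc (p - 2);

gsl_multifit_linear_Lk (p, 2, L);        /* second difference operator */
gsl_multifit_linear_L_decomp (L, tau);   /* prepare for wstdform2 */
@end example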
@deftypefun int gsl_multifit_linear_Lsobolev (const size_t @var{p}, const size_t @var{kmax}, const gsl_vector * @var{alpha}, gsl_matrix * @var{L}, gsl_multifit_linear_workspace * @var{work})
This function computes the regularization matrix @var{L} corresponding to the weighted Sobolev norm
@math{||L c||^2 = \sum_k \alpha_k^2 ||L_k c||^2} where @math{L_k} approximates the derivative
operator of order @math{k}. This regularization norm can be useful in applications where
it is necessary to smooth several derivatives of the solution. @var{p} is the number of
model parameters, @var{kmax} is the highest derivative to include in the summation above, and
@var{alpha} is the vector of weights of size @var{kmax} + 1, where @var{alpha}[k] = @math{\alpha_k}
is the weight assigned to the derivative of order @math{k}. The output matrix @var{L} is size
@var{p}-by-@var{p} and upper triangular.
@end deftypefun
@deftypefun double gsl_multifit_linear_rcond (const gsl_multifit_linear_workspace * @var{work})
This function returns the reciprocal condition number of the least squares matrix @math{X},
defined as the ratio of the smallest and largest singular values, rcond = @math{\sigma_{min}/\sigma_{max}}.
The routine @code{gsl_multifit_linear_svd} must first be called to compute the SVD of @math{X}.
@end deftypefun
@node Robust linear regression
@section Robust linear regression
@cindex robust regression
@cindex regression, robust
@cindex least squares, robust
Ordinary least squares (OLS) models are often heavily influenced by the presence of outliers.
Outliers are data points which do not follow the general trend of the other observations,
although there is no precise definition of an outlier. Robust linear regression
refers to regression algorithms which are robust to outliers. The most common type of
robust regression is M-estimation. The general M-estimator minimizes the objective function
@tex
\beforedisplay
$$
\sum_i \rho(e_i) = \sum_i \rho (y_i - Y(c, x_i))
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i \rho(e_i) = \sum_i \rho (y_i - Y(c, x_i))
@end example
@end ifinfo
where @math{e_i = y_i - Y(c, x_i)} is the residual of the ith data point, and
@math{\rho(e_i)} is a function which should have the following properties:
@itemize @w{}
@item @math{\rho(e) \ge 0}
@item @math{\rho(0) = 0}
@item @math{\rho(-e) = \rho(e)}
@item @math{\rho(e_1) > \rho(e_2)} for @math{|e_1| > |e_2|}
@end itemize
@noindent
The special case of ordinary least squares is given by @math{\rho(e_i) = e_i^2}.
Letting @math{\psi = \rho'} be the derivative of @math{\rho}, differentiating
the objective function with respect to the coefficients @math{c}
and setting the partial derivatives to zero produces the system of equations
@tex
\beforedisplay
$$
\sum_i \psi(e_i) X_i = 0
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i \psi(e_i) X_i = 0
@end example
@end ifinfo
where @math{X_i} is a vector containing row @math{i} of the design matrix @math{X}.
Next, we define a weight function @math{w(e) = \psi(e)/e}, and let
@math{w_i = w(e_i)}:
@tex
\beforedisplay
$$
\sum_i w_i e_i X_i = 0
$$
\afterdisplay
@end tex
@ifinfo
@example
\sum_i w_i e_i X_i = 0
@end example
@end ifinfo
This system of equations is equivalent to solving a weighted ordinary least squares
problem, minimizing @math{\chi^2 = \sum_i w_i e_i^2}. The weights however, depend
on the residuals @math{e_i}, which depend on the coefficients @math{c}, which depend
on the weights. Therefore, an iterative solution is used, called Iteratively Reweighted
Least Squares (IRLS).
@enumerate
@item Compute initial estimates of the coefficients @math{c^{(0)}} using ordinary least squares
@item For iteration @math{k}, form the residuals @math{e_i^{(k)} = (y_i - X_i c^{(k-1)})/(t \sigma^{(k)} \sqrt{1 - h_i})},
where @math{t} is a tuning constant depending on the choice of @math{\psi}, and @math{h_i} are the
statistical leverages (diagonal elements of the matrix @math{X (X^T X)^{-1} X^T}). Including @math{t}
and @math{h_i} in the residual calculation has been shown to improve the convergence of the method.
The residual standard deviation is approximated as @math{\sigma^{(k)} = MAD / 0.6745}, where MAD is the
Median-Absolute-Deviation of the @math{n-p} largest residuals from the previous iteration.
@item Compute new weights @math{w_i^{(k)} = \psi(e_i^{(k)})/e_i^{(k)}}.
@item Compute new coefficients @math{c^{(k)}} by solving the weighted least squares problem with
weights @math{w_i^{(k)}}.
@item Steps 2 through 4 are iterated until the coefficients converge or until some maximum iteration
limit is reached. Coefficients are tested for convergence using the criteria:
@tex
\beforedisplay
$$
|c_i^{(k)} - c_i^{(k-1)}| \le \epsilon \times \hbox{max}(|c_i^{(k)}|, |c_i^{(k-1)}|)
$$
\afterdisplay
@end tex
@ifinfo
@example
|c_i^(k) - c_i^(k-1)| \le \epsilon \times max(|c_i^(k)|, |c_i^(k-1)|)
@end example
@end ifinfo
for all @math{0 \le i < p} where @math{\epsilon} is a small tolerance factor.
@end enumerate
@noindent
The key to this method lies in selecting the function @math{\psi(e_i)} to assign
smaller weights to large residuals, and larger weights to smaller residuals. As
the iteration proceeds, outliers are assigned smaller and smaller weights, eventually
having very little or no effect on the fitted model.
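Using the functions described below, a robust fit with the default weighting
function might be carried out as in the following sketch (allocation of the
matrix @var{X}, the vectors @var{y} and @var{c}, and the matrix @var{cov} is
assumed),

@example
gsl_multifit_robust_workspace *w
  = gsl_multifit_robust_alloc (gsl_multifit_robust_default, n, p);

gsl_multifit_robust (X, y, c, cov, w);

gsl_multifit_robust_free (w);
@end example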
@deftypefun {gsl_multifit_robust_workspace *} gsl_multifit_robust_alloc (const gsl_multifit_robust_type * @var{T}, const size_t @var{n}, const size_t @var{p})
@tindex gsl_multifit_robust_workspace
This function allocates a workspace for fitting a model to @var{n}
observations using @var{p} parameters. The size of the workspace
is @math{O(np + p^2)}. The type @var{T} specifies the
function @math{\psi} and can be selected from the following choices.
@deffn {Robust type} gsl_multifit_robust_default
This specifies the @code{gsl_multifit_robust_bisquare} type (see below) and is a good
general purpose choice for robust regression.
@end deffn
@deffn {Robust type} gsl_multifit_robust_bisquare
This is Tukey's biweight (bisquare) function and is a good general purpose choice for
robust regression. The weight function is given by
@tex
\beforedisplay
$$
w(e) = \left\{ \matrix{ (1 - e^2)^2, & |e| \le 1\cr
0, & |e| > 1} \right.
$$
\afterdisplay
@end tex
@ifinfo
@example
w(e) = (1 - e^2)^2
@end example
@end ifinfo
and the default tuning constant is @math{t = 4.685}.
@end deffn
@deffn {Robust type} gsl_multifit_robust_cauchy
This is Cauchy's function, also known as the Lorentzian function.
This function does not guarantee a unique solution,
meaning different choices of the coefficient vector @var{c}
could minimize the objective function. Therefore this option should
be used with care. The weight function is given by
@tex
\beforedisplay
$$
w(e) = {1 \over 1 + e^2}
$$
\afterdisplay
@end tex
@ifinfo
@example
w(e) = 1 / (1 + e^2)
@end example
@end ifinfo
and the default tuning constant is @math{t = 2.385}.
@end deffn
@deffn {Robust type} gsl_multifit_robust_fair
This is the fair @math{\rho} function, which guarantees a unique solution and
has continuous derivatives to three orders. The weight function is given by
@tex
\beforedisplay
$$
w(e) = {1 \over 1 + |e|}
$$
\afterdisplay
@end tex
@ifinfo
@example
w(e) = 1 / (1 + |e|)
@end example
@end ifinfo
and the default tuning constant is @math{t = 1.400}.
@end deffn
@deffn {Robust type} gsl_multifit_robust_huber
This specifies Huber's @math{\rho} function, which is a parabola in the vicinity of zero and
increases linearly for a given threshold @math{|e| > t}. This function is also considered
an excellent general purpose robust estimator; however, occasional difficulties can
be encountered due to the discontinuous first derivative of the @math{\psi} function.
The weight function is given by
@tex
\beforedisplay
$$
w(e) = \left\{ \matrix{ 1, & |e| \le 1\cr
{1 \over |e|}, & |e| > 1} \right.
$$
\afterdisplay
@end tex
@ifinfo
@example
w(e) = 1/max(1,|e|)
@end example
@end ifinfo
and the default tuning constant is @math{t = 1.345}.
@end deffn
@deffn {Robust type} gsl_multifit_robust_ols
This specifies the ordinary least squares solution, which can be useful for quickly
checking the difference between the various robust and OLS solutions. The weight function is given by
@tex
\beforedisplay
$$
w(e) = 1
$$
\afterdisplay
@end tex
@ifinfo
@example
w(e) = 1
@end example
@end ifinfo
and the default tuning constant is @math{t = 1}.
@end deffn
@deffn {Robust type} gsl_multifit_robust_welsch
This specifies the Welsch function which can perform well in cases where the residuals have an
exponential distribution. The weight function is given by
@tex
\beforedisplay
$$
w(e) = \exp{(-e^2)}
$$
\afterdisplay
@end tex
@ifinfo
@example
w(e) = \exp(-e^2)
@end example
@end ifinfo
and the default tuning constant is @math{t = 2.985}.
@end deffn
@end deftypefun
@deftypefun void gsl_multifit_robust_free (gsl_multifit_robust_workspace * @var{w})
This function frees the memory associated with the workspace @var{w}.
@end deftypefun
@deftypefun {const char *} gsl_multifit_robust_name (const gsl_multifit_robust_workspace * @var{w})
This function returns the name of the robust type @var{T} specified to @code{gsl_multifit_robust_alloc}.
@end deftypefun
@deftypefun int gsl_multifit_robust_tune (const double @var{tune}, gsl_multifit_robust_workspace * @var{w})
This function sets the tuning constant @math{t} used to adjust the residuals at each iteration to @var{tune}.
Decreasing the tuning constant increases the downweighting of large residuals, while increasing
the tuning constant decreases it.
@end deftypefun
@deftypefun int gsl_multifit_robust_maxiter (const size_t @var{maxiter}, gsl_multifit_robust_workspace * @var{w})
This function sets the maximum number of iterations in the iteratively
reweighted least squares algorithm to @var{maxiter}. By default,
this value is set to 100 by @code{gsl_multifit_robust_alloc}.
@end deftypefun
@deftypefun int gsl_multifit_robust_weights (const gsl_vector * @var{r}, gsl_vector * @var{wts}, gsl_multifit_robust_workspace * @var{w})
This function assigns weights to the vector @var{wts} using the residual vector @var{r} and
previously specified weighting function. The output weights are given by @math{wts_i = w(r_i / (t \sigma))},
where the weighting functions @math{w} are detailed in @code{gsl_multifit_robust_alloc}. @math{\sigma}
is an estimate of the residual standard deviation based on the Median-Absolute-Deviation and @math{t}
is the tuning constant. This
function is useful if the user wishes to implement their own robust regression rather than using
the supplied @code{gsl_multifit_robust} routine below.
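The following fragment is a hedged sketch of such a hand-rolled iteratively
reweighted least squares loop, built from @code{gsl_multifit_wlinear} and
@code{gsl_multifit_robust_weights}. It assumes the @math{n}-by-@math{p}
matrix @code{X} and length-@math{n} vector @code{y} have already been filled,
and it omits the convergence test and error checking for brevity; the
supplied @code{gsl_multifit_robust} routine should normally be preferred.
@example
/* requires <gsl/gsl_multifit.h> and <gsl/gsl_blas.h> */
gsl_multifit_linear_workspace *ls = gsl_multifit_linear_alloc(n, p);
gsl_multifit_robust_workspace *rob =
  gsl_multifit_robust_alloc(gsl_multifit_robust_bisquare, n, p);
gsl_vector *c = gsl_vector_alloc(p);
gsl_vector *r = gsl_vector_alloc(n);
gsl_vector *wts = gsl_vector_alloc(n);
gsl_matrix *cov = gsl_matrix_alloc(p, p);
double chisq;
size_t iter;

gsl_vector_set_all(wts, 1.0);            /* start from OLS weights */

for (iter = 0; iter < 20; ++iter)
  @{
    /* weighted least squares step with the current weights */
    gsl_multifit_wlinear(X, wts, y, c, cov, &chisq, ls);

    /* residuals r = y - X c */
    gsl_vector_memcpy(r, y);
    gsl_blas_dgemv(CblasNoTrans, -1.0, X, c, 1.0, r);

    /* recompute the robust weights from the residuals */
    gsl_multifit_robust_weights(r, wts, rob);

    /* a full implementation would test the coefficients for
       convergence here and break out of the loop */
  @}
@end example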
@end deftypefun
@deftypefun int gsl_multifit_robust (const gsl_matrix * @var{X}, const gsl_vector * @var{y}, gsl_vector * @var{c}, gsl_matrix * @var{cov}, gsl_multifit_robust_workspace * @var{w})
This function computes the best-fit parameters @var{c} of the model
@math{y = X c} for the observations @var{y} and the matrix of
predictor variables @var{X}, attempting to reduce the influence
of outliers using the algorithm outlined above.
The @math{p}-by-@math{p} variance-covariance matrix of the model parameters
@var{cov} is estimated as @math{\sigma^2 (X^T X)^{-1}}, where @math{\sigma} is
an approximation of the residual standard deviation using the theory of robust
regression. Special care must be taken when estimating @math{\sigma} and
other statistics such as @math{R^2}, and so these
are computed internally and are available by calling the function
@code{gsl_multifit_robust_statistics}.
If the coefficients do not converge within the maximum iteration
limit, the function returns @code{GSL_EMAXITER}. In this case,
the current estimates of the coefficients and covariance matrix
are returned in @var{c} and @var{cov} and the internal fit statistics
are computed with these estimates.
@end deftypefun
@deftypefun int gsl_multifit_robust_est (const gsl_vector * @var{x}, const gsl_vector * @var{c}, const gsl_matrix * @var{cov}, double * @var{y}, double * @var{y_err})
This function uses the best-fit robust regression coefficients
@var{c} and their covariance matrix
@var{cov} to compute the fitted function value
@var{y} and its standard deviation @var{y_err} for the model @math{y = x.c}
at the point @var{x}.
@end deftypefun
@deftypefun int gsl_multifit_robust_residuals (const gsl_matrix * @var{X}, const gsl_vector * @var{y}, const gsl_vector * @var{c}, gsl_vector * @var{r}, gsl_multifit_robust_workspace * @var{w})
This function computes the vector of studentized residuals
@math{r_i = {y_i - (X c)_i \over \sigma \sqrt{1 - h_i}}} for
the observations @var{y}, coefficients @var{c} and matrix of predictor
variables @var{X}. The routine @code{gsl_multifit_robust} must
first be called to compute the statistical leverages @math{h_i} of
the matrix @var{X} and residual standard deviation estimate @math{\sigma}.
@end deftypefun
@deftypefun gsl_multifit_robust_stats gsl_multifit_robust_statistics (const gsl_multifit_robust_workspace * @var{w})
This function returns a structure containing relevant statistics from a robust regression. The function
@code{gsl_multifit_robust} must be called first to perform the regression and calculate these statistics.
The returned @code{gsl_multifit_robust_stats} structure contains the following fields.
@itemize @w{}
@item double @code{sigma_ols} This contains the standard deviation of the residuals as computed from ordinary least squares (OLS).
@item double @code{sigma_mad} This contains an estimate of the standard deviation of the final residuals using the Median-Absolute-Deviation statistic.
@item double @code{sigma_rob} This contains an estimate of the standard deviation of the final residuals from the theory of robust regression (see Street et al., 1988).
@item double @code{sigma} This contains an estimate of the standard deviation of the final residuals by attempting to reconcile @code{sigma_rob} and @code{sigma_ols}
in a reasonable way.
@item double @code{Rsq} This contains the @math{R^2} coefficient of determination statistic using the estimate @code{sigma}.
@item double @code{adj_Rsq} This contains the adjusted @math{R^2} coefficient of determination statistic using the estimate @code{sigma}.
@item double @code{rmse} This contains the root mean squared error of the final residuals.
@item double @code{sse} This contains the residual sum of squares taking into account the robust covariance matrix.
@item size_t @code{dof} This contains the number of degrees of freedom @math{n - p}.
@item size_t @code{numit} Upon successful convergence, this contains the number of iterations performed.
@item gsl_vector * @code{weights} This contains the final weight vector of length @var{n}.
@item gsl_vector * @code{r} This contains the final residual vector of length @var{n}, @math{r = y - X c}.
@end itemize
@end deftypefun
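As a brief, hedged sketch of typical usage (the robust regression example
program later in this chapter is the authoritative reference), a fit and its
statistics might be obtained as follows, assuming the matrix @code{X} and
vector @code{y} have already been filled,
@example
/* requires <gsl/gsl_multifit.h>, <gsl/gsl_errno.h> and <stdio.h> */
gsl_multifit_robust_workspace *w =
  gsl_multifit_robust_alloc(gsl_multifit_robust_bisquare, n, p);
gsl_vector *c = gsl_vector_alloc(p);
gsl_matrix *cov = gsl_matrix_alloc(p, p);
gsl_multifit_robust_stats stats;
int s;

s = gsl_multifit_robust(X, y, c, cov, w);
if (s == GSL_EMAXITER)
  @{
    /* the coefficients did not converge within the iteration
       limit; c and cov still hold the current estimates */
  @}

stats = gsl_multifit_robust_statistics(w);
printf("sigma = %g, R^2 = %g, iterations = %zu\n",
       stats.sigma, stats.Rsq, stats.numit);

gsl_multifit_robust_free(w);
@end example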
@node Large Dense Linear Systems
@section Large dense linear systems
@cindex large dense linear least squares
@cindex linear least squares, large
This module is concerned with solving large dense least squares systems
@math{X c = y} where the @math{n}-by-@math{p} matrix
@math{X} has @math{n >> p} (i.e., many more rows than columns).
This type of matrix is called a ``tall skinny'' matrix, and for
some applications it may not be possible to fit the
entire matrix in memory at once, making the standard SVD approach impractical.
Therefore, the algorithms in this module are designed to allow
the user to construct smaller blocks of the matrix @math{X} and
accumulate those blocks into the larger system one at a time. The
algorithms in this module never need to store the entire matrix
@math{X} in memory. The large linear least squares routines
support data weights and Tikhonov regularization, and are
designed to minimize the residual
@tex
\beforedisplay
$$
\chi^2 = || y - Xc ||_W^2 + \lambda^2 || L c ||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = || y - Xc ||_W^2 + \lambda^2 || L c ||^2
@end example
@end ifinfo
where @math{y} is the @math{n}-by-@math{1} observation vector,
@math{X} is the @math{n}-by-@math{p} design matrix, @math{c} is
the @math{p}-by-@math{1} solution vector,
@math{W =} diag@math{(w_1,...,w_n)} is the data weighting matrix,
@math{L} is an @math{m}-by-@math{p} regularization matrix,
@math{\lambda} is a regularization parameter,
and @math{||r||_W^2 = r^T W r}. In the discussion which follows,
we will assume that the system has been converted into Tikhonov
standard form,
@tex
\beforedisplay
$$
\chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = || y~ - X~ c~ ||^2 + \lambda^2 || c~ ||^2
@end example
@end ifinfo
and we will drop the tilde characters from the various parameters.
For a discussion of the transformation to standard form,
@pxref{Regularized regression}.
The basic idea is to partition the matrix @math{X} and observation
vector @math{y} as
@tex
\beforedisplay
$$
\left(
\matrix{
X_1 \cr
X_2 \cr
X_3 \cr
\vdots \cr
X_k \cr
}
\right)
c =
\left(
\matrix{
y_1 \cr
y_2 \cr
y_3 \cr
\vdots \cr
y_k \cr
}
\right)
$$
\afterdisplay
@end tex
@ifinfo
@example
[ X_1 ] c = [ y_1 ]
[ X_2 ] [ y_2 ]
[ X_3 ] [ y_3 ]
[ ... ] [ ... ]
[ X_k ] [ y_k ]
@end example
@end ifinfo
into @math{k} blocks, where each block (@math{X_i,y_i}) may have
any number of rows, but each @math{X_i} has @math{p} columns.
The sections below describe the methods available for solving
this partitioned system. The functions are declared in
the header file @file{gsl_multilarge.h}.
@menu
* Large Dense Linear Systems Normal Equations::
* Large Dense Linear Systems TSQR::
* Large Dense Linear Systems Solution Steps::
* Large Dense Linear Systems Routines::
@end menu
@node Large Dense Linear Systems Normal Equations
@subsection Normal Equations Approach
@cindex large linear least squares, normal equations
The normal equations approach to the large linear least squares
problem described above is popular due to its speed and simplicity.
Since the normal equations solution to the problem is given by
@tex
\beforedisplay
$$
c = \left( X^T X + \lambda^2 I \right)^{-1} X^T y
$$
\afterdisplay
@end tex
@ifinfo
@example
c = ( X^T X + \lambda^2 I )^-1 X^T y
@end example
@end ifinfo
only the @math{p}-by-@math{p} matrix @math{X^T X} and
@math{p}-by-1 vector @math{X^T y} need to be stored. Using
the partition scheme described above, these are given by
@tex
\beforedisplay
$$
\eqalign{
X^T X &= \sum_i X_i^T X_i \cr
X^T y &= \sum_i X_i^T y_i
}
$$
\afterdisplay
@end tex
@ifinfo
@example
X^T X = \sum_i X_i^T X_i
X^T y = \sum_i X_i^T y_i
@end example
@end ifinfo
Since the matrix @math{X^T X} is symmetric, only half of it
needs to be calculated. Once all of the blocks (@math{X_i,y_i})
have been accumulated into the final @math{X^T X} and @math{X^T y},
the system can be solved with a Cholesky factorization of the
@math{X^T X} matrix. If the Cholesky factorization fails (occasionally
due to numerical rounding errors), a QR decomposition is then used.
In both cases, the @math{X^T X} matrix is first transformed via
a diagonal scaling transformation to attempt to reduce its condition
number as much as possible to recover a more accurate solution vector.
The normal equations approach is the fastest method for solving the
large least squares problem, and is accurate for well-conditioned
matrices @math{X}. However, for ill-conditioned matrices, as is often
the case for large systems, this method can suffer from numerical
instabilities (see Trefethen and Bau, 1997). The number of operations
for this method is @math{O(np^2 + {1 \over 3}p^3)}.
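Conceptually, each block contributes a rank update to @math{X^T X} and
@math{X^T y}. The fragment below is an illustrative sketch only, written
with the GSL BLAS and linear algebra interfaces; the @code{gsl_multilarge}
routines described later perform this accumulation, the diagonal scaling,
and the QR fallback internally and should be used in practice. Here
@code{Xi} and @code{yi} hold one block, @code{XTX} (@math{p}-by-@math{p})
and @code{XTy} (length @math{p}) are zeroed before the first block,
@code{c} is the solution vector, @code{j} is a loop index and
@code{lambda} is the regularization parameter. In practice only half of
@math{X^T X} need be accumulated (for example with @code{gsl_blas_dsyrk}),
but @code{gsl_blas_dgemm} is used here for clarity.
@example
/* requires <gsl/gsl_blas.h> and <gsl/gsl_linalg.h> */

/* accumulate one block: XTX += Xi^T Xi, XTy += Xi^T yi */
gsl_blas_dgemm(CblasTrans, CblasNoTrans, 1.0, Xi, Xi, 1.0, XTX);
gsl_blas_dgemv(CblasTrans, 1.0, Xi, yi, 1.0, XTy);

/* ... repeat for the remaining blocks ... */

/* add the Tikhonov term to the diagonal and solve by Cholesky */
for (j = 0; j < p; ++j)
  gsl_matrix_set(XTX, j, j,
                 gsl_matrix_get(XTX, j, j) + lambda * lambda);

gsl_linalg_cholesky_decomp(XTX);    /* factorization overwrites XTX */
gsl_linalg_cholesky_solve(XTX, XTy, c);
@end example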
@node Large Dense Linear Systems TSQR
@subsection Tall Skinny QR (TSQR) Approach
@cindex large linear least squares, TSQR
An algorithm which has better numerical stability for ill-conditioned
problems is known as the Tall Skinny QR (TSQR) method. This method
is based on computing the thin QR decomposition of the least squares
matrix @math{X = Q R}, where @math{Q} is an @math{n}-by-@math{p} matrix
with orthogonal columns, and @math{R} is a @math{p}-by-@math{p}
upper triangular matrix. Once these factors are calculated, the
residual becomes
@tex
\beforedisplay
$$
\chi^2 = || Q^T y - R c ||^2 + \lambda^2 || c ||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\chi^2 = || Q^T y - R c ||^2 + \lambda^2 || c ||^2
@end example
@end ifinfo
which can be written as the matrix equation
@tex
\beforedisplay
$$
\left(
\matrix{
R \cr
\lambda I
} \right) c =
\left(
\matrix{
Q^T y \cr
0
} \right)
$$
\afterdisplay
@end tex
@ifinfo
@example
[ R ; \lambda I ] c = [ Q^T y ; 0 ]
@end example
@end ifinfo
The matrix on the left hand side is now a much
smaller @math{2p}-by-@math{p} matrix which can
be solved with a standard SVD approach. The
@math{Q} matrix is just as large as the original
matrix @math{X}, however it does not need to be
explicitly constructed. The TSQR algorithm
computes only the @math{p}-by-@math{p} matrix
@math{R} and the @math{p}-by-1 vector @math{Q^T y},
and updates these quantities as new blocks
are added to the system. Each time a new block of rows
(@math{X_i,y_i}) is added, the algorithm performs a QR decomposition
of the matrix
@tex
\beforedisplay
$$
\left(
\matrix{
R_{i-1} \cr
X_i
} \right)
$$
\afterdisplay
@end tex
@ifinfo
@example
[ R_(i-1) ; X_i ]
@end example
@end ifinfo
where @math{R_{i-1}} is the upper triangular
@math{R} factor for the matrix
@tex
\beforedisplay
$$
\left(
\matrix{
X_1 \cr
\vdots \cr
X_{i-1}
} \right)
$$
\afterdisplay
@end tex
@ifinfo
@example
[ X_1 ; ... ; X_(i-1) ]
@end example
@end ifinfo
This QR decomposition is performed efficiently, taking into account
the sparse structure of @math{R_{i-1}}. See Demmel et al., 2008 for
more details on how this is accomplished. The number
of operations for this method is @math{O(2np^2 - {2 \over 3}p^3)}.
@node Large Dense Linear Systems Solution Steps
@subsection Large Dense Linear Systems Solution Steps
@cindex large linear least squares, steps
The typical steps required to solve large regularized linear least
squares problems are as follows (a schematic code sketch of the
simplest case is given after the list):
@enumerate
@item Choose the regularization matrix @math{L}.
@item Construct a block of rows of the least squares matrix, right
hand side vector, and weight vector (@math{X_i}, @math{y_i}, @math{w_i}).
@item Transform the block to standard form (@math{\tilde{X_i}},@math{\tilde{y_i}}). This
step can be skipped if @math{L = I} and @math{W = I}.
@item Accumulate the standard form block (@math{\tilde{X_i}},@math{\tilde{y_i}}) into
the system.
@item Repeat steps 2-4 until the entire matrix and right hand side vector have
been accumulated.
@item Determine an appropriate regularization parameter @math{\lambda} (using for example
L-curve analysis).
@item Solve the standard form system using the chosen @math{\lambda}.
@item Backtransform the standard form solution @math{\tilde{c}} to recover the
original solution vector @math{c}.
@end enumerate
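The simplest instance of these steps, with @math{L = I} and @math{W = I}
so that no standard form transformation is needed, might look like the
following hedged sketch. Here @code{build_block} is a hypothetical user
routine which fills @code{Xi} and @code{yi} with the next block of rows,
@code{nblocks} is the number of blocks, and @code{lambda} has already
been chosen (for example from the output of
@code{gsl_multilarge_linear_lcurve}).
@example
/* requires <gsl/gsl_multilarge.h> */
gsl_multilarge_linear_workspace *w =
  gsl_multilarge_linear_alloc(gsl_multilarge_linear_tsqr, p);
gsl_vector *c = gsl_vector_alloc(p);
double rnorm, snorm;
size_t i;

for (i = 0; i < nblocks; ++i)
  @{
    /* fill Xi (nrows-by-p) and yi with the next block of data,
       then accumulate it into the least squares system */
    build_block(i, Xi, yi);
    gsl_multilarge_linear_accumulate(Xi, yi, w);
  @}

/* solve the accumulated system for the chosen damping parameter */
gsl_multilarge_linear_solve(lambda, c, &rnorm, &snorm, w);

gsl_multilarge_linear_free(w);
@end example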
@node Large Dense Linear Systems Routines
@subsection Large Dense Linear Least Squares Routines
@cindex large linear least squares, routines
@deftypefun {gsl_multilarge_linear_workspace *} gsl_multilarge_linear_alloc (const gsl_multilarge_linear_type * @var{T}, const size_t @var{p})
This function allocates a workspace for solving large linear least squares
systems. The least squares matrix @math{X} has @var{p} columns,
but may have any number of rows. The parameter @var{T} specifies
the method to be used for solving the large least squares system
and may be selected from the following choices
@deffn {Multilarge type} gsl_multilarge_linear_normal
This specifies the normal equations approach for
solving the least squares system. This method is suitable
in cases where performance is critical and it is known that the
least squares matrix @math{X} is well conditioned. The size
of this workspace is @math{O(p^2)}.
@end deffn
@deffn {Multilarge type} gsl_multilarge_linear_tsqr
This specifies the sequential Tall Skinny QR (TSQR) approach for
solving the least squares system. This method is a good
general purpose choice for large systems, but requires about
twice as many operations as the normal equations method for
@math{n >> p}. The size of this workspace is @math{O(p^2)}.
@end deffn
@end deftypefun
@deftypefun void gsl_multilarge_linear_free (gsl_multilarge_linear_workspace * @var{w})
This function frees the memory associated with the
workspace @var{w}.
@end deftypefun
@deftypefun {const char *} gsl_multilarge_linear_name (gsl_multilarge_linear_workspace * @var{w})
This function returns a string pointer to the name
of the multilarge solver.
@end deftypefun
@deftypefun int gsl_multilarge_linear_reset (gsl_multilarge_linear_workspace * @var{w})
This function resets the workspace @var{w} so
it can begin to accumulate a new least squares
system.
@end deftypefun
@deftypefun int gsl_multilarge_linear_stdform1 (const gsl_vector * @var{L}, const gsl_matrix * @var{X}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_multilarge_linear_workspace * @var{work})
@deftypefunx int gsl_multilarge_linear_wstdform1 (const gsl_vector * @var{L}, const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_multilarge_linear_workspace * @var{work})
These functions define a regularization matrix
@math{L =} diag@math{(l_0,l_1,...,l_{p-1})}.
The diagonal matrix element @math{l_i} is provided by the
@math{i}th element of the input vector @var{L}.
The block (@var{X},@var{y}) is converted to standard form and
the parameters (@math{\tilde{X}},@math{\tilde{y}}) are stored in @var{Xs}
and @var{ys} on output. @var{Xs} and @var{ys} have the same dimensions as
@var{X} and @var{y}. Optional data weights may be supplied in the
vector @var{w}. In order to apply this transformation,
@math{L^{-1}} must exist and so none of the @math{l_i}
may be zero. After the standard form system has been solved,
use @code{gsl_multilarge_linear_genform1} to recover the original solution vector.
It is allowed to have @var{X} = @var{Xs} and @var{y} = @var{ys} for an in-place transform.
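For illustration, a hedged sketch of this transformation with a diagonal
regularization matrix might proceed as follows; the block matrices and
vectors (@code{Xi}, @code{wi}, @code{yi}, @code{Xs}, @code{ys}), the
solution vectors (@code{cs}, @code{c}), and the multilarge workspace
@code{work} are assumed to be allocated with the dimensions described
above.
@example
gsl_vector *L = gsl_vector_alloc(p);
gsl_vector_set_all(L, 1.0);   /* for example, L = I */

/* convert one weighted block to standard form and accumulate it */
gsl_multilarge_linear_wstdform1(L, Xi, wi, yi, Xs, ys, work);
gsl_multilarge_linear_accumulate(Xs, ys, work);

/* ... accumulate the remaining blocks and call
   gsl_multilarge_linear_solve to obtain cs ... */

/* recover the solution vector of the original problem */
gsl_multilarge_linear_genform1(L, cs, c, work);
@end example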
@end deftypefun
@deftypefun int gsl_multilarge_linear_L_decomp (gsl_matrix * @var{L}, gsl_vector * @var{tau})
This function calculates the QR decomposition of the @math{m}-by-@math{p} regularization matrix
@var{L}. @var{L} must have @math{m \ge p}. On output,
the Householder scalars are stored in the vector @var{tau} of size @math{p}.
These outputs will be used by @code{gsl_multilarge_linear_wstdform2} to complete the
transformation to standard form.
@end deftypefun
@deftypefun int gsl_multilarge_linear_stdform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_matrix * @var{X}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_multilarge_linear_workspace * @var{work})
@deftypefunx int gsl_multilarge_linear_wstdform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_matrix * @var{X}, const gsl_vector * @var{w}, const gsl_vector * @var{y}, gsl_matrix * @var{Xs}, gsl_vector * @var{ys}, gsl_multilarge_linear_workspace * @var{work})
These functions convert a block of rows (@var{X},@var{y},@var{w}) to standard
form (@math{\tilde{X}},@math{\tilde{y}}) which are stored in @var{Xs} and @var{ys}
respectively. @var{X}, @var{y}, and @var{w} must all have the same number of rows.
The @math{m}-by-@math{p} regularization matrix @var{L} is specified by the inputs
@var{LQR} and @var{Ltau}, which are outputs from @code{gsl_multilarge_linear_L_decomp}.
@var{Xs} and @var{ys} have the same dimensions as @var{X} and @var{y}. After the
standard form system has been solved, use @code{gsl_multilarge_linear_genform2} to
recover the original solution vector. Optional data weights may be supplied in the
vector @var{w}, where @math{W =} diag(w).
@end deftypefun
@deftypefun int gsl_multilarge_linear_accumulate (gsl_matrix * @var{X}, gsl_vector * @var{y}, gsl_multilarge_linear_workspace * @var{w})
This function accumulates the standard form block (@math{X,y}) into the
current least squares system. @var{X} and @var{y} have the same number
of rows, which can be arbitrary. @var{X} must have @math{p} columns.
For the TSQR method, @var{X} and @var{y} are destroyed on output.
For the normal equations method, they are both unchanged.
@end deftypefun
@deftypefun int gsl_multilarge_linear_solve (const double @var{lambda}, gsl_vector * @var{c}, double * @var{rnorm}, double * @var{snorm}, gsl_multilarge_linear_workspace * @var{w})
After all blocks (@math{X_i,y_i}) have been accumulated into
the large least squares system, this function will compute
the solution vector which is stored in @var{c} on output.
The regularization parameter @math{\lambda} is provided in
@var{lambda}. On output, @var{rnorm} contains the residual norm
@math{||y - X c||_W} and @var{snorm} contains the solution
norm @math{||L c||}.
@end deftypefun
@deftypefun int gsl_multilarge_linear_genform1 (const gsl_vector * @var{L}, const gsl_vector * @var{cs}, gsl_vector * @var{c}, gsl_multilarge_linear_workspace * @var{work})
After a regularized system has been solved with
@math{L =} diag@math{(l_0,l_1,...,l_{p-1})},
this function backtransforms the standard form solution vector @var{cs}
to recover the solution vector of the original problem @var{c}. The
diagonal matrix elements @math{l_i} are provided in
the vector @var{L}. It is allowed to have @var{c} = @var{cs} for an
in-place transform.
@end deftypefun
@deftypefun int gsl_multilarge_linear_genform2 (const gsl_matrix * @var{LQR}, const gsl_vector * @var{Ltau}, const gsl_vector * @var{cs}, gsl_vector * @var{c}, gsl_multilarge_linear_workspace * @var{work})
After a regularized system has been solved with a regularization matrix @math{L},
specified by (@var{LQR},@var{Ltau}), this function backtransforms the standard form solution @var{cs}
to recover the solution vector of the original problem, which is stored in @var{c},
of length @math{p}.
@end deftypefun
@deftypefun int gsl_multilarge_linear_lcurve (gsl_vector * @var{reg_param}, gsl_vector * @var{rho}, gsl_vector * @var{eta}, gsl_multilarge_linear_workspace * @var{work})
This function computes the L-curve for a large least squares system
after it has been fully accumulated into the workspace @var{work}.
The output vectors @var{reg_param}, @var{rho}, and @var{eta} must all
be the same size, and will contain the regularization parameters
@math{\lambda_i}, residual norms @math{||y - X c_i||}, and solution
norms @math{|| L c_i ||} which compose the L-curve, where @math{c_i}
is the regularized solution vector corresponding to @math{\lambda_i}.
The user may determine the number of points on the L-curve by
adjusting the size of these input arrays. For the TSQR method,
the regularization parameters @math{\lambda_i} are estimated from the
singular values of the triangular @math{R} factor. For the normal
equations method, they are estimated from the eigenvalues of the
@math{X^T X} matrix.
@end deftypefun
@deftypefun int gsl_multilarge_linear_rcond (double * @var{rcond}, gsl_multilarge_linear_workspace * @var{work})
This function computes the reciprocal condition number, stored in
@var{rcond}, of the least squares matrix after it has been accumulated
into the workspace @var{work}. For the TSQR algorithm, this is
accomplished by calculating the SVD of the @math{R} factor, which
has the same singular values as the matrix @math{X}. For the normal
equations method, this is done by computing the eigenvalues of
@math{X^T X}, which could be inaccurate for ill-conditioned matrices
@math{X}.
@end deftypefun
@node Troubleshooting
@section Troubleshooting
@cindex least squares troubleshooting
When using models based on polynomials, care should be taken when constructing the design matrix
@math{X}. If the @math{x} values are large, then the matrix @math{X} could be ill-conditioned
since its columns are powers of @math{x}, leading to unstable least-squares solutions.
In this case it can often help to center and scale the @math{x} values using the mean and standard deviation:
@tex
\beforedisplay
$$
x' = {x - \mu(x) \over \sigma(x)}
$$
\afterdisplay
@end tex
@ifinfo
@example
x' = (x - mu)/sigma
@end example
@end ifinfo
@noindent
and then construct the @math{X} matrix using the transformed values @math{x'}.
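For example, the transformation might be applied as in the following
hedged sketch, where @code{x} is a plain C array of length @code{n}
and @code{X} is the design matrix of a quadratic model in the scaled
variable,
@example
/* requires <gsl/gsl_statistics_double.h> and <gsl/gsl_matrix.h> */
double mu = gsl_stats_mean(x, 1, n);
double sd = gsl_stats_sd(x, 1, n);
size_t i;

for (i = 0; i < n; ++i)
  @{
    double xs = (x[i] - mu) / sd;      /* centered and scaled value */
    gsl_matrix_set(X, i, 0, 1.0);
    gsl_matrix_set(X, i, 1, xs);
    gsl_matrix_set(X, i, 2, xs * xs);
  @}
@end example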
@node Fitting Examples
@section Examples
The example programs in this section demonstrate the various linear regression methods.
@menu
* Fitting linear regression example::
* Fitting multi-parameter linear regression example::
* Fitting regularized linear regression example 1::
* Fitting regularized linear regression example 2::
* Fitting robust linear regression example::
* Fitting large linear systems example::
@end menu
@node Fitting linear regression example
@subsection Simple Linear Regression Example
The following program computes a least squares straight-line fit to a
simple dataset, and outputs the best-fit line and its
associated one standard-deviation error bars.
@example
@verbatiminclude examples/fitting.c
@end example
@noindent
The following commands extract the data from the output of the program
and display it using the @sc{gnu} plotutils @code{graph} utility,
@example
$ ./demo > tmp
$ more tmp
# best fit: Y = -106.6 + 0.06 X
# covariance matrix:
# [ 39602, -19.9
# -19.9, 0.01]
# chisq = 0.8
$ for n in data fit hi lo ;
do
grep "^$n" tmp | cut -d: -f2 > $n ;
done
$ graph -T X -X x -Y y -y 0 20 -m 0 -S 2 -Ie data
-S 0 -I a -m 1 fit -m 2 hi -m 2 lo
@end example
@iftex
@sp 1
@center @image{fit-wlinear,3.0in}
@end iftex
@node Fitting multi-parameter linear regression example
@subsection Multi-parameter Linear Regression Example
The following program performs a quadratic fit @math{y = c_0 + c_1 x + c_2
x^2} to a weighted dataset using the generalised linear fitting function
@code{gsl_multifit_wlinear}. The model matrix @math{X} for a quadratic
fit is given by,
@tex
\beforedisplay
$$
X=\pmatrix{1&x_0&x_0^2\cr
1&x_1&x_1^2\cr
1&x_2&x_2^2\cr
\dots&\dots&\dots\cr}
$$
\afterdisplay
@end tex
@ifinfo
@example
X = [ 1 , x_0 , x_0^2 ;
1 , x_1 , x_1^2 ;
1 , x_2 , x_2^2 ;
... , ... , ... ]
@end example
@end ifinfo
@noindent
where the column of ones corresponds to the constant term @math{c_0}.
The two remaining columns correspond to the terms @math{c_1 x} and
@math{c_2 x^2}.
The program reads @var{n} lines of data in the format (@var{x}, @var{y},
@var{err}) where @var{err} is the error (standard deviation) in the
value @var{y}.
@example
@verbatiminclude examples/fitting2.c
@end example
@noindent
A suitable set of data for fitting can be generated using the following
program. It outputs a set of points with gaussian errors from the curve
@math{y = e^x} in the region @math{0 < x < 2}.
@example
@verbatiminclude examples/fitting3.c
@end example
@noindent
The data can be prepared by running the resulting executable program,
@example
$ GSL_RNG_TYPE=mt19937_1999 ./generate > exp.dat
$ more exp.dat
0.1 0.97935 0.110517
0.2 1.3359 0.12214
0.3 1.52573 0.134986
0.4 1.60318 0.149182
0.5 1.81731 0.164872
0.6 1.92475 0.182212
....
@end example
@noindent
To fit the data use the previous program, with the number of data points
given as the first argument. In this case there are 19 data points.
@example
$ ./fit 19 < exp.dat
0.1 0.97935 +/- 0.110517
0.2 1.3359 +/- 0.12214
...
# best fit: Y = 1.02318 + 0.956201 X + 0.876796 X^2
# covariance matrix:
[ +1.25612e-02, -3.64387e-02, +1.94389e-02
-3.64387e-02, +1.42339e-01, -8.48761e-02
+1.94389e-02, -8.48761e-02, +5.60243e-02 ]
# chisq = 23.0987
@end example
@noindent
The parameters of the quadratic fit match the coefficients of the
expansion of @math{e^x}, taking into account the errors on the
parameters and the @math{O(x^3)} difference between the exponential and
quadratic functions for the larger values of @math{x}. The errors on
the parameters are given by the square-root of the corresponding
diagonal elements of the covariance matrix. The chi-squared per degree
of freedom is 1.4, indicating a reasonable fit to the data.
@iftex
@sp 1
@center @image{fit-wlinear2,3.0in}
@end iftex
@node Fitting regularized linear regression example 1
@subsection Regularized Linear Regression Example 1
The next program demonstrates the difference between ordinary and
regularized least squares when the design matrix is near-singular.
In this program, we generate two random normally distributed variables
@math{u} and @math{v}, with @math{v = u + noise} so that @math{u}
and @math{v} are nearly collinear. We then set a third dependent
variable @math{y = u + v + noise} and solve for the coefficients
@math{c_1,c_2} of the model @math{Y(c_1,c_2) = c_1 u + c_2 v}.
Since @math{u \approx v}, the design matrix @math{X} is nearly
singular, leading to unstable ordinary least squares solutions.
@noindent
Here is the program output:
@example
matrix condition number = 1.025113e+04
=== Unregularized fit ===
best fit: y = -43.6588 u + 45.6636 v
residual norm = 31.6248
solution norm = 63.1764
chisq/dof = 1.00213
=== Regularized fit (L-curve) ===
optimal lambda: 4.51103
best fit: y = 1.00113 u + 1.0032 v
residual norm = 31.6547
solution norm = 1.41728
chisq/dof = 1.04499
=== Regularized fit (GCV) ===
optimal lambda: 0.0232029
best fit: y = -19.8367 u + 21.8417 v
residual norm = 31.6332
solution norm = 29.5051
chisq/dof = 1.00314
@end example
@noindent
We see that the ordinary least squares solution is completely wrong,
while the L-curve regularized method with the optimal
@math{\lambda = 4.51103} finds the correct solution
@math{c_1 \approx c_2 \approx 1}. The GCV regularized method finds
a regularization parameter @math{\lambda = 0.0232029} which is too
small to give an accurate solution, although it performs better than OLS.
The L-curve and its computed corner, as well as the GCV curve and its
minimum are plotted below.
@iftex
@sp 1
@center @image{regularized,5.0in}
@end iftex
@noindent The program is given below.
@example
@verbatiminclude examples/fitreg.c
@end example
@node Fitting regularized linear regression example 2
@subsection Regularized Linear Regression Example 2
The following example program minimizes the cost function
@tex
\beforedisplay
$$
||y - X c||^2 + \lambda^2 ||c||^2
$$
\afterdisplay
@end tex
@ifinfo
@example
||y - X c||^2 + \lambda^2 ||c||^2
@end example
@end ifinfo
@noindent
where @math{X} is the @math{10}-by-@math{8} Hilbert matrix whose
entries are given by
@tex
\beforedisplay
$$
X_{ij} = {1 \over i + j - 1}
$$
\afterdisplay
@end tex
@ifinfo
@example
X_@{ij@} = 1 / (i + j - 1)
@end example
@end ifinfo
@noindent
and the right hand side vector is given by
@math{y = [1,-1,1,-1,1,-1,1,-1,1,-1]^T}. Solutions
are computed for @math{\lambda = 0} (unregularized) as
well as for optimal parameters @math{\lambda} chosen by
analyzing the L-curve and GCV curve.
@noindent
Here is the program output:
@example
matrix condition number = 3.565872e+09
=== Unregularized fit ===
residual norm = 2.15376
solution norm = 2.92217e+09
chisq/dof = 2.31934
=== Regularized fit (L-curve) ===
optimal lambda: 7.11407e-07
residual norm = 2.60386
solution norm = 424507
chisq/dof = 3.43565
=== Regularized fit (GCV) ===
optimal lambda: 1.72278
residual norm = 3.1375
solution norm = 0.139357
chisq/dof = 4.95076
@end example
@noindent
Here we see the unregularized solution results in a large solution
norm due to the ill-conditioned matrix. The L-curve solution finds
a small value of @math{\lambda = 7.11e-7} which still results in
a badly conditioned system and a large solution norm. The GCV method
finds a parameter @math{\lambda = 1.72} which results in a well-conditioned
system and small solution norm.
@noindent
The L-curve and its computed corner, as well as the GCV curve and its
minimum are plotted below.
@iftex
@sp 1
@center @image{regularized2,5.0in}
@end iftex
@noindent The program is given below.
@example
@verbatiminclude examples/fitreg2.c
@end example
@node Fitting robust linear regression example
@subsection Robust Linear Regression Example
The next program demonstrates the advantage of robust least squares on
a dataset with outliers. The program generates linear @math{(x,y)}
data pairs on the line @math{y = 1.45 x + 3.88}, adds some random
noise, and inserts 3 outliers into the dataset. Both the robust
and ordinary least squares (OLS) coefficients are computed for
comparison.
@example
@verbatiminclude examples/robfit.c
@end example
The output from the program is shown in the following plot.
@iftex
@sp 1
@center @image{robust,3.0in}
@end iftex
@node Fitting large linear systems example
@subsection Large Dense Linear Regression Example
The following program demonstrates the large dense linear least squares
solvers. This example is adapted from Trefethen and Bau,
and fits the function @math{f(t) = \exp{(\sin^3{(10t)})}} on
the interval @math{[0,1]} with a degree 15 polynomial. The
program generates @math{n = 50000} equally spaced points
@math{t_i} on this interval, calculates the function value
and adds random noise to determine the observation value
@math{y_i}. The entries of the least squares matrix are
@math{X_{ij} = t_i^j}, representing a polynomial fit. The
matrix is highly ill-conditioned, with a condition number
of about @math{1.4 \cdot 10^{11}}. The program accumulates the
matrix into the least squares system in 5 blocks, each with
10000 rows. This way the full matrix @math{X} is never
stored in memory. We solve the system with both the
normal equations and TSQR methods. The results are shown
in the plot below. In the top left plot, we see the unregularized
normal equations solution has larger error than TSQR due to
the ill-conditioning of the matrix. In the bottom left plot,
we show the L-curve, which exhibits multiple corners.
In the top right panel, we plot a regularized solution using
@math{\lambda = 10^{-6}}. The TSQR and normal solutions now agree;
however, they are unable to provide a good fit due to the damping.
This indicates that for some ill-conditioned
problems, regularizing the normal equations does not improve the
solution. This is further illustrated in the bottom right panel,
where we plot the L-curve calculated from the normal equations.
The curve agrees with the TSQR curve for larger damping parameters,
but for small @math{\lambda}, the normal equations approach cannot
provide accurate solution vectors, leading to numerical
inaccuracies in the left portion of the curve.
@iftex
@sp 1
@center @image{multilarge,6.0in}
@end iftex
@example
@verbatiminclude examples/largefit.c
@end example
@node Fitting References and Further Reading
@section References and Further Reading
A summary of formulas and techniques for least squares fitting can be
found in the ``Statistics'' chapter of the Annual Review of Particle
Physics prepared by the Particle Data Group,
@itemize @w{}
@item
@cite{Review of Particle Properties},
R.M. Barnett et al., Physical Review D54, 1 (1996)
@uref{http://pdg.lbl.gov/}
@end itemize
@noindent
The Review of Particle Physics is available online at the website given
above.
@cindex NIST Statistical Reference Datasets
@cindex Statistical Reference Datasets (StRD)
The tests used to prepare these routines are based on the NIST
Statistical Reference Datasets. The datasets and their documentation are
available from NIST at the following website,
@center @uref{http://www.nist.gov/itl/div898/strd/index.html}.
@noindent
More information on Tikhonov regularization can be found in
@itemize @w{}
@item Hansen, P. C. (1998), Rank-Deficient and Discrete Ill-Posed Problems:
Numerical Aspects of Linear Inversion. SIAM Monogr. on Mathematical
Modeling and Computation, Society for Industrial and Applied Mathematics
@item M. Rezghi and S. M. Hosseini (2009), A new variant of L-curve for
Tikhonov regularization, Journal of Computational and Applied Mathematics,
Volume 231, Issue 2, pages 914-924.
@end itemize
@noindent
The GSL implementation of robust linear regression closely follows the publications
@itemize @w{}
@item DuMouchel, W. and F. O'Brien (1989), "Integrating a robust
option into a multiple regression computing environment,"
Computer Science and Statistics: Proceedings of the 21st
Symposium on the Interface, American Statistical Association
@item Street, J.O., R.J. Carroll, and D. Ruppert (1988), "A note on
computing robust regression estimates via iteratively
reweighted least squares," The American Statistician, v. 42,
pp. 152-154.
@end itemize
@noindent
More information about the normal equations and TSQR approach for solving
large linear least squares systems can be found in the publications
@itemize @w{}
@item Trefethen, L. N. and Bau, D. (1997), "Numerical Linear Algebra", SIAM.
@item Demmel, J., Grigori, L., Hoemmen, M. F., and Langou, J.
"Communication-optimal parallel and sequential QR and LU factorizations",
UCB Technical Report No. UCB/EECS-2008-89, 2008.
@end itemize
|