@node Nonlinear Least-Squares Fitting
@chapter Nonlinear Least-Squares Fitting
@cindex nonlinear least squares
@cindex least squares, nonlinear
This chapter describes functions for multidimensional nonlinear
least-squares fitting. There are generally two classes of
algorithms for solving nonlinear least squares problems:
line search methods and trust region methods.
GSL currently implements only trust region methods and
provides the user with
full access to intermediate steps of the iteration. The user
can also tune a number of parameters affecting
low-level aspects of the algorithm, which can help to accelerate
convergence for the specific problem at hand. GSL provides
two separate interfaces for nonlinear least squares fitting. The
first is designed for small to moderate sized problems, and the
second is designed for very large problems, which may or may not
have significant sparse structure.
The header file @file{gsl_multifit_nlinear.h} contains prototypes for the
multidimensional nonlinear fitting functions and related declarations
for small to moderate sized systems.
The header file @file{gsl_multilarge_nlinear.h} contains prototypes for the
multidimensional nonlinear fitting functions and related declarations
for large systems.
@menu
* Nonlinear Least-Squares Overview::
* Nonlinear Least-Squares TRS Overview::
* Nonlinear Least-Squares Weighted Overview::
* Nonlinear Least-Squares Tunable Parameters::
* Nonlinear Least-Squares Initialization::
* Nonlinear Least-Squares Function Definition::
* Nonlinear Least-Squares Iteration::
* Nonlinear Least-Squares Testing for Convergence::
* Nonlinear Least-Squares High Level Driver::
* Nonlinear Least-Squares Covariance Matrix::
* Nonlinear Least-Squares Troubleshooting::
* Nonlinear Least-Squares Examples::
* Nonlinear Least-Squares References and Further Reading::
@end menu
@node Nonlinear Least-Squares Overview
@section Overview
@cindex nonlinear least squares, overview
The problem of multidimensional nonlinear least-squares fitting requires
the minimization of the squared residuals of @math{n} functions,
@math{f_i}, in @math{p} parameters, @math{x_i},
@tex
\beforedisplay
$$
\Phi(x) = {1 \over 2} || f(x) ||^2
= {1 \over 2} \sum_{i=1}^{n} f_i (x_1, \dots, x_p)^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\Phi(x) = (1/2) || f(x) ||^2
= (1/2) \sum_@{i=1@}^@{n@} f_i(x_1, ..., x_p)^2
@end example
@end ifinfo
@noindent
In trust region methods, the objective (or cost) function @math{\Phi(x)} is approximated
by a model function @math{m_k(\delta)} in the vicinity of some point @math{x_k}. The
model function is often simply a second order Taylor series expansion around the
point @math{x_k}, ie:
@tex
\beforedisplay
$$
\Phi(x_k + \delta) \approx m_k(\delta) = \Phi(x_k) + g_k^T \delta + {1 \over 2} \delta^T B_k \delta
$$
\afterdisplay
@end tex
@ifinfo
@example
\Phi(x_k + \delta) ~=~ m_k(\delta) = \Phi(x_k) + g_k^T \delta + 1/2 \delta^T B_k \delta
@end example
@end ifinfo
where @math{g_k = \nabla \Phi(x_k) = J^T f} is the gradient vector at the point @math{x_k},
@math{B_k = \nabla^2 \Phi(x_k)} is the Hessian matrix at @math{x_k}, or some
approximation to it, and @math{J} is the @math{n}-by-@math{p} Jacobian matrix
@c{$J_{ij} = \partial f_i / \partial x_j$}
@math{J_@{ij@} = d f_i / d x_j}.
In order to find the next step @math{\delta}, we minimize the model function
@math{m_k(\delta)}, but search for solutions only within a region where
we trust that @math{m_k(\delta)} is a good approximation to the objective
function @math{\Phi(x_k + \delta)}. In other words,
we seek a solution of the trust region subproblem (TRS)
@tex
\beforedisplay
$$
\min_{\delta \in R^p} m_k(\delta) = \Phi(x_k) + g_k^T \delta + {1 \over 2} \delta^T B_k \delta, \qquad\hbox{s.t.}\quad || D_k \delta || \le \Delta_k
$$
\afterdisplay
@end tex
@ifinfo
@example
\min_(\delta \in R^p) m_k(\delta), s.t. || D_k \delta || <= \Delta_k
@end example
@end ifinfo
where @math{\Delta_k > 0} is the trust region radius and @math{D_k} is
a scaling matrix. If @math{D_k = I}, then the trust region is a ball
of radius @math{\Delta_k} centered at @math{x_k}. In some applications,
the parameter vector @math{x} may have widely different scales. For
example, one parameter might be a temperature on the order of
@math{10^3} K, while another might be a length on the order of
@math{10^{-6}} m. In such cases, a spherical trust region may not
be the best choice, since if @math{\Phi} changes rapidly along
directions with one scale, and more slowly along directions with
a different scale, the model function @math{m_k} may be a poor
approximation to @math{\Phi} along the rapidly changing directions.
In such problems, it may be best to use an elliptical trust region,
by setting @math{D_k} to a diagonal matrix whose entries are designed
so that the scaled step @math{D_k \delta} has entries of approximately the same
order of magnitude.
The trust region subproblem above normally amounts to solving a
linear least squares system (or multiple systems) for the step
@math{\delta}. Once @math{\delta} is computed, it is checked whether
or not it reduces the objective function @math{\Phi(x)}. A useful
statistic for this is to look at the ratio
@tex
\beforedisplay
$$
\rho_k = { \Phi(x_k) - \Phi(x_k + \delta_k) \over m_k(0) - m_k(\delta_k) }
$$
\afterdisplay
@end tex
@ifinfo
@example
\rho_k = ( \Phi(x_k) - \Phi(x_k + \delta_k) ) / ( m_k(0) - m_k(\delta_k) )
@end example
@end ifinfo
where the numerator is the actual reduction of the objective function
due to the step @math{\delta_k}, and the denominator is the predicted
reduction due to the model @math{m_k}. If @math{\rho_k} is negative,
it means that the step @math{\delta_k} increased the objective function
and so it is rejected. If @math{\rho_k} is positive,
then we have found a step which reduced the objective function and
it is accepted. Furthermore, if @math{\rho_k} is close to 1,
then this indicates that the model function is a good approximation
to the objective function in the trust region, and so on the next
iteration the trust region is enlarged in order to take more ambitious
steps. When a step is rejected, the trust region is made smaller and
the TRS is solved again. An outline for the general trust region method
used by GSL can now be given.
@noindent
@b{Trust Region Algorithm}
@enumerate
@item Initialize: given @math{x_0}, construct @math{m_0(\delta)}, @math{D_0} and @math{\Delta_0 > 0}
@item For k = 0, 1, 2, ...
@enumerate a
@item If converged, then stop
@item Solve TRS for trial step @math{\delta_k}
@item Evaluate trial step by computing @math{\rho_k}
@enumerate
@item if step is accepted, set @math{x_{k+1} = x_k + \delta_k} and increase radius, @math{\Delta_{k+1} = \alpha \Delta_k}
@item if step is rejected, set @math{x_{k+1} = x_k} and decrease radius, @math{\Delta_{k+1} = {\Delta_k \over \beta}}; goto 2(b)
@end enumerate
@item Construct @math{m_{k+1}(\delta)} and @math{D_{k+1}}
@end enumerate
@end enumerate
@noindent
GSL offers the user a number of different algorithms for solving the trust
region subproblem in 2(b), as well as different choices of scaling matrices
@math{D_k} and different methods of updating the trust region radius
@math{\Delta_k}. Therefore, while reasonable default methods are provided,
the user has a lot of control to fine-tune the various steps of the
algorithm for their specific problem.
@node Nonlinear Least-Squares TRS Overview
@section Solving the Trust Region Subproblem (TRS)
@menu
* Nonlinear Least-Squares TRS Levenberg-Marquardt::
* Nonlinear Least-Squares TRS Levenberg-Marquardt with Geodesic Acceleration::
* Nonlinear Least-Squares TRS Dogleg::
* Nonlinear Least-Squares TRS Double Dogleg::
* Nonlinear Least-Squares TRS 2D Subspace::
* Nonlinear Least-Squares TRS Steihaug-Toint Conjugate Gradient::
@end menu
Below we describe the methods available for solving the trust region
subproblem. The methods available provide either exact or approximate
solutions to the trust region subproblem. In all algorithms below,
the Hessian matrix @math{B_k} is approximated as @math{B_k \approx J_k^T J_k},
where @math{J_k = J(x_k)}. In all methods, the solution of the TRS
involves solving a linear least squares system involving the Jacobian
matrix. For small to moderate sized problems (@code{gsl_multifit_nlinear} interface),
this is accomplished by factoring the full Jacobian matrix, which is provided
by the user, with the Cholesky, QR, or SVD decompositions. For large systems
(@code{gsl_multilarge_nlinear} interface), the user has two choices. One
is to solve the system iteratively, without needing to store the full
Jacobian matrix in memory. With this method, the user must provide a routine
to calculate the matrix-vector products @math{J u} or @math{J^T u} for a given vector @math{u}.
This iterative method is particularly useful for systems where the Jacobian has
sparse structure, since forming matrix-vector products can be done cheaply. The
second option for large systems involves forming the normal equations matrix
@math{J^T J} and then factoring it using a Cholesky decomposition. The normal
equations matrix is @math{p}-by-@math{p}, typically much smaller than the full
@math{n}-by-@math{p} Jacobian, and can usually be stored in memory even if the full
Jacobian matrix cannot. This option is useful for large, dense systems, or if the
iterative method has difficulty converging.
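For example, with the @code{gsl_multilarge_nlinear} interface, the
matrix-free iterative approach could be selected as in the following
sketch (see @ref{Nonlinear Least-Squares Tunable Parameters} for the
parameter structure):
@example
gsl_multilarge_nlinear_parameters params =
  gsl_multilarge_nlinear_default_parameters ();

/* Steihaug-Toint conjugate gradient: solve the TRS iteratively,
   using only the J u and J^T u products supplied by the user */
params.trs = gsl_multilarge_nlinear_trs_cgst;
@end example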
@node Nonlinear Least-Squares TRS Levenberg-Marquardt
@subsection Levenberg-Marquardt
@cindex Levenberg-Marquardt algorithm
@cindex nonlinear least squares, levenberg-marquardt
There is a theorem which states that if @math{\delta_k} is a solution
to the trust region subproblem given above, then there exists
@math{\mu_k \ge 0} such that
@tex
\beforedisplay
$$
\left( B_k + \mu_k D_k^T D_k \right) \delta_k = -g_k
$$
\afterdisplay
@end tex
@ifinfo
@example
( B_k + \mu_k D_k^T D_k ) \delta_k = -g_k
@end example
@end ifinfo
with @math{\mu_k (\Delta_k - ||D_k \delta_k||) = 0}. This
forms the basis of the Levenberg-Marquardt algorithm, which controls
the trust region size by adjusting the parameter @math{\mu_k}
rather than the radius @math{\Delta_k} directly. For each radius
@math{\Delta_k}, there is a unique parameter @math{\mu_k} which
solves the TRS, and they have an inverse relationship, so that large values of
@math{\mu_k} correspond to smaller trust regions, while small
values of @math{\mu_k} correspond to larger trust regions.
@noindent
With the approximation @math{B_k \approx J_k^T J_k}, on each iteration,
in order to calculate the step @math{\delta_k},
the following linear least squares problem is solved:
@tex
\beforedisplay
$$
\left[
\matrix{
J_k \cr
\sqrt{\mu_k} D_k
}
\right]
\delta_k =
-
\left[
\matrix{
f_k \cr
0
}
\right]
$$
\afterdisplay
@end tex
@ifinfo
@example
[J_k; sqrt(mu_k) D_k] \delta_k = - [f_k; 0]
@end example
@end ifinfo
@noindent
If the step @math{\delta_k} is accepted, then
@math{\mu_k} is decreased on the next iteration in order
to take a larger step, otherwise it is increased to take
a smaller step. The Levenberg-Marquardt algorithm provides
an exact solution of the trust region subproblem, but
typically has a higher computational cost per iteration
than the approximate methods discussed below, since it
may need to solve the least squares system above several
times for different values of @math{\mu_k}.
@node Nonlinear Least-Squares TRS Levenberg-Marquardt with Geodesic Acceleration
@subsection Levenberg-Marquardt with Geodesic Acceleration
@cindex Levenberg-Marquardt algorithm, geodesic acceleration
@cindex nonlinear least squares, levenberg-marquardt, geodesic acceleration
This method applies a so-called geodesic acceleration correction to
the standard Levenberg-Marquardt step @math{\delta_k} (Transtrum et al, 2011).
By interpreting @math{\delta_k} as a first order step along a geodesic in the
model parameter space (ie: a velocity @math{\delta_k = v_k}), the geodesic
acceleration @math{a_k} is a second order correction along the
geodesic which is determined by solving the linear least squares system
@tex
\beforedisplay
$$
\left[
\matrix{
J_k \cr
\sqrt{\mu_k} D_k
}
\right]
a_k =
-
\left[
\matrix{
f_{vv}(x_k) \cr
0
}
\right]
$$
\afterdisplay
@end tex
@ifinfo
@example
[J_k; sqrt(mu_k) D_k] a_k = - [f_vv(x_k); 0]
@end example
@end ifinfo
@noindent
where @math{f_{vv}} is the second directional derivative of
the residual vector in the velocity direction @math{v},
@math{f_{vv}(x) = D_v^2 f = \sum_{\alpha\beta} v_{\alpha} v_{\beta} \partial_{\alpha} \partial_{\beta} f(x)},
where @math{\alpha} and @math{\beta} are summed over the @math{p}
parameters. The new total step is then @math{\delta_k' = v_k + {1 \over 2}a_k}.
The second order correction @math{a_k} can be calculated with a modest additional
cost, and has been shown to dramatically reduce the number of iterations
(and expensive Jacobian evaluations) required to reach convergence on a variety
of different problems. In order to utilize the geodesic acceleration, the user must supply a
function which provides the second directional derivative vector
@math{f_{vv}(x)}, or alternatively the library can use a finite
difference method to estimate this vector with one additional function
evaluation of @math{f(x + h v)} where @math{h} is a tunable step size
(see the @code{h_fvv} parameter description).
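As a concrete illustration, the following sketch computes
@math{f_{vv}} analytically for a hypothetical exponential model
@math{f_i = A e^{-\lambda t_i} - y_i}; the @code{struct data} holding
the data arrays is an assumption for illustration:
@example
struct data @{ size_t n; double *t; double *y; @};

int
expb_fvv (const gsl_vector * x, const gsl_vector * v,
          void * params, gsl_vector * fvv)
@{
  struct data *d = (struct data *) params;
  double A = gsl_vector_get (x, 0);
  double lambda = gsl_vector_get (x, 1);
  double vA = gsl_vector_get (v, 0);
  double vlam = gsl_vector_get (v, 1);
  size_t i;

  for (i = 0; i < d->n; ++i)
    @{
      double ti = d->t[i];
      double e = exp (-lambda * ti);

      /* only the cross term and the lambda-lambda term of the
         second derivative matrix are nonzero for this model */
      gsl_vector_set (fvv, i,
                      -2.0 * vA * vlam * ti * e
                        + vlam * vlam * A * ti * ti * e);
    @}

  return GSL_SUCCESS;
@}
@end example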
@node Nonlinear Least-Squares TRS Dogleg
@subsection Dogleg
@cindex Dogleg algorithm
@cindex nonlinear least squares, dogleg
This is Powell's dogleg method, which finds an approximate
solution to the trust region subproblem, by restricting
its search to a piecewise linear ``dogleg'' path,
composed of the origin, the Cauchy point which represents
the model minimizer along the steepest descent direction,
and the Gauss-Newton point, which is the overall minimizer
of the unconstrained model. The Gauss-Newton step is calculated by
solving
@tex
\beforedisplay
$$
J_k \delta_{gn} = -f_k
$$
\afterdisplay
@end tex
@ifinfo
@example
J_k \delta_gn = -f_k
@end example
@end ifinfo
which is the main computational task for each iteration,
but only needs to be performed once per iteration. If
the Gauss-Newton point is inside the trust region, it is
selected as the step. If it is outside, the method then
calculates the Cauchy point, which is located along the
gradient direction. If the Cauchy point is also outside
the trust region, the method assumes that it is still far
from the minimum and so proceeds along the gradient
direction, truncating the step at the trust region
boundary. If the Cauchy point is inside the trust region,
with the Gauss-Newton point outside, the method
uses a dogleg step, which is a linear combination of the
gradient direction and the Gauss-Newton direction, stopping at the trust
region boundary.
@node Nonlinear Least-Squares TRS Double Dogleg
@subsection Double Dogleg
@cindex double Dogleg algorithm
@cindex Dogleg algorithm, double
@cindex nonlinear least squares, double dogleg
This method is an improvement over the classical dogleg
algorithm, which attempts to include information about
the Gauss-Newton step while the iteration is still far from
the minimum. When the Cauchy point is inside the trust region
and the Gauss-Newton point is outside, the method computes
a scaled Gauss-Newton point and then takes a dogleg step
between the Cauchy point and the scaled Gauss-Newton point.
The scaling is calculated to ensure that the reduction
in the model @math{m_k} is about the same as the reduction
provided by the Cauchy point.
@node Nonlinear Least-Squares TRS 2D Subspace
@subsection Two Dimensional Subspace
The dogleg methods restrict the search for the TRS solution
to a 1D curve defined by the Cauchy and Gauss-Newton points.
An improvement to this is to search for a solution using
the full two dimensional subspace spanned by the Cauchy
and Gauss-Newton directions. The dogleg path is of course
inside this subspace, and so this method solves the TRS
at least as accurately as the dogleg methods. Since this
method searches a larger subspace for a solution, it can
converge more quickly than dogleg on some problems. Because
the subspace is only two dimensional, this method is
very efficient and the main computation per iteration is
to determine the Gauss-Newton point.
@node Nonlinear Least-Squares TRS Steihaug-Toint Conjugate Gradient
@subsection Steihaug-Toint Conjugate Gradient
One difficulty of the dogleg methods is calculating the
Gauss-Newton step when the Jacobian matrix is singular. The
Steihaug-Toint method also computes a generalized dogleg
step, but avoids solving for the Gauss-Newton step directly,
instead using an iterative conjugate gradient algorithm. This
method performs well at points where the Jacobian is singular,
and is also suitable for large-scale problems where factoring
the Jacobian matrix could be prohibitively expensive.
@node Nonlinear Least-Squares Weighted Overview
@section Weighted Nonlinear Least-Squares
Weighted nonlinear least-squares fitting minimizes the function
@tex
\beforedisplay
$$
\Phi(x) = {1 \over 2} || f ||_W^2
= {1 \over 2} \sum_{i=1}^{n} w_i f_i (x_1, \dots, x_p)^2
$$
\afterdisplay
@end tex
@ifinfo
@example
\Phi(x) = (1/2) || f(x) ||_W^2
= (1/2) \sum_@{i=1@}^@{n@} w_i f_i(x_1, ..., x_p)^2
@end example
@end ifinfo
where @math{W = diag(w_1,w_2,...,w_n)} is the weighting matrix,
and @math{||f||_W^2 = f^T W f}.
The weights @math{w_i} are commonly defined as @math{w_i = 1/\sigma_i^2},
where @math{\sigma_i} is the error in the @math{i}th measurement.
A simple change of variables @math{\tilde{f} = W^{1 \over 2} f} yields
@math{\Phi(x) = {1 \over 2} ||\tilde{f}||^2}, which is in the
same form as the unweighted case. The user can either perform this
transform directly on their function residuals and Jacobian, or use
the @code{gsl_multifit_nlinear_winit} interface which automatically
performs the correct scaling. To manually perform this transformation,
the residuals and Jacobian should be modified according to
@tex
\beforedisplay
$$
\eqalign{
\tilde{f}_i & = \sqrt{w_i} f_i = {f_i \over \sigma_i} \cr
\tilde{J}_{ij} & = \sqrt{w_i} { \partial f_i \over \partial x_j } = { 1 \over \sigma_i} { \partial f_i \over \partial x_j }
}
$$
\afterdisplay
@end tex
@ifinfo
@example
f~_i = f_i / \sigma_i
J~_ij = 1 / \sigma_i df_i/dx_j
@end example
@end ifinfo
@noindent
For large systems, the user must perform their own weighting.
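As a rough sketch, this scaling could be applied inside the user's own
residual and Jacobian routines as follows, where the array
@code{sigma} of measurement errors @math{\sigma_i} is an assumption
for illustration:
@example
/* scale residual vector f (length n) and Jacobian J (n-by-p)
   by 1/sigma_i, row by row */
for (i = 0; i < n; ++i)
  @{
    double si = sigma[i];

    gsl_vector_set (f, i, gsl_vector_get (f, i) / si);

    for (j = 0; j < p; ++j)
      gsl_matrix_set (J, i, j, gsl_matrix_get (J, i, j) / si);
  @}
@end example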
@node Nonlinear Least-Squares Tunable Parameters
@section Tunable Parameters
@noindent
The user can tune nearly all aspects of the iteration at allocation
time. For the @code{gsl_multifit_nlinear} interface, the user may
modify the @code{gsl_multifit_nlinear_parameters} structure, which is
defined as follows:
@example
typedef struct
@{
const gsl_multifit_nlinear_trs *trs; /* trust region subproblem method */
const gsl_multifit_nlinear_scale *scale; /* scaling method */
const gsl_multifit_nlinear_solver *solver; /* solver method */
gsl_multifit_nlinear_fdtype fdtype; /* finite difference method */
double factor_up; /* factor for increasing trust radius */
double factor_down; /* factor for decreasing trust radius */
double avmax; /* max allowed |a|/|v| */
double h_df; /* step size for finite difference Jacobian */
double h_fvv; /* step size for finite difference fvv */
@} gsl_multifit_nlinear_parameters;
@end example
@noindent
For the @code{gsl_multilarge_nlinear} interface, the user may
modify the @code{gsl_multilarge_nlinear_parameters} structure, which is
defined as follows:
@example
typedef struct
@{
const gsl_multilarge_nlinear_trs *trs; /* trust region subproblem method */
const gsl_multilarge_nlinear_scale *scale; /* scaling method */
const gsl_multilarge_nlinear_solver *solver; /* solver method */
gsl_multilarge_nlinear_fdtype fdtype; /* finite difference method */
double factor_up; /* factor for increasing trust radius */
double factor_down; /* factor for decreasing trust radius */
double avmax; /* max allowed |a|/|v| */
double h_df; /* step size for finite difference Jacobian */
double h_fvv; /* step size for finite difference fvv */
size_t max_iter; /* maximum iterations for trs method */
double tol; /* tolerance for solving trs */
@} gsl_multilarge_nlinear_parameters;
@end example
@noindent
Each of these parameters is discussed in further detail below.
@deftypevr {Parameter} {const gsl_multifit_nlinear_trs *} trs
@deftypevrx {Parameter} {const gsl_multilarge_nlinear_trs *} trs
This parameter determines the method used to solve the trust region
subproblem, and may be selected from the following choices,
@defvr {Default} gsl_multifit_nlinear_trs_lm
@defvrx {Default} gsl_multilarge_nlinear_trs_lm
This selects the Levenberg-Marquardt algorithm.
@end defvr
@defvr {Option} gsl_multifit_nlinear_trs_lmaccel
@defvrx {Option} gsl_multilarge_nlinear_trs_lmaccel
This selects the Levenberg-Marquardt algorithm with geodesic
acceleration.
@end defvr
@defvr {Option} gsl_multifit_nlinear_trs_dogleg
@defvrx {Option} gsl_multilarge_nlinear_trs_dogleg
This selects the dogleg algorithm.
@end defvr
@defvr {Option} gsl_multifit_nlinear_trs_ddogleg
@defvrx {Option} gsl_multilarge_nlinear_trs_ddogleg
This selects the double dogleg algorithm.
@end defvr
@defvr {Option} gsl_multifit_nlinear_trs_subspace2D
@defvrx {Option} gsl_multilarge_nlinear_trs_subspace2D
This selects the 2D subspace algorithm.
@end defvr
@defvr {Option} gsl_multilarge_nlinear_trs_cgst
This selects the Steihaug-Toint conjugate gradient algorithm. This
method is available only for large systems.
@end defvr
@end deftypevr
@deftypevr {Parameter} {const gsl_multifit_nlinear_scale *} scale
@deftypevrx {Parameter} {const gsl_multilarge_nlinear_scale *} scale
This parameter determines the diagonal scaling matrix @math{D} and
may be selected from the following choices,
@defvr {Default} gsl_multifit_nlinear_scale_more
@defvrx {Default} gsl_multilarge_nlinear_scale_more
This damping strategy was suggested by Mor@'e, and
corresponds to @math{D^T D = } max(diag(@math{J^T J})),
in other words the maximum elements of
diag(@math{J^T J}) encountered thus far in the iteration.
This choice of @math{D} makes the problem scale-invariant,
so that if the model parameters @math{x_i} are each scaled
by an arbitrary constant, @math{\tilde{x}_i = a_i x_i}, then
the sequence of iterates produced by the algorithm would
be unchanged. This method can work very well in cases
where the model parameters have widely different scales
(ie: if some parameters are measured in nanometers, while others
are measured in kelvin). This strategy has been proven
effective on a large class of problems and so it is the library
default, but it may not be the best choice for all problems.
@end defvr
@defvr {Option} gsl_multifit_nlinear_scale_levenberg
@defvrx {Option} gsl_multilarge_nlinear_scale_levenberg
This damping strategy was originally suggested by Levenberg, and
corresponds to @math{D^T D = I}. This method has also proven
effective on a large class of problems, but is not scale-invariant.
However, some authors (e.g. Transtrum and Sethna 2012) argue
that this choice is better for problems which are susceptible
to parameter evaporation (ie: parameters go to infinity).
@end defvr
@defvr {Option} gsl_multifit_nlinear_scale_marquardt
@defvrx {Option} gsl_multilarge_nlinear_scale_marquardt
This damping strategy was suggested by Marquardt, and
corresponds to @math{D^T D = } diag(@math{J^T J}). This
method is scale-invariant, but it is generally considered
inferior to both the Levenberg and Mor@'e strategies, though
may work well on certain classes of problems.
@end defvr
@end deftypevr
@deftypevr {Parameter} {const gsl_multifit_nlinear_solver *} solver
@deftypevrx {Parameter} {const gsl_multilarge_nlinear_solver *} solver
@noindent
Solving the trust region subproblem on each iteration almost always
requires the solution of the following linear least squares system
@tex
\beforedisplay
$$
\left[
\matrix{
J \cr
\sqrt{\mu} D
}
\right]
\delta =
-
\left[
\matrix{
f \cr
0
}
\right]
$$
\afterdisplay
@end tex
@ifinfo
@example
[J; sqrt(mu) D] \delta = - [f; 0]
@end example
@end ifinfo
The @var{solver} parameter determines how the system is
solved and can be selected from the following choices:
@defvr {Default} gsl_multifit_nlinear_solver_qr
This method solves the system using a rank revealing QR
decomposition of the Jacobian @math{J}. This method will
produce reliable solutions in cases where the Jacobian
is rank deficient or near-singular but does require about
twice as many operations as the Cholesky method discussed
below.
@end defvr
@defvr {Option} gsl_multifit_nlinear_solver_cholesky
@defvrx {Default} gsl_multilarge_nlinear_solver_cholesky
This method solves the alternate normal equations problem
@tex
\beforedisplay
$$
\left( J^T J + \mu D^T D \right) \delta = -J^T f
$$
\afterdisplay
@end tex
@ifinfo
@example
( J^T J + \mu D^T D ) \delta = -J^T f
@end example
@end ifinfo
by using a Cholesky decomposition of the matrix
@math{J^T J + \mu D^T D}. This method is faster than the
QR approach, however it is susceptible to numerical instabilities
if the Jacobian matrix is rank deficient or near-singular. In
these cases, an attempt is made to reduce the condition number
of the matrix using Jacobi preconditioning, but for highly
ill-conditioned problems the QR approach is better. If it is
known that the Jacobian matrix is well conditioned, this method
is accurate and will perform faster than the QR approach.
@end defvr
@defvr {Option} gsl_multifit_nlinear_solver_svd
This method solves the system using a singular value
decomposition of the Jacobian @math{J}. This method will
produce the most reliable solutions for ill-conditioned Jacobians
but is also the slowest solver method.
@end defvr
@end deftypevr
@deftypevr {Parameter} {gsl_multifit_nlinear_fdtype} fdtype
This parameter specifies whether to use forward or centered
differences when approximating the Jacobian. This is only
used when an analytic Jacobian is not provided to the solver.
This parameter may be set to one of the following choices.
@defvr {Default} GSL_MULTIFIT_NLINEAR_FWDIFF
This specifies a forward finite difference to approximate
the Jacobian matrix. The Jacobian matrix will be calculated as
@tex
\beforedisplay
$$
J_{ij} = {1 \over \Delta_j} \left( f_i(x + \Delta_j e_j) - f_i(x) \right)
$$
\afterdisplay
@end tex
@ifinfo
@example
J_ij = 1 / \Delta_j ( f_i(x + \Delta_j e_j) - f_i(x) )
@end example
@end ifinfo
where @math{\Delta_j = h |x_j|} and @math{e_j} is the standard
@math{j}th Cartesian unit basis vector so that
@math{x + \Delta_j e_j} represents a small (forward) perturbation of
the @math{j}th parameter by an amount @math{\Delta_j}. The perturbation
@math{\Delta_j} is proportional to the current value @math{|x_j|} which
helps to calculate an accurate Jacobian when the various parameters have
different scale sizes. The value of @math{h} is specified by the @code{h_df}
parameter. The accuracy of this method is @math{O(h)}, and evaluating this
matrix requires an additional @math{p} function evaluations.
@end defvr
@defvr {Option} GSL_MULTIFIT_NLINEAR_CTRDIFF
This specifies a centered finite difference to approximate
the Jacobian matrix. The Jacobian matrix will be calculated as
@tex
\beforedisplay
$$
J_{ij} = {1 \over \Delta_j} \left( f_i(x + {1 \over 2} \Delta_j e_j) - f_i(x - {1 \over 2} \Delta_j e_j) \right)
$$
\afterdisplay
@end tex
@ifinfo
@example
J_ij = 1 / \Delta_j ( f_i(x + 1/2 \Delta_j e_j) - f_i(x - 1/2 \Delta_j e_j) )
@end example
@end ifinfo
See above for a description of @math{\Delta_j}. The accuracy of this
method is @math{O(h^2)}, but evaluating this
matrix requires an additional @math{2p} function evaluations.
@end defvr
@end deftypevr
@deftypevr {Parameter} {double} factor_up
When a step is accepted, the trust region radius will be increased
by this factor. The default value is @math{3}.
@end deftypevr
@deftypevr {Parameter} {double} factor_down
When a step is rejected, the trust region radius will be decreased
by this factor. The default value is @math{2}.
@end deftypevr
@deftypevr {Parameter} {double} avmax
When using geodesic acceleration to solve a nonlinear least squares problem,
an important parameter to monitor is the ratio of the acceleration term
to the velocity term,
@tex
\beforedisplay
$$
{ ||a|| \over ||v|| }
$$
\afterdisplay
@end tex
@ifinfo
@example
|a| / |v|
@end example
@end ifinfo
If this ratio is small, it means the acceleration correction
is contributing very little to the step. This could be because
the problem is not ``nonlinear'' enough to benefit from
the acceleration. If the ratio is large (@math{> 1}) it
means that the acceleration is larger than the velocity,
which shouldn't happen since the step represents a truncated
series and so the second order term @math{a} should be smaller than
the first order term @math{v} to guarantee convergence.
Therefore any steps with a ratio larger than the parameter
@var{avmax} are rejected. @var{avmax} is set to 0.75 by default.
For problems which experience difficulty converging, this threshold
could be lowered.
@end deftypevr
@deftypevr {Parameter} {double} h_df
This parameter specifies the step size for approximating the
Jacobian matrix with finite differences. It is set to
@math{\sqrt{\epsilon}} by default, where @math{\epsilon}
is @code{GSL_DBL_EPSILON}.
@end deftypevr
@deftypevr {Parameter} {double} h_fvv
When using geodesic acceleration, the user must either supply
a function to calculate @math{f_{vv}(x)} or the library
can estimate this second directional derivative using a finite
difference method. When using finite differences, the library
must calculate @math{f(x + h v)} where @math{h} represents
a small step in the velocity direction. The parameter
@var{h_fvv} defines this step size and is set to 0.02 by
default.
@end deftypevr
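For example, the following sketch starts from the recommended default
parameters and overrides a few of the fields described above; the
resulting structure would then be passed to the allocation routines of
the next section. The particular choices shown are purely illustrative:
@example
gsl_multifit_nlinear_parameters params =
  gsl_multifit_nlinear_default_parameters ();

params.trs = gsl_multifit_nlinear_trs_lmaccel;  /* LM with geodesic
                                                   acceleration */
params.solver = gsl_multifit_nlinear_solver_cholesky;
params.fdtype = GSL_MULTIFIT_NLINEAR_CTRDIFF;   /* centered differences */
params.factor_up = 2.0;                         /* gentler radius updates */
@end example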
@node Nonlinear Least-Squares Initialization
@section Initializing the Solver
@deftypefun {gsl_multifit_nlinear_workspace *} gsl_multifit_nlinear_alloc (const gsl_multifit_nlinear_type * @var{T}, const gsl_multifit_nlinear_parameters * @var{params}, const size_t @var{n}, const size_t @var{p})
@deftypefunx {gsl_multilarge_nlinear_workspace *} gsl_multilarge_nlinear_alloc (const gsl_multilarge_nlinear_type * @var{T}, const gsl_multilarge_nlinear_parameters * @var{params}, const size_t @var{n}, const size_t @var{p})
@tindex gsl_multifit_nlinear_alloc
@tindex gsl_multifit_nlinear_type
These functions return a pointer to a newly allocated instance of a
derivative solver of type @var{T} for @var{n} observations and @var{p}
parameters. The @var{params} input specifies a tunable set of
parameters which will affect important details in each iteration
of the trust region subproblem algorithm. It is recommended to start
with the suggested default parameters (see
@code{gsl_multifit_nlinear_default_parameters} and
@code{gsl_multilarge_nlinear_default_parameters}) and then tune
the parameters once the code is working correctly. See
@ref{Nonlinear Least-Squares Tunable Parameters}
for descriptions of the various parameters.
For example, the following code creates an instance of a
Levenberg-Marquardt solver for 100 data points and 3 parameters,
using suggested defaults:
@example
const gsl_multifit_nlinear_type * T
= gsl_multifit_nlinear_lm;
gsl_multifit_nlinear_parameters params
= gsl_multifit_nlinear_default_parameters();
gsl_multifit_nlinear_workspace * w
= gsl_multifit_nlinear_alloc (T, &params, 100, 3);
@end example
@noindent
The number of observations @var{n} must be greater than or equal to
the number of parameters @var{p}.
If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of @code{GSL_ENOMEM}.
@end deftypefun
@deftypefun gsl_multifit_nlinear_parameters gsl_multifit_nlinear_default_parameters (void)
@deftypefunx gsl_multilarge_nlinear_parameters gsl_multilarge_nlinear_default_parameters (void)
These functions return a set of recommended default parameters
for use in solving nonlinear least squares problems. The user
can tune each parameter to improve the performance on their
particular problem, see
@ref{Nonlinear Least-Squares Tunable Parameters}.
@end deftypefun
@deftypefun int gsl_multifit_nlinear_init (const gsl_vector * @var{x}, gsl_multifit_nlinear_fdf * @var{fdf}, gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx int gsl_multifit_nlinear_winit (const gsl_vector * @var{x}, const gsl_vector * @var{wts}, gsl_multifit_nlinear_fdf * @var{fdf}, gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx int gsl_multilarge_nlinear_init (const gsl_vector * @var{x}, gsl_multilarge_nlinear_fdf * @var{fdf}, gsl_multilarge_nlinear_workspace * @var{w})
@deftypefunx int gsl_multilarge_nlinear_winit (const gsl_vector * @var{x}, const gsl_vector * @var{wts}, gsl_multilarge_nlinear_fdf * @var{fdf}, gsl_multilarge_nlinear_workspace * @var{w})
These functions initialize, or reinitialize, an existing workspace @var{w}
to use the system @var{fdf} and the initial guess
@var{x}. See @ref{Nonlinear Least-Squares Function Definition}
for a description of the @var{fdf} structure.
Optionally, a weight vector @var{wts} can be given to perform
a weighted nonlinear regression. Here, the weighting matrix is
@math{W = diag(w_1,w_2,...,w_n)}.
@end deftypefun
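For example, weights @math{w_i = 1/\sigma_i^2} could be constructed
and supplied as in the following sketch, where the array @code{sigma}
is an assumption for illustration:
@example
gsl_vector *wts = gsl_vector_alloc (n);
size_t i;

/* w_i = 1 / sigma_i^2 */
for (i = 0; i < n; ++i)
  gsl_vector_set (wts, i, 1.0 / (sigma[i] * sigma[i]));

gsl_multifit_nlinear_winit (x, wts, &fdf, w);
@end example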
@deftypefun void gsl_multifit_nlinear_free (gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx void gsl_multilarge_nlinear_free (gsl_multilarge_nlinear_workspace * @var{w})
These functions free all the memory associated with the workspace @var{w}.
@end deftypefun
@deftypefun {const char *} gsl_multifit_nlinear_name (const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx {const char *} gsl_multilarge_nlinear_name (const gsl_multilarge_nlinear_workspace * @var{w})
These functions return a pointer to the name of the solver. For example,
@example
printf ("w is a '%s' solver\n",
gsl_multifit_nlinear_name (w));
@end example
@noindent
would print something like @code{w is a 'trust-region' solver}.
@end deftypefun
@deftypefun {const char *} gsl_multifit_nlinear_trs_name (const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx {const char *} gsl_multilarge_nlinear_trs_name (const gsl_multilarge_nlinear_workspace * @var{w})
These functions return a pointer to the name of the trust region subproblem
method. For example,
@example
printf ("w is a '%s' solver\n",
gsl_multifit_nlinear_trs_name (w));
@end example
@noindent
would print something like @code{w is a 'levenberg-marquardt' solver}.
@end deftypefun
@node Nonlinear Least-Squares Function Definition
@section Providing the Function to be Minimized
The user must provide @math{n} functions of @math{p} variables for the
minimization algorithm to operate on. In order to allow for
arbitrary parameters the functions are defined by the following data
types:
@deftp {Data Type} gsl_multifit_nlinear_fdf
This data type defines a general system of functions with arbitrary parameters,
the corresponding Jacobian matrix of derivatives, and optionally the
second directional derivative of the functions for geodesic acceleration.
@table @code
@item int (* f) (const gsl_vector * @var{x}, void * @var{params}, gsl_vector * @var{f})
This function should store the @math{n} components of the vector
@c{$f(x)$}
@math{f(x)} in @var{f} for argument @var{x} and arbitrary parameters @var{params},
returning an appropriate error code if the function cannot be computed.
@item int (* df) (const gsl_vector * @var{x}, void * @var{params}, gsl_matrix * @var{J})
This function should store the @var{n}-by-@var{p} matrix result
@c{$J_{ij} = \partial f_i(x) / \partial x_j$}
@math{J_ij = d f_i(x) / d x_j} in @var{J} for argument @var{x}
and arbitrary parameters @var{params}, returning an appropriate error code if the
matrix cannot be computed. If an analytic Jacobian is unavailable, or too expensive
to compute, this function pointer may be set to NULL, in which
case the Jacobian will be internally computed using finite difference approximations
of the function @var{f}.
@item int (* fvv) (const gsl_vector * @var{x}, const gsl_vector * @var{v}, void * @var{params}, gsl_vector * @var{fvv})
When geodesic acceleration is enabled, this function should store the
@math{n} components of the vector
@math{f_{vv}(x) = \sum_{\alpha\beta} v_{\alpha} v_{\beta} {\partial \over \partial x_{\alpha}} {\partial \over \partial x_{\beta}} f(x)},
representing second directional derivatives of the function to be minimized,
into the output @var{fvv}. The parameter vector is provided in @var{x} and
the velocity vector is provided in @var{v}, both of which have @math{p}
components. The arbitrary parameters are given in @var{params}. If
analytic expressions for @math{f_{vv}(x)} are unavailable or too difficult
to compute, this function pointer may be set to NULL, in which case
@math{f_{vv}(x)} will be computed internally using a finite difference
approximation.
@item size_t n
the number of functions, i.e. the number of components of the
vector @var{f}.
@item size_t p
the number of independent variables, i.e. the number of components of
the vector @var{x}.
@item void * params
a pointer to the arbitrary parameters of the function.
@item size_t nevalf
This does not need to be set by the user. It counts the number of
function evaluations and is initialized by the @code{_init} function.
@item size_t nevaldf
This does not need to be set by the user. It counts the number of
Jacobian evaluations and is initialized by the @code{_init} function.
@item size_t nevalfvv
This does not need to be set by the user. It counts the number of
@math{f_{vv}(x)} evaluations and is initialized by the @code{_init} function.
@end table
@end deftp
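For example, a residual function and @code{gsl_multifit_nlinear_fdf}
setup for a hypothetical exponential model
@math{f_i = A e^{-\lambda t_i} - y_i} might look like the following
sketch; the @code{struct data} and the variables @code{n} and
@code{mydata} are assumptions for illustration:
@example
struct data @{ size_t n; double *t; double *y; @};

int
expb_f (const gsl_vector * x, void * params, gsl_vector * f)
@{
  struct data *d = (struct data *) params;
  double A = gsl_vector_get (x, 0);
  double lambda = gsl_vector_get (x, 1);
  size_t i;

  for (i = 0; i < d->n; ++i)
    @{
      /* f_i = A exp(-lambda t_i) - y_i */
      double Yi = A * exp (-lambda * d->t[i]);
      gsl_vector_set (f, i, Yi - d->y[i]);
    @}

  return GSL_SUCCESS;
@}

gsl_multifit_nlinear_fdf fdf;

fdf.f = expb_f;
fdf.df = NULL;   /* Jacobian from finite differences */
fdf.fvv = NULL;  /* geodesic acceleration not used */
fdf.n = n;
fdf.p = 2;
fdf.params = &mydata;
@end example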
@deftp {Data Type} gsl_multilarge_nlinear_fdf
This data type defines a general system of functions with arbitrary parameters,
a function to compute @math{J u} or @math{J^T u} for a given vector @math{u},
the normal equations matrix @math{J^T J},
and optionally the second directional derivative of the functions for geodesic acceleration.
@table @code
@item int (* f) (const gsl_vector * @var{x}, void * @var{params}, gsl_vector * @var{f})
This function should store the @math{n} components of the vector
@c{$f(x)$}
@math{f(x)} in @var{f} for argument @var{x} and arbitrary parameters @var{params},
returning an appropriate error code if the function cannot be computed.
@item int (* df) (CBLAS_TRANSPOSE_t @var{TransJ}, const gsl_vector * @var{x}, const gsl_vector * @var{u}, void * @var{params}, gsl_vector * @var{v}, gsl_matrix * @var{JTJ})
If @var{TransJ} is equal to @code{CblasNoTrans}, then this function should
compute the matrix-vector product @math{J u} and store the result in @var{v}.
If @var{TransJ} is equal to @code{CblasTrans}, then this function should
compute the matrix-vector product @math{J^T u} and store the result in @var{v}.
Additionally, the normal equations matrix @math{J^T J} should be stored in the
lower half of @var{JTJ}. The input matrix @var{JTJ} could be set to NULL,
for example by iterative methods which do not require this matrix, so the user
should check for this prior to constructing the matrix.
The input @var{params} contains the arbitrary parameters.
@item int (* fvv) (const gsl_vector * @var{x}, const gsl_vector * @var{v}, void * @var{params}, gsl_vector * @var{fvv})
When geodesic acceleration is enabled, this function should store the
@math{n} components of the vector
@math{f_{vv}(x) = \sum_{\alpha\beta} v_{\alpha} v_{\beta} {\partial \over \partial x_{\alpha}} {\partial \over \partial x_{\beta}} f(x)},
representing second directional derivatives of the function to be minimized,
into the output @var{fvv}. The parameter vector is provided in @var{x} and
the velocity vector is provided in @var{v}, both of which have @math{p}
components. The arbitrary parameters are given in @var{params}. If
analytic expressions for @math{f_{vv}(x)} are unavailable or too difficult
to compute, this function pointer may be set to NULL, in which case
@math{f_{vv}(x)} will be computed internally using a finite difference
approximation.
@item size_t n
the number of functions, i.e. the number of components of the
vector @var{f}.
@item size_t p
the number of independent variables, i.e. the number of components of
the vector @var{x}.
@item void * params
a pointer to the arbitrary parameters of the function.
@item size_t nevalf
This does not need to be set by the user. It counts the number of
function evaluations and is initialized by the @code{_init} function.
@item size_t nevaldfu
This does not need to be set by the user. It counts the number of
Jacobian matrix-vector evaluations (@math{J u} or @math{J^T u}) and
is initialized by the @code{_init} function.
@item size_t nevaldf2
This does not need to be set by the user. It counts the number of
@math{J^T J} evaluations and is initialized by the @code{_init} function.
@item size_t nevalfvv
This does not need to be set by the user. It counts the number of
@math{f_{vv}(x)} evaluations and is initialized by the @code{_init} function.
@end table
@end deftp
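As a rough sketch, a @code{df} callback for a small dense problem,
where purely for illustration an explicit Jacobian matrix is kept in
@var{params}, might be written as follows; large applications would
normally apply @math{J} implicitly instead:
@example
typedef struct @{ gsl_matrix *J; @} jac_params;

int
large_df (CBLAS_TRANSPOSE_t TransJ, const gsl_vector * x,
          const gsl_vector * u, void * params, gsl_vector * v,
          gsl_matrix * JTJ)
@{
  jac_params *par = (jac_params *) params;

  (void) x; /* J is held constant in this sketch */

  /* v = J u (length n) or v = J^T u (length p) */
  gsl_blas_dgemv (TransJ, 1.0, par->J, u, 0.0, v);

  /* lower half of the normal equations matrix, when requested */
  if (JTJ != NULL)
    gsl_blas_dsyrk (CblasLower, CblasTrans, 1.0, par->J, 0.0, JTJ);

  return GSL_SUCCESS;
@}
@end example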
@noindent
Note that when fitting a non-linear model against experimental data,
the data is passed to the functions above using the
@var{params} argument and the trial best-fit parameters through the
@var{x} argument.
@node Nonlinear Least-Squares Iteration
@section Iteration
The following functions drive the iteration of each algorithm. Each
function performs one iteration of the trust region method and updates
the state of the solver.
@deftypefun int gsl_multifit_nlinear_iterate (gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx int gsl_multilarge_nlinear_iterate (gsl_multilarge_nlinear_workspace * @var{w})
These functions perform a single iteration of the solver @var{w}. If
the iteration encounters an unexpected problem then an error code will
be returned. The solver workspace maintains a current estimate of the
best-fit parameters at all times.
@end deftypefun
@noindent
The solver workspace @var{w} contains the following entries, which can
be used to track the progress of the solution:
@table @code
@item gsl_vector * x
The current position, length @math{p}.
@item gsl_vector * f
The function residual vector at the current position @math{f(x)}, length
@math{n}.
@item gsl_matrix * J
The Jacobian matrix at the current position @math{J(x)}, size
@math{n}-by-@math{p} (only for @code{gsl_multifit_nlinear} interface).
@item gsl_vector * dx
The difference between the current position and the previous position,
i.e. the last step @math{\delta}, taken as a vector, length @math{p}.
@end table
@noindent
These quantities can be accessed with the following functions,
@deftypefun {gsl_vector *} gsl_multifit_nlinear_position (const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx {gsl_vector *} gsl_multilarge_nlinear_position (const gsl_multilarge_nlinear_workspace * @var{w})
These functions return the current position @math{x} (i.e. best-fit
parameters) of the solver @var{w}.
@end deftypefun
@deftypefun {gsl_vector *} gsl_multifit_nlinear_residual (const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx {gsl_vector *} gsl_multilarge_nlinear_residual (const gsl_multilarge_nlinear_workspace * @var{w})
These functions return the current residual vector @math{f(x)} of the
solver @var{w}. For weighted systems, the residual vector includes the
weighting factor @math{\sqrt{W}}.
@end deftypefun
@deftypefun {gsl_matrix *} gsl_multifit_nlinear_jac (const gsl_multifit_nlinear_workspace * @var{w})
This function returns a pointer to the @math{n}-by-@math{p} Jacobian matrix for the
current iteration of the solver @var{w}. This function is available only for the
@code{gsl_multifit_nlinear} interface.
@end deftypefun
@deftypefun size_t gsl_multifit_nlinear_niter (const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx size_t gsl_multilarge_nlinear_niter (const gsl_multilarge_nlinear_workspace * @var{w})
These functions return the number of iterations performed so far.
The iteration counter is updated on each call to the
@code{_iterate} functions above, and reset to 0 in the
@code{_init} functions.
@end deftypefun
@deftypefun int gsl_multifit_nlinear_rcond (double * @var{rcond}, const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx int gsl_multilarge_nlinear_rcond (double * @var{rcond}, const gsl_multilarge_nlinear_workspace * @var{w})
These functions estimate the reciprocal condition number
of the Jacobian matrix at the current position @math{x} and
store it in @var{rcond}. The computed value is only an estimate
to give the user a guideline as to the conditioning of their particular
problem. Its calculation is based on which factorization
method is used (Cholesky, QR, or SVD).
@itemize @bullet
@item For the Cholesky solver, the matrix @math{J^T J} is factored at each
iteration. Therefore this function will estimate the 1-norm condition number
@math{rcond^2 = 1/(||J^T J||_1 \cdot ||(J^T J)^{-1}||_1)}
@item For the QR solver, @math{J} is factored as @math{J = Q R} at each
iteration. For simplicity, this function calculates the 1-norm conditioning of
only the @math{R} factor, @math{rcond = 1 / (||R||_1 \cdot ||R^{-1}||_1)}.
This can be computed efficiently since @math{R} is upper triangular.
@item For the SVD solver, in order to efficiently solve the trust region
subproblem, the matrix which is factored is @math{J D^{-1}}, instead of
@math{J} itself. The resulting singular values are used to provide
the 2-norm reciprocal condition number, as @math{rcond = \sigma_{min} / \sigma_{max}}.
Note that when using Mor@'e scaling, @math{D \ne I} and the resulting
@var{rcond} estimate may be significantly different from the true
@var{rcond} of @math{J} itself.
@end itemize
@end deftypefun
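For example, the conditioning could be monitored during the iteration
as follows:
@example
double rcond;

gsl_multifit_nlinear_rcond (&rcond, w);
fprintf (stderr, "reciprocal condition number = %g\n", rcond);
@end example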
@node Nonlinear Least-Squares Testing for Convergence
@section Testing for Convergence
@cindex nonlinear fitting, stopping parameters, convergence
A minimization procedure should stop when one of the following conditions is
true:
@itemize @bullet
@item
A minimum has been found to within the user-specified precision.
@item
A user-specified maximum number of iterations has been reached.
@item
An error has occurred.
@end itemize
@noindent
The handling of these conditions is under user control. The functions
below allow the user to test the current estimate of the best-fit
parameters in several standard ways.
@deftypefun int gsl_multifit_nlinear_test (const double @var{xtol}, const double @var{gtol}, const double @var{ftol}, int * @var{info}, const gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx int gsl_multilarge_nlinear_test (const double @var{xtol}, const double @var{gtol}, const double @var{ftol}, int * @var{info}, const gsl_multilarge_nlinear_workspace * @var{w})
These functions test for convergence of the minimization method
using the following criteria:
@itemize @bullet
@item
Testing for a small step size relative to the current parameter vector
@tex
\beforedisplay
$$
|\delta_i| \le xtol (|x_i| + xtol)
$$
\afterdisplay
@end tex
@ifinfo
@example
|\delta_i| <= xtol (|x_i| + xtol)
@end example
@end ifinfo
for each @math{0 <= i < p}. Each element of the step vector @math{\delta}
is tested individually in case the different parameters have widely
different scales. Adding @var{xtol} to @math{|x_i|} helps the test avoid
breaking down in situations where the true solution value @math{x_i = 0}.
If this test succeeds, @var{info} is set to 1 and the function
returns @code{GSL_SUCCESS}.
A general guideline for selecting the step tolerance is to choose
@math{xtol = 10^{-d}} where @math{d} is the number of accurate
decimal digits desired in the solution @math{x}. See Dennis and
Schnabel for more information.
@item
Testing for a small gradient (@math{g = \nabla \Phi(x) = J^T f})
indicating a local function minimum:
@tex
\beforedisplay
$$
max_i |g_i \times max(x_i, 1)| \le gtol \times max(\Phi(x), 1)
$$
\afterdisplay
@end tex
@ifinfo
@example
max_i |g_i * max(x_i, 1)| <= gtol * max(\Phi(x), 1)
@end example
@end ifinfo
This expression tests whether the ratio
@math{(\nabla \Phi)_i x_i / \Phi} is small. Testing this scaled gradient
is better than testing @math{\nabla \Phi} alone, since it is a dimensionless
quantity and so independent of the scale of the problem. The
@code{max} arguments help ensure the test doesn't break down in
regions where @math{x_i} or @math{\Phi(x)} are close to 0.
If this test succeeds, @var{info} is set to 2 and the function
returns @code{GSL_SUCCESS}.
A general guideline for choosing the gradient tolerance is to set
@code{gtol = GSL_DBL_EPSILON^(1/3)}. See Dennis and Schnabel for
more information.
@end itemize
If none of the tests succeed, @var{info} is set to 0 and the
function returns @code{GSL_CONTINUE}, indicating further iterations
are required.
@end deftypefun
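These tests are typically combined with @code{gsl_multifit_nlinear_iterate}
in a manual iteration loop. The following is a minimal sketch, assuming the
workspace @var{w} has already been initialized with
@code{gsl_multifit_nlinear_init}; the tolerances and iteration limit are
illustrative only:
@example
int status, info;
size_t iter = 0;
const size_t maxiter = 100;    /* illustrative limit */

do
  @{
    /* take one step of the solver */
    status = gsl_multifit_nlinear_iterate (w);
    if (status)
      break;                   /* e.g. GSL_ENOPROG */

    /* test for a small step size or a small gradient */
    status = gsl_multifit_nlinear_test (1.0e-8, 1.0e-8, 0.0,
                                        &info, w);
  @}
while (status == GSL_CONTINUE && ++iter < maxiter);
@end example
The high level driver described in the next section packages exactly this
loop for convenience.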
@node Nonlinear Least-Squares High Level Driver
@section High Level Driver
These routines provide a high level wrapper that combines the iteration
and convergence testing for easy use.
@deftypefun int gsl_multifit_nlinear_driver (const size_t @var{maxiter}, const double @var{xtol}, const double @var{gtol}, const double @var{ftol}, void (* @var{callback})(const size_t @var{iter}, void * @var{params}, const gsl_multifit_nlinear_workspace * @var{w}), void * @var{callback_params}, int * @var{info}, gsl_multifit_nlinear_workspace * @var{w})
@deftypefunx int gsl_multilarge_nlinear_driver (const size_t @var{maxiter}, const double @var{xtol}, const double @var{gtol}, const double @var{ftol}, void (* @var{callback})(const size_t @var{iter}, void * @var{params}, const gsl_multilarge_nlinear_workspace * @var{w}), void * @var{callback_params}, int * @var{info}, gsl_multilarge_nlinear_workspace * @var{w})
These functions iterate the nonlinear least squares solver @var{w} for a
maximum of @var{maxiter} iterations. After each iteration, the system is
tested for convergence with the error tolerances @var{xtol}, @var{gtol} and @var{ftol}.
Additionally, the user may supply a callback function @var{callback}
which is called after each iteration, so that the user may save or print
relevant quantities for each iteration. The parameter @var{callback_params}
is passed to the @var{callback} function. The parameters @var{callback}
and @var{callback_params} may be set to @code{NULL} to disable this feature.
Upon successful convergence, the function returns @code{GSL_SUCCESS}
and sets @var{info} to the reason for convergence (see
@code{gsl_multifit_nlinear_test}). If the function has not
converged after @var{maxiter} iterations, @code{GSL_EMAXITER} is
returned. In rare cases, during an iteration the algorithm may
be unable to find a new acceptable step @math{\delta} to take. In
this case, @code{GSL_ENOPROG} is returned indicating no further
progress can be made. If your problem is having difficulty converging,
see @ref{Nonlinear Least-Squares Troubleshooting} for further guidance.
@end deftypefun
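A minimal sketch of calling the driver, assuming an initialized workspace
@var{w}; the tolerances and iteration limit are illustrative only:
@example
int info;
int status = gsl_multifit_nlinear_driver (100, 1.0e-8, 1.0e-8, 0.0,
                                          NULL, NULL, &info, w);

if (status == GSL_SUCCESS)
  printf ("converged, info = %d\n", info);  /* 1: small step, 2: small gradient */
else if (status == GSL_EMAXITER)
  printf ("iteration limit reached\n");
else if (status == GSL_ENOPROG)
  printf ("no further progress possible\n");
@end example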
@node Nonlinear Least-Squares Covariance Matrix
@section Covariance matrix of best fit parameters
@cindex best-fit parameters, covariance
@cindex least squares, covariance of best-fit parameters
@cindex covariance matrix, nonlinear fits
@deftypefun int gsl_multifit_nlinear_covar (const gsl_matrix * @var{J}, const double @var{epsrel}, gsl_matrix * @var{covar})
@deftypefunx int gsl_multilarge_nlinear_covar (gsl_matrix * @var{covar}, gsl_multilarge_nlinear_workspace * @var{w})
This function computes the covariance matrix of best-fit parameters
using the Jacobian matrix @var{J} and stores it in @var{covar}.
The parameter @var{epsrel} is used to remove linearly dependent columns
when @var{J} is rank deficient.
The covariance matrix is given by,
@tex
\beforedisplay
$$
C = (J^T J)^{-1}
$$
\afterdisplay
@end tex
@ifinfo
@example
covar = (J^T J)^@{-1@}
@end example
@end ifinfo
or in the weighted case,
@tex
\beforedisplay
$$
C = (J^T W J)^{-1}
$$
\afterdisplay
@end tex
@ifinfo
@example
covar = (J^T W J)^@{-1@}
@end example
@end ifinfo
@noindent
and is computed using the factored form of the Jacobian (Cholesky, QR, or SVD).
Any columns of @math{R} which satisfy
@tex
\beforedisplay
$$
|R_{kk}| \leq epsrel |R_{11}|
$$
\afterdisplay
@end tex
@ifinfo
@example
|R_@{kk@}| <= epsrel |R_@{11@}|
@end example
@end ifinfo
@noindent
are considered linearly-dependent and are excluded from the covariance
matrix (the corresponding rows and columns of the covariance matrix are
set to zero).
If the minimization uses the weighted least-squares function
@math{f_i = (Y(x, t_i) - y_i) / \sigma_i} then the covariance
matrix above gives the statistical error on the best-fit parameters
resulting from the Gaussian errors @math{\sigma_i} on
the underlying data @math{y_i}. This can be verified from the relation
@math{\delta f = J \delta x} and the fact that the fluctuations in @math{f}
from the data @math{y_i} are normalized by @math{\sigma_i} and
so satisfy @c{$\langle \delta f \delta f^T \rangle = I$}
@math{<\delta f \delta f^T> = I}.
For an unweighted least-squares function @math{f_i = (Y(x, t_i) -
y_i)} the covariance matrix above should be multiplied by the variance
of the residuals about the best-fit @math{\sigma^2 = \sum (y_i - Y(x,t_i))^2 / (n-p)}
to give the variance-covariance
matrix @math{\sigma^2 C}. This estimates the statistical error on the
best-fit parameters from the scatter of the underlying data.
For more information about covariance matrices see @ref{Fitting Overview}.
@end deftypefun
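A minimal sketch of estimating the parameter uncertainties after a
successful weighted fit (the workspace @var{w} and parameter count
@code{p} are assumed; for an unweighted fit, scale the result by the
residual variance as discussed above):
@example
gsl_matrix *J = gsl_multifit_nlinear_jac (w);  /* owned by w; do not free */
gsl_matrix *covar = gsl_matrix_alloc (p, p);
size_t j;

gsl_multifit_nlinear_covar (J, 0.0, covar);

for (j = 0; j < p; ++j)
  @{
    /* error on parameter j from the diagonal of C */
    double err = sqrt (gsl_matrix_get (covar, j, j));
    printf ("x_%zu = %g +/- %g\n", j,
            gsl_vector_get (gsl_multifit_nlinear_position (w), j), err);
  @}

gsl_matrix_free (covar);
@end example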
@node Nonlinear Least-Squares Troubleshooting
@section Troubleshooting
When developing a code to solve a nonlinear least squares problem,
here are a few considerations to keep in mind.
@enumerate
@item
The most common difficulty is an inaccurate implementation of the Jacobian
matrix. If the analytic Jacobian is not properly supplied to the
solver, this can hinder, and often prevent, convergence of the method.
When developing a new nonlinear least squares code, it often helps
to compare the user-supplied analytic Jacobian against one computed
by finite differences. If there is a large difference in any entry,
it is likely the analytic Jacobian is incorrectly implemented; a sketch
of such a check is given after this list.
@item
If your code is having difficulty converging, the next thing to
check is the starting point provided to the solver. The methods
of this chapter are local methods, meaning if you provide a starting
point far away from the true minimum, the method may converge to
a local minimum or not converge at all. Sometimes it is possible
to solve a linearized approximation to the nonlinear problem,
and use the linear solution as the starting point to the nonlinear
problem.
@item
If the parameters of the coefficient vector @math{x}
vary widely in magnitude, then the problem is said to be badly scaled.
The methods of this chapter attempt to automatically rescale
the elements of @math{x} to have roughly the same order of magnitude,
but in extreme cases this can still cause problems for convergence.
In such cases it is recommended that the user scale the
parameter vector @math{x} so that each parameter spans roughly the
same range, say @math{[-1,1]}. The solution vector can then be backscaled
to recover the original units of the problem.
@end enumerate
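The following sketch illustrates the Jacobian comparison mentioned in the
first item, using a simple forward-difference approximation. The helper
@code{check_jacobian} is hypothetical and not part of GSL; the callbacks
@code{fdf_f} and @code{fdf_df} are assumed to follow the
@code{gsl_multifit_nlinear_fdf} conventions:
@example
#include <math.h>
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>

/* hypothetical helper: compare an analytic Jacobian against forward
   differences of the residual function; n residuals, p parameters */
static void
check_jacobian (int (*fdf_f) (const gsl_vector *, void *, gsl_vector *),
                int (*fdf_df) (const gsl_vector *, void *, gsl_matrix *),
                gsl_vector *x, void *params, size_t n, size_t p)
@{
  const double h = sqrt (GSL_DBL_EPSILON);
  gsl_matrix *J = gsl_matrix_alloc (n, p);
  gsl_vector *f0 = gsl_vector_alloc (n);
  gsl_vector *f1 = gsl_vector_alloc (n);
  size_t i, j;

  fdf_f (x, params, f0);
  fdf_df (x, params, J);

  for (j = 0; j < p; ++j)
    @{
      double xj = gsl_vector_get (x, j);
      double dx = h * (fabs (xj) + 1.0);

      gsl_vector_set (x, j, xj + dx);   /* perturb parameter j */
      fdf_f (x, params, f1);
      gsl_vector_set (x, j, xj);        /* restore */

      for (i = 0; i < n; ++i)
        @{
          double fd = (gsl_vector_get (f1, i) - gsl_vector_get (f0, i)) / dx;
          double an = gsl_matrix_get (J, i, j);

          if (fabs (fd - an) > 1.0e-4 * (fabs (an) + 1.0))
            fprintf (stderr, "J(%zu,%zu): analytic %g vs fd %g\n",
                     i, j, an, fd);
        @}
    @}

  gsl_matrix_free (J);
  gsl_vector_free (f0);
  gsl_vector_free (f1);
@}
@end example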
@node Nonlinear Least-Squares Examples
@section Examples
The following example programs demonstrate the nonlinear least
squares fitting capabilities.
@menu
* Nonlinear Least-Squares Exponential Fit Example::
* Nonlinear Least-Squares Geodesic Acceleration Example::
* Nonlinear Least-Squares Comparison Example::
* Nonlinear Least-Squares Large Example::
@end menu
@node Nonlinear Least-Squares Exponential Fit Example
@subsection Exponential Fitting Example
The following example program fits a weighted exponential model with
background to experimental data, @math{Y = A \exp(-\lambda t) + b}. The
first part of the program sets up the functions @code{expb_f} and
@code{expb_df} to calculate the model and its Jacobian. The appropriate
fitting function is given by,
@tex
\beforedisplay
$$
f_i = (A \exp(-\lambda t_i) + b) - y_i
$$
\afterdisplay
@end tex
@ifinfo
@example
f_i = (A \exp(-\lambda t_i) + b) - y_i
@end example
@end ifinfo
@noindent
where we have chosen @math{t_i = i}. The Jacobian matrix @math{J} is
the derivative of these functions with respect to the three parameters
(@math{A}, @math{\lambda}, @math{b}). It is given by,
@tex
\beforedisplay
$$
J_{ij} = {\partial f_i \over \partial x_j}
$$
\afterdisplay
@end tex
@ifinfo
@example
J_@{ij@} = d f_i / d x_j
@end example
@end ifinfo
@noindent
where @math{x_0 = A}, @math{x_1 = \lambda} and @math{x_2 = b}.
The @math{i}th row of the Jacobian is therefore
@tex
\beforedisplay
$$
J_{i\cdot} = \left(
\matrix{
\exp(-\lambda t_i) & -t_i A \exp(-\lambda t_i) & 1
}
\right)
$$
\afterdisplay
@end tex
@ifinfo
@example
J_i = [ exp(-lambda t_i) , -t_i A exp(-lambda t_i) , 1 ]
@end example
@end ifinfo
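In the program these rows are filled in by the Jacobian callback. A sketch
following the @code{gsl_multifit_nlinear_fdf} conventions is shown below;
the full program included at the end of this section uses its own
conventions for passing the data:
@example
int
expb_df (const gsl_vector *x, void *params, gsl_matrix *J)
@{
  const double A = gsl_vector_get (x, 0);
  const double lambda = gsl_vector_get (x, 1);
  size_t i;

  for (i = 0; i < J->size1; ++i)
    @{
      double t = (double) i;           /* t_i = i */
      double e = exp (-lambda * t);

      gsl_matrix_set (J, i, 0, e);            /* df_i / dA      */
      gsl_matrix_set (J, i, 1, -t * A * e);   /* df_i / dlambda */
      gsl_matrix_set (J, i, 2, 1.0);          /* df_i / db      */
    @}

  return GSL_SUCCESS;
@}
@end example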
@noindent
The main part of the program sets up a Levenberg-Marquardt solver and
some simulated random data. The data uses the known parameters
(5.0,0.1,1.0) combined with Gaussian noise (standard deviation = 0.1)
over a range of 40 timesteps. The initial guess for the parameters is
chosen as (1.0, 1.0, 0.0). The iteration terminates when the relative
change in x is smaller than @math{10^{-8}}, or when the magnitude of
the gradient falls below @math{10^{-8}}. Here are the results of running
the program:
@smallexample
iter 0: A = 1.0000, lambda = 1.0000, b = 0.0000, cond(J) = inf, |f(x)| = 62.2029
iter 1: A = 1.2196, lambda = 0.3663, b = 0.0436, cond(J) = 53.6368, |f(x)| = 59.8062
iter 2: A = 1.6062, lambda = 0.1506, b = 0.1054, cond(J) = 23.8178, |f(x)| = 53.9039
iter 3: A = 2.4528, lambda = 0.0583, b = 0.2470, cond(J) = 20.0493, |f(x)| = 28.8039
iter 4: A = 2.9723, lambda = 0.0494, b = 0.3727, cond(J) = 94.5601, |f(x)| = 15.3252
iter 5: A = 3.3473, lambda = 0.0477, b = 0.4410, cond(J) = 229.3627, |f(x)| = 10.7511
iter 6: A = 3.6690, lambda = 0.0508, b = 0.4617, cond(J) = 298.3589, |f(x)| = 9.7373
iter 7: A = 3.9907, lambda = 0.0580, b = 0.5433, cond(J) = 250.0194, |f(x)| = 8.7661
iter 8: A = 4.2353, lambda = 0.0731, b = 0.7989, cond(J) = 154.8571, |f(x)| = 7.4299
iter 9: A = 4.6573, lambda = 0.0958, b = 1.0302, cond(J) = 140.2265, |f(x)| = 6.1893
iter 10: A = 5.0138, lambda = 0.1060, b = 1.0329, cond(J) = 109.4141, |f(x)| = 5.4961
iter 11: A = 5.1505, lambda = 0.1103, b = 1.0497, cond(J) = 100.8762, |f(x)| = 5.4552
iter 12: A = 5.1724, lambda = 0.1110, b = 1.0526, cond(J) = 97.3403, |f(x)| = 5.4542
iter 13: A = 5.1737, lambda = 0.1110, b = 1.0528, cond(J) = 96.7136, |f(x)| = 5.4542
iter 14: A = 5.1738, lambda = 0.1110, b = 1.0528, cond(J) = 96.6678, |f(x)| = 5.4542
iter 15: A = 5.1738, lambda = 0.1110, b = 1.0528, cond(J) = 96.6663, |f(x)| = 5.4542
iter 16: A = 5.1738, lambda = 0.1110, b = 1.0528, cond(J) = 96.6663, |f(x)| = 5.4542
summary from method 'trust-region/levenberg-marquardt'
number of iterations: 16
function evaluations: 23
Jacobian evaluations: 17
reason for stopping: small step size
initial |f(x)| = 62.202928
final |f(x)| = 5.454180
chisq/dof = 0.804002
A = 5.17379 +/- 0.27938
lambda = 0.11104 +/- 0.00817
b = 1.05283 +/- 0.05365
status = success
@end smallexample
@noindent
The approximate values of the parameters are found correctly, and the
chi-squared value indicates a good fit (the chi-squared per degree of
freedom is approximately 1). In this case the errors on the parameters
can be estimated from the square roots of the diagonal elements of the
covariance matrix. If the chi-squared value shows a poor fit (i.e.
@c{$\chi^2/(n-p) \gg 1$}
@math{chi^2/dof >> 1}) then the error estimates obtained from the
covariance matrix will be too small. In the example program the error estimates
are therefore multiplied by @c{$\sqrt{\chi^2/(n-p)}$}
@math{\sqrt@{\chi^2/dof@}}, a common way of increasing the
errors for a poor fit. Note that a poor fit will result from the use
of an inappropriate model, and the scaled error estimates may then
be outside the range of validity for Gaussian errors.
@noindent
Additionally, we see that the condition number of @math{J(x)} stays
reasonably small throughout the iteration. This indicates that we could
safely switch to the Cholesky solver for a speed improvement,
although this particular system is too small to benefit significantly.
@iftex
@sp 1
@center @image{fit-exp,3.4in}
@end iftex
@noindent
The program is given below.
@example
@verbatiminclude examples/nlfit.c
@end example
@node Nonlinear Least-Squares Geodesic Acceleration Example
@subsection Geodesic Acceleration Example
The following example program minimizes a modified Rosenbrock function,
which is characterized by a narrow canyon with steep walls. The
starting point is selected high on the canyon wall, so the solver
must first find the canyon bottom and then navigate to the minimum.
The problem is solved both with and without using geodesic acceleration
for comparison. The cost function is given by
@tex
\beforedisplay
$$
\eqalign{
\Phi(x) &= {1 \over 2} (f_1^2 + f_2^2) \cr
f_1 &= 100 \left( x_2 - x_1^2 \right) \cr
f_2 &= 1 - x_1
}
$$
\afterdisplay
@end tex
@ifinfo
@example
Phi(x) = 1/2 (f1^2 + f2^2)
f1 = 100 ( x2 - x1^2 )
f2 = 1 - x1
@end example
@end ifinfo
@noindent
The Jacobian matrix is given by
@tex
\beforedisplay
$$
J =
\left(
\matrix{
{\partial f_1 \over \partial x_1} & {\partial f_1 \over \partial x_2} \cr
{\partial f_2 \over \partial x_1} & {\partial f_2 \over \partial x_2}
}
\right) =
\left(
\matrix{
-200 x_1 & 100 \cr
-1 & 0
}
\right)
$$
\afterdisplay
@end tex
@ifinfo
@example
J = [ -200*x1 100 ; -1 0 ]
@end example
@end ifinfo
@noindent
In order to use geodesic acceleration, the user must provide
the second directional derivative of each residual in the
velocity direction,
@math{D_v^2 f_i = \sum_{\alpha\beta} v_{\alpha} v_{\beta} \partial_{\alpha} \partial_{\beta} f_i}.
The velocity vector @math{v} is provided by the solver. For this example,
these derivatives are given by
@tex
\beforedisplay
$$
f_{vv} =
D_v^2
\left(
\matrix{
f_1 \cr
f_2
}
\right) =
\left(
\matrix{
-200 v_1^2 \cr
0
}
\right)
$$
\afterdisplay
@end tex
@ifinfo
@example
fvv = [ -200 v1^2 ; 0 ]
@end example
@end ifinfo
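In code, this second directional derivative is supplied through the
@code{fvv} callback of @code{gsl_multifit_nlinear_fdf}. A minimal sketch for
this problem (the unused arguments are kept to match the callback
signature):
@example
int
func_fvv (const gsl_vector *x, const gsl_vector *v,
          void *params, gsl_vector *fvv)
@{
  double v1 = gsl_vector_get (v, 0);

  /* only f_1 has nonzero curvature: d^2 f_1 / dx_1^2 = -200 */
  gsl_vector_set (fvv, 0, -200.0 * v1 * v1);
  gsl_vector_set (fvv, 1, 0.0);

  return GSL_SUCCESS;
@}
@end example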
@noindent
The solution of this minimization problem is given by
@tex
\beforedisplay
$$
\eqalign{
x^{*} &= \left(
\matrix{
1 \cr
1
}
\right) \cr
\Phi(x^{*}) &= 0
}
$$
\afterdisplay
@end tex
@ifinfo
@example
x* = [ 1 ; 1 ]
Phi(x*) = 0
@end example
@end ifinfo
@noindent
The program output is shown below.
@smallexample
=== Solving system without acceleration ===
NITER = 53
NFEV = 56
NJEV = 54
NAEV = 0
initial cost = 2.250225000000e+04
final cost = 6.674986031430e-18
final x = (9.999999974165e-01, 9.999999948328e-01)
final cond(J) = 6.000096055094e+02
=== Solving system with acceleration ===
NITER = 15
NFEV = 17
NJEV = 16
NAEV = 16
initial cost = 2.250225000000e+04
final cost = 7.518932873279e-19
final x = (9.999999991329e-01, 9.999999982657e-01)
final cond(J) = 6.000097233278e+02
@end smallexample
@noindent
We can see that enabling geodesic acceleration requires fewer
than a third as many Jacobian evaluations to locate
the minimum. The paths taken by both methods are shown in the
figure below. The contours show the cost function
@math{\Phi(x_1,x_2)}. We see that both methods quickly
find the canyon bottom, but the geodesic acceleration method
navigates along the bottom to the solution with significantly
fewer iterations.
@iftex
@sp 1
@center @image{nlfit2,6in}
@end iftex
@noindent
The program is given below.
@example
@verbatiminclude examples/nlfit2.c
@end example
@node Nonlinear Least-Squares Comparison Example
@subsection Comparing TRS Methods Example
The following program compares all available nonlinear least squares
trust-region subproblem (TRS) methods on the Branin function, a common
optimization test problem. The cost function is given by
@tex
\beforedisplay
$$
\eqalign{
\Phi(x) &= {1 \over 2} (f_1^2 + f_2^2) \cr
f_1 &= x_2 + a_1 x_1^2 + a_2 x_1 + a_3 \cr
f_2 &= \sqrt{a_4} \sqrt{1 + (1 - a_5) \cos{x_1}}
}
$$
\afterdisplay
@end tex
@ifinfo
@example
Phi(x) = 1/2 (f_1^2 + f_2^2)
f_1 = x_2 + a_1 x_1^2 + a_2 x_1 + a_3
f_2 = sqrt(a_4) sqrt(1 + (1 - a_5) cos(x_1))
@end example
@end ifinfo
with @math{a_1 = -{5.1 \over 4 \pi^2}, a_2 = {5 \over \pi}, a_3 = -6, a_4 = 10, a_5 = {1 \over 8\pi}}.
There are three minima of this function in the range
@math{(x_1,x_2) \in [-5,15] \times [-5,15]}. The program
below uses the starting point @math{(x_1,x_2) = (6,14.5)}
and calculates the solution with all available nonlinear
least squares TRS methods. Each method is selected through the
@code{trs} field of the @code{gsl_multifit_nlinear_parameters}
struct, as sketched below; the program output follows.
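The method selection itself is only a few lines; a minimal sketch, assuming
problem dimensions @code{n} and @code{p} and leaving the rest of the setup
unchanged:
@example
gsl_multifit_nlinear_workspace *w;
gsl_multifit_nlinear_parameters fdf_params =
  gsl_multifit_nlinear_default_parameters ();

/* select the TRS method, e.g. dogleg; other choices include
   gsl_multifit_nlinear_trs_lm, gsl_multifit_nlinear_trs_lmaccel,
   gsl_multifit_nlinear_trs_ddogleg, gsl_multifit_nlinear_trs_subspace2D */
fdf_params.trs = gsl_multifit_nlinear_trs_dogleg;

w = gsl_multifit_nlinear_alloc (gsl_multifit_nlinear_trust,
                                &fdf_params, n, p);
@end example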
@smallformat
@verbatim
Method NITER NFEV NJEV Initial Cost Final cost Final cond(J) Final x
levenberg-marquardt 20 27 21 1.9874e+02 3.9789e-01 6.1399e+07 (-3.14e+00, 1.23e+01)
levenberg-marquardt+accel 27 36 28 1.9874e+02 3.9789e-01 1.4465e+07 (3.14e+00, 2.27e+00)
dogleg 23 64 23 1.9874e+02 3.9789e-01 5.0692e+08 (3.14e+00, 2.28e+00)
double-dogleg 24 69 24 1.9874e+02 3.9789e-01 3.4879e+07 (3.14e+00, 2.27e+00)
2D-subspace 23 54 24 1.9874e+02 3.9789e-01 2.5142e+07 (3.14e+00, 2.27e+00)
@end verbatim
@end smallformat
@noindent
The first row of output above corresponds to standard Levenberg-Marquardt, while
the second row includes geodesic acceleration. We see that the standard LM method
converges to the minimum at @math{(-\pi,12.275)} and also uses the fewest
iterations and Jacobian evaluations. All other methods converge to the minimum
@math{(\pi,2.275)} and perform similarly in terms of number of Jacobian evaluations.
We see that @math{J} is fairly ill-conditioned
at both minima, indicating that the QR (or SVD) solver is the best choice for this problem.
Since there are only two parameters in this optimization problem, we can easily
visualize the paths taken by each method, which are shown in the figure below.
The figure shows contours of the cost function @math{\Phi(x_1,x_2)} which exhibits
three global minima in the range @math{[-5,15] \times [-5,15]}. The paths taken
by each solver are shown as colored lines.
@iftex
@sp 1
@center @image{nlfit3,6in}
@end iftex
@noindent
The program is given below.
@example
@verbatiminclude examples/nlfit3.c
@end example
@node Nonlinear Least-Squares Large Example
@subsection Large Nonlinear Least Squares Example
The following program illustrates the large nonlinear least
squares solvers on a system with significant sparse structure
in the Jacobian. The cost function is given by
@tex
\beforedisplay
$$
\eqalign{
\Phi(x) &= {1 \over 2} \sum_{i=1}^{p+1} f_i^2 \cr
f_i &= \sqrt{\alpha} (x_i - 1), \quad 1 \le i \le p \cr
f_{p+1} &= ||x||^2 - {1 \over 4}
}
$$
\afterdisplay
@end tex
@ifinfo
@example
Phi(x) = 1/2 sum_@{i=1@}^@{p+1@} f_i^2
f_i = sqrt(alpha) (x_i - 1),  1 <= i <= p
f_@{p+1@} = ||x||^2 - 1/4
@end ifinfo
with @math{\alpha = 10^{-5}}. The residual @math{f_{p+1}} imposes a constraint on the @math{p}
parameters @math{x}, to ensure that @math{||x||^2 \approx {1 \over 4}}.
The @math{(p+1)}-by-@math{p} Jacobian for this system is given by
@tex
\beforedisplay
$$
J(x) =
\left(
\matrix{
\sqrt{\alpha} I_p \cr
2 x^T
}
\right)
$$
\afterdisplay
@end tex
@ifinfo
@example
J(x) = [ \sqrt@{alpha@} I_p; 2 x^T ]
@end example
@end ifinfo
and the normal equations matrix is given by
@tex
\beforedisplay
$$
J^T J =
\alpha I_p + 4 x x^T
$$
\afterdisplay
@end tex
@ifinfo
@example
J^T J = alpha I_p + 4 x x^T
@end example
@end ifinfo
@noindent
Finally, the second directional derivative of @math{f} for the
geodesic acceleration method is given by
@tex
\beforedisplay
$$
f_{vv} = D_v^2 f =
\left(
\matrix{
0 \cr
2 ||v||^2
}
\right)
$$
\afterdisplay
@end tex
@ifinfo
@example
fvv = [ 0; 2 ||v||^2 ]
@end example
@end ifinfo
@noindent
Since the upper @math{p}-by-@math{p} block of @math{J} is diagonal,
this sparse structure should be exploited in the nonlinear solver.
For comparison, the following program solves the system for @math{p = 2000}
using the dense direct Cholesky solver based on the normal equations matrix
@math{J^T J}, as well as the iterative Steihaug-Toint solver, which relies
only on sparse matrix-vector products @math{J u} and @math{J^T u}. A sketch
of a matrix-free Jacobian callback for this structure is shown below,
followed by the program output.
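The hypothetical function @code{jac_mv} below follows the @code{df}
callback convention of @code{gsl_multilarge_nlinear_fdf}; the @code{NULL}
checks reflect that iterative solvers request only products, while the
Cholesky solver also requests the normal equations matrix. This is a
sketch, not the exact code of the included program:
@example
#include <math.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>

/* apply v = J u or v = J^T u for J = [ sqrt(alpha) I_p ; 2 x^T ]
   without forming J explicitly */
int
jac_mv (CBLAS_TRANSPOSE_t TransJ, const gsl_vector *x,
        const gsl_vector *u, void *params, gsl_vector *v,
        gsl_matrix *JTJ)
@{
  const size_t p = x->size;
  const double s = sqrt (1.0e-5);      /* sqrt(alpha) */
  size_t i;

  if (v != NULL && TransJ == CblasNoTrans)
    @{
      /* v = J u: v has length p + 1 */
      double dot = 0.0;
      for (i = 0; i < p; ++i)
        @{
          double ui = gsl_vector_get (u, i);
          gsl_vector_set (v, i, s * ui);
          dot += gsl_vector_get (x, i) * ui;
        @}
      gsl_vector_set (v, p, 2.0 * dot);
    @}
  else if (v != NULL)
    @{
      /* v = J^T u: v has length p */
      double up = gsl_vector_get (u, p);
      for (i = 0; i < p; ++i)
        gsl_vector_set (v, i, s * gsl_vector_get (u, i)
                              + 2.0 * up * gsl_vector_get (x, i));
    @}

  if (JTJ != NULL)
    @{
      /* normal equations matrix alpha I_p + 4 x x^T, needed only
         by the direct (Cholesky) solver */
      gsl_matrix_set_identity (JTJ);
      gsl_matrix_scale (JTJ, 1.0e-5);
      gsl_blas_dger (4.0, x, x, JTJ);
    @}

  return GSL_SUCCESS;
@}
@end example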
@smallformat
@verbatim
Method NITER NFEV NJUEV NJTJEV NAEV Init Cost Final cost cond(J) Final |x|^2 Time (s)
levenberg-marquardt 25 31 26 26 0 7.1218e+18 1.9555e-02 447.50 2.5044e-01 46.28
levenberg-marquardt+accel 22 23 45 23 22 7.1218e+18 1.9555e-02 447.64 2.5044e-01 33.92
dogleg 37 87 36 36 0 7.1218e+18 1.9555e-02 447.59 2.5044e-01 56.05
double-dogleg 35 88 34 34 0 7.1218e+18 1.9555e-02 447.62 2.5044e-01 52.65
2D-subspace 37 88 36 36 0 7.1218e+18 1.9555e-02 447.71 2.5044e-01 59.75
steihaug-toint 35 88 345 0 0 7.1218e+18 1.9555e-02 inf 2.5044e-01 0.09
@end verbatim
@end smallformat
@noindent
The first five rows use methods based on factoring the dense @math{J^T J} matrix
while the last row uses the iterative Steihaug-Toint method. While the number
of Jacobian matrix-vector products (NJUEV) is smaller for the dense methods, the added time
to construct and factor the @math{J^T J} matrix (NJTJEV) results in a much larger runtime than the
iterative method (see last column).
@noindent
The program is given below.
@example
@verbatiminclude examples/nlfit4.c
@end example
@node Nonlinear Least-Squares References and Further Reading
@section References and Further Reading
@noindent
The following publications are relevant to the algorithms described
in this section,
@itemize @w{}
@item
J.J. Mor@'e, @cite{The Levenberg-Marquardt Algorithm: Implementation and
Theory}, Lecture Notes in Mathematics, v630 (1978), ed G. Watson.
@item
H. B. Nielsen, ``Damping Parameter in Marquardt's Method'',
IMM Department of Mathematical Modeling, DTU, Tech. Report IMM-REP-1999-05
(1999).
@item
K. Madsen and H. B. Nielsen, ``Introduction to Optimization and Data
Fitting'', IMM Department of Mathematical Modeling, DTU, 2010.
@item
J. E. Dennis and R. B. Schnabel, @cite{Numerical Methods for Unconstrained
Optimization and Nonlinear Equations}, SIAM, 1996.
@item
M. K. Transtrum, B. B. Machta, and J. P. Sethna, ``Geometry of nonlinear
least squares with applications to sloppy models and optimization'',
Phys. Rev. E 83, 036701, 2011.
@item
M. K. Transtrum and J. P. Sethna, ``Improvements to the Levenberg-Marquardt
algorithm for nonlinear least-squares minimization'', arXiv:1201.5885, 2012.
@item
J.J. Mor@'e, B.S. Garbow, K.E. Hillstrom, ``Testing Unconstrained
Optimization Software'', ACM Transactions on Mathematical Software, Vol
7, No 1 (1981), p 17--41.
@item
H. B. Nielsen, ``UCTP Test Problems for Unconstrained Optimization'',
IMM Department of Mathematical Modeling, DTU, Tech. Report IMM-REP-2000-17
(2000).
@end itemize