Optimization (:mod:`scipy.optimize`)
====================================
.. sectionauthor:: Travis E. Oliphant
.. sectionauthor:: Pauli Virtanen
.. sectionauthor:: Denis Laxalde
.. currentmodule:: scipy.optimize
.. contents::
The :mod:`scipy.optimize` package provides several commonly used
optimization algorithms. A detailed listing is available:
:mod:`scipy.optimize` (can also be found by ``help(scipy.optimize)``).
Local minimization of multivariate scalar functions (:func:`minimize`)
----------------------------------------------------------------------
The :func:`minimize` function provides a common interface to unconstrained
and constrained minimization algorithms for multivariate scalar functions
in `scipy.optimize`. To demonstrate the minimization function, consider the
problem of minimizing the Rosenbrock function of :math:`N` variables:
.. math::
f\left(\mathbf{x}\right)=\sum_{i=1}^{N-1}100\left(x_{i+1}-x_{i}^{2}\right)^{2}+\left(1-x_{i}\right)^{2}.
The minimum value of this function is 0 which is achieved when
:math:`x_{i}=1.`
Note that the Rosenbrock function and its derivatives are included in
`scipy.optimize`. The implementations shown in the following sections
provide examples of how to define an objective function as well as its
Jacobian and Hessian functions. Objective functions in `scipy.optimize`
expect a numpy array as their first parameter, which is to be optimized,
and must return a float value. The exact calling signature must be
``f(x, *args)``, where ``x`` represents a numpy array and ``args``
a tuple of additional arguments supplied to the objective function.
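For instance, the ready-made implementations can be imported directly from
`scipy.optimize`; as a quick check, we can evaluate them at the known minimum:
>>> import numpy as np
>>> from scipy.optimize import rosen, rosen_der
>>> rosen(np.ones(5))       # the minimum value 0 is attained at x = [1, 1, ..., 1]
0.0
>>> rosen_der(np.ones(5))   # the gradient vanishes at the minimum
array([0., 0., 0., 0., 0.])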
Unconstrained minimization
^^^^^^^^^^^^^^^^^^^^^^^^^^
Nelder-Mead Simplex algorithm (``method='Nelder-Mead'``)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""
In the example below, the :func:`minimize` routine is used
with the *Nelder-Mead* simplex algorithm (selected through the ``method``
parameter):
>>> import numpy as np
>>> from scipy.optimize import minimize
>>> def rosen(x):
... """The Rosenbrock function"""
... return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> res = minimize(rosen, x0, method='nelder-mead',
... options={'xatol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 339
Function evaluations: 571
>>> print(res.x)
[1. 1. 1. 1. 1.]
The simplex algorithm is probably the simplest way to minimize a fairly
well-behaved function. It requires only function evaluations and is a good
choice for simple minimization problems. However, because it does not use
any gradient evaluations, it may take longer to find the minimum.
Another optimization algorithm that needs only function calls to find
the minimum is *Powell*'s method, which is available by setting ``method='powell'``
in :func:`minimize`.
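For example, a minimal sketch of Powell's method applied to the same Rosenbrock
problem (only convergence is checked here, since iteration counts vary between
SciPy versions):
>>> res = minimize(rosen, x0, method='powell', options={'xtol': 1e-8})
>>> res.success
True
>>> np.allclose(res.x, 1.0)
True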
To demonstrate how to supply additional arguments to an objective function,
let us minimize the Rosenbrock function with an additional scaling factor `a`
and an offset `b`:
.. math::
f\left(\mathbf{x}, a, b\right)=\sum_{i=1}^{N-1}a\left(x_{i+1}-x_{i}^{2}\right)^{2}+\left(1-x_{i}\right)^{2} + b.
Again using the :func:`minimize` routine this can be solved by the following
code block for the example parameters ``a=0.5`` and ``b=1``.
>>> def rosen_with_args(x, a, b):
... """The Rosenbrock function with additional arguments"""
... return sum(a*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0) + b
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> res = minimize(rosen_with_args, x0, method='nelder-mead',
... args=(0.5, 1.), options={'xatol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 1.000000
Iterations: 319 # may vary
Function evaluations: 525 # may vary
>>> print(res.x)
[1. 1. 1. 1. 0.99999999]
As an alternative to using the ``args`` parameter of :func:`minimize`, simply
wrap the objective function in a new function that accepts only ``x``. This
approach is also useful when it is necessary to pass additional parameters to
the objective function as keyword arguments.
>>> def rosen_with_args(x, a, *, b): # b is a keyword-only argument
... return sum(a*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0) + b
>>> def wrapped_rosen_without_args(x):
... return rosen_with_args(x, 0.5, b=1.) # pass in `a` and `b`
>>> x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
>>> res = minimize(wrapped_rosen_without_args, x0, method='nelder-mead',
... options={'xatol': 1e-8,})
>>> print(res.x)
[1. 1. 1. 1. 0.99999999]
Another alternative is to use :py:func:`functools.partial`.
>>> from functools import partial
>>> partial_rosen = partial(rosen_with_args, a=0.5, b=1.)
>>> res = minimize(partial_rosen, x0, method='nelder-mead',
... options={'xatol': 1e-8,})
>>> print(res.x)
[1. 1. 1. 1. 0.99999999]
Broyden-Fletcher-Goldfarb-Shanno algorithm (``method='BFGS'``)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
In order to converge more quickly to the solution, this routine uses
the gradient of the objective function. If the gradient is not given
by the user, then it is estimated using first-differences. The
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires
fewer function calls than the simplex algorithm even when the gradient
must be estimated.
To demonstrate this algorithm, the Rosenbrock function is again used.
The gradient of the Rosenbrock function is the vector:
.. math::
:nowrap:
\begin{eqnarray*} \frac{\partial f}{\partial x_{j}} & = & \sum_{i=1}^{N}200\left(x_{i}-x_{i-1}^{2}\right)\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-2\left(1-x_{i-1}\right)\delta_{i-1,j}.\\ & = & 200\left(x_{j}-x_{j-1}^{2}\right)-400x_{j}\left(x_{j+1}-x_{j}^{2}\right)-2\left(1-x_{j}\right).\end{eqnarray*}
This expression is valid for the interior derivatives. Special cases
are
.. math::
:nowrap:
\begin{eqnarray*} \frac{\partial f}{\partial x_{0}} & = & -400x_{0}\left(x_{1}-x_{0}^{2}\right)-2\left(1-x_{0}\right),\\ \frac{\partial f}{\partial x_{N-1}} & = & 200\left(x_{N-1}-x_{N-2}^{2}\right).\end{eqnarray*}
A Python function which computes this gradient is constructed by the
code-segment:
>>> def rosen_der(x):
... xm = x[1:-1]
... xm_m1 = x[:-2]
... xm_p1 = x[2:]
... der = np.zeros_like(x)
... der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
... der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
... der[-1] = 200*(x[-1]-x[-2]**2)
... return der
This gradient information is specified in the :func:`minimize` function
through the ``jac`` parameter as illustrated below.
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
... options={'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 25 # may vary
Function evaluations: 30
Gradient evaluations: 30
>>> res.x
array([1., 1., 1., 1., 1.])
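If ``jac`` is omitted, :func:`minimize` falls back to a finite-difference
estimate of the gradient at the cost of extra function evaluations. A minimal
sketch (only convergence is checked, as the evaluation counts vary):
>>> res = minimize(rosen, x0, method='BFGS')  # no jac: gradient estimated numerically
>>> np.allclose(res.x, 1.0, atol=1e-4)
True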
**Avoiding Redundant Calculation**
It is common for the objective function and its gradient to share parts of the
calculation. For instance, consider the following problem.
>>> def f(x):
... return -expensive(x[0])**2
>>>
>>> def df(x):
... return -2 * expensive(x[0]) * dexpensive(x[0])
>>>
>>> def expensive(x):
... # this function is computationally expensive!
... expensive.count += 1 # let's keep track of how many times it runs
... return np.sin(x)
>>> expensive.count = 0
>>>
>>> def dexpensive(x):
... return np.cos(x)
>>>
>>> res = minimize(f, 0.5, jac=df)
>>> res.fun
-0.9999999999999174
>>> res.nfev, res.njev
(6, 6)
>>> expensive.count
12
Here, ``expensive`` is called 12 times: six times in the objective function and
six times from the gradient. One way of reducing redundant calculations is to
create a single function that returns both the objective function and the
gradient.
>>> def f_and_df(x):
... expensive_value = expensive(x[0])
... return (-expensive_value**2, # objective function
... -2*expensive_value*dexpensive(x[0])) # gradient
>>>
>>> expensive.count = 0 # reset the counter
>>> res = minimize(f_and_df, 0.5, jac=True)
>>> res.fun
-0.9999999999999174
>>> expensive.count
6
When we call minimize, we specify ``jac=True`` to indicate that the provided
function returns both the objective function and its gradient. While
convenient, not all :mod:`scipy.optimize` functions support this feature,
and moreover, it is only for sharing calculations between the function and its
gradient, whereas in some problems we will want to share calculations with the
Hessian (second derivative of the objective function) and constraints. A more
general approach is to memoize the expensive parts of the calculation. In
simple situations, this can be accomplished with the
:func:`functools.lru_cache` wrapper.
>>> from functools import lru_cache
>>> expensive.count = 0 # reset the counter
>>> expensive = lru_cache(expensive)
>>> res = minimize(f, 0.5, jac=df)
>>> res.fun
-0.9999999999999174
>>> expensive.count
6
Newton-Conjugate-Gradient algorithm (``method='Newton-CG'``)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The Newton-Conjugate-Gradient algorithm is a modified Newton's
method and uses a conjugate gradient algorithm to (approximately) invert
the local Hessian [NW]_. Newton's method is based on fitting the function
locally to a quadratic form:
.. math::
f\left(\mathbf{x}\right)\approx f\left(\mathbf{x}_{0}\right)+\nabla f\left(\mathbf{x}_{0}\right)\cdot\left(\mathbf{x}-\mathbf{x}_{0}\right)+\frac{1}{2}\left(\mathbf{x}-\mathbf{x}_{0}\right)^{T}\mathbf{H}\left(\mathbf{x}_{0}\right)\left(\mathbf{x}-\mathbf{x}_{0}\right).
where :math:`\mathbf{H}\left(\mathbf{x}_{0}\right)` is a matrix of second derivatives (the Hessian). If the Hessian is
positive definite then the local minimum of this function can be found
by setting the gradient of the quadratic form to zero, resulting in
.. math::
\mathbf{x}_{\textrm{opt}}=\mathbf{x}_{0}-\mathbf{H}^{-1}\nabla f.
The inverse of the Hessian is evaluated using the conjugate-gradient
method. An example of employing this method to minimize the
Rosenbrock function is given below. To take full advantage of the
Newton-CG method, a function which computes the Hessian must be
provided. The Hessian matrix itself does not need to be constructed,
only a vector which is the product of the Hessian with an arbitrary
vector needs to be available to the minimization routine. As a result,
the user can provide either a function to compute the Hessian matrix,
or a function to compute the product of the Hessian with an arbitrary
vector.
**Full Hessian example**
The Hessian of the Rosenbrock function is
.. math::
:nowrap:
\begin{eqnarray*} H_{ij}=\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}} & = & 200\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-400x_{i}\left(\delta_{i+1,j}-2x_{i}\delta_{i,j}\right)-400\delta_{i,j}\left(x_{i+1}-x_{i}^{2}\right)+2\delta_{i,j},\\ & = & \left(202+1200x_{i}^{2}-400x_{i+1}\right)\delta_{i,j}-400x_{i}\delta_{i+1,j}-400x_{i-1}\delta_{i-1,j},\end{eqnarray*}
if :math:`i,j\in\left[1,N-2\right]` with :math:`i,j\in\left[0,N-1\right]` defining the :math:`N\times N` matrix. Other non-zero entries of the matrix are
.. math::
:nowrap:
\begin{eqnarray*} \frac{\partial^{2}f}{\partial x_{0}^{2}} & = & 1200x_{0}^{2}-400x_{1}+2,\\ \frac{\partial^{2}f}{\partial x_{0}\partial x_{1}}=\frac{\partial^{2}f}{\partial x_{1}\partial x_{0}} & = & -400x_{0},\\ \frac{\partial^{2}f}{\partial x_{N-1}\partial x_{N-2}}=\frac{\partial^{2}f}{\partial x_{N-2}\partial x_{N-1}} & = & -400x_{N-2},\\ \frac{\partial^{2}f}{\partial x_{N-1}^{2}} & = & 200.\end{eqnarray*}
For example, the Hessian when :math:`N=5` is
.. math::
\mathbf{H}=\begin{bmatrix} 1200x_{0}^{2}+2\mkern-2em\\&1200x_{1}^{2}+202\mkern-2em\\&&1200x_{2}^{2}+202\mkern-2em\\&&&1200x_{3}^{2}+202\mkern-1em\\&&&&200\end{bmatrix}-400\begin{bmatrix} x_1 & x_0 \\ x_0 & x_2 & x_1 \\ & x_1 & x_3 & x_2\\ & & x_2 & x_4 & x_3 \\ & & & x_3 & 0\end{bmatrix}.
The code which computes this Hessian along with the code to minimize
the function using Newton-CG method is shown in the following example:
>>> def rosen_hess(x):
... x = np.asarray(x)
... H = np.diag(-400*x[:-1],1) - np.diag(400*x[:-1],-1)
... diagonal = np.zeros_like(x)
... diagonal[0] = 1200*x[0]**2-400*x[1]+2
... diagonal[-1] = 200
... diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:]
... H = H + np.diag(diagonal)
... return H
>>> res = minimize(rosen, x0, method='Newton-CG',
... jac=rosen_der, hess=rosen_hess,
... options={'xtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 19 # may vary
Function evaluations: 22
Gradient evaluations: 19
Hessian evaluations: 19
>>> res.x
array([1., 1., 1., 1., 1.])
**Hessian product example**
For larger minimization problems, storing the entire Hessian matrix can
consume considerable time and memory. The Newton-CG algorithm only needs
the product of the Hessian times an arbitrary vector. As a result, the user
can supply code to compute this product rather than the full Hessian by
giving a ``hessp`` function, which takes the minimization vector as the first
argument and the arbitrary vector as the second argument (along with extra
arguments passed to the function to be minimized). If possible, using
Newton-CG with the Hessian product option is probably the fastest way to
minimize the function.
In this case, the product of the Rosenbrock Hessian with an arbitrary
vector is not difficult to compute. If :math:`\mathbf{p}` is the arbitrary
vector, then :math:`\mathbf{H}\left(\mathbf{x}\right)\mathbf{p}` has
elements:
.. math::
\mathbf{H}\left(\mathbf{x}\right)\mathbf{p}=\begin{bmatrix} \left(1200x_{0}^{2}-400x_{1}+2\right)p_{0}-400x_{0}p_{1}\\ \vdots\\ -400x_{i-1}p_{i-1}+\left(202+1200x_{i}^{2}-400x_{i+1}\right)p_{i}-400x_{i}p_{i+1}\\ \vdots\\ -400x_{N-2}p_{N-2}+200p_{N-1}\end{bmatrix}.
Code which makes use of this Hessian product to minimize the
Rosenbrock function using :func:`minimize` follows:
>>> def rosen_hess_p(x, p):
... x = np.asarray(x)
... Hp = np.zeros_like(x)
... Hp[0] = (1200*x[0]**2 - 400*x[1] + 2)*p[0] - 400*x[0]*p[1]
... Hp[1:-1] = -400*x[:-2]*p[:-2]+(202+1200*x[1:-1]**2-400*x[2:])*p[1:-1] \
... -400*x[1:-1]*p[2:]
... Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1]
... return Hp
>>> res = minimize(rosen, x0, method='Newton-CG',
... jac=rosen_der, hessp=rosen_hess_p,
... options={'xtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 20 # may vary
Function evaluations: 23
Gradient evaluations: 20
Hessian evaluations: 44
>>> res.x
array([1., 1., 1., 1., 1.])
According to [NW]_ p. 170, the ``Newton-CG`` algorithm can be inefficient
when the Hessian is ill-conditioned because of the poor quality search directions
provided by the method in those situations. The method ``trust-ncg``,
according to the authors, deals more effectively with this problematic situation
and will be described next.
Trust-Region Newton-Conjugate-Gradient Algorithm (``method='trust-ncg'``)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The ``Newton-CG`` method is a line search method: it finds a direction
of search minimizing a quadratic approximation of the function and then uses
a line search algorithm to find the (nearly) optimal step size in that direction.
An alternative approach is to, first, fix the step size limit :math:`\Delta` and then find the
optimal step :math:`\mathbf{p}` inside the given trust-radius by solving
the following quadratic subproblem:
.. math::
:nowrap:
\begin{eqnarray*}
\min_{\mathbf{p}} f\left(\mathbf{x}_{k}\right)+\nabla f\left(\mathbf{x}_{k}\right)\cdot\mathbf{p}+\frac{1}{2}\mathbf{p}^{T}\mathbf{H}\left(\mathbf{x}_{k}\right)\mathbf{p};&\\
\text{subject to: } \|\mathbf{p}\|\le \Delta.&
\end{eqnarray*}
The solution is then updated :math:`\mathbf{x}_{k+1} = \mathbf{x}_{k} + \mathbf{p}` and
the trust-radius :math:`\Delta` is adjusted according to the degree of agreement of the quadratic
model with the real function. This family of methods is known as trust-region methods.
The ``trust-ncg`` algorithm is a trust-region method that uses a conjugate gradient algorithm
to solve the trust-region subproblem [NW]_.
**Full Hessian example**
>>> res = minimize(rosen, x0, method='trust-ncg',
... jac=rosen_der, hess=rosen_hess,
... options={'gtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 20 # may vary
Function evaluations: 21
Gradient evaluations: 20
Hessian evaluations: 19
>>> res.x
array([1., 1., 1., 1., 1.])
**Hessian product example**
>>> res = minimize(rosen, x0, method='trust-ncg',
... jac=rosen_der, hessp=rosen_hess_p,
... options={'gtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 20 # may vary
Function evaluations: 21
Gradient evaluations: 20
Hessian evaluations: 0
>>> res.x
array([1., 1., 1., 1., 1.])
Trust-Region Truncated Generalized Lanczos / Conjugate Gradient Algorithm (``method='trust-krylov'``)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Similar to the ``trust-ncg`` method, the ``trust-krylov`` method is
suitable for large-scale problems, as it uses the Hessian only as a linear
operator by means of matrix-vector products.
It solves the quadratic subproblem more accurately than the ``trust-ncg``
method.
.. math::
:nowrap:
\begin{eqnarray*}
\min_{\mathbf{p}} f\left(\mathbf{x}_{k}\right)+\nabla f\left(\mathbf{x}_{k}\right)\cdot\mathbf{p}+\frac{1}{2}\mathbf{p}^{T}\mathbf{H}\left(\mathbf{x}_{k}\right)\mathbf{p};&\\
\text{subject to: } \|\mathbf{p}\|\le \Delta.&
\end{eqnarray*}
This method wraps the [TRLIB]_ implementation of the [GLTR]_ method solving
exactly a trust-region subproblem restricted to a truncated Krylov subspace.
For indefinite problems it is usually better to use this method, as it reduces
the number of nonlinear iterations at the expense of a few more matrix-vector
products per subproblem solve in comparison to the ``trust-ncg`` method.
**Full Hessian example**
>>> res = minimize(rosen, x0, method='trust-krylov',
... jac=rosen_der, hess=rosen_hess,
... options={'gtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 19 # may vary
Function evaluations: 20
Gradient evaluations: 20
Hessian evaluations: 18
>>> res.x
array([1., 1., 1., 1., 1.])
**Hessian product example**
>>> res = minimize(rosen, x0, method='trust-krylov',
... jac=rosen_der, hessp=rosen_hess_p,
... options={'gtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 19 # may vary
Function evaluations: 20
Gradient evaluations: 20
Hessian evaluations: 0
>>> res.x
array([1., 1., 1., 1., 1.])
.. [TRLIB] F. Lenders, C. Kirches, A. Potschka: "trlib: A vector-free
implementation of the GLTR method for iterative solution of
the trust region problem", :arxiv:`1611.04718`
.. [GLTR] N. Gould, S. Lucidi, M. Roma, P. Toint: "Solving the
Trust-Region Subproblem using the Lanczos Method",
SIAM J. Optim., 9(2), 504--525, (1999).
:doi:`10.1137/S1052623497322735`
Trust-Region Nearly Exact Algorithm (``method='trust-exact'``)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The methods ``Newton-CG``, ``trust-ncg`` and ``trust-krylov`` are all suitable for dealing with
large-scale problems (problems with thousands of variables). That is because the conjugate
gradient algorithm approximately solves the trust-region subproblem (or inverts the Hessian)
by iterations without the explicit Hessian factorization. Since only the product of the Hessian
with an arbitrary vector is needed, the algorithm is specially suited for dealing
with sparse Hessians, allowing low storage requirements and significant time savings for
those sparse problems.
For medium-size problems, for which the storage and factorization costs of the Hessian are not critical,
it is possible to obtain a solution within fewer iterations by solving the trust-region subproblems
almost exactly. To achieve that, a certain nonlinear equation is solved iteratively for each quadratic
subproblem [CGT]_. This solution typically requires 3 or 4 Cholesky factorizations of the
Hessian matrix. As a result, the method converges in fewer iterations
and takes fewer evaluations of the objective function than the other implemented
trust-region methods. The Hessian product option is not supported by this algorithm. An
example using the Rosenbrock function follows:
>>> res = minimize(rosen, x0, method='trust-exact',
... jac=rosen_der, hess=rosen_hess,
... options={'gtol': 1e-8, 'disp': True})
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 13 # may vary
Function evaluations: 14
Gradient evaluations: 13
Hessian evaluations: 14
>>> res.x
array([1., 1., 1., 1., 1.])
.. [NW] J. Nocedal, S.J. Wright "Numerical optimization."
2nd edition. Springer Science (2006).
.. [CGT] Conn, A. R., Gould, N. I., & Toint, P. L.
"Trust region methods". Siam. (2000). pp. 169-200.
.. _tutorial-sqlsp:
Constrained minimization
^^^^^^^^^^^^^^^^^^^^^^^^
The :func:`minimize` function provides several algorithms for constrained minimization,
namely ``'trust-constr'``, ``'SLSQP'``, ``'COBYLA'``, and ``'COBYQA'``. They require the constraints
to be defined using slightly different structures. The methods ``'trust-constr'``, ``'COBYQA'``, and ``'COBYLA'`` require
the constraints to be defined as a sequence of objects :func:`LinearConstraint` and
:func:`NonlinearConstraint`. Method ``'SLSQP'``, on the other hand,
requires constraints to be defined as a sequence of dictionaries, with keys
``type``, ``fun`` and ``jac``.
As an example let us consider the constrained minimization of the Rosenbrock function:
.. math::
:nowrap:
\begin{eqnarray*} \min_{x_0, x_1} & ~~100\left(x_{1}-x_{0}^{2}\right)^{2}+\left(1-x_{0}\right)^{2} &\\
\text{subject to: } & x_0 + 2 x_1 \leq 1 & \\
& x_0^2 + x_1 \leq 1 & \\
& x_0^2 - x_1 \leq 1 & \\
& 2 x_0 + x_1 = 1 & \\
& 0 \leq x_0 \leq 1 & \\
& -0.5 \leq x_1 \leq 2.0. & \end{eqnarray*}
This optimization problem has the unique solution :math:`[x_0, x_1] = [0.4149,~ 0.1701]`,
for which only the first and fourth constraints are active.
Trust-Region Constrained Algorithm (``method='trust-constr'``)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The trust-region constrained method deals with constrained minimization problems of the form:
.. math::
:nowrap:
\begin{eqnarray*} \min_x & f(x) & \\
\text{subject to: } & ~~~ c^l \leq c(x) \leq c^u, &\\
& x^l \leq x \leq x^u. & \end{eqnarray*}
When :math:`c^l_j = c^u_j` the method reads the :math:`j`-th constraint as an
equality constraint and deals with it accordingly. Besides that, one-sided constraints
can be specified by setting the upper or lower bound to ``np.inf`` with the appropriate sign.
The implementation is based on [EQSQP]_ for equality-constraint problems and on [TRIP]_
for problems with inequality constraints. Both are trust-region type algorithms suitable
for large-scale problems.
**Defining Bounds Constraints**
The bound constraints :math:`0 \leq x_0 \leq 1` and :math:`-0.5 \leq x_1 \leq 2.0`
are defined using a :func:`Bounds` object.
>>> from scipy.optimize import Bounds
>>> bounds = Bounds([0, -0.5], [1.0, 2.0])
**Defining Linear Constraints**
The constraints :math:`x_0 + 2 x_1 \leq 1`
and :math:`2 x_0 + x_1 = 1` can be written in the linear constraint standard format:
.. math::
:nowrap:
\begin{equation*} \begin{bmatrix}-\infty \\1\end{bmatrix} \leq
\begin{bmatrix} 1& 2 \\ 2& 1\end{bmatrix}
\begin{bmatrix} x_0 \\x_1\end{bmatrix} \leq
\begin{bmatrix} 1 \\ 1\end{bmatrix},\end{equation*}
and defined using a :func:`LinearConstraint` object.
>>> from scipy.optimize import LinearConstraint
>>> linear_constraint = LinearConstraint([[1, 2], [2, 1]], [-np.inf, 1], [1, 1])
**Defining Nonlinear Constraints**
The nonlinear constraint:
.. math::
:nowrap:
\begin{equation*} c(x) =
\begin{bmatrix} x_0^2 + x_1 \\ x_0^2 - x_1\end{bmatrix}
\leq
\begin{bmatrix} 1 \\ 1\end{bmatrix}, \end{equation*}
with Jacobian matrix:
.. math::
:nowrap:
\begin{equation*} J(x) =
\begin{bmatrix} 2x_0 & 1 \\ 2x_0 & -1\end{bmatrix},\end{equation*}
and linear combination of the Hessians:
.. math::
:nowrap:
\begin{equation*} H(x, v) = \sum_{i=0}^1 v_i \nabla^2 c_i(x) =
v_0\begin{bmatrix} 2 & 0 \\ 0 & 0\end{bmatrix} +
v_1\begin{bmatrix} 2 & 0 \\ 0 & 0\end{bmatrix},
\end{equation*}
is defined using a :func:`NonlinearConstraint` object.
>>> def cons_f(x):
... return [x[0]**2 + x[1], x[0]**2 - x[1]]
>>> def cons_J(x):
... return [[2*x[0], 1], [2*x[0], -1]]
>>> def cons_H(x, v):
... return v[0]*np.array([[2, 0], [0, 0]]) + v[1]*np.array([[2, 0], [0, 0]])
>>> from scipy.optimize import NonlinearConstraint
>>> nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac=cons_J, hess=cons_H)
Alternatively, it is also possible to define the Hessian :math:`H(x, v)`
as a sparse matrix,
>>> from scipy.sparse import csc_matrix
>>> def cons_H_sparse(x, v):
... return v[0]*csc_matrix([[2, 0], [0, 0]]) + v[1]*csc_matrix([[2, 0], [0, 0]])
>>> nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1,
... jac=cons_J, hess=cons_H_sparse)
or as a :obj:`~scipy.sparse.linalg.LinearOperator` object.
>>> from scipy.sparse.linalg import LinearOperator
>>> def cons_H_linear_operator(x, v):
... def matvec(p):
... return np.array([p[0]*2*(v[0]+v[1]), 0])
... return LinearOperator((2, 2), matvec=matvec)
>>> nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1,
... jac=cons_J, hess=cons_H_linear_operator)
When the evaluation of the Hessian :math:`H(x, v)`
is difficult to implement or computationally infeasible, one may use :class:`HessianUpdateStrategy`.
Currently available strategies are :class:`BFGS` and :class:`SR1`.
>>> from scipy.optimize import BFGS
>>> nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac=cons_J, hess=BFGS())
Alternatively, the Hessian may be approximated using finite differences.
>>> nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac=cons_J, hess='2-point')
The Jacobian of the constraints can be approximated by finite differences as well. In this case,
however, the Hessian cannot be computed with finite differences and needs to
be provided by the user or defined using :class:`HessianUpdateStrategy`.
>>> nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac='2-point', hess=BFGS())
**Solving the Optimization Problem**
The optimization problem is solved using:
>>> x0 = np.array([0.5, 0])
>>> res = minimize(rosen, x0, method='trust-constr', jac=rosen_der, hess=rosen_hess,
... constraints=[linear_constraint, nonlinear_constraint],
... options={'verbose': 1}, bounds=bounds)
# may vary
`gtol` termination condition is satisfied.
Number of iterations: 12, function evaluations: 8, CG iterations: 7, optimality: 2.99e-09, constraint violation: 1.11e-16, execution time: 0.016 s.
>>> print(res.x)
[0.41494531 0.17010937]
When needed, the objective function Hessian can be defined using a :obj:`~scipy.sparse.linalg.LinearOperator` object,
>>> def rosen_hess_linop(x):
... def matvec(p):
... return rosen_hess_p(x, p)
... return LinearOperator((2, 2), matvec=matvec)
>>> res = minimize(rosen, x0, method='trust-constr', jac=rosen_der, hess=rosen_hess_linop,
... constraints=[linear_constraint, nonlinear_constraint],
... options={'verbose': 1}, bounds=bounds)
# may vary
`gtol` termination condition is satisfied.
Number of iterations: 12, function evaluations: 8, CG iterations: 7, optimality: 2.99e-09, constraint violation: 1.11e-16, execution time: 0.018 s.
>>> print(res.x)
[0.41494531 0.17010937]
or a Hessian-vector product through the parameter ``hessp``.
>>> res = minimize(rosen, x0, method='trust-constr', jac=rosen_der, hessp=rosen_hess_p,
... constraints=[linear_constraint, nonlinear_constraint],
... options={'verbose': 1}, bounds=bounds)
# may vary
`gtol` termination condition is satisfied.
Number of iterations: 12, function evaluations: 8, CG iterations: 7, optimality: 2.99e-09, constraint violation: 1.11e-16, execution time: 0.018 s.
>>> print(res.x)
[0.41494531 0.17010937]
Alternatively, the first and second derivatives of the objective function can be approximated.
For instance, the Hessian can be approximated with the :func:`SR1` quasi-Newton approximation
and the gradient with finite differences.
>>> from scipy.optimize import SR1
>>> res = minimize(rosen, x0, method='trust-constr', jac="2-point", hess=SR1(),
... constraints=[linear_constraint, nonlinear_constraint],
... options={'verbose': 1}, bounds=bounds)
# may vary
`gtol` termination condition is satisfied.
Number of iterations: 12, function evaluations: 24, CG iterations: 7, optimality: 4.48e-09, constraint violation: 0.00e+00, execution time: 0.016 s.
>>> print(res.x)
[0.41494531 0.17010937]
.. [TRIP] Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal. 1999.
An interior point algorithm for large-scale nonlinear programming.
SIAM Journal on Optimization 9.4: 877-900.
.. [EQSQP] Lalee, Marucha, Jorge Nocedal, and Todd Plantega. 1998. On the
implementation of an algorithm for large-scale equality constrained
optimization. SIAM Journal on Optimization 8.3: 682-706.
Sequential Least SQuares Programming (SLSQP) Algorithm (``method='SLSQP'``)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The SLSQP method deals with constrained minimization problems of the form:
.. math::
:nowrap:
\begin{eqnarray*} \min_x & f(x) \\
\text{subject to: } & c_j(x) = 0 , &j \in \mathcal{E}\\
& c_j(x) \geq 0 , &j \in \mathcal{I}\\
& \text{lb}_i \leq x_i \leq \text{ub}_i , &i = 1,...,N. \end{eqnarray*}
Here, :math:`\mathcal{E}` and :math:`\mathcal{I}` are the sets of indices
of the equality and inequality constraints, respectively.
Both linear and nonlinear constraints are defined as dictionaries with keys ``type``, ``fun`` and ``jac``.
>>> ineq_cons = {'type': 'ineq',
... 'fun' : lambda x: np.array([1 - x[0] - 2*x[1],
... 1 - x[0]**2 - x[1],
... 1 - x[0]**2 + x[1]]),
... 'jac' : lambda x: np.array([[-1.0, -2.0],
... [-2*x[0], -1.0],
... [-2*x[0], 1.0]])}
>>> eq_cons = {'type': 'eq',
... 'fun' : lambda x: np.array([2*x[0] + x[1] - 1]),
... 'jac' : lambda x: np.array([2.0, 1.0])}
And the optimization problem is solved with:
>>> x0 = np.array([0.5, 0])
>>> res = minimize(rosen, x0, method='SLSQP', jac=rosen_der,
... constraints=[eq_cons, ineq_cons], options={'ftol': 1e-9, 'disp': True},
... bounds=bounds)
# may vary
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.342717574857755
Iterations: 5
Function evaluations: 6
Gradient evaluations: 5
>>> print(res.x)
[0.41494475 0.1701105 ]
Most of the options available for the method ``'trust-constr'`` are not available
for ``'SLSQP'``.
Local minimization solver comparison
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Find a solver that meets your requirements using the table below.
If there are multiple candidates, try several and see which ones best
meet your needs (e.g. execution time, objective function value).
.. list-table::
:widths: 15 20 20 15 15 15
:header-rows: 1
* - Solver
- Bounds Constraints
- Nonlinear Constraints
- Uses Gradient
- Uses Hessian
- Utilizes Sparsity
* - CG
-
-
- ✓
-
-
* - BFGS
-
-
- ✓
-
-
* - dogleg
-
-
- ✓
- ✓
-
* - trust-ncg
-
-
- ✓
- ✓
-
* - trust-krylov
-
-
- ✓
- ✓
-
* - trust-exact
-
-
- ✓
- ✓
-
* - Newton-CG
-
-
- ✓
- ✓
- ✓
* - Nelder-Mead
- ✓
-
-
-
-
* - Powell
- ✓
-
-
-
-
* - L-BFGS-B
- ✓
-
- ✓
-
-
* - TNC
- ✓
-
- ✓
-
-
* - COBYLA
- ✓
- ✓
-
-
-
* - COBYQA
- ✓
- ✓
-
-
-
* - SLSQP
- ✓
- ✓
- ✓
-
-
* - trust-constr
- ✓
- ✓
- ✓
- ✓
- ✓
.. _tutorial_optimize_global:
Global optimization
-------------------
Global optimization aims to find the global minimum of a function within given
bounds, in the presence of potentially many local minima. Typically, global
minimizers efficiently search the parameter space, while using a local
minimizer (e.g., :func:`minimize`) under the hood. SciPy contains a
number of good global optimizers. Here, we'll use those on the same objective
function, namely the (aptly named) ``eggholder`` function::
>>> def eggholder(x):
... return (-(x[1] + 47) * np.sin(np.sqrt(abs(x[0]/2 + (x[1] + 47))))
... -x[0] * np.sin(np.sqrt(abs(x[0] - (x[1] + 47)))))
>>> bounds = [(-512, 512), (-512, 512)]
This function looks like an egg carton::
>>> import matplotlib.pyplot as plt
>>> from mpl_toolkits.mplot3d import Axes3D
>>> x = np.arange(-512, 513)
>>> y = np.arange(-512, 513)
>>> xgrid, ygrid = np.meshgrid(x, y)
>>> xy = np.stack([xgrid, ygrid])
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111, projection='3d')
>>> ax.view_init(45, -45)
>>> ax.plot_surface(xgrid, ygrid, eggholder(xy), cmap='terrain')
>>> ax.set_xlabel('x')
>>> ax.set_ylabel('y')
>>> ax.set_zlabel('eggholder(x, y)')
>>> plt.show()
.. plot:: tutorial/examples/optimize_global_2.py
:alt: "A 3-D plot shown from a three-quarter view. The function is very noisy with dozens of valleys and peaks. There is no clear min or max discernible from this view and it's not possible to see all the local peaks and valleys from this view."
:align: center
:include-source: 0
We now use the global optimizers to obtain the minimum and the function value
at the minimum. We'll store the results in a dictionary so we can compare
different optimization results later.
>>> from scipy import optimize
>>> results = dict()
>>> results['shgo'] = optimize.shgo(eggholder, bounds)
>>> results['shgo']
fun: -935.3379515604197 # may vary
funl: array([-935.33795156])
message: 'Optimization terminated successfully.'
nfev: 42
nit: 2
nlfev: 37
nlhev: 0
nljev: 9
success: True
x: array([439.48096952, 453.97740589])
xl: array([[439.48096952, 453.97740589]])
>>> results['DA'] = optimize.dual_annealing(eggholder, bounds)
>>> results['DA']
fun: -956.9182316237413 # may vary
message: ['Maximum number of iteration reached']
nfev: 4091
nhev: 0
nit: 1000
njev: 0
x: array([482.35324114, 432.87892901])
All optimizers return an ``OptimizeResult``, which in addition to the solution
contains information on the number of function evaluations, whether the
optimization was successful, and more. For brevity, we won't show the full
output of the other optimizers::
>>> results['DE'] = optimize.differential_evolution(eggholder, bounds)
:func:`shgo` has a second method, which returns all local minima rather than
only what it thinks is the global minimum::
>>> results['shgo_sobol'] = optimize.shgo(eggholder, bounds, n=200, iters=5,
... sampling_method='sobol')
We'll now plot all found minima on a heatmap of the function::
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> im = ax.imshow(eggholder(xy), interpolation='bilinear', origin='lower',
... cmap='gray')
>>> ax.set_xlabel('x')
>>> ax.set_ylabel('y')
>>>
>>> def plot_point(res, marker='o', color=None):
... ax.plot(512+res.x[0], 512+res.x[1], marker=marker, color=color, ms=10)
>>> plot_point(results['DE'], color='c') # differential_evolution - cyan
>>> plot_point(results['DA'], color='w') # dual_annealing. - white
>>> # SHGO produces multiple minima, plot them all (with a smaller marker size)
>>> plot_point(results['shgo'], color='r', marker='+')
>>> plot_point(results['shgo_sobol'], color='r', marker='x')
>>> for i in range(results['shgo_sobol'].xl.shape[0]):
... ax.plot(512 + results['shgo_sobol'].xl[i, 0],
... 512 + results['shgo_sobol'].xl[i, 1],
... 'ro', ms=2)
>>> ax.set_xlim([-4, 514*2])
>>> ax.set_ylim([-4, 514*2])
>>> plt.show()
.. plot:: tutorial/examples/optimize_global_1.py
:align: center
:alt: "This X-Y plot is a heatmap with the Z value denoted with the lowest points as black and the highest values as white. The image resembles a chess board rotated 45 degrees but heavily smoothed. A red dot is located at many of the minima on the grid resulting from the SHGO optimizer. SHGO shows the global minima as a red X in the top right. A local minima found with dual annealing is a white circle marker in the top left. A different local minima found with basinhopping is a yellow marker in the top center. The code is plotting the differential evolution result as a cyan circle, but it is not visible on the plot. At a glance it's not clear which of these valleys is the true global minima."
:include-source: 0
Comparison of Global Optimizers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Find a solver that meets your requirements using the table below.
If there are multiple candidates, try several and see which ones best
meet your needs (e.g. execution time, objective function value).
.. list-table::
:widths: 20 15 15 20 20
:header-rows: 1
* - Solver
- Bounds Constraints
- Nonlinear Constraints
- Uses Gradient
- Uses Hessian
* - basinhopping
-
-
- (✓)
- (✓)
* - direct
- ✓
-
-
-
* - dual_annealing
- ✓
-
- (✓)
- (✓)
* - differential_evolution
- ✓
- ✓
-
-
* - shgo
- ✓
- ✓
- (✓)
- (✓)
(✓) = Depending on the chosen local minimizer
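For instance, :func:`basinhopping` delegates its local searches to
:func:`minimize`, so whether gradients (or Hessians) are used depends on the
local method selected through ``minimizer_kwargs``. A minimal sketch on the
eggholder function; the starting point ``x0`` is an arbitrary choice, and the
minimum found varies between runs because the algorithm is stochastic:
>>> from scipy import optimize
>>> x0 = np.array([100.0, 100.0])  # arbitrary starting point within the bounds
>>> res = optimize.basinhopping(eggholder, x0, niter=200,
...                             minimizer_kwargs={'method': 'L-BFGS-B',
...                                               'bounds': bounds})
>>> res.x.shape  # an OptimizeResult, like the other global optimizers return
(2,)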
Least-squares minimization (:func:`least_squares`)
--------------------------------------------------
SciPy is capable of solving robustified bound-constrained nonlinear
least-squares problems:
.. math::
:nowrap:
\begin{align}
&\min_\mathbf{x} \frac{1}{2} \sum_{i = 1}^m \rho\left(f_i(\mathbf{x})^2\right) \\
&\text{subject to }\mathbf{lb} \leq \mathbf{x} \leq \mathbf{ub}
\end{align}
Here :math:`f_i(\mathbf{x})` are smooth functions from
:math:`\mathbb{R}^n` to :math:`\mathbb{R}`; we refer to them as residuals.
The purpose of the scalar-valued function :math:`\rho(\cdot)` is to reduce the
influence of outlier residuals and contribute to the robustness of the solution;
we refer to it as a loss function. A linear loss function gives a standard
least-squares problem. Additionally, constraints in the form of lower and upper
bounds on some of :math:`x_j` are allowed.
All methods specific to least-squares minimization utilize an :math:`m \times n`
matrix of partial derivatives called the Jacobian, defined as
:math:`J_{ij} = \partial f_i / \partial x_j`. It is highly recommended to
compute this matrix analytically and pass it to :func:`least_squares`,
otherwise, it will be estimated by finite differences, which takes a lot of
additional time and can be very inaccurate in hard cases.
Function :func:`least_squares` can be used for fitting a function
:math:`\varphi(t; \mathbf{x})` to empirical data :math:`\{(t_i, y_i), i = 0, \ldots, m-1\}`.
To do this, one should simply precompute residuals as
:math:`f_i(\mathbf{x}) = w_i (\varphi(t_i; \mathbf{x}) - y_i)`, where :math:`w_i`
are weights assigned to each observation.
Example of solving a fitting problem
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here we consider an enzymatic reaction [1]_. There are 11 residuals defined as
.. math::
f_i(x) = \frac{x_0 (u_i^2 + u_i x_1)}{u_i^2 + u_i x_2 + x_3} - y_i, \quad i = 0, \ldots, 10,
where :math:`y_i` are measurement values and :math:`u_i` are values of
the independent variable. The unknown vector of parameters is
:math:`\mathbf{x} = (x_0, x_1, x_2, x_3)^T`. As was said previously, it is
recommended to compute the Jacobian matrix in closed form:
.. math::
:nowrap:
\begin{align}
&J_{i0} = \frac{\partial f_i}{\partial x_0} = \frac{u_i^2 + u_i x_1}{u_i^2 + u_i x_2 + x_3} \\
&J_{i1} = \frac{\partial f_i}{\partial x_1} = \frac{u_i x_0}{u_i^2 + u_i x_2 + x_3} \\
&J_{i2} = \frac{\partial f_i}{\partial x_2} = -\frac{x_0 (u_i^2 + u_i x_1) u_i}{(u_i^2 + u_i x_2 + x_3)^2} \\
&J_{i3} = \frac{\partial f_i}{\partial x_3} = -\frac{x_0 (u_i^2 + u_i x_1)}{(u_i^2 + u_i x_2 + x_3)^2}
\end{align}
We are going to use the "hard" starting point defined in [2]_. To find a
physically meaningful solution, avoid potential division by zero, and assure
convergence to the global minimum, we impose the constraints
:math:`0 \leq x_j \leq 100, j = 0, 1, 2, 3`.
The code below implements least-squares estimation of :math:`\mathbf{x}` and
finally plots the original data and the fitted model function:
.. plot::
:alt: "This code plots an X-Y time-series. The series starts in the lower left at (0, 0) and rapidly trends up to the maximum of 0.2 then flattens out. The fitted model is shown as a smooth orange trace and is well fit to the data."
>>> from scipy.optimize import least_squares
>>> def model(x, u):
... return x[0] * (u ** 2 + x[1] * u) / (u ** 2 + x[2] * u + x[3])
>>> def fun(x, u, y):
... return model(x, u) - y
>>> def jac(x, u, y):
... J = np.empty((u.size, x.size))
... den = u ** 2 + x[2] * u + x[3]
... num = u ** 2 + x[1] * u
... J[:, 0] = num / den
... J[:, 1] = x[0] * u / den
... J[:, 2] = -x[0] * num * u / den ** 2
... J[:, 3] = -x[0] * num / den ** 2
... return J
>>> u = np.array([4.0, 2.0, 1.0, 5.0e-1, 2.5e-1, 1.67e-1, 1.25e-1, 1.0e-1,
... 8.33e-2, 7.14e-2, 6.25e-2])
>>> y = np.array([1.957e-1, 1.947e-1, 1.735e-1, 1.6e-1, 8.44e-2, 6.27e-2,
... 4.56e-2, 3.42e-2, 3.23e-2, 2.35e-2, 2.46e-2])
>>> x0 = np.array([2.5, 3.9, 4.15, 3.9])
>>> res = least_squares(fun, x0, jac=jac, bounds=(0, 100), args=(u, y), verbose=1)
# may vary
`ftol` termination condition is satisfied.
Function evaluations 130, initial cost 4.4383e+00, final cost 1.5375e-04, first-order optimality 4.92e-08.
>>> res.x
array([ 0.19280596, 0.19130423, 0.12306063, 0.13607247])
>>> import matplotlib.pyplot as plt
>>> u_test = np.linspace(0, 5)
>>> y_test = model(res.x, u_test)
>>> plt.plot(u, y, 'o', markersize=4, label='data')
>>> plt.plot(u_test, y_test, label='fitted model')
>>> plt.xlabel("u")
>>> plt.ylabel("y")
>>> plt.legend(loc='lower right')
>>> plt.show()
.. [1] J. Kowalik and J. F. Morrison, “Analysis of kinetic data for allosteric enzyme reactions as
a nonlinear regression problem”, Math. Biosci., vol. 2, pp. 57-66, 1968.
.. [2] B. M. Averick et al., “The MINPACK-2 Test Problem Collection”.
Further examples
^^^^^^^^^^^^^^^^
Three interactive examples below illustrate usage of :func:`least_squares` in
greater detail.
1. `Large-scale bundle adjustment in scipy <https://scipy-cookbook.readthedocs.io/items/bundle_adjustment.html>`_
demonstrates large-scale capabilities of :func:`least_squares` and how to
efficiently compute finite difference approximation of sparse Jacobian.
2. `Robust nonlinear regression in scipy <https://scipy-cookbook.readthedocs.io/items/robust_regression.html>`_
shows how to handle outliers with a robust loss function in a nonlinear
regression.
3. `Solving a discrete boundary-value problem in scipy <https://scipy-cookbook.readthedocs.io/items/discrete_bvp.html>`_
examines how to solve a large system of equations and use bounds to achieve
desired properties of the solution.
For the details about mathematical algorithms behind the implementation refer
to documentation of :func:`least_squares`.
Univariate function minimizers (:func:`minimize_scalar`)
--------------------------------------------------------
Often only the minimum of a univariate function (i.e., a function that
takes a scalar as input) is needed. In these circumstances, other
optimization techniques have been developed that can work faster. These are
accessible from the :func:`minimize_scalar` function, which proposes several
algorithms.
Unconstrained minimization (``method='brent'``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are, actually, two methods that can be used to minimize a univariate
function: `brent` and `golden`, but `golden` is included only for academic
purposes and should rarely be used. These can be respectively selected
through the `method` parameter in :func:`minimize_scalar`. The `brent`
method uses Brent's algorithm for locating a minimum. Optimally, a bracket
(the `bracket` parameter) should be given which contains the minimum desired. A
bracket is a triple :math:`\left( a, b, c \right)` such that :math:`f
\left( a \right) > f \left( b \right) < f \left( c \right)` and :math:`a <
b < c` . If this is not given, then alternatively two starting points can
be chosen and a bracket will be found from these points using a simple
marching algorithm. If these two starting points are not provided, `0` and
`1` will be used (this may not be the right choice for your function and
result in an unexpected minimum being returned).
Here is an example:
>>> from scipy.optimize import minimize_scalar
>>> f = lambda x: (x - 2) * (x + 1)**2
>>> res = minimize_scalar(f, method='brent')
>>> print(res.x)
1.0
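A bracket containing the minimum can also be supplied explicitly. For the same
function, :math:`(0, 1, 2)` is a valid bracket because :math:`f(0) > f(1) < f(2)`;
a minimal sketch (the result is formatted to avoid spurious digits):
>>> res = minimize_scalar(f, bracket=(0, 1, 2), method='brent')
>>> print(f"{res.x:.6f}")
1.000000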
Bounded minimization (``method='bounded'``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Very often, there are constraints that can be placed on the solution space
before minimization occurs. The `bounded` method in :func:`minimize_scalar`
is an example of a constrained minimization procedure that provides a
rudimentary interval constraint for scalar functions. The interval
constraint allows the minimization to occur only between two fixed
endpoints, specified using the mandatory `bounds` parameter.
For example, to find the minimum of :math:`J_{1}\left( x \right)` near
:math:`x=5` , :func:`minimize_scalar` can be called using the interval
:math:`\left[ 4, 7 \right]` as a constraint. The result is
:math:`x_{\textrm{min}}=5.3314` :
>>> from scipy.special import j1
>>> res = minimize_scalar(j1, bounds=(4, 7), method='bounded')
>>> res.x
5.33144184241
Custom minimizers
-----------------
Sometimes, it may be useful to use a custom method as a (multivariate
or univariate) minimizer, for example, when using some library wrappers
of :func:`minimize` (e.g., :func:`basinhopping`).
We can achieve that by, instead of passing a method name, passing
a callable (either a function or an object implementing a `__call__`
method) as the `method` parameter.
Let us consider an (admittedly rather virtual) need to use a trivial
custom multivariate minimization method that will just search the
neighborhood in each dimension independently with a fixed step size::
>>> from scipy.optimize import OptimizeResult
>>> def custmin(fun, x0, args=(), maxfev=None, stepsize=0.1,
... maxiter=100, callback=None, **options):
... bestx = x0
... besty = fun(x0)
... funcalls = 1
... niter = 0
... improved = True
... stop = False
...
... while improved and not stop and niter < maxiter:
... improved = False
... niter += 1
... for dim in range(np.size(x0)):
... for s in [bestx[dim] - stepsize, bestx[dim] + stepsize]:
... testx = np.copy(bestx)
... testx[dim] = s
... testy = fun(testx, *args)
... funcalls += 1
... if testy < besty:
... besty = testy
... bestx = testx
... improved = True
... if callback is not None:
... callback(bestx)
... if maxfev is not None and funcalls >= maxfev:
... stop = True
... break
...
... return OptimizeResult(fun=besty, x=bestx, nit=niter,
... nfev=funcalls, success=(niter > 1))
>>> x0 = [1.35, 0.9, 0.8, 1.1, 1.2]
>>> res = minimize(rosen, x0, method=custmin, options=dict(stepsize=0.05))
>>> res.x
array([1., 1., 1., 1., 1.])
This will work just as well in the case of univariate optimization::
>>> def custmin(fun, bracket, args=(), maxfev=None, stepsize=0.1,
... maxiter=100, callback=None, **options):
... bestx = (bracket[1] + bracket[0]) / 2.0
... besty = fun(bestx)
... funcalls = 1
... niter = 0
... improved = True
... stop = False
...
... while improved and not stop and niter < maxiter:
... improved = False
... niter += 1
... for testx in [bestx - stepsize, bestx + stepsize]:
... testy = fun(testx, *args)
... funcalls += 1
... if testy < besty:
... besty = testy
... bestx = testx
... improved = True
... if callback is not None:
... callback(bestx)
... if maxfev is not None and funcalls >= maxfev:
... stop = True
... break
...
... return OptimizeResult(fun=besty, x=bestx, nit=niter,
... nfev=funcalls, success=(niter > 1))
>>> def f(x):
... return (x - 2)**2 * (x + 2)**2
>>> res = minimize_scalar(f, bracket=(-3.5, 0), method=custmin,
... options=dict(stepsize = 0.05))
>>> res.x
-2.0
Root finding
------------
Scalar functions
^^^^^^^^^^^^^^^^
If one has a single-variable equation, there are multiple different root
finding algorithms that can be tried. Most of these algorithms require the
endpoints of an interval in which a root is expected (because the function
changes sign). In general, :obj:`brentq` is the best choice, but the other
methods may be useful in certain circumstances or for academic purposes.
When a bracket is not available, but one or more derivatives are available,
then :obj:`newton` (or ``halley``, ``secant``) may be applicable.
This is especially the case if the function is defined on a subset of the
complex plane, and the bracketing methods cannot be used.
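For example, here is a brief sketch of both approaches on :math:`f(x) = x^3 - 1`,
which has a single real root at :math:`x = 1` (the exact printed precision may vary)::
>>> from scipy.optimize import brentq, newton
>>> def f(x):
...     return x**3 - 1
>>> brentq(f, 0, 2)  # f changes sign on the bracket [0, 2]
1.0  # may vary
>>> newton(f, 1.5, fprime=lambda x: 3 * x**2)  # Newton-Raphson with the derivative
1.0  # may vary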
Fixed-point solving
^^^^^^^^^^^^^^^^^^^
A problem closely related to finding the zeros of a function is the
problem of finding a fixed point of a function. A fixed point of a
function is the point at which evaluation of the function returns the
point: :math:`g\left(x\right)=x.` Clearly, the fixed point of :math:`g`
is the root of :math:`f\left(x\right)=g\left(x\right)-x.`
Equivalently, the root of :math:`f` is the fixed point of
:math:`g\left(x\right)=f\left(x\right)+x.` The routine
:obj:`fixed_point` provides a simple iterative method using Aitken's
sequence acceleration to estimate the fixed point of :math:`g`, given a
starting point.
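As a small sketch (the exact printed value may vary), the function
:math:`g\left(x\right) = \sqrt{x + 2}` has a fixed point at :math:`x = 2`,
which :obj:`fixed_point` finds from a nearby starting point::
>>> import numpy as np
>>> from scipy.optimize import fixed_point
>>> def g(x):
...     return np.sqrt(x + 2)
>>> print(fixed_point(g, 1.0))
2.0  # may vary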
Sets of equations
^^^^^^^^^^^^^^^^^
Finding a root of a set of non-linear equations can be achieved using the
:func:`root` function. Several methods are available, amongst which ``hybr``
(the default) and ``lm``, which, respectively, use the hybrid method of Powell
and the Levenberg-Marquardt method from MINPACK.
The following example considers the single-variable transcendental
equation
.. math::
x+2\cos\left(x\right)=0,
a root of which can be found as follows::
>>> import numpy as np
>>> from scipy.optimize import root
>>> def func(x):
... return x + 2 * np.cos(x)
>>> sol = root(func, 0.3)
>>> sol.x
array([-1.02986653])
>>> sol.fun
array([ -6.66133815e-16])
Consider now a set of non-linear equations
.. math::
:nowrap:
\begin{eqnarray*}
x_{0}\cos\left(x_{1}\right) & = & 4,\\
x_{0}x_{1}-x_{1} & = & 5.
\end{eqnarray*}
We define the objective function so that it also returns the Jacobian and
indicate this by setting the ``jac`` parameter to ``True``. Also, the
Levenberg-Marquardt solver is used here.
::
>>> def func2(x):
... f = [x[0] * np.cos(x[1]) - 4,
... x[1]*x[0] - x[1] - 5]
... df = np.array([[np.cos(x[1]), -x[0] * np.sin(x[1])],
... [x[1], x[0] - 1]])
... return f, df
>>> sol = root(func2, [1, 1], jac=True, method='lm')
>>> sol.x
array([ 6.50409711, 0.90841421])
Root finding for large problems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Methods ``hybr`` and ``lm`` in :func:`root` cannot deal with a very large
number of variables (*N*), as they need to calculate and invert a dense *N
x N* Jacobian matrix on every Newton step. This becomes rather inefficient
when *N* grows.
Consider, for instance, the following problem: we need to solve the
following integrodifferential equation on the square
:math:`[0,1]\times[0,1]`:
.. math::
(\partial_x^2 + \partial_y^2) P + 5 \left(\int_0^1\int_0^1\cosh(P)\,dx\,dy\right)^2 = 0
with the boundary condition :math:`P(x,1) = 1` on the upper edge and
:math:`P=0` elsewhere on the boundary of the square. This can be done
by approximating the continuous function *P* by its values on a grid,
:math:`P_{n,m}\approx{}P(n h, m h)`, with a small grid spacing
*h*. The derivatives and integrals can then be approximated; for
instance :math:`\partial_x^2 P(x,y)\approx{}(P(x+h,y) - 2 P(x,y) +
P(x-h,y))/h^2`. The problem is then equivalent to finding the root of
some function ``residual(P)``, where ``P`` is a vector of length
:math:`N_x N_y`.
Now, because :math:`N_x N_y` can be large, methods ``hybr`` or ``lm`` in
:func:`root` will take a long time to solve this problem. The solution can,
however, be found using one of the large-scale solvers, for example
``krylov``, ``broyden2``, or ``anderson``. These use what is known as the
inexact Newton method, which instead of computing the Jacobian matrix
exactly, forms an approximation for it.
The problem we have can now be solved as follows:
.. plot::
:alt: "This code generates a 2-D heatmap with Z values from 0 to 1. The graph resembles a smooth, dark blue-green, U shape, with an open yellow top. The right, bottom, and left edges have a value near zero and the top has a value close to 1. The center of the solution space has a value close to 0.8."
import numpy as np
from scipy.optimize import root
from numpy import cosh, zeros_like, mgrid, zeros
# parameters
nx, ny = 75, 75
hx, hy = 1./(nx-1), 1./(ny-1)
P_left, P_right = 0, 0
P_top, P_bottom = 1, 0
def residual(P):
d2x = zeros_like(P)
d2y = zeros_like(P)
d2x[1:-1] = (P[2:] - 2*P[1:-1] + P[:-2]) / hx/hx
d2x[0] = (P[1] - 2*P[0] + P_left)/hx/hx
d2x[-1] = (P_right - 2*P[-1] + P[-2])/hx/hx
d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
d2y[:,0] = (P[:,1] - 2*P[:,0] + P_bottom)/hy/hy
d2y[:,-1] = (P_top - 2*P[:,-1] + P[:,-2])/hy/hy
return d2x + d2y + 5*cosh(P).mean()**2
# solve
guess = zeros((nx, ny), float)
sol = root(residual, guess, method='krylov', options={'disp': True})
#sol = root(residual, guess, method='broyden2', options={'disp': True, 'max_rank': 50})
#sol = root(residual, guess, method='anderson', options={'disp': True, 'M': 10})
print('Residual: %g' % abs(residual(sol.x)).max())
# visualize
import matplotlib.pyplot as plt
x, y = mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
plt.pcolormesh(x, y, sol.x, shading='gouraud')
plt.colorbar()
plt.show()
Still too slow? Preconditioning.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When looking for the zero of the functions :math:`f_i({\bf x}) = 0`,
*i = 1, 2, ..., N*, the ``krylov`` solver spends most of the
time inverting the Jacobian matrix,
.. math:: J_{ij} = \frac{\partial f_i}{\partial x_j} .
If you have an approximation for the inverse matrix
:math:`M\approx{}J^{-1}`, you can use it for *preconditioning* the
linear-inversion problem. The idea is that instead of solving
:math:`J{\bf s}={\bf y}` one solves :math:`MJ{\bf s}=M{\bf y}`: since
matrix :math:`MJ` is "closer" to the identity matrix than :math:`J`
is, the equation should be easier for the Krylov method to deal with.
The matrix *M* can be passed to :func:`root` with method ``krylov`` as an
option ``options['jac_options']['inner_M']``. It can be a (sparse) matrix
or a :obj:`scipy.sparse.linalg.LinearOperator` instance.
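The call then looks like the following sketch, assuming a preconditioner ``M``
(a placeholder here; constructing one for our problem is discussed below) has
already been built::
# M is assumed to approximate the inverse Jacobian, e.g., as a LinearOperator
sol = root(residual, guess, method='krylov',
           options={'jac_options': {'inner_M': M}})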
For the problem in the previous section, we note that the function to
solve consists of two parts: the first one is the application of the
Laplace operator, :math:`[\partial_x^2 + \partial_y^2] P`, and the second
is the integral. We can actually easily compute the Jacobian corresponding
to the Laplace operator part: we know that in 1-D
.. math::
\partial_x^2 \approx \frac{1}{h_x^2} \begin{pmatrix}
-2 & 1 & 0 & 0 \cdots \\
1 & -2 & 1 & 0 \cdots \\
0 & 1 & -2 & 1 \cdots \\
\ldots
\end{pmatrix}
= h_x^{-2} L
so that the whole 2-D operator is represented by
.. math::
J_1 = \partial_x^2 + \partial_y^2
\simeq
h_x^{-2} L \otimes I + h_y^{-2} I \otimes L
The matrix :math:`J_2` of the Jacobian corresponding to the integral
is more difficult to calculate, and since *all* of its entries are
nonzero, it will be difficult to invert. :math:`J_1`, on the other hand,
is a relatively simple matrix, and can be inverted by
:obj:`scipy.sparse.linalg.splu` (or the inverse can be approximated by
:obj:`scipy.sparse.linalg.spilu`). So we are content to take
:math:`M\approx{}J_1^{-1}` and hope for the best.
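A condensed sketch of how such a preconditioner could be assembled (the full
example referenced below does this together with the actual solve; the sparse
format conversion is only one reasonable choice)::
import numpy as np
from scipy.sparse import spdiags, kron, identity
from scipy.sparse.linalg import spilu, LinearOperator
# grid parameters as in the problem above
nx, ny = 75, 75
hx, hy = 1./(nx - 1), 1./(ny - 1)
# 1-D second-difference operators h**(-2) * L in each direction
ex = np.ones(nx)
Lx = spdiags([ex, -2*ex, ex], [-1, 0, 1], nx, nx) * (1.0 / hx**2)
ey = np.ones(ny)
Ly = spdiags([ey, -2*ey, ey], [-1, 0, 1], ny, ny) * (1.0 / hy**2)
# 2-D operator J_1 = hx**(-2) L (x) I + hy**(-2) I (x) L
J1 = kron(Lx, identity(ny)) + kron(identity(nx), Ly)
# an incomplete LU factorization gives an approximate inverse of J_1;
# wrap its solve method as a LinearOperator for the Krylov solver
J1_ilu = spilu(J1.tocsc())
M = LinearOperator(shape=(nx*ny, nx*ny), matvec=J1_ilu.solve)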
In the example below, we use the preconditioner :math:`M=J_1^{-1}`.
.. literalinclude:: examples/newton_krylov_preconditioning.py
Resulting run, first without preconditioning::
0: |F(x)| = 803.614; step 1; tol 0.000257947
1: |F(x)| = 345.912; step 1; tol 0.166755
2: |F(x)| = 139.159; step 1; tol 0.145657
3: |F(x)| = 27.3682; step 1; tol 0.0348109
4: |F(x)| = 1.03303; step 1; tol 0.00128227
5: |F(x)| = 0.0406634; step 1; tol 0.00139451
6: |F(x)| = 0.00344341; step 1; tol 0.00645373
7: |F(x)| = 0.000153671; step 1; tol 0.00179246
8: |F(x)| = 6.7424e-06; step 1; tol 0.00173256
Residual 3.57078908664e-07
Evaluations 317
and then with preconditioning::
0: |F(x)| = 136.993; step 1; tol 7.49599e-06
1: |F(x)| = 4.80983; step 1; tol 0.00110945
2: |F(x)| = 0.195942; step 1; tol 0.00149362
3: |F(x)| = 0.000563597; step 1; tol 7.44604e-06
4: |F(x)| = 1.00698e-09; step 1; tol 2.87308e-12
Residual 9.29603061195e-11
Evaluations 77
Using a preconditioner reduced the number of evaluations of the
``residual`` function by a factor of *4*. For problems where the
residual is expensive to compute, good preconditioning can be crucial
--- it can even decide whether the problem is solvable in practice or
not.
Preconditioning is an art, science, and industry. Here, we were lucky
in making a simple choice that worked reasonably well, but there is a
lot more depth to this topic than is shown here.
Linear programming (:func:`linprog`)
------------------------------------
The function :func:`linprog` can minimize a linear objective function
subject to linear equality and inequality constraints. This kind of
problem is known as linear programming. Linear programming solves
problems of the following form:
.. math::
\min_x \ & c^T x \\
\mbox{such that} \ & A_{ub} x \leq b_{ub},\\
& A_{eq} x = b_{eq},\\
& l \leq x \leq u ,
where :math:`x` is a vector of decision variables; :math:`c`, :math:`b_{ub}`,
:math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and :math:`A_{ub}` and
:math:`A_{eq}` are matrices.
In this tutorial, we will try to solve a typical linear programming
problem using :func:`linprog`.
Linear programming example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consider the following simple linear programming problem:
.. math::
\max_{x_1, x_2, x_3, x_4} \ & 29x_1 + 45x_2 \\
\mbox{such that} \
& x_1 -x_2 -3x_3 \leq 5\\
& 2x_1 -3x_2 -7x_3 + 3x_4 \geq 10\\
& 2x_1 + 8x_2 + x_3 = 60\\
& 4x_1 + 4x_2 + x_4 = 60\\
& 0 \leq x_1\\
& 0 \leq x_2 \leq 5\\
& x_3 \leq 0.5\\
& -3 \leq x_4\\
We need some mathematical manipulations to convert the target problem to the form accepted by :func:`linprog`.
First of all, let's consider the objective function.
We want to maximize the objective function, but :func:`linprog` can only accept a minimization problem. This is easily
remedied by converting the maximization of :math:`29x_1 + 45x_2` into the minimization of :math:`-29x_1 - 45x_2`. Also,
:math:`x_3, x_4` do not appear in the objective function, so their weights are zero. The objective function can therefore
be written as:
.. math::
\min_{x_1, x_2, x_3, x_4} \ -29x_1 -45x_2 + 0x_3 + 0x_4
If we define the vector of decision variables :math:`x = [x_1, x_2, x_3, x_4]^T`, the objective weights vector :math:`c` of :func:`linprog` in this problem
should be
.. math::
c = [-29, -45, 0, 0]^T
Next, let's consider the two inequality constraints. The first one is a "less than" inequality, so it is already in the form accepted by `linprog`.
The second one is a "greater than" inequality, so we need to multiply both sides by :math:`-1` to convert it to a "less than" inequality.
Explicitly showing zero coefficients, we have:
.. math::
x_1 -x_2 -3x_3 + 0x_4 &\leq 5\\
-2x_1 + 3x_2 + 7x_3 - 3x_4 &\leq -10\\
These equations can be converted to matrix form:
.. math::
A_{ub} x \leq b_{ub}\\
where
.. math::
:nowrap:
\begin{equation*} A_{ub} =
\begin{bmatrix} 1 & -1 & -3 & 0 \\
-2 & 3 & 7 & -3
\end{bmatrix}
\end{equation*}
.. math::
:nowrap:
\begin{equation*} b_{ub} =
\begin{bmatrix} 5 \\
-10
\end{bmatrix}
\end{equation*}
Next, let's consider the two equality constraints. Showing zero weights explicitly, these are:
.. math::
2x_1 + 8x_2 + 1x_3 + 0x_4 &= 60\\
4x_1 + 4x_2 + 0x_3 + 1x_4 &= 60\\
These equations can be converted to matrix form:
.. math::
A_{eq} x = b_{eq}\\
where
.. math::
:nowrap:
\begin{equation*} A_{eq} =
\begin{bmatrix} 2 & 8 & 1 & 0 \\
4 & 4 & 0 & 1
\end{bmatrix}
\end{equation*}
.. math::
:nowrap:
\begin{equation*} b_{eq} =
\begin{bmatrix} 60 \\
60
\end{bmatrix}
\end{equation*}
Lastly, let's consider the separate inequality constraints on individual decision variables, which are known as
"box constraints" or "simple bounds". These constraints can be applied using the bounds argument of :func:`linprog`.
As noted in the :func:`linprog` documentation, the default value of bounds is ``(0, None)``, meaning that the
lower bound on each decision variable is 0, and the upper bound on each decision variable is infinity:
all the decision variables are non-negative. Our bounds are different, so we will need to specify the lower and upper bound on each
decision variable as a tuple and group these tuples into a list.
Finally, we can solve the transformed problem using :func:`linprog`.
::
>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-29.0, -45.0, 0.0, 0.0])
>>> A_ub = np.array([[1.0, -1.0, -3.0, 0.0],
... [-2.0, 3.0, 7.0, -3.0]])
>>> b_ub = np.array([5.0, -10.0])
>>> A_eq = np.array([[2.0, 8.0, 1.0, 0.0],
... [4.0, 4.0, 0.0, 1.0]])
>>> b_eq = np.array([60.0, 60.0])
>>> x0_bounds = (0, None)
>>> x1_bounds = (0, 5.0)
>>> x2_bounds = (-np.inf, 0.5) # +/- np.inf can be used instead of None
>>> x3_bounds = (-3.0, None)
>>> bounds = [x0_bounds, x1_bounds, x2_bounds, x3_bounds]
>>> result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
>>> print(result.message)
The problem is infeasible. (HiGHS Status 8: model_status is Infeasible; primal_status is None)
The result states that our problem is infeasible, meaning that there is no solution vector that satisfies all the
constraints. That doesn't necessarily mean we did anything wrong; some problems truly are infeasible.
Suppose, however, that we were to decide that our bound constraint on :math:`x_2` was too tight and that it could be loosened
to :math:`0 \leq x_2 \leq 6`. After adjusting our code with ``x1_bounds = (0, 6)`` (the zero-based name in the code refers to
the second decision variable, :math:`x_2`) and executing it again:
::
>>> x1_bounds = (0, 6)
>>> bounds = [x0_bounds, x1_bounds, x2_bounds, x3_bounds]
>>> result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
>>> print(result.message)
Optimization terminated successfully. (HiGHS Status 7: Optimal)
The result shows the optimization was successful.
We can check that the objective value (``result.fun``) is the same as :math:`c^T x`:
::
>>> x = np.array(result.x)
>>> obj = result.fun
>>> print(c @ x)
-505.97435889013434 # may vary
>>> print(obj)
-505.97435889013434 # may vary
We can also check that all constraints are satisfied within reasonable tolerances:
::
>>> print(b_ub - (A_ub @ x).flatten()) # this is equivalent to result.slack
[ 6.52747190e-10, -2.26730279e-09] # may vary
>>> print(b_eq - (A_eq @ x).flatten()) # this is equivalent to result.con
[ 9.78840831e-09, 1.04662945e-08] # may vary
>>> print([0 <= result.x[0], 0 <= result.x[1] <= 6.0, result.x[2] <= 0.5, -3.0 <= result.x[3]])
[True, True, True, True]
Assignment problems
-------------------
Linear sum assignment problem example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consider the problem of selecting students for a swimming medley relay team.
We have a table showing times for each swimming style of five students:
========== =========== ============ =========== ===============================
Student backstroke breaststroke butterfly freestyle
========== =========== ============ =========== ===============================
A 43.5 47.1 48.4 38.2
B 45.5 42.1 49.6 36.8
C 43.4 39.1 42.1 43.2
D 46.5 44.1 44.5 41.2
E 46.3 47.8 50.4 37.2
========== =========== ============ =========== ===============================
We need to choose a student for each of the four swimming styles such that
the total relay time is minimized.
This is a typical linear sum assignment problem. We can use :func:`linear_sum_assignment` to solve it.
The linear sum assignment problem is one of the most famous combinatorial optimization problems.
Given a "cost matrix" :math:`C`, the problem is to choose
- exactly one element from each row
- without choosing more than one element from any column
- such that the sum of the chosen elements is minimized
In other words, we need to assign each row to one column such that the sum of
the corresponding entries is minimized.
Formally, let :math:`X` be a boolean matrix where :math:`X[i,j] = 1` iff row :math:`i` is assigned to column :math:`j`.
Then the optimal assignment has cost
.. math::
\min \sum_i \sum_j C_{i,j} X_{i,j}
The first step is to define the cost matrix.
In this example, we want to assign each swimming style to a student.
:func:`linear_sum_assignment` is able to assign each row of a cost matrix to a column.
Therefore, to form the cost matrix, the table above needs to be transposed so that the rows
correspond with swimming styles and the columns correspond with students:
::
>>> import numpy as np
>>> cost = np.array([[43.5, 45.5, 43.4, 46.5, 46.3],
... [47.1, 42.1, 39.1, 44.1, 47.8],
... [48.4, 49.6, 42.1, 44.5, 50.4],
... [38.2, 36.8, 43.2, 41.2, 37.2]])
We can solve the assignment problem with :func:`linear_sum_assignment`:
::
>>> from scipy.optimize import linear_sum_assignment
>>> row_ind, col_ind = linear_sum_assignment(cost)
``row_ind`` and ``col_ind`` give the row and column indices of the optimal assignment within the cost matrix:
::
>>> row_ind
array([0, 1, 2, 3])
>>> col_ind
array([0, 2, 3, 1])
The optimal assignment is:
::
>>> styles = np.array(["backstroke", "breaststroke", "butterfly", "freestyle"])[row_ind]
>>> students = np.array(["A", "B", "C", "D", "E"])[col_ind]
>>> dict(zip(styles, students))
{'backstroke': 'A', 'breaststroke': 'C', 'butterfly': 'D', 'freestyle': 'B'}
The optimal total medley time is:
::
>>> cost[row_ind, col_ind].sum()
163.89999999999998
Note that this result is not the same as the sum of the minimum times for each swimming style:
::
>>> np.min(cost, axis=1).sum()
161.39999999999998
because student "C" is the best swimmer in both "breaststroke" and "butterfly" style.
We cannot assign student "C" to both styles, so we assigned student C to the "breaststroke" style
and D to the "butterfly" style to minimize the total time.
.. rubric:: References
Some further reading and related software, such as Newton-Krylov [KK]_,
PETSc [PP]_, and PyAMG [AMG]_:
.. [KK] D.A. Knoll and D.E. Keyes, "Jacobian-free Newton-Krylov methods",
J. Comp. Phys. 193, 357 (2004). :doi:`10.1016/j.jcp.2003.08.010`
.. [PP] PETSc https://www.mcs.anl.gov/petsc/ and its Python bindings
https://bitbucket.org/petsc/petsc4py/
.. [AMG] PyAMG (algebraic multigrid preconditioners/solvers)
https://github.com/pyamg/pyamg/issues
.. _tutorial-optimize_milp:
Mixed integer linear programming
---------------------------------
Knapsack problem example
^^^^^^^^^^^^^^^^^^^^^^^^^
The knapsack problem is a well known combinatorial optimization problem.
Given a set of items, each with a size and a value, the problem is to choose
the items that maximize the total value under the condition that the total size
is below a certain threshold.
Formally, let
- :math:`x_i` be a boolean variable that indicates whether item :math:`i` is
included in the knapsack,
- :math:`n` be the total number of items,
- :math:`v_i` be the value of item :math:`i`,
- :math:`s_i` be the size of item :math:`i`, and
- :math:`C` be the capacity of the knapsack.
Then the problem is:
.. math::
\max \sum_{i=1}^n v_{i} x_{i}
.. math::
\text{subject to} \quad \sum_{i=1}^n s_{i} x_{i} \leq C, \quad x_{i} \in \{0, 1\}
Although the objective function and inequality constraints are linear in the
*decision variables* :math:`x_i`, this differs from a typical linear
programming problem in that the decision variables can only assume integer
values. Specifically, our decision variables can only be :math:`0` or
:math:`1`, so this is known as a *binary integer linear program* (BILP). Such
a problem falls within the larger class of *mixed integer linear programs*
(MILPs), which we can solve with :func:`milp`.
In our example, there are 8 items to choose from, and the size and value of
each is specified as follows.
::
>>> import numpy as np
>>> from scipy import optimize
>>> sizes = np.array([21, 11, 15, 9, 34, 25, 41, 52])
>>> values = np.array([22, 12, 16, 10, 35, 26, 42, 53])
We need to constrain our eight decision variables to be binary. We do so
by adding a :class:`Bounds` constraint to ensure that they lie between
:math:`0` and :math:`1`, and we apply "integrality" constraints to ensure that
they are *either* :math:`0` *or* :math:`1`.
::
>>> bounds = optimize.Bounds(0, 1) # 0 <= x_i <= 1
>>> integrality = np.full_like(values, True) # x_i are integers
The knapsack capacity constraint is specified using :class:`LinearConstraint`.
::
>>> capacity = 100
>>> constraints = optimize.LinearConstraint(A=sizes, lb=0, ub=capacity)
If we are following the usual rules of linear algebra, the input ``A`` should
be a two-dimensional matrix, and the lower and upper bounds ``lb`` and ``ub``
should be one-dimensional vectors, but :class:`LinearConstraint` is forgiving
as long as the inputs can be broadcast to consistent shapes.
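For instance, a strictly two-dimensional form of the same constraint (a sketch,
equivalent to the call above) could be written as::
>>> constraints_2d = optimize.LinearConstraint(A=sizes.reshape(1, -1),  # shape (1, 8)
...                                            lb=[0], ub=[capacity])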
Using the variables defined above, we can solve the knapsack problem using
:func:`milp`. Note that :func:`milp` minimizes the objective function, but we
want to maximize the total value, so we set ``c`` to the negative of the values.
::
>>> from scipy.optimize import milp
>>> res = milp(c=-values, constraints=constraints,
... integrality=integrality, bounds=bounds)
Let's check the result:
::
>>> res.success
True
>>> res.x
array([1., 1., 0., 1., 1., 1., 0., 0.])
This means that we should select items 1, 2, 4, 5, and 6 to optimize the total
value under the size constraint. Note that this is different from what we would
have obtained had we solved the *linear programming relaxation* (without
integrality constraints) and attempted to round the decision variables.
::
>>> from scipy.optimize import milp
>>> res = milp(c=-values, constraints=constraints,
... integrality=False, bounds=bounds)
>>> res.x
array([1. , 1. , 1. , 1. ,
0.55882353, 1. , 0. , 0. ])
If we were to round this solution up to
``array([1., 1., 1., 1., 1., 1., 0., 0.])``, our knapsack would be over the
capacity constraint, whereas if we were to round down to
``array([1., 1., 1., 1., 0., 1., 0., 0.])``, we would have a sub-optimal
solution.
For more MILP tutorials, see the Jupyter notebooks on SciPy Cookbooks:
- `Compressed Sensing l1 program <https://nbviewer.org/github/scipy/scipy-cookbook/blob/main/ipython/LinearAndMixedIntegerLinearProgramming/compressed_sensing_milp_tutorial_1.ipynb>`_
- `Compressed Sensing l0 program <https://nbviewer.org/github/scipy/scipy-cookbook/blob/main/ipython/LinearAndMixedIntegerLinearProgramming/compressed_sensing_milp_tutorial_2.ipynb>`_
Parallel execution support
---------------------------
Some SciPy optimization methods, such as :func:`differential_evolution`, offer
parallelization through the use of a ``workers`` keyword.
For :func:`differential_evolution` there are two loop (iteration) levels in the
algorithm. The outer loop represents successive generations of the population; this
loop can't be parallelized. Within a given generation, candidate solutions are
generated that have to be compared against existing population members. The fitness
of the candidates can be evaluated in a loop, but it's also possible to parallelize
the calculation.
Parallelization is also possible in other optimization algorithms. For example, in
various :func:`minimize` methods, numerical differentiation is used to estimate
derivatives. For a simple gradient calculation using two-point forward differences, a
total of ``N + 1`` objective function evaluations has to be done, where ``N`` is the
number of parameters: ``N`` small perturbations around a given location, plus the
location itself (the "+ 1"). Those ``N + 1`` evaluations can be done in parallel. The
numerical derivatives are then used by the minimization algorithm to generate new
steps. Optimization algorithms differ considerably in how they work, but they often
have locations where multiple objective function evaluations are required before the
algorithm does something else. Those locations are what can be parallelized. There
are therefore common characteristics in how ``workers`` is used. These
commonalities are described below.
If an int is supplied, then a :class:`multiprocessing.Pool <multiprocessing.pool.Pool>` is
created, and its ``map`` method is used to evaluate solutions in parallel. With this
approach the objective function must be pickleable; lambda functions do not meet that
requirement.
::
>>> import numpy as np
>>> from scipy.optimize import rosen, differential_evolution, Bounds
>>> bnds = Bounds([0., 0., 0.], [10., 10., 10.])
>>> res = differential_evolution(rosen, bnds, workers=2, updating='deferred')
It is also possible to use a map-like callable as a worker. The map-like callable
receives a series of parameter vectors from the optimization algorithm and needs to
evaluate each vector against the objective function.
In the following example we use the ``map`` method of
:class:`multiprocessing.Pool <multiprocessing.pool.Pool>` as the map-like. As before,
the objective function still needs to be pickleable. This example is semantically
identical to the previous one.
::
>>> from multiprocessing import Pool
>>> with Pool(2) as pwl:
... res = differential_evolution(rosen, bnds, workers=pwl.map, updating='deferred')
It can be an advantage to use this pattern because the Pool can be re-used for further
calculations; there is a significant amount of overhead in creating these objects.
Alternatives to :class:`multiprocessing.Pool <multiprocessing.pool.Pool>` include the
`mpi4py <https://mpi4py.readthedocs.io/en/stable/>`_ package, which enables parallel
processing on clusters.
In SciPy 1.16.0 the ``workers`` keyword was introduced to selected :func:`minimize`
methods. Here parallelization is typically applied during numerical differentiation.
Either of the two approaches outlined above can be used, although it's strongly
advised to supply the map-like callable due to the overhead of creating new processes.
Performance gains will only be made if the objective function is expensive to
calculate.
Let's compare how much parallelization can help compared to the serial version. To
simulate a slow function we use the ``time`` package.
::
>>> import time
>>> def slow_func(x):
... time.sleep(0.0002)
... return rosen(x)
Examine the serial minimization first::
In [1]: rng = np.random.default_rng()
In [2]: x0 = rng.uniform(low=0.0, high=10.0, size=(20,))
In [3]: %timeit minimize(slow_func, x0, method='L-BFGS-B') # serial approach
365 ms ± 6.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # may vary
Now the parallel version::
In [4]: with Pool() as pwl: # parallel approach
... %timeit minimize(slow_func, x0, method='L-BFGS-B', options={'workers':pwl.map})
70.5 ms ± 146 μs per loop (mean ± std. dev. of 7 runs, 1 loop each) # may vary
If the objective function can be vectorized, then a map-like can be used to take
advantage of vectorization during function evaluation. Vectorization means that the
objective function can carry out the required calculations in a single (rather than
multiple) call, which is typically very efficient::
In [5]: def vectorized_maplike(fun, iterable):
... arr = np.array([i for i in iter(iterable)]) # arr.shape = (S, N)
... arr_t = arr.T # arr_t.shape = (N, S)
... r = slow_func(arr_t) # calculation vectorized over S
... return r
In [6]: %timeit minimize(slow_func, x0, method='L-BFGS-B', options={'workers':vectorized_maplike})
38.9 ms ± 734 μs per loop (mean ± std. dev. of 7 runs, 10 loops each) # may vary
There are several important points to note about this example:
* The iterable represents the series of parameter vectors that the algorithm wishes
to be evaluated.
* The iterable is first converted to an iterator, before being made into an array via
a list comprehension. This allows the iterable to be a generator, list, array, etc.
* Within the map-like the calculation is done using ``slow_func`` instead of using
``fun``. The map-like is actually supplied with a wrapped version of the objective
function. The wrapping is used to detect various types of common user errors,
including checking whether the objective function returns a scalar. If ``fun`` is
used then a :class:`RuntimeError` will result, because ``fun(arr_t)`` will be a 1-D
array and not a scalar. We therefore use ``slow_func`` directly.
* ``arr.T`` is sent to the objective function. This is because ``arr.shape == (S, N)``,
where ``S`` is the number of parameter vectors to evaluate and ``N`` is the number of
variables. For ``slow_func`` vectorization occurs on ``(N, S)`` shaped arrays.
* This approach is not needed for :func:`differential_evolution` as that minimizer
already has a keyword for vectorization.