File: optim.texi

363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
1273
1274
1275
1276
1277
1278
1279
1280
1281
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
1342
1343
1344
1345
1346
1347
1348
1349
1350
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
@c DO NOT EDIT!  Generated automatically by munge-texi.pl.

@c Copyright (C) 1996-2025 The Octave Project Developers
@c
@c This file is part of Octave.
@c
@c Octave is free software: you can redistribute it and/or modify it
@c under the terms of the GNU General Public License as published by
@c the Free Software Foundation, either version 3 of the License, or
@c (at your option) any later version.
@c
@c Octave is distributed in the hope that it will be useful, but
@c WITHOUT ANY WARRANTY; without even the implied warranty of
@c MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
@c GNU General Public License for more details.
@c
@c You should have received a copy of the GNU General Public License
@c along with Octave; see the file COPYING.  If not, see
@c <https://www.gnu.org/licenses/>.

@node Optimization
@chapter Optimization

Octave comes with support for solving various kinds of optimization
problems.  Specifically, Octave can solve problems in Linear Programming,
Quadratic Programming, Nonlinear Programming, and Linear Least Squares
Minimization.

@menu
* Linear Programming::
* Quadratic Programming::
* Nonlinear Programming::
* Linear Least Squares::
@end menu

@c @cindex linear programming
@cindex quadratic programming
@cindex nonlinear programming
@cindex optimization
@cindex LP
@cindex QP
@cindex NLP

@node Linear Programming
@section Linear Programming

Octave can solve Linear Programming problems using the @code{glpk}
function.  That is, Octave can solve

@tex
$$
  \min_x c^T x
$$
@end tex
@ifnottex

@example
min C'*x
@end example

@end ifnottex
subject to the linear constraints
@tex
$Ax = b$ where $x \geq 0$.
@end tex
@ifnottex
@math{A*x = b} where @math{x @geq{} 0}.
@end ifnottex

@noindent
The @code{glpk} function also supports variations of this problem.

@c glpk scripts/optimization/glpk.m
@anchor{XREFglpk}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn {} {[@var{xopt}, @var{fmin}, @var{errnum}, @var{extra}] =} glpk (@var{c}, @var{A}, @var{b}, @var{lb}, @var{ub}, @var{ctype}, @var{vartype}, @var{sense}, @var{param})
Solve a linear program using the GNU @sc{glpk} library.

Given three arguments, @code{glpk} solves the following standard LP:
@tex
$$
  \min_x C^T x
$$
@end tex
@ifnottex

@example
min C'*x
@end example

@end ifnottex
subject to
@tex
$$
  Ax = b \qquad x \geq 0
$$
@end tex
@ifnottex

@example
@group
A*x  = b
  x >= 0
@end group
@end example

@end ifnottex
but may also solve problems of the form
@tex
$$
  [ \min_x | \max_x ] C^T x
$$
@end tex
@ifnottex

@example
[ min | max ] C'*x
@end example

@end ifnottex
subject to
@tex
$$
 Ax [ = | \leq | \geq ] b \qquad LB \leq x \leq UB
$$
@end tex
@ifnottex

@example
@group
A*x [ "=" | "<=" | ">=" ] b
  x >= LB
  x <= UB
@end group
@end example

@end ifnottex

Input arguments:

@table @var
@item c
A column array containing the objective function coefficients.

@item A
A matrix containing the constraint coefficients.

@item b
A column array containing the right-hand side value for each constraint in
the constraint matrix.

@item lb
An array containing the lower bound on each of the variables.  If @var{lb}
is not supplied, the default lower bound for the variables is zero.

@item ub
An array containing the upper bound on each of the variables.  If @var{ub}
is not supplied, the default upper bound is assumed to be infinite.

@item ctype
An array of characters containing the sense of each constraint in the
constraint matrix.  Each element of the array may be one of the following
values

@table @asis
@item @qcode{"F"}
A free (unbounded) constraint (the constraint is ignored).

@item @qcode{"U"}
An inequality constraint with an upper bound (@code{A(i,:)*x <= b(i)}).

@item @qcode{"S"}
An equality constraint (@code{A(i,:)*x = b(i)}).

@item @qcode{"L"}
An inequality with a lower bound (@code{A(i,:)*x >= b(i)}).

@item @qcode{"D"}
An inequality constraint with both upper and lower bounds
(@code{A(i,:)*x >= -b(i)}) @emph{and} (@code{A(i,:)*x <= b(i)}).
@end table

@item vartype
A column array containing the types of the variables.

@table @asis
@item @qcode{"C"}
A continuous variable.

@item @qcode{"I"}
An integer variable.
@end table

@item sense
If @var{sense} is 1, the problem is a minimization.  If @var{sense} is -1,
the problem is a maximization.  The default value is 1.

@item param
A structure containing the following parameters used to define the
behavior of the solver.  Missing elements in the structure take on default
values, so you only need to set the elements that you wish to change from
the default.

Integer parameters:

@table @code
@item msglev (default: 1)
Level of messages output by solver routines:

@table @asis
@item 0 (@w{@code{GLP_MSG_OFF}})
No output.

@item 1 (@w{@code{GLP_MSG_ERR}})
Error and warning messages only.

@item 2 (@w{@code{GLP_MSG_ON}})
Normal output.

@item 3 (@w{@code{GLP_MSG_ALL}})
Full output (includes informational messages).
@end table

@item scale (default: 16)
Scaling option.  The values can be combined with the bitwise OR operator and
may be the following:

@table @asis
@item 1 (@w{@code{GLP_SF_GM}})
Geometric mean scaling.

@item 16 (@w{@code{GLP_SF_EQ}})
Equilibration scaling.

@item 32 (@w{@code{GLP_SF_2N}})
Round scale factors to power of two.

@item 64 (@w{@code{GLP_SF_SKIP}})
Skip if problem is well scaled.
@end table

Alternatively, a value of 128 (@w{@code{GLP_SF_AUTO}}) may also be
specified, in which case the routine chooses the scaling options
automatically.

@item dual (default: 1)
Simplex method option:

@table @asis
@item 1 (@w{@code{GLP_PRIMAL}})
Use two-phase primal simplex.

@item 2 (@w{@code{GLP_DUALP}})
Use two-phase dual simplex, and if it fails, switch to the primal simplex.

@item 3 (@w{@code{GLP_DUAL}})
Use two-phase dual simplex.
@end table

@item price (default: 34)
Pricing option (for both primal and dual simplex):

@table @asis
@item 17 (@w{@code{GLP_PT_STD}})
Textbook pricing.

@item 34 (@w{@code{GLP_PT_PSE}})
Steepest edge pricing.
@end table

@item itlim (default: intmax)
Simplex iterations limit.  It is decreased by one each time a simplex
iteration is performed, and reaching zero signals the solver to
stop the search.

@item outfrq (default: 200)
Output frequency, in iterations.  This parameter specifies how frequently
the solver sends information about the solution to the standard output.

@item branch (default: 4)
Branching technique option (for MIP only):

@table @asis
@item 1 (@w{@code{GLP_BR_FFV}})
First fractional variable.

@item 2 (@w{@code{GLP_BR_LFV}})
Last fractional variable.

@item 3 (@w{@code{GLP_BR_MFV}})
Most fractional variable.

@item 4 (@w{@code{GLP_BR_DTH}})
Heuristic by @nospell{Driebeck and Tomlin}.

@item 5 (@w{@code{GLP_BR_PCH}})
Hybrid @nospell{pseudocost} heuristic.
@end table

@item btrack (default: 4)
Backtracking technique option (for MIP only):

@table @asis
@item 1 (@w{@code{GLP_BT_DFS}})
Depth first search.

@item 2 (@w{@code{GLP_BT_BFS}})
Breadth first search.

@item 3 (@w{@code{GLP_BT_BLB}})
Best local bound.

@item 4 (@w{@code{GLP_BT_BPH}})
Best projection heuristic.
@end table

@item presol (default: 1)
If this flag is set, the simplex solver uses the built-in LP presolver.
Otherwise the LP presolver is not used.

@item lpsolver (default: 1)
Select which solver to use.  If the problem is a MIP problem this flag
will be ignored.

@table @asis
@item 1
Revised simplex method.

@item 2
Interior point method.
@end table

@item rtest (default: 34)
Ratio test technique:

@table @asis
@item 17 (@w{@code{GLP_RT_STD}})
Standard ("textbook").

@item 34 (@w{@code{GLP_RT_HAR}})
Harris' two-pass ratio test.
@end table

@item tmlim (default: intmax)
Searching time limit, in milliseconds.

@item outdly (default: 0)
Output delay, in seconds.  This parameter specifies how long the solver
should delay sending information about the solution to the standard output.

@item save (default: 0)
If this parameter is nonzero, save a copy of the problem in @nospell{CPLEX}
LP format to the file @file{"outpb.lp"}.  There is currently no way to
change the name of the output file.
@end table

Real parameters:

@table @code
@item tolbnd (default: 1e-7)
Relative tolerance used to check if the current basic solution is primal
feasible.  It is not recommended that you change this parameter unless you
have a detailed understanding of its purpose.

@item toldj (default: 1e-7)
Absolute tolerance used to check if the current basic solution is dual
feasible.  It is not recommended that you change this parameter unless you
have a detailed understanding of its purpose.

@item tolpiv (default: 1e-9)
Relative tolerance used to choose eligible pivotal elements of the simplex
table.  It is not recommended that you change this parameter unless you have
a detailed understanding of its purpose.

@item objll (default: -DBL_MAX)
Lower limit of the objective function.  If the objective function reaches
this limit and continues decreasing, the solver stops the search.  This
parameter is used in the dual simplex method only.

@item objul (default: +DBL_MAX)
Upper limit of the objective function.  If the objective function reaches
this limit and continues increasing, the solver stops the search.  This
parameter is used in the dual simplex only.

@item tolint (default: 1e-5)
Relative tolerance used to check if the current basic solution is integer
feasible.  It is not recommended that you change this parameter unless you
have a detailed understanding of its purpose.

@item tolobj (default: 1e-7)
Relative tolerance used to check if the value of the objective function is
not better than in the best known integer feasible solution.  It is not
recommended that you change this parameter unless you have a detailed
understanding of its purpose.
@end table
@end table

Output values:

@table @var
@item xopt
The optimizer (the value of the decision variables at the optimum).

@item fmin
The optimum value of the objective function.

@item errnum
Error code.

@table @asis
@item 0
No error.

@item 1 (@w{@code{GLP_EBADB}})
Invalid basis.

@item 2 (@w{@code{GLP_ESING}})
Singular matrix.

@item 3 (@w{@code{GLP_ECOND}})
Ill-conditioned matrix.

@item 4 (@w{@code{GLP_EBOUND}})
Invalid bounds.

@item 5 (@w{@code{GLP_EFAIL}})
Solver failed.

@item 6 (@w{@code{GLP_EOBJLL}})
Objective function lower limit reached.

@item 7 (@w{@code{GLP_EOBJUL}})
Objective function upper limit reached.

@item 8 (@w{@code{GLP_EITLIM}})
Iterations limit exhausted.

@item 9 (@w{@code{GLP_ETMLIM}})
Time limit exhausted.

@item 10 (@w{@code{GLP_ENOPFS}})
No primal feasible solution.

@item 11 (@w{@code{GLP_ENODFS}})
No dual feasible solution.

@item 12 (@w{@code{GLP_EROOT}})
Root LP optimum not provided.

@item 13 (@w{@code{GLP_ESTOP}})
Search terminated by application.

@item 14 (@w{@code{GLP_EMIPGAP}})
Relative MIP gap tolerance reached.

@item 15 (@w{@code{GLP_ENOFEAS}})
No primal/dual feasible solution.

@item 16 (@w{@code{GLP_ENOCVG}})
No convergence.

@item 17 (@w{@code{GLP_EINSTAB}})
Numerical instability.

@item 18 (@w{@code{GLP_EDATA}})
Invalid data.

@item 19 (@w{@code{GLP_ERANGE}})
Result out of range.
@end table

@item extra
A data structure containing the following fields:

@table @code
@item lambda
Dual variables.

@item redcosts
Reduced costs.

@item time
Time (in seconds) used for solving LP/MIP problem.

@item status
Status of the optimization.

@table @asis
@item 1 (@w{@code{GLP_UNDEF}})
Solution status is undefined.

@item 2 (@w{@code{GLP_FEAS}})
Solution is feasible.

@item 3 (@w{@code{GLP_INFEAS}})
Solution is infeasible.

@item 4 (@w{@code{GLP_NOFEAS}})
Problem has no feasible solution.

@item 5 (@w{@code{GLP_OPT}})
Solution is optimal.

@item 6 (@w{@code{GLP_UNBND}})
Problem has an unbounded solution.
@end table
@end table
@end table

Example:

@example
@group
c = [10, 6, 4]';
A = [ 1, 1, 1;
     10, 4, 5;
      2, 2, 6];
b = [100, 600, 300]';
lb = [0, 0, 0]';
ub = [];
ctype = "UUU";
vartype = "CCC";
s = -1;

param.msglev = 1;
param.itlim = 100;

[xmin, fmin, status, extra] = ...
   glpk (c, A, b, lb, ub, ctype, vartype, s, param);
@end group
@end example
@end deftypefn


@node Quadratic Programming
@section Quadratic Programming

Octave can also solve Quadratic Programming problems, that is,
@tex
$$
 \min_x {1 \over 2} x^T H x + x^T q
$$
@end tex
@ifnottex

@example
min 0.5 x'*H*x + x'*q
@end example

@end ifnottex
subject to
@tex
$$
 A x = b \qquad lb \leq x \leq ub \qquad A_{lb} \leq A_{in} x \leq A_{ub}
$$
@end tex
@ifnottex

@example
@group
     A*x = b
     lb <= x <= ub
     A_lb <= A_in*x <= A_ub
@end group
@end example

@end ifnottex

@c qp scripts/optimization/qp.m
@anchor{XREFqp}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {[@var{x}, @var{obj}, @var{info}, @var{lambda}] =} qp (@var{x0}, @var{H})
@deftypefnx {} {[@var{x}, @var{obj}, @var{info}, @var{lambda}] =} qp (@var{x0}, @var{H}, @var{q})
@deftypefnx {} {[@var{x}, @var{obj}, @var{info}, @var{lambda}] =} qp (@var{x0}, @var{H}, @var{q}, @var{A}, @var{b})
@deftypefnx {} {[@var{x}, @var{obj}, @var{info}, @var{lambda}] =} qp (@var{x0}, @var{H}, @var{q}, @var{A}, @var{b}, @var{lb}, @var{ub})
@deftypefnx {} {[@var{x}, @var{obj}, @var{info}, @var{lambda}] =} qp (@var{x0}, @var{H}, @var{q}, @var{A}, @var{b}, @var{lb}, @var{ub}, @var{A_lb}, @var{A_in}, @var{A_ub})
@deftypefnx {} {[@var{x}, @var{obj}, @var{info}, @var{lambda}] =} qp (@dots{}, @var{options})
Solve a quadratic program (QP).

Solve the quadratic program defined by
@tex
$$
 \min_x {1 \over 2} x^T H x + x^T q
$$
@end tex
@ifnottex

@example
@group
min 0.5 x'*H*x + x'*q
 x
@end group
@end example

@end ifnottex
subject to
@tex
$$
 A x = b \qquad lb \leq x \leq ub \qquad A_{lb} \leq A_{in} x \leq A_{ub}
$$
@end tex
@ifnottex

@example
@group
A*x = b
lb <= x <= ub
A_lb <= A_in*x <= A_ub
@end group
@end example

@end ifnottex
@noindent
using a null-space active-set method.

Any bound (@var{A}, @var{b}, @var{lb}, @var{ub}, @var{A_in}, @var{A_lb},
@var{A_ub}) may be set to the empty matrix (@code{[]}) if not present.  The
constraints @var{A} and @var{A_in} are matrices with each row representing
a single constraint.  The other bounds are scalars or vectors depending on
the number of constraints.  The algorithm is faster if the initial guess is
feasible.

@var{options} is a structure specifying additional parameters which
control the algorithm.  Currently, @code{qp} recognizes these options:
@qcode{"MaxIter"}, @qcode{"TolX"}.

@qcode{"MaxIter"} proscribes the maximum number of algorithm iterations
before optimization is halted.  The default value is 200.
The value must be a positive integer.

@qcode{"TolX"} specifies the termination tolerance for the unknown variables
@var{x}.  The default is @code{sqrt (eps)} or approximately 1e-8.

On return, @var{x} is the location of the minimum and @var{obj} contains
the value of the objective function at @var{x}.

@table @var
@item info
Structure containing run-time information about the algorithm.  The
following fields are defined:

@table @code
@item solveiter
The number of iterations required to find the solution.

@item info
An integer indicating the status of the solution.

@table @asis
@item 0
The problem is feasible and convex.  Global solution found.

@item 1
The problem is not convex.  Local solution found.

@item 2
The problem is not convex and unbounded.

@item 3
Maximum number of iterations reached.

@item 6
The problem is infeasible.
@end table
@end table
@end table
@xseealso{@ref{XREFsqp,,sqp}}
@end deftypefn
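
As a minimal illustration (this example is not part of the generated
function help and the values are arbitrary), a small strictly convex QP
with one equality constraint and simple bounds can be set up and solved
with @code{qp} as follows:

@example
@group
H = diag ([1, 2]);          # positive definite quadratic term
q = [-2; -6];               # linear term
A = [1, 1];  b = 1;         # equality constraint x(1) + x(2) = 1
lb = [0; 0];  ub = [1; 1];  # simple bounds
x0 = [0.5; 0.5];            # feasible starting guess

[x, obj, info] = qp (x0, H, q, A, b, lb, ub);
@end group
@end example

Here @code{info.info} is 0 when a global solution to this convex problem
has been found.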


@c pqpnonneg scripts/optimization/pqpnonneg.m
@anchor{XREFpqpnonneg}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {@var{x} =} pqpnonneg (@var{c}, @var{d})
@deftypefnx {} {@var{x} =} pqpnonneg (@var{c}, @var{d}, @var{x0})
@deftypefnx {} {@var{x} =} pqpnonneg (@var{c}, @var{d}, @var{x0}, @var{options})
@deftypefnx {} {[@var{x}, @var{minval}] =} pqpnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{minval}, @var{exitflag}] =} pqpnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{minval}, @var{exitflag}, @var{output}] =} pqpnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{minval}, @var{exitflag}, @var{output}, @var{lambda}] =} pqpnonneg (@dots{})

Minimize @code{ (1/2 * @var{x}' * @var{c} * @var{x} + @var{d}' * @var{x}) }
subject to @code{@var{x} >= 0}.

@var{c} and @var{d} must be real matrices, and @var{c} must be symmetric and
positive definite.

@var{x0} is an optional initial guess for the solution @var{x}.

@var{options} is an options structure to change the behavior of the
algorithm (@pxref{XREFoptimset,,@code{optimset}}).  @code{pqpnonneg}
recognizes one option: @qcode{"MaxIter"}.

Outputs:

@table @var

@item x
The solution matrix.

@item minval
The minimum attained model value,
@code{1/2*@var{xmin}'*@var{c}*@var{xmin} + @var{d}'*@var{xmin}}

@item exitflag
An indicator of convergence.  0 indicates that the iteration count was
exceeded, and therefore convergence was not reached; >0 indicates that the
algorithm converged.  (The algorithm is stable and will converge given
enough iterations.)

@item output
A structure with two fields:

@itemize @bullet
@item @qcode{"algorithm"}: The algorithm used (@nospell{@qcode{"nnls"}})

@item @qcode{"iterations"}: The number of iterations taken.
@end itemize

@item lambda
Lagrange multipliers.  If these are nonzero, the corresponding @var{x}
values should be zero, indicating the solution is pressed up against a
coordinate plane.  The magnitude indicates how much the residual would
improve if the @code{@var{x} >= 0} constraints were relaxed in that
direction.

@end table
@xseealso{@ref{XREFlsqnonneg,,lsqnonneg}, @ref{XREFqp,,qp}, @ref{XREFoptimset,,optimset}}
@end deftypefn
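
A minimal usage sketch (illustrative values only, not part of the
generated function help):

@example
@group
c = [2, 1; 1, 2];   # symmetric positive definite matrix
d = [-1; -4];
[x, minval] = pqpnonneg (c, d);
@end group
@end example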


@node Nonlinear Programming
@section Nonlinear Programming

Octave can also perform general nonlinear minimization using a
successive quadratic programming solver.

@c sqp scripts/optimization/sqp.m
@anchor{XREFsqp}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {[@var{x}, @var{obj}, @var{info}, @var{iter}, @var{nf}, @var{lambda}] =} sqp (@var{x0}, @var{phi})
@deftypefnx {} {[@dots{}] =} sqp (@var{x0}, @var{phi}, @var{g})
@deftypefnx {} {[@dots{}] =} sqp (@var{x0}, @var{phi}, @var{g}, @var{h})
@deftypefnx {} {[@dots{}] =} sqp (@var{x0}, @var{phi}, @var{g}, @var{h}, @var{lb}, @var{ub})
@deftypefnx {} {[@dots{}] =} sqp (@var{x0}, @var{phi}, @var{g}, @var{h}, @var{lb}, @var{ub}, @var{maxiter})
@deftypefnx {} {[@dots{}] =} sqp (@var{x0}, @var{phi}, @var{g}, @var{h}, @var{lb}, @var{ub}, @var{maxiter}, @var{tolerance})
Minimize an objective function using sequential quadratic programming (SQP).

Solve the nonlinear program
@tex
$$
\min_x \phi (x)
$$
@end tex
@ifnottex

@example
@group
min phi (x)
 x
@end group
@end example

@end ifnottex
subject to
@tex
$$
 g(x) = 0 \qquad h(x) \geq 0 \qquad lb \leq x \leq ub
$$
@end tex
@ifnottex

@example
@group
g(x)  = 0
h(x) >= 0
lb <= x <= ub
@end group
@end example

@end ifnottex
@noindent
using a sequential quadratic programming method.

The first argument, @var{x0}, is the initial guess for the solution
vector @var{x}.

The second argument is a function handle pointing to the objective function
@var{phi}.  The objective function must accept one vector argument and
return a scalar.

The second argument may also be a 2- or 3-element cell array of function
handles.  The first element should point to the objective function, the
second should point to a function that computes the gradient of the
objective function, and the third should point to a function that computes
the Hessian of the objective function.  If the gradient function is not
supplied, the gradient is computed by finite differences.  If the Hessian
function is not supplied, a BFGS update formula is used to approximate the
Hessian.

When supplied, the gradient function @code{@var{phi}@{2@}} must accept one
vector argument and return a vector.  When supplied, the Hessian function
@code{@var{phi}@{3@}} must accept one vector argument and return a matrix.

The third and fourth arguments @var{g} and @var{h} are function handles
pointing to functions that compute the equality constraints and the
inequality constraints, respectively.  If the problem does not have
equality (or inequality) constraints, then use an empty matrix (@code{[]}) for
@var{g} (or @var{h}).  When supplied, these equality and inequality
constraint functions must accept one vector argument and return a vector.

The third and fourth arguments may also be 2-element cell arrays of
function handles.  The first element should point to the constraint
function and the second should point to a function that computes the
gradient of the constraint function:
@tex
$$
 \Bigg( {\partial f(x) \over \partial x_1},
        {\partial f(x) \over \partial x_2}, \ldots,
        {\partial f(x) \over \partial x_N} \Bigg)^T
$$
@end tex
@ifnottex

@example
@group
            [ d f(x)   d f(x)        d f(x) ]
transpose ( [ ------   -----   ...   ------ ] )
            [  dx_1     dx_2          dx_N  ]
@end group
@end example

@end ifnottex
The fifth and sixth arguments, @var{lb} and @var{ub}, contain lower and
upper bounds on @var{x} and when provided must be vectors of the same size
as the vector @var{x0}.  The bounds must be consistent with the
equality and inequality constraints @var{g} and @var{h}.

The seventh argument @var{maxiter} specifies the maximum number of
iterations.  The default value is 100.

The eighth argument @var{tolerance} specifies the tolerance for the stopping
criteria.  The default value is @code{sqrt (eps)}.

The value returned in @var{info} may be one of the following:

@table @asis
@item 101
The algorithm terminated normally.
All constraints meet the specified tolerance.

@item 102
The BFGS update failed.

@item 103
The maximum number of iterations was reached.

@item 104
The stepsize has become too small, i.e.,
@tex
$\Delta x,$
@end tex
@ifnottex
delta @var{x},
@end ifnottex
is less than @code{@var{tol} * norm (x)}.
@end table

An example of calling @code{sqp}:

@example
function r = g (x)
  r = [ sumsq(x)-10;
        x(2)*x(3)-5*x(4)*x(5);
        x(1)^3+x(2)^3+1 ];
endfunction

function obj = phi (x)
  obj = exp (prod (x)) - 0.5*(x(1)^3+x(2)^3+1)^2;
endfunction

x0 = [-1.8; 1.7; 1.9; -0.8; -0.8];

[x, obj, info, iter, nf, lambda] = sqp (x0, @@phi, @@g, [])

x =

  -1.71714
   1.59571
   1.82725
  -0.76364
  -0.76364

obj = 0.053950
info = 101
iter = 8
nf = 10
lambda =

  -0.0401627
   0.0379578
  -0.0052227
@end example

@xseealso{@ref{XREFqp,,qp}}
@end deftypefn


@node Linear Least Squares
@section Linear Least Squares

Octave also supports linear least squares minimization.  That is,
Octave can find the parameter @math{b} such that the model
@tex
$y = xb$
@end tex
@ifnottex
@math{y = x*b}
@end ifnottex
fits data @math{(x,y)} as well as possible, assuming zero-mean
Gaussian noise.  If the noise is assumed to be isotropic, the problem
can be solved using the @samp{\} or @samp{/} operators, or the @code{ols}
function.  In the general case, where the noise is assumed to be anisotropic,
the @code{gls} function is needed.
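
For example (an illustrative sketch with arbitrary data), an
overdetermined linear model with an intercept term can be fitted in the
isotropic case with the backslash operator:

@example
@group
x = [ones(5, 1), (1:5)'];       # design matrix with intercept column
y = [1.1; 2.0; 2.9; 4.2; 5.1];  # observed data
b = x \ y;                      # least squares parameter estimate
@end group
@end example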

@c ols scripts/linear-algebra/ols.m
@anchor{XREFols}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn {} {[@var{beta}, @var{sigma}, @var{r}] =} ols (@var{y}, @var{x})
Ordinary least squares (OLS) estimation.

OLS applies to the multivariate model
@tex
$@var{y} = @var{x}\,@var{b} + @var{e}$
@end tex
@ifnottex
@w{@math{@var{y} = @var{x}*@var{b} + @var{e}}}
@end ifnottex
where
@tex
$@var{y}$ is a $t \times p$ matrix, $@var{x}$ is a $t \times k$ matrix,
$@var{b}$ is a $k \times p$ matrix, and $@var{e}$ is a $t \times p$ matrix.
@end tex
@ifnottex
@math{@var{y}} is a @math{t}-by-@math{p} matrix, @math{@var{x}} is a
@math{t}-by-@math{k} matrix, @var{b} is a @math{k}-by-@math{p} matrix, and
@var{e} is a @math{t}-by-@math{p} matrix.
@end ifnottex

Each row of @var{y} is a @math{p}-variate observation in which each column
represents a variable.  Likewise, the rows of @var{x} represent
@math{k}-variate observations or possibly designed values.  Furthermore,
the collection of observations @var{x} must be of adequate rank, @math{k},
otherwise @var{b} cannot be uniquely estimated.

The observation errors, @var{e}, are assumed to originate from an
underlying @math{p}-variate distribution with zero mean and
@math{p}-by-@math{p} covariance matrix @var{S}, both constant conditioned
on @var{x}.  Furthermore, the matrix @var{S} is constant with respect to
each observation such that
@tex
$\bar{@var{e}} = 0$ and cov(vec(@var{e})) =  kron(@var{s},@var{I}).
@end tex
@ifnottex
@code{mean (@var{e}) = 0} and
@code{cov (vec (@var{e})) = kron (@var{s}, @var{I})}.
@end ifnottex
(For cases
that do not meet this assumption, such as autocorrelated errors, see
generalized least squares, @code{gls}, for more efficient estimation.)

The return values @var{beta}, @var{sigma}, and @var{r} are defined as
follows.

@table @var
@item beta
The OLS estimator for matrix @var{b}.
@tex
@var{beta} is calculated directly via $(@var{x}^T@var{x})^{-1} @var{x}^T
@var{y}$ if the matrix $@var{x}^T@var{x}$ is of full rank.
@end tex
@ifnottex
@var{beta} is calculated directly via
@code{inv (@var{x}'*@var{x}) * @var{x}' * @var{y}} if the matrix
@code{@var{x}'*@var{x}} is of full rank.
@end ifnottex
Otherwise, @code{@var{beta} = pinv (@var{x}) * @var{y}} where
@code{pinv (@var{x})} denotes the pseudoinverse of @var{x}.

@item sigma
The OLS estimator for the matrix @var{s},

@example
@group
@var{sigma} = (@var{y}-@var{x}*@var{beta})' * (@var{y}-@var{x}*@var{beta}) / (@math{t}-rank(@var{x}))
@end group
@end example

@item r
The matrix of OLS residuals, @code{@var{r} = @var{y} - @var{x}*@var{beta}}.
@end table
@xseealso{@ref{XREFgls,,gls}, @ref{XREFpinv,,pinv}}
@end deftypefn
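
A minimal usage sketch (arbitrary data, not part of the generated
function help):

@example
@group
x = [ones(5, 1), (1:5)'];        # design matrix with intercept column
y = [2.1; 3.9; 6.2; 7.8; 10.1];  # data roughly following y = 2*t
[beta, sigma, r] = ols (y, x);
@end group
@end example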


@c gls scripts/linear-algebra/gls.m
@anchor{XREFgls}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn {} {[@var{beta}, @var{v}, @var{r}] =} gls (@var{y}, @var{x}, @var{o})
Generalized least squares (GLS) model.

Perform a generalized least squares estimation for the multivariate model
@tex
$@var{y} = @var{x}\,@var{b} + @var{e}$
@end tex
@ifnottex
@w{@math{@var{y} = @var{x}*@var{B} + @var{E}}}
@end ifnottex
where
@tex
$@var{y}$ is a $t \times p$ matrix, $@var{x}$ is a $t \times k$ matrix,
$@var{b}$ is a $k \times p$ matrix and $@var{e}$ is a $t \times p$ matrix.
@end tex
@ifnottex
@var{y} is a @math{t}-by-@math{p} matrix, @var{x} is a
@math{t}-by-@math{k} matrix, @var{b} is a @math{k}-by-@math{p} matrix
and @var{e} is a @math{t}-by-@math{p} matrix.
@end ifnottex

@noindent
Each row of @var{y} is a @math{p}-variate observation in which each column
represents a variable.  Likewise, the rows of @var{x} represent
@math{k}-variate observations or possibly designed values.  Furthermore,
the collection of observations @var{x} must be of adequate rank, @math{k},
otherwise @var{b} cannot be uniquely estimated.

The observation errors, @var{e}, are assumed to originate from an
underlying @math{p}-variate distribution with zero mean but possibly
heteroscedastic observations.  That is, in general,
@tex
$\bar{@var{e}} = 0$ and cov(vec(@var{e})) = $s^2@var{o}$
@end tex
@ifnottex
@code{@math{mean (@var{e}) = 0}} and
@code{@math{cov (vec (@var{e})) = (@math{s}^2)*@var{o}}}
@end ifnottex
in which @math{s} is a scalar and @var{o} is a
@tex
@math{t \, p \times t \, p}
@end tex
@ifnottex
@math{t*p}-by-@math{t*p}
@end ifnottex
matrix.

The return values @var{beta}, @var{v}, and @var{r} are
defined as follows.

@table @var
@item beta
The GLS estimator for matrix @var{b}.

@item v
The GLS estimator for scalar @math{s^2}.

@item r
The matrix of GLS residuals, @math{@var{r} = @var{y} - @var{x}*@var{beta}}.
@end table
@xseealso{@ref{XREFols,,ols}}
@end deftypefn
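
A minimal usage sketch (arbitrary data and an assumed error covariance
matrix, not part of the generated function help):

@example
@group
x = [ones(4, 1), (1:4)'];
y = [1.0; 2.1; 2.9; 4.2];
o = diag ([1, 1, 4, 4]);       # assumed (relative) error covariance
[beta, v, r] = gls (y, x, o);
@end group
@end example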


@c lsqnonneg scripts/optimization/lsqnonneg.m
@anchor{XREFlsqnonneg}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {@var{x} =} lsqnonneg (@var{c}, @var{d})
@deftypefnx {} {@var{x} =} lsqnonneg (@var{c}, @var{d}, @var{x0})
@deftypefnx {} {@var{x} =} lsqnonneg (@var{c}, @var{d}, @var{x0}, @var{options})
@deftypefnx {} {[@var{x}, @var{resnorm}] =} lsqnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{resnorm}, @var{residual}] =} lsqnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{resnorm}, @var{residual}, @var{exitflag}] =} lsqnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{resnorm}, @var{residual}, @var{exitflag}, @var{output}] =} lsqnonneg (@dots{})
@deftypefnx {} {[@var{x}, @var{resnorm}, @var{residual}, @var{exitflag}, @var{output}, @var{lambda}] =} lsqnonneg (@dots{})

Minimize @code{norm (@var{c}*@var{x} - @var{d})} subject to
@code{@var{x} >= 0}.

@var{c} and @var{d} must be real matrices.

@var{x0} is an optional initial guess for the solution @var{x}.

@var{options} is an options structure to change the behavior of the
algorithm (@pxref{XREFoptimset,,@code{optimset}}).  @code{lsqnonneg}
recognizes these options: @qcode{"MaxIter"}, @qcode{"TolX"}.

Outputs:

@table @var
@item resnorm
The squared 2-norm of the residual: @code{norm (@var{c}*@var{x}-@var{d})^2}

@item residual
The residual: @code{@var{d}-@var{c}*@var{x}}

@item exitflag
An indicator of convergence.  0 indicates that the iteration count was
exceeded, and therefore convergence was not reached; >0 indicates that the
algorithm converged.  (The algorithm is stable and will converge given
enough iterations.)

@item output
A structure with two fields:

@itemize @bullet
@item @qcode{"algorithm"}: The algorithm used (@qcode{"nnls"})

@item @qcode{"iterations"}: The number of iterations taken.
@end itemize

@item lambda
Lagrange multipliers.  If these are nonzero, the corresponding @var{x}
values should be zero, indicating the solution is pressed up against a
coordinate plane.  The magnitude indicates how much the residual would
improve if the @code{@var{x} >= 0} constraints were relaxed in that
direction.

@end table
@xseealso{@ref{XREFpqpnonneg,,pqpnonneg}, @ref{XREFlscov,,lscov}, @ref{XREFoptimset,,optimset}}
@end deftypefn
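
A minimal usage sketch (arbitrary data, not part of the generated
function help):

@example
@group
c = [1, 0; 0, 1; 1, 1];
d = [1; -1; 2];
[x, resnorm] = lsqnonneg (c, d);
@end group
@end example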


@c lscov scripts/linear-algebra/lscov.m
@anchor{XREFlscov}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {@var{x} =} lscov (@var{A}, @var{b})
@deftypefnx {} {@var{x} =} lscov (@var{A}, @var{b}, @var{V})
@deftypefnx {} {@var{x} =} lscov (@var{A}, @var{b}, @var{V}, @var{alg})
@deftypefnx {} {[@var{x}, @var{stdx}, @var{mse}, @var{S}] =} lscov (@dots{})

Compute a generalized linear least squares fit.

Estimate @var{x} under the model @var{b} = @var{A}@var{x} + @var{w}, where
the noise @var{w} is assumed to follow a normal distribution with covariance
matrix @math{{\sigma^2} V}.

If the size of the coefficient matrix @var{A} is n-by-p, the size of the
vector/array of constant terms @var{b} must be n-by-k.

The optional input argument @var{V} may be an n-element vector of positive
weights (inverse variances), or an n-by-n symmetric positive semi-definite
matrix representing the covariance of @var{b}.  If @var{V} is not supplied,
the ordinary least squares solution is returned.

The @var{alg} input argument, which provides guidance on the solution
method to use, is currently ignored.

Besides the least-squares estimate matrix @var{x} (p-by-k), the function
also returns @var{stdx} (p-by-k), the error standard deviation of estimated
@var{x}; @var{mse} (k-by-1), the estimated data error covariance scale
factors (@math{\sigma^2}); and @var{S} (p-by-p, or p-by-p-by-k if k > 1),
the error covariance of @var{x}.

Reference: @nospell{Golub and Van Loan} (1996),
@cite{Matrix Computations (3rd Ed.)}, Johns Hopkins, Section 5.6.3

@xseealso{@ref{XREFols,,ols}, @ref{XREFgls,,gls}, @ref{XREFlsqnonneg,,lsqnonneg}}
@end deftypefn
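
A minimal usage sketch (arbitrary data and weights, not part of the
generated function help):

@example
@group
A = [ones(5, 1), (1:5)'];
b = [1.0; 2.1; 2.9; 4.2; 5.1];
v = [1; 1; 1; 4; 4];            # per-observation weights (inverse variances)
[x, stdx, mse] = lscov (A, b, v);
@end group
@end example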


@c optimset scripts/optimization/optimset.m
@anchor{XREFoptimset}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {} optimset ()
@deftypefnx {} {@var{options} =} optimset ()
@deftypefnx {} {@var{options} =} optimset (@var{par}, @var{val}, @dots{})
@deftypefnx {} {@var{options} =} optimset (@var{old}, @var{par}, @var{val}, @dots{})
@deftypefnx {} {@var{options} =} optimset (@var{old}, @var{new})
Create options structure for optimization functions.

When called without any input or output arguments, @code{optimset} prints
a list of all valid optimization parameters.

When called with one output and no inputs, return an options structure with
all valid option parameters initialized to @code{[]}.

When called with a list of parameter/value pairs, return an options
structure with only the named parameters initialized.

When the first input is an existing options structure @var{old}, the values
are updated from either the @var{par}/@var{val} list or from the options
structure @var{new}.

If @var{par} does not exactly match the name of a standard parameter,
@code{optimset} will attempt to match @var{par} to a standard parameter
and will set the value of that parameter if a match is found.  Matching is
case insensitive and is based on character matching at the start of the
parameter name.  @code{optimset} produces an error if it finds multiple
ambiguous matches.  If no standard parameter matches are found a warning is
issued and the non-standard parameter is created.

Standard list of valid parameters:

@table @asis
@item AutoScaling

@item ComplexEqn

@item Display
Request verbose display of results from optimizations.  Values are:

@table @asis
@item @qcode{"off"} [default]
No display.

@item @qcode{"iter"}
Display intermediate results for every loop iteration.

@item @qcode{"final"}
Display the result of the final loop iteration.

@item @qcode{"notify"}
Display the result of the final loop iteration if the function has
failed to converge.
@end table

@item FinDiffType

@item FunValCheck
When enabled, display an error if the objective function returns an invalid
value (a complex number, NaN, or Inf).  Must be set to @qcode{"on"} or
@qcode{"off"} [default].  Note: the functions @code{fzero} and
@code{fminbnd} correctly handle Inf values and only complex values or NaN
will cause an error in this case.

@item GradObj
When set to @qcode{"on"}, the function to be minimized must return a
second argument which is the gradient, or first derivative, of the
function at the point @var{x}.  If set to @qcode{"off"} [default], the
gradient is computed via finite differences.

@item Jacobian
When set to @qcode{"on"}, the function to be minimized must return a
second argument which is the Jacobian, or first derivative, of the
function at the point @var{x}.  If set to @qcode{"off"} [default], the
Jacobian is computed via finite differences.

@item MaxFunEvals
Maximum number of function evaluations before optimization stops.
Must be a positive integer.

@item MaxIter
Maximum number of algorithm iterations before optimization stops.
Must be a positive integer.

@item OutputFcn
A user-defined function executed once per algorithm iteration.

@item TolFun
Termination criterion for the function output.  If the difference in the
calculated objective function between one algorithm iteration and the next
is less than @code{TolFun} the optimization stops.  Must be a positive
scalar.

@item TolX
Termination criterion for the function input.  If the difference in @var{x},
the current search point, between one algorithm iteration and the next is
less than @code{TolX} the optimization stops.  Must be a positive scalar.

@item TypicalX

@item Updating
@end table

This list can be extended by the user or other loaded Octave packages.  An
updated valid parameters list can be queried using the no-argument form of
@code{optimset}.

Note 1: Only parameter names from the standard list are considered when
matching short parameter names, and @var{par} will always be expanded
to match a standard parameter even if an exact non-standard match exists.
The value of a non-standard parameter that is ambiguous with one or more
standard parameters cannot be set by @code{optimset} and can only be set
using @code{setfield} or dot notation for structs.

Note 2: The optimization options structure is primarily intended for
manipulation of known parameters by @code{optimset} and @code{optimget}.
Unpredictable behavior on future calls to @code{optimset} or
@code{optimget} can result from creating non-standard or ambiguous
parameters or from loading/unloading packages that change the known
parameter list after creation of an optimization options structure.
@xseealso{@ref{XREFoptimget,,optimget}}
@end deftypefn
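
A minimal usage sketch (illustrative parameter values only):

@example
@group
opts = optimset ("MaxIter", 300, "TolX", 1e-10);
opts = optimset (opts, "Display", "iter");   # update an existing structure
@end group
@end example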


@c optimget scripts/optimization/optimget.m
@anchor{XREFoptimget}
@html
<span style="display:block; margin-top:-4.5ex;">&nbsp;</span>
@end html


@deftypefn  {} {@var{val} =} optimget (@var{options}, @var{par})
@deftypefnx {} {@var{val} =} optimget (@var{options}, @var{par}, @var{default})
Return the value of the specific parameter @var{par} from the optimization
options structure @var{options} created by @code{optimset}.

If @var{par} is not defined then return the @var{default} value if
supplied, otherwise return an empty matrix.

If @var{par} does not exactly match the name of a standard parameter,
@code{optimget} will attempt to match @var{par} to a standard parameter
and will return that parameter's value if a match is found.  Matching is
case insensitive and is based on character matching at the start of the
parameter name.  @code{optimget} produces an error if it finds multiple
ambiguous matches.  If no standard parameter matches are found a warning is
issued.  See @code{optimset} for information about the standard options
list.

Note: Only parameter names from the standard list are considered when
matching short parameter names, and @var{par} will always be expanded to
match a standard parameter even if an exact non-standard match exists.  The
value of a non-standard parameter that is ambiguous with one or more
standard parameters cannot be returned by @code{optimget} and can only be
accessed using @code{getfield} or dot notation for structs.
@xseealso{@ref{XREFoptimset,,optimset}}
@end deftypefn
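
A minimal usage sketch (illustrative values only):

@example
@group
opts = optimset ("TolX", 1e-10);
tol  = optimget (opts, "TolX");           # returns 1e-10
maxi = optimget (opts, "MaxIter", 200);   # unset, so the default 200 is returned
@end group
@end example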