File: function.rst

.. _sec-function:

Function objects
================

.. include:: defs.rst

|casadi| allows the user to create function objects, in C++ terminology often referred to as *functors*. This includes functions that are defined by a symbolic expression, ODE/DAE integrators, QP solvers, NLP solvers etc.

Function objects are typically created with the syntax:

.. code-block:: none

      f = functionname(name, arguments, ..., [options])

The name is mainly a display name that will show up in e.g. error messages or as comments in generated C code. This is followed by a set of arguments, which is class dependent. Finally, the user can pass an options structure for customizing the behavior of the class. The options structure is a dictionary type in Python, a struct in MATLAB or |casadi|'s ``Dict`` type in C++.
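
For example, a minimal sketch in Python (the ``verbose`` option below is a generic |casadi| ``Function`` option, used here only for illustration):

.. code-block:: python

    # assuming `from casadi import *` as in the other examples in this chapter
    x = SX.sym('x')
    opts = {'verbose': True}                # options structure (a Python dict)
    f = Function('f', [x], [sin(x)], opts)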

A ``Function`` can be constructed by passing a list of input expressions and a list of output expressions:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym('x',2)
        y = SX.sym('y')
        f = Function('f',[x,y],\
                   [x,sin(y)*x])
        print(f)
    &&

    .. exec-block:: octave

        x = SX.sym('x',2);
        y = SX.sym('y');
        f = Function('f',{x,y},...
                   {x,sin(y)*x});
        disp(f)

which defines a function
:math:`f : \mathbb{R}^{2} \times \mathbb{R} \rightarrow \mathbb{R}^{2} \times \mathbb{R}^{2}, \quad (x,y) \mapsto (x,\sin(y) x)`.
Note that all function objects in |casadi|, including the above, take multiple matrix-valued inputs and return multiple matrix-valued outputs.

|MX| expression graphs work the same way:

.. side-by-side::
    .. exec-block:: python

        x = MX.sym('x',2)
        y = MX.sym('y')
        f = Function('f',[x,y],\
                   [x,sin(y)*x])
        print(f)
    &&

    .. exec-block:: octave

        x = MX.sym('x',2);
        y = MX.sym('y');
        f = Function('f',{x,y},...
                   {x,sin(y)*x});
        disp(f)

When creating a ``Function`` from expressions like this, it is advisable to *name* the inputs and outputs as follows:


.. side-by-side::
    .. exec-block:: python

        x = MX.sym('x',2) [hidden]
        y = MX.sym('y') [hidden]
        f = Function('f',[x,y],\
              [x,sin(y)*x],\
              ['x','y'],['r','q'])
        print(f)
    &&

    .. exec-block:: octave

        x = MX.sym('x',2); [hidden]
        y = MX.sym('y'); [hidden]
        f = Function('f',{x,y},...
              {x,sin(y)*x},...
              {'x','y'},{'r','q'});
        disp(f)

Naming inputs and outputs is preferred for a number of reasons:

* No need to remember the number or order of arguments
* Inputs or outputs that are absent can be left unset
* More readable and less error-prone syntax.

For ``Function`` instances -- to be encountered later -- that are *not* created directly from expressions,
the inputs and outputs are named automatically.

Calling function objects
------------------------

|MX| expressions may contain calls to ``Function``-derived functions. A function object is called both for numerical evaluation and, by passing symbolic arguments, for embedding a *call* to the function object into an expression graph (cf. also :numref:`sec-integrator`).

To call a function object, you either pass the arguments in the correct order:

.. side-by-side::
    .. exec-block:: python

        x = MX.sym('x',2) [hidden]
        y = MX.sym('y') [hidden]
        f = Function('f',[x,y],[x,sin(y)*x],['x','y'],['r','q']) [hidden]
        r0, q0 = f(1.1,3.3)
        print('r0:',r0)
        print('q0:',q0)
    &&

    .. exec-block:: octave

        x = MX.sym('x',2); [hidden]
        y = MX.sym('y'); [hidden]
        f = Function('f',{x,y},{x,sin(y)*x},{'x','y'},{'r','q'}); [hidden]
        [r0, q0] = f(1.1,3.3);
        disp(r0)
        disp(q0)

or you pass the arguments together with their names, in which case the result is a dictionary (``dict`` in Python, ``struct`` in MATLAB and ``std::map<std::string, MatrixType>`` in C++):

.. side-by-side::
    .. exec-block:: python

        x = MX.sym('x',2) [hidden]
        y = MX.sym('y') [hidden]
        f = Function('f',[x,y],[x,sin(y)*x],['x','y'],['r','q']) [hidden]
        res = f(x=1.1, y=3.3)
        print('res:', res)
    &&

    .. exec-block:: octave

        x = MX.sym('x',2); [hidden]
        y = MX.sym('y'); [hidden]
        f = Function('f',{x,y},{x,sin(y)*x},{'x','y'},{'r','q'}); [hidden]
        res = f('x',1.1,'y',3.3);
        disp(res)

When calling a function object, the dimensions (but not necessarily the sparsity patterns) of the evaluation arguments have to match those of the function inputs, with two exceptions, illustrated in the sketch after this list:

* A row vector can be passed instead of a column vector and vice versa.
* A scalar argument can always be passed, regardless of the input dimension. This has the meaning of setting all elements of the input matrix to that value.
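
For example, a minimal sketch reusing the function ``f`` with named inputs from above (``x`` is 2-by-1, ``y`` is scalar):

.. code-block:: python

    # A row vector is accepted where a column vector is expected (and vice versa)
    r, q = f(DM([1.1, 2.2]).T, 3.3)
    # A scalar argument sets all elements of the 2-by-1 input 'x' to 0.5
    r, q = f(0.5, 3.3)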

When the number of inputs to a function object is large or changing, an alternative syntax to the above is to use the *call* function which takes a Python list / MATLAB cell array or, alternatively, a Python dict / MATLAB struct. The return value will have the same type:

.. side-by-side::
    .. exec-block:: python

        x = MX.sym('x',2) [hidden]
        y = MX.sym('y') [hidden]
        f = Function('f',[x,y],[x,sin(y)*x],['x','y'],['r','q']) [hidden]
        arg = [1.1,3.3]
        res = f.call(arg)
        print('res:', res)
        arg = {'x':1.1,'y':3.3}
        res = f.call(arg)
        print('res:', res)
    &&

    .. exec-block:: octave

        x = MX.sym('x',2); [hidden]
        y = MX.sym('y'); [hidden]
        f = Function('f',{x,y},{x,sin(y)*x},{'x','y'},{'r','q'}); [hidden]

        arg = {1.1,3.3};
        res = f.call(arg);
        disp(res)
        arg = struct('x',1.1,'y',3.3);
        res = f.call(arg);
        disp(res)

Converting |MX| to |SX|
-----------------------
A function object defined by an |MX| graph that only contains built-in operations (e.g. element-wise operations such as addition, square root, matrix multiplications, and calls to |SX| functions) can be converted into a function defined purely by an |SX| graph using the syntax:

.. code-block:: python

     sx_function = mx_function.expand()


This might speed up the calculations significantly, but might also cause extra memory overhead.
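
For example, a minimal sketch applying ``expand`` to the |MX|-based function ``f`` created earlier in this chapter:

.. code-block:: python

    f_sx = f.expand()      # an equivalent Function defined by an SX graph
    print(f_sx)
    r, q = f_sx(1.1, 3.3)  # evaluates like the original MX-based function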


.. _sec-rootfinder:

Nonlinear root-finding problems
-------------------------------

Consider the following system of equations:

.. math::

    \begin{aligned}
    &g_0(z, x_1, x_2, \ldots, x_n) &&= 0 \\
    &g_1(z, x_1, x_2, \ldots, x_n) &&= y_1 \\
    &g_2(z, x_1, x_2, \ldots, x_n) &&= y_2 \\
    &\qquad \vdots \qquad &&\qquad \\
    &g_m(z, x_1, x_2, \ldots, x_n) &&= y_m,
    \end{aligned}

where the first equation uniquely defines :math:`z` as a function of :math:`x_1, \ldots, x_n` by the *implicit function theorem*
and the remaining equations define the auxiliary outputs :math:`y_1, \ldots, y_m`.

Given a function :math:`g` for evaluating :math:`g_0, \ldots, g_m`, we can use |casadi| to automatically formulate a function
:math:`G: \{z_{\text{guess}}, x_1, x_2, \ldots, x_n\} \rightarrow \{z, y_1, y_2, \ldots, y_m\}`.
This function includes a guess for :math:`z` to handle the case when the solution is non-unique.
The syntax for this, assuming :math:`n=m=1` for simplicity, is:


.. side-by-side::
    .. exec-block:: python

        nz = 1 [hidden]
        nx = 1 [hidden]

        z = SX.sym('z',nz)
        x = SX.sym('x',nx)
        g0 = sin(x+z)
        g1 = cos(x-z)
        g = Function('g',[z,x],[g0,g1])
        G = rootfinder('G','newton',g)
        print(G)
    &&

    .. exec-block:: octave

        nz = 1; [hidden]
        nx = 1; [hidden]

        z = SX.sym('z',nz);
        x = SX.sym('x',nx);
        g0 = sin(x+z);
        g1 = cos(x-z);
        g = Function('g',{z,x},{g0,g1});
        G = rootfinder('G','newton',g);
        disp(G)

where the ``rootfinder`` function expects a display name, the name of a solver plugin
(here a simple full-step Newton method) and the residual function.
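
A minimal sketch of evaluating ``G`` numerically (the numbers are arbitrary; the first argument is the guess for :math:`z`, the second is :math:`x`):

.. code-block:: python

    # Solve sin(x + z) = 0 for z, starting from the guess z = 0, with x = 0.1
    z_sol, y_sol = G(0, 0.1)
    print(z_sol)   # approximately -0.1
    print(y_sol)   # the auxiliary output cos(x - z) at the solution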

Rootfinding objects in |casadi| are differentiable objects and derivatives can be calculated exactly to arbitrary order.

.. seealso:: `API of rootfinder <http://casadi.org/python-api/#rootfinding>`_


.. _sec-integrator:

Initial-value problems and sensitivity analysis
-----------------------------------------------

|casadi| can be used to solve initial-value problems for ODEs or DAEs. The problem formulation used
is a DAE of semi-explicit form with quadratures:

.. math::

    \begin{aligned}
     \dot{x} &= f_{\text{ode}}(t,x,z,p,u), \qquad x(0) = x_0 \\
          0  &= f_{\text{alg}}(t,x,z,p,u) \\
     \dot{q} &= f_{\text{quad}}(t,x,z,p,u), \qquad q(0) = 0
    \end{aligned}

For solvers of *ordinary* differential equations, the second equation and the algebraic variables :math:`z` must be absent.

An integrator in |casadi| is a function that takes the state at the initial time ``x0``, a set of parameters ``p`` and controls ``u``, and a guess for the algebraic variables (only for DAEs) ``z0`` and returns the state vector ``xf``, algebraic variables ``zf`` and the quadrature state ``qf`` at a number of output times. The control vector ``u`` is assumed to be piecewise constant and has the same grid discretization as the output grid.

The freely available `SUNDIALS suite <https://computation.llnl.gov/casc/sundials/description/description.html>`_ (distributed along with |casadi|) contains the two popular integrators CVodes and IDAS for ODEs and DAEs respectively. These integrators have support for forward and adjoint sensitivity analysis and when used via |casadi|'s Sundials interface, |casadi| will automatically formulate the Jacobian information, which is needed by the backward differentiation formula (BDF) that CVodes and IDAS use. Also automatically formulated will be the forward and adjoint sensitivity equations.

Creating integrators
^^^^^^^^^^^^^^^^^^^^

Integrators are created using |casadi|'s ``integrator`` function. Different integrator schemes and interfaces are implemented as *plugins*, essentially shared libraries that are loaded at runtime.

Consider for example the DAE:

.. math::

  \begin{aligned}
   \dot{x} &= z+p, \\
        0  &= z \, \cos(z)-x
  \end{aligned}

An integrator, using the "idas" plugin, can be created using the syntax:

.. exec-block:: python

    x = SX.sym('x'); z = SX.sym('z'); p = SX.sym('p')
    dae = {'x':x, 'z':z, 'p':p, 'ode':z+p, 'alg':z*cos(z)-x}
    F = integrator('F', 'idas', dae)
    print(F)

.. exec-block:: octave

    x = SX.sym('x'); z = SX.sym('z'); p = SX.sym('p');
    dae = struct('x',x,'z',z,'p',p,'ode',z+p,'alg',z*cos(z)-x);
    F = integrator('F', 'idas', dae);
    disp(F)

This will result in an integration from :math:`t_0=0` until :math:`t_f=1`, i.e. a single output time.
We can evaluate the function object using the initial condition :math:`x(0)=0`, parameter :math:`p=0.1`
and the guess for the algebraic variable at the initial time :math:`z(0)=0` as follows:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym('x'); z = SX.sym('z'); p = SX.sym('p') [hidden]
        dae = {'x':x, 'z':z, 'p':p, 'ode':z+p, 'alg':z*cos(z)-x} [hidden]
        F = integrator('F', 'idas', dae) [hidden]

        r = F(x0=0, z0=0, p=0.1)
        print(r['xf'])
    &&

    .. exec-block:: octave

        x = SX.sym('x'); z = SX.sym('z'); p = SX.sym('p'); [hidden]
        dae = struct('x',x,'z',z,'p',p,'ode',z+p,'alg',z*cos(z)-x); [hidden]
        F = integrator('F', 'idas', dae); [hidden]

        r = F('x0',0,'z0',0,'p',0.1);
        disp(r.xf)

Note that the time horizon is always fixed [#f1]_. The default values :math:`t_0=0` and :math:`t_f=1` can be changed by passing two additional arguments to the constructor, after the DAE. :math:`t_f` can be either a single value or a vector of output times, which may include the initial time.
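
For example, a hedged sketch of an integrator with a grid of output times (the grid values are arbitrary):

.. code-block:: python

    # t0 = 0 and a vector of output times; 'xf' then has one column per output time
    F = integrator('F', 'idas', dae, 0, [0.25, 0.5, 0.75, 1.0])
    r = F(x0=0, z0=0, p=0.1)
    print(r['xf'])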

Sensitivity analysis
^^^^^^^^^^^^^^^^^^^^

From a usage point of view, an integrator behaves just like the function objects created from expressions earlier in the chapter.
You can use member functions of the ``Function`` class to generate new function objects corresponding to directional derivatives (forward or reverse mode) or complete Jacobians, and then evaluate these function objects numerically to obtain sensitivity information. The documented example "sensitivity_analysis" (available in |casadi|'s example collection for Python, MATLAB and C++) demonstrates how |casadi| can be used to calculate first and second order derivative information (forward-over-forward, forward-over-adjoint, adjoint-over-adjoint) for a simple DAE.
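
As a minimal sketch (not part of the example collection), the complete Jacobian of the integrator ``F`` from the previous section could be obtained and evaluated as follows; the Jacobian blocks are named following the pattern ``jac_<output>_<input>``:

.. code-block:: python

    J = F.jacobian()            # a new Function returning all Jacobian blocks
    r = J(x0=0, z0=0, p=0.1)    # unspecified inputs default to zero/empty
    print(r['jac_xf_p'])        # sensitivity of the state at t_f w.r.t. p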


.. _sec-nlpsol:

Nonlinear programming
---------------------

.. note:: This section assumes familiarity with much of the material that comes above.
             There is also a higher-level interface in :numref:`Chapter %s <sec-opti>`. That interface can be learned stand-alone.


The NLP solvers distributed with or interfaced to |casadi| solve parametric NLPs of the following form:

.. math::
    :label: nlp

    \begin{array}{cc}
    \begin{array}{c}
    \text{minimize:} \\
    x
    \end{array}
    &
    f(x,p)
    \\
    \begin{array}{c}
    \text{subject to:}
    \end{array}
    &
    \begin{array}{rcl}
      x_{\textrm{lb}} \le &  x   & \le x_{\textrm{ub}} \\
      g_{\textrm{lb}} \le &g(x,p)& \le g_{\textrm{ub}}
    \end{array}
    \end{array}


where :math:`x \in \mathbb{R}^{nx}` is the decision variable and :math:`p \in \mathbb{R}^{np}` is a known parameter vector.

An NLP solver in |casadi| is a function that takes the parameter value (``p``), the bounds (``lbx``, ``ubx``, ``lbg``, ``ubg``) and a guess for the primal-dual solution (``x0``, ``lam_x0``, ``lam_g0``) and returns the optimal solution. Unlike integrator objects, NLP solver functions are currently not differentiable functions in |casadi|.

There are several NLP solvers interfaced with |casadi|. The most popular one is IPOPT, an open-source primal-dual interior point method which is included in |casadi| installations. Others, that require the installation of third-party software, include SNOPT, WORHP and KNITRO. Whatever the NLP solver used, the interface will automatically generate the information that it needs to solve the NLP, which may be solver and option dependent. Typically an NLP solver will need a function that gives the Jacobian of the constraint function and a Hessian of the Lagrangian function (:math:`L(x,\lambda) = f(x) + \lambda^{\text{T}} \, g(x)`) with respect to :math:`x`.

Creating NLP solvers
^^^^^^^^^^^^^^^^^^^^

NLP solvers are created using |casadi|'s ``nlpsol`` function. Different solvers and interfaces are implemented as *plugins*.
Consider the following form of the so-called Rosenbrock problem:

.. math::

    \begin{array}{cc}
    \begin{array}{c}
    \text{minimize:} \\
    x,y,z
    \end{array}
    &
    x^2 + 100 \, z^2  \\
    \begin{array}{c}
    \text{subject to:}
    \end{array}
    &  z+(1-x)^2-y = 0
    \end{array}


A solver for this problem, using the "ipopt" plugin, can be created using the syntax:

.. exec-block:: python

    x = SX.sym('x'); y = SX.sym('y'); z = SX.sym('z')
    nlp = {'x':vertcat(x,y,z), 'f':x**2+100*z**2, 'g':z+(1-x)**2-y}
    S = nlpsol('S', 'ipopt', nlp)
    print(S)

.. exec-block:: octave

    x = SX.sym('x'); y = SX.sym('y'); z = SX.sym('z');
    nlp = struct('x',[x;y;z], 'f',x^2+100*z^2, 'g',z+(1-x)^2-y);
    S = nlpsol('S', 'ipopt', nlp);
    disp(S)

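Solver-specific behavior is controlled through the options dictionary described at the beginning of this chapter; plugin options are prefixed with the plugin name. A hedged sketch (the particular option values are arbitrary):

.. code-block:: python

    opts = {'ipopt.max_iter': 50,      # forwarded to IPOPT
            'ipopt.print_level': 0,    # suppress IPOPT output
            'print_time': False}       # a generic CasADi option
    S = nlpsol('S', 'ipopt', nlp, opts)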

Once the solver has been created, we can solve the NLP, using ``[2.5,3.0,0.75]`` as an initial guess, by evaluating the
function ``S``:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym('x'); y = SX.sym('y'); z = SX.sym('z') [hidden]
        nlp = {'x':vertcat(x,y,z), 'f':x**2+100*z**2, 'g':z+(1-x)**2-y} [hidden]
        S = nlpsol('S', 'ipopt', nlp) [hidden]

        r = S(x0=[2.5,3.0,0.75],\
              lbg=0, ubg=0)
        x_opt = r['x']
        print('x_opt: ', x_opt)


    &&

    .. exec-block:: octave

        x = SX.sym('x'); y = SX.sym('y'); z = SX.sym('z'); [hidden]
        nlp = struct('x',[x;y;z], 'f',x^2+100*z^2, 'g',z+(1-x)^2-y); [hidden]
        S = nlpsol('S', 'ipopt', nlp); [hidden]

        r = S('x0',[2.5,3.0,0.75],...
              'lbg',0,'ubg',0);
        x_opt = r.x;
        disp(x_opt)


.. _sec-qpsol:

Quadratic programming
---------------------

|casadi| provides interfaces to solve quadratic programs (QPs). Supported solvers are the open-source solvers qpOASES (distributed with |casadi|), OOQP, OSQP and PROXQP 
as well as the commercial solvers CPLEX and GUROBI.

There are two different ways to solve QPs in |casadi|, using a high-level interface and a low-level interface. They are described in the following.

High-level interface
^^^^^^^^^^^^^^^^^^^^

The high-level interface for quadratic programming mirrors that of nonlinear programming, i.e. it expects a problem of the form :eq:`nlp`,
with the restriction that the objective function :math:`f(x,p)` must be a convex quadratic function in :math:`x` and the constraint function :math:`g(x,p)` must be linear in :math:`x`.
If the functions are not quadratic and linear, respectively, the problem is solved at the current linearization point, given by the "initial guess" for :math:`x`.

If the objective function is not convex, the solver may fail to find a solution, or the solution may not be unique.

To illustrate the syntax, we consider the following convex QP:

.. math::
  :label: simple_qp

    \begin{array}{cc}
    \begin{array}{c}
    \text{minimize:} \\
    x,y
    \end{array}
    &
    x^2 + y^2  \\
    \begin{array}{c}
    \text{subject to:}
    \end{array}
    & x+y-10 \ge 0
    \end{array}


To solve this problem with the high-level interface, we simply replace ``nlpsol`` with ``qpsol`` and use a QP solver plugin such as qpOASES, which is distributed with |casadi|:

.. exec-block:: python

    x = SX.sym('x'); y = SX.sym('y')
    qp = {'x':vertcat(x,y), 'f':x**2+y**2, 'g':x+y-10}
    S = qpsol('S', 'qpoases', qp)
    print(S)

.. exec-block:: octave

    x = SX.sym('x'); y = SX.sym('y');
    qp = struct('x',[x;y], 'f',x^2+y^2, 'g',x+y-10);
    S = qpsol('S', 'qpoases', qp);
    disp(S)

The created solver object ``S`` will have the same input and output signature as the solver objects
created with ``nlpsol``. Since the solution is unique, it is less important to provide an initial guess:


.. side-by-side::
    .. exec-block:: python

        x = SX.sym('x'); y = SX.sym('y') [hidden]
        qp = {'x':vertcat(x,y), 'f':x**2+y**2, 'g':x+y-10} [hidden]
        S = qpsol('S', 'qpoases', qp) [hidden]

        r = S(lbg=0)
        x_opt = r['x']
        print('x_opt: ', x_opt)

    &&

    .. exec-block:: octave

        x = SX.sym('x'); y = SX.sym('y'); [hidden]
        qp = struct('x',[x;y], 'f',x^2+y^2, 'g',x+y-10); [hidden]
        S = qpsol('S', 'qpoases', qp); [hidden]

        r = S('lbg',0);
        x_opt = r.x;
        disp(x_opt)


Low-level interface
^^^^^^^^^^^^^^^^^^^

The low-level interface, on the other hand, solves QPs of the following form:

.. math::
  :label: qp

    \begin{array}{cc}
    \begin{array}{c}
    \text{minimize:} \\
    x
    \end{array}
    &
    \frac{1}{2} x^T \, H \, x + g^T \, x
    \\
    \begin{array}{c}
    \text{subject to:}
    \end{array}
    &
    \begin{array}{rcl}
      x_{\textrm{lb}} \le &  x   & \le x_{\textrm{ub}} \\
      a_{\textrm{lb}} \le & A \, x& \le a_{\textrm{ub}}
    \end{array}
    \end{array}


Encoding problem :eq:`simple_qp` in this form, omitting bounds that are infinite, is straightforward:


.. side-by-side::
    .. code-block:: python

        H = 2*DM.eye(2)
        A = DM.ones(1,2)
        g = DM.zeros(2)
        lba = 10.


    .. code-block:: octave

        H = 2*DM.eye(2);
        A = DM.ones(1,2);
        g = DM.zeros(2);
        lba = 10;

To create a solver instance, instead of passing symbolic expressions for the QP, we now pass the sparsity patterns of the matrices :math:`H` and :math:`A`.
Since we used |casadi|'s |DM|-type above, we can simply query the sparsity patterns:

.. side-by-side::
    .. exec-block:: python

        H = 2*DM.eye(2) [hidden]
        A = DM.ones(1,2) [hidden]
        g = DM.zeros(2) [hidden]
        lba = 10. [hidden]

        qp = {}
        qp['h'] = H.sparsity()
        qp['a'] = A.sparsity()
        S = conic('S','qpoases',qp)
        print(S)


    .. exec-block:: octave

        H = 2*DM.eye(2); [hidden]
        A = DM.ones(1,2); [hidden]
        g = DM.zeros(2); [hidden]
        lba = 10; [hidden]

        qp = struct;
        qp.h = H.sparsity();
        qp.a = A.sparsity();
        S = conic('S','qpoases',qp);
        disp(S)


The returned ``Function`` instance will have a *different* input/output signature compared to the high-level interface, one that includes the matrices :math:`H` and :math:`A`:

.. side-by-side::
    .. exec-block:: python

        H = 2*DM.eye(2) [hidden]
        A = DM.ones(1,2) [hidden]
        g = DM.zeros(2) [hidden]
        lba = 10. [hidden]

        qp = {} [hidden]
        qp['h'] = H.sparsity() [hidden]
        qp['a'] = A.sparsity() [hidden]
        S = conic('S','qpoases',qp) [hidden]

        r = S(h=H, g=g, \
              a=A, lba=lba)
        x_opt = r['x']
        print('x_opt: ', x_opt)

    &&

    .. exec-block:: octave

        H = 2*DM.eye(2); [hidden]
        A = DM.ones(1,2); [hidden]
        g = DM.zeros(2); [hidden]
        lba = 10; [hidden]

        qp = struct; [hidden]
        qp.h = H.sparsity(); [hidden]
        qp.a = A.sparsity(); [hidden]
        S = conic('S','qpoases',qp); [hidden]

        r = S('h', H, 'g', g,...
              'a', A, 'lba', lba);
        x_opt = r.x;
        disp(x_opt)



For-loop equivalents
--------------------

When modeling with expression graphs in |casadi|, it is a common pattern to use for-loop constructs of the host language (C++/Python/MATLAB).

The graph size will grow linearly with the loop size :math:`n`, and so will the construction time of the expression graph and the initialization time of functions using that expression.

We offer some special constructs that improve on this complexity.

Map
^^^

Suppose you are interested in computing a function :math:`f : \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}` repeatedly on all columns of a matrix :math:`X \in \mathbb{R}^{n \times N}`, and aggregating all results in a result matrix :math:`Y \in \mathbb{R}^{m \times N}`:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x") [hidden]
        f = Function("f",[x],[sin(x)]) [hidden]

        N = 4
        X = MX.sym("X",1,N)

        print(f)

        ys = []
        for i in range(N):
          ys.append(f(X[:,i]))

        Y = hcat(ys)
        F = Function('F',[X],[Y])
        print(F)

    &&

    .. exec-block:: octave

        x = SX.sym('x'); [hidden]
        f = Function('f',{x},{sin(x)}); [hidden]

        N = 4;
        X = MX.sym('X',1,N);

        disp(f)

        ys = {};
        for i=1:N
          ys{end+1} = f(X(:,i));
        end
        Y = [ys{:}];
        F = Function('F',{X},{Y});
        disp(F)

The aggregate function :math:`F : \mathbb{R}^{n \times N} \rightarrow \mathbb{R}^{m \times N}` can be obtained with the ``map`` construct:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x") [hidden]
        f = Function("f",[x],[sin(x)]) [hidden]
        N = 4 [hidden]


        F = f.map(N)
        print(F)

    &&

    .. exec-block:: octave

        x = SX.sym('x'); [hidden]
        f = Function('f',{x},{sin(x)}); [hidden]
        N = 4; [hidden]

        F = f.map(N);
        disp(F)

|casadi| can be instructed to parallelize when :math:`F` gets evaluated. In the following example, we dedicate 2 threads to the ``map`` task.

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x") [hidden]
        f = Function("f",[x],[sin(x)]) [hidden]
        N = 4 [hidden]

        F = f.map(N,"thread",2)
        print(F)

    &&

    .. exec-block:: octave

        x = SX.sym('x'); [hidden]
        f = Function('f',{x},{sin(x)}); [hidden]
        N = 4; [hidden]

        F = f.map(N,'thread',2);
        disp(F)

The ``map`` operation supports primitive functions :math:`f` with multiple inputs/outputs which can also be matrices. Aggregation will always happen horizontally.
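
A minimal sketch with a two-input primitive function (all symbols below are defined locally for illustration):

.. code-block:: python

    x = SX.sym('x')
    p = SX.sym('p')
    f = Function('f', [x, p], [sin(x) + p])   # two scalar inputs, one scalar output
    F = f.map(4)                              # both inputs are repeated horizontally
    # Evaluate on 1-by-4 inputs; the output is aggregated horizontally as well
    print(F(DM([[0.1, 0.2, 0.3, 0.4]]), DM([[1, 1, 2, 2]])))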

The ``map`` operation exhibits constant graph size and initialization time.

Fold
^^^^

In case each for-loop iteration depends on the result from the previous iteration, the ``fold`` construct applies. In the following, the ``x`` variable acts as an accumulator that is initialized by :math:`x_0 \in \mathbb{R}^{n}`:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x") [hidden]
        f = Function("f",[x],[sin(x)]) [hidden]
        x0 = MX.sym("x0") [hidden]
        N = 4 [hidden]

        x = x0
        for i in range(N):
          x = f(x)

        F = Function('F',[x0],[x])
        print(F)

    &&

    .. exec-block:: octave

        x = SX.sym('x'); [hidden]
        f = Function('f',{x},{sin(x)}); [hidden]
        x0 = MX.sym('x0'); [hidden]
        N = 4; [hidden]

        x = x0;
        for i=1:N
          x = f(x);
        end

        F = Function('F',{x0},{x});
        disp(F)

For a given function :math:`f : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}`, the result function :math:`F : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}` can be obtained with the ``fold`` construct:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x") [hidden]
        f = Function("f",[x],[sin(x)]) [hidden]
        N = 4 [hidden]

        F = f.fold(N)
        print(F)
    &&

    .. exec-block:: octave

        x = SX.sym('x'); [hidden]
        f = Function('f',{x},{sin(x)}); [hidden]
        N = 4; [hidden]

        F = f.fold(N);
        disp(F)

In case intermediate accumulator values are desired as output (:math:`\mathbb{R}^{n} \rightarrow \mathbb{R}^{n \times N}`), use ``mapaccum`` instead of ``fold``.

The ``fold``/``mapaccum`` operation supports primitive functions :math:`f` with multiple inputs/outputs which can also be matrices.
The first input and output are used for accumulating, while the remaining inputs are read column-wise over the iterations.
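
A minimal sketch of ``mapaccum`` with an extra, non-accumulated input (all symbols below are defined locally for illustration):

.. code-block:: python

    x = SX.sym('x')                     # accumulator state
    u = SX.sym('u')                     # per-iteration input, read column-wise
    f = Function('f', [x, u], [x + u])
    F = f.mapaccum(4)
    print(F(0, DM([[1, 2, 3, 4]])))     # intermediate accumulator values [1, 3, 6, 10]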

The ``fold``/``mapaccum`` operation exhibits a graph size and initialization time that scale logarithmically with :math:`n`.

Conditional evaluation
^^^^^^^^^^^^^^^^^^^^^^

It is possible to include conditional evaluation of expressions in |casadi| expression graphs by constructing ``conditional`` function instances. This function takes a number of existing ``Function`` instances :math:`f_1, f_2, \ldots, f_n` as well as a "default" function :math:`f_{\text{def}}`. All these functions must have the same input and output signature, i.e. the same number of inputs and outputs with the same dimensions:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x")
        f0 = Function("f0",[x],[sin(x)])
        f1 = Function("f1",[x],[cos(x)])
        f2 = Function("f2",[x],[tan(x)])
        f_cond = Function.conditional('f_cond', [f0, f1], f2)
        print(f_cond)

    .. exec-block:: octave

        x = SX.sym('x');
        f0 = Function('f0',{x},{sin(x)});
        f1 = Function('f1',{x},{cos(x)});
        f2 = Function('f2',{x},{tan(x)});
        f_cond = Function.conditional('f_cond', {f0, f1}, f2);
        disp(f_cond);

The result is a new function instance with the same input/output signature, but with one additional input corresponding to an index. Its evaluation corresponds to:

.. math::
    f_{\text{cond}}(c, x_1, x_2, \ldots, x_m) = \left\{
    \begin{array}{ll}
    f_0(x_1, x_2, \ldots, x_m) & \text{if $c = 0$,} \\
    f_1(x_1, x_2, \ldots, x_m) & \text{if $c = 1$,} \\
    f_2(x_1, x_2, \ldots, x_m) & \text{otherwise}
    \end{array}
    \right.

Any of the functions above may be missing (i.e. be a null pointer ``Function()``), in which case all of its outputs evaluate to NaN. Note that the evaluation is "short-circuiting", i.e. only the relevant function is evaluated. This also applies to any derivative calculation.
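
A minimal sketch of evaluating ``f_cond`` from the example above numerically; the index is passed as the first argument:

.. code-block:: python

    print(f_cond(0, 0.5))   # index 0: evaluates f0, i.e. sin(0.5)
    print(f_cond(1, 0.5))   # index 1: evaluates f1, i.e. cos(0.5)
    print(f_cond(2, 0.5))   # any other index: the default f2, i.e. tan(0.5)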

A common special case is when there is only a single case in addition to the default case. This is equivalent to an if-then-else statement, which can be written with the shorthand:

.. side-by-side::
    .. exec-block:: python

        x = SX.sym("x")
        f_true = Function("f_true",[x],[cos(x)])
        f_false = Function("f_false",[x],[sin(x)])
        f_cond = Function.if_else('f_cond', f_true, f_false)
        print(f_cond)

    .. exec-block:: octave

        x = SX.sym('x');
        f_true = Function('f_true',{x},{cos(x)});
        f_false = Function('f_false',{x},{sin(x)});
        f_cond = Function.if_else('f_cond', f_true, f_false);
        disp(f_cond);

Note that conditional constructs such as these can result in non-smooth expressions that may not converge if used in a gradient-based optimization algorithm.

.. rubric:: Footnotes

.. [#f1] For problems with a free end time, you can always scale time by introducing an extra parameter and substituting :math:`t` with a dimensionless time variable that goes from 0 to 1.