.. default-domain:: cpp

.. cpp:namespace:: ceres

.. _chapter-analytical_derivatives:

====================
Analytic Derivatives
====================

Consider the problem of fitting the following curve (`Rat43
<http://www.itl.nist.gov/div898/strd/nls/data/ratkowsky3.shtml>`_) to
data:

.. math::
  y = \frac{b_1}{(1+e^{b_2-b_3x})^{1/b_4}}

That is, given some data :math:`\{x_i, y_i\},\ \forall i=1,\dots,n`,
determine parameters :math:`b_1, b_2, b_3` and :math:`b_4` that best
fit this data.

This can be stated as the problem of finding the values of
:math:`b_1, b_2, b_3` and :math:`b_4` that minimize the following
objective function [#f1]_:

.. math::
   \begin{align}
   E(b_1, b_2, b_3, b_4)
   &= \sum_i f^2(b_1, b_2, b_3, b_4 ; x_i, y_i)\\
   &= \sum_i \left(\frac{b_1}{(1+e^{b_2-b_3x_i})^{1/b_4}} - y_i\right)^2\\
   \end{align}

To solve this problem using Ceres Solver, we need to define a
:class:`CostFunction` that, for a given :math:`x` and :math:`y`,
computes the residual :math:`f` and its derivatives with respect to
:math:`b_1, b_2, b_3` and :math:`b_4`.

Using elementary differential calculus, we can see that:

.. math::
  \begin{align}
  D_1 f(b_1, b_2, b_3, b_4; x,y) &= \frac{1}{(1+e^{b_2-b_3x})^{1/b_4}}\\
  D_2 f(b_1, b_2, b_3, b_4; x,y) &=
  \frac{-b_1e^{b_2-b_3x}}{b_4(1+e^{b_2-b_3x})^{1/b_4 + 1}} \\
  D_3 f(b_1, b_2, b_3, b_4; x,y) &=
  \frac{b_1xe^{b_2-b_3x}}{b_4(1+e^{b_2-b_3x})^{1/b_4 + 1}} \\
  D_4 f(b_1, b_2, b_3, b_4; x,y) & = \frac{b_1  \log\left(1+e^{b_2-b_3x}\right) }{b_4^2(1+e^{b_2-b_3x})^{1/b_4}}
  \end{align}

With these derivatives in hand, we can now implement the
:class:`CostFunction` as:

.. code-block:: c++

  class Rat43Analytic : public SizedCostFunction<1,4> {
     public:
       Rat43Analytic(const double x, const double y) : x_(x), y_(y) {}
       virtual ~Rat43Analytic() {}
       virtual bool Evaluate(double const* const* parameters,
                             double* residuals,
                             double** jacobians) const {
         const double b1 = parameters[0][0];
         const double b2 = parameters[0][1];
         const double b3 = parameters[0][2];
         const double b4 = parameters[0][3];

         residuals[0] = b1 *  pow(1 + exp(b2 -  b3 * x_), -1.0 / b4) - y_;

         // The caller may only need the residual value; both jacobians and
         // jacobians[0] can be null, in which case there is nothing more to do.
         if (!jacobians) return true;
         double* jacobian = jacobians[0];
         if (!jacobian) return true;

         // jacobian[i] is the derivative of the residual with respect to
         // b_{i+1}, using the expressions derived above.
         jacobian[0] = pow(1 + exp(b2 - b3 * x_), -1.0 / b4);
         jacobian[1] = -b1 * exp(b2 - b3 * x_) *
                       pow(1 + exp(b2 - b3 * x_), -1.0 / b4 - 1) / b4;
         jacobian[2] = x_ * b1 * exp(b2 - b3 * x_) *
                       pow(1 + exp(b2 - b3 * x_), -1.0 / b4 - 1) / b4;
         jacobian[3] = b1 * log(1 + exp(b2 - b3 * x_)) *
                       pow(1 + exp(b2 - b3 * x_), -1.0 / b4) / (b4 * b4);
         return true;
       }

      private:
       const double x_;
       const double y_;
   };

This code is tedious to write, hard to read, and contains a lot of
redundancy, so in practice we cache the common sub-expressions to
improve its efficiency, which gives us something like:

.. code-block:: c++

  class Rat43AnalyticOptimized : public SizedCostFunction<1,4> {
     public:
       Rat43AnalyticOptimized(const double x, const double y) : x_(x), y_(y) {}
       virtual ~Rat43AnalyticOptimized() {}
       virtual bool Evaluate(double const* const* parameters,
                             double* residuals,
                             double** jacobians) const {
         const double b1 = parameters[0][0];
         const double b2 = parameters[0][1];
         const double b3 = parameters[0][2];
         const double b4 = parameters[0][3];

         // Cache the sub-expressions shared by the residual and the Jacobian.
         const double t1 = exp(b2 -  b3 * x_);
         const double t2 = 1 + t1;
         const double t3 = pow(t2, -1.0 / b4);
         residuals[0] = b1 * t3 - y_;

         if (!jacobians) return true;
         double* jacobian = jacobians[0];
         if (!jacobian) return true;

         const double t4 = pow(t2, -1.0 / b4 - 1);
         jacobian[0] = t3;
         jacobian[1] = -b1 * t1 * t4 / b4;
         jacobian[2] = -x_ * jacobian[1];
         jacobian[3] = b1 * log(t2) * t3 / (b4 * b4);
         return true;
       }

     private:
       const double x_;
       const double y_;
   };

What is the difference in performance of these two implementations?

==========================   =========
CostFunction                 Time (ns)
==========================   =========
Rat43Analytic                      255
Rat43AnalyticOptimized              92
==========================   =========

``Rat43AnalyticOptimized`` is :math:`2.8` times faster than
``Rat43Analytic``.  This difference in run-time is not uncommon. To
get the best performance out of analytically computed derivatives, one
usually needs to optimize the code to account for common
sub-expressions.
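
For completeness, here is a minimal sketch of how a cost function like
``Rat43AnalyticOptimized`` might be used in a Ceres problem. The data
arrays ``x_data`` and ``y_data``, the observation count
``num_observations``, and the initial values in ``b`` are hypothetical
placeholders, not part of the original example.

.. code-block:: c++

  #include <iostream>
  #include "ceres/ceres.h"

  void FitRat43(const double* x_data,
                const double* y_data,
                int num_observations) {
    // Hypothetical initial guess for b1, b2, b3 and b4.
    double b[4] = {100.0, 10.0, 1.0, 1.0};

    ceres::Problem problem;
    for (int i = 0; i < num_observations; ++i) {
      // One residual block per observation; the problem takes ownership of
      // the cost function. A null loss function means plain least squares.
      problem.AddResidualBlock(
          new Rat43AnalyticOptimized(x_data[i], y_data[i]), nullptr, b);
    }

    ceres::Solver::Options options;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    std::cout << summary.BriefReport() << "\n";
  }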


When should you use analytical derivatives?
===========================================

#. The expressions are simple, e.g. mostly linear.

#. A computer algebra system like `Maple
   <https://www.maplesoft.com/products/maple/>`_, `Mathematica
   <https://www.wolfram.com/mathematica/>`_, or `SymPy
   <http://www.sympy.org/en/index.html>`_ can be used to symbolically
   differentiate the objective function and generate the C++ code
   needed to evaluate the resulting derivatives.

#. Performance is of utmost concern and there is algebraic structure
   in the terms that you can exploit to get better performance than
   automatic differentiation.

   That said, getting the best performance out of analytical
   derivatives requires a non-trivial amount of work.  Before going
   down this path, it is useful to measure the amount of time being
   spent evaluating the Jacobian as a fraction of the total solve time
   (a sketch of how to do this is given after this list) and remember
   `Amdahl's Law <https://en.wikipedia.org/wiki/Amdahl's_law>`_ is
   your friend.

#. There is no other way to compute the derivatives, e.g. you
   wish to compute the derivative of the root of a polynomial:

   .. math::
     a_3(x,y)z^3 + a_2(x,y)z^2 + a_1(x,y)z + a_0(x,y) = 0


   with respect to :math:`x` and :math:`y`. This requires the use of
   the `Inverse Function Theorem
   <https://en.wikipedia.org/wiki/Inverse_function_theorem>`_; a sketch
   of the resulting expression is given after this list.

#. You love the chain rule and actually enjoy doing all the algebra by
   hand.
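
As mentioned in item 3, it is worth measuring the Jacobian evaluation
time before hand-optimizing anything. A minimal sketch, assuming a
``summary`` object obtained from ``ceres::Solve`` as in the usage
sketch above; :class:`Solver::Summary` records these timings directly:

.. code-block:: c++

  // Fraction of the total solve time spent evaluating Jacobians. If this
  // fraction is small, Amdahl's Law says that even a large speedup of the
  // Jacobian code yields only a small reduction in overall solve time.
  const double jacobian_fraction =
      summary.jacobian_evaluation_time_in_seconds /
      summary.total_time_in_seconds;
  std::cout << "Jacobian evaluation: " << 100.0 * jacobian_fraction
            << "% of total solve time\n";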
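
To make item 4 concrete, write the polynomial as :math:`F(x, y, z) =
a_3(x,y)z^3 + a_2(x,y)z^2 + a_1(x,y)z + a_0(x,y)`, so that the root of
interest satisfies :math:`F(x, y, z(x, y)) = 0`. Differentiating this
identity with respect to :math:`x` and solving for the derivative of
the root gives, wherever :math:`\partial F / \partial z \neq 0`,

.. math::
  \frac{\partial z}{\partial x}
  = -\frac{\partial F / \partial x}{\partial F / \partial z}
  = -\frac{\frac{\partial a_3}{\partial x} z^3
           + \frac{\partial a_2}{\partial x} z^2
           + \frac{\partial a_1}{\partial x} z
           + \frac{\partial a_0}{\partial x}}
          {3 a_3 z^2 + 2 a_2 z + a_1}

and analogously for :math:`\partial z / \partial y`. Since :math:`z`
is only available as a numerical root, there is no closed-form
residual to hand to automatic differentiation, and the derivative has
to be supplied analytically.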


.. rubric:: Footnotes

.. [#f1] The notion of best fit depends on the choice of the objective
         function used to measure the quality of fit, which in turn
         depends on the underlying noise process which generated the
         observations. Minimizing the sum of squared differences is
         the right thing to do when the noise is `Gaussian
         <https://en.wikipedia.org/wiki/Normal_distribution>`_. In
         that case the optimal value of the parameters is the `Maximum
         Likelihood Estimate
         <https://en.wikipedia.org/wiki/Maximum_likelihood_estimation>`_.