/**

\page tutorial-ibvs Tutorial: Image-based visual servo
\tableofcontents

\section ibvs_intro Introduction

The aim of all vision-based control schemes is to minimize an
error \f${\bf e}(t)\f$, which is typically defined by \f${\bf e}(t) = {\bf s}[{\bf m}(t), {\bf a}]-{\bf s}^*\f$.

Traditional image-based control schemes use the
image-plane coordinates of a set of points to define the set
of visual features \f$\bf s\f$. The image measurements \f$\bf m\f$ are usually the pixel coordinates of this set of image points (but this is not the only possible choice),
and the camera intrinsic parameters \f$\bf a\f$ are used to go from image measurements expressed in pixels to the features.

For a 3D point with coordinates \f${\bf X} = (X,Y,Z)\f$ in the camera frame, which projects in the image as a 2D point with coordinates \f${\bf x} = (x,y)\f$, we have:

\f[ 
\left\{\begin{array}{l}
x = X/Z = (u - u_0)/p_x\\
y = Y/Z = (v - v_0)/p_y
\end{array}\right.
\f]

where \f${\bf m}=(u,v)\f$ gives the coordinates of the image point
expressed in pixel units, and \f${\bf a}=(u_0,v_0, p_x,p_y)\f$ is the set of
camera intrinsic parameters: \f$u_0\f$ and \f$v_0\f$ are the coordinates of
the principal point, while \f$p_x\f$ and \f$p_y\f$ are the ratios between the focal length and the pixel size along the \f$x\f$ and \f$y\f$ axes.
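
As an illustration, this pixel-to-meter conversion is implemented in ViSP by vpPixelMeterConversion. Below is a minimal sketch using arbitrary example values for the intrinsics and the image point:

\code
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpPixelMeterConversion.h>

vpCameraParameters cam(840, 840, 320, 240); // px, py, u0, v0
double u = 400, v = 300;                    // image point expressed in pixels
double x, y;                                // normalized coordinates: x = (u-u0)/px, y = (v-v0)/py
vpPixelMeterConversion::convertPoint(cam, u, v, x, y);
\endcode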

Let the spatial velocity of the camera be denoted by \f$ {\bf v}_c = (v_c, \omega_c)\f$, with \f$ v_c \f$ the instantaneous linear velocity of the
origin of the camera frame and \f$ \omega_c \f$ the instantaneous angular
velocity of the camera frame. The relationship between \f$ \dot{\bf s} \f$, the time variation of the feature \f${\bf s} = {\bf x}\f$, and the camera velocity \f$ {\bf v}_c \f$ is given by

\f[ \dot{\bf s} = {\bf L_x} {\bf v}_c\f]

where the interaction matrix \f$ {\bf L_x}\f$ is given by

  \f[ {\bf L_x} =  
  \left[\begin{array}{cccccc}
  -1/Z & 0 & x/Z & xy & -(1+x^2) & y \\
  0 & -1/Z & y/Z & 1+y^2 & -xy & -x
  \end{array}\right]\f]
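
In ViSP, this interaction matrix can be obtained directly from a point feature. A minimal sketch, where the normalized coordinates and the depth are arbitrary example values:

\code
#include <visp3/visual_features/vpFeaturePoint.h>

double x = 0.1, y = -0.05, Z = 0.75; // normalized coordinates and depth of the point
vpFeaturePoint s;
s.buildFrom(x, y, Z);
vpMatrix L = s.interaction(); // the 2x6 matrix given above
\endcode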


Considering \f$ {\bf v}_c \f$ as the input to the robot
controller, and imposing for instance an exponential decoupled decrease of the error \f$ \dot{\bf e} = -\lambda {\bf e}\f$, we obtain, since \f$ \dot{\bf e} = {\bf L_x} {\bf v}_c \f$ when \f${\bf s}^*\f$ is constant:

 \f[ {\bf v}_c = -\lambda {\bf L}_{\bf x}^{+} {\bf e} \f]

where \f${\bf L}_{\bf x}^{+}\f$ denotes the Moore-Penrose pseudo-inverse of the interaction matrix.

Note that all the material (source code and images) described in this tutorial is part of the ViSP source code and can be downloaded using the following command:

\code
$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/visual-servo/ibvs
\endcode
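
If subversion is not installed, the whole repository can also be cloned with git; the material then lies in the \c tutorial/visual-servo/ibvs folder of your clone:

\code
$ git clone https://github.com/lagadic/visp.git
\endcode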

\section ibvs_basic Basic IBVS simulation

The following example available in tutorial-ibvs-4pts.cpp shows how to use ViSP to implement an IBVS simulation using 4 points as visual features.

\include tutorial-ibvs-4pts.cpp

Now we give a line-by-line description of the source:

\code
#include <visp3/visual_features/vpFeatureBuilder.h>
\endcode

This header is a common entry point to the classes that implement visual features, in particular vpFeaturePoint that will allow us to handle the point feature \f${\bf x} = (x,y)\f$ described in the \ref ibvs_intro.

\code
#include <visp3/vs/vpServo.h>
\endcode

Include the header of the vpServo class that implements the control law \f[ {\bf v}_c = -\lambda {\bf L}_{\bf x}^{+} {\bf e} \f] described in the \ref ibvs_intro.

\code
#include <visp3/robot/vpSimulatorCamera.h>
\endcode

Include the header of the vpSimulatorCamera class that allows simulating a 6-dof free-flying camera.

Then in the main() function, we define the desired and initial position of the camera as two homogeneous matrices; \c cdMo refers to \f${^c}^*{\bf M}_o\f$ and \c cMo  to \f${^c}{\bf M}_o\f$.

\code
  vpHomogeneousMatrix cdMo(0, 0, 0.75, 0, 0, 0);
  vpHomogeneousMatrix cMo(0.15, -0.1, 1., vpMath::rad(10), vpMath::rad(-10), vpMath::rad(50));
\endcode
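
For reference, the six arguments of this vpHomogeneousMatrix constructor are the translation in meters followed by a \f$\theta {\bf u}\f$ rotation vector in radians. A minimal sketch to inspect such a pose:

\code
vpTranslationVector t;
vpThetaUVector tu;
cdMo.extract(t);  // translation part: (0, 0, 0.75)
cdMo.extract(tu); // rotation part as a thetau vector: (0, 0, 0)
std::cout << cdMo << std::endl; // print the 4x4 matrix
\endcode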

Then we define four 3D points that represent the corners of a 20cm by 20cm square.
\code
  vpPoint point[4] ;
  point[0].setWorldCoordinates(-0.1,-0.1, 0);
  point[1].setWorldCoordinates( 0.1,-0.1, 0);
  point[2].setWorldCoordinates( 0.1, 0.1, 0);
  point[3].setWorldCoordinates(-0.1, 0.1, 0);
\endcode

The instantiation of the visual servo task is done with the next lines. We initialize the task as an eye-in-hand visual servo. The resulting velocities computed by the controller are those that should be applied in the camera frame: \f$ {\bf v}_c \f$. The interaction matrix will be computed from the current visual features, which thus need to be updated at each iteration of the control loop. Finally, the constant gain \f$ \lambda\f$ is set to 0.5.
\code
  vpServo task ;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);
\endcode
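
Note that other interaction matrix types are available; for instance vpServo::DESIRED selects the constant matrix computed once from the desired features:

\code
  task.setInteractionMatrixType(vpServo::DESIRED); // constant matrix computed from s*
\endcode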

It is now time to define four visual features as points in the image-plane. To this end we instantiate the vpFeaturePoint class. The current point feature \f${\bf s}\f$ is implemented in \c p[i]. The desired point feature \f${\bf s}^*\f$ is implemented in \c pd[i]. 
\code
  vpFeaturePoint p[4], pd[4] ;
\endcode

Each feature is obtained by computing the position of the 3D points in the corresponding camera frame, and then by applying the perspective projection. Once current and desired features are created, they are added to the visual servo task.
\code
  for (unsigned int i = 0 ; i < 4 ; i++) {
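    // Compute the desired features s* by projecting the points at the desired camera position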
    point[i].track(cdMo);
    vpFeatureBuilder::create(pd[i], point[i]);
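    // Compute the current features s by projecting the points at the initial camera position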
    point[i].track(cMo);
    vpFeatureBuilder::create(p[i], point[i]);
    task.addFeature(p[i], pd[i]);
  }
\endcode

For the simulation we first need to create two homogeneous transformations \c wMc and \c wMo that define respectively the position of the camera and the position of the object in the world frame.

\code
  vpHomogeneousMatrix wMc, wMo;
\endcode

Secondly, we create an instance of our free-flying camera. Here we also set the sampling time to 0.040 seconds. When a velocity is applied to the camera, this time will be used by the exponential map to determine the next position of the camera.

\code
  vpSimulatorCamera robot;
  robot.setSamplingTime(0.040);
\endcode
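
When robot.setVelocity() is called later in the servo loop, the simulator integrates the camera pose with the exponential map. In essence, the update amounts to the following sketch (not the actual implementation), where \c v is the applied velocity:

\code
#include <visp3/core/vpExponentialMap.h>

// Displacement produced by velocity v (m/s and rad/s) applied during the 0.040 s sampling time
wMc = wMc * vpExponentialMap::direct(v, 0.040);
\endcode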

Finally, from the initial position \c wMc of the camera and the position of the object previously fixed in the camera frame \c cMo, we compute the position of the object in the world frame \c wMo. Since in our simulation the object is static, \c wMo will remain unchanged.

\code
  robot.getPosition(wMc);
  wMo = wMc * cMo;
\endcode
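
This is simply the frame composition \f${^w}{\bf M}_o = {^w}{\bf M}_c \; {^c}{\bf M}_o\f$.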

Now we can enter the visual servo loop. When a velocity is applied to our free-flying camera, the position \c wMc of the camera frame evolves wrt the world frame. From this position we compute the position of the object in the new camera frame as \f${^c}{\bf M}_o = ({^w}{\bf M}_c)^{-1} \; {^w}{\bf M}_o\f$.
\code
    robot.getPosition(wMc);
    cMo = wMc.inverse() * wMo;
\endcode

The current visual features are then updated by projecting the 3D points in the image-plane associated to the new camera location \c cMo.
\code
  for (unsigned int i = 0 ; i < 4 ; i++) {
    point[i].track(cMo);
    vpFeatureBuilder::create(p[i], point[i]);
  }
\endcode

Finally, the camera velocity \f$ {\bf v}_c \f$ is computed.
\code
  vpColVector v = task.computeControlLaw();
\endcode

This 6-dimensional velocity vector is then applied to the camera.

\code
  robot.setVelocity(vpRobot::CAMERA_FRAME, v);
\endcode
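
At each iteration you can also monitor the task error \f$\bf e\f$, for instance to stop the loop once the servo has converged. A possible addition:

\code
  vpColVector e = task.getError(); // updated by computeControlLaw()
  if (e.sumSquare() < 1e-6) {
    break; // the error is small enough: the servo has converged
  }
\endcode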

Before exiting the program, we free the memory associated with the task by killing it.
\code
  task.kill();
\endcode

The next tutorial \ref tutorial-plotter shows how to modify the previous example to plot curves that illustrate the visual servo behavior.

\section ibvs_display IBVS simulation with basic viewers

The previous IBVS simulation can easily be modified to introduce basic internal and external camera viewers. This is implemented in tutorial-ibvs-4pts-display.cpp and given below:

\include tutorial-ibvs-4pts-display.cpp

The result of this program is visible in the next videos. The first one shows the internal camera view with the straight line trajectory of each point in the image. The second one provides an external view that shows the 3D camera trajectory to reach the desired position.

\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/sPNkDfp3UeU" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/ejzZqkyXXHw" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

We now explain the new lines that were introduced.

First, we add the headers of the classes that are used to introduce the viewers and some display functionality. vpProjectionDisplay is the class that handles the external view from a given camera position, while vpServoDisplay overlays the position of the current and desired features in the internal camera view.

\code
#include <visp3/gui/vpDisplayX.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/vs/vpProjectionDisplay.h>
#include <visp3/vs/vpServoDisplay.h>
\endcode

Secondly, we introduce the function display_trajectory() that displays the trajectory of the current features in the image. From the 3D coordinates of the points given in the object frame, we compute their respective positions in the camera frame, then apply the perspective projection before retrieving their sub-pixel positions in the image using the camera parameters. The successive sub-pixel positions are stored in an array of vectors named \c traj and displayed as green trajectories.

\code
void display_trajectory(const vpImage<unsigned char> &I, std::vector<vpPoint> &point,
                        const vpHomogeneousMatrix &cMo, const vpCameraParameters &cam)
{
  int thickness = 1;
  static std::vector<vpImagePoint> traj[4];
  vpImagePoint cog;
  for (unsigned int i=0; i<4; i++) {
    // Project the point at the given camera position
    point[i].project(cMo);
    vpMeterPixelConversion::convertPoint(cam, point[i].get_x(), point[i].get_y(), cog);
    traj[i].push_back(cog);
  }
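  // Display each trajectory as a green broken line linking the successive positions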
  for (unsigned int i=0; i<4; i++) {
    for (unsigned int j=1; j<traj[i].size(); j++) {
      vpDisplay::displayLine(I, traj[i][j-1], traj[i][j], vpColor::green, thickness);
    }
  }
}
\endcode

We then enter the main() where we create two images that will each be displayed in a window. The first image \c Iint is dedicated to the internal camera view. It shows the content of the image seen by the simulated camera. The second image \c Iext corresponds to the image seen by an external camera that observes the simulated camera.

\code
  vpImage<unsigned char> Iint(480, 640, 255) ;
  vpImage<unsigned char> Iext(480, 640, 255) ;
#if defined(VISP_HAVE_X11)
  vpDisplayX displayInt(Iint, 0, 0, "Internal view");
  vpDisplayX displayExt(Iext, 670, 0, "External view");
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI displayInt(Iint, 0, 0, "Internal view");
  vpDisplayGDI displayExt(Iext, 670, 0, "External view");
#else
  std::cout << "No image viewer is available..." << std::endl;
#endif
\endcode

We create an instance of the vpProjectionDisplay class. This class is only available if at least one of the displays (X11, GDI, OpenCV, GTK, D3D9) is installed, which is why we protect it with the VISP_HAVE_DISPLAY macro. We then insert the points used to define the target. Later, the 3D coordinates of these points, defined in the object frame, will be projected in \c Iext.
\code
#if defined(VISP_HAVE_DISPLAY)
  vpProjectionDisplay externalview;
  for (unsigned int i = 0 ; i < 4 ; i++)
    externalview.insert(point[i]) ;
#endif
\endcode

We initialize the intrinsic camera parameters that are used in display_trajectory() to determine the pixel positions of the visual features in the image. The first two arguments are the ratios \f$p_x, p_y\f$ introduced in \ref ibvs_intro, the last two the principal point coordinates, set here to the image center.
\code
  vpCameraParameters cam(840, 840, Iint.getWidth()/2, Iint.getHeight()/2);
\endcode

We also set the position of the external camera that will observe the evolution of the simulated camera during the servo.
\code
  vpHomogeneousMatrix cextMo(0,0,3, 0,0,0);
\endcode

Finally, at each iteration of the servo loop we update the viewers:

\code
    vpDisplay::display(Iint) ;
    vpDisplay::display(Iext) ;
    display_trajectory(Iint, point, cMo, cam);

    vpServoDisplay::display(task, cam, Iint, vpColor::green, vpColor::red);
#if defined(VISP_HAVE_DISPLAY)
    externalview.display(Iext, cextMo, cMo, cam, vpColor::red, true);
#endif
    vpDisplay::flush(Iint);
    vpDisplay::flush(Iext);
\endcode
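
After the servo loop, you may want to keep the windows open until the user clicks in one of them; an optional addition:

\code
  vpDisplay::getClick(Iint); // block until a click occurs in the internal view
\endcode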

\section ibvs_wireframe IBVS simulation with wireframe viewers

The IBVS simulation presented in section \ref ibvs_basic can also be modified to introduce wireframe internal and external camera viewers. This is implemented in tutorial-ibvs-4pts-wireframe-camera.cpp and given below:

\include tutorial-ibvs-4pts-wireframe-camera.cpp

The result of this program is visible in the next videos. The first one shows the internal wireframe camera view with the straight line trajectory of each point in the image. The second one provides an external wireframe view that shows the 3D camera trajectory to reach the desired position.

\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/riv3LBg6FYY" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/EnfUz5p9QNs" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

We now explain the new lines that were introduced.

First we include the header of the wireframe simulator.
\code
#include <visp3/robot/vpWireFrameSimulator.h>
\endcode

Then in the main(), we create an instance of the simulator. First we initialize the object in the scene. We recall that the target is a 20cm by 20cm square, which corresponds exactly to an object handled by the simulator and defined as vpWireFrameSimulator::PLATE. With vpWireFrameSimulator::D_STANDARD we indicate that the object displayed at the desired position is also a PLATE. Secondly, we initialize the camera position wrt the object using \c cMo, as well as the desired camera position, the one the camera has to reach at the end of the servo, using \c cdMo. Next, we initialize the position of the static external camera that will observe the simulated camera during the servoing, using \c cextMo. All these poses are defined wrt the object frame. Finally, we set the intrinsic camera parameters used for the internal and external viewers.

\code
  vpWireFrameSimulator sim;
  sim.initScene(vpWireFrameSimulator::PLATE, vpWireFrameSimulator::D_STANDARD);
  sim.setCameraPositionRelObj(cMo);
  sim.setDesiredCameraPosition(cdMo);
  sim.setExternalCameraPosition(cextMo);
  sim.setInternalCameraParameters(cam);
  sim.setExternalCameraParameters(cam);
\endcode

At each iteration of the servo loop, we update the wireframe simulator with the new camera position.

\code
sim.setCameraPositionRelObj(cMo);
\endcode

Then we update the drawings in the internal and external viewers associated to their respective images.

\code
    sim.getInternalImage(Iint);
    sim.getExternalImage(Iext);
\endcode

Note here that the same kind of simulation could be achieved with a Viper arm or a gantry robot. The programs that do these simulations are provided in tutorial-ibvs-4pts-wireframe-robot-viper.cpp and in tutorial-ibvs-4pts-wireframe-robot-afma6.cpp.

The next video illustrates the behavior of the same IBVS implemented in tutorial-ibvs-4pts-wireframe-robot-viper.cpp on a Viper arm.

\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/g0B5ug0ooYo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

\section ibvs_tracking IBVS simulation with image processing

\note In this section we assume that you are familiar with:
- the blob tracker presented in \ref tutorial-tracking-blob
- image projection capabilities described in \ref tutorial-simu-image.

The IBVS simulation presented in section \ref ibvs_basic can be modified to introduce image processing that tracks the point features using a blob tracker. This is implemented in tutorial-ibvs-4pts-image-tracking.cpp. The code is given below:

\include tutorial-ibvs-4pts-image-tracking.cpp

The result of this program is visible in the next video. 

\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/gn3tfoOxwKo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly

We now explain the new lines that were introduced.

First we create a white image in which the target will be projected prior to the image processing.
\code
  vpImage<unsigned char> I(480, 640, 255);
\endcode

Since we will have to convert pixel coordinates to visual features expressed in meters, we need to initialize the intrinsic camera parameters.

\code
  vpCameraParameters cam(840, 840, I.getWidth()/2, I.getHeight()/2);
\endcode

To retrieve the simulated image that depends on the simulated camera position, we create an instance of a virtual grabber. Its goal is to project the image of the target \c ./target_square.pgm at a given camera position \c cMo, and then to retrieve the corresponding image \c I. Note here that the centers of gravity (cog) of the blobs in the .pgm file define a 20cm by 20cm square.
\code
  vpVirtualGrabber g("./target_square.pgm", cam);

  g.acquire(I, cMo);
\endcode

The current visual features are computed using a vpDot2 blob tracker. Once the four trackers are instantiated, we wait for a mouse click in each blob to initialize the trackers. Then we compute the current visual features \f$(x,y) \f$ from the camera parameters and the cog position of each blob.

\code
  for (unsigned int i = 0 ; i < 4 ; i++) {
    ...
    dot[i].setGraphics(true);
    dot[i].initTracking(I);
    vpDisplay::flush(I);
    vpFeatureBuilder::create(p[i], cam, dot[i].getCog());
    ...
  }
\endcode
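
If a blob is lost during the servo (for instance due to image noise or fast motion), the vpDot2 tolerances can be relaxed before calling initTracking(); the values below are hypothetical and have to be tuned:

\code
    dot[i].setGrayLevelPrecision(0.8);      // tolerance on the blob gray level
    dot[i].setSizePrecision(0.5);           // tolerance on the blob size
    dot[i].setEllipsoidShapePrecision(0.8); // tolerance on the blob shape
\endcode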

In the visual servo loop, at each iteration we get a new image of the target.

\code
    g.acquire(I, cMo);
\endcode

We track each blob and update the values of the current visual features as previously.
\code
    for (unsigned int i = 0 ; i < 4 ; i++) {
      dot[i].track(I);
      vpFeatureBuilder::create(p[i], cam, dot[i].getCog());    
\endcode

As described in the \ref ibvs_intro, since the interaction matrix \f$\bf L_x \f$
depends not only on \f$(x,y)\f$ but also on the depth \f$Z\f$ of the feature, we need to update \f$Z\f$. This is done by projecting each 3D point of the target in the camera frame using:

\code
      vpColVector cP;
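      // cP receives the coordinates of the 3D point expressed in the camera frame; cP[2] is the depth Z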
      point[i].changeFrame(cMo, cP) ;
      p[i].set_Z(cP[2]);
    }
\endcode

\section ibvs_next Next tutorial
You are now ready to see the \ref tutorial-simu-robot-pioneer and \ref tutorial-boost-vs. 
*/