/**
\page tutorial-tracking-mb-generic-rgbd-Blender Tutorial: How to use Blender to generate simulated data for model-based tracking experiments
\tableofcontents
\section mb_Blender_intro Introduction
This tutorial will show how to use [Blender](https://www.blender.org/), a free and open source 3D creation
suite, to create a simple textured object like a tee box, generate color and depth images from a virtual RGB-D
camera, retrieve color and depth camera intrinsics and get ground truth color camera poses while the RGB-D
camera is animated.
Once generated by Blender, the data can be used by the model-based tracker and the results can be compared
with the ground truth.
This tutorial was tested on Ubuntu and macOS with the following versions:
OS | Blender
------------------- | -------------
Ubuntu 22.04 | Blender 3.6.4
macOS Ventura 13.6 | Blender 3.4.1
\warning You are advised to know how to use the basic tools of Blender before reading this tutorial.
Some non-exhaustive links:
- [Blender Reference Manual](https://docs.blender.org/manual/en/latest/index.html)
- [Blender 3D: Noob to Pro](https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Parenting)
Note that all the material (source code, input video, CAD model and XML settings files) described in this tutorial is
part of the ViSP source code (in the `tracking/model-based/generic-rgbd-blender` folder) and can be found at
https://github.com/lagadic/visp/tree/master/tracking/model-based/generic-rgbd-blender.
\section mb_Blender_setup Considered setup
Remember that for each object considered by the model-based tracker, you need a \ref mb_generic_model
(`object_name.cao` or `object_name.wrl`) and a file for \ref mb_generic_init by mouse click (`object_name.init`).
\subsection mb_Blender_setup_teabox Tea box object
In this tutorial we will consider a tea box. The CAD model file (`teabox.cao`) and the init file (`teabox.init`)
are provided in `model/teabox` folder:
\code{.sh}
$ cd $VISP_WS/visp-build/tutorial/tracking/model-based/generic-rgbd-blender
$ ls model/teabox
teabox.cao teabox.png teabox_color.xml
teabox.init teabox.wrl teabox_depth.xml
\endcode
To simplify this tutorial, rather than creating an object with arbitrary dimensions, we will create in Blender a tea box
whose dimensions, object frame, and 3D point coordinates match those in the `teabox.cao` and `teabox.init` files.
\note If you are not familiar with these files and their content, you may follow \ref tutorial-tracking-mb-generic.
The content of the `teabox.cao` file is the following:
\includelineno tutorial/tracking/model-based/generic-rgbd-blender/model/teabox/teabox.cao
The corresponding CAD model is the following:
\image html img-teabox-cao.jpg
Analysing `teabox.cao` we can see that the box dimensions are the following (see \ref mb_generic_teabox_cao):
- Height (from point 0 to point 1): 0.08 meters
- Length (from point 0 to point 3): 0.165 meters
- Width (from point 0 to point 7): 0.068 meters
\subsection mb_Blender_setup_rgbd RGB-D camera
We will also consider an RGB-D camera where the left camera is a classical color camera and the right camera
a depth camera. The distance between the left and right cameras is 10 cm.
Both cameras grab 640 x 480 images.
| Camera | Setting | Values |
|--------|------------------| :-----------: |
| Color | Image resolution | 640 x 480 |
| ^ | Focal length | 35 mm |
| ^ | Sensor size | 32 mm x 24 mm |
| Depth | Image resolution | 640 x 480 |
| ^ | Focal length | 30 mm |
| ^ | Sensor size | 32 mm x 24 mm |
\note In ViSP (see vpCameraParameters class) and in the computer vision community, camera intrinsic parameters are the
following:
\f[
{\bf K} = \begin{bmatrix}
p_x & 0 & u_0 \\
0 & p_y & v_0 \\
0 & 0 & 1
\end{bmatrix}
\f]
where:
- \f$ \left( p_x, p_y \right) \f$ corresponds to the ratio between the focal length and the pixel size
- and \f$ \left( u_0, v_0 \right) \f$ to the principal point location.
.
The relations that link \f$ p_x \f$ and \f$ p_y \f$ to the focal length \f$ f \f$, the camera sensor size and the image
resolution are the following:
\f[
p_x = \frac{f \times \text{image width}}{\text{sensor width}}
\f]
\f[
p_y = \frac{f \times \text{image height}}{\text{sensor height}}
\f]
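As a sanity check, these relations can be applied to the two cameras described above (a minimal sketch; placing the principal point at the image center is an assumption, consistent with Blender's default, unshifted sensor):

```python
def pixel_focal(f_mm, sensor_mm, resolution_px):
    """Focal length expressed in pixels along one axis."""
    return f_mm * resolution_px / sensor_mm

# Color camera: f = 35 mm, sensor 32 mm x 24 mm, image 640 x 480
px_color = pixel_focal(35, 32, 640)   # 700.0
py_color = pixel_focal(35, 24, 480)   # 700.0

# Depth camera: f = 30 mm, same sensor and image size
px_depth = pixel_focal(30, 32, 640)   # 600.0
py_depth = pixel_focal(30, 24, 480)   # 600.0

# Principal point assumed at the image center
u0, v0 = 640 / 2, 480 / 2             # (320.0, 240.0)
```

These values (700 pixels for the color camera, 600 pixels for the depth camera) can later be compared with the output of the `get_camera_intrinsics.py` script.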
\section mb_Blender_scene Create your scene with Blender
In this section, we will create the corresponding tea box, RGB-D camera and define an initial and final RGB-D camera
pose used to animate the scene.
\subsection mb_Blender_scene_create_teabox Create tea box object
Here we will show how to create with Blender a box with the following dimensions:
- Height: 0.08 meters
- Length: 0.165 meters
- Width: 0.068 meters
Open Blender and do the following to transform the default cube into a box with the expected dimensions:
- In `"Object Mode"` (1) select the `"Cube"` in the right panel (2). Its edges should appear in orange
- Deploy the `"Transform"` panel (3)
\image html img-Blender-cube-select.jpg
- Set `"Dimensions"` to X=0.165, Y=0.068 and Z=0.08 meters (4) and change the object name to `"teabox"` (5).
As the object's dimensions are drastically reduced, it becomes very small at the centre of the scene.
\image html img-Blender-cube-transform.jpg
- With the middle mouse button, zoom in to see the box, select the `"teabox"` object if this is not the case (6),
and rescale the object to 1. To rescale, move the mouse pointer in the scene near the box, press shortcut: Ctrl-A
and select `"Apply > Scale"`. At this point, you should see that the scale values are set to 1 (7).
\image html img-Blender-cube-zoom.jpg
The coordinates of the box's vertices are expressed in the object frame, whose origin is at the box center of gravity
(cog) and whose axes are aligned with the scene reference frame. To conform to the required CAD
model, we will move the origin of the object frame to point 0, whose position is given
in the next image:
\image html img-teabox-cao.jpg
- if not already done, select the `"teabox"` object, enter `"Edit Mode"` (8), select the vertex corresponding to point 0 (9),
press shortcut Shift-S and select `"Cursor to Selected"` (10)
\image html img-Blender-cube-cursor-to-selected.jpg
- As shown in the next image, you can now see the cursor at point 0 (11). Switch to `"Object Mode"` (12), move the
mouse pointer in the scene close to the box and right-click to open the `"Object Context Menu"`.
In this menu, select `"Set Origin > Origin to 3D Cursor"` (13)
\image html img-Blender-cube-set-origin.jpg
- Now you can verify that all 8 vertices have the same 3D coordinates as those in the required CAD model.
For example, to get the 3D coordinates of point 3, switch to `"Edit Mode"` (14), select the corresponding vertex (15)
to see its coordinates (16)
\image html img-Blender-cube-coords-pt-3.jpg
- These coordinates are exactly the same as those given in the `"teabox.cao"` file
\code{.sh}
0.165 0 0 # Point 3
\endcode
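The eight vertex coordinates follow directly from the box dimensions once the origin sits at point 0. A minimal sketch (the vertex ordering below is illustrative only; the authoritative numbering is the one in `teabox.cao`):

```python
L, W, H = 0.165, 0.068, 0.08   # box length, width, height in meters

# With the object frame origin at point 0 and x = length, y = width,
# z = height, each vertex is a combination of 0 or the full extent
# along each axis. The ordering here does not match teabox.cao.
vertices = [(x, y, z) for z in (0.0, H) for y in (0.0, W) for x in (0.0, L)]

(0.165, 0.0, 0.0) in vertices   # True -- the "Point 3" coordinates checked above
```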
The next step consists in adding a texture to be able to test tracking with keypoint features. You can add a realistic
texture using an image texture, but here again, to simplify this tutorial, we will just add a `"Voronoi Texture"`.
To this end:
- Switch to `"Object Mode"` (1)
- Click on `"Viewport Shading with Material Preview"` icon (2)
- Click on `"Material Properties"` icon (3)
- Click on the yellow bullet at the right of `"Base Color"` (4)
\image html img-Blender-cube-texture-add.jpg
- Select `"Voronoi Texture"` (5)
\image html img-Blender-cube-texture-select.jpg
- You can now see the texture applied to the box faces
\image html img-Blender-cube-textured.jpg
This ends the creation of a textured box that matches the `"teabox.cao"` cad model.
\subsection mb_Blender_scene_camera_color Color camera settings
Here you will set the color camera image resolution to `640x480` to match a VGA camera resolution, set the focal length
to f = 35 mm, and the sensor size to 32 mm x 24 mm. Note that the width / height ratio must be the same for the image
resolution and the sensor size. The values we use correspond to a ratio of 4/3.
We consider now that the `"Camera"` that is already in the scene is the left camera. To modify its settings:
- Select the camera and rename it to `"Camera_L"` (1)
- Select camera icon to access camera properties (2)
- Change the focal length to 35 mm (3)
- Deploy the `"Camera"` menu, change `"Sensor Fit"` to `"Vertical"`, and set the sensor `"Width"` to 32 mm and the sensor
`"Height"` to 24 mm (4)
\image html img-Blender-camera-color-settings.jpg
Now to set image resolution to `640x480`
- Select `"Output Properties"` icon (5)
- Set `"Resolution X"` to 640 and `"Resolution Y"` to 480 (6)
\image html img-Blender-camera-image-resolution.jpg
As shown in the previous image, the camera is not visible in the scene. This is simply because its location is too
far from the scene origin. The camera frame origin is exactly at location X = 7.3589, Y = -6.9258, Z = 4.9583
meters (7). We need to move the camera close to the object and choose a position that will ease the introduction of the
depth camera.
To this end:
- In the `"Transform"` panel, modify its `"Location"` to X = 0, Y = -0.35, Z = 0.3 meters, its `"Rotation"` to X = 45,
Y = 0, Z = 0 deg, and its `"Scale"` to X = 0.050, Y = 0.050, Z = 0.050 (8)
- Use the mouse middle button to zoom out in order to see the camera and the box
\image html img-Blender-camera-color-location.jpg
Out of curiosity, you can now render an image of your object (shortcut: F12) and obtain an image like the following:
\image html img-Blender-camera-color-render.jpg
\subsection mb_Blender_scene_depth_camera Add a depth camera
To simulate an RGB-D camera, we need to add a second camera to retrieve the depth map and set the appropriate
parameters to match the desired intrinsic parameters following the same instructions as in previous section.
To this end:
- If not already the case, switch to `"3D Viewport"` (shortcut: Shift-F5)
- Enter menu `"Add > Camera"` (1)
- In the `"Scene Collection"` the new camera appears with name `"Camera"` (2)
- Image resolution is already set to 640x480 (3)
\image html img-Blender-camera-depth-add.jpg
We need now to modify its settings to match the required intrinsics:
- Rename the camera to `"Camera_R"` (4)
- Click on camera icon (5)
- Set its focal length to 30 mm (6) and its sensor size to 32 mm x 24 mm (7)
\image html img-Blender-camera-depth-settings.jpg
Since this depth camera should be 10 cm to the right of the color camera, we need to modify its location:
- Be sure that `"Camera_R"`, corresponding to the depth camera, is selected (8)
- In the `"Transform"` panel, modify its `"Location"` to X = 0.1, Y = -0.35, Z = 0.3 meters, its `"Rotation"`
to X = 45, Y = 0, Z = 0 deg, and its `"Scale"` to X = 0.050, Y = 0.050, Z = 0.050 (9)
\image html img-Blender-camera-depth-location.jpg
As we want to be able to animate the movement of the stereo pair rather than each camera individually, we need to link
them together using the Blender parenting concept:
- In the `"Scene Collection"` select `"Camera_R"` that will be the child object (10)
- Press Ctrl key and select `"Camera_L"` that will be the parent object (11)
- The right camera should turn orange, while the left camera turns yellow (12)
- Hit shortcut Ctrl-P to [parent](https://docs.blender.org/manual/en/latest/scene_layout/object/editing/parent.html)
them
- Set parent to `"Object"` (13)
\image html img-Blender-camera-depth-parenting.jpg
- Once done, you can see that `"Camera_L"` has `"Camera_R"` as a child (14)
\image html img-Blender-camera-depth-parenting-done.jpg
\subsection mb_Blender_scene_generate_depth Enable depth maps
If you enter menu `"Render > Render Image"` you will only see the box rendered by the left color camera.
To be able to render color and depth, you need to enable `"Stereoscopy"`. To this end:
- Click on `"Output Properties"` icon (1)
- And enable `"Stereoscopy"` (2)
\image html img-Blender-camera-stereo-enabled.jpg
Now if you enter menu `"Render > Render Image"` you will see the box rendered by the left and right cameras,
as in the following image
\image html img-Blender-camera-stereo-rendered.jpg
The last thing to do is to modify the project to generate the depth maps. To this end:
- Select `"View Layer Properties"` icon (3)
- And enable `"Data > Z"` (4)
\image html img-Blender-camera-stereo-enable-Z.jpg
Now we need to set the format of the depth map images that will be rendered:
- Switch to the `"Compositor"` screen layout, next to the menu bar (shortcut: Shift-F3)
- Tick `"Use Nodes"` (5) and `"Backdrop"` (6)
- Enter menu `"Add > Output > File Output"` to add file output node (7)
- Add a link between the `"Depth"` output of the `"Render Layers"` node to the `"File Output"` node (8)
- Modify `"Base Path"` to `"/tmp/teabox/depth"`, the folder that will host the rendered depth maps (9)
- Click on `"Node"` tab (10)
- Change `"File Format"` to `OpenEXR` (11)
\image html img-Blender-camera-stereo-depth-exr.jpg
Now if you render an image using menu `"Render > Render Image"`, you will get depth images for the left and right
cameras in the `"/tmp/teabox/depth"` folder. There is no way to enable depth only for the right camera; depth images
corresponding to the left camera can be removed manually.
\code{.sh}
$ ls /tmp/teabox/depth
Image0001_L.exr Image0001_R.exr
\endcode
\subsection mb_Blender_create_trajectory Create a camera trajectory
We are now ready to animate the scene. First we have to define the camera initial position. This can be done easily:
- Switch to `"3D Viewport"` (shortcut: Shift-F5)
- Select `"Camera_L"` (1)
- Using the `"Move"` (2) and `"Rotate"` (3) tools, move the stereo camera to a desired initial location / orientation.
You can also enter the values directly in the `"Transform"` panel. A possible initial `"Location"` is X = 0.15,
Y = -0.35, Z = 0.3 meters and `"Rotation"` X = 45, Y = 0, Z = 35 degrees (4)
- Render the image entering menu `"Render > Render Image"` (5) to check if your object is visible in the image
- If you are happy with your camera positioning, move the time slider to the `"Start"` frame number, which is 1 (6)
- Insert a keyframe (shortcut: `I`) and choose `"Location, Rotation & Scale"` to insert a keyframe at the
`"Start"` frame with number 1
- A small yellow diamond should appear at frame 1 in the timeline (7)
\image html img-Blender-camera-initial-position.jpg
- If you render the image at the initial position you will get the following image
\image html img-Blender-camera-initial-position-rendered.jpg
Now we have to perform the same operation for the camera final position.
- If not already done, select `"Camera_L"` (8)
- First you have to define the `"End"` frame number of your animation. By default the `"End"` frame is set to 250.
Here we set this number to 50 (9)
- Then move the time slider to the `"End"` frame number (10)
- Now move the camera to the final position. Let us choose for final `"Location"` X = 0.3, Y = -0.15, Z = 0.15
meters and `"Rotation"` X = 60, Y = -10, Z = 65 degrees (11)
- Render the image using menu `"Render > Render Image"` (12) to check if the box is visible in the image
- Insert a keyframe (shortcut: `I`) and choose `"Location, Rotation & Scale"` to insert a keyframe at the
`"End"` frame with number 50
- At this point, a small yellow diamond should appear at frame 50 in the timeline (13)
- You can now play the animation by pressing the `"Play Animation"` button (14) and see the stereo pair move from the
start position to the end position
\image html img-Blender-camera-final-position.jpg
- If you render the image at the final position you should see something similar to:
\image html img-Blender-camera-final-position-rendered.jpg
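Between the two keyframes, Blender interpolates the camera pose (with Bezier easing by default). As a rough mental model, a linear interpolation of the location over the 50 frames looks like this (a simplification: the actual Blender F-curves are not linear unless you change the interpolation mode):

```python
def lerp_location(start, end, frame, frame_start=1, frame_end=50):
    """Linearly interpolate a 3D location between two keyframes."""
    t = (frame - frame_start) / (frame_end - frame_start)
    return tuple(s + t * (e - s) for s, e in zip(start, end))

# Keyframe values used in this tutorial
start = (0.15, -0.35, 0.30)   # "Start" frame 1
end = (0.30, -0.15, 0.15)     # "End" frame 50

lerp_location(start, end, 1)   # -> (0.15, -0.35, 0.3)
```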
This completes the configuration of the scene in Blender. We strongly recommend that you save your project.
\note The project used to create this tutorial is available in
`$VISP_WS/visp/tutorial/tracking/model-based/generic-rgbd-blender/data/teabox/blender/teabox.blend`.
\section mb_Blender_data Generate full data
Here we want to run the `get_camera_pose_teabox.py` Python script inside Blender to animate the RGB-D camera and
retrieve in the `"/tmp/teabox"` folder:
- color camera intrinsics in `"Camera_L.xml"`
- depth camera intrinsics in `"Camera_R.xml"`
- the homogeneous transformation between the depth and color cameras in `"depth_M_color.txt"`
- rendered color images in `"color/%04d_L.jpg"` files
- rendered depth images in `"depth/Image%04d_R.exr"` files
- ground truth in `"ground-truth/Camera_L_%04d.txt"` files
The Python script is available in `"$VISP_WS/visp/tutorial/tracking/model-based/generic-rgbd-blender/"` folder.
Since the script displays information using the `print()` function, Blender should be started from a terminal.
- On Ubuntu, simply open a terminal and launch
```
$ blender
```
- On macOS, open a terminal and launch
```
$ /Applications/Blender.app/Contents/MacOS/Blender
```
Then to run the Python script in Blender:
- Switch from `"3D Viewport"` to `"Text Editor"` (shortcut: Shift-F11)
- Open the Python script file `$VISP_WS/visp/tutorial/tracking/model-based/generic-rgbd-blender/get_camera_pose_teabox.py`
- Click on `"Run Script"` (1) (shortcut: Alt-P)
\image html img-Blender-python-get-pose.jpg
- In the terminal from which you launched Blender, you should see something similar to:
\code{.sh}
Create /tmp/teabox/
Create /tmp/teabox/camera_poses/
Saved: /tmp/teabox/depth_M_color.txt
Saved: /tmp/teabox/depth_M_color.txt
Saved: /tmp/teabox/depth/Image0001_L.exr
Saved: /tmp/teabox/depth/Image0001_R.exr
Saved: '/tmp/teabox/color/0001_L.jpg'
Saved: '/tmp/teabox/color/0001_R.jpg'
Time: 00:00.29 (Saving: 00:00.00)
Current Frame 1
Saved: /tmp/teabox/camera_poses/Camera_L_001.txt
Remove file: /tmp/teabox/color/0001_R.jpg
Remove file: /tmp/teabox/depth/Image0001_L.exr
Saved: /tmp/teabox/depth/Image0002_L.exr
Saved: /tmp/teabox/depth/Image0002_R.exr
Saved: '/tmp/teabox/color/0002_L.jpg'
Saved: '/tmp/teabox/color/0002_R.jpg'
Time: 00:00.22 (Saving: 00:00.00)
...
\endcode
As explained previously, data are saved in the `/tmp/teabox` directory.
By default, for each camera (`"Camera_L"` and `"Camera_R"`) we render the color image and the depth image.
The script removes the useless generated files: depth images corresponding to `"Camera_L"` and color
images from `"Camera_R"`.
\subsection mb_Blender_data_color_intrinsics How to get only camera intrinsics
If you are only interested in the camera intrinsics of a given camera set in Blender, we provide the
`get_camera_intrinsics.py` Python script in the `$VISP_WS/visp/tutorial/tracking/model-based/generic-rgbd-blender/`
folder.
As in the previous section, since this script displays information using the `print()` function, Blender should be
started from a terminal.
- On Ubuntu, simply open a terminal and launch
```
$ blender
```
- On macOS, open a terminal and launch
```
$ /Applications/Blender.app/Contents/MacOS/Blender
```
Then to run this Python script in Blender:
- Switch from `"3D Viewport"` to `"Text Editor"` (shortcut: Shift-F11)
- Open the Python script file `$VISP_WS/visp/tutorial/tracking/model-based/generic-rgbd-blender/get_camera_intrinsics.py`
- Verify that the camera name is set to `"Camera_L"`
\code{.py}
camera_name = "Camera_L"
\endcode
- Click on `"Run Script"` (shortcut: Alt-P)
In the terminal from which you launched Blender, you should get something similar to:
```
Intrinsics for Camera_L are K =
<Matrix 3x3 (700.0000, 0.0000, 320.0000)
( 0.0000, 700.0000, 240.0000)
( 0.0000, 0.0000, 1.0000)>
```
\note The principal point is always in the middle of the image here.
You can retrieve the depth camera intrinsics by running `get_camera_intrinsics.py` again, after modifying the
camera name in the script:
- Modify the camera name to `"Camera_R"`
\code{.py}
camera_name = "Camera_R"
\endcode
- Click on `Run Script` (shortcut: Alt-P)
In the terminal from which you launched Blender, you should get something similar to:
```
Intrinsics for Camera_R are K =
<Matrix 3x3 (600.0000, 0.0000, 320.0000)
( 0.0000, 600.0000, 240.0000)
( 0.0000, 0.0000, 1.0000)>
```
\section mb_Blender_mbt Run model-based tracker on simulated data
\subsection mb_Blender_mbt_src Source code
The following C++ sample, also available in tutorial-mb-generic-tracker-rgbd-blender.cpp, reads the color and depth
images and recreates the point cloud using the depth camera intrinsic parameters. The ground truth data are read and
printed along with the camera pose estimated by the model-based tracker. Since depth data are stored in the `OpenEXR`
file format, OpenCV is used to read them.
\include tutorial-mb-generic-tracker-rgbd-blender.cpp
\note Here the depth values are manually clipped in order to simulate the depth range of a depth sensor.
This could probably also be done directly in Blender.
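The clipping mentioned in this note can be sketched as follows; depth values outside the simulated sensor range are marked invalid (the 0.1 m and 3.0 m bounds are illustrative, not necessarily the values used by the C++ tutorial):

```python
def clip_depth(depth_m, z_near=0.1, z_far=3.0, invalid=0.0):
    """Simulate a depth sensor range: values outside [z_near, z_far]
    are replaced by an 'invalid' marker, as a real sensor would do."""
    return [d if z_near <= d <= z_far else invalid for d in depth_m]

clip_depth([0.05, 0.5, 2.0, 10.0])  # -> [0.0, 0.5, 2.0, 0.0]
```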
\subsection mb_Blender_mbt_run Usage on simulated data
Once built, to get tutorial-mb-generic-tracker-rgbd-blender.cpp usage, just run:
\code{.sh}
$ cd $VISP_WS/visp-build/tutorial/tracking/model-based/generic-rgbd-blender
$ ./tutorial-mb-generic-tracker-rgbd-blender -h
Synopsis
./tutorial-mb-generic-tracker-rgbd-blender [--data-path <path>] [--model-path <path>]
[--first-frame <index>] [--disable-depth] [--disable-klt] [--step-by-step]
[--display-ground-truth] [--help, -h]
Description
--data-path <path> Path to the data generated by Blender get_camera_pose_teabox.py
Python script. Default: data/teabox
--model-path <path> Path to the cad model and tracker settings.
Default: model/teabox
--first-frame <index> First frame number to process.
Default: 1
--disable-depth Flag to turn off tracker depth features.
--disable-klt Flag to turn off tracker keypoints features.
--step-by-step Flag to enable step by step mode.
--display-ground-truth Flag to enable displaying ground truth.
When this flag is enabled, there is no tracking. This flag is useful
to validate the ground truth over the rendered images.
--help, -h Print this helper message.
\endcode
The default parameters allow running the binary with the data provided in ViSP. Just run:
\code{.sh}
$ ./tutorial-mb-generic-tracker-rgbd-blender
\endcode
To run the binary on the data generated by Blender in `"/tmp/teabox"` folder, just run:
\code{.sh}
$ ./tutorial-mb-generic-tracker-rgbd-blender --data-path /tmp/teabox
\endcode
You should be able to see tracking results similar to those shown in the next video.
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/AuCHE0cTa6Q" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
\note If you just want to project the CAD model in the images using the ground truth, without tracking, you may run:
\code{.sh}
$ ./tutorial-mb-generic-tracker-rgbd-blender --data-path /tmp/teabox --display-ground-truth
\endcode
\section mb_Blender_next Next tutorial
- You are now ready to see the next \ref tutorial-tracking-tt.
- Since ViSP 3.7.0, we introduced a new tracker named RBT that allows tracking more complex objects.
To learn more about it you may follow \ref tutorial-tracking-rbt.
*/