/** \page page_how_to_use How to use
*
* This page gives an introduction to the usage of OpenGV, including a description of the interface and explicit examples. More information can be found in [17]. However, to ensure a common understanding of the functionality and documentation of the library, we first need to clearly define the meaning of a couple of terms in the present context.
*
* \section sec_vocabulary Vocabulary
*
* <ul>
* <li> <b>Bearing vector</b>: A bearing vector is defined to be a unit-norm 3-vector pointing from a camera reference frame at a 3D spatial point. It has 2 degrees of freedom, namely the azimuth and elevation in the camera reference frame. Because it has only two degrees of freedom, we frequently refer to it as 2D information. It is normally expressed in a camera reference frame.
* <li> <b>Landmark</b>: A landmark describes a 3D spatial point (usually expressed in a fixed frame called world reference frame).
* <li> <b>Camera</b>: OpenGV assumes the calibrated case, in which landmark measurements are always given in the form of bearing vectors in a camera frame. A camera therefore denotes a camera reference frame with a set of bearing vectors, all pointing from the origin to landmarks. The following figure shows a camera c with bearing vectors (in red). The bearing vectors all lie on the unit sphere centered at the camera.
*
* \image html central.png
* \image latex central.pdf "" width=0.45\columnwidth
*
* <li> <b>Viewpoint</b>: You will notice that the documentation of the code very frequently talks about viewpoints instead of cameras. One of the advantages of OpenGV is that it can transparently handle both the central and the non-central case. The viewpoint is a generalization of a camera, and can contain an arbitrary number of cameras, each one having its own landmark measurements (i.e. bearing vectors). A practical example of a viewpoint would be the set of images and related measurements captured by a fully-calibrated, rigid multi-camera rig with synchronized cameras, which therefore still represents a single (multi-)snapshot (i.e. viewpoint). Each camera has its own transformation to the viewpoint frame. In the central case the viewpoint simply contains a single camera with an identity transformation. The most general case, the generalized camera, can also be described by the viewpoint. Each bearing vector would then have its own camera and related transformation. A generalized camera would hence be represented by an exhaustive multi-camera system. The following image shows a viewpoint vp (in blue) with three cameras c, c', and c'', each one containing its own bearing vector measurements.
*
* \image html noncentral.png
* \image latex noncentral.pdf "" width=0.8\columnwidth
*
* <li> <b>Pose</b>: By a pose, we understand here the position and orientation of a viewpoint, either with respect to a fixed spatial reference frame called the "world" reference frame, or with respect to another viewpoint.
* <li> <b>Absolute Pose</b>: By absolute pose, we understand the pose of a viewpoint in the world reference frame.
* <li> <b>Relative Pose</b>: By relative pose, we understand the pose of a viewpoint with respect to another viewpoint.
* <li> <b>Correspondence</b>: By a correspondence, we understand a pair of bearing-vectors pointing at the same landmark from different viewpoints (2D-2D correspondence), a bearing vector and a world-point it is pointing at (2D-3D correspondence), or a pair of expressions of the same landmark in different frames (3D-3D correspondence).
* </ul>
*
* \section sec_organization Organization of the library
*
* The library structure is best analyzed by looking at the namespace or directory hierarchy (as a matter of fact, there is no difference between the two):
* <ul>
* <li> opengv: contains generic things such as the types used throughout the library.
* <li> opengv/math: contains a collection of math functions that are used in different algorithms, mainly for root finding and rotation-related computations.
* <li> opengv/absolute_pose: contains the absolute-pose methods. methods.hpp is the main-header that contains the method-declarations. You can also find a bunch of adapters here for interfacing with the algorithms (explained in the next section). The sub-folder modules contains declarations of internal methods.
* <li> opengv/relative_pose: contains the relative-pose methods. methods.hpp is the main-header that contains the method-declarations. You can also find a bunch of adapters here for interfacing with the algorithms (explained in the next section). The sub-folder modules contains declarations of internal methods.
* <li> opengv/point_cloud: contains the point-cloud alignment methods. Again, methods.hpp contains the declarations, and the folder contains adapters for interfacing (explained below).
* <li> opengv/triangulation: contains the triangulation methods.
* <li> opengv/sac: contains base-classes for sample-consensus methods and problems. So far, only the Ransac algorithm is implemented.
* <li> opengv/sac_problems: contains sample-consensus problems derived from the base-class. Implements sample-consensus problems for point-cloud alignment and central as well as non-central absolute and relative-pose estimation.
* </ul>
*
* \section sec_interface Interface
*
* You will quickly notice that all methods in OpenGV use a variable called adapter as a function-call parameter. OpenGV is designed based on the adapter pattern. "Adapters" in OpenGV are used as "visitors" to all geometric vision methods. They contain the landmarks, bearing-vectors, poses, correspondences etc. used as input to the different methods (or references to them), and allow those elements to be accessed through a unified interface. Adapters are derived from a base-class that defines the unified interface, and they have to implement the related functions for accessing bearing-vectors, world-points, camera-transformations, viewpoint-poses, etc. There are three adapter base-classes:
*
* <ul>
* <li> <b>AbsoluteAdapterBase</b>: Base-class for adapters holding 2D-3D correspondences for absolute-pose methods.
* <li> <b>RelativeAdapterBase</b>: Base-class for adapters holding 2D-2D correspondences for relative-pose methods.
* <li> <b>PointCloudAdapterBase</b>: Base-class for adapters holding 3D-3D correspondences for point-cloud alignment methods.
* </ul>
*
* The derived adapters have the task of transforming the data from the user format to OpenGV types. This gives the library great flexibility. Users only have to implement a couple of adapters for the specific data format they are using, and can then access the full functionality of the library. OpenGV currently contains adapters that simply hold references to OpenGV types (no transformation needed), plus adapters for mexArrays used within the Matlab-interface. Further adapters are planned, for instance an adapter for OpenCV keypoint and match types including a camera model. The user would then be able to choose whether normalization of keypoints is done "on demand" or "once and for all" at the beginning, the latter being more efficient in sample-consensus problems. A sketch of a possible custom adapter is given below.
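*
* To illustrate the idea, the following is a minimal sketch of a hypothetical user-defined adapter for the central absolute-pose case. It assumes the virtual accessors documented in AbsoluteAdapterBase (please consult that header for the authoritative set of functions and their exact signatures); the wrapped container types are merely illustrative.
*
\code
// A hypothetical adapter wrapping user-side containers (sketch only).
// The overridden accessors follow the documentation of AbsoluteAdapterBase;
// check the actual header for the exact signatures.
class MyAbsoluteAdapter : public opengv::absolute_pose::AbsoluteAdapterBase
{
public:
  MyAbsoluteAdapter(
      const std::vector<Eigen::Vector3d> & bearings,
      const std::vector<Eigen::Vector3d> & points ) :
      _bearings(bearings), _points(points) {}

  // unit direction of measurement 'index', expressed in the camera frame
  virtual opengv::bearingVector_t getBearingVector( size_t index ) const
  { return _bearings[index]; }
  // the corresponding landmark, expressed in the world frame
  virtual opengv::point_t getPoint( size_t index ) const
  { return _points[index]; }
  // central case: a single camera with identity transformation
  virtual opengv::translation_t getCamOffset( size_t index ) const
  { return Eigen::Vector3d::Zero(); }
  virtual opengv::rotation_t getCamRotation( size_t index ) const
  { return Eigen::Matrix3d::Identity(); }
  // uniform weighting of the correspondences
  virtual double getWeight( size_t index ) const
  { return 1.0; }
  virtual size_t getNumberCorrespondences() const
  { return _bearings.size(); }

private:
  const std::vector<Eigen::Vector3d> & _bearings;
  const std::vector<Eigen::Vector3d> & _points;
};
\endcode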
*
* Note that adapters containing the tag "Central" in their name are adapters for a single camera (i.e. viewpoints with only one camera having an identity transformation). Adapters having the tag "Noncentral" in their name are meant for viewpoints with multiple cameras (e.g., multi-camera systems, generalized cameras).
*
* Please check out the doxygen documentation of the above base-classes; it contains important information on the functions that need to be overloaded for a proper implementation of an adapter.
*
* \section sec_conventions Conventions, problem types, and examples
*
* As already mentioned, the entire library assumes calibrated cameras/viewpoints, and it operates with 3D unit bearing vectors expressed in the camera frame. Calibrated means that the configuration of the multi-camera system (i.e. the inter-camera transformations) is known. The following introduces the different problems that can be solved with the library, and outlines the conventions for transformations (translations and rotations) in their context. <b>Note that all problems have solutions for both the minimal and non-minimal cases, and may also be solved as sample-consensus or non-linear optimization problems</b>. The code samples are mostly taken from the test files, which you can compile along with the library by setting BUILD_TESTS in CMakeLists.txt to ON.
*
* <ul>
*
* <li> <b>Central absolute pose:</b> The central absolute pose problem consists of finding the pose of a camera (i.e. a viewpoint with a single camera) given a number of 2D-3D correspondences between bearing vectors in the camera frame and points in the world frame. The sought transformation is given by the position \f$ \mathbf{t}_{c} \f$ of the camera seen from the world frame and the rotation \f$ \mathbf{R}_{c} \f$ from the camera to the world frame. This is what the algorithms return (or part of it), and what the adapters can hold as known or prior information.
*
* \image html absolute_central.png
* \image latex absolute_central.pdf "" width=0.8\columnwidth
*
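* Concretely, under this convention a landmark \f$ \mathbf{p} \f$ expressed in the world frame relates to its expression \f$ \mathbf{p}^{c} \f$ in the camera frame via \f$ \mathbf{p} = \mathbf{R}_{c}\mathbf{p}^{c} + \mathbf{t}_{c} \f$, so a measured bearing vector pointing at \f$ \mathbf{p} \f$ is proportional to \f$ \mathbf{R}_{c}^{T}(\mathbf{p}-\mathbf{t}_{c}) \f$.
*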
* The minimal variants are p2p (a solution for position if rotation is known), p3p_kneip [1], p3p_gao [2], and UPnP [19]. The non-minimal variants are epnp [4], and UPnP [19]. The non-linear optimization variant is called optimize_nonlinear. Here's how to use them:
*
\code
// create the central adapter
absolute_pose::CentralAbsoluteAdapter adapter(
    bearingVectors, points );
// Kneip's P2P (uses rotation from adapter)
adapter.setR( knownRotation );
translation_t p2p_translation =
    absolute_pose::p2p( adapter, indices1 );
// Kneip's P3P
transformations_t p3p_kneip_transformations =
    absolute_pose::p3p_kneip( adapter, indices2 );
// Gao's P3P
transformations_t p3p_gao_transformations =
    absolute_pose::p3p_gao( adapter, indices2 );
// Lepetit's EPnP (using all correspondences)
transformation_t epnp_transformation =
    absolute_pose::epnp( adapter );
// UPnP (using all correspondences)
transformations_t upnp_transformations =
    absolute_pose::upnp( adapter );
// UPnP (using three correspondences)
transformations_t upnp_transformations_3 =
    absolute_pose::upnp( adapter, indices2 );
// non-linear optimization (using all correspondences)
adapter.sett( initial_translation );
adapter.setR( initial_rotation );
transformation_t nonlinear_transformation =
    absolute_pose::optimize_nonlinear( adapter );
\endcode
*
* p3p_kneip, p3p_gao, and epnp can also be used within a sample consensus context. The following shows how to do it:
*
\code
// create the central adapter
absolute_pose::CentralAbsoluteAdapter adapter(
    bearingVectors, points );
// create a RANSAC object
sac::Ransac<sac_problems::absolute_pose::AbsolutePoseSacProblem> ransac;
// create an AbsolutePoseSacProblem
// (algorithm is selectable: KNEIP, GAO, or EPNP)
std::shared_ptr<sac_problems::absolute_pose::AbsolutePoseSacProblem>
    absposeproblem_ptr(
    new sac_problems::absolute_pose::AbsolutePoseSacProblem(
    adapter, sac_problems::absolute_pose::AbsolutePoseSacProblem::KNEIP ) );
// run ransac
ransac.sac_model_ = absposeproblem_ptr;
ransac.threshold_ = threshold;
ransac.max_iterations_ = maxIterations;
ransac.computeModel();
// get the result
transformation_t best_transformation =
    ransac.model_coefficients_;
\endcode
*
* These examples are taken from test_absolute_pose.cpp and test_absolute_pose_sac.cpp.
*
* <li> <b>Non-central absolute pose:</b> The non-central absolute pose problem consists of finding the pose of a viewpoint given a number of 2D-3D correspondences between bearing vectors in multiple camera frames and points in the world frame. The sought transformation is given by the position \f$ \mathbf{t}_{vp} \f$ of the viewpoint seen from the world frame and the rotation \f$ \mathbf{R}_{vp} \f$ from the viewpoint to the world frame. This is what the algorithms return, and what the adapters can hold as known or prior information.
*
* \image html absolute_noncentral.png
* \image latex absolute_noncentral.pdf "" width=0.8\columnwidth
*
* The minimal variant is gp3p, and the non-minimal variant is gpnp [3]. UPnP can be used for both the minimal and the non-minimal case [19]. The non-linear optimization variant is still optimize_nonlinear (it handles both cases). Here's how to use them:
*
\code
// create the non-central adapter
absolute_pose::NoncentralAbsoluteAdapter adapter(
    bearingVectors,
    camCorrespondences,
    points,
    camOffsets,
    camRotations );
// Kneip's GP3P
transformations_t gp3p_transformations =
    absolute_pose::gp3p( adapter, indices1 );
// Kneip's GPnP (using all correspondences)
transformation_t gpnp_transformation =
    absolute_pose::gpnp( adapter );
// UPnP (using all correspondences)
transformations_t upnp_transformations =
    absolute_pose::upnp( adapter );
// UPnP (using only 3 correspondences)
transformations_t upnp_transformations_3 =
    absolute_pose::upnp( adapter, indices1 );
// non-linear optimization
adapter.sett( initial_translation );
adapter.setR( initial_rotation );
transformation_t nonlinear_transformation =
    absolute_pose::optimize_nonlinear( adapter );
\endcode
*
* gp3p can also be used within a sample-consensus context. We keep using AbsolutePoseSacProblem, which handles both the central and the non-central case; we simply have to set the algorithm to GP3P:
*
\code
// create the non-central adapter
absolute_pose::NoncentralAbsoluteAdapter adapter(
    bearingVectors,
    camCorrespondences,
    points,
    camOffsets,
    camRotations );
// create a RANSAC object
sac::Ransac<sac_problems::absolute_pose::AbsolutePoseSacProblem> ransac;
// create an absolute-pose sample-consensus problem (using GP3P as algorithm)
std::shared_ptr<sac_problems::absolute_pose::AbsolutePoseSacProblem>
    absposeproblem_ptr(
    new sac_problems::absolute_pose::AbsolutePoseSacProblem(
    adapter, sac_problems::absolute_pose::AbsolutePoseSacProblem::GP3P ) );
// run ransac
ransac.sac_model_ = absposeproblem_ptr;
ransac.threshold_ = threshold;
ransac.max_iterations_ = maxIterations;
ransac.computeModel();
// get the result
transformation_t best_transformation =
    ransac.model_coefficients_;
\endcode
*
* These examples are taken from test_noncentral_absolute_pose.cpp and test_noncentral_absolute_pose_sac.cpp.
*
* <li> <b>Central relative pose:</b> The central relative pose problem consists of finding the pose of a camera (i.e. a viewpoint with a single camera) with respect to a different camera, given a number of 2D-2D correspondences between bearing vectors in the camera frames. The sought transformation is given by the position \f$ \mathbf{t}_{c'}^{c} \f$ of the second camera seen from the first one and the rotation \f$ \mathbf{R}_{c'}^{c} \f$ from the second camera back to the first camera frame. This is what the algorithms return (or part of it), and what the adapters can hold as known or prior information.
*
* \image html relative_central.png
* \image latex relative_central.pdf "" width=0.6\columnwidth
*
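* Under this convention, two bearing vectors \f$ \mathbf{f} \f$ (expressed in c) and \f$ \mathbf{f}' \f$ (expressed in c') pointing at the same landmark satisfy the epipolar constraint \f$ \mathbf{f}^{T} [\mathbf{t}_{c'}^{c}]_{\times} \mathbf{R}_{c'}^{c} \mathbf{f}' = 0 \f$, which is why several of the methods below return essential matrices of the form \f$ \mathbf{E} = [\mathbf{t}_{c'}^{c}]_{\times} \mathbf{R}_{c'}^{c} \f$ (individual methods may use a transposed or scaled variant, so please check their documentation).
*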
* There are many central relative-pose algorithms in the library. The minimal variants are twopt (in case the rotation is known), twopt_rotationOnly (in case there is only rotational change, and using only two points), fivept_stewenius [5], fivept_nister [6], and fivept_kneip [7]. The library also contains non-minimal variants, namely rotationOnly (in case of pure-rotational change), sevenpt [8], eightpt [9,10], and the new eigensolver [11] method. All of them except twopt, twopt_rotationOnly, and fivept_kneip can be used with an arbitrary number of correspondences (of course at least the minimal number). The non-linear optimization variant is again called optimize_nonlinear. Here's how to use most of them (we assume a regular situation here, and thus omit the rotationOnly algorithms):
*
\code
// create the central relative adapter
relative_pose::CentralRelativeAdapter adapter(
    bearingVectors1, bearingVectors2 );
// Relative translation with only two point-correspondences
// (no or known rotation)
adapter.setR( knownRotation );
translation_t twopt_translation =
    relative_pose::twopt( adapter, true, indices1 );
// Stewenius' 5-point algorithm
complexEssentials_t fivept_stewenius_essentials =
    relative_pose::fivept_stewenius( adapter, indices2 );
// Nister's 5-point algorithm
essentials_t fivept_nister_essentials =
    relative_pose::fivept_nister( adapter, indices2 );
// Kneip's 5-point algorithm
rotations_t fivept_kneip_rotations =
    relative_pose::fivept_kneip( adapter, indices2 );
// the 7-point algorithm
essentials_t sevenpt_essentials =
    relative_pose::sevenpt( adapter, indices3 );
// the 8-point algorithm
essential_t eightpt_essential =
    relative_pose::eightpt( adapter, indices4 );
// Kneip's eigensolver
adapter.setR( initial_rotation );
rotation_t eigensolver_rotation =
    relative_pose::eigensolver( adapter, indices5 );
// non-linear optimization (using all available correspondences)
adapter.sett( initial_translation );
adapter.setR( initial_rotation );
transformation_t nonlinear_transformation =
    relative_pose::optimize_nonlinear( adapter );
\endcode
*
* fivept_nister, fivept_stewenius, sevenpt, and eightpt can also be used within a random sample-consensus scheme. It is done as follows:
*
\code
// create the central relative adapter
relative_pose::CentralRelativeAdapter adapter(
    bearingVectors1, bearingVectors2 );
// create a RANSAC object
sac::Ransac<sac_problems::relative_pose::CentralRelativePoseSacProblem> ransac;
// create a CentralRelativePoseSacProblem
// (set algorithm to STEWENIUS, NISTER, SEVENPT, or EIGHTPT)
std::shared_ptr<sac_problems::relative_pose::CentralRelativePoseSacProblem>
    relposeproblem_ptr(
    new sac_problems::relative_pose::CentralRelativePoseSacProblem(
    adapter,
    sac_problems::relative_pose::CentralRelativePoseSacProblem::NISTER ) );
// run ransac
ransac.sac_model_ = relposeproblem_ptr;
ransac.threshold_ = threshold;
ransac.max_iterations_ = maxIterations;
ransac.computeModel();
// get the result
transformation_t best_transformation =
    ransac.model_coefficients_;
\endcode
*
* These examples are taken from test_relative_pose.cpp and test_relative_pose_sac.cpp. There are also sample consensus problems for the case of pure-rotation, known rotation, or the eigensolver method. Feel free to explore opengv/sac_problems/relative_pose.
*
* <li> <b>Non-central relative pose:</b> The non-central relative pose problem consists of finding the pose of a viewpoint with respect to a different viewpoint, given a number of 2D-2D correspondences between bearing vectors in multiple camera frames. The sought transformation is given by the position \f$ \mathbf{t}_{vp'}^{vp} \f$ of the second viewpoint seen from the first one and the rotation \f$ \mathbf{R}_{vp'}^{vp} \f$ from the second viewpoint back to the first viewpoint frame. This is what the algorithms return (or part of it), and what the adapters can hold as known or prior information.
*
* \image html relative_noncentral.png
* \image latex relative_noncentral.pdf "" width=0.8\columnwidth
*
* There are three non-central relative-pose methods in the library: the 17-point algorithm by Li [12], the 6-point method by Stewenius [16], and the new generalized eigensolver [18]. The 17-point algorithm as well as the generalized eigensolver can be used with an arbitrary number of points. The 6-point algorithm can only be used with exactly six points. "optimize_nonlinear" is again able to also handle the non-central case. Here's how to use these methods:
*
\code
// create the non-central relative adapter
relative_pose::NoncentralRelativeAdapter adapter(
    bearingVectors1,
    bearingVectors2,
    camCorrespondences1,
    camCorrespondences2,
    camOffsets,
    camRotations );
// 6-point algorithm
rotations_t sixpt_rotations =
    relative_pose::sixpt( adapter, indices );
// generalized eigensolver (over all points)
geOutput_t output;
relative_pose::ge( adapter, output );
translation_t ge_translation = output.translation.block<3,1>(0,0);
rotation_t ge_rotation = output.rotation;
// 17-point algorithm
transformation_t seventeenpt_transformation =
    relative_pose::seventeenpt( adapter, indices );
// non-linear optimization (using all available correspondences)
adapter.sett( initial_translation );
adapter.setR( initial_rotation );
transformation_t nonlinear_transformation =
    relative_pose::optimize_nonlinear( adapter );
\endcode
*
* All algorithms are also available in a sample-consensus scheme:
*
\code
// create the non-central relative adapter
relative_pose::NoncentralRelativeAdapter adapter(
    bearingVectors1,
    bearingVectors2,
    camCorrespondences1,
    camCorrespondences2,
    camOffsets,
    camRotations );
// create a RANSAC object
sac::Ransac<sac_problems::relative_pose::NoncentralRelativePoseSacProblem>
    ransac;
// create a NoncentralRelativePoseSacProblem
std::shared_ptr<
    sac_problems::relative_pose::NoncentralRelativePoseSacProblem>
    relposeproblem_ptr(
    new sac_problems::relative_pose::NoncentralRelativePoseSacProblem(
    adapter,
    sac_problems::relative_pose::NoncentralRelativePoseSacProblem::SEVENTEENPT )
    );
// run ransac
ransac.sac_model_ = relposeproblem_ptr;
ransac.threshold_ = threshold;
ransac.max_iterations_ = maxIterations;
ransac.computeModel();
// get the result
transformation_t best_transformation =
    ransac.model_coefficients_;
\endcode
*
* These examples are taken from test_noncentral_relative_pose.cpp and test_noncentral_relative_pose_sac.cpp. Simply replace SEVENTEENPT with GE or SIXPT in order to use the alternative algorithms.
*
* <li> <b>Triangulation of points:</b> OpenGV contains two methods for triangulating points. They are currently only designed for the central case, and compute the position of a point expressed in the first camera given a 2D-2D correspondence between bearing vectors from two cameras. The methods reuse the relative adapter, which needs to hold the transformation between the cameras, given by the position \f$ \mathbf{t}_{c'}^{c} \f$ of the second camera seen from the first one and the rotation \f$ \mathbf{R}_{c'}^{c} \f$ from the second camera back to the first camera frame.
*
* \image html triangulation_central.png
* \image latex triangulation_central.pdf "" width=0.6\columnwidth
*
* There are two methods, triangulate (linear) and triangulate2 (a fast non-linear approximation). They are used as follows:
*
\code
// create a central relative adapter
// (immediately pass translation and rotation)
relative_pose::CentralRelativeAdapter adapter(
    bearingVectors1,
    bearingVectors2,
    translation,
    rotation );
// run method 1
point_t point1 =
    triangulation::triangulate( adapter, index );
// run method 2
point_t point2 =
    triangulation::triangulate2( adapter, index );
\endcode
*
* The example is taken from test_triangulation.cpp.
*
* <li> <b>Alignment of two point-clouds:</b> OpenGV also contains a method for aligning point-clouds. It is currently only designed for the central case, and computes the transformation between two frames given 3D-3D correspondences between points expressed in the two frames (here denoted by c and c', although they no longer necessarily need to be cameras). The method returns the transformation between the frames, given by the position \f$ \mathbf{t}_{c'}^{c} \f$ of the second frame seen from the first one and the rotation \f$ \mathbf{R}_{c'}^{c} \f$ from the second frame back to the first frame.
*
* \image html point_cloud.png
* \image latex point_cloud.pdf "" width=0.6\columnwidth
*
* The method is called threept_arun, and it can be used for an arbitrary number of points (minimum three). There is also a non-linear optimization method again called optimize_nonlinear. The methods are used as follows:
*
\code
// create the 3D-3D adapter
point_cloud::PointCloudAdapter adapter(
    points1, points2 );
// run threept_arun
transformation_t threept_transformation =
    point_cloud::threept_arun( adapter, indices );
// run the non-linear optimization over all correspondences
transformation_t nonlinear_transformation =
    point_cloud::optimize_nonlinear( adapter );
\endcode
*
* There is also a sample-consensus problem for the point-cloud alignment. It is set up as follows:
*
\code
// create a 3D-3D adapter
point_cloud::PointCloudAdapter adapter(
    points1, points2 );
// create a RANSAC object
sac::Ransac<sac_problems::point_cloud::PointCloudSacProblem> ransac;
// create the sample-consensus problem
std::shared_ptr<sac_problems::point_cloud::PointCloudSacProblem>
    relposeproblem_ptr(
    new sac_problems::point_cloud::PointCloudSacProblem( adapter ) );
// run ransac
ransac.sac_model_ = relposeproblem_ptr;
ransac.threshold_ = threshold;
ransac.max_iterations_ = maxIterations;
ransac.computeModel( 0 );
// get the result
transformation_t best_transformation =
    ransac.model_coefficients_;
\endcode
*
* These examples are taken from test_point_cloud.cpp and test_point_cloud_sac.cpp.
*
* </ul>
*
* Note that there are more unit tests in the test-directory, showing the usage of all the methods contained in the library.
*
* \section sec_threshold Some words about the sample-consensus-classes
*
* All the above-mentioned Ransac-methods make use of a number of super-classes such that only the basic functions need to be implemented in the derived SacProblem (SampleConsensusProblem). The basic functions are responsible for getting valid samples for model instantiation, model instantiation itself, as well as model verification. <b>SampleConsensusProblem</b> is the base-class for any problem we want to solve, and contains a virtual interface for the basic methods that need to be implemented. The base-class <b>SampleConsensus</b> is then for the sample-consensus method itself, calling the basic functions. So far only the <b>Ransac</b> is implemented [15]. A conceptual sketch of how these pieces interact is given below.
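*
* Conceptually, one iteration of the sample-consensus loop uses the basic functions of the problem roughly as follows. This is a simplified sketch, not OpenGV's actual implementation; the function names follow the documentation of SampleConsensusProblem, but please treat the snippet as pseudocode and consult the headers for the exact signatures.
*
\code
// one conceptual RANSAC iteration over a SampleConsensusProblem 'problem'
// (simplified sketch; the real Ransac class adds bookkeeping and an
// adaptive stopping criterion; model_t is the problem's model type)
std::vector<int> sample;
problem.getSamples( iterations, sample );       // draw a valid minimal sample
model_t candidate;
if( problem.computeModelCoefficients( sample, candidate ) ) // instantiate
{
  std::vector<int> inliers;
  problem.selectWithinDistance( candidate, threshold, inliers ); // verify
  if( inliers.size() > best_inliers.size() )
  {
    best_inliers = inliers;
    best_model = candidate;
  }
}
\endcode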
*
* \subsection sec_ransac Ransac threshold
*
* Since the entire library is operating in 3D, we also need a way to compute and threshold reprojection errors in 3D. What we are looking at is the angle \f$ q \f$ between the original bearing-vector \f$ \mathbf{f}_{meas} \f$ and the reprojected one \f$ \mathbf{f}_{repr} \f$. By adopting a certain threshold angle \f$ q_{threshold} \f$, we hence constrain the \f$ \mathbf{f}_{repr} \f$ to lie within a cone of axis \f$ \mathbf{f}_{meas} \f$ and of opening angle \f$ q_{threshold} \f$.
*
* \image html reprojectionError.png
* \image latex reprojectionError.pdf "" width=0.45\columnwidth
*
* The threshold-angle \f$ q_{threshold} \f$ can be easily obtained from classical reprojection error-thresholds expressed in pixels \f$ \psi \f$ by assuming a certain focal length \f$ l \f$. We then have \f$ q_{threshold} = \arctan{\frac{\psi}{l}} \f$.
*
* The threshold we are using in the end is still not quite this one, but a value derived from it in analogy to the computation of reprojection errors. The most efficient way to compute a "reprojection error" is to take the scalar product of \f$ \mathbf{f}_{meas} \f$ and \f$ \mathbf{f}_{repr} \f$, which equals \f$ \cos q \f$. Since this value is between -1 and 1, and we actually want an error that minimizes to 0, we take \f$ \epsilon = 1 - \mathbf{f}_{meas}^{T}\mathbf{f}_{repr} = 1 - \cos q \f$. The threshold error is therefore given by
*
* \f$ \epsilon_{threshold} = 1 - \cos{q_{threshold}} = 1 - \cos({\arctan{\frac{\psi}{l}}}) \f$
*
* In the Ransac-examples in the test-folder, you will often see something like this:
*
\code
ransac.threshold_ = 1.0 - cos(atan(sqrt(2.0)*0.5/800.0));
\endcode
*
* This corresponds to the above computation of our "reprojection-error" threshold, with a focal length of 800.0 and a reprojection error in pixels of 0.5*sqrt(2.0).
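*
* If this conversion is needed in several places, it can be wrapped in a small helper. The following function is hypothetical (it is not part of the library), and simply restates the above formula:
*
\code
#include <cmath>

// converts a classical reprojection-error threshold in pixels into the
// angular error-measure used by the sample-consensus problems
double angularRansacThreshold( double pixelThreshold, double focalLength )
{
  return 1.0 - std::cos( std::atan( pixelThreshold / focalLength ) );
}

// equivalent to the example above:
// ransac.threshold_ = angularRansacThreshold( 0.5*sqrt(2.0), 800.0 );
\endcode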
*
* \section sec_multi The "Multi"-stuff
*
* As you go deeper into the code you might notice that there are a number of elements (mostly in the relative-pose context) that contain the tag "multi" in their name. The adapter base-class used here is called <b>RelativeMultiAdapterBase</b>. The idea of this adapter is to hold multiple sets of bearing-vector correspondences originating from pairs of cameras. A pair of cameras is, as the name says, a set of two cameras in different viewpoints. The correspondences are accessed via a multi-index (a pair-index referring to a specific pair of cameras, and a correspondence-index referring to the correspondence within the camera-pair).
*
* Subsets of camera-pairs can be identified in a number of problems, such as
* <ul>
* <li> Non-central relative pose (2 viewpoints): Non-central relative-pose problems involving two viewpoints typically originate from motion estimation with multi-camera rigs. In the special situation where the cameras point in different directions, and where the motion between the viewpoints is not too big (a practically very relevant case), the correspondences typically originate from the same camera in both viewpoints. We can therefore do a camera-wise grouping of the correspondences in the multi-camera system. The following situation contains four pairs given by the black, green, blue, and orange cameras in both viewpoints:
*
* \image html nonoverlapping.png
* \image latex nonoverlapping.pdf "" width=0.8\columnwidth
*
* <li> Central multi-viewpoint problems: By multi-viewpoint we understand here problems that involve more than two viewpoints. As indicated below, a problem of three central viewpoints for instance also allows us to identify three camera-pairs. The number of camera-pairs in an n-view problem amounts to the number of combinations of 2 out of n, meaning n(n-1)/2. For the below example, we could have the camera pairs (c,c'), (c',c''), and (c'',c). The first pair would have a set of correspondences originating from points p1 and p4, the second one from p2 and p4, and the third one from p3 and p4.
*
* \image html multi_viewpoint.png
* \image latex multi_viewpoint.pdf "" width=0.8\columnwidth
*
* </ul>
*
* The multi-adapters keep track of these camera-pair-wise correspondence groups. The benefit of this becomes apparent when moving towards random sample-consensus schemes. Have a look at the "opengv/sac/"-folder; it contains the <b>MultiSampleConsensus</b>, <b>MultiRansac</b>, and <b>MultiSampleConsensusProblem</b> classes. They employ the multi-indices, and the derived MultiSampleConsensusProblems exploit the fact that the correspondences are grouped:
*
* <ul>
* <li>The <b>MultiNoncentralRelativePoseSacProblem</b> is for non-central, non-overlapping viewpoints with little change, and exploits the grouping in order to do homogeneous sampling of correspondences over the cameras. As an example, imagine we are computing the relative pose of a non-overlapping multi-camera rig with two cameras facing opposite directions. In terms of accuracy, it doesn't make sense to sample 16 points in one camera and one point in the other. We would preferably sample 8 points in one camera and 9 in the other. This is exactly what MultiNoncentralRelativePoseSacProblem is able to do. It uses the derived adapter <b>NoncentralRelativeMultiAdapter</b>.
* <li>In the multi-viewpoint case, one could of course solve a central relative-pose problem for each camera-pair individually. The idea of <b>MultiCentralRelativePoseSacProblem</b> is to benefit from a joint solution of multiple relative-pose problems. In the above three-view problem for instance, we can exploit additional constraints on the individual transformations, such as cycles of rotations returning identity, and cycles of translations returning zero. The corresponding adapter is called <b>CentralRelativeMultiAdapter</b>.
* </ul>
*
* All this stuff is highly experimental, so you probably shouldn't pay too much attention to it for the moment ;)
*
*/