Transformations {#tutorial_transformations}
===============

@prev_tutorial{tutorial_widget_pose}
@next_tutorial{tutorial_creating_widgets}

Goal
----

In this tutorial you will learn how to:

-   Use makeTransformToGlobal to compute a pose
-   Use makeCameraPose and Viz3d::setViewerPose
-   Visualize a camera position by axes and by a viewing frustum

Code
----

You can download the code from [here](https://github.com/opencv/opencv_contrib/tree/master/modules/viz/samples/transformations.cpp).
@include viz/samples/transformations.cpp

Explanation
-----------

Here is the general structure of the program:

-   Create a visualization window.
    @code{.cpp}
    /// Create a window
    viz::Viz3d myWindow("Transformations");
    @endcode
-   Get the camera pose from the camera position, focal point and y direction.
    @code{.cpp}
    /// Let's assume camera has the following properties
    Point3f cam_pos(3.0f,3.0f,3.0f), cam_focal_point(3.0f,3.0f,2.0f), cam_y_dir(-1.0f,0.0f,0.0f);

    /// We can get the pose of the cam using makeCameraPose
    Affine3f cam_pose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);
    @endcode
-   Obtain the transform matrix knowing the axes of the camera coordinate system (a standalone sketch relating makeCameraPose and makeTransformToGlobal follows this list).
    @code{.cpp}
    /// We can get the transformation matrix from the camera coordinate system
    /// to the global one using makeTransformToGlobal. We need the axes of the camera
    Affine3f transform = viz::makeTransformToGlobal(Vec3f(0.0f,-1.0f,0.0f), Vec3f(-1.0f,0.0f,0.0f), Vec3f(0.0f,0.0f,-1.0f), cam_pos);
    @endcode
-   Create a cloud widget from the bunny.ply file.
    @code{.cpp}
    /// Create a cloud widget.
    Mat bunny_cloud = cvcloud_load();
    viz::WCloud cloud_widget(bunny_cloud, viz::Color::green());
    @endcode
-   Given the pose in the camera coordinate system, compute the pose in the global coordinate system.
    @code{.cpp}
    /// Pose of the widget in camera frame
    Affine3f cloud_pose = Affine3f().translate(Vec3f(0.0f,0.0f,3.0f));
    /// Pose of the widget in global frame
    Affine3f cloud_pose_global = transform * cloud_pose;
    @endcode
-   If the viewpoint is set to global, visualize the camera coordinate frame and the viewing frustum.
    @code{.cpp}
    /// Visualize camera frame
    if (!camera_pov)
    {
        viz::WCameraPosition cpw(0.5); // Coordinate axes
        viz::WCameraPosition cpw_frustum(Vec2f(0.889484, 0.523599)); // Camera frustum
        myWindow.showWidget("CPW", cpw, cam_pose);
        myWindow.showWidget("CPW_FRUSTUM", cpw_frustum, cam_pose);
    }
    @endcode
-   Visualize the cloud widget with the computed global pose.
    @code{.cpp}
    /// Visualize widget
    myWindow.showWidget("bunny", cloud_widget, cloud_pose_global);
    @endcode
-   If the viewpoint is set to the camera's, set the viewer pose to **cam_pose**.
    @code{.cpp}
    /// Set the viewer pose to that of camera
    if (camera_pov)
        myWindow.setViewerPose(cam_pose);
    @endcode
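
The two helpers used above fit together as follows: **makeCameraPose** builds a look-at style frame from the camera position, focal point and y direction, and **makeTransformToGlobal** packs three axes plus an origin into the transform that maps camera-frame coordinates into the global frame. The sketch below illustrates this relationship; it is not part of the sample, and the hand-built frame (the `x_axis`/`y_axis`/`z_axis` vectors and their ordering) is an assumption about the look-at construction, printed next to the library result so it can be verified.
@code{.cpp}
#include <opencv2/viz.hpp>
#include <iostream>

using namespace cv;

int main()
{
    /// Same camera parameters as in the sample (declared as Vec3f here)
    Vec3f cam_pos(3.0f,3.0f,3.0f), cam_focal_point(3.0f,3.0f,2.0f), cam_y_dir(-1.0f,0.0f,0.0f);

    /// Pose computed by the library helper
    Affine3f cam_pose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);

    /// Hand-built frame (assumption: a gluLookAt-style construction whose z axis
    /// points from the camera position towards the focal point)
    Vec3f z_axis = normalize(Vec3f(cam_focal_point - cam_pos));
    Vec3f x_axis = normalize(cam_y_dir.cross(z_axis));
    Vec3f y_axis = z_axis.cross(x_axis);

    /// makeTransformToGlobal combines the axes and the origin into the
    /// camera-to-global transform
    Affine3f manual_pose = viz::makeTransformToGlobal(x_axis, y_axis, z_axis, cam_pos);

    std::cout << "library pose:\n" << cam_pose.matrix    << std::endl;
    std::cout << "manual pose:\n"  << manual_pose.matrix << std::endl;

    /// The same composition as in the tutorial: a pose expressed in the camera
    /// frame is mapped to the global frame by left-multiplying with the
    /// camera-to-global transform
    Affine3f transform = viz::makeTransformToGlobal(Vec3f(0.0f,-1.0f,0.0f), Vec3f(-1.0f,0.0f,0.0f), Vec3f(0.0f,0.0f,-1.0f), cam_pos);
    Affine3f cloud_pose = Affine3f().translate(Vec3f(0.0f,0.0f,3.0f));
    Affine3f cloud_pose_global = transform * cloud_pose;
    std::cout << "cloud pose in global frame:\n" << cloud_pose_global.matrix << std::endl;

    return 0;
}
@endcode
Building this sketch requires OpenCV compiled with the viz module, just like the sample itself.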

Results
-------

-#  Here is the result from the camera's point of view.

![](images/camera_view_point.png)

-#  Here is the result from the global point of view.

![](images/global_view_point.png)