File: feature_description.rst

.. _feature_description:

Feature Description
*******************

Goal
=====

In this tutorial you will learn how to:

.. container:: enumeratevisibleitemswithsquare

   * Use the :descriptor_extractor:`DescriptorExtractor<>` interface to find the feature vectors corresponding to the keypoints (see the sketch after this list). Specifically:

     * Use :surf_descriptor_extractor:`SurfDescriptorExtractor<>` and its function :descriptor_extractor:`compute<>` to perform the required calculations.
     * Use a :brute_force_matcher:`BFMatcher<>` to match the feature vectors.
     * Use the function :draw_matches:`drawMatches<>` to draw the detected matches.

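As a minimal sketch of the generic :descriptor_extractor:`DescriptorExtractor<>` interface (independent of the full sample below), an extractor can also be created by name through the factory method. This assumes the nonfree module is available and initialized so that ``"SURF"`` is registered:

.. code-block:: cpp

   #include <vector>
   #include "opencv2/core/core.hpp"
   #include "opencv2/features2d/features2d.hpp"
   #include "opencv2/nonfree/nonfree.hpp"

   // Compute SURF descriptors through the generic interface;
   // descriptors.row(i) corresponds to keypoints[i]. Note that
   // compute() may remove keypoints for which no descriptor can
   // be computed, hence the non-const keypoints argument.
   cv::Mat describe( const cv::Mat& img, std::vector<cv::KeyPoint>& keypoints )
   {
     cv::initModule_nonfree();  // registers "SURF" with the factory

     cv::Ptr<cv::DescriptorExtractor> extractor =
         cv::DescriptorExtractor::create( "SURF" );

     cv::Mat descriptors;
     extractor->compute( img, keypoints, descriptors );
     return descriptors;
   }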

Theory
======

Code
====

This tutorial's code is shown below. You can also download it from `here <https://github.com/Itseez/opencv/tree/master/samples/cpp/tutorial_code/features2D/SURF_descriptor.cpp>`_.

.. code-block:: cpp

   #include <stdio.h>
   #include <iostream>
   #include "opencv2/core/core.hpp"
   #include "opencv2/features2d/features2d.hpp"
   #include "opencv2/highgui/highgui.hpp"
   #include "opencv2/nonfree/features2d.hpp"

   using namespace cv;

   void readme();

   /** @function main */
   int main( int argc, char** argv )
   {
     if( argc != 3 )
      { readme(); return -1; }

     Mat img_1 = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
     Mat img_2 = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );

     if( !img_1.data || !img_2.data )
      { return -1; }

     //-- Step 1: Detect the keypoints using SURF Detector
     int minHessian = 400;

     SurfFeatureDetector detector( minHessian );

     std::vector<KeyPoint> keypoints_1, keypoints_2;

     detector.detect( img_1, keypoints_1 );
     detector.detect( img_2, keypoints_2 );

     //-- Step 2: Calculate descriptors (feature vectors)
     SurfDescriptorExtractor extractor;

     Mat descriptors_1, descriptors_2;

     extractor.compute( img_1, keypoints_1, descriptors_1 );
     extractor.compute( img_2, keypoints_2, descriptors_2 );

     //-- Step 3: Matching descriptor vectors with a brute force matcher
     BFMatcher matcher(NORM_L2);
     std::vector< DMatch > matches;
     matcher.match( descriptors_1, descriptors_2, matches );

     //-- Draw matches
     Mat img_matches;
     drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );

     //-- Show detected matches
     imshow("Matches", img_matches );

     waitKey(0);

     return 0;
   }

   /** @function readme */
   void readme()
   { std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }

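Note that ``NORM_L2`` is the appropriate distance for floating-point descriptors such as SURF and SIFT. For binary descriptors (ORB, BRISK, BRIEF) the Hamming norm should be used instead; a one-line sketch:

.. code-block:: cpp

   // For binary descriptors, match with the Hamming distance instead of L2:
   BFMatcher matcher( NORM_HAMMING );
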
Explanation
============

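The sample proceeds in three steps. First, keypoints are detected in both images with ``SurfFeatureDetector``. Second, ``SurfDescriptorExtractor`` computes a descriptor (feature vector) for each keypoint; row *i* of the descriptor matrix corresponds to keypoint *i*. Third, ``BFMatcher`` compares every descriptor of the first image against every descriptor of the second and keeps, for each query descriptor, the closest one under the L2 norm. Each resulting ``DMatch`` stores the index of the query descriptor (``queryIdx``), the index of the matched train descriptor (``trainIdx``), and the distance between them.

As a minimal sketch (not part of the original sample), the matches could be inspected like this:

.. code-block:: cpp

   // Print each match as "query index -> train index (distance)".
   for( size_t i = 0; i < matches.size(); i++ )
   {
     std::cout << matches[i].queryIdx << " -> " << matches[i].trainIdx
               << "  (distance " << matches[i].distance << ")" << std::endl;
   }
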
Result
======

#. Here is the result of applying the brute-force matcher to the two original images:

   .. image:: images/Feature_Description_BruteForce_Result.jpg
      :align: center
      :height: 200pt