<h2>DESCRIPTION</h2>

<em>i.segment</em> identifies segments (objects) from
imagery data.
<p>
Image segmentation or object recognition is the process of grouping
similar pixels into unique segments, also referred to as objects.
Boundary-based and region-based algorithms are described in the
literature; currently, a region growing and merging algorithm is
implemented. Each
object found during the segmentation process is given a unique ID and
is a collection of contiguous pixels meeting some criteria. Note the
contrast with image classification where all pixels similar to each
other are assigned to the same class and do not need to be contiguous.
The image segmentation results can be useful on their own, or used as a
preprocessing step for image classification. The segmentation
preprocessing step can reduce noise and speed up the classification.

<h2>NOTES</h2>

<h3>Region Growing and Merging</h3>

This segmentation algorithm sequentially examines all current segments
in the raster map. The similarity between the current segment and each
of its neighbors is calculated according to the given distance
formula. Segments will be merged if they meet a number of criteria,
including:

<ol>
  <li>The pair is mutually most similar to each other (the similarity
distance is smaller than to any other neighbor), and</li>
  <li>The similarity is lower than the input threshold.</li>
</ol>

The process is repeated until no merges are made during a complete pass.
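<p>
The merge criteria can be sketched as follows (a simplified sketch of
the decision rule only; function and variable names are hypothetical,
not the GRASS C implementation):

```python
def should_merge(seg, neighbor, distances, threshold):
    """Decide whether two adjacent segments may be merged.

    distances[a][b] is the similarity distance between segments a and b
    (0.0 = identical); smaller values mean more similar.
    """
    d = distances[seg][neighbor]
    # Criterion 2: the similarity must be below the threshold.
    if d >= threshold:
        return False
    # Criterion 1: mutual best match - each segment must be the
    # other's most similar neighbor.
    best_for_seg = min(distances[seg], key=distances[seg].get)
    best_for_neighbor = min(distances[neighbor], key=distances[neighbor].get)
    return best_for_seg == neighbor and best_for_neighbor == seg
```

In a full pass, this test would be applied to every segment/neighbor
pair, and the pass repeated until no merge succeeds.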

<h4>Similarity and Threshold</h4>
The similarity between segments and unmerged objects is used to
determine which objects are merged. Smaller distance values indicate a
closer match, with a similarity score of zero for identical pixels.
<p>
During normal processing, merges are only allowed when the
similarity between two segments is lower than the given
threshold value. During the final pass, however, if a minimum
segment size of 2 or larger is given with the <b>minsize</b>
parameter, segments with a smaller pixel count will be merged with
their most similar neighbor even if the similarity is greater than
the threshold.
<p>
The <b>threshold</b> must be larger than 0.0 and smaller than 1.0. A threshold
of 0 would allow only identical valued pixels to be merged, while a
threshold of 1 would allow everything to be merged. The threshold is scaled to
the data range of the entire input data, not the current computational region.
This allows the application of the same threshold to different computational
regions when working on the same dataset, ensuring that this threshold has the
same meaning in all subregions.
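<p>
The scaling of the threshold to the input data range can be illustrated
with a small sketch (hypothetical helper, not the GRASS API; the actual
implementation works per band internally):

```python
def scale_threshold(threshold, data_min, data_max):
    """Scale a 0-1 threshold to the data range of the whole input map.

    Because data_min/data_max come from the entire input map rather
    than the current computational region, the same 0-1 threshold
    keeps the same meaning in every subregion.
    """
    if not 0.0 < threshold < 1.0:
        raise ValueError("threshold must be between 0.0 and 1.0 (exclusive)")
    return threshold * (data_max - data_min)

# On 8-bit data (0-255), a threshold of 0.05 corresponds to an
# absolute spectral distance of 0.05 * 255 = 12.75.
```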

<p>
Initial empirical
tests indicate threshold values of 0.01 to 0.05 are reasonable values
to start. It is recommended to start with a low value, e.g. 0.01, and
then perform hierarchical segmentation by using the output of the last
run as <b>seeds</b> for the next run.

<h4>Calculation Formulas</h4>
Both Euclidean and Manhattan distances use the normal definition,
considering each raster in the image group as a dimension.

In the future, the distance calculation will also take into account the
shape characteristics of the segments. The normal distances are then
multiplied by the input radiometric weight. Next, an additional
contribution is added: <code>(1-radioweight) * {smoothness * smoothness
weight + compactness * (1-smoothness weight)}</code>,
where <code>compactness = Perimeter Length / sqrt( Area )</code>
and <code>smoothness = Perimeter Length / Bounding Box</code>. The
perimeter length is estimated as the number of pixel sides the segment
has.
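<p>
The distances above can be sketched in Python (a simplified sketch of
the formulas as stated, not the GRASS C implementation; function names
are hypothetical):

```python
import math

def euclidean(p, q):
    """Euclidean distance; each raster band in the group is one dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    """Manhattan distance over the same band values."""
    return sum(abs(a - b) for a, b in zip(p, q))

def shape_adjusted(spectral_dist, radioweight, smoothness,
                   compactness, smooth_weight):
    """Planned shape-aware distance from the formula above, where
    compactness = perimeter / sqrt(area) and
    smoothness  = perimeter / bounding-box perimeter."""
    return (radioweight * spectral_dist
            + (1 - radioweight) * (smoothness * smooth_weight
                                   + compactness * (1 - smooth_weight)))
```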

<h4>Seeds</h4>
The seeds map can be used to provide either seed pixels (random or
selected points from which to start the segmentation process) or
seed segments. If the seeds are the results of a previous segmentation
with lower threshold, hierarchical segmentation can be performed. The
different approaches are automatically detected by the program: any
pixels that have identical seed values and are contiguous will be
assigned a unique segment ID.
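<p>
The detection of seed segments can be sketched as a flood fill over
contiguous pixels with identical seed values (an illustrative sketch of
the behavior described above, not the GRASS internals; 4-connectivity
is assumed here):

```python
from collections import deque

def label_seed_segments(seeds):
    """Assign a unique segment ID to each contiguous group of pixels
    with identical seed values. seeds is a 2-D list; None marks an
    unseeded pixel, which keeps label 0."""
    rows, cols = len(seeds), len(seeds[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 1
    for r in range(rows):
        for c in range(cols):
            if seeds[r][c] is None or labels[r][c]:
                continue
            # Flood-fill this contiguous region of identical values.
            value, queue = seeds[r][c], deque([(r, c)])
            labels[r][c] = next_id
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and seeds[ny][nx] == value
                            and not labels[ny][nx]):
                        labels[ny][nx] = next_id
                        queue.append((ny, nx))
            next_id += 1
    return labels
```

Two blobs with the same seed value that are not contiguous receive
different segment IDs, which is what makes hierarchical segmentation
with previous results as seeds possible.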

<h4>Maximum number of segments</h4>

The current limit with CELL storage used for segment IDs is 2
billion starting segment IDs. Segment IDs are assigned whenever a yet
unprocessed pixel is merged with another segment. Integer overflow can
happen for computational regions with more than 2 billion cells and
very low threshold values, resulting in many segments. If integer
overflow occurs during region growing, starting segments can be used
(created by initial classification or other methods).

<h4>Goodness of Fit</h4>
The <b>goodness</b> of fit for each pixel is calculated as 1 - distance
of the pixel to the object it belongs to. The distance is calculated
with the selected <b>similarity</b> method. A value of 1 means
identical values (perfect fit); a value of 0 means the maximum possible
distance (worst possible fit).
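<p>
A minimal sketch of the goodness measure, assuming Euclidean similarity
and a hypothetical normalization by the data range and band count so
the distance falls in [0, 1] (illustration only, not the GRASS
internals):

```python
import math

def goodness_of_fit(pixel, segment_mean, band_range):
    """Goodness of fit: 1 minus the scaled distance of the pixel to
    the segment it belongs to. A pixel identical to the segment mean
    yields 1.0; a maximally distant pixel yields 0.0."""
    d = math.sqrt(sum(((a - b) / r) ** 2
                      for a, b, r in zip(pixel, segment_mean, band_range)))
    # Normalize by the number of bands (hypothetical normalization).
    d /= math.sqrt(len(pixel))
    return 1.0 - d
```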

<h3>Mean shift</h3>
Mean shift image segmentation consists of two steps: 1. anisotropic
filtering and 2. clustering. For anisotropic filtering, new cell values
are calculated from all pixels not farther than <b>hs</b> pixels away
from the current pixel and with a spectral difference not larger than
<b>hr</b>. That means that pixels that are too different from the current
pixel are not considered in the calculation of new pixel values.
<b>hs</b> and <b>hr</b> are the spatial and spectral (range) bandwidths
for anisotropic filtering. Cell values are iteratively recalculated
(shifted to the segment's mean) until the maximum number of iterations
is reached or until the largest shift is smaller than <b>threshold</b>.
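<p>
One filtering step can be sketched for a single band as follows (a
simplified sketch of the windowing described above, not the GRASS
implementation; in practice this step is iterated until the largest
shift falls below the threshold):

```python
def mean_shift_filter_step(img, hs, hr):
    """One anisotropic-filtering step: each new cell value is the mean
    of all pixels within a spatial window of radius hs whose value
    differs from the center pixel by at most hr."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            center = img[r][c]
            # Collect spatially close pixels that are spectrally similar.
            vals = [img[y][x]
                    for y in range(max(0, r - hs), min(rows, r + hs + 1))
                    for x in range(max(0, c - hs), min(cols, c + hs + 1))
                    if abs(img[y][x] - center) <= hr]
            out[r][c] = sum(vals) / len(vals)
    return out
```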

<p>
If input bands need to be reprojected, bilinear resampling should be
avoided because it causes smooth transitions between objects. More
appropriate methods are bicubic or lanczos resampling.

<h3>Boundary Constraints</h3>
Boundary constraints limit the adjacency of pixels and segments.
Each unique value present in the <b>bounds</b> raster is
treated as a MASK. Thus no segments in the final segmented map
will cross a boundary, even if their spectral data is very similar.

<h3>Minimum Segment Size</h3>
To reduce the salt and pepper effect, a <b>minsize</b> greater
than 1 will add one additional pass to the processing. During the
final pass, the threshold is ignored for any segments smaller than
the set size, thus forcing very small segments to merge with their
most similar neighbor. A minimum segment size larger than 1 is
recommended when using adaptive bandwidth selected with the <b>-a</b>
flag.

<h2>EXAMPLES</h2>

<h3>Segmentation of RGB orthophoto</h3>

This example uses the ortho photograph included in the NC Sample
Dataset. Set up an imagery group:
<div class="code"><pre>
i.group group=ortho_group input=ortho_2001_t792_1m@PERMANENT
</pre></div>

<p>Set the region to a smaller test region (resolution taken from
input ortho photograph).

<div class="code"><pre>
g.region -p raster=ortho_2001_t792_1m n=220446 s=220075 e=639151 w=638592
</pre></div>

Try out a low threshold and check the results.
<div class="code"><pre>
i.segment group=ortho_group output=ortho_segs_l1 threshold=0.02
</pre></div>

<center>
<img src="i_segment_ortho_segs_l1.jpg">
</center>

<p>
From a visual inspection, it seems this results in too many segments.
Increasing the threshold, using the previous results as seeds,
and setting a minimum size of 2:
<div class="code"><pre>
i.segment group=ortho_group output=ortho_segs_l2 threshold=0.05 seeds=ortho_segs_l1 min=2

i.segment group=ortho_group output=ortho_segs_l3 threshold=0.1 seeds=ortho_segs_l2

i.segment group=ortho_group output=ortho_segs_l4 threshold=0.2 seeds=ortho_segs_l3

i.segment group=ortho_group output=ortho_segs_l5 threshold=0.3 seeds=ortho_segs_l4
</pre></div>

<center>
<img src="i_segment_ortho_segs_l2_l5.jpg">
</center>

<p>
The output <code>ortho_segs_l4</code> with <b>threshold</b>=0.2 still has
too many segments, but the output with <b>threshold</b>=0.3 has too few
segments. A threshold value of 0.25 seems to be a good choice. There
is also some noise in the image; let's next force all segments smaller
than 10 pixels to merge into their most similar neighbor (even if
they are less similar than required by our threshold):

<p>Set the region to match the entire map(s) in the group.
<div class="code"><pre>
g.region -p raster=ortho_2001_t792_1m@PERMANENT
</pre></div>

<p>
Run <em>i.segment</em> on the full map:

<div class="code"><pre>
i.segment group=ortho_group output=ortho_segs_final threshold=0.25 min=10
</pre></div>

<center>
<img src="i_segment_ortho_segs_final.jpg">
</center>

<p>
Processing the entire ortho image with nearly 10 million pixels took
about 450 times longer than the run on the small test region.

<h3>Segmentation of panchromatic channel</h3>

This example uses the panchromatic channel of the Landsat7 scene included
in the North Carolina sample dataset:

<div class="code"><pre>
# create group with single channel
i.group group=singleband input=lsat7_2002_80

# set computational region to Landsat7 PAN band
g.region raster=lsat7_2002_80 -p

# perform segmentation with minsize=5
i.segment group=singleband threshold=0.05 minsize=5 \
  output=lsat7_2002_80_segmented_min5 goodness=lsat7_2002_80_goodness_min5

# perform segmentation with minsize=100
i.segment group=singleband threshold=0.05 minsize=100 \
  output=lsat7_2002_80_segmented_min100 goodness=lsat7_2002_80_goodness_min100
</pre></div>

<p>
<center>
<img src="i_segment_lsat7_pan.png"><br>
Original panchromatic channel of the Landsat7 scene
</center>

<p>
<center>
<img src="i_segment_lsat7_seg_min5.png"><br>
Segmented panchromatic channel, minsize=5
</center>
<p>
<center>
<img src="i_segment_lsat7_seg_min100.png"><br>
Segmented panchromatic channel, minsize=100
</center>

<h2>TODO</h2>

<h3>Functionality</h3>
<ul>
<li>Further testing of the shape characteristics (smoothness,
compactness); if the results look good, they should be added.
(<b>in progress</b>)</li>
<li>Mahalanobis distance for the similarity calculation.</li>
</ul>
<h3>Use of Segmentation Results</h3>
<ul>
<li>Improve the optional output from this module, or better yet, add a
module for <em>i.segment.metrics</em>.</li>
<li>Providing updates to <em><a href="i.maxlik.html">i.maxlik</a></em>
to ensure the segmentation output can be used as input for the
existing classification functionality.</li>
<li>Integration/workflow for <em>r.fuzzy</em> (Addon).</li>
</ul>

<h3>Speed</h3>
<ul>
<li>See create_isegs.c</li>
</ul>

<h2>REFERENCES</h2>

This project was first developed during GSoC 2012. Project documentation,
image segmentation references, and other information are available at the
<a href="https://grasswiki.osgeo.org/wiki/GRASS_GSoC_2012_Image_Segmentation">project wiki</a>.
<p>
Information about
<a href="https://grasswiki.osgeo.org/wiki/Image_classification">classification in GRASS</a>
is available on the wiki.

<h2>SEE ALSO</h2>

<em>
<a href="g.gui.iclass.html">g.gui.iclass</a>,
<a href="i.group.html">i.group</a>,
<a href="i.maxlik.html">i.maxlik</a>,
<a href="i.smap.html">i.smap</a>,
<a href="r.kappa.html">r.kappa</a>
</em>

<h2>AUTHORS</h2>

Eric Momsen - North Dakota State University<br>
Markus Metz (GSoC Mentor)