<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="pandoc" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
  <meta name="dcterms.date" content="2025-03-01" />
  <title>Duperemove</title>
  <style>
    html {
      color: #1a1a1a;
      background-color: #fdfdfd;
    }
    body {
      margin: 0 auto;
      max-width: 36em;
      padding-left: 50px;
      padding-right: 50px;
      padding-top: 50px;
      padding-bottom: 50px;
      hyphens: auto;
      overflow-wrap: break-word;
      text-rendering: optimizeLegibility;
      font-kerning: normal;
    }
    @media (max-width: 600px) {
      body {
        font-size: 0.9em;
        padding: 12px;
      }
      h1 {
        font-size: 1.8em;
      }
    }
    @media print {
      html {
        background-color: white;
      }
      body {
        background-color: transparent;
        color: black;
        font-size: 12pt;
      }
      p, h2, h3 {
        orphans: 3;
        widows: 3;
      }
      h2, h3, h4 {
        page-break-after: avoid;
      }
    }
    p {
      margin: 1em 0;
    }
    a {
      color: #1a1a1a;
    }
    a:visited {
      color: #1a1a1a;
    }
    img {
      max-width: 100%;
    }
    svg {
      height: auto;
      max-width: 100%;
    }
    h1, h2, h3, h4, h5, h6 {
      margin-top: 1.4em;
    }
    h5, h6 {
      font-size: 1em;
      font-style: italic;
    }
    h6 {
      font-weight: normal;
    }
    ol, ul {
      padding-left: 1.7em;
      margin-top: 1em;
    }
    li > ol, li > ul {
      margin-top: 0;
    }
    blockquote {
      margin: 1em 0 1em 1.7em;
      padding-left: 1em;
      border-left: 2px solid #e6e6e6;
      color: #606060;
    }
    code {
      font-family: Menlo, Monaco, Consolas, 'Lucida Console', monospace;
      font-size: 85%;
      margin: 0;
      hyphens: manual;
    }
    pre {
      margin: 1em 0;
      overflow: auto;
    }
    pre code {
      padding: 0;
      overflow: visible;
      overflow-wrap: normal;
    }
    .sourceCode {
     background-color: transparent;
     overflow: visible;
    }
    hr {
      background-color: #1a1a1a;
      border: none;
      height: 1px;
      margin: 1em 0;
    }
    table {
      margin: 1em 0;
      border-collapse: collapse;
      width: 100%;
      overflow-x: auto;
      display: block;
      font-variant-numeric: lining-nums tabular-nums;
    }
    table caption {
      margin-bottom: 0.75em;
    }
    tbody {
      margin-top: 0.5em;
      border-top: 1px solid #1a1a1a;
      border-bottom: 1px solid #1a1a1a;
    }
    th {
      border-top: 1px solid #1a1a1a;
      padding: 0.25em 0.5em 0.25em 0.5em;
    }
    td {
      padding: 0.125em 0.5em 0.25em 0.5em;
    }
    header {
      margin-bottom: 4em;
      text-align: center;
    }
    #TOC li {
      list-style: none;
    }
    #TOC ul {
      padding-left: 1.3em;
    }
    #TOC > ul {
      padding-left: 0;
    }
    #TOC a:not(:hover) {
      text-decoration: none;
    }
    code{white-space: pre-wrap;}
    span.smallcaps{font-variant: small-caps;}
    div.columns{display: flex; gap: min(4vw, 1.5em);}
    div.column{flex: auto; overflow-x: auto;}
    div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
    /* The extra [class] is a hack that increases specificity enough to
       override a similar rule in reveal.js */
    ul.task-list[class]{list-style: none;}
    ul.task-list li input[type="checkbox"] {
      font-size: inherit;
      width: 0.8em;
      margin: 0 0.8em 0.2em -1.6em;
      vertical-align: middle;
    }
    .display.math{display: block; text-align: center; margin: 0.5rem auto;}
  </style>
</head>
<body>
<header id="title-block-header">
<h1 class="title">Duperemove</h1>
<p class="date">01 Mar 2025</p>
</header>
<h1 id="name">NAME</h1>
<p><code>duperemove</code> - Find duplicate regions in files and submit
them for deduplication</p>
<h1 id="synopsis">SYNOPSIS</h1>
<p><strong>duperemove</strong> <em><a href="#options">options</a></em>
<em>files…</em></p>
<h1 id="description">DESCRIPTION</h1>
<p><code>duperemove</code> is a simple tool for finding duplicated
regions in files and submitting them for deduplication. When given a
list of files it will hash their contents and compare those hashes to
each other, finding and categorizing regions that match each other. When
given the <code>-d</code> option, <code>duperemove</code> will submit
those regions for deduplication using the Linux kernel FIDEDUPERANGE
ioctl.</p>
<p><code>duperemove</code> computes hashes for each file’s extents as
well as for the whole file’s content. Optionally, per-block hashes can
be computed.</p>
<p><code>duperemove</code> can store the hashes it computes in a
<code>hashfile</code>. If given an existing hashfile,
<code>duperemove</code> will only compute hashes for those files which
have changed since the last run. Thus you can run
<code>duperemove</code> repeatedly on your data as it changes, without
having to re-checksum unchanged data. For more on hashfiles see the
<code>--hashfile</code> option below as well as the
<code>Examples</code> section.</p>
<p><code>duperemove</code> can also take input from the
<code>fdupes</code> program, see the <code>--fdupes</code> option
below.</p>
<h1 id="general">GENERAL</h1>
<p>Duperemove has two major modes of operation, one of which is a subset
of the other.</p>
<h2 id="readonly-non-deduplicating-mode">Readonly / Non-deduplicating
Mode</h2>
<p>When run without <code>-d</code> (the default) duperemove will print
out one or more tables of matching extents it has determined would be
ideal candidates for deduplication. As a result, readonly mode is useful
for seeing what duperemove might do when run with <code>-d</code>.</p>
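<p>For example, a readonly pass can be used to preview what a later
<code>-d</code> run would dedupe (path illustrative):</p>
<pre><code>duperemove -r /foo     # readonly: print candidate extents only
duperemove -dr /foo    # same scan, but submit for dedupe</code></pre>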
<p>Generally, duperemove does not concern itself with the underlying
representation of the extents it processes. Some of them could be
compressed, undergoing I/O, or even have already been deduplicated. In
dedupe mode, the kernel handles those details and therefore we try not
to replicate that work.</p>
<h2 id="deduping-mode">Deduping Mode</h2>
<p>This functions similarly to readonly mode with the exception that the
duplicated extents found in our “read, hash, and compare” step will
actually be submitted for deduplication. Extents that have already been
deduped will be skipped. An estimate of the total data deduplicated will
be printed after the operation is complete. This estimate is calculated
by comparing the total amount of shared bytes in each file before and
after the dedupe.</p>
<h1 id="options">OPTIONS</h1>
<h2 id="common-options">Common options</h2>
<p><code>files</code> can refer to a list of regular files and
directories or be a hyphen (-) to read them from standard input. If a
directory is specified, all regular files within it will also be
scanned. Duperemove can also be told to recursively scan directories
with the <code>-r</code> switch.</p>
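<p>For example, a file list can be generated by another tool and fed in
on standard input (paths illustrative):</p>
<pre><code>find /data -type f -name "*.iso" | duperemove -d --hashfile=data.hash -</code></pre>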
<dl>
<dt><strong>-r</strong></dt>
<dd>
Enable recursive dir traversal.
</dd>
<dt><strong>-d</strong></dt>
<dd>
De-dupe the results - only works on <code>btrfs</code> and
<code>xfs</code>. Use this option twice to disable the check and try to
run the ioctl anyway.
</dd>
<dt><strong>--hashfile</strong>=<code>hashfile</code></dt>
<dd>
<p>Use a file for storage of hashes instead of memory. This option
drastically reduces the memory footprint of duperemove and is
recommended when your data set is more than a few files large.
<code>Hashfiles</code> are also reusable, allowing you to further reduce
the amount of hashing done on subsequent dedupe runs.</p>
<p>If <code>hashfile</code> does not exist it will be created. If it
exists, <code>duperemove</code> will check the file paths stored inside
of it for changes. Files which have changed will be rescanned and their
updated hashes will be written to the <code>hashfile</code>. Deleted
files will be removed from the <code>hashfile</code>.</p>
<p>New files are only added to the <code>hashfile</code> if they are
discoverable via the <code>files</code> argument. For that reason you
probably want to provide the same <code>files</code> list and
<code>-r</code> arguments on each run of <code>duperemove</code>. The
file discovery algorithm is efficient and will only visit each file
once, even if it is already in the <code>hashfile</code>.</p>
<p>Adding a new path to a hashfile is as simple as adding it to the
<code>files</code> argument.</p>
<p>When deduping from a hashfile, duperemove will avoid deduping files
which have not changed since the last dedupe.</p>
</dd>
<dt><strong>-B</strong> <code>N</code>,
<strong>--batchsize</strong>=<code>N</code></dt>
<dd>
<p>Run the deduplication phase every <code>N</code> newly scanned files.
This greatly reduces memory usage for large datasets, or when you are
doing partial extent lookups, but reduces multithreading efficiency.</p>
<p>Because batching adds a small overhead, the value should be chosen
based on the average file size and <code>blocksize</code>.</p>
<p>The default of 1024 is a sane value for extents-only lookups; you can
go as low as <code>1</code> if you are running <code>duperemove</code>
on very large files (like virtual machine images).</p>
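<p>For example, deduping a directory of large virtual machine images
with partial lookups enabled and a minimal batch size (path
illustrative):</p>
<pre><code>duperemove -d --hashfile=vm.hash --dedupe-options=partial -B 1 /vm-images</code></pre>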
</dd>
<dt><strong>-h</strong></dt>
<dd>
Print numbers in human-readable format.
</dd>
<dt><strong>-q</strong></dt>
<dd>
Quiet mode. Duperemove will only print errors and a short summary of any
dedupe.
</dd>
<dt><strong>-v</strong></dt>
<dd>
Be verbose.
</dd>
<dt><strong>--help</strong></dt>
<dd>
Prints help text.
</dd>
</dl>
<h2 id="advanced-options">Advanced options</h2>
<dl>
<dt><strong>--fdupes</strong></dt>
<dd>
Run in <code>fdupes</code> mode. With this option you can pipe the
output of <code>fdupes</code> to duperemove to dedupe any duplicate
files found. When receiving a file list in this manner, duperemove will
skip the hashing phase.
</dd>
<dt><strong>-L</strong></dt>
<dd>
Print all files in the hashfile and exit. Requires the
<code>--hashfile</code> option. Will print additional information about
each file when run with <code>-v</code>.
</dd>
<dt><strong>-R</strong> <code>files ..</code></dt>
<dd>
<p>Remove files from the database and exit. Duperemove will read the
list from standard input if a hyphen (-) is provided. Requires the
<code>--hashfile</code> option.</p>
<p><code>Note:</code> If you are piping filenames from another
duperemove instance it is advisable to do so into a temporary file first
as running duperemove simultaneously on the same hashfile may corrupt
that hashfile.</p>
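<p>For example (illustrative paths, assuming <code>-L</code> prints one
path per line), collect the names into a temporary file first, then
remove them:</p>
<pre><code>duperemove -L --hashfile=foo.hash | grep "^/foo/stale/" &gt; /tmp/to-remove
duperemove -R --hashfile=foo.hash - &lt; /tmp/to-remove</code></pre>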
</dd>
<dt><strong>--skip-zeroes</strong></dt>
<dd>
Read data blocks and skip any zeroed blocks. This can speed up
duperemove, but may prevent deduplication of zeroed files.
</dd>
<dt><strong>-b</strong> <code>size</code></dt>
<dd>
Use the specified block size for reading file extents. Defaults to 128K.
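<p>For example, to scan with a smaller block size than the 128K default
(path illustrative):</p>
<pre><code>duperemove -r -b 64K /foo</code></pre>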
</dd>
<dt><strong>--io-threads</strong>=<code>N</code></dt>
<dd>
Use N threads for I/O. This is used by the file hashing and dedupe
stages. Default is automatically detected based on number of host cpus.
</dd>
<dt><strong>--cpu-threads</strong>=<code>N</code></dt>
<dd>
<p>Use N threads for CPU bound tasks. This is used by the duplicate
extent finding stage. Default is automatically detected based on number
of host cpus.</p>
<p><code>Note:</code> Hyperthreading can adversely affect performance of
the extent finding stage. If duperemove detects an Intel CPU with
hyperthreading it will use half the number of cores reported by the
system for cpu bound tasks.</p>
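<p>For example, to cap both stages at four threads (values and path
illustrative):</p>
<pre><code>duperemove -dr --io-threads=4 --cpu-threads=4 --hashfile=foo.hash /foo</code></pre>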
</dd>
<dt><strong>--dedupe-options</strong>=<code>options</code></dt>
<dd>
<p>Comma-separated list of options which alter how we dedupe. Prepend
‘no’ to an option in order to turn it off. An example combining several
of these options follows the list below.</p>
<dl>
<dt><strong>[no]partial</strong></dt>
<dd>
<p>Duperemove can often find more dedupe by comparing portions of
extents to each other. This can be a lengthy, CPU intensive task so it
is turned off by default. Using <code>--batchsize</code> is recommended
to limit the negative effects of this option.</p>
<p>The code behind this option is under active development and as a
result the semantics of the <code>partial</code> argument may
change.</p>
</dd>
<dt><strong>[no]same</strong></dt>
<dd>
Defaults to <code>on</code>. Allow dedupe of extents within the same
file.
</dd>
<dt><strong>[no]only_whole_files</strong></dt>
<dd>
Defaults to <code>off</code>. Duperemove will only work on whole files.
Both extent-based and block-based deduplication will be disabled. The
hashfile will be smaller and some operations will be faster, but
deduplication efficiency will be reduced.
</dd>
</dl>
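<p>For example, enabling partial lookups while disabling dedupe within
the same file (path illustrative):</p>
<pre><code>duperemove -dr --dedupe-options=partial,nosame --hashfile=foo.hash /foo</code></pre>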
</dd>
<dt><strong>--read-hashes</strong>=<code>hashfile</code></dt>
<dd>
<p><strong>This option is primarily for testing</strong>. See the
<code>--hashfile</code> option if you want to use hashfiles.</p>
<p>Read hashes from a hashfile. A file list is not required with this
option. Dedupe can be done if duperemove is run from the same base
directory as is stored in the hash file (basically duperemove has to be
able to find the files).</p>
</dd>
<dt><strong>--write-hashes</strong>=<code>hashfile</code></dt>
<dd>
<p><strong>This option is primarily for testing</strong>. See the
<code>--hashfile</code> option if you want to use hashfiles.</p>
<p>Write hashes to a hashfile. These can be read in at a later date and
deduped from.</p>
</dd>
<dt><strong>--debug</strong></dt>
<dd>
Print debug messages, forces <code>-v</code> if selected.
</dd>
<dt><strong>--hash-threads</strong>=<code>N</code></dt>
<dd>
Deprecated, see <code>--io-threads</code> above.
</dd>
<dt><strong>--exclude</strong>=<code>PATTERN</code></dt>
<dd>
You can exclude certain files and folders from the deduplication
process. This can be beneficial for skipping subvolume snapshot mounts,
for instance. Unless you provide a full path, the exclusion pattern is
relative to the current working directory. Also keep in mind that shells
usually expand glob patterns, so the pattern should be quoted. Taking
everything into consideration, the correct way to pass an exclusion
pattern is:
<code>duperemove --exclude "/path/to/dir/file*" /path/to/dir</code>
</dd>
</dl>
<h1 id="examples">EXAMPLES</h1>
<h2 id="simple-usage">Simple Usage</h2>
<p>Dedupe the files in directory /foo, recurse into all subdirectories.
You only want to use this for small data sets:</p>
<pre><code>duperemove -dr /foo</code></pre>
<p>Use duperemove with fdupes to dedupe identical files below directory
foo:</p>
<pre><code>fdupes -r /foo | duperemove --fdupes</code></pre>
<h2 id="using-hashfiles">Using Hashfiles</h2>
<p>Duperemove can optionally store the hashes it calculates in a
hashfile. Hashfiles have two primary advantages - memory usage and
re-usability. When using a hashfile, duperemove will stream computed
hashes to it, instead of main memory.</p>
<p>If Duperemove is run with an existing hashfile, it will only scan
those files which have changed since the last time the hashfile was
updated. The <code>files</code> argument controls which directories
duperemove will scan for newly added files. In the simplest usage, you
rerun duperemove with the same parameters and it will only scan changed
or newly added files - see the first example below.</p>
<p>Dedupe the files in directory foo, storing hashes in foo.hash. We can
run this command multiple times and duperemove will only checksum and
dedupe changed or newly added files:</p>
<pre><code>duperemove -dr --hashfile=foo.hash foo/</code></pre>
<p>Don’t scan for new files, only update changed or deleted files, then
dedupe:</p>
<pre><code>duperemove -dr --hashfile=foo.hash</code></pre>
<p>Add directory bar to our hashfile and discover any files that were
recently added to foo:</p>
<pre><code>duperemove -dr --hashfile=foo.hash foo/ bar/</code></pre>
<p>List the files tracked by foo.hash:</p>
<pre><code>duperemove -L --hashfile=foo.hash</code></pre>
<h1 id="faq">FAQ</h1>
<h2 id="is-duperemove-safe-for-my-data">Is duperemove safe for my
data?</h2>
<p>Yes. To be specific, duperemove does not deduplicate the data itself.
It simply finds candidates for dedupe and submits them to the Linux
kernel FIDEDUPERANGE ioctl. In order to ensure data integrity, the
kernel locks out other access to the file and does a byte-by-byte
compare before proceeding with the dedupe.</p>
<h2 id="is-is-safe-to-interrupt-the-program-ctrl-c">Is is safe to
interrupt the program (Ctrl-C)?</h2>
<p>Yes. The Linux kernel deals with the actual data. On Duperemove’
side, a transactional database engine is used. The result is that you
should be able to ctrl-c the program at any point and re-run without
experiencing corruption of your hashfile. In case of a bug, your
hashfile may be broken, but your data never will.</p>
<h2 id="i-got-two-identical-files-why-are-they-not-deduped">I got two
identical files, why are they not deduped?</h2>
<p>Duperemove by default works on extent granularity. What this means is
if there are two files which are logically identical (have the same
content) but are laid out on disk with different extent structure they
won’t be deduped. For example, if two files are 128k each and their
contents are identical, but one of them consists of a single 128k extent
and the other of two 64k extents, then they won’t be deduped. This
behavior is
dependent on the current implementation and is subject to change as
duperemove is being improved.</p>
<h2 id="what-is-the-cost-of-deduplication">What is the cost of
deduplication?</h2>
<p>Deduplication will lead to increased fragmentation. The blocksize
chosen can have an effect on this. Larger blocksizes will fragment less
but may not save you as much space. Conversely, smaller block sizes may
save more space at the cost of increased fragmentation.</p>
<h2 id="how-can-i-find-out-my-space-savings-after-a-dedupe">How can I
find out my space savings after a dedupe?</h2>
<p>Duperemove will print out an estimate of the saved space after a
dedupe operation for you.</p>
<p>You can get a more accurate picture by running ‘btrfs fi df’ before
and after each duperemove run.</p>
<p>Be careful about using the ‘df’ tool on btrfs - it is common for
space reporting to be ‘behind’ while delayed updates get processed, so
an immediate df after deduping might not show any savings.</p>
<h2 id="why-is-the-total-deduped-data-report-an-estimate">Why is the
total deduped data report an estimate?</h2>
<p>At the moment duperemove can detect that some underlying extents are
shared with other files, but it can not resolve which files those
extents are shared with.</p>
<p>Imagine duperemove is examining a series of files and it notes a
shared data region in one of them. That data could be shared with a file
outside of the series. Since duperemove can’t resolve that information,
it counts the shared data against our dedupe estimate, while in reality
the kernel might deduplicate it further for us.</p>
<h2
id="why-are-my-files-showing-dedupe-but-my-disk-space-is-not-shrinking">Why
are my files showing dedupe but my disk space is not shrinking?</h2>
<p>This is a little complicated, but it comes down to a feature in Btrfs
called <em>bookending</em>. The <a
href="http://en.wikipedia.org/wiki/Btrfs#Extents">Wikipedia article on
Btrfs</a> explains this in detail.</p>
<p>Essentially though, the underlying representation of an extent in
Btrfs can not be split (with small exception). So sometimes we can end
up in a situation where a file extent gets partially deduped (and the
extents marked as shared) but the underlying extent item is not freed or
truncated.</p>
<h2
id="is-there-an-upper-limit-to-the-amount-of-data-duperemove-can-process">Is
there an upper limit to the amount of data duperemove can process?</h2>
<p>Duperemove is fast at reading and cataloging data. Dedupe runs will
be memory limited unless the <code>--hashfile</code> option is used.
<code>--hashfile</code> allows duperemove to temporarily store
duplicated hashes to disk, thus removing the large memory overhead and
allowing for a far larger amount of data to be scanned and deduped.
Realistically though you will be limited by the speed of your disks and
cpu. In those situations where resources are limited you may have
success by breaking up the input data set into smaller pieces.</p>
<p>When using a hashfile, duperemove will only store duplicate hashes in
memory. During normal operation, the hash tree will make up the largest
portion of duperemove’s memory usage. As of Duperemove v0.11, hash
entries are 88 bytes in size. If you know the number of duplicate blocks
in your data set, you can get a rough approximation of memory usage by
multiplying it by the hash entry size.</p>
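<p>For example, with a hypothetical data set containing one million
duplicate blocks:</p>
<pre><code>1000000 blocks * 88 bytes = 88000000 bytes, or roughly 84MB of memory.</code></pre>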
<p>Actual performance numbers are dependent on hardware - up to date
testing information is kept on the duperemove wiki (see below for the
link).</p>
<h2 id="how-large-of-a-hashfile-will-duperemove-create">How large of a
hashfile will duperemove create?</h2>
<p>Hashfiles are essentially sqlite3 database files with several tables,
the largest of which are the files and extents tables. Each extents
table entry is about 72 bytes though that may grow as features are
added. The size of a files table entry depends on the file path but a
good estimate is around 270 bytes per file. The number of extents in a
data set is directly proportional to file fragmentation level.</p>
<p>If you know the total number of extents and files in your data set
then you can calculate the hashfile size as:</p>
<pre><code>Hashfile Size = Num Hashes * 72 + Num Files * 270</code></pre>
<p>Using a real world example of 1TB (8388608 128K blocks) of data over
1000 files:</p>
<pre><code>8388608 * 72 + 270 * 1000 = 604249776 or about 576MB for 1TB spread over 1000 files.</code></pre>
<p><code>Note that none of this takes database overhead into account.</code></p>
<h1 id="notes">NOTES</h1>
<p>Deduplication is currently only supported by the <code>btrfs</code>
and <code>xfs</code> filesystems.</p>
<p>The Duperemove project page can be found on <a
href="https://github.com/markfasheh/duperemove">github</a></p>
<p>There is also a <a
href="https://github.com/markfasheh/duperemove/wiki">wiki</a></p>
<h1 id="see-also">SEE ALSO</h1>
<ul>
<li><code>hashstats(8)</code></li>
<li><code>filesystems(5)</code></li>
<li><code>btrfs(8)</code></li>
<li><code>xfs(8)</code></li>
<li><code>fdupes(1)</code></li>
<li><code>ioctl_fideduperange(2)</code></li>
</ul>
</body>
</html>