.\" Text automatically generated by txt2man
.TH dgord 1 "23 November 2019" "" "PT-Scotch user's manual"
.SH NAME
\fBdgord \fP- compute sparse matrix orderings of graphs in parallel
.SH SYNOPSIS
.nf
.fam C
\fBdgord\fP [\fIoptions\fP] [\fIgfile\fP] [\fIofile\fP] [\fIlfile\fP]
.fam T
.fi
.SH DESCRIPTION
The \fBdgord\fP program computes, in a parallel way, an ordering of a
Scotch source graph representing the pattern of some symmetric
sparse matrix.
.PP
Source graph file \fIgfile\fP is either a centralized graph file, or a set
of files representing fragments of a distributed graph. The resulting
ordering is stored in file \fIofile\fP. Any logging information (such
as that produced by option \fB-v\fP) is sent to file \fIlfile\fP. When file
names are not specified, data is read from standard input and
written to standard output. Standard streams can also be explicitly
represented by a dash '-'.
.PP
When the proper libraries have been included at compile time, \fBdgord\fP
can directly handle compressed graphs, both as input and output. A
stream is treated as compressed whenever its name is postfixed with
a compressed file extension, such as 'brol.grf.bz2' or '-.gz'. The
compression formats that may be supported are bzip2 ('.bz2'),
gzip ('.gz'), and lzma ('.lzma').
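.PP
For instance, assuming gzip support was compiled in, a
gzip-compressed graph file can be read directly (the file names
below are illustrative):
.PP
.nf
.fam C
$ mpirun -np 5 dgord brol.grf.gz brol.ord
.fam T
.fi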
.PP
\fBdgord\fP relies on an implementation of the MPI interface to spread
work across the processing elements. It is therefore not meant to be
run directly, but through some launcher command such as \fBmpirun\fP.
.SH OPTIONS
.TP
.B
\fB-c\fP\fIopt\fP
Choose default ordering strategy according to one or
several \fIoptions\fP among:
.RS
.TP
.B
b
enforce load balance as much as possible.
.TP
.B
q
privilege quality over speed (default).
.TP
.B
s
privilege speed over quality.
.TP
.B
t
enforce safety.
.TP
.B
x
enforce scalability.
.RE
.TP
.B
\fB-h\fP
Display some help.
.TP
.B
\fB-m\fP\fImfile\fP
Save column block mapping data to file \fImfile\fP. Mapping
data specifies, for each vertex, the index of the column
block to which this vertex belongs.
.TP
.B
\fB-o\fP\fIstrat\fP
Use parallel graph ordering strategy \fIstrat\fP (see
PT-Scotch user's manual for more information).
.TP
.B
\fB-r\fP\fIpnum\fP
Set root process for centralized files (default is 0).
.TP
.B
\fB-t\fP\fItfile\fP
Save partitioning tree data to file \fItfile\fP. Partitioning
tree data specifies, for each vertex, the index of the
first vertex of the parent block of the block to which
the vertex belongs. Altogether with the mapping data
provided in file \fImfile\fP, it allows one to rebuild the
separator tree of the nested dissection process.
.TP
.B
\fB-V\fP
Display program version and copyright.
.TP
.B
\fB-v\fP\fIverb\fP
Set verbose mode to \fIverb\fP, a set of one or more of the
following characters:
.RS
.TP
.B
s
strategy information.
.TP
.B
t
timing information.
.RE
.SH EXAMPLES
Run \fBdgord\fP on 5 processing elements to reorder matrix graph brol.grf
and save the resulting ordering to file brol.ord, using the default
sequential graph ordering strategy:
.PP
.nf
.fam C
$ mpirun -np 5 dgord brol.grf brol.ord
.fam T
.fi
Run \fBdgord\fP on 5 processing elements to reorder the distributed matrix
stored on graph fragment files brol5-0.dgr to brol5-4.dgr, and save
the resulting ordering to file brol.ord (see \fBdgscat\fP(1) for an
explanation of the '%p' and '%r' sequences in names of distributed
graph fragments).
.PP
.nf
.fam C
$ mpirun -np 5 dgord brol%p-%r.dgr brol.ord
.fam T
.fi
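Run \fBdgord\fP on 4 processing elements and also save column block
mapping and partitioning tree data, from which the separator tree of
the nested dissection process can be rebuilt (the file names below
are illustrative):
.PP
.nf
.fam C
$ mpirun -np 4 dgord -mbrol.map -tbrol.tre brol.grf brol.ord
.fam T
.fi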
.SH SEE ALSO
\fBdgtst\fP(1), \fBdgscat\fP(1), \fBgmk_hy\fP(1), \fBgord\fP(1).
.PP
PT-Scotch user's manual.
.SH AUTHOR
Francois Pellegrini <francois.pellegrini@labri.fr>