File: OptFiles.tex

\section{Optional output files: the file interface}\label{optionaloutput}

In addition to the output file with suffix \verb|out|, several computation goals write their results into extra files. But these are \emph{not} optional. They are explained together with the pertaining computation goals, and Section~\ref{forced_out} gives an overview.

The main purpose of the optional output files is to provide an interface to Normaliz by files that can more easily be parsed than the main output file.

When one of the options \ttt{Files, -f} or \ttt{AllFiles, -a} is activated, Normaliz
writes additional optional output files whose names are of type
\ttt{<project>.<type>}. Moreover, one can select the optional output files individually on the command line. Most of these files contain matrices in a simple format:
\begin{Verbatim}
<m>
<n>
<x_1>
...
<x_m>
\end{Verbatim}
where each row has \verb|<n>| entries. Exceptions are the files with suffixes \verb|cst|, \verb|inv|, \verb|esp|.

Note that the files are only written if they would contain at least one row.
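
Such a file can be read with a few lines of code. The following minimal sketch in Python assumes integer entries; the function name \verb|read_nmz_matrix| and the file name \verb|project.ext| are purely illustrative:
\begin{Verbatim}
# Read a matrix in the simple Normaliz format: <m>, <n>, then
# m rows with n entries each (integer entries assumed).
def read_nmz_matrix(path):
    with open(path) as f:
        tokens = f.read().split()
    m, n = int(tokens[0]), int(tokens[1])
    entries = [int(t) for t in tokens[2:2 + m * n]]
    # regroup the flat list of entries into m rows of length n
    return [entries[i * n:(i + 1) * n] for i in range(m)]

rows = read_nmz_matrix("project.ext")   # e.g. the extreme rays
print(len(rows), "rows")
\end{Verbatim}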

As pointed out in Section~\ref{outcontrol}, the optional output files for the integer hull are the same as for the original computation, as far as their content has been computed.

\subsection{The homogeneous case}\label{opt_hom_case}

The option \ttt{-f} makes Normaliz write the following files:

\begin{itemize}
	\itemtt[gen] contains the Hilbert basis. If you want to use this file as an input file and reproduce the computation results, then you must make it a matrix of type \verb|cone_and_lattice| (and add the dehomogenization in the inhomogeneous case); a sketch of such an input file follows this list.
	
	\itemtt[cst] contains the constraints defining the cone
	and the lattice in the same format in which they would appear
	in the input: matrices of the constraint types following each
	other. Each matrix is concluded by the type of its constraints.
	Empty matrices are indicated by $0$ as the
	number of rows. Therefore there will always be at least
	$3$ matrices.
	
	If a grading is defined, it will be appended. Therefore this
	file (renamed with suffix \ttt{in}) used as input for
	Normaliz will reproduce the Hilbert basis and all the
	other data computed, at least in principle.
	
	In the case of number field coordinates this file must be transformed from Normaliz~2 input format to Normaliz~3 format by hand before it can be used for input.
	
	\itemtt[inv] contains all the information from the
	file \ttt{out} that is not contained in any of the
	other files.
\end{itemize}
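
For illustration of the \ttt{gen} file: a hypothetical input file built from a \ttt{gen} file with four Hilbert basis elements in a $3$-dimensional ambient space could look as follows (the numbers are invented for this example):
\begin{Verbatim}
amb_space 3
cone_and_lattice 4
0 0 1
1 0 1
0 1 1
1 1 1
\end{Verbatim}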

If \ttt{-a} is activated, then the following files are written
\emph{additionally:}

\begin{itemize}
	
	\itemtt[ext] contains the extreme rays of the cone.
	
	\itemtt[ht1] contains the degree $1$ elements of the
	Hilbert basis if a grading is defined.
	
	\itemtt[egn,esp] These contain the Hilbert basis and
	support hyperplanes in the coordinates with respect to
	a basis of $\EE$. \ttt{esp} contains the grading and the dehomogenization in the
	coordinates of $\EE$. Note that no
	equations for $\CC\cap\EE$ or congruences for $\EE$ are
	necessary.
	
	\itemtt[lat] contains the basis of the lattice $\EE$.
	
	\itemtt[mod] contains the module generators of the integral closure modulo the original monoid.
	
	\itemtt[msp] contains the basis of the maximal subspace.
\end{itemize}

In order to select one or more of these files individually, add an option of type \verb|--<suffix>| to the command line where \verb|<suffix>| can take the values
\begin{Verbatim}
gen, cst, inv, ext, ht1, egn, esp, lat, mod, msp, typ
\end{Verbatim}
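
For instance, a hypothetical call that asks only for the extreme rays and the degree $1$ elements of the project \ttt{project} is
\begin{center}
	\verb+normaliz --ext --ht1 project+
\end{center}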

The type \verb|typ| is not contained in \verb|Files| or \verb|AllFiles| since it can be extremely large. It is of the matrix format described above. It is the product of the matrices
corresponding to \ttt{egn} and the transpose of \ttt{esp}. In other
words, the linear forms representing the support
hyperplanes of the cone $C$ are evaluated on the
Hilbert basis. The resulting matrix, with the
generators corresponding to the rows and the support
hyperplanes corresponding to the columns, is written to
this file.
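
In formulas: if $x_1,\dots,x_m$ denote the rows of \ttt{egn} and $\sigma_1,\dots,\sigma_s$ the rows of \ttt{esp}, then the entry of \ttt{typ} in row $i$ and column $j$ is
\[
t_{ij}=\sigma_j(x_i),\qquad i=1,\dots,m,\ j=1,\dots,s.
\]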

The suffix \ttt{typ} is motivated by the fact that the
matrix in this file depends only on the isomorphism
type of monoid generated by the Hilbert basis (up to
row and column permutations). In the language of~\cite{BG}
it contains the \emph{standard embedding}.

Note: the explicit choice of an optional output file does \emph{not} imply a computation goal. Output files that would contain unknown data are simply not written, without a warning or error message.

\subsection{Modifications in the inhomogeneous case}

The optional output files are a subset of those that can be produced in the homogeneous
case. The main difference is that the generators of the solution module and the
Hilbert basis of the recession monoid appear together in the file \verb|gen|.
They can be distinguished by evaluating the dehomogenization on them (simply the last component with inhomogeneous input), and the
same applies to the vertices of the polyhedron and extreme rays of the
recession cone. The file \verb|cst| contains the constraints defining the
polyhedron and the recession cone in conjunction with the dehomogenization, which is also contained in the \verb|cst| file, following the constraints.

In the file with suffix \verb|ext| the vertices of the polyhedron are listed first, followed by the extreme rays of the recession cone.

With \verb|-a| the files \verb|egn| and \verb|esp| are produced. These files contain the content of \verb|gen| and the support hyperplanes of the homogenized cone in the coordinates of $\EE$, as well as the dehomogenization.

\subsection{Algebraic polyhedra}

Some entries in the \ttt{inv} file are listed as \ttt{integer}, even if they are not integers. But all entries make sense as elements of the algebraic number field.

\subsection{Precomputed data for future input}\label{write_precomp}

One can generate a file with the data needed for an input file with precomputed data of the current cone using the cone property
\begin{itemize}
	\itemtt[WritePreComp]
\end{itemize}
The suffix is \verb|precomp.in|. It contains the data that can (and must) go into a file redefining the present cone (except \verb|hilbert_basis_rec_cone|).
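
If the general command line rule for computation goals applies, \ttt{WritePreComp} can also be requested directly when Normaliz is invoked; a hypothetical call:
\begin{center}
	\verb+normaliz --WritePreComp project+
\end{center}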

\subsection{Overview: Output files forced by computation goals}\label{forced_out}

The following suffixes are used:

\begin{itemize}
	
	\itemtt[tri] contains the triangulation.
	
	\itemtt[tgn] contains the reference generators for the triangulation.
	
	\itemtt[aut] contains the automorphism group.
	
	\itemtt[dec] contains the Stanley decomposition.
	
	\itemtt[fac] contains the (dual) face lattice.
	
	\itemtt[fus] contains the fusion data.
	
	\itemtt[inc] contains the (dual) incidence matrix.
	
	\itemtt[ind] contains the induction matrices.
	
	\itemtt[mrk] contains the Markov basis.
	
	\itemtt[grb] contains the Gröbner basis.
	
	\itemtt[rep] contains the representations of the reducible generators of a monoid by the irreducible ones.
	
	\itemtt[ogn] contains the reference generators for Markov bases, Gröbner bases and representations.
	
	\itemtt[sng] contains the singular locus.
	
	\itemtt[proj.out] contains the projected cone.
	
	\itemtt[inthull.out] contains the integer hull.
	
	\itemtt[symm.out] contains the symmetrized cone.
	
\end{itemize}

\section{Performance}\label{Perf}

\subsection{Parallelization}\label{PerfPar}

The executables of Normaliz have been compiled for parallelization
on shared memory systems with OpenMP. Parallelization reduces the
``real'' time of the computations considerably, even on relatively
small systems. However, one should not underestimate the
administrative overhead involved.
\begin{itemize}
	\item It is not a good idea to use parallelization for very small problems.
	\item On multi-user systems with many processors it may be wise to limit
	the number of threads for Normaliz somewhat below the maximum
	number of cores.
\end{itemize}
By default, Normaliz limits the number of threads to~$8$. One can override this limit by the Normaliz
option \ttt{-x} (see Section~\ref{exec}).

Another way to set an upper limit to the number of threads is via the environment variable \verb|OMP_NUM_THREADS|:
\begin{center}
	\verb+export OMP_NUM_THREADS=<T>+\qquad (Linux/Mac)
\end{center}
or
\begin{center}
	\verb+set OMP_NUM_THREADS=<T>+\qquad (Windows)
\end{center}
where \ttt{<T>} stands for the maximum number of threads
accessible to Normaliz. For example, we often use
\begin{center}
	\verb+export OMP_NUM_THREADS=20+
\end{center}
on a multi-user system with $24$ cores.

Limiting the number of threads to $1$ forces a strictly serial
execution of Normaliz.

The paper~\cite{BIS} contains extensive data on the effect of parallelization. On the whole Normaliz scales very well.
However, the dual algorithm often performs best with mild parallelization, say with $4$ or $6$ threads.

\subsection{Running large computations}\label{Large}

\textbf{Note:}\enspace This section discusses computations in primal mode, and reflects the state of Normaliz discussed in \cite{BIS}. In particular, the computation of lattice points in polytopes and of volumes has been implemented in other algorithms that are often much faster. However, for Hilbert bases and Hilbert series only refinements of the primal mode have been realized.\bigskip

Normaliz can cope with very large examples, but it
is usually difficult to decide a priori whether an example is
very large, but nevertheless doable, or simply impossible.
Therefore some exploration makes sense. The following applies to the primal algorithm.

See~\cite{BIS} for some very large computations. The following
hints reflect the authors' experience with them.

(1) Run Normaliz with the option \ttt{-cs} and pay attention
to the terminal output. The number of extreme rays, but also
the numbers of support hyperplanes of the intermediate cones
are useful data.

(2) In many cases the most critical size parameter for the primal algorithm is the
number of simplicial cones in the triangulation. It makes sense
to determine it as the next step. Even compared with the fastest
potential evaluation (option \ttt{-v} or \verb|TriangulationDetSum|), finding the
triangulation alone takes less time, say by a factor between $3$ and
$10$. Thus it makes sense to run the example with \ttt{-t} in
order to explore the size.
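
For instance, the exploration in steps (1) and (2) could be carried out by calls like
\begin{Verbatim}
normaliz -cs project
normaliz -ct project
\end{Verbatim}
where \ttt{project} again stands for the project name.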

As you can see from~\cite{BIS}, Normaliz has successfully
evaluated triangulations of size $\approx 5\cdot 10^{11}$ in
dimension $24$.

(3) Other critical parameters are the determinants of the
generator matrices of the simplicial cones. To get some feeling
for their sizes, one can restrict the input to a subset (of the
extreme rays computed in (1)) and use the option \ttt{-v} or the computation goal \verb|TriangulationDetSum| if there is no grading.

The output file contains the number of simplicial cones as well
as the sum of the absolute values of the determinants. The
latter is the number of vectors to be processed by Normaliz
in triangulation-based calculations.

The number includes the zero vector for every simplicial cone
in the triangulation. The zero vector does not enter the
Hilbert basis calculation, but cannot be neglected for the
Hilbert series.

Normaliz has mastered calculations with $> 10^{15}$ vectors.

(4) If the triangulation is small, we can add the option
\ttt{-T} in order to actually see the triangulation in a file.
Then the individual determinants become visible.

(5) If a cone is defined by inequalities and/or equations,
consider the dual mode for Hilbert basis calculation, even if
you also want the Hilbert series.

(6) The size of the triangulation and the size of the
determinants are \emph{not} dangerous for memory by themselves
(unless \ttt{-T} or \ttt{-y} are set). Critical magnitudes can
be the number of support hyperplanes, Hilbert basis candidates,
or degree $1$ elements.