% !TeX spellcheck = en_US

\appendix

\section{Mathematical background and terminology}\label{AppBackground}

For a coherent and thorough treatment of the mathematical background we refer the reader to~\cite{BG}.

\subsection{Polyhedra, polytopes and cones}

An \emph{affine halfspace} of $\RR^d$ is a subset given as
$$
H_\lambda^+=\{x: \lambda(x)\ge 0\},
$$
where $\lambda$ is an affine form, i.e., a non-constant map $\lambda:\RR^d\to\RR$, $\lambda(x)=\alpha_1x_1+\dots+\alpha_dx_d+\beta$ with $\alpha_1,\dots,\alpha_d,\beta\in\RR$. If $\beta=0$ and $\lambda$ is therefore linear, then the halfspace is called \emph{linear}. The halfspace is \emph{rational} if $\lambda$ is \emph{rational}, i.e., has rational coordinates. If $\lambda$ is rational, we can assume that it is even \emph{integral}, i.e., has integral coordinates, and, moreover, that these are coprime. Then $\lambda$ is uniquely determined by $H_\lambda^+$. Such integral forms are called \emph{primitive}, and the same terminology applies to vectors.
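
For a concrete illustration, the rational affine form $\lambda(x)=\frac{2}{3}x_1-x_2+\frac{1}{3}$ defines the same halfspace as its integral multiple $3\lambda$:
$$
H_\lambda^+=\Bigl\{x: \frac{2}{3}x_1-x_2+\frac{1}{3}\ge 0\Bigr\}=\{x: 2x_1-3x_2+1\ge 0\},
$$
and $2x_1-3x_2+1$ is primitive since its coefficients $2,-3,1$ are coprime.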

\begin{definition}
	A (rational) \emph{polyhedron} $P$ is the intersection of finitely many (rational) halfspaces. If it is bounded, then it is called a \emph{polytope}. If all the halfspaces are linear, then $P$ is a \emph{cone}.
	
	The \emph{dimension} of $P$ is the dimension of the smallest affine subspace $\aff(P)$ containing $P$.
\end{definition}


A support hyperplane of $P$ is an affine hyperplane $H$ that intersects $P$, but only in such a way that $P$ is contained in one of the two halfspaces determined by $H$. The intersection $H\cap P$ is called a \emph{face} of $P$. It is a polyhedron (polytope, cone) itself. Faces of dimension $0$ are called \emph{vertices}, those of dimension $1$ are called \emph{edges} (in the case of cones \emph{extreme rays}), and those of dimension $\dim(P)-1$ are \emph{facets}.

When we speak of \emph{the} support hyperplanes of $P$, then we mean those intersecting $P$ in a facet. Their halfspaces containing $P$ cut out $P$ from $\aff(P)$. If $\dim(P)=d$, then they are uniquely determined (up to a positive scalar).

The constraints by which Normaliz describes polyhedra are
\begin{arab}
	\item linear equations for $\aff(P)$ and
	\item linear inequalities (simply called support hyperplanes) cutting out $P$ from $\aff(P)$.
\end{arab}
In other words, the constraints are given by a linear system of equations and inequalities, and a polyhedron is nothing else than the solution set of a linear system of inequalities and equations. It can always be represented in the form
$$
Ax\ge b, \qquad A\in\RR^{m\times d}, b\in \RR^m,
$$
if we replace an equation by two inequalities.

\subsection{Cones}

The definition describes a cone by constraints. One can equivalently describe it by generators:

\begin{theorem}[Minkowski-Weyl]
	The following are equivalent for $C\subset\RR^d$:
	\begin{enumerate}
		\item $C$ is a (rational) cone;
		\item there exist finitely many (rational) vectors $x_1,\dots,x_n$ such that
		$$
		C=\{a_1x_1+\dots+a_nx_n:a_1,\dots,a_n\in\RR_+\}.
		$$
	\end{enumerate}
\end{theorem}

By $\RR_+$ we denote the set of nonnegative real numbers; $\QQ_+$ and $\ZZ_+$ are defined in the same way.
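
As an example, which we will reuse several times below, consider the cone $C\subset\RR^2$ generated by $v_1=(1,0)$ and $v_2=(1,2)$. It has both descriptions:
$$
C=\{a_1(1,0)+a_2(1,2): a_1,a_2\in\RR_+\}=\{x\in\RR^2: x_2\ge 0,\ 2x_1-x_2\ge 0\}.
$$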

The conversion between the description by constraints and that by generators is one of the basic tasks of Normaliz. It uses the \emph{Fourier-Motzkin elimination}.

Let $C_0$ be the set of those $x\in C$ for which $-x\in C$ as well. It is the largest vector subspace contained in $C$.
A cone is \emph{pointed} if $C_0=0$. If a rational cone is pointed, then it has uniquely determined \emph{extreme integral generators}. These are the primitive integral vectors spanning the extreme rays. These can also be defined with respect to a sublattice $L$ of $\ZZ^d$, provided $C$ is contained in $\RR L$. If a cone is not pointed, then Normaliz computes the extreme rays of the pointed $C/C_0$ and lifts them to $C$. (Therefore they are only unique modulo $C_0$.)

The \emph{dual cone} $C^*$ is given by
$$
C^*=\{\lambda\in (\RR^d)^*:\lambda(x)\ge0 \text{ for all } x\in C\}.
$$
Under the identification $\RR^d=(\RR^d)^{**}$ one has $C^{**}=C$. Moreover,
$$
\dim C_0+\dim C^*=d.
$$
In particular, $C$ is pointed if and only if $C^*$ is full dimensional, and this is the criterion for pointedness used by Normaliz. Linear forms $\lambda_1,\dots,\lambda_n$ generate $C^*$ if and only if $C$ is the intersection of the halfspaces $H_{\lambda_i}^+$. Therefore the conversion from constraints to generators and its converse are the same task, except for the exchange of $\RR^d$ and its dual space.
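
For the cone $C$ generated by $(1,0)$ and $(1,2)$ above, $C^*$ is generated by the linear forms $\lambda_1(x)=x_2$ and $\lambda_2(x)=2x_1-x_2$, i.e., exactly by the data of the support hyperplanes of $C$. Both $C$ and $C^*$ are pointed and full dimensional, in accordance with $\dim C_0+\dim C^*=2$.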

\subsection{Polyhedra}

In order to transfer the Minkowski-Weyl theorem to polyhedra it is useful to homogenize coordinates by embedding $\RR^d$ as a hyperplane in $\RR^{d+1}$, namely via
$$
\kappa:\RR^d\to\RR^{d+1},\qquad \kappa(x)=(x,1).
$$
If $P$ is a (rational) polyhedron, then the closure of the union of the rays from $0$ through the points of $\kappa(P)$ is a (rational) cone $C(P)$, called the \emph{cone over} $P$. The intersection $C(P)\cap(\RR^d\times\{0\})$ can be identified with the \emph{recession} (or tail) \emph{cone}
$$
\rec(P)=\{x\in\RR^d: y+x\in P\text{ for all } y\in P\}.
$$
It is the cone of unbounded directions in $P$. The recession cone is pointed if and only if $P$ has at least one bounded face, and this is the case if and only if it has a vertex.

The theorem of Minkowski-Weyl can then be generalized as follows:

\begin{theorem}[Motzkin]
	The following are equivalent for a subset $P\neq\emptyset$ of $\RR^d$:
	\begin{enumerate}
		\item $P$ is a (rational) polyhedron;
		\item $P=Q+C$ where $Q$ is a (rational) polytope and $C$ is a (rational) cone.
	\end{enumerate}
	If $P$ has a vertex, then the smallest choice for $Q$ is the convex hull of its vertices, and $C=\rec(P)$ is uniquely determined.
\end{theorem}

The \emph{convex hull} of a subset $X\subset\RR^d$ is
$$
\conv(X)=\{a_1x_1+\dots+a_nx_n: n\ge 1, x_1,\dots,x_n\in X, a_1,\dots,a_n\in\RR_+, a_1+\dots+a_n=1\}.
$$

Clearly, $P$ is a polytope if and only if $\rec(P)=\{0\}$, and by specializing to this case one obtains Minkowski's theorem: a subset $P$ of $\RR^d$ is a polytope if and only if it is the convex hull of a finite set. A \emph{lattice polytope} is distinguished by having integral points as vertices.
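
As an illustration of Motzkin's theorem, the polyhedron
$$
P=\{x\in\RR^2: x_2\ge 0,\ x_1-x_2\ge -1\}
$$
has the single vertex $(-1,0)$, and $\rec(P)$ is the cone generated by $(1,0)$ and $(1,1)$. Accordingly $P=Q+C$ with $Q=\conv(\{(-1,0)\})=\{(-1,0)\}$ and $C=\rec(P)$.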

Normaliz computes the recession cone and the polytope $Q$ if $P$ is defined by constraints. Conversely it finds the constraints if the vertices of $Q$ and the generators of $C$ are specified.

Suppose that $P$ is given by a system
$$
Ax\ge b, \qquad A\in\RR^{m\times d},\ b\in \RR^m,
$$
of linear inequalities (equations are replaced by two inequalities). Then $C(P)$ is defined by the \emph{homogenized system}
$$
Ax-x_{d+1}b\ge 0
$$
whereas $\rec(P)$ is given by the \emph{associated homogeneous system}
$$
Ax\ge 0.
$$
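
For the polyhedron $P=\{x\in\RR^2: x_2\ge 0,\ x_1-x_2\ge -1\}$ considered above one has
$$
A=\begin{pmatrix} 0&1\\ 1&-1\end{pmatrix},\qquad b=\begin{pmatrix}0\\-1\end{pmatrix},
$$
so that $C(P)$ is cut out by $x_2\ge0$ and $x_1-x_2+x_3\ge0$, whereas $\rec(P)$ is given by $x_2\ge0$ and $x_1-x_2\ge0$.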

It is of course possible that $P$ is empty if it is given by constraints since inhomogeneous systems of linear equations and inequalities may be unsolvable. By abuse of language we call the solution set of the associated homogeneous system the recession cone of the system.

Via the concept of dehomogenization, Normaliz allows for a more general approach. The \emph{dehomogenization} is a linear form $\delta$ on $\RR^{d+1}$. For a cone $\widetilde C$ in $\RR^{d+1}$ and a dehomogenization $\delta$, Normaliz computes the polyhedron $P=\{x\in \widetilde C: \delta(x)=1\}$ and the recession cone $C=\{x\in \widetilde C: \delta(x)=0\}$. In particular, this allows other choices of the homogenizing coordinate. (Often one then chooses the first coordinate, denoted $x_0$.)

In the language of projective geometry, $\delta(x)=0$ defines the hyperplane at infinity.

\subsection{Affine monoids}\label{affine_monids}

An \emph{affine monoid} $M$ is a finitely generated submonoid of $\ZZ^d$ for some $d\ge0$. This means: $0\in M$, $M+M\subset M$, and there exist $x_1,\dots,x_n$ such that
$$
M=\{a_1x_1+\dots+a_nx_n: a_1,\dots,a_n\in\ZZ_+\}.
$$
We say that $x_1,\dots,x_n$ is a \emph{system of generators} of $M$. A monoid $M$ is \emph{positive} if $x\in M$ and $-x\in M$ imply $x=0$. An element $x$ in a positive monoid $M$ is called \emph{irreducible} if it has no decomposition $x=y+z$ with $y,z\in M$, $y,z\neq0$. The \emph{rank} of $M$ is the rank of the subgroup $\gp(M)$ of $\ZZ^d$ generated by $M$. (Subgroups of $\ZZ^d$ are also called sublattices.)
For certain aspects of monoid theory it is very useful (or even necessary) to introduce coefficients from a field $K$ (or a more general commutative ring) and consider the monoid algebra $K[M]$.


\begin{theorem}[van der Corput]
	Every positive affine monoid $M$ has a unique minimal system of generators, given by its irreducible elements.
\end{theorem}

We call the minimal system of generators the \emph{Hilbert basis} of $M$. Normaliz computes Hilbert bases of a special type of affine monoid:

\begin{theorem}[Gordan's lemma]
	Let $C\subset\RR^d$ be a (pointed) rational cone and let $L\subset \ZZ^d$ be a sublattice. Then $C\cap L$ is a (positive) affine monoid.
\end{theorem}
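
For example, let $C\subset\RR^2$ again be the cone generated by $(1,0)$ and $(1,2)$, and $L=\ZZ^2$. The monoid $C\cap\ZZ^2$ consists of all $(x_1,x_2)\in\ZZ^2$ with $0\le x_2\le 2x_1$, and
$$
\Hilb(C\cap\ZZ^2)=\{(1,0),(1,1),(1,2)\}:
$$
the element $(1,1)$ is irreducible although it does not lie on an extreme ray of $C$.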

The monoids $M=C\cap L$ of the theorem have the pleasant property that the group of units $M_0$ (i.e., elements whose inverse also belongs to $M$) splits off as a direct summand. Therefore $M/M_0$ is a well-defined affine monoid. If $M$ is not positive, then Normaliz computes a Hilbert basis of $M/M_0$ and lifts it to $M$.

Let $M\subset \ZZ^d$ be an affine monoid, and let $N\supset M$ be an overmonoid (not necessarily affine), for example a sublattice $L$ of $\ZZ^d$ containing $M$.

\begin{definition}
	The \emph{integral closure} (or \emph{saturation}) of $M$ in $N$ is the set
	$$
	\widehat M_N=\{x\in N: kx\in M \text{ for some } k\in \ZZ, k>0\}.
	$$
	If $\widehat M_N=M$, one calls $M$ \emph{integrally closed} in $N$.
	
	The integral closure $\overline M$ of $M$ in $\gp(M)$ is its \emph{normalization}. $M$ is \emph{normal} if $\overline M=M$.
\end{definition}

The integral closure has a geometric description:

\begin{theorem}\label{incl_cone}
	$$
	\widehat M_N =\cone(M)\cap N.
	$$
\end{theorem}

Combining the theorems, we can say that Normaliz computes integral closures of affine monoids in lattices, and the integral closures are themselves affine monoids as well. (More generally, $\widehat M_N$ is affine if $M$ and $N$ are affine.)

In order to specify the intersection $C\cap L$ by constraints we need a system of homogeneous inequalities for $C$. Every sublattice of $\ZZ^d$ can be written as the solution set of a combined system of homogeneous linear diophantine equations and a homogeneous system of congruences (this follows from the elementary divisor theorem). Thus $C\cap L$ is the solution set of a homogeneous linear diophantine system of inequalities, equations and congruences. Conversely, the solution set of every such system is a monoid of type $C\cap L$.
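
For example, the system $x_1\ge0$, $x_2\ge0$, $x_1+x_2\equiv 0\pod 2$ defines the monoid of type $C\cap L$ with $C=\RR_+^2$ and $L=\{x\in\ZZ^2: x_1+x_2\equiv 0\pod 2\}$; its Hilbert basis is $\{(2,0),(1,1),(0,2)\}$.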

In the situation of Theorem~\ref{incl_cone}, if $\gp(N)$ has finite rank as a $\gp(M)$-module, $\widehat M_N$ is even a finitely generated module over $M$. I.e., there exist finitely many elements $y_1,\dots,y_m\in \widehat M_N$ such that $\widehat M_N=\bigcup_{i=1}^m M+y_i$. Normaliz computes a minimal system $y_1,\dots,y_m$ and lists the nonzero $y_i$ as a system of module generators of $\widehat M_N$ modulo $M$. We must introduce coefficients to make this precise: Normaliz computes a minimal system of generators of the $K[M]$-module $K[\widehat M_N]/K[M]$.



\subsection{Lattice points in polyhedra}\label{latt_hedra}

Let $P\subset \RR^d$ be a rational polyhedron and $L\subset \ZZ^d$ be an \emph{affine sublattice}, i.e., a subset $w+L_0$ where $w\in\ZZ^d$ and $L_0\subset \ZZ^d$ is a sublattice. In order to investigate (and compute) $P\cap L$ one again uses homogenization: $P$ is extended to $C(P)$ and $L$ is extended to $\cL=L_0+\ZZ(w,1)$. Then one computes $C(P)\cap \cL$. Via this ``bridge'' one obtains the following inhomogeneous version of Gordan's lemma:

\begin{theorem}
	Let $P$ be a rational polyhedron with vertices and $L=w+L_0$ an affine lattice as above. Set $\rec_L(P)=\rec(P)\cap L_0$. Then there exist $x_1,\dots,x_m\in P\cap L$ such that
	$$
	P\cap L=(x_1+\rec_L(P))\cup\dots\cup(x_m+\rec_L(P)).
	$$
	If the union is irredundant, then $x_1,\dots,x_m$ are uniquely determined.
\end{theorem}

The Hilbert basis of $\rec_L(P)$ is given by $\{x: (x,0)\in \Hilb(C(P)\cap\cL)\}$ and the minimal system of generators can also be read off the Hilbert basis of $C(P)\cap \cL$: it is given by those $x$ for which $(x,1)$ belongs to $\Hilb(C(P)\cap\cL)$. (Normaliz computes the Hilbert basis of $C(P)\cap \cL$ only at ``levels'' $0$ and $1$.)

We call $\rec_L(P)$ the \emph{recession monoid} of $P$ with respect to $L$ (or $L_0$). It is justified to call $P\cap L$ a \emph{module} over $\rec_L(P)$. In the light of the theorem, it is a finitely generated module, and it has a unique minimal system of generators.

After the introduction of coefficients from a field $K$, $\rec_L(P)$ is turned into an affine monoid algebra, and $N=P\cap L$ into a finitely generated torsionfree module over it. As such it has a well-defined \emph{module rank} $\mrank(N)$, which is computed by Normaliz via the following combinatorial description: Let $x_1,\dots,x_m$ be a system of generators of $N$ as above; then $\mrank(N)$ is the cardinality of the set of residue classes of $x_1,\dots,x_m$ modulo $\rec_L(P)$.

Clearly, to model $P\cap L$ we need linear diophantine systems of inequalities, equations and congruences which now will be inhomogeneous in general. Conversely, the set of solutions of such a system is of type $P\cap L$.


\subsection{Hilbert series and multiplicity}\label{AppHilbertSeries}

Normaliz can compute the Hilbert series and the Hilbert
(quasi)polynomial of a graded monoid. A \emph{grading} of a
monoid $M$ is simply a homomorphism $\deg:M\to\ZZ^g$ where
$\ZZ^g$ contains the degrees. The \emph{Hilbert series} of $M$
with respect to the grading is the formal Laurent series
$$
H(t)=\sum_{u\in \ZZ^g} \#\{x\in M: \deg x=u\}t_1^{u_1}\cdots t_g^{u_g}=\sum_{x\in M}t^{\deg x},
$$
provided all sets $\{x\in M: \deg x=u\}$ are finite. At the moment, Normaliz can only handle the case $g=1$, and therefore we restrict ourselves to this case. We assume in the following that $\deg x >0$ for all nonzero $x\in M$ and that there exists an $x\in\gp(M)$ such that $\deg x=1$. (Normaliz always rescales the grading accordingly -- as long as no module $N$ is involved.) In the case of a nonpositive monoid, these conditions must hold for $M/M_0$, and its Hilbert series is considered as the Hilbert series of $M$.

The basic fact about $H(t)$ in the $\ZZ$-graded case is that it
is the Laurent expansion of a rational function at the origin:
\begin{theorem}[Hilbert, Serre; Ehrhart]
	Suppose that $M$ is a normal positive affine monoid. Then
	$$
	H(t)=\frac{R(t)}{(1-t^e)^r},\qquad R(t)\in\ZZ[t], %\label{raw}
	$$
	where $r$ is the rank of $M$ and $e$ is the least common multiple
	of the degrees of the extreme integral generators of $\cone(M)$. As a rational function, $H(t)$ has negative degree.
\end{theorem}

The statement about the rationality of $H(t)$ holds under much more general hypotheses.

Usually one can find denominators for $H(t)$ of much lower
degree than that in the theorem, and Normaliz tries to
give a more economical presentation of $H(t)$ as a quotient of
two polynomials. One should note that it is not clear what the
most natural presentation of $H(t)$ is in general (when $e>1$).
We discuss this problem in~\cite[Section~4]{BIS}. The examples~\ref{rational} and~\ref{magiceven} may serve as
an illustration.

A rational cone $C$ and a grading together define the rational
polytope $Q=C\cap A_1$ where $A_1=\{x:\deg x=1\}$. In this
sense the Hilbert series is nothing but the Ehrhart series of
$Q$.
The following description of the Hilbert function $H(M,k)=\#\{x\in M: \deg x=k\}$ is equivalent to the previous theorem:

\begin{theorem}
	There exists a quasipolynomial $q$ with rational coefficients, degree $\rank M-1$ and period $\pi$ dividing $e$ such that $H(M,k)=q(k)$ for all $k\ge0$.
\end{theorem}

The statement about the quasipolynomial means that there exist
polynomials $q^{(j)}$, $j=0,\dots,\pi-1$, of degree $\rank M-1$ such that
$$
q(k)=q^{(j)}(k),\qquad j\equiv k\pod \pi,
$$
and
$$
q^{(j)}(k)=q^{(j)}_0+q^{(j)}_1k+\dots+q^{(j)}_{r-1}k^{r-1},\qquad r=\rank M,
$$
with coefficients $q^{(j)}_i\in \QQ$. It is not hard to show that in the case of affine monoids all components have the same degree $r-1$ and the same leading coefficient:
$$
q_{r-1}=\frac{\vol(Q)}{(r-1)!},
$$
where $\vol$ is the lattice normalized volume of $Q$ (a lattice simplex of smallest possible volume has volume $1$). The \emph{multiplicity} of $M$, denoted by $e(M)$, is $(r-1)!\,q_{r-1}=\vol(Q)$.
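
As a worked example, take $M=C\cap\ZZ^2$ with $C$ generated by $(1,0)$ and $(1,2)$ as before, graded by $\deg(x_1,x_2)=x_1$. Then $H(M,k)=\#\{x_2: 0\le x_2\le 2k\}=2k+1$, so that
$$
H(t)=\sum_{k\ge 0}(2k+1)t^k=\frac{1+t}{(1-t)^2}
$$
with $r=2$ and $e=1$. The Hilbert polynomial is $q(k)=2k+1$, and $e(M)=(r-1)!\,q_{r-1}=2=\vol(Q)$, where $Q=C\cap\{x: x_1=1\}$ is the segment from $(1,0)$ to $(1,2)$ of normalized length $2$.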

Suppose now that $P$ is a rational polyhedron in $\RR^d$, $L\subset\ZZ^d$ is an affine lattice, and we consider $N=P\cap L$ as a module over $M=\rec_L(P)$. Then we must give up the condition that $\deg$ takes the value $1$ on $\gp(M)$. But the Hilbert series
$$
H_N(t)=\sum_{x\in N} t^{\deg x}
$$
is well-defined, and the qualitative statement above about rationality remains valid. However, in general the quasipolynomial gives the correct value of the Hilbert function only for $k>r$ where $r$ is the degree of the Hilbert series as a rational function. The multiplicity of $N$ is given by
$$
e(N)=\mrank(N)e(M),
$$
where $\mrank(N)$ is the module rank of $N$.

Since $N$ may have generators in negative degrees, Normaliz shifts the degrees into $\ZZ_+$ by subtracting a constant, called the \emph{shift}. (The shift may also be positive.)

Above the multiplicity of $M$ was defined under the assumption that $\gp(M)$ contains an element of degree $1$. In the homogeneous situation where no module $N$ comes into play, Normaliz achieves this extra condition by dividing the grading by the \emph{grading denominator} so that we are effectively in the situation considered above, except in two situations:
(i) the use of the grading denominator is blocked; (ii) when a module $N$ is considered, it can easily happen that the grading restricted to the recession monoid $M$ has a denominator $g>1$, but there occur degrees in $N$ that are not divisible by $g$. Let $\deg'=\deg/g$ and let $e'(M)$ be the multiplicity of $M$ with respect to $\deg'$. Then
$$
e(M)=\frac{e'(M)}{g^{r-1}}.
$$
With this definition, $e(M)$ has the expected property as a dimension normed leading coefficient of the Hilbert quasipolynomial: if $q^{(j)}$ is a \emph{nonzero} component of the quasipolynomial of $M$, then its leading coefficient satisfies
$$
q_{r-1}^{(j)}=\frac{e(M)}{(r-1)!}.
$$
This follows immediately from the substitution $k\mapsto k/g$ in the Hilbert function when we pass from $\deg'$ to $\deg$: $H(M,k)=H'(M,k/g)$ if $g$ divides $k$ and $H(M,k)=0$ otherwise. Also the interpretation as a volume is consistent: $e(M)$ is the lattice normalized volume of the polytope $C\cap\{x:\deg x=1 \}$ (whereas $e'(M)$ is the lattice normalized volume of $C\cap\{x:\deg x=g \}$).

For the interpretation of the multiplicity $e(N)=\mrank(N)e(M)$ one must first split the module $N$ into a direct sum where each summand bundles the elements whose degrees belong to a fixed residue class modulo $g$. Let $N^0,\dots,N^{g-1}$ be these summands. Then $e(N^k)$ is the dimension normed leading coefficient of the Hilbert quasipolynomial of $N^k$ for each $k$, and $e(N)=\sum_k e(N^k)$.

\subsection{The class group}

A normal affine monoid $M$ has a well-defined divisor class group. It is naturally isomorphic to the divisor class group of $K[M]$ where $K$ is a field (or any unique factorization domain); see~\cite[Section~4.F]{BG}, and especially~\cite[Corollary~4.56]{BG}. The class group classifies the divisorial ideals up to isomorphism. It can be computed from the standard embedding that sends an element $x$ of $\gp(M)$ to the vector $\sigma(x)$ where $\sigma$ is the collection of support forms $\sigma_1,\dots,\sigma_s$ of $M$: $\Cl(M)=\ZZ^s/\sigma(\gp(M))$. Finding this quotient amounts to an application of the Smith normal form to the matrix of $\sigma$.
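
For example, for $M=C\cap\ZZ^2$ with $C$ generated by $(1,0)$ and $(1,2)$, the support forms are $\sigma_1(x)=x_2$ and $\sigma_2(x)=2x_1-x_2$. The matrix of $\sigma$ is $\begin{pmatrix}0&1\\2&-1\end{pmatrix}$ with Smith normal form $\mathrm{diag}(1,2)$, so that $\Cl(M)=\ZZ^2/\sigma(\ZZ^2)\cong\ZZ/2\ZZ$; indeed $K[M]\cong K[X^2,XY,Y^2]$, whose class group has order $2$.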

\subsection{Affine monoid algebras and their defining ideals}\label{aff_mon_bin_id}


In addition to \cite{BG}, the reader may want to consult De Loera, Hemmecke and Köppe \cite{DLHK} and Sturmfels \cite{Stu} for the discussion of Markov and Gröbner bases in our context.

As soon as one wants to understand affine monoids by generators and relations, the purely combinatorial treatment becomes cumbersome, since there are no exact sequences without coefficients. (In monoid theory relations are given by congruences; see \cite{BG}.) Therefore we start from a field $K$ and a $K$-subalgebra $A$ of a Laurent polynomial ring $K[X_1^{\pm1},\dots, X_n^{\pm1}]$ that is generated by finitely many monomials $M_1,\dots,M_m$, i.e., power products of indeterminates and their inverses. Often we identify the monomials with their exponent vectors, switching from multiplicative to additive notation and back. The exponent vectors generate an affine monoid, and the corresponding monomials are a $K$-basis of $A$.

To study $A$ by its relations, one takes a polynomial ring $P=K[Y_1,\dots,Y_m]$ and the surjective $K$-algebra homomorphism $\phi:P \to A$ induced by the substitution $Y_i\mapsto M_i$. The kernel $I$ of $\phi$ is the \emph{defining ideal} of $A$ (with respect to the generating system $M_1,\dots,M_m$). It is generated by \emph{binomials} $Y^{v^+} - Y^{v^-}$ where $v^+$ and $v^-$ are vectors with $m$ nonnegative integer entries, and for such a vector $v=(v_1,\dots,v_m)$ we have set $Y^v = Y_1^{v_1}\cdots Y_m^{v_m}$. Since all variables $Y_1,\dots,Y_m$ are nonzerodivisors modulo $I$, in computing $I$ we only need to consider binomials such that, for each $i$, at most one of the entries $v^+_i$ and $v^-_i$ is nonzero. So we restrict our use of the term ``binomial'' by assuming that the two monomials in it have no common factor. This restriction has tremendous computational advantages: a binomial (in our restricted sense) can be represented by the difference vector $v^+-v^-$.

Ideals like $I$ are called \emph{toric ideals}. Computing $I$ means to find a \emph{Markov basis} of $I$, i.e., a binomial system of generators, and for efficiency we may want a minimal Markov basis. It is not unique in general, but at least its cardinality is unique if $M_1,\dots, M_m$ generate a positive affine monoid. The name ``Markov basis'' is motivated by applications in algebraic statistics.  
For certain computations, for example the Hilbert series, one even needs a Gröbner basis of $I$ (unless $A$ is normal).
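
As a small example, let $M_1=X^2$, $M_2=XY$, $M_3=Y^2$, so that $A=K[X^2,XY,Y^2]$. The defining ideal is the toric ideal $I=(Y_1Y_3-Y_2^2)\subset K[Y_1,Y_2,Y_3]$: the single binomial $Y_1Y_3-Y_2^2$, represented by the difference vector $(1,-2,1)$, is a minimal Markov basis.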

Toric ideals are generalized by \emph{lattice ideals} $J$; these are generated by binomials and have the property that no indeterminate is a zerodivisor modulo $J$.

Markov and Gröbner bases of toric and lattice ideals are combinatorial invariants. They are independent of the choice of the field $K$.

The computation of Markov bases (that in all algorithms we know is based on Gröbner bases) is often time consuming. Normaliz uses a reimplementation of the project-and-lift algorithm of Hemmecke and Malkin \cite{HM} for the computation of Markov bases, realized by them in 4ti2 \cite{4ti2}. The project-and-lift algorithm is also explained in \cite{DLHK}.

\subsection{Affine monoid algebras from binomial ideals}\label{binomials}

The typical starting point is an ideal $J\subset
P=K[Y_1,\dots,Y_m]$ generated by binomials
$$
Y^v - Y^w, \qquad v,w \in \ZZ_+^n.
$$
In general the residue class ring $P/J$ is not a monoid ring, let alone an affine monoid ring. To understand in which way $J$ nevertheless defines an affine monoid algebra, we extend $J$ to $JQ$ where $Q= K[Y_1^{\pm1},\dots,Y_m^{\pm1}]$ is the Laurent polynomial extension of $P$.

The term ``lattice ideal'' needs an explanation. In $Q$ the monomial $Y^w$ is a unit, so that
$$
Y^v/Y^w - 1 \in JQ \quad \iff \quad Y^v - Y^w\in JQ\cap P.
$$
The quotients $Y^v/Y^w$ generate a sublattice of the unit group of $Q$, which can be identified with $\ZZ^m$ if one passes from a monomial to its exponent vector. The quotients $Y^v/Y^w$ for which $Y^v/Y^w - 1 \in JQ$ form a sublattice $L$ of the unit group. So the smallest lattice ideal containing $J$ is $JQ\cap P$. Normaliz computes it from the input type \verb|lattice_ideal| via the sublattice $L$ of the unit group of the Laurent polynomial ring.

Let us now assume that $J$ is a lattice ideal. Then $P/J$ is a monoid ring, but not necessarily an affine monoid ring since $\ZZ^m/L$ need not be torsionfree. In order to get an affine monoid ring, we must enlarge $L$ to the preimage of the torsion subgroup of $\ZZ^m/L$ in $\ZZ^m$. In other words, $L$ is replaced by its saturation $\overline L$ in $\ZZ^m$. Computing the smallest toric ideal $T$ defined by our binomials means to find $\overline L$ and the ideal in $P$ generated by all binomials $Y^v-Y^w$ for which $v-w\in \overline L$.
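
As an example, let $J=(Y_1^2-Y_2^2)\subset P=K[Y_1,Y_2]$. One checks that $J$ is a lattice ideal with $L=\ZZ(2,-2)$. The quotient $\ZZ^2/L\cong\ZZ\oplus\ZZ/2\ZZ$ has torsion, and the saturation is $\overline L=\ZZ(1,-1)$. The corresponding toric ideal is $T=(Y_1-Y_2)$, and $P/T\cong K[Y_1]$ is indeed an affine monoid ring.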

\subsection{Local properties of affine monoid algebras}
\def\cP{{\mathcal P}}
Let $R$ be a commutative Noetherian ring. By a ``local property'' $\cP$  we mean a property that is described in terms of the localizations $R_P$ running over all prime ideals $P$ of $R$. For an extensive discussion of the following we refer the reader to \cite[Chap. 4]{BG} (including the exercises).

Certain local properties of the affine monoid algebra $R=K[N]$ of an affine monoid $N$ depend only on the underlying monoid and can be tested by computations applied to the latter. The transition from general prime ideals $P$ to their combinatorial counterparts proceeds in two steps. The first is the observation that the ideal $P^*$ generated by all monomials in $P$ is itself a prime ideal. For suitable $\cP$, $R_P$ satisfies $\cP$ if and only if $R_{P^*}$ has $\cP$. The second step is the passage from $R_{P^*}$ to the ring $R[S^{-1}]$ where $S$ is the set of monoid elements outside $P$ (or $P^*$). The crucial point now is that $R[S^{-1}]$ is again a monoid algebra and the underlying monoid is (in additive notation) $N[-S]$. The set $S$ is the intersection of $N$ with a face of the cone generated by $N$. To sum up: determining the monomial prime ideals in $R$ amounts to the computation of the face lattice, and certain properties $\cP$ can be tested on the ``localizations'' of $N$.

Suitable properties are regularity and normality. For regularity we have realized the computation of the singular locus, i.e., the set of prime ideals for which the corresponding localization is not regular. See Section \ref{SingularLocus}.

\newpage

\section{Annotated console output}\label{Console}

Somewhat outdated, but not much has changed in the shown computations since~3.2.0.

\subsection{Primal mode}

With
\begin{Verbatim}
./normaliz -ch example/A443
\end{Verbatim}
we get the following terminal output.

\begin{Verbatim}
                                                    \.....|
                    Normaliz 3.2.0                   \....|
                                                      \...|
     (C) The Normaliz Team, University of Osnabrueck   \..|
                     January  2017                      \.|
                                                         \|
************************************************************
Command line: -ch example/A443 
Compute: HilbertBasis HilbertSeries 
************************************************************
starting primal algorithm with full triangulation ...
Roughness 1
Generators sorted by degree and lexicographically
Generators per degree:
1: 48 
\end{Verbatim}
Self-explanatory so far (see Section~\ref{bottom_dec} for the definition of roughness). Now the generators are inserted.
\begin{Verbatim}
Start simplex 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 19 22 25 26 27 28 31 34 
37 38 39 40 43 46 
\end{Verbatim}
Normaliz starts by searching for linearly independent generators with indices as small as possible. They span the start simplex in the triangulation. The remaining generators are inserted successively. (If a generator does not increase the cone spanned by the previous ones, it is not listed, but this does not happen for \verb|A443|.)
\begin{Verbatim}
gen=17, 39 hyp, 4 simpl
\end{Verbatim}
We have now reached a cone with $39$ support hyperplanes and the triangulation has $4$ simplices so far. We omit some generators until something interesting happens:
\begin{Verbatim}
gen=35, 667 hyp, 85 pyr, 13977 simpl
\end{Verbatim}
In view of the number of simplices in the triangulation and the number of support hyperplanes, Normaliz has decided to build pyramids and to store them for later triangulation.
\begin{Verbatim}
gen=36, 723 hyp, 234 pyr, 14025 simpl
...
gen=48, 4948 hyp, 3541 pyr, 14856 simpl
\end{Verbatim}
All generators have been processed now. Fortunately our cone is pointed:
\begin{Verbatim}
Pointed since graded
Select extreme rays via comparison ... done.
\end{Verbatim}
Normaliz knows two methods for finding the extreme rays. Instead of ``comparison'' you may see ``rank''.
Now the stored pyramids must be triangulated. They may produce not only simplices, but also pyramids of higher level, and indeed they do so:
\begin{Verbatim}
**************************************************
level 0 pyramids remaining: 3541
**************************************************
**************************************************
all pyramids on level 0 done!
**************************************************
level 1 pyramids remaining: 5935
**************************************************
**************************************************
all pyramids on level 1 done!
**************************************************
level 2 pyramids remaining: 1567
**************************************************
1180 pyramids remaining on level 2, evaluating 2503294 simplices
\end{Verbatim}
At this point the preset size of the evaluation buffer for simplices has been exceeded. Normaliz stops the processing of pyramids, and empties the buffer by evaluating the simplices.
\begin{Verbatim}
||||||||||||||||||||||||||||||||||||||||||||||||||
2503294 simplices, 0 HB candidates accumulated.
**************************************************
all pyramids on level 2 done!
**************************************************
level 3 pyramids remaining: 100
**************************************************
**************************************************
all pyramids on level 3 done!
\end{Verbatim}
This is a small computation, and the computation of pyramids goes level by level without the necessity to return to a lower level. But in larger examples the buffer for level $n+1$ may be filled before level $n$ is finished. Then it becomes necessary to go back. Some simplices remaining in the buffer are now evaluated:
\begin{Verbatim}
evaluating 150978 simplices
||||||||||||||||||||||||||||||||||||||||||||||||||
2654272 simplices, 0 HB candidates accumulated.
Adding 1 denominator classes... done.
\end{Verbatim}
Since our generators form the Hilbert basis, we do not collect any further candidates. If all generators are in degree $1$, we have only one denominator class in the Hilbert series, but otherwise there may be many. The collection of the Hilbert series in denominator classes reduces the computations of common denominators to a minimum.
\begin{Verbatim}
Total number of pyramids = 14137, among them simplicial 2994
\end{Verbatim}
Some statistics of the pyramid decomposition.
\begin{Verbatim}
------------------------------------------------------------
transforming data... done.
\end{Verbatim}
Our computation is finished.

A typical pair of lines that you will see for other examples is
\begin{Verbatim}
auto-reduce 539511 candidates, degrees <= 1 3 7 
reducing 30 candidates by 73521 reducers
\end{Verbatim}
It tells you that Normaliz has found a list of $539511$ new candidates for the Hilbert basis, and this list is reduced against itself (auto-reduce). Then the $30$ old candidates are reduced against the $73521$ survivors of the auto-reduction.

\subsection{Dual mode}

Now we give an example of a computation in dual mode. It is started by the command
\begin{Verbatim}
./normaliz -cid example/5x5
\end{Verbatim}
The option \verb|i| is used to suppress the \verb|HSOP| in the input file. The console output:

\begin{Verbatim}
                                                    \.....|
                    Normaliz 3.2.0                   \....|
                                                      \...|
     (C) The Normaliz Team, University of Osnabrueck   \..|
                     January  2017                      \.|
                                                         \|
************************************************************
Command line: -cid example/5x5 
Compute: DualMode 
No inequalities specified in constraint mode, using non-negative orthant.
************************************************************
\end{Verbatim}
Indeed, we have used only equations as the input.
\begin{Verbatim}
************************************************************
computing Hilbert basis ...
==================================================
cut with halfspace 1 ...
Final sizes: Pos 1 Neg 1 Neutral 0
\end{Verbatim}
The cone is cut out from the space of solutions of the system of equations (in this case) by successive intersections with halfspaces defined by the inequalities. After such an intersection we have the positive half space, the ``neutral'' hyperplane and the negative half space. The final sizes given are the numbers of Hilbert basis elements strictly in the positive half space, strictly in the negative half space, and in the hyperplane. This pattern is repeated until all hyperplanes have been used.
\begin{Verbatim}
==================================================
cut with halfspace 2 ...
Final sizes: Pos 1 Neg 1 Neutral 1
\end{Verbatim}
We leave out some hyperplanes \dots
\begin{Verbatim}
==================================================
cut with halfspace 20 ...
auto-reduce 1159 candidates, degrees <= 13 27 
Final sizes: Pos 138 Neg 239 Neutral 1592
==================================================
cut with halfspace 21 ...
Positive: 1027  Negative: 367
..................................................
Final sizes: Pos 1094 Neg 369 Neutral 1019
\end{Verbatim}
Sometimes reduction takes some time, and then Normaliz may issue a message on ``auto-reduction'' organized by degree (chosen for the algorithm, not defined by the given grading). The line of dots is printed if the computation of new Hilbert basis candidates takes time, and Normaliz wants to show you that it is not sleeping. Normaliz shows you the number of positive and negative partners that must be paired to produce offspring.
\begin{Verbatim}
==================================================
cut with halfspace 25 ...
Positive: 1856  Negative: 653
..................................................
auto-reduce 1899 candidates, degrees <= 19 39 
Final sizes: Pos 1976 Neg 688 Neutral 2852
\end{Verbatim}
All hyperplanes have been taken care of.
\begin{Verbatim}
Find extreme rays
Find relevant support hyperplanes
\end{Verbatim}
Well, in connection with the equations, some hyperplanes become superfluous. In the output file Normaliz will list a minimal set of support hyperplanes that together with the equations define the cone.
\begin{Verbatim}
Hilbert basis 4828
\end{Verbatim}
The number of Hilbert basis elements computed is the sum of the last positive and neutral numbers.
\begin{Verbatim}
Find degree 1 elements
\end{Verbatim}
The input file contains a grading.
\begin{Verbatim}
transforming data... done.
\end{Verbatim}
Our example is finished.

The computation of the new Hilbert basis after the intersection with the new hyperplane proceeds in rounds, and there can be many rounds \dots (not in the above example). Then you can see terminal output like
\begin{Verbatim}
Round 100
Round 200
Round 300
Round 400
Round 500
\end{Verbatim}

\newpage

\section{Normaliz~2 input syntax}\label{OldSyntax}

A Normaliz~2 input file contains a sequence of matrices. Comments or options are not allowed in it. A matrix has the format
\begin{Verbatim}
<m>
<n>
<x_1>
...
<x_m>
<type>
\end{Verbatim}
where \verb|<m>| denotes the number of rows, \verb|<n>| is the number of columns and \verb|<x_1>|,\dots,\verb|<x_m>| are the rows with \verb|<n>| entries each. All matrix types of Normaliz~3 are allowed (with Normaliz~3), also \verb|grading| and \verb|dehomogenization|. These vectors must be encoded as matrices with $1$ row.

Note that algebraic polyhedra cannot be defined by input files in this format.

The optional output files with suffix \verb|cst| are still in this format. Just create one and inspect it.

\newpage

\section{libnormaliz}\label{libnorm}

\begin{small}

The kernel of Normaliz is the C++ class library \verb|libnormaliz|. It implements all the classes that are necessary for the computations. The central class is \verb|Cone|. It realizes the communication with the calling program and starts the computations most of which are implemented in other classes. In the following we describe the class \verb|Cone|; other classes of \verb|libnormaliz| may follow in the future.

Of course, Normaliz itself is the prime example for the use of \verb|libnormaliz|, but it is rather complicated because of the input and output it must handle. Therefore we have added a simple example program at the end of this introduction.

\verb|libnormaliz| defines its own name space. In the following we assume that
\begin{Verbatim}
using namespace std;
using namespace libnormaliz;
\end{Verbatim}
have been declared. It is clear that opening these name spaces is dangerous. In this documentation we only do it to avoid constant repetition of \verb|std::| and \verb|libnormaliz::|

\subsection{The master header file}

\begin{Verbatim}
#include "libnormaliz/libnormaliz.h"
\end{Verbatim}
reads all installed header files of libnormaliz.

\subsection{Optional packages and configuration}

The file
\begin{Verbatim}
#include "libnormaliz/lnmz_config.h"
\end{Verbatim}
is created and installed when Normaliz is built by the autotools scripts. It (un)defines the preprocessor variables that indicate the optional packages used in the build process. These are
\begin{Verbatim}
ENFNORMALIZ   NMZ_NAUTY   NMZ_FLINT   NMZ_COCOA 
\end{Verbatim}
with obvious interpretations (\verb|ENFNORMALIZ| stands for e-antic).

\subsection{Integer type as a template parameter}

A cone can be constructed for two integer types, \verb|long long| and \verb|mpz_class|. (Also \verb|long| is possible, but we disregard it in the following, since one should make sure that the integer type has at least~$64$~bits.) It is reasonable to choose \verb|mpz_class| since the main computations will then be tried with \verb|long long| and restarted with \verb|mpz_class| if \verb|long long| cannot store the results. This internal change of integer type is not possible if the cone is constructed for \verb|long long|. (Nevertheless, the linear algebra routines can use \verb|mpz_class| locally if intermediate results exceed \verb|long long|; have a look into \verb|matrix.cpp|.)

Internally the template parameter is called \verb|Integer|. In the following we assume that the integer type has been fixed as follows:
\begin{Verbatim}
typedef mpz_class Integer;
\end{Verbatim}

The internal passage from \verb|mpz_class| to \verb|long long| can be suppressed by
\begin{Verbatim}
MyCone.deactivateChangeOfPrecision();
\end{Verbatim}
where we assume that \verb|MyCone| has been constructed as described in the next section.

\subsubsection{Alternative integer types}

It is possible to use libnormaliz with other integer types than \verb|mpz_class|, \verb|long long|, \verb|long| or \verb|renf_elem_class| but we have tested only these types.

If you want to use other types, you probably have to implement some conversion functions which you can find in \verb|integer.h| and \verb|integer.cpp|. Namely the functions
\begin{Verbatim}
bool libnormaliz::try_convert(TypeA, TypeB); 
// converts TypeB to TypeA, returns false if not possible
\end{Verbatim}
where one type is your type and the other is \verb|long|, \verb|long long|, \verb|mpz_class| and \verb|nmz_float|.
Additionally, if your type uses infinite precision (for example, it is some wrapper for GMP), you must also implement
\begin{Verbatim}
template<> inline bool libnormaliz::using_GMP<YourType>() { return true; }
\end{Verbatim}

\subsubsection{Decimal fractions and floating point numbers}

libnormaliz has a type \verb|nmz_float| (presently set to \verb|double|) that allows the construction of cones from floating point data. These are are first converted into \verb|mpq_class| by using the GMP constructor of \verb|mpq_class|, and then denominators are cleared. (The input routine of Normaliz goes another way by reading the floating point input as decimal fractions.)

\subsection{Construction of a cone}\label{ConstCone}

The construction requires the specification of input data consisting of one or more matrices and the input types they represent. In addition there is a constructor that takes a Normaliz input file.

The term ``matrix'' stands for
\begin{Verbatim}
vector<vector<number> >
\end{Verbatim}
where predefined choices of number are \verb|long long|, \verb|mpz_class|, \verb|mpq_class| and \verb|nmz_float| (the latter representing \verb|double|).

The available input types (from \verb|input_type.h|) are defined as follows:
\begin{Verbatim}
namespace Type {
enum InputType {
    //
// homogeneous generators
//
polytope,
rees_algebra,
subspace,
cone,
cone_and_lattice,
lattice,
saturation,
rational_lattice,
monoid,
//
// inhomogeneous generators
//
vertices,
offset,
rational_offset,
//
// homogeneous constraints
//
inequalities,
signs,
equations,
congruences,
excluded_faces,
//
// inhomogeneous constraints
//
inhom_equations,
inhom_inequalities,
strict_inequalities,
strict_signs,
inhom_congruences,
inhom_excluded_faces,
//
// linear forms
//
grading,
dehomogenization,
gb_weight,
//
// lattice ideals and friends
//
lattice_ideal,
toric_ideal,
normal_toric_ideal,
//
// special
//
open_facets,
projection_coordinates,
fusion_type,
fusion_duality,
candidate_subring,
fusion_type_for_partition,
fusion_ring_map,
fusion_image_type,
fusion_image_ring,
fusion_image_duality,
//
// precomputed data
//
support_hyperplanes,
extreme_rays,
maximal_subspace,
generated_lattice,
hilbert_basis_rec_cone,
//
// deprecated
//
integral_closure,
normalization,
polyhedron,
...
};
} //end namespace Type
\end{Verbatim}
The input types are explained in Section~\ref{input}. (There are further input types used for debugging and tests.)

In certain environments it is not possible to use the enumeration. Therefore we provide a function that converts a string into the corresponding input type:
\begin{Verbatim}
Type::InputType to_type(const string& type_string)
\end{Verbatim}

The types \verb|grading|, \verb|dehomogenization|, \verb|signs|, \verb|strict_signs|, \verb|offset|and \verb|open_facets| must be encoded as matrices with a single row. We come back to this point below.

The simplest constructor has the syntax
\begin{Verbatim}
Cone<Integer>::Cone(InputType input_type, const vector< vector<Integer> >& Input)
\end{Verbatim}
and can be used as in the following example:
\begin{Verbatim}
vector<vector <Integer> > Data = ...
Type::InputType type = cone;
Cone<Integer> MyCone = Cone<Integer>(type, Data);
\end{Verbatim}
For two and three pairs of type and matrix there are the constructors
\begin{Verbatim}
Cone<Integer>::Cone(InputType type1, const vector< vector<Integer> >& Input1,
InputType type2, const vector< vector<Integer> >& Input2)

Cone<Integer>::Cone(InputType type1, const vector< vector<Integer> >& Input1,
InputType type2, const vector< vector<Integer> >& Input2,
InputType type3, const vector< vector<Integer> >& Input3)
\end{Verbatim}

If you have to combine more than three matrices, you can define a
\begin{Verbatim}
map <InputType, vector< vector<Integer> > >
\end{Verbatim}
and use the constructor with syntax
\begin{Verbatim}
Cone<Integer>::Cone(const map< InputType, 
vector< vector<Integer> > >& multi_input_data)
\end{Verbatim}

The four constructors also exist in a variant that uses the \verb|libnormaliz| type \verb|Matrix<Integer>| instead of \verb|vector< vector<Integer> >| (see \verb|cone.h|).

For the input of rational numbers we have all constructors also in variants that use \verb|mpq_class| for the input matrix, for example
\begin{Verbatim}
Cone<Integer>::Cone(InputType input_type, const vector< vector<mpq_class> >& Input)
\end{Verbatim}
etc.

Similarly, for the input of decimal fractions and floating point numbers we have all constructors also in variants that use \verb|nmz_float| for the input matrix, for example
\begin{Verbatim}
Cone<Integer>::Cone(InputType input_type, const vector< vector<nmz_float> >& Input)
\end{Verbatim}
etc.

Note that \verb|rational_lattice| and \verb|rational_offset| can only be used if the input data are given in class \verb|mpq_class| or \verb|nmz_float|.

For convenience we provide the function
\begin{Verbatim}
vector<vector<T> > to_matrix<Integer>(vector<T> v)
\end{Verbatim}
in \verb|matrix.h|. It returns a matrix whose first row is \verb|v|. A typical example:
\begin{Verbatim}
size_t dim = ...
vector<vector <Integer> > Data = ...
Type::InputType type = cone;
vector<Integer> total_degree(dim,1);
Type::InputType grad = grading;
Cone<Integer> MyCone = Cone<Integer>(type, Data,grad,to_matrix(total_degree));
\end{Verbatim}

There is a default constructor for cones,
\begin{Verbatim}
Cone<Integer>::Cone()
\end{Verbatim}

\subsubsection{Construction from an input file}

One can construct a cone also from a Normaliz input file by
\begin{Verbatim}
Cone<Integer>::Cone(const string project)
\end{Verbatim}
The constructor reads the file \verb|<project>.in|. The boolean parameters defined in Section \ref{bool_param} are transferred to the cone, as well as the polynomial parameters in Section \ref{poly_param} and the numerical parameters of Section \ref{num_params}.

\subsection{Setting and changing additional data}

The cone constructors allow only matrices as parameters. Input data of other types must be set after the call of a constructor. Moreover, it might be useful to reset (or introduce) the grading or the projection coordinates.

\subsubsection{Boolean parameters}\label{bool_param}

The construction of the cone has three phases. The first phase mainly standardizes the types in the input matrices and the second does syntax checks. These two phases are run by the constructor itself. The third phase sorts the input matrices into constraints and generators, roughly speaking, and does preparatory computations. It is started by the first compute command applied to the cone. This allows us to communicate boolean parameters (and potentially others) to libNormaliz that influence the third phase (and potentially subsequent steps). Their use can almost completely be avoided in the library interface. They are mainly meant as convenient and for compatibility between the input file interface and the library interface.
\begin{Verbatim}
void Cone<Integer>::setNonnegative(bool onoff = true)
void Cone<Integer>::setTotalDegree(bool onoff = true)
void Cone<Integer>::setNoPosOrthDef(bool onoff = true)
void Cone<Integer>::setConvertEquations(bool onoff = true)
void Cone<Integer>:::setNoCoordTransf(bool onoff = true)
void Cone<Integer>::setListPolynomials(bool onoff = true)
bool Cone<Integer>::setVerbose(bool v = true)
\end{Verbatim}

Their default value is \verb*|false|. \verb*|setVerbose| returns the previous value. The (new) \verb*|setVerbose| amkes the function \verb*|suppressNextConstructorVerbose()| below superfluous, but the latter has been kept for backward compatibility.

The meaning of these boolean parameters has been explained for their use in input files. The parameters can de set en bloc:
\begin{Verbatim}
void Cone<Integer>::setBoolParams(const map<BoolParam::Param, bool>& bool_params)
\end{Verbatim}
where \verb*|BoolParam::Param| is an enumeration defined as follows:
\begin{Verbatim}
namespace BoolParam {
enum Param {
verbose,
nonnegative,
total_degree,
convert_equations,
no_coord_transf,
list_polynomials,
no_pos_orth_def,
not_a_bool_param
};
}  // end namespace BoolParam
\end{Verbatim}

\subsubsection{Polynomials}\label{poly_param}

The polynomial needed for integrals and weighted Ehrhart series must be passed to the cone after construction:
\begin{Verbatim}
void Cone<Integer>::setPolynomial(string poly)
\end{Verbatim}

Like the grading it can be changed later on. Then the results depending on the previous polynomial will be deleted.

Similarly polynomial constraints can be set:
\begin{Verbatim}
void Cone<Integer>::setPolynomialEquations(const vector<string>& poly_equs)
void Cone<Integer>::setPolynomialInequalities(const vector<string>& poly_inequs)
\end{Verbatim}

En bloc setting is also possible:
\begin{Verbatim}
void Cone<Integer>::setPolyParams(const map<PolyParam::Param, vector<string>>& poly_params)
\end{Verbatim}
Have a look at \verb|cone.cpp|. The single \verb|polynomial| must be disguised as the only member of a vector.

The enumeration \verb*|PolyParam::Param| is defined by
\begin{Verbatim}
namespace PolyParam {
enum Param {
polynomial,
polynomial_equations,
polynomial_inequalities,
not_a_poly_param
};
}  // end namespace PolyParam
\end{Verbatim}

\subsubsection{Numerical parameters}\label{num_params}

Some computations can be controlled by numerical parameters. They can be given to the cone en bloc or individually.

To set them individually, you can use the following functions:
\begin{Verbatim}
void Cone<Integer>::setExpansionDegree(long degree)
void Cone<Integer>::setNrCoeffQuasiPol(long nr_coeff)
void Cone<Integer>::setFaceCodimBound(long bound)
void Cone<Integer>::setAutomCodimBoundVectors(long bound)
void Cone<Integer>::setDecimalDigits(long digits)
void Cone<Integer>::setBlocksizeHollowTri(long block_size)
void Cone<Integer>::setGBDegreeBound(const long degree_bound)
void Cone<Integer>::setGBMinDegree(const long min_degree)
void Cone<Integer>::setModularGraing(long mod_gr)
void Cone<Integer>::setChosenFusionRing(long fus_r)
\end{Verbatim}
These functions transfer their arguments to variables defined internally in libnormaliz. The common default value of these variables is $-1$.

To set them en bloc you can use
\begin{Verbatim}
	void Cone<Integer>::setNumericalParams(const map <NumParam::Param, long >& num_params)
\end{Verbatim}
where \verb|NumParam::Param| refers to
\begin{Verbatim}
namespace NumParam {
enum Param {
expansion_degree,
nr_coeff_quasipol,
face_codim_bound,
autom_codim_bound_vectors,
block_size_hollow_tri,
decimal_digits,
modular_grading,
chosen_fusion_ring,
not_a_num_param
};
} //end namespace NumParam
\end{Verbatim}
(see \verb|libnormaliz/input_type.h|).

\subsubsection{Grading}

If your computation needs a grading, you should include it into the construction of the cone. However, especially in interactive use via PyNormaliz or other interfaces, it can be useful to add the grading if it was forgotten or to change it later on. The following function allows this:

\begin{Verbatim}
void Cone<Integer>::resetGrading(const vector<Integer>& grading)
\end{Verbatim}

Note that it deletes all previously computed results that depend on the grading.

\subsubsection{Projection coordinates}

Similarly to  \verb|resetGrading| we have
\begin{Verbatim}
void Cone<Integer>::resetProjectionCoords(const vector<Integer>& lf)
\end{Verbatim}
The entries of \verb|lf| must be $0$ or $1$.

\subsection{Modifying a cone after construction}\label{Modify}

Within some boundaries it is possible to change an already constructed cone (and lattice). To this end one can use the functions
\begin{Verbatim}
void Cone<Integer>::modifyCone(const map<InputType, vector<vector<Integer> > >& 
                                                           multi_add_input_const)
void Cone<Integer>::modifyCone(InputType input_type, const vector< vector<Integer> >& Input)
\end{Verbatim}
Similar to the cone constructor, it has variations for \verb|vector< vector<mpq_class> >| and\\ \verb|vector< vector<nmz_float> >| for cones that are not of \verb|renf_elem_class|. There are also versions with \verb|Matrix<...>| .

The following input types are allowed (to be prefixed by \verb|Type::|)
\begin{center}
\texttt{
	\begin{tabular}{llll}
		cone& vertices & subspace\\
		equations & inhom\_equations&inequalities&inhom\_inequalities
\end{tabular}	}
\end{center}
Modifying the current cone $C $ by \emph{additional} generators (first row) means to extend $C$. Modifying it by \emph{additional} constraints (second row) restricts $C$.

It is allowed to issue several \verb|modifyCone(...)| at any time, but there are some restrictions:
\begin{arab}
\item The inhomogeneous types are only allowed if the cone was constructed with inhomogeneous input.

\item Normaliz cannot fall back behind the coordinate transformation that has been reached at the time of additional input. This implies: (i) Additional generators must satisfy the equations valid at the time of addition. (They are automatically adapted to the congruences if there should be any.) (ii) Additional linear inequalities must vanish on the maximal subspace at the time of addition.

\item \verb|modifyCone| cannot be used if the cone was created with \verb|rational_lattice| or \verb|rational_offset|.

\item Between two \verb|compute(...)| several \verb|modifyCone| are allowed. But they must be of the same category, either the types in the first line above (generators) or those in the second (constraints).
\end{arab}

The last restriction are necessary to avoid ambiguities. If the cone constructor is used with generators and constraints simultaneously, then the \emph{intersection} of the cones defined by the constraints on one side and the generators on the other side is computed. (The same applies to lattice data.) In contrast, the later addition of generators always leads to an \emph{extension} of the existing cone. And: if both constraints and generators are added between two \verb|compute|, should we first extend and then restrict, or the other way round? The two operations do not commute.

For flexibility both support hyperplanes and extrene rays are computed before the modification.

It may happen that a previously computed (or provided) grading gives a negative value on an added generator. In this case the grading is reset. In the inhomogeneous case, if the dehomogenization should give a negative value, a \verb|BadInputException| is thrown.

\subsection{Computations in a cone}

Before starting a computation in a (previously constructed) cone, one must decide what should be computed and in which way it should be computed. The computation goals and algorithmic variants (see Section~\ref{Goals}) are defined as follows (\verb|cone_property.h|):
\begin{Verbatim}
namespace ConeProperty {
enum Enum {
// matrix valued
START_ENUM_RANGE(FIRST_MATRIX),
ExtremeRays,
VerticesOfPolyhedron,
SupportHyperplanes,
HilbertBasis,
ModuleGenerators,
Deg1Elements,
LatticePoints,
ModuleGeneratorsOverOriginalMonoid,
ExcludedFaces,
OriginalMonoidGenerators,
MaximalSubspace,
Equations,
Congruences,
GroebnerBasis,
MarkovBasis,
Representations,
SimpleFusionRings,
NonsimpleFusionRings,
FusionRings,
END_ENUM_RANGE(LAST_MATRIX),

START_ENUM_RANGE(FIRST_MATRIX_FLOAT),
ExtremeRaysFloat,
SuppHypsFloat,
VerticesFloat,
END_ENUM_RANGE(LAST_MATRIX_FLOAT),

// vector valued
START_ENUM_RANGE(FIRST_VECTOR),
Grading,
Dehomogenization,
WitnessNotIntegrallyClosed,
GeneratorOfInterior,
CoveringFace,
AxesScaling,
SingleLatticePoint,
SingleFusionRing,
END_ENUM_RANGE(LAST_VECTOR),

// integer valued
START_ENUM_RANGE(FIRST_INTEGER),
TriangulationDetSum,
ReesPrimaryMultiplicity,
GradingDenom,
UnitGroupIndex,
InternalIndex,
END_ENUM_RANGE(LAST_INTEGER),

START_ENUM_RANGE(FIRST_GMP_INTEGER),
ExternalIndex = FIRST_GMP_INTEGER,
END_ENUM_RANGE(LAST_GMP_INTEGER),

// rational valued
START_ENUM_RANGE(FIRST_RATIONAL),
Multiplicity,
Volume,
Integral,
VirtualMultiplicity,
END_ENUM_RANGE(LAST_RATIONAL),

// field valued
START_ENUM_RANGE(FIRST_FIELD_ELEM),
RenfVolume,
END_ENUM_RANGE(LAST_FIELD_ELEM),

// floating point valued
START_ENUM_RANGE(FIRST_FLOAT),
EuclideanVolume,
EuclideanIntegral,
END_ENUM_RANGE(LAST_FLOAT),

// dimensions and cardinalities
START_ENUM_RANGE(FIRST_MACHINE_INTEGER),
TriangulationSize,
NumberLatticePoints,
RecessionRank,
AffineDim,
ModuleRank,
Rank,
EmbeddingDim,
CodimSingularLocus,
END_ENUM_RANGE(LAST_MACHINE_INTEGER),

// boolean valued
START_ENUM_RANGE(FIRST_BOOLEAN),
IsPointed,
IsDeg1ExtremeRays,
IsDeg1HilbertBasis,
IsIntegrallyClosed,
IsSerreR1,
IsLatticeIdealToric,
IsReesPrimary,
IsInhomogeneous,
IsGorenstein,
IsEmptySemiOpen,
//
// checking properties of already computed data
// (cannot be used as a computation goal)
//
IsTriangulationNested,
IsTriangulationPartial,
END_ENUM_RANGE(LAST_BOOLEAN),

// complex structures
START_ENUM_RANGE(FIRST_COMPLEX_STRUCTURE),
Triangulation,
UnimodularTriangulation,
LatticePointTriangulation,
AllGeneratorsTriangulation,
PlacingTriangulation,
PullingTriangulation,
StanleyDec,
InclusionExclusionData,
IntegerHull,
ProjectCone,
ConeDecomposition,
//
Automorphisms,
CombinatorialAutomorphisms,
RationalAutomorphisms,
EuclideanAutomorphisms,
AmbientAutomorphisms,
InputAutomorphisms,
//
HilbertSeries,
HilbertQuasiPolynomial,
EhrhartSeries,
EhrhartQuasiPolynomial,
WeightedEhrhartSeries,
WeightedEhrhartQuasiPolynomial,
//
FaceLattice,
DualFaceLattice,
FVector,
DualFVector,
FaceLatticeOrbits,
DualFaceLatticeOrbits,
FVectorOrbits,
DualFVectorOrbits,
Incidence,
DualIncidence,
SingularLocus,
//
Sublattice,
//
ClassGroup,
//
ModularGradings,
FusionData,
InductionMatrices,
END_ENUM_RANGE(LAST_COMPLEX_STRUCTURE),

//
// integer type for computations
//
START_ENUM_RANGE(FIRST_PROPERTY),
BigInt,
//
// algorithmic variants
//
DefaultMode,
Approximate,
BottomDecomposition,
NoBottomDec,
DualMode,
PrimalMode,
Projection,
ProjectionFloat,
NoProjection,
Symmetrize,
NoSymmetrization,
NoSubdivision,
NoNestedTri,  // synonym for NoSubdivision
KeepOrder,
HSOP,
OnlyCyclotomicHilbSer,
NoQuasiPolynomial,
NoPeriodBound,
NoLLL,
NoRelax,
Descent,
NoDescent,
NoGradingDenom,
GradingIsPositive,
ExploitAutomsVectors,
ExploitIsosMult,
StrictIsoTypeCheck,
SignedDec,
NoSignedDec,
FixedPrecision,
DistributedComp,
NoPatching,
NoCoarseProjection,
MaxDegRepresentations,
UseWeightsPatching,
NoWeights,
LinearOrderPatches,
CongOrderPatches,
MinimizePolyEquations,
UseModularGrading,
//
Dynamic,
Static,
//
WritePreComp,
// Gröbner Basis
Lex,
RevLex,
DegLex,
//
ShortInt,
NoHeuristicMinimization,
//
END_ENUM_RANGE(LAST_PROPERTY),
//
// ONLY FOR INTERNAL CONTROL
//
...
END_ENUM_RANGE(LAST_PROPERTY),

EnumSize // this has to be the last entry, to get the number of entries in the enum

}; // remember to change also the string conversion function if you change this enum
}
\end{Verbatim}

The class \verb|ConeProperties| is based on this enumeration. Its instantiations are essentially boolean vectors that can be accessed via the names in the enumeration. Instantiations of the class are used to set computation goals and algorithmic variants and to check whether the goals have been reached. The distinction between computation goals and algorithmic variants is not completely strict. See Section~\ref{Goals} for implications between some \verb|ConeProperties|.

There exist versions of \verb|compute| for up to $3$ cone properties:
\begin{Verbatim}
ConeProperties Cone<Integer>::compute(ConeProperty::Enum cp)

ConeProperties Cone<Integer>::compute(ConeProperty::Enum cp1, 
                   ConeProperty::Enum cp2)

ConeProperties Cone<Integer>::compute(ConeProperty::Enum cp1, 
                   ConeProperty::Enum cp2, ConeProperty::Enum cp3)
\end{Verbatim}

An example:
\begin{Verbatim}
MyCone.compute(ConeProperty::HilbertBasis, ConeProperty::Multiplicity)
\end{Verbatim}

If you want to specify more than $3$ cone properties, you can define an instance of \verb|ConeProperties| yourself and call
\begin{Verbatim}
ConeProperties Cone<Integer>::compute(ConeProperties ToCompute)
\end{Verbatim}

An example:
\begin{Verbatim}
ConeProperties Wanted;
Wanted.set(ConeProperty::Triangulation, ConeProperty::HilbertBasis);
MyCone.compute(Wanted);
\end{Verbatim}

All \verb|get...| functions that are listed in the next section, try to compute the data asked for if they have not yet been computed. Unless you are interested a single result, we recommend to use \verb|compute| since the data asked for can then be computed in a single run. For example, if the Hilbert basis and the multiplicity are wanted, then it would be a bad idea to call \verb|getHilbertBasis| and \verb|getMultiplicity| consecutively. More importantly, however, there is no choice of an algorithmic variant if you use \verb|get...| without \verb|compute| beforehand.

It is possible that a computation goal is unreachable. If this can be recognized from the input, a \verb|BadInputException| will be thrown. If it cannot be recognized from the input, and \verb|DefaultMode| is not set, then \verb|compute()| will throw a \verb|NotComputableException| so that \verb|compute()| cannot return a value. In the presence of \verb|DefaultMode|, the returned \verb|ConeProperties| are those that have not been computed.

Please inspect \verb|cone_property.cpp| for the full list of methods implemented in the class \verb|ConeProperties|. Here we only mention the constructors
\begin{Verbatim}
ConeProperties::ConeProperties(ConeProperty::Enum p1)

ConeProperties::ConeProperties(ConeProperty::Enum p1, ConeProperty::Enum p2)

ConeProperties::ConeProperties(ConeProperty::Enum p1, ConeProperty::Enum p2,
                               ConeProperty::Enum p3)
\end{Verbatim}
and the functions
\begin{Verbatim}
ConeProperties& ConeProperties::set(ConeProperty::Enum p1, bool value)

ConeProperties& ConeProperties::set(ConeProperty::Enum p1, ConeProperty::Enum p2)

bool ConeProperties::test(ConeProperty::Enum Property) const
\end{Verbatim}

A string can be converted to a cone property and conversely:
\begin{Verbatim}
ConeProperty::Enum toConeProperty(const string&)
const string& toString(ConeProperty::Enum)
\end{Verbatim}

You can return the whole collection of reached computation goals via
\begin{Verbatim}
const ConeProperties& Cone<Integer>::getIsComputed() const
\end{Verbatim}



\subsection{Retrieving results}

As remarked above, all \verb|get...| functions that are listed below, try to compute the data asked for if they have not yet been computed. As also remarked above, it is often better to use \verb|compute| first.

The functions that return a matrix encoded as \verb|vector<vector<number> >| have variants that return a matrix encoded in the \verb|libnormaliz| class \verb|Matrix<number>|. These are not listed below; see \verb|cone.h|.

Note that there are now functions that return results by type so that interfaces need not implement all the functions in this section. See~\ref{ByType}.

\subsubsection{Checking computations}
In order to check whether a computation goal has been reached, one can use
\begin{Verbatim}
bool Cone<Integer>::isComputed(ConeProperty::Enum prop) const 
\end{Verbatim}
for example
\begin{Verbatim}
bool done=MyCone.isComputed(ConeProperty::HilbertBasis)
\end{Verbatim}

\subsubsection{Rank, index and dimension}

\begin{Verbatim}
size_t Cone<Integer>::getEmbeddingDim()
size_t Cone<Integer>::getRank()
Integer Cone<Integer>::getInternalIndex()
Integer Cone<Integer>::getUnitGroupIndex()

size_t Cone<Integer>::getRecessionRank()
long Cone<Integer>::getAffineDim()
size_t Cone<Integer>::getModuleRank()
\end{Verbatim}

The \emph{internal} index is only defined if original generators are defined. See Section~\ref{coord} for the external index.

The last three functions return values that are only well-defined after inhomogeneous computations.

\subsubsection{Support hyperplanes and constraints}\label{SHC}

\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getSupportHyperplanes()
size_t Cone<Integer>::getNrSupportHyperplanes()
\end{Verbatim}

The first function returns the support hyperplanes of the (homogenized) cone.
The second function returns the number of support hyperplanes. Similarly we have

\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getEquations()
size_t Cone<Integer>::getNrEquations()
const vector< vector<Integer> >& Cone<Integer>::getCongruences()
size_t Cone<Integer>::getNrCongruences()
\end{Verbatim}

Support hyperplanes can be returned in floating point format:
\begin{Verbatim}
const vector< vector<nmz_float> >& Cone<Integer>::getSuppHypsFloat()
size_t Cone<Integer>::getNrSuppHypsFloat()
\end{Verbatim}

For these functions there also exist \ttt{Matrix}versions.

\subsubsection{Extreme rays and vertices}

\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getExtremeRays()
size_t Cone<Integer>::getNrExtremeRays()
const vector< vector<Integer> >& Cone<Integer>::getVerticesOfPolyhedron()
size_t Cone<Integer>::getNrVerticesOfPolyhedron()
\end{Verbatim}

In the inhomogeneous case the first function returns the extreme rays of the recession cone, and the second the vertices of the polyhedron. (Together they form the extreme rays of the homogenized cone.)

Vertices and extreme rays can be returned in floating point format:
\begin{Verbatim}
const vector< vector<nmz_float> >& Cone<Integer>::getVerticesFloat()
const vector< vector<nmz_float> >& Cone<Integer>::getExtremeRaysFloat()
size_t Cone<Integer>::getNrVerticesFloat()
\end{Verbatim}

\subsubsection{Original generators}

\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getOriginalMonoidGenerators()
size_t Cone<Integer>::getNrOriginalMonoidGenerators()
\end{Verbatim}
Note that original generators are not always defined.

\subsubsection{Lattice points in polytopes and elements of degree $1$}

\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getDeg1Elements()
size_t Cone<Integer>::getNrDeg1Elements()
\end{Verbatim}
These functions apply to the homogeneous case. \verb|getNrDeg1Elements()| returns the number of degree $1$ elements if these have been computed and stored|, and if the degree $1$ elements are not available, it forces their computation and storage, even if the number of these elements should be known from other computations.

In the inhomogeneous case replace \verb|Deg1Elements| by \verb|ModuleGenerators|; see below. (They are also computable in the unbounded case.) A uniform access is possible by
\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getLatticePoints()
size_t Cone<Integer>::getNrLatticePoints()
\end{Verbatim}

In addition, we have
\begin{Verbatim}
size_t Cone<Integer>::getNumberLatticePoints()
\end{Verbatim}
There is an important difference between \verb|getNrLatticePoints()| and \verb|getNumberLatticePoints()|: the latter returns the number whenever it is known for some reason. If the number is not known, it forces only the counting of lattice points, not their storage.

If only a single lattice point has been asked for, it can be returned by 
\begin{Verbatim}
const vector<Integer>& Cone<Integer>::getSingleLatticePoint()
\end{Verbatim}
If the returened vector has size $0$, no lattice point was found.

\subsubsection{Hilbert basis}\label{HB_lib}

In the nonpointed case we need the maximal linear subspace of the cone:
\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getMaximalSubspace()
size_t Cone<Integer>::getDimMaximalSubspace()
\end{Verbatim}

One of the prime results of Normaliz and its cardinality are returned by
\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getHilbertBasis()
size_t Cone<Integer>::getNrHilbertBasis()
\end{Verbatim}
Inhomogeneous case the functions refer to the the Hilbert basis of the recession cone. The module generators of the lattice points in the polyhedron are accessed by
\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getModuleGenerators()
size_t Cone<Integer>::getNrModuleGenerators()
\end{Verbatim}

If the original monoid is not integrally closed, you can ask for a witness:
\begin{Verbatim}
vector<Integer> Cone<Integer>::getWitnessNotIntegrallyClosed()
\end{Verbatim}

\subsubsection{Module generators over original monoid}

\begin{Verbatim}
const vector< vector<Integer> >& 
               Cone<Integer>::getModuleGeneratorsOverOriginalMonoid()
size_t Cone<Integer>::getNrModuleGeneratorsOverOriginalMonoid()
\end{Verbatim}

\subsubsection{Generator of the interior}\label{GenInt}

If the monoid is Gorenstein, Normaliz computes the generator of the interior (the canonical module):
\begin{Verbatim}
const vector<Integer>& Cone<Integer>::getGeneratorOfInterior()
\end{Verbatim}
Before asking for this vector, one should test \verb|isGorenstein()|.

\subsubsection{Grading and dehomogenization}

\begin{Verbatim}
vector<Integer> Cone<Integer>::getGrading()
Integer Cone<Integer>::getGradingDenom()
\end{Verbatim}
The second function returns the denominator of the grading.

\begin{Verbatim}
vector<Integer> Cone<Integer>::getDehomogenization()
\end{Verbatim}

\subsubsection{Enumerative data}

\begin{Verbatim}
mpq_class Cone<Integer>::getMultiplicity()
\end{Verbatim}
Don't forget that the multiplicity is measured for a rational, not necessarily integral polytope. Therefore it need not be an integer. The same applies to
\begin{Verbatim}
mpq_class Cone<Integer>::getVolume()
nmz_float Cone<Integer>::getEuclideanVolume()
\end{Verbatim}
which can be computed for polytopes defined by homogeneous or inhomogeneous input. In the homogeneous case the volume is the multiplicity.

The Hilbert and Ehrhart series are stored in instances class \verb|HilbertSeries|. They are retrieved by
\begin{Verbatim}
const HilbertSeries& Cone<Integer>::getHilbertSeries()
const HilbertSeries& Cone<Integer>::getEhrhartSeries()
\end{Verbatim}
They contain several data fields that can be accessed as follows (see \verb|hilbert_series.h|):
\begin{Verbatim}
const vector<mpz_class>& HilbertSeries::getNum() const;
const map<long, denom_t>& HilbertSeries::getDenom() const;

const vector<mpz_class>& HilbertSeries::getCyclotomicNum() const;
const map<long, denom_t>& HilbertSeries::getCyclotomicDenom() const;

const vector<mpz_class>& HilbertSeries::getHSOPNum() const;
const map<long, denom_t>& HilbertSeries::getHSOPDenom() const;

long HilbertSeries::getDegreeAsRationalFunction() const;
long HilbertSeries::getShift() const;

bool HilbertSeries::isHilbertQuasiPolynomialComputed() const;
const vector< vector<mpz_class> >& HilbertSeries::getHilbertQuasiPolynomial() const;
long HilbertSeries::getPeriod() const;
mpz_class HilbertSeries::getHilbertQuasiPolynomialDenom() const;

vector<mpz_class> HilbertSeries::getExpansion() const;
\end{Verbatim}

The first six functions refer to three representations of the Hilbert series as a rational function in the variable $t$: the first has a denominator that is a product of polynomials $(1-t^g)^e$, the second has a denominator that is a product of cyclotomic polynomials. In the third case the denominator is determined by the degrees of a homogeneous system of parameters (see Section~\ref{rational}). In all cases the numerators are given by their coefficient vectors, and the denominators are lists of pairs $(g,e)$ where in the second case $g$ is the order of the cyclotomic polynomial.

If you have already computed the Hilbert series without HSOP and you want it with HSOP afterwards, the Hilbert series will simply be transformed, but Normaliz must compute the degrees for the denominator, and this may be a nontrivial computation.

The degree as a rational function is of course independent of the chosen representation, but may be negative, as well as the shift that indicates with which power of $t$ the numerator tarts. Since the denominator has a nonzero constant term in all cases, this is exactly the smallest degree in which the Hilbert function has a nonzero value.

The Hilbert quasipolynomial is represented by a vector whose length is the period and whose entries are itself vectors that represent the coefficients of the individual polynomials corresponding to the residue classes modulo the period. These integers must be divided by the common denominator that is returned by the last function.

For the input type \verb|rees_algebra| we provide
\begin{Verbatim}
Integer Cone<Integer>::getReesPrimaryMultiplicity()
\end{Verbatim}

\subsubsection{Weighted Ehrhart series and integrals}

The weighted Ehrhart series can be accessed by
\begin{Verbatim}
const pair<HilbertSeries, mpz_class>& Cone<Integer>::getWeightedEhrhartSeries()
\end{Verbatim}
The second component of the pair is the denominator of the coefficients in the series numerator. Its introduction was necessary since we wanted to keep integral coefficients for the numerator of a Hilbert series. The numerator and the denominator of the first component of type \verb|HilbertSeries| can be accessed as usual, but one \emph{must not forget the denominator of the numerator coefficients}, the second component of the return value. There is a second way to access these data; see below.

The virtual multiplicity and the integral, respectively, are got by
\begin{Verbatim}
mpq_class Cone<Integer>::getVirtualMultiplicity()
mpq_class Cone<Integer>::getIntegral()
nmz_float Cone<Integer>::getEuclideanIntegral()
\end{Verbatim}

Actually the cone saves these data in a special container of class \verb|IntegrationData| (defined in \verb|Hilbert_series.h|). It is accessed by
\begin{Verbatim}
const IntegrationData& Cone<Integer>::getIntData()
\end{Verbatim}
The three \verb|get| functions above are only shortcuts for the access via \verb|getIntData()|:
\begin{Verbatim}
string IntegrationData::getPolynomial() const
long IntegrationData::getDegreeOfPolynomial() const
bool IntegrationData::isPolynomialHomogeneous() const

const vector<mpz_class>& IntegrationData::getNum_ZZ() const
mpz_class IntegrationData::getNumeratorCommonDenom() const
const map<long, denom_t>& IntegrationData::getDenom() const

const vector<mpz_class>& IntegrationData::getCyclotomicNum_ZZ() const
const map<long, denom_t>& IntegrationData::getCyclotomicDenom() const

bool IntegrationData::isWeightedEhrhartQuasiPolynomialComputed() const
void IntegrationData::computeWeightedEhrhartQuasiPolynomial()
const vector< vector<mpz_class> >& IntegrationData::getWeightedEhrhartQuasiPolynomial()
mpz_class IntegrationData::getWeightedEhrhartQuasiPolynomialDenom() const

vector<mpz_class> IntegrationData::getExpansion() const

mpq_class IntegrationData::getVirtualMultiplicity() const
mpq_class IntegrationData::getIntegral() const
\end{Verbatim}

The first three functions refer to the polynomial defining the integral or weighted Ehrhart series. The function \verb|getNumeratorCommonDenom()| returns the integer by which the coefficients of the numerator of the series must be divided.

The computation of these data is controlled by the corresponding \verb|ConeProperty|. The expansion is always computed on-the-fly. Its values must be divided by the same number as the coefficients of the numerator.

\subsubsection{Triangulation and disjoint decomposition}

The last triangulation that has been explicitly computed is returned by
\begin{Verbatim}
const pair<vector<SHORTSIMPLEX<Integer> >, Matrix<Integer> >&
                                                Cone<Integer>::getTriangulation()
\end{Verbatim}
If no triangulation has been computed yet, the basic triangulation is returned.

The \verb| Matrix<Integer>| contains (a superset of) the vectors that generate the simplicial cones in the triangulation. The simplicial cones are represented by the \verb|<vector<SHORTSIMPLEX<Integer> >|:
\begin{Verbatim}
struct SHORTSIMPLEX {
vector<key_t> key;      // full key of simplex
Integer height;         // height of last vertex over opposite facet, used in Full_Cone
Integer vol;            // volume if computed, 0 else
Integer mult;           // used for renf_elem_class in Full_Cone
vector<bool> Excluded;  // for disjoint decomposition of cone
                        // true in position i indicates that the facet
                        // opposite of generator i must be excluded
};
\end{Verbatim}
The \verb|key| specifies the generators of the simplicial cone by their row indices in the matrix (counted from $0$). The component \verb|vol| is the (absolute value) of their determinant, and \verb|Excluded| is only set if \verb|ConeDecomposition| was asked for.

For the refined triangulations one uses
\begin{Verbatim}
const pair<vector<SHORTSIMPLEX<Integer> >, Matrix<Integer> >& 
                    Cone<Integer>::getTriangulation(ConeProperty::Enum quality)
\end{Verbatim}
In which the parameter specifies the type of triangulation that is to be computed:
\begin{Verbatim}
ConeProperty::Triangulation
ConeProperty::AllGeneratorsTriangulation
ConeProperty::LatticePointTriangulation
ConeProperty::UnimodularTriangulation
\end{Verbatim}
where the first choice returns the basic triangulation.

\begin{Verbatim}
const pair<vector<SHORTSIMPLEX<Integer> >, Matrix<Integer> >&
                                           Cone<Integer>::getConeDecomposition()
\end{Verbatim}
has the same effect as \verb|getTriangulation(ConeProperty::Triangulation)|, except that the components \verb|Excluded| are definitely set.

Additional information on the possibly nested and /or partial triangulation that has been used for the computation in primal ode can be retrieved by
\begin{Verbatim}
size_t Cone<Integer>::getTriangulationSize()
Integer Cone<Integer>::getTriangulationDetSum() 
\end{Verbatim}

\subsubsection{Stanley decomposition}

The Stanley decomposition is stored in a list whose entries correspond to the simplicial cones in the triangulation:
\begin{Verbatim}
const pair<list<STANLEYDATA<Integer> >, Matrix<Integer> > &  Cone<Integer>::getStanleyDec()
\end{Verbatim}
The \verb|Matrix<Integer| has the same meaning as for triangulations.
\verb|STANLEYDATA| defined as follows:
\begin{Verbatim}
struct STANLEYDATA {
vector<key_t> key;
Matrix<Integer> offsets;
};
\end{Verbatim}
The key has the same interpretation as for the triangulation, namely as the vector of indices of the generators of the simplicial cone (counted from $0$). The matrix contains the coordinate vectors of the offsets of the components of the decomposition that belong to the simplicial cone defined by the key. See Section~\ref{Stanley} for the interpretation. The format of the matrix can be accessed by the following functions of class \verb|Matrix<Integer>|:
\begin{Verbatim}
size_t nr_of_rows() const
size_t nr_of_columns() const
\end{Verbatim}
The entries are accessed in the same way as those of \verb|vector<vector<Integer> >|.

\subsubsection{Scaling of axes}

If \verb|rational_lattice| or \verb|rational_offset| are in the input for the cone, then the vector giving scaling of axes can be retrieved by
\begin{Verbatim}
vector<Integer> Cone<Integer>::getAxesScaling() 
\end{Verbatim}
The cone property \verb|AxesScaling| cannot be used as a computation goal, but one can ask for its computation as usual.|

\subsubsection{Coordinate transformation}\label{coord}

The coordinate transformation from the ambient lattice to the sublattice generated by the Hilbert basis (whether it has been computed or not) can be returned as follows:
\begin{Verbatim}
const Sublattice_Representation<Integer>& Cone<Integer>::getSublattice()
\end{Verbatim}
For algebraic polyhedra it defines the subspace generated by the (homogenized) cone.

An object of type \verb|Sublattice_Representation| models a sequence of $\ZZ$-homomorphisms
$$
\ZZ^r\xrightarrow{\phi}\ZZ^n\xrightarrow{\pi}\ZZ^r
$$
with the following property: there exists $c\in\ZZ$, $c\neq 0$, such that $\pi\circ \phi=c\cdot\operatorname{id}_{\ZZ^r}$. In particular $\phi$ is injective. One should view the two maps as a pair of coordinate transformations: $\phi$ is determined by a choice of basis in the sublattice $U=\phi(\ZZ^r)$, and it allows us to transfer vectors from $U\cong \ZZ^r$ to the ambient lattice $\ZZ^n$. The map $\pi$ is used to realize vectors from $U$ as linear combinations of the given basis of $U\cong\ZZ^r$: after the application of $\pi$ one divides by $c$. (If $U$ is a direct summand of $\ZZ^n$, one can choose $c=1$, and conversely.) Normaliz considers vectors as rows of matrices. Therefore $\phi$ is given as an $r\times n$-matrix and $\pi$ is given as an $n\times r$ matrix.

The data just described can be accessed as follows (\verb|sublattice_representation.h|). For space reasons we omit the class specification \verb|Sublattice_Representation<Integer>::|
\begin{Verbatim}
const vector<vector<Integer> >& getEmbedding() const
const vector<vector<Integer> >& getProjection() const
Integer getAnnihilator() const
\end{Verbatim}
Here ``Embedding'' refers to $\phi$ and ``Projection'' to $\pi$ (though $\pi$ is not always surjective). The ``Annihilator'' is the number $c$ above. (It annihilates $\ZZ^r$ modulo $\pi(\ZZ^n)$.)

The numbers $n$ and $r$ are accessed in this order by
\begin{Verbatim}
size_t getDim() const
size_t getRank() const
\end{Verbatim}
The external index, namely the order of the torsion subgroup of $\ZZ^n/U$, is returned by
\begin{Verbatim}
mpz_class getExternalIndex() const
\end{Verbatim}
Very often $\phi$ and $\psi$ are identity maps, and this property can be tested by
\begin{Verbatim}
bool IsIdentity() const
\end{Verbatim}
The constraints computed by Normaliz are ``hidden'' in the sublattice representation. They van be accessed by
\begin{Verbatim}
const vector<vector<Integer> >& getEquations() const
const vector<vector<Integer> >& getCongruences() const
\end{Verbatim}

But see Section~\ref{SHC} above for a more direct access.

\subsubsection{Suppressing the coordinate transformation}

The project-and-lift algorithm uses the coordinate system of the cone constructor. For other algorithms it is necessary to pas to a sublattice or even a quotient lattice. At construction time Normaliz does not know what algorithm will be used later, and therefore computes a coordinate transformation if the input contains lattice generators or constraints. If there are only constraints, the coordinate transformation can be suppressed by
\begin{Verbatim}
void noCoordTransf(bool onoff)
\end{Verbatim}
This can be useful if the number of coordinates is extremely large, as it happens in the computation of fusion rings. Of course, you must be sure that the coordinate transformation will not be needed. The function applies only until a cone construction has taken place.

\subsubsection{Coordinate transformations for precomputed data}\label{coord_pre}

For precomputed data we need \verb|Type::generated_lattice| and \verb|Type::maximal_subspace|, should they be nontrivial. The maximal subspace is retrieved by
\begin{Verbatim}
getMaximalSubspace()
\end{Verbatim}
mentioned already in Section~\ref{HB_lib}. The generated lattice (subspace in the algebraic case) is accessed by
\begin{Verbatim}
getSublattice().getEmbedding()
\end{Verbatim}
introduced in Section~\ref{coord}.

\subsubsection{Automorphism groups}

The automorphism group is accessed by

\begin{Verbatim}
const AutomorphismGroup<Integer>& Cone<Integer>::getAutomorphismGroup();
\end{Verbatim}
independently of the type of the automorphism group (see below). Only one type of automorphism group can be computed in a run of \verb|compute(...)| and this type is stored.

Contrary to other get functions, \verb|getAutomorphismGroup()| does not trigger a computation since it is unclear what quality of automorphisms is asked for. If no automorphism group has been computed, a \verb|BadInputException| is thrown.

Additionally we have
\begin{Verbatim}
const AutomorphismGroup<Integer>& 
            Cone<Integer>::getAutomorphismGroup(ConeProperty::Enum quality)
\end{Verbatim}
in which the quality can be specified. If the automorphism group has already been computed with a different quality, then it is recomputed.

If the automorphism group has been computed by those options that use extreme rays and support hyperplanes, i.e., all except \verb|AmbientAutomorphisms| and \verb|InputAutomorphisms|, then the action of the group is recorded in
\begin{Verbatim}
mpz_class  getOrder() const;
const vector<vector<key_t> >&  getVerticesPerms() const
const vector<vector<key_t> >& getExtremeRaysPerms() const
const vector<vector<key_t> >& getSupportHyperplanesPrms() const

const vector<vector<key_t> >& getVerticesOrbits() const
const vector<vector<key_t> >& geExtremeRaystOrbits() const
const vector<vector<key_t> >& getSupportHyperplanesOrbits() const
\end{Verbatim}
All these functions and the following ones belong to the class \verb|AutomorphismGroup<Integer>|.

``Perms'' is a shorthand for ``permutations'', and each generator of the automorphism group is represented by the permutation of the extreme rays that it induces. In the permutations, objects are counted from $0$. The reference order of the vectors is the same as in the output files. The entry \verb|[i][j]|| is the index of the object to which the $j$-th object is mapped by the $i$-th generator of the automorphism group.

The orbits are listed one by one: each \verb|vector<key_t>| contains the indices that form an orbit, and the collection of orbits is given by the outer vector.

The action  of \verb|AmbientAutomorphisms| and \verb|InputAutomorphisms| is documented in
\begin{Verbatim}
const vector<vector<key_t> >& getGensPerms() const;
const vector<vector<key_t> >& getGensOrbits() const;
const vector<vector<key_t> >& getLinFormsPerms() const;
const vector<vector<key_t> >& getLinFormsOrbits() const; 
\end{Verbatim}
where the `' Gens'' are the input vectors representing generators of the primal cone or inequalities, given by linear forms  generating the dual cone. ``LinForms'' are defined only for \verb|AmbientAutomorphisms|, and they represent the coordinate linear forms. The generators from which the group has been computed are returned by
\begin{Verbatim}
const Matrix<Integer>& getGens() const;
\end{Verbatim}

The qualities of the automorphisms is returned by
\begin{Verbatim}
set<AutomParam::Quality> getQualities() const;
\end{Verbatim}
and the qualities are given by
\begin{Verbatim} 
namespace AutomParam {
enum Quality {
combinatorial,
rational,
integral,
euclidean,
ambient_gen,
ambient_ineq,
input_gen,
input_ineq,
algebraic,
graded // not used at present
};
...
\end{Verbatim}
Input and ambient automorphisms appear twice since Normaliz records what type of input is used for the computation, and this information is shown in the output files.

Another access is given by
\begin{Verbatim}
string getQualitiesString()
\end{Verbatim}
and
\begin{Verbatim}
string quality_to_string(AutomParam::Quality quality)
\end{Verbatim}
does a single conversion.

Moreover, you can ask
\begin{Verbatim}
bool IsIntegralityChecked() const;
bool IsIntegral() const;
\end{Verbatim}

If you are interested in cycle decompositions, you can use
\begin{Verbatim}
vector<vector<key_t> > cycle_decomposition(vector<key_t> perm, bool with_fixed_points)
\end{Verbatim}
where \verb|with_fixed_points| decides whether cycles of length $1$ are produced.

\subsubsection{Class group}

\begin{Verbatim}
vector<Integer> Cone<Integer>::getClassGroup()
\end{Verbatim}
The return value is to be interpreted as follows: The entry for index $0$ is the rank of the class group. The remaining entries contain the orders of the summands in a direct sum decomposition of the torsion subgroup.

\subsubsection{Face lattice and f-vector}
\begin{Verbatim}
vector<size_t> Cone<Integer>::getFVector()
const map<dynamic_bitset,int>& Cone<Integer>::getFaceLattice()
\end{Verbatim}
Each element of the set represents a face $F$: the \verb|int| is its codimension, and the \verb|vector<bool>| $v$ represents the facets containing $F$: $v[i]=1$, if and only if the facet given by the $i$-th row of \verb|getSupportHyperplanes()| contains $F$. (See Section~\ref{FaceLattice}.)

The incidence matrix can be accessed by

\begin{Verbatim}
const vector<dynamic_bitset>& Cone<Integer>::getIncidence()
\end{Verbatim}

These functions have dual versions:

\begin{Verbatim}
vector<size_t> Cone<Integer>::getDualFVector()
const map<dynamic_bitset,int>& Cone<Integer>::getDualFaceLattice()
const vector<dynamic_bitset>& Cone<Integer>::getDualIncidence()
\end{Verbatim}

For the orbit versions we have
\begin{Verbatim}
vector<size_t> Cone<Integer>::getFVectorOrbits()
const map<dynamic_bitset,int>& Cone<Integer>::getFaceLatticerOrbits()
vector<size_t> Cone<Integer>::getDualFVectorrOrbits()
const map<dynamic_bitset,int>& Cone<Integer>::getDualFaceLatticerOrbits()
\end{Verbatim}

\subsubsection{Local properties}

They are properties of localizations. So far we have
\begin{Verbatim}
const map<dynamic_bitset, int>& Cone<Integer>::getSingularLocus()
size_t Cone<Integer>::getCodimSingularLocus()
bool Cone<Integer>::isSerreR1()
\end{Verbatim}

\subsubsection{Markov and Grobner bases, representations}
\begin{Verbatim}
const  vector<vector<Integer> >& Cone<Integer>::getMarkovBasis()
const  vector<vector<Integer> >& Cone<Integer>::getGroebnerBasis()
const  vector<vector<Integer> >& Cone<Integer>::getRepresentations()
\end{Verbatim}
For all three there are also the usual variants with \verb|Matrix| and \verb|Nr|.
Note that they return only those binomials for Markov and Gröbner bases that satisfy the degree bounds set by \verb|gb_degree_bound| and \verb|gb_min_degree|.


\subsubsection{Integer hull}

For the computation of the integer hull an auxiliary cone is constructed. A reference to it is returned by
\begin{Verbatim}
Cone<Integer>& Cone<Integer>::getIntegerHullCone() const
\end{Verbatim}

For example, the support hyperplanes of the integer hull can be accessed by
\begin{Verbatim}
MyCone.getIntegerHullCone().getSupportHyperplanes()
\end{Verbatim}

\subsubsection{Projection of the cone}

Like the integer hull, the image of the projection is contained in an auxiliary cone that can be accessed by
\begin{Verbatim}
Cone<Integer>& Cone<Integer>::getProjectCone() const
\end{Verbatim}

It contains constraints and extreme rays of the projection.

\subsubsection{Excluded faces}

Before using the excluded faces Normaliz makes the collection irredundant by discarding those that are contained in others. The irredundant collection (given by hyperplanes that intersect the cone in the faces) and its cardinality are returned by
\begin{Verbatim}
const vector< vector<Integer> >& Cone<Integer>::getExcludedFaces()
size_t Cone<Integer>::getNrExcludedFaces()
\end{Verbatim}
For the computation of the Hilbert series the all intersections of the excluded faces are computed, and for each resulting face the weight with which it must be counted is computed. These data can be accessed by
\begin{Verbatim}
const vector< pair<vector<key_t>,long> >& Cone<Integer>::getInclusionExclusionData()
\end{Verbatim}
The first component of each pair contains the indices of the generators (counted from~$0$) that lie in the face and the second component is the weight.

The emptiness of semiopen polyhedra can be tested by
\begin{Verbatim}
bool Cone<Integer>::isEmptySemiOpen()
\end{Verbatim}
If the answer is positive, an excluded face making the semiopen polyhedron empty is returned by
\begin{Verbatim}
vector<Integer> Cone<Integer>::getCoveringFace() 
\end{Verbatim}

\subsubsection{Fusion rings}\label{lib_fusion}

See Appendix \ref{fusion_rings}  for the terminology. The following functions are available:
\begin{Verbatim}
const vector<vector<Integer> >& Cone<Integer>::getFusionRings()
size_t Cone<Integer>::getNrFusionRings() 
const vector<vector<Integer> >& Cone<Integer>::getSimpleFusionRings()
size_t Cone<Integer>::getNrSimpleFusionRings()
const vector<vector<Integer> >& Cone<Integer>::getNonsimpleFusionRings()
size_t Cone<Integer>::getNrNonsimpleFusionRings()
const vector<Integer>& Cone<Integer>::getSingleFusionRing()
const vector<vector<dynamic_bitset> >& Cone<Integer>::getModularGradings()
\end{Verbatim}
There exist \verb*|Matrix| variants for \verb*|FusionRings| and \verb*|SimnpleFusionRings| as well.

One can also retrieve the full fusion data:
\begin{Verbatim}
const vector<vector<Matrix<Integer> > >& Cone<Integer>::getFusionDataMatrix()
\end{Verbatim}
and the induction matrices:
\begin{Verbatim}
const vector<vector<Matrix<Integer> > >& Cone<Integer>::getInductionMatrices()
\end{Verbatim}

If there are several modular gradings, then for \verb*|UseModzlarGrading| you must pick one by
\begin{Verbatim}
void Cone<Integer>::setModularGraing(long mod_gr)
\end{Verbatim}
counting from $1$ (also see \ref{num_params}). Similarly
\begin{Verbatim}
void Cone<Integer>::setChosenFusionRing(long fus_r)
\end{Verbatim}

\subsubsection{Boolean valued results}

All the ``questions'' to the cone that can be asked by the boolean valued functions in this section start a computation if the answer is not yet known.

The first, the question
\begin{Verbatim}
bool Cone<Integer>::isIntegrallyClosed()
\end{Verbatim}
does not trigger a computation of the full Hilbert basis. The computation stops as soon as the answer can be given, and this is the case when an element in the integral closure has been found that is not in the original monoid. Such an element is retrieved by
\begin{Verbatim}
vector<Integer> Cone<Integer>::getWitnessNotIntegrallyClosed()
\end{Verbatim}

As discussed in Section~\ref{IsPointed} it can sometimes be useful to ask
\begin{Verbatim}
bool Cone<Integer>::isPointed()
\end{Verbatim}
before a more complex computation is started.

The Gorenstein property can be tested with
\begin{Verbatim}
bool Cone<Integer>::isGorenstein()
\end{Verbatim}
If the answer is positive, Normaliz computes the generator of the interior of the monoid. Also see~\ref{GenInt}.


The next two functions answer the question whether the Hilbert basis or at least the extreme rays live in degree $1$.
\begin{Verbatim}
bool Cone<Integer>::isDeg1ExtremeRays()
bool Cone<Integer>::isDeg1HilbertBasis()
\end{Verbatim}

Finally we have
\begin{Verbatim}
bool Cone<Integer>::isInhomogeneous()
bool Cone<Integer>::isReesPrimary()
\end{Verbatim}
\verb|isReesPrimary()| checks whether the ideal defining the Rees algebra is primary to the irrelevant maximal ideal.


\subsubsection{Results by type}\label{ByType}

It is also possible to access (and compute if necessary) the output data of Normaliz by functions that only depend on the C++ type of the data:

\begin{Verbatim}
const Matrix<Integer>& getMatrixConePropertyMatrix(ConeProperty::Enum property);
const vector< vector<Integer> >& getMatrixConeProperty(ConeProperty::Enum property);
const Matrix<nmz_float>& getFloatMatrixConePropertyMatrix(ConeProperty::Enum property);
const vector< vector<nmz_float> >& getFloatMatrixConeProperty(ConeProperty::Enum property);
vector<Integer> getVectorConeProperty(ConeProperty::Enum property);
Integer getIntegerConeProperty(ConeProperty::Enum property);
mpz_class getGMPIntegerConeProperty(ConeProperty::Enum property);
mpq_class getRationalConeProperty(ConeProperty::Enum property);
renf_elem_class getFieldElemConeProperty(ConeProperty::Enum property);
nmz_float getFloatConeProperty(ConeProperty::Enum property);
size_t getMachineIntegerConeProperty(ConeProperty::Enum property);
bool getBooleanConeProperty(ConeProperty::Enum property);
\end{Verbatim}

For example, \verb|getMatrixConeProperty(ConeProperty::HilbertBasis)| will return the Hilbert basis as a \verb|const vector< vector<Integer> >&|.
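
As an illustration, the following sketch prints an arbitrary matrix valued result; the helper \verb|print_result| is ours and not part of libnormaliz:
\begin{Verbatim}
// hypothetical helper, assuming integer type long long
void print_result(Cone<long long>& C, ConeProperty::Enum property) {
    const vector<vector<long long> >& M = C.getMatrixConeProperty(property);
    for (const auto& row : M) {
        for (long long entry : row)
            cout << entry << " ";
        cout << endl;
    }
}
// usage: print_result(C, ConeProperty::HilbertBasis);
\end{Verbatim}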

These functions make it easier to write interfaces to Normaliz since one need not introduce new functions for results that have one of the types listed above.

It is clear that the more complex results can only be accessed via their specialized ``get'' functions.

\subsection{Algebraic polyhedra}

Cones over algebraic number fields are constructed by
\begin{Verbatim}
Cone<renf_elem_class>(...)
\end{Verbatim}
where \ttt{...} stands for all the variants that have been discussed in Section~\ref{ConstCone}, except that all matrices must be of type \verb+vector<vector<renf_elem_class> >+ or \verb+Matrix<renf_elem_class>+. \verb|Cone<renf_elem_class>(...)| is predefined in \ttt{libnormaliz}.

Note that not all integer, rational or float input types are allowed; see Section~\ref{Algebraic}.

After the construction of the cone you must use
\begin{Verbatim}
void Cone<renf_elem_class>::setRenf(renf_class* renf)
\end{Verbatim}
It is necessary to forward the information about the number field to derived cones. In the other direction:
\begin{Verbatim}
renf_class* Cone<renf_elem_class>::getRenf()
\end{Verbatim}
Since version~1.0.0 the \verb|renf_class*| is administered through a \verb|std::shared_ptr<const renf_class>|. It is returned by
\begin{Verbatim}
const std::shared_ptr<const renf_class> Cone<Integer>::getRenfSharedPtr()
\end{Verbatim}

One can retrieve the minimal polynomial and the embedding by
\begin{Verbatim}
vector<string> Cone<renf_elem_class>::getRenfData()
\end{Verbatim}
The name of the field generator is returned by
\begin{Verbatim}
string Cone<renf_elem_class>::getRenfGenerator()
\end{Verbatim}

The computation follows the same rules that have been explained above, again with some restriction of the computation goals that can be reached. Again see Section~\ref{Algebraic}.

In return values \ttt{Integer} must be specialized to \verb|renf_elem_class|. A special return value is the volume, which in general is no longer of type \verb|mpq_class|. It is retrieved by
\begin{Verbatim}
renf_elem_class Cone<renf_elem_class>::getRenfVolume() 
\end{Verbatim}

The number field must be defined outside of libnormaliz. Have a look at \verb|source/normaliz.cpp| and \verb|source/input.in| to see the details.
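
In outline the construction follows this pattern (a hedged sketch; the creation of the number field via e-antic is only indicated):
\begin{Verbatim}
// nf: a renf_class* obtained from e-antic, defining, say, Q(sqrt 2)
vector<vector<renf_elem_class> > gens = ... ; // generators over the field
Cone<renf_elem_class> C(Type::cone, gens);
C.setRenf(nf);   // forward the number field to C and its derived cones
\end{Verbatim}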

The integer hull cone is of type \verb|libnormaliz::Cone<renf_elem_class>|.

Remark: In the code, the template \ttt{Integer} no longer stands only for a truly integer type, but also for \verb|renf_elem_class|, and thus for elements from a field.

\subsection{Reusing previous computation results}

To some extent it is possible to exploit the results of a previous computation after the modification of a cone (see Section~\ref{Modify}). This is controlled by
\begin{Verbatim}
ConeProperty::Dynamic
ConeProperty::Static
\end{Verbatim}
where \verb|Dynamic| activates this feature and \verb|Static| deactivates it.

At present only results of previous convex hull computations or vertex enumerations can be reused. Restrictions:
\begin{arab}
	\item The coordinate transformation that had been reached before the previous computation must have remained unchanged. Note that a change may have happened as a consequence of the previous computation. For example, the addition of inequalities can reduce the dimension.
	\item If a convex hull computation simultaneously creates a triangulation, then it must start from scratch.
\end{arab}

An example for the use of \verb|ConeProperty::Dynamic| is given in \verb|source/dynamic/dynamic.cpp|. It is compiled automatically by the \verb|autotools| scripts, and can also be compiled in \verb|source| by\\ \verb|make -f Makefile.classic dynamic|.
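
In outline the intended pattern is the following hedged sketch, with inequality matrices \verb|ineqs| and \verb|more_ineqs| assumed to be given (\verb|modifyCone| is discussed in Section~\ref{Modify}):
\begin{Verbatim}
Cone<long long> C(Type::inequalities, ineqs);
C.compute(ConeProperty::SupportHyperplanes, ConeProperty::Dynamic);
C.modifyCone(Type::inequalities, more_ineqs); // add further inequalities
// the next computation can reuse the previous convex hull data
C.compute(ConeProperty::SupportHyperplanes);
\end{Verbatim}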

\subsection{Control of execution}

\subsubsection{Exceptions}

All exceptions that are thrown in \verb|libnormaliz| are derived from the abstract class \verb|NormalizException| that itself is derived from \verb|std::exception|:
\begin{Verbatim}
class NormalizException: public std::exception
\end{Verbatim}

The following exceptions must be caught by the calling program:
\begin{Verbatim}
class ArithmeticException: public NormalizException
class BadInputException: public NormalizException
class NotComputableException: public NormalizException
class FatalException: public NormalizException
class NmzCoCoAException: public NormalizException
class InterruptException: public NormalizException
\end{Verbatim}

The \verb|ArithmeticException| leaves \verb|libnormaliz| if a nonrecoverable overflow occurs (it is also used internally for the change of integer type). This should not happen for cones of integer type \verb|mpz_class|, unless it is caused by the attempt to create a data structure of illegal size or by a bug in the program. The \verb|BadInputException| is thrown whenever the input is inconsistent; the reasons for this are manifold. The \verb|NotComputableException| is thrown if a computation goal cannot be reached. The \verb|FatalException| should never appear. It covers error situations that can only be caused by a bug in the program. At many places \verb|libnormaliz| has \verb|assert| verifications built in that serve the same purpose.
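
A minimal catch pattern in the calling program (a hedged sketch):
\begin{Verbatim}
try {
    C.compute(ConeProperty::HilbertBasis);
}
catch (const BadInputException& e) {
    cerr << "bad input: " << e.what() << endl;
}
catch (const NormalizException& e) {
    cerr << "Normaliz error: " << e.what() << endl; // all other cases
}
\end{Verbatim}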

There are two more exceptions for the communication within \verb|libnormaliz| that should not leave it:
\begin{Verbatim}
class NonpointedException: public NormalizException 
class NotIntegrallyClosedException: public NormalizException
\end{Verbatim}

The \verb|InterruptException| is discussed in the next section.

\subsubsection{Interruption}

In order to find out if the user wants to interrupt the program, the functions in \verb|libnormaliz| test the value of the global variable
\begin{Verbatim}
volatile sig_atomic_t nmz_interrupted
\end{Verbatim}
If it is found to be \verb|true|, an \verb|InterruptException| is thrown. This interrupt leaves \verb|libnormaliz|, so that the calling program can process it. The \verb|Cone| still exists, and the data computed in it can still be accessed. Moreover, \verb|compute| can again be applied to it.

The calling program must take care to catch the signal caused by Ctrl-C and to set \verb|nmz_interrupted=1|.
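
A hedged sketch of such a handler (\verb|nmz_interrupted| is declared in the libnormaliz headers):
\begin{Verbatim}
#include <csignal>

void ctrl_c_handler(int) {
    nmz_interrupted = 1; // libnormaliz will throw an InterruptException
}
...
signal(SIGINT, ctrl_c_handler);
try {
    C.compute(ConeProperty::HilbertBasis);
}
catch (const InterruptException&) {
    // C and its computed data survive; compute can be called again
}
\end{Verbatim}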

\subsubsection{Inner parallelization}

By default the cone constructor sets the maximal number of parallel threads to $8$, unless the system has set a lower limit. You can change this value by
\begin{Verbatim}
long set_thread_limit(long t)
\end{Verbatim}
The function returns the previous value.

\verb|set_thread_limit(0)| raises the limit set by libnormaliz to $\infty$.
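
For example (a hedged sketch):
\begin{Verbatim}
long old_limit = set_thread_limit(4); // use at most 4 threads from now on
// ... computations ...
set_thread_limit(old_limit);          // restore the previous limit
\end{Verbatim}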

\subsubsection{Outer parallelization}

The libnormaliz functions can be called by programs that are parallelized via OpenMP themselves. The functions in libnormaliz switch off nested parallelization.

As a test program you can compile and run \verb|outerpar| in \verb|source/outerpar|. Compile it by
\begin{Verbatim}
make -f Makefile.classic outerpar
\end{Verbatim}
in \verb|source|.

\subsubsection{Control of terminal output}
By using
\begin{Verbatim}
bool setVerboseDefault(bool v)
\end{Verbatim}
one can control the verbose output of \verb|libnormaliz|. The default value is \verb|false|. This is a global setting that affects all cones constructed afterwards. However, for every cone one can set an individual value of \verb|verbose| by
\begin{Verbatim}
bool Cone<Integer>::setVerbose(bool v)
\end{Verbatim}
Both functions return the previous value. In order to apply \verb|setVerbose| to a cone, the cone must already have been constructed, and during construction the global verbose value determines the terminal output. The construction phase does some precomputations, and they may issue some unwanted terminal output. In order to suppress it, one can use
\begin{Verbatim}
void suppressNextConstructorVerbose()
\end{Verbatim}
It sets the value of \verb|verbose| to \verb|false| for the next cone constructed. 


The default values of verbose output and error output are \verb|std::cout| and \verb|std::cerr|. These values can be changed by
\begin{Verbatim}
void setVerboseOutput(std::ostream&)
void setErrorOutput(std::ostream&)
\end{Verbatim}
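For example, the verbose output can be redirected to a log file (a hedged sketch; the stream must live as long as libnormaliz uses it):
\begin{Verbatim}
ofstream log_stream("normaliz.log");
setVerboseOutput(log_stream); // verbose messages go to normaliz.log
setErrorOutput(std::cerr);    // error messages stay on std::cerr
\end{Verbatim}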

\subsubsection{Printing the cone}

The function
\begin{Verbatim}
void Cone<Integer>::write_cone_output(const string& output_file)
\end{Verbatim}
writes the standard \verb|out| file using the content of \verb|output_file| instead of the standard \verb|<project>|. It is meant as a tool for debugging libraries. It is not possible to write any file with a suffix different from \verb|out|.

We also have
\begin{Verbatim}
void Cone<Integer>::write_precomp_for_input(const string& output_file)
\end{Verbatim}
It writes an input file with precomputed data (see Section~\ref{write_precomp}). The file gets the suffix \verb|precomp.in| and uses the content of \verb|output_file| instead of the standard \verb|<project>|.

\subsection{A simple program}\label{maxsimplex}

The example program is a simplified version of the program on which the experiments for the paper ``Quantum jumps of normal polytopes'' by W.~Bruns, J.~Gubeladze and M.~Micha\l{}ek, Discrete Comput.\ Geom.\ 56 (2016), no.\ 1, 181--215, are based. Its goal is to find a maximal normal lattice polytope $P$ in the following sense: there is no normal lattice polytope $Q\supset P$ that has exactly one more lattice point than $P$. ``Normal'' means in this context that the Hilbert basis of the cone over $P$ is given by the lattice points of $P$, considered as degree $1$ elements in the cone.

The program generates normal lattice simplices and checks them for maximality. The dimension is set in the program, as well as the bound for the random coordinates of the vertices.

Let us have a look at \verb|source/maxsimplex/maxsimplex.cpp|. First the more or less standard preamble:

\begin{Verbatim}
#include <cstdlib>
#include <vector>
#include <fstream>
#include <omp.h>
using namespace std;

#include "libnormaliz/libnormaliz.h"
\end{Verbatim}

Since we want to perform a high speed experiment which is not expected to be arithmetically demanding, we choose $64$ bit integers:
\begin{Verbatim}
typedef long long Integer;
\end{Verbatim}

The first routine finds a random normal simplex of dimension \verb|dim|. The coordinates of the vertices are integers between $0$ and \verb|bound|. We are optimistic that such a simplex can be found, and this is indeed no problem in dimension $4$ or $5$.

\begin{Verbatim}
Cone<Integer> rand_simplex(size_t dim, long bound){

    vector<vector<Integer> > vertices(dim+1,vector<Integer> (dim));
    while(true){ // an eternal loop ...
        for(size_t i=0;i<=dim;++i){
            for(size_t j=0;j<dim;++j)
                vertices[i][j]=rand()%(bound+1);
        }

        Cone<Integer> Simplex(Type::polytope,vertices);
        // we must check the rank and normality
        if(Simplex.getRank()==dim+1 && Simplex.isDeg1HilbertBasis())
            return Simplex;
    }
    vector<vector<Integer> > dummy_gen(1,vector<Integer>(1,1)); 
    // to make the compiler happy
    return Cone<Integer>(Type::cone,dummy_gen); 
}
\end{Verbatim}

We are looking for a normal polytope $Q\supset P$ with exactly one more lattice point. The potential extra lattice points $z$ are contained in the matrix \verb|jump_cands|. There are two obstructions for $Q=\operatorname{conv}(P,z)$ to be tested: (i) $z$ is the only extra lattice point and (ii) $Q$ is normal. It makes sense to test them in this order since most of the time condition (i) is already violated and it is much faster to test.
\begin{Verbatim}
bool exists_jump_over(Cone<Integer>& Polytope, 
                      const vector<vector<Integer> >& jump_cands){

    vector<vector<Integer> > test_polytope=Polytope.getExtremeRays();
    test_polytope.resize(test_polytope.size()+1); 
    for(size_t i=0;i<jump_cands.size();++i){
        test_polytope[test_polytope.size()-1]=jump_cands[i];
        Cone<Integer> TestCone(Type::cone,test_polytope);
        if(TestCone.getNrDeg1Elements()!=Polytope.getNrDeg1Elements()+1)
            continue;
        if(TestCone.isDeg1HilbertBasis())
            return true;
    }
    return false;
}
\end{Verbatim}

In order to make the (final) list of candidates $z$ as above we must compute the widths of $P$ over its support hyperplanes.
\begin{Verbatim}
vector<Integer> lattice_widths(Cone<Integer>& Polytope){

    if(!Polytope.isDeg1ExtremeRays()){
        cerr<< "Cone in lattice_widths is not defined by lattice polytope"<< endl;
        exit(1);
    }
    vector<Integer> widths(Polytope.getNrExtremeRays(),0);
    for(size_t i=0;i<Polytope.getNrSupportHyperplanes();++i){
        for(size_t j=0;j<Polytope.getNrExtremeRays();++j){
            // v_scalar_product is a useful function from vector_operations.h
            Integer test=v_scalar_product(Polytope.getSupportHyperplanes()[i],
            Polytope.getExtremeRays()[j]);
            if(test>widths[i])
                widths[i]=test;
        }
    }
    return widths;
}
\end{Verbatim}

\begin{Verbatim}
int main(int argc, char* argv[]){

    time_t ticks;
    srand(time(&ticks));
    cout << "Seed " <<ticks << endl;  // we may want to reproduce the run

    size_t polytope_dim=4;
    size_t cone_dim=polytope_dim+1;
    long bound=6;
    vector<Integer> grading(cone_dim,0); 
           // at some points we need the explicit grading
    grading[polytope_dim]=1;

    size_t nr_simplex=0; // for the progress report
\end{Verbatim}
Since the computations are rather small, we suppress parallelization (except for one step below).
\begin{Verbatim}
    while(true){

#ifdef _OPENMP
        omp_set_num_threads(1);
#endif
    Cone<Integer> Candidate=rand_simplex(polytope_dim,bound);
    nr_simplex++;
    if(nr_simplex%1000 ==0)
    cout << "simplex " << nr_simplex << endl;
\end{Verbatim}
Maximality is tested in $3$ steps. Most often there exists a lattice point $z$ of height $1$ over $P$. If so, then $\operatorname{conv}(P,z)$ contains only $z$ as an extra lattice point and it is automatically normal. In order to find such a point we must move the support hyperplanes outward by lattice distance $1$.
\begin{Verbatim}
    vector<vector<Integer> > supp_hyps_moved=Candidate.getSupportHyperplanes();
    for(size_t i=0;i<supp_hyps_moved.size();++i)
        supp_hyps_moved[i][polytope_dim]+=1;
    Cone<Integer> Candidate1(Type::inequalities,supp_hyps_moved, 
    Type::grading,to_matrix(grading));
    if(Candidate1.getNrDeg1Elements()>Candidate.getNrDeg1Elements()) 
        continue;                     // there exists a point of height 1
\end{Verbatim}
Among the polytopes that have survived the height $1$ test, most nevertheless have suitable points $z$ close to them, and it makes sense not to use the maximum possible height immediately. Note that we must now test normality explicitly.
\begin{Verbatim}
    cout << "No ht 1 jump"<< " #latt " << Candidate.getNrDeg1Elements() << endl; 
    // move the hyperplanes further outward
    for(size_t i=0;i<supp_hyps_moved.size();++i)
        supp_hyps_moved[i][polytope_dim]+=polytope_dim; 
    Cone<Integer> Candidate2(Type::inequalities,supp_hyps_moved,
                             Type::grading,to_matrix(grading));
    cout << "Testing " << Candidate2.getNrDeg1Elements() 
         << " jump candidates" << endl; // including the lattice points in P
    if(exists_jump_over(Candidate,Candidate2.getDeg1Elements()))
            continue;
\end{Verbatim}
Now we can be optimistic that a maximal polytope $P$ has been found, and we test all candidates $z$ that satisfy the maximum possible bound on their lattice distance to $P$.
\begin{Verbatim}
    cout << "No ht <= 1+dim jump" << endl;
    vector<Integer> widths=lattice_widths(Candidate);
    for(size_t i=0;i<supp_hyps_moved.size();++i)
            supp_hyps_moved[i][polytope_dim]+=
                            -polytope_dim+(widths[i])*(polytope_dim-2);
\end{Verbatim}
The computation may become arithmetically critical at this point. Therefore we use \verb|mpz_class| for our cone. The conversion to and from \verb|mpz_class| is done by routines contained in \verb|convert.h|.
\begin{Verbatim}
    vector<vector<mpz_class> > mpz_supp_hyps;
    convert(mpz_supp_hyps,supp_hyps_moved);
    vector<mpz_class> mpz_grading=convertTo<vector<mpz_class> >(grading);
\end{Verbatim}
The computation may need some time now. Therefore we allow a little bit of parallelization.
\begin{Verbatim}
#ifdef _OPENMP
        omp_set_num_threads(4);
#endif
\end{Verbatim}
Since $P$ doesn't have many vertices (even if we use these routines for more general polytopes than simplices), we don't expect too many vertices for the enlarged polytope. In this situation it makes sense to set the algorithmic variant \verb|Approximate|.
\begin{Verbatim}
    Cone<mpz_class> Candidate3(Type::inequalities,mpz_supp_hyps,
                               Type::grading,to_matrix(mpz_grading));
    Candidate3.compute(ConeProperty::Deg1Elements,ConeProperty::Approximate);
    vector<vector<Integer> > jumps_cand; // for conversion from mpz_class
    convert(jumps_cand,Candidate3.getDeg1Elements());
    cout << "Testing " << jumps_cand.size() << " jump candidates" << endl;
    if(exists_jump_over(Candidate, jumps_cand))
        continue;
\end{Verbatim}
Success!
\begin{Verbatim}
    cout << "Maximal simplex found" << endl;
    cout << "Vertices" << endl;
    Candidate.getExtremeRaysMatrix().pretty_print(cout); // a goody from matrix.h
    cout << "Number of lattice points = " << Candidate.getNrDeg1Elements();
    cout << " Multiplicity = " << Candidate.getMultiplicity() << endl; 

    } // end while
} // end main
\end{Verbatim}

For the compilation of \verb|maxsimplex.cpp| use
\begin{Verbatim}
make -f Makefile.classic maxsimplex
\end{Verbatim}
in \verb|source|. Running the program needs a little bit of patience. However, within a few hours a maximal simplex should have emerged. From a log file:
\begin{Verbatim}
simplex 143000
No ht 1 jump #latt 9
Testing 22 jump candidates
No ht 1 jump #latt 10
Testing 30 jump candidates
No ht 1 jump #latt 29
Testing 39 jump candidates
No ht <= 1+dim jump
Testing 173339 jump candidates
Maximal simplex found
Vertices
1 3 5 3 1
2 3 0 3 1
3 0 5 5 1
5 2 2 1 1
6 5 6 2 1
Number of lattice points = 29 Multiplicity = 275
\end{Verbatim}

\end{small}

\newpage

\section{Normaliz interactive: PyNormaliz}\label{PyNormaliz}

\begin{small}
	
PyNormaliz serves three purposes:
\begin{itemize}
	\item It is the bridge from Normaliz to SageMath.
	\item It provides an interactive access to Normaliz from a Python command line.
	\item It is a flexible environment for the exploration of Normaliz.
\end{itemize}
In the following we describe the use of PyNormaliz from a Python command line and document the basic functions that allow the access from SageMath.

For a brief introduction please consult the PyNormaliz tutorial at \url{https://nbviewer.jupyter.org/github/Normaliz/PyNormaliz/blob/main/doc/PyNormaliz_Tutorial.ipynb}.

You can also open the tutorial for PyNormaliz interactively on \url{https://mybinder.org} following the link \url{https://mybinder.org/v2/gh/Normaliz/NormalizJupyter/master}.

\subsection{Installation}

The PyNormaliz install script assumes that you have executed the
\begin{center}
	\verb|install_normaliz_with_eantic.sh|
\end{center}
script. (It is however possible to install PyNormaliz with fewer optional packages.) In the following we assume that PyNormaliz resides in the subdirectory \verb|PyNormaliz| of the Normaliz directory. This is automatically the case if you have downloaded a Normaliz source package. If you have obtained Normaliz or PyNormaliz in another way, make sure that our assumption is satisfied.

To install PyNormaliz navigate to the Normaliz directory and type
\begin{Verbatim}
./install_pynormaliz.sh --user
\end{Verbatim}
The script detects your Python3 version, assuming the executable is in the \verb|PATH|. Note that the installation stores the produced files in \verb|~/.local|.

If you want to install PyNormaliz system wide,
replace \verb|--user| by \verb|--sudo|. Then you will be asked for your root password.
The following additional options are available for \verb|install_pynormaliz.sh|:
\begin{itemize}
	\item \verb|--python3 <path>|: Path to a python3 executable.
	\item \verb|--prefix <path>|: Path to the Normaliz install path
\end{itemize}

Depending on your setup, you might be able to install PyNormaliz via pip, typing
\begin{Verbatim}
pip3 install PyNormaliz
\end{Verbatim}
at a command prompt.

The installation requires \verb|setuptools|. If it is missing, install it with \verb|pip3|.


\subsection{The high level interface by examples}

PyNormaliz has a high level interface which allows a very intuitive use. We load PyNormaliz:
\begin{Verbatim}
winfried@ryzen:... python3
Python 3.6.9 (default, Oct  8 2020, 12:12:24) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import PyNormaliz
>>> from PyNormaliz import *
\end{Verbatim}

\subsubsection{Creating a cone}

The only available class in PyNormaliz is \verb|Cone|. As often in this manual, ``cone'' includes a lattice of reference, unless we are working in an algebraic number field. We come back to this case below. First we have to create a cone (and a lattice). We can use all input types that are allowed in Normaliz input files. They must be given as named parameters as in the following example:
\begin{Verbatim}
>>> C = Cone(cone = [[1,3],[2,1]])
\end{Verbatim}
This is the example from Section~\ref{cone_ex}. There can be several input matrices. The example shows us how Normaliz matrices are represented as Python types: each row is a \verb|list|, and the matrix then is a \verb|list| whose members are the lists representing the rows. Important: This encoding matches exactly the formatted matrices in Normaliz input files.

It is possible to use (decimal) fractions in the input, but they must be encoded as strings. Our cone from above could be defined by
\begin{Verbatim}
>>> C = Cone(cone = [[1,"3.0"],[1,"1/2"]])
\end{Verbatim}
This creates a \verb|Cone<mpz_class>| on the Normaliz side. One can also create a \verb|Cone<long long>| by
\begin{Verbatim}
>>> C = Cone(cone = [[1,"3.0"],[1,"1/2"]], CreateAsLongLong = True)
\end{Verbatim}

In the following, \verb|Cone| (with a capital C) is a class defined in \verb|PyNormaliz.py|. An instance of this class contains an \verb|NmzCone|, which is the Python equivalent of a \verb|Cone<Integer>| defined on the Normaliz side. The \verb|NmzCone| in the \verb|Cone| \verb|C| is referred to by \verb|C.cone|. This is only important when one wants to access the low level interface.

One can create a cone from a Normaliz input file as follows:
\begin{Verbatim}
C = Cone(file = "example/small")
\end{Verbatim}
It will read the file \verb|small.in| in the directory \verb|example| (relative to the current directory). \verb|CreateAsLongLong = True| can be used here as well.

For polynomial constraints one uses commands like
\begin{Verbatim}
PolyEq = ["x[1] -x[2]^2", "x[2]*x[3] - 27"]
C.SetPolynomialEquations(PolyEq)
\end{Verbatim}
The argument of \verb|SetPolynomialEquations| is a list of strings in which each component represents a polynomial expression. See Sections \ref{poly_input} and \ref{poly_const_input}. The equations are always of type $f(x) = 0$. Similarly, inequalities defined by
\begin{Verbatim}
C.SetPolynomialInequalities(PolyEq)
\end{Verbatim}
are interpreted as $f(x) \ge 0$.

Selected input types:
\begin{itemize}
\item Homogeneous generators:
\ttt{
polytope,
subspace,
cone,
cone\_and\_lattice,
lattice,
monoid}
\item Inhomogeneous generators:
\ttt{
vertices
}
\item Homogeneous constraints:
\ttt{
inequalities,
signs,
equations,
congruences
}
\item  Inhomogeneous constraints:
\ttt{
inhom\_equations,
inhom\_inequalities,
inhom\_congruences
}
\item Linear forms:
\ttt{
grading,
dehomogenization
}
\item Lattice ideals and friends:
\ttt{
lattice\_ideal,
toric\_ideal,
normal\_toric\_ideal
}
\end{itemize}
For explanations and other input types see the Normaliz manual. The input type \verb|constraints| can't be used in PyNormaliz (but it is allowed in input files read by PyNormaliz, as, for example, \verb*|small.in| above).

Shortcuts like \verb|nonnegative| or \verb|total_degree| are available as boolean parameters. They can be set by, for example,
\begin{Verbatim}
C.SetBoolParam("nonnegative")
\end{Verbatim}
The function has an optional argument that can be \verb*|True| or \verb*|False| (though \verb*|False| hardly makes sense). The boolean parameter itself is encoded as a string as in the example. These parameters are
\begin{center}
	\verb*|verbose|, \verb*|nonnegative|, \verb*|total_degree|, \verb*|list_polynomials|, \verb*|convert_equations|, \verb*|no_coord_transf|, \verb*|no_pos_orth_def|
\end{center}
For \verb*|verbose| you can also use \verb*|C.SetVerbose()|.

\subsubsection{Matrices, vectors and numbers}

The matrix format of the input is of course also used in PyNormaliz results:
\begin{Verbatim}
>>> C.HilbertBasis()
[[1, 1], [1, 2], [1, 3], [2, 1]]
\end{Verbatim}
PyNormaliz contains some functions that help reading complicated output. For matrices we can use
\begin{Verbatim}
>>> print_matrix(C.HilbertBasis())
1 1
1 2
1 3
2 1
\end{Verbatim}
Similarly
\begin{Verbatim}
>>> print_matrix(C.SupportHyperplanes())
-1  2
3 -1
\end{Verbatim}
Since our input defines an original monoid, we can ask for the module generators over it:
\begin{Verbatim}
>>> print_matrix(C.ModuleGeneratorsOverOriginalMonoid())
0 0
1 1
1 2
2 2
2 3
\end{Verbatim}
Binomials are retrieved in the same way:
\begin{Verbatim}
>>> print_matrix(C.MarkovBasis())
-1  2 -1 0
-3  1  0 1
-2 -1  1 1
\end{Verbatim}
In this connection note that you can set upper and lower bounds for the degrees in the output of Markov and Gröbner bases:
\begin{Verbatim}
C.SetGBDegreeBound(3)
C.SetGBMinDegree(2)
\end{Verbatim}
If you want to set a monomial order for the Gröbner basis, you must use the \verb|Compute| function:
\begin{Verbatim}
C.Compute("GroebnerBasis", "Lex")
C.GroebnerBasis()
\end{Verbatim}

Some numerical invariants:
\begin{Verbatim}
>>> C.Rank()
2
>>> C.EmbeddingDim()
2
>>> C.ExternalIndex()
1
>>> C.InternalIndex()
5
\end{Verbatim}

If we want to know whether a certain cone property has already been computed, we can ask for it:
\begin{Verbatim}
>>> C.IsComputed("HilbertBasis")
True
\end{Verbatim}
The essential point is that this query does \emph{not} force the computation if the property has not yet been computed.
There are several more computation goals that come as matrices, vectors or numbers. We list all of them:
\begin{itemize}
	\item Matrices: \ttt{    ExtremeRays,
		VerticesOfPolyhedron,
		SupportHyperplanes,
		HilbertBasis,\\
		ModuleGenerators,
		Deg1Elements,
		LatticePoints,
		ModuleGeneratorsOverOriginalMonoid,
		ExcludedFaces,
		OriginalMonoidGenerators,
		MaximalSubspace,
		Equations,
		Congruences,
		GroebnerBasis,
		Representations,
		FusionRings,
		SimpleFusionRings,
		NonSimpleFusionRings
	}
	\item Matrices with floating point entries: \ttt{    ExtremeRaysFloat,
		SuppHypsFloat,
		VerticesFloat}
	\item Vectors: \ttt{    Grading,
		Dehomogenization,
		WitnessNotIntegrallyClosed,
		GeneratorOfInterior,
		CoveringFace,
		AxesScaling,
		SingleLatticePoint,
		SingleFusionRing
	}
	\item Numbers: \ttt{    
		TriangulationSize,
		NumberLatticePoints,
		RecessionRank,
		AffineDim,
		ModuleRank,
		Rank,
		EmbeddingDim,
		ExternalIndex,
		TriangulationDetSum,
		GradingDenom,
		UnitGroupIndex,
		InternalIndex}
\end{itemize}

The numbers have several different representations on the Normaliz side. In Python they are all (long) integers.
\subsubsection{Triangulations, automorphisms and face lattice}
Some of the raw output is complicated:
\begin{Verbatim}
>>> U = C.UnimodularTriangulation()
>>> U
[[[[1, 2], 1, []], [[2, 3], 1, []], [[0, 3], 1, []]], [[1, 3], [2, 1], [1, 1], [1, 2]]]
\end{Verbatim}
Taking a close look, we see two members of the outermost \verb|list|. The second is an ordinary matrix, namely the matrix of the rays of the triangulation:
\begin{Verbatim}
>>> print_matrix(U[1])
1 3
2 1
1 1
1 2
\end{Verbatim}
The first member is not a matrix, but close enough so that we can use \verb|print_matrix|:
\begin{Verbatim}
>>> print_matrix(U[0])
[1, 2] 1 []
[2, 3] 1 []
[0, 3] 1 []
\end{Verbatim}
In each line we find the information on a simplicial cone, first the list of the rays by their indices relative to the matrix of rays (counting rows from $0$). The next is the determinant relative to a lattice basis (in our case the unit vectors). In a unimodular triangulation these determinants must of course be $1$. The third component is the list of excluded faces if we have computed a disjoint decomposition (not done automatically!). This is explained in Section~\ref{Disjoint}.

To see an even more complicated data structure we ask for the combinatorial automorphisms:
\begin{Verbatim}
>>> G = C.CombinatorialAutomorphisms()
>>> G
[2, False, False,  [[[1, 0]], [[0, 1]]], [[], []], [[[1, 0]], [[0, 1]]]]
\end{Verbatim}
There are~$6$ components on the outermost level. The first is the order of the group. The second answers the question whether the integrality of the automorphisms has been checked. The answer is always ``no'' for combinatorial automorphisms, and therefore the third gives the answer ``no'' to the question whether the automorphisms are integral.

The next three contain information on the
\begin{itemize}
	\item extreme rays of the (recession) cone,
	\item the vertices of the polyhedron,
	\item the support hyperplanes
\end{itemize}
in this order. In each of them we find
\begin{itemize}
	\item the action of the group generators on the respective vectors,
	\item their orbits under the group.
\end{itemize}
In our case there are no vertices of the polyhedron (only defined for inhomogeneous input). This explains the empty \verb|list|. Fortunately we can print the complicated result nicely with an explanation:
\begin{Verbatim}
>>> print_automs(G)
order  2
permutations of  extreme rays of (recession) cone
0 :  [1, 0]
orbits of  extreme rays of (recession) cone
0 :  [0, 1]
permutations of  support hyperplanes
0 :  [1, 0]
orbits of  support hyperplanes
0 :  [0, 1]
\end{Verbatim}
It makes sense to have a look at Section~\ref{Automorphisms}. (Here we count from $0$.)

\ttt{AmbientAutomorphisms} and \ttt{InputAutomorphisms} yield a slightly different result. The permutations and orbits in the third element of the outer list now refer to the input vectors. The fourth element gives data for the empty set, as does the fifth for \ttt{InputAutomorphisms}. For \ttt{AmbientAutomorphisms} it lists the permutations and orbits of the coordinates of the ambient lattice. All this is followed by the input vectors for reference. A simple example:
\begin{Verbatim}
>>> C = Cone(cone = [[0,1],[1,0]])
>>> C.AmbientAutomorphisms()
[2, True, True, [[[1, 0]], [[0, 1]]], [[], []], [[[1, 0]], [[0, 1]]], [[0, 1], [1, 0]]]
>>> print_automs(C.AmbientAutomorphisms())
order  2
automorphisms are integral
permutations of  input vectors
0 :  [1, 0]
orbits of  input vectors
0 :  [0, 1]
permutations of  coordinates
0 :  [1, 0]
orbits of  coordinates
0 :  [0, 1]
input vectors
0 1
1 0
\end{Verbatim}
 
Of course, we also want to know the face lattice:
\begin{Verbatim}
>>> C.FaceLattice()
[[[0, 0], 0], [[1, 0], 1], [[0, 1], 1], [[1, 1], 2]]
\end{Verbatim}
Hard to read. Much better:
\begin{Verbatim}
>>> print_matrix(C.FaceLattice())
[0, 0] 0
[1, 0] 1
[0, 1] 1
[1, 1] 2
\end{Verbatim}
So there are four faces. The \verb|list| contains the support hyperplanes that meet in the face and the number is the codimension. The support hyperplanes are given by their row indices relative to the matrix of support hyperplanes. Also see Section~\ref{FaceLattice}. The $f$-vector:
\begin{Verbatim}
>>> C.FVector()
[1, 2, 1]
\end{Verbatim}

If you want to limit the codimension of the faces computed with \verb|FaceLattice| or \verb|FVector|, set the bound by
\begin{Verbatim}
>>> C.SetFaceCodimBound(1)
\end{Verbatim}
Try it and ask for \verb|FaceLattice| once more. If you want to get rid of a previously set bound:
\begin{Verbatim}
>>> C.SetFaceCodimBound()
\end{Verbatim}
or take $-1$ as the argument.

We also have a printer for the Stanley decomposition:
\begin{Verbatim}
>>> print_Stanley_dec(C.StanleyDec())
\end{Verbatim}
Try it.

The cone properties that fall into the categories discussed in this section include: \ttt{    Triangulation,
	UnimodularTriangulation,
	LatticePointTriangulation,
	AllGeneratorsTriangulation,\\
	PlacingTriangulation,
	PullingTriangulation,
	StanleyDec,
	InclusionExclusionData,
	Automorphisms,
	CombinatorialAutomorphisms,
	RationalAutomorphisms,
	EuclideanAutomorphisms,
	AmbientAutomorphisms,
	InputAutomorphisms,
	FaceLattice,
	DualFaceLattice,
	FVector,
	DualFVector,
    FaceLatticeOrbits,
	DualFaceLatticeOrbits,
	FVectorOrbits,
	DualFVectorOrbits,
	Incidence,
	DualIncidence,
	SingularLocus.}

\subsubsection{Hilbert and other series}

Now we turn to the Hilbert series.
\begin{Verbatim}
>>> C.HilbertSeries()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/winfried/../PyNormaliz.py", line 403, in inner
return self._generic_getter(name, **kwargs)
File "/home/winfried/.../PyNormaliz.py", line 393, in _generic_getter
PyNormaliz_cpp.NmzCompute(self.cone, input_list)
PyNormaliz_cpp.NormalizError: Could not compute: 
No grading specified and cannot find one. Cannot compute some requested properties!
\end{Verbatim}
Indeed, we forgot the grading. We could have added it at the time of construction
\begin{Verbatim}
>>> C = Cone(cone = [[1,3],[2,1]], grading = [[1,2]])
\end{Verbatim}
where it must be given as a matrix with a single row. Or we can add it later:
\begin{Verbatim}
>>> C.SetGrading([1,2])
\end{Verbatim}
(A similar function is \verb|SetProjectionCoords|.) We check the grading:
\begin{Verbatim}
>>> C.Grading()
[[1, 2], 1]
\end{Verbatim}
The number $1$ following the vector is the grading denominator.

Now:
\begin{Verbatim}
>>> C.HilbertSeries()
[[1, -1, 0, 1, 0, 0, 0, 1, 0, -1, ..., 0, 0, 0, 0, 1, -1, 1], [1, 28], 0]
\end{Verbatim}
For space reasons we have omitted some components in the first \verb|list|, the numerator of the Hilbert series. The second gives the denominator, and the last is the shift. Much nicer:
\begin{Verbatim}
>>> print_series(C.HilbertSeries())
(1 -  t +  t^3 +  t^7 -  t^9 +  t^10 +  t^12 -  t^13 +  t^14 +  t^19 +  t^24 -  t^25 +  t^26)
---------------------------------------------------------------------------------------------
(1 - t) (1 - t^28) 
\end{Verbatim}

Options can be added as named parameters:
\begin{Verbatim}
>>> print_series(C.HilbertSeries(HSOP = True))
(1 +  t^3 +  t^5 +  t^6 +  t^8)
-------------------------------
(1 - t^4) (1 - t^7)   
\end{Verbatim}
This representation is much more natural in this case. Perhaps we want to see the Hilbert quasipolynomial:
\begin{Verbatim}
>>> print_quasipol(C.HilbertQuasiPolynomial())
28 5
-5 5
...
10 5
5 5
divide all coefficients by  28
\end{Verbatim}
In this case it seems better to print the polynomials as vectors of coefficients.

If the quasipolynomial has a large period and high degree, you may want to restrict the information to only a few coefficients from the top:
\begin{Verbatim}
C.SetNrCoeffQuasiPol(bound)
\end{Verbatim}
The bound $-1$ or \verb|C.SetNrCoeffQuasiPol()| mean ``all'', in case you want to get rid of a previously set bound.

Normaliz can compute the values of the coefficients of the Hilbert series for you:
\begin{Verbatim}
>>> C.HilbertSeriesExpansion(10)
[1, 0, 0, 1, 1, 1, 1, 2, 2, 1, 2]
\end{Verbatim}

For the weighted Ehrhart series we need a polynomial. Let's add it (can also be done in the constructor with \verb|polynomial = <string>|):
\begin{Verbatim}
>>> C.SetPolynomial("x[1]+x[2]")
True
\end{Verbatim}
Then
\begin{Verbatim}
print_series(C.WeightedEhrhartSeries())
\end{Verbatim}
We don't show the result because it is too long for this manual.

The cone properties of this section: \ttt{    HilbertSeries,
	HilbertQuasiPolynomial,
	EhrhartSeries,
	EhrhartQuasiPolynomial,
	WeightedEhrhartSeries,
	WeightedEhrhartQuasiPolynomial}

\subsubsection{Multiplicity, volume and integral}
The first time we see a fraction printed as such:
\begin{Verbatim}
>>> C.Multiplicity()
'5/28'
\end{Verbatim}
Since Python has no built-in type for fractions, we print it as a string.

\begin{Verbatim}
>>> C.EuclideanVolume()
'0.3993'
\end{Verbatim}
The decimal fraction is rounded to $4$ decimals. If you need more precision, you can directly use the low level interface:
\begin{Verbatim}
>>> NmzResult(C.cone,"EuclideanVolume")
0.39929785312496247
\end{Verbatim}
By default, the low level interface returns raw values. We use it once more:
\begin{Verbatim}
>>> NmzResult(C.cone,"EuclideanIntegral")
0.2638217958147073
\end{Verbatim}
We have integrated our polynomial from above. In case we have forgotten it:
\begin{Verbatim}
>>> C.Polynomial()
'x[1]+x[2]'
\end{Verbatim}

For computations with fixed precision one can specify the number of decimal digits:
\begin{Verbatim}
>>> C.setDecimalDigits(50)
\end{Verbatim}
This function is hardly necessary, since the default value of \ttt{100} is almost always satisfactory.

The cone properties of this section: \ttt{    Multiplicity,
	Volume,
	Integral,
	VirtualMultiplicity,
	EuclideanVolume,
	EuclideanIntegral,
	ReesPrimaryMultiplicity
}

\subsubsection{Integer hull and other cones as values}

Let us define a nonintegral polytope (we vary the format of the numbers on purpose):
\begin{Verbatim}
>>> R = Cone(vertices = [["-3/2", '7/5',1], [9,-15,4], ["7.0",8,3]])
>>> R.VerticesOfPolyhedron()
[[-15, 14, 10], [7, 8, 3], [9, -15, 4]]
\end{Verbatim}
The last component of each vector acts as the denominator of the first two, and we recognize the fractions in the input. Numerical invariants available with inhomogeneous input:
\begin{Verbatim}
>>> R.AffineDim()
2
>>> R.RecessionRank()
0
>>> R.LatticePoints()
[[-1, 1, 1], [0, 0, 1], [0, 1, 1], [1, -2, 1], ...  [2, -1, 1], [2, 0, 1], [2, 1, 1], [2, 2, 1]]
>>> H = R.IntegerHull()
>>> H
<Normaliz Cone>
\end{Verbatim}
So we have computed a new cone, the cone over the polytope (in this case) spanned by the lattice points in the polytope with rational vertices \verb|[[-15, 14, 10], [7, 8, 3], [9, -15, 4]]|.
\begin{Verbatim}
>>> H.VerticesOfPolyhedron()
[[-1, 1, 1], [1, -2, 1], [1, 2, 1], [2, -3, 1], [2, 2, 1]]
\end{Verbatim}
The last component is $1$ as it must be for lattice points of the polytope.
\begin{Verbatim}
>>> print_matrix(H.SupportHyperplanes())
-1  0 2
 0 -1 2
 1 -2 3
 1  1 1
 3  2 1
\end{Verbatim}

In the same way as \verb*|IntegerHull| you can get the \verb|ProjectCone|, the result of the cone projection. The third cone that Normaliz produces is the symmetrized cone. It is only an auxiliary cone, not a computation goal itself. See Section~\ref{add_data} for how to access it.

\subsubsection{Boolean values}

We ask our cone \verb|C| many questions:
\begin{Verbatim}
>>> C.IsGorenstein()
False
>>> C.IsDeg1HilbertBasis()
False
>>> C.IsDeg1ExtremeRays()
False
>>> C.IsPointed()
True
>>> C.IsInhomogeneous()
False
>>> C.IsEmptySemiOpen()
...
PyNormaliz_cpp.NormalizError: ...: IsEmptySemiOpen can only be computed with excluded faces
>>> C.IsIntegrallyClosed()
False
>>> 
>>> C.IsReesPrimary()
...
PyNormaliz_cpp.NormalizError: Could not compute: IsReesPrimary !
\end{Verbatim}


\subsubsection{Algebraic polyhedra}

For an algebraic polyhedron we must define the real embedded number field over which the polyhedron is living. This information is given in the cone constructor:
\begin{Verbatim}
>>> A = Cone(number_field=[ "a^2-2", "a", "1.4+/-0.1" ], 
             vertices = [["1/2a", "13/3",1], ["-3a^1",-6,2], [-6, "-1/2a-7",1]])
>>> print_matrix(A.VerticesOfPolyhedron())
    -6 -1/2*a-7 1
-3/2*a       -3 1
 1/2*a     13/3 1
>>> print_matrix(A.VerticesFloat())
-6.0000 -7.7071 1.0000
-2.1213 -3.0000 1.0000
 0.7071  4.3333 1.0000
>>> A.RenfVolume()
'-19*a+42'
>>> A.EuclideanVolume()
'7.5650'
>>> print_matrix(A.LatticePoints())
-5 -6 1
...
-1  1 1
0  3 1
>>> A.NumberFieldData()
('a^2 - 2', '[1.414213562373095048801...8073176679738 +/- 3.57e-64]')
>>> A.GetFieldGeneratorName()
'a'
\end{Verbatim}

The only point to notice is \verb|RenfVolume| that we must use instead of \verb|Volume| here. The number field data show you to what precision $\sqrt2$ had to be computed to make all decisions about positivity for our little polytope.

\subsubsection{Fusion rings}

The definition of fusion rings (see Appendix \ref{fusion_rings}) follows the usual rules. Example:
\begin{Verbatim}
>>> C = Cone(fusion_type = [[1,1,2,2]], fusion_duality = [[0,1,2,3]])
>>> C.FusionRings()
[[0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1]]
\end{Verbatim}
As in ordinary input files, the duality can be omitted if it is the identity. As usual, the type and the duality, which are really vectors, must be disguised as matrices with a single row.

For this simple input there is only one fusion ring. It is of course also returned by 
\begin{Verbatim}
>>> C.SingleFusionRing()
[0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1]
\end{Verbatim} 
as a vector.

We can also ask for \verb*|SimpleFusionRings|, \verb*|NonSimpleFusionRings|, \verb*|LatticePoints|, \verb*|SingleLatticePoint|, \verb*|InductionMatrices| and \verb*|FusionData|. For our example we get the fusion data (line breaks inserted) 
\begin{Verbatim}
[[[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
  [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
  [[0, 0, 1, 0], [0, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 1]],
  [[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 1, 1], [1, 1, 1, 0]]]]
\end{Verbatim}

For suitable input it makes sense to ask for \verb*|ModularGradings|. Then the algorithmic variant \verb*|UseModularGrading| can be applied via the collective compute command discussed below. If there is more than one modular grading, you must pick one by the function \verb*|SetModularGrading(<g>)| where \verb*|<g>| is the number of the grading you want to pick, counted from 1. The same applies to \verb*|SetChosenFusionRing(<r>)| by which we can pick a fusion ring for \verb*|InductionMatrices|.

The input types \verb*|fusion_ring_map|, \verb*|fusion_image_type|, \verb*|fusion_image_ring| and \verb*|fusion_image_duality| can of course be used: they are all given by matrices.

\subsubsection{The collective compute command and algorithmic variants}\label{CollComp}
So far we have asked Normaliz for a single cone property. It is also possible to bundle several computation goals and options in a single compute command:
\begin{Verbatim}
>>> C.Compute("HilbertBasis", "HilbertSeries", "ClassGroup", "DualMode")
True
>>> C.IsComputed("ClassGroup")
True
>>> C.ClassGroup()
[0, 5]
\end{Verbatim}
which means that the class group is isomorphic to $\ZZ/(5)$. The first number $0$ indicates that the class group has rank $0$.

The collective compute command not only allows you to set several computation goals simultaneously. It allows you to specify algorithmic variants, like \verb|DualMode|. There is a whole collection of variants explained elsewhere in this manual:

\ttt{DefaultMode,
Approximate,
BottomDecomposition,
NoBottomDec,
DualMode,
PrimalMode,\\
Projection,
ProjectionFloat,
NoProjection,
Symmetrize,
NoSymmetrization,
NoSubdivision,\\
NoNestedTri, 
KeepOrder,
HSOP,
NoPeriodBound,
NoLLL,
NoRelax,
Descent,
NoDescent,\\
NoGradingDenom,
GradingIsPositive,
ExploitAutomsVectors,
ExploitIsosMult,
StrictIsoTypeCheck,
SignedDec,
NoSignedDec,
FixedPrecision}

\subsubsection{Miscellaneous functions}
In order to get some information about what is going on in Normaliz, we can switch on the terminal output:
\begin{Verbatim}
>>> C = Cone(cone = [[1,3],[2,1]], grading = [[1,2]])
>>> C.SetVerbose()
False
>>> C.HilbertBasis(DualMode = True)
Computing support hyperplanes for the dual mode:
************************************************************
starting full cone computation
Generators sorted lexicographically
Starting primal algorithm (only support hyperplanes) ...
Start simplex 1 2 
Pointed since graded
Select extreme rays via comparison ... done.
------------------------------------------------------------
transforming data... done.
************************************************************
computing Hilbert basis ...
==================================================
cut with halfspace 1 ...
Final sizes: Pos 1 Neg 1 Neutral 0
==================================================
cut with halfspace 2 ...
Final sizes: Pos 3 Neg 3 Neutral 1
Hilbert basis 4
Find degree 1 elements
transforming data... done.
[[1, 1], [2, 1], [1, 2], [1, 3]]
\end{Verbatim}
The return value of \verb|SetVerbose| is the \emph{old value} of \emph{verbose}. We had to redefine \verb|C| to get rid of the already computed Hilbert basis. The very last line is our Hilbert basis.

If we want to see all data computed for \verb|C|, call
\begin{Verbatim}
>>> C.print_properties()
ExtremeRays:                         NumberLatticePoints:
[[2, 1], [1, 3]]                     0
SupportHyperplanes:                  Rank:
[[-1, 2], [3, -1]]                   2
HilbertBasis:                        EmbeddingDim:
[[1, 1], [2, 1], [1, 2], [1, 3]]     2
Deg1Elements:                        IsPointed:
[]                                   True
OriginalMonoidGenerators:            IsDeg1ExtremeRays:
[[1, 3], [2, 1]]                     False
MaximalSubspace:                     IsDeg1HilbertBasis:
[]                                   False
Grading:                             IsIntegrallyClosed:
[[1, 2], 1]                          False
GradingDenom:                        IsInhomogeneous:
1                                    False
UnitGroupIndex:                      Sublattice:
1                                    [[[1, 0], [0, 1]], [[1, 0], [0, 1]], 1]
InternalIndex:
\end{Verbatim}
Typeset in two columns. The last property we see is \verb|Sublattice|. It consists of two matrices and a number. See Section~\ref{coord} for the interpretation.

Finally, we can write a Normaliz output file:
\begin{Verbatim}
>>> C.WriteOutputFile("Wonderful")
True
\end{Verbatim}
Now you should find a file \verb|Wonderful.out| in the current directory. Note that additional compulsory output files are written as well. For example, \verb*|Wonderful.aut| is written if an automorphism group has been computed. It is not possible to write truly optional output files like \verb*|Wonderful.gen|. If you want one of them, you must use Python methods.

One can also write a file for the input of precomputed data:
\begin{Verbatim}
>>> C.WritePrecompData("Wonderful")
True
\end{Verbatim}
It creates the file \verb|Wonderful.precomp.in|.

\subsection{The low level interface}

The low level interface is contained in \ttt{NormalizModule.cpp}. Its functions are listed in\\ \verb|PyNormaliz_cppMethods[]|. They allow the construction of an \verb|NmzCone| (accompanied by a lattice), the computation in it, and give access to the computation results. The use of the low level interface is indirectly explained by the examples above. Therefore we keep the discussion short.

\subsubsection{The main functions}

For the construction one uses
\begin{Verbatim}
NmzCone(**kwargs)
\end{Verbatim}
The keyword arguments \ttt{kwargs} transport Normaliz input types and the corresponding matrices in Python format. In addition we must use \ttt{number\_field} for algebraic polyhedra. You can use \ttt{polynomial} for computations with a polynomial weight. (There is also an extra function for setting the polynomial; see below.) You can also ask for a \verb|Cone<long long>| by adding \verb|CreateAsLongLong = True|.

\textbf{Once and for all:} in the functions listed in the following that apply to a specific \verb|NmzCone|, this \verb|NmzCone| must be the first argument in \verb|*args|, apart from obvious exceptions that do not depend on a specific cone. In interactive use one should note that a \verb*|Cone| produced by the high level interface is NOT an \verb*|NmzCone|, but it contains an \verb*|NmzCone|, as shown in the following example:
\begin{Verbatim}
>>> from PyNormaliz import *
>>> C= Cone(cone =  [[1,0]]);
>>> NmzGetHilbertSeriesExpansion(C,25);
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
PyNormaliz_cpp.NormalizInterfaceError: First argument must be a cone
>>> NmzGetHilbertSeriesExpansion(C.cone,25);
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> 
\end{Verbatim}

Computations are started by
\begin{Verbatim}
NmzCompute(*args)
\end{Verbatim}
The arguments list the computation goals and options as strings.

Access to the computation results is given by
\begin{Verbatim}
NmzResult(*args, **kwargs)
\end{Verbatim}
There must be exactly two positional arguments. The first is the \verb|NmzCone|, the second names the result to be returned, given as a string.

The \verb|**kwargs| specify handlers, routines that format the raw results of output types that do not exist in Python or should be formatted for another reason. The potential handlers:
\begin{itemize}
	\item[\texttt{RatHandler}] defines the formatting of fractions.
	
	\item[\texttt{FloatHandler}] defines the formatting of floating point numbers.
	
	\item[\texttt{NumberfieldElementHandler}] defines the formatting of number field elements.
	
	\item[\texttt{VectorHandler}] defines the formatting of vectors.
	
	\item[\texttt{MatrixHandler}] defines the formatting of matrices.
\end{itemize}

The default handler for vectors and matrices is \verb|list|, and there is not much point in changing it. If you don't like lists, you can set \verb|VectorHandler=tuple|, for example. But especially \verb|RatHandler| and \verb|NumberfieldElementHandler| are very useful since the raw versions are difficult to read. Examples of handlers can be found in \verb|PyNormaliz.py|.

\textbf{Note:}\enspace When \verb|NmzResult| is called, its first action is to reset the handlers to the raw format. Then the \verb|kwargs| are evaluated. In other words: the values of the handlers are only applied to the current result, and not to future ones.

In the same way as the data access functions of Normaliz, \verb|NmzResult| triggers the computation of the required result if it has not been computed yet. Whether a result has been computed can be checked by
\begin{Verbatim}
NmzIsComputed(*args)
\end{Verbatim}
The second of the exactly two arguments names the result whose computation status is to be checked, given as a string.

\subsubsection{Additional input and modification of existing cones}

These functions allow the input of data that cannot be passed through the cone constructor or modify a cone after construction. For example:
\begin{Verbatim}
NmzSetGrading(cone, grading)
\end{Verbatim}
The grading is a vector encoded as a Python list. Similarly
\begin{Verbatim}
NmzSetProjectionCoords(cone, coordinates)
\end{Verbatim}
where \verb|coordinates| is a list with entries $0$ or $1$.

\begin{Verbatim}
NmzSetPolynomial(cone, polynomial)
\end{Verbatim}
The polynomial is given as a string.

\begin{Verbatim}
NmzSetNrCoeffQuasiPol(cone, number)
NmzSetFaceCodimBound(cone, number)
\end{Verbatim}
do what the names say.

\begin{Verbatim}
NmzModifyCone(cone, type, matrix)
\end{Verbatim}
This is the PyNormaliz version of the libnormaliz function modifyCone. Please have a look at Section~\ref{Modify}.

\subsubsection{Additional data access}\label{add_data}

Some values cannot be returned as cone properties. For them we have additional access functions.

\begin{Verbatim}
NmzGetPolynomial(cone)
\end{Verbatim}
returns the polynomial weight if one has been set.

The functions
\begin{Verbatim}
NmzGetHilbertSeriesExpansion(cone, degree)
NmzGetEhrhartSeriesExpansion(cone, degree)
NmzGetWeightedEhrhartSeriesExpansion(cone, degree)
\end{Verbatim}
return the expansion of the named series up to the given degree as a list of numbers.

\begin{Verbatim}
NmzSymmetrizedCone(cone)
\end{Verbatim}
returns the symmetrized cone as an \verb|NmzCone|.

\begin{Verbatim}
NmzGetRenfInfo(cone)
NmzFieldGenName(cone)
\end{Verbatim}
return the data defining the number field.

\subsubsection{Miscellaneous functions}

\begin{Verbatim}
NmzSetVerbose(cone, value=True)
NmzSetVerboseDefault(value=True)
\end{Verbatim}
The first sets \verb|verbose| to the specified value for the given cone, whereas the second sets it for all subsequently defined cones.

\begin{Verbatim}
NmzConeCopy(cone)
\end{Verbatim}
returns a copy of cone.

\begin{Verbatim}
NmzSetNumberOfNormalizThreads(number)
\end{Verbatim}
does what its name says. The previous number of threads is returned.

\begin{Verbatim}
NmzWriteOutputFile(cone, project)
NmzWritePrecompData(cone, project)
\end{Verbatim}
The first writes a Normaliz output file whose name is the string \verb|project| with the suffix \verb|.out|, the second a file whose name is the string \verb|project| with suffix \verb|precomp.in|.


The functions
\begin{Verbatim}
NmzHasEantic(cone)
NmzHasCoCoA(cone)
NmzHasFlint(cone)
\end{Verbatim}
return \verb|True| or \verb|False|, depending on whether Normaliz has been built with the corresponding package.

\begin{Verbatim}
NmzListConeProperties() 
\end{Verbatim}
lists all cone properties in case you should have forgotten any of them.

\begin{Verbatim}
error_out(PyObject* m)
\end{Verbatim}
writes an error message if something bad has happened.

\subsubsection{Raw formats of numbers}

All Normaliz integers are transformed to Python long integers, and floating point numbers are transformed to Python floats.

Numbers of type \verb|mpq_class| are represented by a \verb|list| with two components on the Python side, namely the numerator and the denominator.

An algebraic number is represented by a \verb|list| whose members are rational numbers each of which is a \verb|list| with two members. They are the coefficients of the polynomial representing the algebraic number.

\end{small}

\newpage

\section{Distributed computation}

\subsection{Volume via signed decomposition}\label{distr_comp}

Normaliz offers the possibility of computing volumes via signed decomposition by distributing the task to several computers or to nodes in a high performance cluster that run independently of each other. The principal approach:
\begin{arab}
\item The first step is the computation of the hollow triangulation and the generic vector on a single machine. This step can require considerable time and memory.
\item These data (and some more) are written to ``hollow tri'' files.
\item The files are read by Normaliz with the \verb|--Chunk| option that makes it compute the contribution to the volume that comes from a single data file (``chunk'') and write this volume to a ``mult''  file.
\item A final run of Normaliz with the \verb|--AddChunks| option reads all the ``mult'' files and adds the partial volumes.
\end{arab}

This is certainly a  robust and flexible approach to distributed computation. While the main purpose of distributed computation is a massive increase in parallelization, one should consider its use even if the computation is done on a single machine. It limits the loss of data caused by system crashes or similar interruptions to a small amount and allows easy repair. Another advantage is that the most time consuming step (3) needs very little RAM for a single ``chunk'', compared to step (1). 

To make Normaliz write the data files and to stop once they have been written, one uses the cone property
\begin{itemize}
	\itemtt[DistributedComp, -{}-DCM] 
\end{itemize}
The size of the blocks can be set by adding
\begin{itemize}
	\itemtt[block\_size\_hollow\_tri <n>] 
\end{itemize}
to the input file where \verb|<n>| is the number of simplices of the full triangulation that should go into a single output file.  The default value chosen by \verb|DistributedComp| is $500,000$.

The output files are
\begin{itemize}
	\itemtta[<project>.hollow\_tri.<n>.gz]{project>.hollow\_tri.<n>.gz}
\end{itemize}
where \verb|<n>| numbers these files consecutively, starting from $0$. As usual, \verb|<project>| is the name of the project. These files are gzipped to save disk space.

Moreover, there is a common data file:
\begin{itemize}
	\itemtta[<project>.basic.data]{project>.basic.data}
\end{itemize}

Each file of the hollow triangulation must be run by Normaliz with the option \verb|--Chunk|. The input is read from \ttt{stdin}, to which the gzipped file(s) must be decompressed and redirected or piped. The directory \verb|source/chunk| contains \verb|run_single.sh| that can be used for this purpose:
\begin{Verbatim}
time zcat $1.hollow_tri.$2.gz | ../normaliz --Chunk 
\end{Verbatim}
where \verb|$1| is the project name and \verb|$2| is the number \verb|<n>| from above. The OpenMP parallelization is set to $8$ threads by this call, but one can add the option \verb|-x=<p>| where \verb|<p>| is the number of parallel threads to be used. Normaliz processes the single blocks with the fixed precision of $100$ decimal digits. The path to normaliz (\verb|../| above) must be adapted to your system.
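For example, to process chunk $0$ of a hypothetical project \verb|MyProj| with $16$ threads:
\begin{Verbatim}
time zcat MyProj.hollow_tri.0.gz | ../normaliz --Chunk -x=16
\end{Verbatim}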

On a cluster system one uses a script to start a job array where our number \verb|<n>| serves as an index for the array. An example:
\begin{Verbatim}
#SBATCH --job-name="CondEffPlur"
#SBATCH --comment="CondEffPlur"
#SBATCH --time=24:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=15000
#SBATCH --array=0-359%100

# each job will see a different ${SLURM_ARRAY_TASK_ID}
../run_single.sh CondEffPlur ${SLURM_ARRAY_TASK_ID}
\end{Verbatim}
In this example \verb|<n>| runs from 0 to 359, and 100 jobs can be processed simultaneously. The number $8$ in \verb|#SBATCH --cpus-per-task=8| corresponds to the number of threads Normaliz is using for internal parallelization. 

Finally, execute 
\begin{Verbatim}
normaliz <project> --AddChunks
\end{Verbatim}
to sum the partial multiplicities in the files \verb|<project>.mult.<n>|. The result is written to the terminal and also to the file \verb|<project>.total.mult|.

\subsection{Lattice points via patching}\label{DistPatch}

The patching algorithm (Section \ref{positive_systems}) allows distributed computation, designed for computation on a high performance cluster (HPC). The parts into which the computation is distributed are called \emph{splits} in the following.

As an example for the whole file structure that we will explain in this section, the directory \verb*|example| contains the file \verb*|split_demo.zip|. Unzipping it creates a directory \verb*|split_demo|. To see the full directory structure, unzip the contained zip files.

The scheme that Normaliz uses is again a job array. As above we assume it is controlled by SLURM. There is a crucial constraint set by the system: an upper limit on the wall clock time of a batch job, i.e., a split. Normaliz can overcome the time limit by using \emph{successive refinement}: if a run of the full array does not complete the computation, it creates temporary files whose contents are then read and exploited by the next refinement.

\subsubsection{Precomputation}

The precomputation is again started by
\begin{itemize}
\itemtt[DistributedComp, -{}-DCM] 
\end{itemize}
It allows the parameter
\begin{itemize}
\itemtta[-X=<s>]{X=<s>}	
\end{itemize}
where \verb*|<s>| is the number of desired splits. The default value is $1000$, unless the maximum possible number of splits is smaller. 

Its only purpose is to find the lowest patch level at which splitting makes sense. (For small computations such a level may not exist.) You should use the same additional options for \verb*|DistributedComp| that you want to use for the main computation.
 
 The result of the precomputation is written to the file
 \begin{itemize}
 	\itemtta[<project>.split.data]{project>.split.data}
 \end{itemize}
The start version is
 \begin{Verbatim}
 refinement 0 <s>
 <l> <s>
 \end{Verbatim}
 \verb*|<l>| is the split level, \verb*|<s>| the number of splits. \verb*|refinement 0| indicates the first refinement (counted from $0$). Later on this file is used to transfer information to the next refinement.
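A hypothetical example with split level $12$ and the default number of $1000$ splits:
\begin{Verbatim}
refinement 0 1000
12 1000
\end{Verbatim}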

A second task that can be performed as a precomputation is solving ``local systems'' and storing the solutions in files that are then read when the solutions are needed, instead of being computed separately by every split. This can save a lot of time and also memory (the latter since certain complicated data structures need not be built). The option is
\begin{itemize}
	\itemtt[SaveLocalSolutions, -{}-SLS] 
\end{itemize}
You must also set a level up to which the local solutions should be precomputed:
\begin{itemize}
	\itemtta[-Q=<l>]{Q=l}
\end{itemize}
This will create files
\begin{itemize}
	\itemtta[<project>.<k>.sls]{project>.<k>.sls} 
\end{itemize}
where \verb*|<k>| runs from $0$ to \verb*|<l>|. It is of course useful to run the precomputation of local solutions before running \verb*|DistributedComp|.
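The two precomputations might be combined as follows (\verb*|MyProj| is a hypothetical project name):
\begin{Verbatim}
# save local solutions up to level 3 in MyProj.<k>.sls
./normaliz -c MyProj --SLS -Q=3
# then find the split level and write MyProj.split.data
./normaliz -c MyProj --DCM -X=1000
\end{Verbatim}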

We remind the reader of the option
\begin{itemize}
	\itemtt[ShortInt]
\end{itemize}
mentioned in Section \ref{positive_systems}.

\subsubsection {Running a refinement}

We use a job array defined by 
\begin{Verbatim}
#!/bin/sh
#SBATCH --job-name="SplitList"
#SBATCH --comment="SplitList"
#SBATCH --time=36:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=20000
#SBATCH --array=0-999

# each job will see a different ${SLURM_ARRAY_TASK_ID}
time ../normaliz -c -x=4 InList --List --Split -X=${SLURM_ARRAY_TASK_ID} --UWP --FusionRings
\end{Verbatim}
for the  list \verb*|InList| of input files (see Appendix \ref{input_list}). The range \verb*|0-999| fits the number of $1000$ splits. It may be necessary to change the range. The number $4$ in \verb|#SBATCH --cpus-per-task=4| corresponds to \verb*|-x=4|.

We need the option
\begin{itemize}
\itemtta[-{}-Split]{Split}
\end{itemize}
to indicate that this is a split computation, and 
\begin{itemize}
\itemtta[-X=<s>]{X=<s>}	
\end{itemize}
gives the number \verb*|<s>|, an index transferred to Normaliz. Example:
\begin{Verbatim}
./normaliz -c -x=4 tough --Split -X=151
\end{Verbatim}

The index \verb*|<s>| is used to identify the data produced by this split, which are written to the file
\begin{itemize}
\itemtta[<project>.<f>.<s>.lat]{project.f>.<s>.lat}
\end{itemize}
where \verb|<f>| is the index of the current refinement. 

If the job has been completed, this file contains the lattice points found in this split in the standard format, namely their number, the embedding dimension, and then the vectors. 
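Such a completed file might look as follows (hypothetical data, $3$ lattice points in embedding dimension $4$):
\begin{Verbatim}
3
4
0 1 1 1
1 0 2 1
2 2 0 1
\end{Verbatim}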

If it has not been completed, then the first line is \verb|preliminary_stage|, followed by intermediate results that can be read by the next refinement.

After the refinement has been finished, the lattice points are harvested by running Normaliz with the option
\begin{itemize}
	\itemtta[-{}-CollectLat, -{}-CLL]{CollectLat}
\end{itemize}
for example
\begin{Verbatim}
./normaliz tough --CLL
\end{Verbatim}
Make sure that all splits have finished before running \verb*|CollectLat|, especially when splitting is applied to a list of input files as described in Appendix \ref{input_list}. A third role of \verb*|-X|:
\begin{itemize}
	\itemtta[-X=<s>]{X=<s>}	
\end{itemize}
With \verb*|--CollectLat| it can be used to set the number of subsplits if further refinement is necessary.
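For example,
\begin{Verbatim}
./normaliz tough --CLL -X=4
\end{Verbatim}
collects the results and, if a further refinement is necessary, prepares it with $4$ subsplits.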

If all jobs have completed their tasks, the total list of lattice points is written to
\begin{itemize}
	\itemtta[<project>.out]{project.out}
\end{itemize}
If at least one \verb|lat| file is still in preliminary stage, Normaliz will realize that, clean up all preliminary \verb|lat| files, and collect all computed lattice points in the file
\begin{itemize}
	\itemtta[<project>.<f>.lat.so\_far]{project.<f>.lat.so\_far}
\end{itemize}
Again \verb*|<f>| is the index of the refinement. The \verb*|lat| files are zipped and then removed for better overview.

A second task of \verb*|--CollectLat| is the preparation of \verb*|<project>.split.data| for the next refinement. The current version is archived in
\begin{itemize}
\itemtta[<project>.<f>.split.data]{project.f.split.data}
\end{itemize}
Then the next refinement can be started in the same way as refinement $0$.

The computation goals \verb*|SingleLatticePoint| and \verb*|SingleFusionRing| ask for a single point. If a split has found such a point, it signals this to the other splits by writing the file
\begin{itemize}
\itemtta[<project>.spst]{project.spst}
\end{itemize}
This file is deleted by \verb*|CollectLat|.

\newpage


\section{Lists of input files}\label{input_list}

In order to have Normaliz run over a list of input files, one produces a file containing their names (and paths relative to the working directory), for instance \verb*|InList| in \verb*|example|:
\begin{Verbatim}
example/small
example/medium
example/big
\end{Verbatim}
That Normaliz is to be run over a list of input files is indicated by the option
\begin{itemize}
	\itemtta[-{}-List]{List}
\end{itemize}
on the command line. All other options on the command line are forwarded to the individual files. As a test, run
\begin{Verbatim}
	./normaliz -c example/InList --List --LongLong
\end{Verbatim}
in the Normaliz directory. Note that the file names in the list must be prefixed with the path from the working directory (in our example the Normaliz directory) to the directory containing the input files.

In addition to \verb*|--Split| and \verb*|-X=<s>| as in Appendix \ref{DistPatch}, two more parameters control the run of Normaliz over a list:

\begin{enumerate}
\item[(1)] If \verb*|--Split| is set, then Normaliz goes over the whole list and runs the split with index \verb*|<s>| given by \verb*|-X=<s>| for every file in the list.

\item[(2)] If \verb*|--Split| is not set, then

\begin{itemize}
	\itemtta[-A=<a>]{A}	means that Normaliz chooses those input files whose index in the list is $\equiv$ \verb*|<a>| modulo \verb*|<z>| where \verb*|<z>| is given by
	\itemtta[-Z=<z>]{Z}. If \verb*|-Z=<z>| is omitted, then \verb*|<z>| is the length of the list so that Normaliz picks exactly one file; see the example after this list.
\end{itemize}

\item[(3)] If neither \verb*|--Split| nor \verb*|-A=<a>| is on the command line, then Normaliz runs over the whole list without splitting for the individual files.
\end{enumerate}
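As an illustration for case (2),
\begin{Verbatim}
./normaliz -c example/InList --List -A=1
\end{Verbatim}
picks exactly one input file of the list since \verb*|-Z| is omitted.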

A typical SLURM file for case (1):
\begin{Verbatim}
#!/bin/sh
#SBATCH --job-name="InList"
#SBATCH --comment="InList"
#SBATCH --time=24:00:00
#SBATCH --ntasks=8
#SBATCH --threads-per-core=1
#SBATCH --mem=80000
#SBATCH --array=0-999

# each job will see a different ${SLURM_ARRAY_TASK_ID}
time ../normaliz -c InList --List --Split -X=${SLURM_ARRAY_TASK_ID}
\end{Verbatim}

A typical file for case (2):
\begin{Verbatim}
#!/bin/sh
#SBATCH --job-name="InList"
#SBATCH --comment="InList"
#SBATCH --time=24:00:00
#SBATCH --ntasks=8
#SBATCH --threads-per-core=1
#SBATCH --mem=80000
#SBATCH --array=0-99

# each job will see a different ${SLURM_ARRAY_TASK_ID}
time ../normaliz -c InList --List -A=${SLURM_ARRAY_TASK_ID} -Z=100
\end{Verbatim}
	
The operations started by \verb*|DistributedComp| or \verb*|CollectLat| can also be performed on an input list.

The time bound set by \verb*|normaliz.time| is shared by all files in such a way that each file gets the same amount of time.

\newpage

\section{Fusion rings}\label{fusion_rings}

The computation of fusion rings is a special case of computing lattice points in a polytope that satisfy polynomial equations. We refer the user to the book \cite{EGNO} for the basic theory and to \cite{ABPP} for our computational approach.

\subsection{The structure of fusion rings}\label{fusion_structure}

A \emph{fusion ring} $R$ is an associative $\ZZ$-algebra, free as a $\ZZ$-module with a fixed basis $b_1,\dots,b_r$. One of the basis elements is the multiplicative unit which is always chosen to be $b_1$. The bilinear map $R\times R\to R$ given by the multiplication $(x,x')\mapsto xx'$ is the bilinear extension of the product $(b_i, b_j)\mapsto b_ib_j$, and this product can be written uniquely in the form
$$
b_i b_j = \sum_{k=1}^r N_{ij}^k b_k, \qquad N_{ij}^k\in \ZZ.
$$
One of the  distinctive features of fusion rings: $N_{ij}^k\ge 0$ for all $i,j,k$. The other is the existence of a \emph{duality}, an involution of the set $\{b_1,\dots,b_r\}$, $b_i\mapsto b_{i^*}$ that extends to an antiautomorphism of $R$, and satisfies the condition $N_{ij}^1 = 1$ if $i=j^*$, and $N_{ij}^1 = 0$ else.

In total the coefficients $N_{ij}^k$ must satisfy the following conditions (in addition to nonnegativity): for all $i,j,k,t$
\begin{itemize}
	\item (Ass) $\sum_s N_{i,j}^s N_{s,k}^t = \sum_s N_{j,k}^s N_{i,s}^t$,		
	\item (Unit) $N_{1,i}^j = N_{i,1}^j = \delta_{i,j}$,	
	\item (Auto) $N_{ij}^{k^*} = N_{j^*i^*}^k$,			
	\item (Dual) $N_{i^*,j}^{1} = N_{ji^*}^{1} = \delta_{i,j}$.					
\end{itemize}

These conditions imply a very useful identity, called \emph{Frobenius reciprocity}:
for all $i,j,k$ one has
$$
N_{i,j}^k = N_{i^*,k}^j = N_{j,k^*}^{i^*} = N_{j^*, i^*}^{k^*} = N_{k^*, i}^{j^*} = N_{k, j^*}^{i}.
$$
Together with the fixed values of $N_{i,j}^k$ with $1\in\{i,j,k\}$, this identity reduces the number of coefficients that we must compute to $\sim (r-1)^3/6$. If commutativity is asked for, the $6$-term identity extends to a $12$-term identity since then $N_{ij}^k= N_{ji}^k$ for all $i,j,k$.

The fundamental property of fusion rings is given by the \emph{Frobenius--Perron theorem}:
\begin{itemize}
\item a square matrix with nonnegative integer entries has a nonnegative real eigenvalue;
\item for the maximum real eigenvalues $d_i$ of the left multiplication by $b_i$, $i=1,\dots,r$, the assignment $b_i \mapsto d_i$, $i=1,\dots,r$ extends to a ring homomorphism $R\to \RR$;
\item it is the only ring homomorphism $R\to \RR$ that has nonnegative values on $b_1,\dots,b_r$.
\end{itemize}

One calls $d_i$ the  \label{FPdim} \emph{Frobenius--Perron dimension $\FPdim(b_i)$ of $b_i$}, and sets $\FPdim(R) = \sum_i d_i^2$.

By definition $d_1,\dots,d_r$ are algebraic integers. We concentrate on the case in which they belong to $\ZZ$. The task to be solved is the computation of all fusion rings of a given \emph{type} $(d_1,\dots,d_r)$ and a given duality $(1^*,\dots,r^*)$ (possibly with the additional condition that $R$ is commutative).

Given the type $(d_1,\dots,d_r)$ and the duality, we must find all nonnegative solutions to the linear equations
$$
N_{ij}^1d_1 +\dots + N_{ij}^rd_r = d_id_j, \qquad i,j = 1,\dots,r
$$
that reflect the homomorphism condition of the assignment $b_i\mapsto d_i$. Furthermore the associativity condition (Ass), which is given by polynomial equations of degree~$2$, must be satisfied.

To set up these equations and to interpret the solutions one must fix coordinates. We do this as follows. The $N_{ij}^k$ with $1\in \{i,j,k\}$ are inserted into the equations with their fixed values $\in \{0,1\}$. Each tuple $(i,j,k)$ with $1\notin \{i,j,k\}$ belongs to a set $FR(i,j,k)$ defined by the $6$-term identity (or the $12$-term identity). This set is represented by its lexicographically smallest member, and the sets are ordered lexicographically by these members.

Examples of input files containing the systems of equations are \verb*|pet.in| and \verb*|baby.in|. \emph{Normaliz can produce the system of equations itself,} and we explain the necessary input types in Section \ref{fusion_input}.

The computation uses the patching variant of project-and-lift (see Section \ref{positive_systems}). The options that control the insertion order of patches can be applied (see Section \ref{patch_order}).

\subsection{Input types and computation goals}\label{fusion_input}

\emph{Note that we count types and dualities from $0$ in the following.}

In order to produce the system of equations for fusion rings itself, Normaliz provides the input types
\begin{itemize}
	\itemtt[fusion\_type] -- the type of the fusion ring, and
	\itemtt[fusion\_duality] -- the duality of the fusion ring.
\end{itemize}

Both are vectors of length \verb*|amb_space|, the fusion rank. If the duality is omitted, Normaliz chooses the identity for it. Example \verb*|bracket_4.in|, using the handy \verb*|amb_space auto|:
\begin{Verbatim}
amb_space auto
fusion_type
[1,1,2,3,3,6,6,8,8,8,12,12]
fusion_duality
[0,1,2,3,4,5,6,7,8,9,11,10]
\end{Verbatim}
The $i$-th entry of the duality is $i^*$ for $i=0,...,r-1$. In our case there is only one transposition: $10^*= 11$. The first entry of the duality vector must be $0$ unless we want to use it to require certain additional conditions; see Section \ref{modular}.

It is of course possible to compute all lattice points satisfying the linear and quadratic equations by asking for \verb*|LatticePoints|. But in general the system has nontrivial automorphisms so that every fusion ring is represented by several isomorphic copies, of which only one is of interest. A further aspect is the distinction between simple and nonsimple fusion rings, where simple means that no proper subset of the basis generates a $\ZZ$-submodule that is a nontrivial fusion ring (to which the duality restricts). The computation goals for fusion rings are
\begin{itemize}
	\itemtt[FusionRings] -- compute all fusion rings (up to automorphisms),
	\itemtt[SimpleFusionRings] -- compute all simple fusion rings (up to automorphisms),
	\itemtt[LatticePoints] -- compute all fusion rings (allowing isomorphic copies).
\end{itemize}
The default computation goal is \verb*|FusionRings|. For
\verb*|bracket_4.in| we get the output file 
\begin{Verbatim}
148 fusion rings up to isomorphism
0 simple fusion rings up to isomorphism
148 nonsimple fusion rings up to isomorphism

Embedding dimension 231

dehomogenization
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 1

***********************************************************************

0 simple fusion rings up to isomorphism:

148 nonsimple fusion rings up to isomorphism:
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 ... 1 3 3 1 1 1
...
\end{Verbatim}
Note that the input for fusion rings is inhomogeneous, so the last coordinate is the homogenizing coordinate.

The computation goals \verb*|FusionRings|  and \verb*|SimpleFusionRings| can also be used for ``full'' input files, provided they have standard names. Moreover, standard names allow the computation of fusion rings without an input file. See Section \ref{virtual_input}.

A further input type is
\begin{itemize}
	\itemtt[candidate\_subring]  -- a $0$-$1$-vector of length \verb*|amb_space|.
\end{itemize}
It specifies a subset of the basis of the fusion ring that is used for testing simplicity: The entries $1$ mark the basis vectors selected for the candidate subring. 
This can be useful for very hard computations. However, ``simple'' must then be understood as ``not containing the candidate'', and ``nonsimple'' as the opposite. Example \verb*|bracket_3_cand.in|:
\begin{Verbatim}
amb_space auto

fusion_type
[1,1,2,3,3,6,6,8,8,8,12,12]

candidate_subring
[1,1,0,0,0,0,0,0,0,0,0,0]
\end{Verbatim}
If a \verb*|candidate_subring| is given, the default computation goal is changed to \verb*|SingleLatticePoint|. If another computation goal is set explicitly, then the \verb*|candidate_subring| is disregarded.

If you only want to find out whether there is a fusion ring for your type and duality, you can use the option
\begin{itemize}
	\itemtt[SingleFusionRing]
\end{itemize}
If there exist simple and nonsimple fusion rings for your data, then it is impossible to predict whether the single fusion ring will be simple or nonsimple. However, the output files will test the single fusion ring for these properties. Example \verb*|bracket_4_single.in|:
\begin{Verbatim}
amb_space auto
fusion_type
[1,1,2,3,3,6,6,8,8,8,12,12]
fusion_duality
[0,1,2,3,4,5,6,7,8,9,11,10]
SingleFusionRing
\end{Verbatim}
It yields the output

\begin{Verbatim}
1 fusion rings up to isomorphism (only single fusion ring  asked for)
0 simple fusion rings up to isomorphism
1 nonsimple fusion rings up to isomorphism

Embedding dimension = 276

dehomogenization
0 0 0 0 0 0 0 0 0 0 0 0 0 0  ...0 0 0 0 0 1 

***********************************************************************

0 simple fusion rings up to isomorphism:

1 nonsimple fusion rings up to isomorphism:
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0...  3 3 1 1 1
\end{Verbatim}
Note that the single fusion ring found is not uniquely determined. The option can save considerable computation time, but only if there exists a fusion ring.

Finally one can also generate an input file with a system of linear equations that constitutes a necessary condition for fusion rings of the given type: if the ``partition system'' has no solution in nonnegative integers, then there are no fusion rings for the given type, regardless of the duality. See \cite{ABPP}. The input type is
\begin{itemize}
	\itemtt[fusion\_type\_for\_partition] -- the type to be tested.
\end{itemize}
The default computation goal is \verb*|SingleLatticePoint| since (at present) we are only interested in solvability. Also \verb*|LatticePoints| is allowed, but be aware of potentially very large numbers of solutions. Example \verb*|bracket_3_part.in|:
\begin{Verbatim}
amb_space auto
fusion_type_for_partition
[1,1,2,3,3,6,6,8,8,8,12,12]
\end{Verbatim}
which yields
\begin{Verbatim}
1 module generators (only single lattice point asked for)

embedding dimension = 57

dehomogenization:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...

Lattice point:
0 0 0 0 0 0 1 0 0 0 0 2 0 0 0 2 0 0 3 0 2 1 ...

***********************************************************************
\end{Verbatim}
Note that the single lattice point is not uniquely determined. 
There are $300$ lattice points in the polytope. 

The user should study Section \ref{patch_order}. It explains several options that control the insertion order of the patches and the heuristic minimization of polynomial equations and inequalities. 


\subsection{Standard names and virtual input files}\label{virtual_input}

Fusion rings can be computed without an input file -- the input file exists ``virtually''. For this variant the project name must contain the fusion type and duality. Such ``standard names'' have the structure
\begin{Verbatim}
[<t>][<d>]
\end{Verbatim}
where \verb*|<t>| is the type and \verb*|<d>| is the duality. Both are comma-separated integer vectors of the same length (the fusion rank). Example:
\begin{Verbatim}
[1,1,2,3,3,6,6,8,8,8,12,12][0,1,2,3,4,5,6,7,8,9,11,10]
\end{Verbatim}
This standard name can be prefixed by a path which defines the directory where the output file is placed.

Note: if there exists a file \verb*|[<t>][<d>].in|, it is read and evaluated. It is not allowed to be empty. So, if you want to have a real input file, it must contain the same data as an input file whose name is not standard.
Commutativity can be forced by starting the duality with \verb*|-1|.
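For example, the standard name
\begin{Verbatim}
[1,1,2,3,3,6,6,8,8,8,12,12][-1,1,2,3,4,5,6,7,8,9,11,10]
\end{Verbatim}
asks for the commutative fusion rings of this type; compare Section \ref{modular}.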

Try
\begin{Verbatim}
/path/to/normaliz -c [1,1,2,3,3,6,6,8,8,8,12,12][0,1,2,3,4,5,6,7,8,9,11,10]
\end{Verbatim}
to see the computation with a virtual input file.

This trick can also be used for partition files where the standard name is only the type. Example:
\begin{Verbatim}
[1,1,2,3,3,6,6,8,8,8,12,12]
\end{Verbatim}
Try
\begin{Verbatim}
/path/to/normaliz -c [1,1,2,3,3,6,6,8,8,8,12,12]
\end{Verbatim}

If you still have ``full'' input files with linear and polynomial equations for fusion rings, you can run them with the computation goals \verb*|FusionRings| or \verb|SimpleFusionRings|. However, the fusion data must be transported by a standard name.

Normaliz can produce a real input file for a standard name by
\begin{itemize}
	\itemtta[-{}-MakeFusionInput, -{}-MFI]{MakeFusionInput}
\end{itemize}
For example,
\begin{Verbatim}
/path/to/normaliz -c [1,1,2,3,3,6,6,8,8,8,12,12] --MFI
\end{Verbatim}
will produce \verb*|[1,1,2,3,3,6,6,8,8,8,12,12].in|.

Virtual input files can be used in lists.

Note: an input file is only generated if it does not exist yet.

\subsection{Nonintegral fusion rings}

It is possible to compute nonintegral fusion rings. For them the type must be specified by elements from an algebraic number field. At present it must be embedded into $\RR$. In order to define the number field, one needs a real input file. Example \verb*|EH1.in|:
\begin{Verbatim}
amb_space auto
number_field min_poly (5+4*a-5*a^2+a^3) embedding [3 +/- 0.5]

fusion_type
[1, (a) (a^2 - a - 1) (2*a^2 - 3*a - 4) (3*a^2 - 5*a - 4) (4*a^2 - 7*a - 6)]
\end{Verbatim}
It gives the output
\begin{Verbatim}
1 fusion rings up to isomorphism
1 simple fusion rings up to isomorphism
0 nonsimple fusion rings up to isomorphism

Embedding dimension 36

dehomogenization
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 

***********************************************************************

1 simple fusion rings up to isomorphism:
1 1 0 0 0 1 0 1 0 0 1 1 1 1 2 1 1 1 1 1 1 2 2 3 3 1 2 2 3 3 4 4 5 6 7 1

0 nonsimple fusion rings up to isomorphism:
\end{Verbatim}

\subsection{Full fusion data}

The output for fusion rings shown in the previous section is the short form that uses Frobenius reciprocity, (Dual) and (Unit). However, Normaliz can also provide the fusion data $(N_{ij}^k)$ in full form. For a single fusion ring it is a list of matrices $M_i$. The matrix $M_i$ contains the numbers $N_{ij}^k$ where $j$ is the row index and $k$ is the column index. (The transpose is the matrix of left multiplication by the basis element $b_i$.)
The option
\begin{itemize}
	\itemtt[FusionData]
\end{itemize}
asks for the additional output file
\begin{itemize}
	\itemtta[<project>.fus]{project.fus}
\end{itemize}
It contains a list of lists, one inner list for every computed fusion ring. As an example we take \verb*|pet_new.in| that produces $2$ fusion rings. The usual output file is \verb*|pet_new.out|:
\begin{Verbatim}
	...
	2 simple fusion rings up to isomorphism:
	1 1 0 0 1 1 0 0 1 1 1 1 1 1 1 1 1 1  ...  1 1 2 1 2 1 2 2 1 1
	1 1 0 0 1 1 0 0 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 2 1 2 2 1 3 0 1
	...
\end{Verbatim}
With the option \verb*|FusionData| in \verb*|pet_new.in| we run
\begin{Verbatim}
	./normaliz -c example/pet_new
\end{Verbatim}
and get the additional file \verb*|pet_new.fus| (with comments \verb|<--- ...|):
\begin{Verbatim}
	[      <--------------  start of list of fusion rings 
	[    <--------------  fusion ring 1
	[  <--------------  matrix 1
	[1,0,0,0,0,0,0],  <---- row 1
	[0,1,0,0,0,0,0],
	...
	[0,0,0,0,0,0,1]  <---- last row
	], <--------------  end of matrix 1
	...
	],   <------------- end of fusion ring 1     
	...
	]     <------------- end of outer list
\end{Verbatim}


\subsection{Necessary conditions for modular categorification}\label{modular}

Fusion rings that allow modular categorification (see \cite{ABPP} and \cite{EGNO}) must satisfy certain conditions. Normaliz can be asked to compute only fusion rings that satisfy them.

\subsubsection{Commutativity}

The first condition is commutativity. It is communicated by setting the first entry of the duality to $-1$. Example \verb*|bracket_4_comm.in|:
\begin{Verbatim}
amb_space auto
fusion_type
[1,1,2,3,3,6,6,8,8,8,12,12]
fusion_duality
[-1,1,2,3,4,5,6,7,8,9,11,10]
\end{Verbatim}
This somewhat strange way to communicate commutativity is necessary because Normaliz must know it already at the time of construction. It would come too late as an algorithmic variant.

Note that forcing commutativity is superfluous if the duality is the identity. However, in other cases it can speed up computations significantly. 

\subsubsection{Graded structure}

Normaliz requires that fusion rings to which the options in this section are to be applied are commutative, as explained in the preceding section.

The basis elements $b_i$ with $d_i=1$ form a group under multiplication. Let $m$ be their number. For modular categorification the ring must be graded with respect to this group. The homogeneous components are modules over the neutral component and are generated as $\ZZ$-modules by subsets $B_i$, $i=1,\dots,m$, of $\{b_0,\dots,b_{r-1}\}$. (We are counting basis elements from $0$.) Set $f_i = \sum_{j\in B_i} d_j^2$. Then $f_1=\dots=f_m$.

See \cite[Section 3.5]{EGNO} for basic facts about gradings of fusion rings and \cite[8.22.9(iii), 4.14.3]{EGNO} for the existence in the case of modular categorification.

At present Normaliz allows only $m\le 4$. The reason for this restriction is that the group table that underlies the grading is uniquely determined by the duality. For groups of higher order further input data would be necessary, at least if one wants to use the grading already in the computation for which it often has a significant effect.

There is another ambiguity that must be taken care of: the type and the duality may allow different partitions of the set of basis vectors that are compatible with the group structure. To be on the safe side one first runs the input file with the option
\begin{itemize}
\itemtt[ModularGradings]
\end{itemize}
Example \verb*|find_mod_grad.in|:
\begin{Verbatim}
amb_space 14
fusion_type
1 1 1 1 2 2 2 2 2 2 2 4 4 4
fusion_duality
-1 1 2 3 4 5 6 7 8 9 10 11 12 13
ModularGradings
\end{Verbatim}
Result:
\begin{Verbatim}
2 modular gradings

***********************************************************************

2 modular gradings:
modular grading 1

0 1 2 3 4 5 6 7 
8 11 
9 12 
10 13 
---------------------
modular grading 2

0 1 2 3 11 
4 5 6 7 8 
9 12 
10 13 
---------------------
\end{Verbatim}
All other modular gradings differ from the two above by automorphisms of the system, and it is enough to consider only one representative in each orbit.
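The condition $f_1=\dots=f_m$ stated above can be checked by hand for grading 1: the component $\{0,\dots,7\}$ contributes $4\cdot 1^2+4\cdot 2^2=20$, and each of the components $\{8,11\}$, $\{9,12\}$ and $\{10,13\}$ contributes $2^2+4^2=20$ as well.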

For the computation one must fix one of the gradings as in the input by
\begin{itemize}
\itemtta[modular\_grading <g>]{modulargrading}
\end{itemize}
where \verb*|<g>| is the index of the grading, as in \verb*|mod_grad.in|:
\begin{Verbatim}
amb_space 14
fusion_type
1 1 1 1 2 2 2 2 2 2 2 4 4 4
fusion_duality
-1 1 2 3 4 5 6 7 8 9 10 11 12 13
modular_grading 2
UseModularGrading
\end{Verbatim}
The option
\begin{itemize}
\itemtt[UseModularGrading]
\end{itemize}
tells Normaliz to use the chosen modular grading. If there is only one modular grading, the choice is superfluous.

\subsubsection{Induction to the center}

Let $R$ be a fusion ring with fusion data $(N_{i,j}^k)$ and basis $\{a_1, \ldots, a_r\}$, where $a_1$ is the unit. For simplicity, we restrict ourselves  to integral and commutative $R$. Both these properties are essential for the discussion below. \emph{However, in version 3.10.5 Normaliz can also deal with noncommutative fusion rings of rank $\le 8$.}

Assume that $R$ admits a categorification into a fusion category $\mathcal{C}$ over the complex field. ($\mathcal{C}$ is not necessarily uniquely determined.) Then the Drinfeld center $Z(\mathcal{C})$ of $\mathcal{C}$ is an integral modular fusion category (see \cite[Section 9.2]{EGNO} for the mathematics). Let $ZR$ be the Grothendieck ring of $Z(\mathcal{C})$ and let $\{b_1, \ldots, b_n\}$ be the basis of $ZR$, where $n \geq r$. By the properties of the Drinfeld center, $ZR$ is an integral commutative $1/2$-Frobenius fusion ring: this means that $\FPdim(b_i)^2$ divides $\FPdim(ZR)$ in $\ZZ$ ($\FPdim$ was introduced on p.~\pageref{FPdim}).

Let $d_i = \FPdim(a_i)$ and $m_i = \FPdim(b_i)$. Then, $\FPdim(R) = \sum_i d_i^2$ and $\FPdim(ZR) = \sum_i m_i^2$. There is a theorem stating that $\FPdim(ZR) = \FPdim(R)^2$ \cite[Thm. 7.16.6]{EGNO}. By the $1/2$-Frobenius property, $m_i^2$ divides $\FPdim(ZR)$, so $m_i$ divides $\FPdim(R)$.

There is a ring morphism $F: ZR \to R$ preserving $\FPdim$, induced by the (so-called) forgetful functor $Z(\mathcal{C}) \to \mathcal{C}$. Thus
\[ F(b_i) = \sum_j F_{i,j} a_j, \]
where $F_{i,j}$ are nonnegative integers and
\[ m_i = \FPdim\Big(\sum_j F_{i,j} a_j\Big) = \sum_j F_{i,j}d_j. \]

There is an additive morphism $I: R \to ZR$ (not preserving $\FPdim$, so not multiplicative) induced by the adjoint of the forgetful functor. As a matrix, $I$ is just the transpose of $F$, i.e.,
\[ I(a_j) = \sum_i F_{i,j} b_i. \]

The $r \times n$ matrix of $I$ is usually called the \emph{induction matrix}. It satisfies a list of properties implying that for a given fusion ring, there are only finitely many possible induction matrices. Normaliz can compute them---at least in principle since the computation may need an astronomical time.

There can be zero, one or several possible induction matrices. If none, then the fusion ring $R$ is excluded from categorification, which is very useful. If there are induction matrices but no $ZR$ compatible with them, then $R$ is excluded as well from categorification. Idem if there are compatible $ZR$ but no modular data. 

In general, for a given fusion ring $R$, several $ZR$ are possible, and several ranks $n$ of $ZR$ are possible. Hence the rank $n$ of $ZR$ is also a variable.

A theorem \cite[Prop. 9.2.2]{EGNO} states that, for all $j$,
\[ F(I(a_j)) = \sum_t a_t a_j a_{t^*}. \]

But
\[ F(I(a_j)) = \sum_k \left( \sum_i F_{i,j} F_{i,k} \right) a_k, \]
and
\[ \sum_t a_t a_j a_{t^*} = \sum_k \left( \sum_{s,t} N_{t,j}^s N_{s,t^*}^k \right) a_k. \]

We get the following equation:
\begin{equation}
\sum_i F_{i,j} F_{i,k} = \sum_{s,t} N_{t,j}^s N_{s,t^*}^k\quad\text{for all}\quad j,k = 1,\dots,r. \label{indequ}
\end{equation}


The left multiplication matrix for $F(I(a_1)) = \sum_t a_t a_{t^*}$ admits eigenvalues $(f_i)_{i=1,..,r}$ called formal codegrees, which, by a theorem \cite[Cor. 2.14]{Ost}, must be integers dividing $\FPdim(R)$. Moreover, for $i \in \{1,..,r\}$, we can choose $m_i = \FPdim(R)/f_i$. Note that this is a negative criterion: if the sum of the multiplicities of these eigenvalues is $< r$, then there is no induction matrix.

Note that
\[ F(I(a_1)) = \sum_t a_t a_{t^*} = \sum_k \left( \sum_t N_{t,t^*}^k \right) a_k, \]
so the left multiplication matrix for $F(I(a_1))$ is
\[ \left( \sum_{t,k} N_{t,t^*}^k N_{k,l}^s \right) _{s,l}. \]

Finally:
\begin{gather*}
 F(b_1) = a_1, \text{ so } F_{1,j} = \delta_{1,j}, \\
 F_{i,1} = 1, \text{ for all } i \in \{1,..,r\}, \\
 F_{i,1} = 0 \text{ for all } i \in \{r+1,\ldots,n\}.
\end{gather*} 

The rows of the matrix $F = (F_{ij})$ must satisfy the condition
$$
t = \sum_{j} F_{ij} d_j, \qquad t\mid \FPdim(R).
$$
The starting point of the computation therefore is to find all solutions to the equation $t = \sum_{j} F_{ij} d_j$ for the divisors $t$ of $\FPdim(R)$ that satisfy the additional conditions just mentioned. Then we must assemble $F$ from these rows so that Equation \eqref{indequ} is satisfied as well as $\sum_{i=1}^n m_i^2 = \FPdim(R)^2$. 
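As an illustration, for the type $(1,1)$ computed below one has $\FPdim(R)=2$ with divisors $1$ and $2$. The matrix $F$ in the output file shown below has the four rows $(1,0)$, $(1,0)$, $(0,1)$, $(0,1)$, each with $m_i=\sum_j F_{ij}d_j=1$, a divisor of $2$, and indeed $\sum_{i=1}^4 m_i^2 = 4 = \FPdim(R)^2$.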

Our computation goal is
\begin{itemize}
\itemtt[InductionMatrices]
\end{itemize}
As an example we take \verb*|[1,1][0,1].in|:
\begin{Verbatim}
amb_space 2
fusion_type
1 1
fusion_duality
0 1
InductionMatrices
\end{Verbatim}
The induction matrices are contained in 
\begin{itemize}
\itemtta[<project>.ind]{project.ind}
\end{itemize}
They are printed with the conventions for $F$ above.
For our example it is  \verb*|[1,1][0,1].ind|:
\begin{Verbatim}
[     <----- begin outer list over the fusion rings computed
  [   <----- inner list of data for the current fusion ring 
    [
      [0,1] <----- the fusion ring in the format of <project>.out
    ],
    [   <----- first matrix F for the current fusion ring
      [1,0],
      [1,0],
      [0,1],
      [0,1]
    ],
    [
      [1,1,1,1] <---- type of the potential ZR
    ],
    [ <---- list of pairs (i,i*) that are possible
      [1,1],
      [2,2],
      [2,3],
      [3,3]
    ]
  ]
]
\end{Verbatim}
 
The rows of $F$ are ordered by ascending $m_i$. The additional data are meant as a help for setting up the input file for the next step that we discuss below. Note that the matrix $F$ does not define the duality of $ZR$. The list of pairs $(i,i^*)$ is meant as a help for finding suitable dualities.

Our example defines only a single fusion ring. In general there are more than one. In this case one can either produce induction matrices for all fusion rings or pick one by
\begin{itemize}
\itemtt[chosen\_fusion\_ring <s>] 
\end{itemize}
where \verb*|<s>| is a number between $1$ and the number of fusion rings computed. Example \verb*|chosen_2.in|:
\begin{Verbatim}
amb_space 5
fusion_type
1 1 2 3 3
fusion_duality
0 1 2 3 4
InductionMatrices
chosen_fusion_ring 2
\end{Verbatim}
You can vary the file by choosing fusion ring 1 (no induction matrix) or by omitting the choice completely.

The second step is computing the potential centers  defined by the matrices $F$. Example \verb*|mini_ind.in| is
\begin{Verbatim}
amb_space auto
fusion_type
[1,1,1,1]
fusion_duality
[0,1,2,3]
fusion_ring_map
[
[1,0],
[1,0],
[0,1],
[0,1]
]
fusion_image_type
[1, 1]
fusion_image_duality
[0, 1]
fusion_image_ring
[0,1]
\end{Verbatim}
Important: The input types
\begin{itemize}
\itemtt[fusion\_image\_type]
\itemtt[fusion\_image\_duality]
\itemtt[fusion\_image\_ring]
\itemtt[fusion\_ring\_map]
\end{itemize}
must be \textbf{formatted} matrices. If the \verb*|fusion_image_type| is missing, it is set to the identity (like \verb*|fusion_duality|).

\verb*|mini_ind.out| is
\begin{Verbatim}
1 fusion rings up to isomorphism
0 simple fusion rings up to isomorphism
1 nonsimple fusion rings up to isomorphism

Embedding dimension = 11

dehomogenization
0 0 0 0 0 0 0 0 0 0 1 

***********************************************************************

0 simple fusion rings up to isomorphism:

1 nonsimple fusion rings up to isomorphism:
0 0 0 0 1 0 0 0 0 0 1
\end{Verbatim}

The input as in \verb*|mini_ind.in| makes Normaliz compute only those fusion rings for the given \verb*|fusion_type| and \verb*|fusion_duality| for which the matrix $F$ defines a homomorphism to the image defined by \verb*|fusion_image_type|, \verb*|fusion_image_duality| and fixed in \verb*|fusion_image_ring|.

A necessary condition is $F_{i,j^*} = F_{i^*,j}$ for $i=1,\dots,n$ and $j= 1,\dots,r$, where $j^*$ is defined by the duality on $R$ and $i^*$ by the duality on $ZR$. It is easy to check that $F$ defines a homomorphism if and only if
$$
%\begin{equation} \label{eq:Zdata}
\sum_{k=1}^n M_{i,j}^k F_{k,t} = \sum_{l,s = 1}^r F_{i,l} F_{j,s} N_{l,s}^t, \qquad i,j = 1,\dots,n, \ t = 1,\dots, r.
%\end{equation}
$$
This system of linear equations for $M_{i,j}^k$ is added to the constraints defining $ZR$.

\emph{Remark.}\enspace All computations above only verify necessary conditions for a Drinfeld center. The fusion rings $ZR$ that are defined by an induction matrix and a compatible duality and for which $F$ is a homomorphism to $R$ are not necessarily Grothendieck rings of Drinfeld centers of the categorifications $\mathcal C$ of $R$. A simple example: if one changes the \verb*|fusion_duality| $[0,1,2,3]$ in \verb*|mini_ind.in| to $[0,1,3,2]$, one obtains the group ring of the cyclic group $C_4$ as $ZR$. But the Drinfeld centers of the categorifications of $C_2$ all have the group ring of $C_2\times C_2$ as their Grothendieck ring (as we got it for the duality $[0,1,2,3]$).