File: gnuastro.texi

gnuastro 0.14-1
11878
11879
11880
11881
11882
11883
11884
11885
11886
11887
11888
11889
11890
11891
11892
11893
11894
11895
11896
11897
11898
11899
11900
11901
11902
11903
11904
11905
11906
11907
11908
11909
11910
11911
11912
11913
11914
11915
11916
11917
11918
11919
11920
11921
11922
11923
11924
11925
11926
11927
11928
11929
11930
11931
11932
11933
11934
11935
11936
11937
11938
11939
11940
11941
11942
11943
11944
11945
11946
11947
11948
11949
11950
11951
11952
11953
11954
11955
11956
11957
11958
11959
11960
11961
11962
11963
11964
11965
11966
11967
11968
11969
11970
11971
11972
11973
11974
11975
11976
11977
11978
11979
11980
11981
11982
11983
11984
11985
11986
11987
11988
11989
11990
11991
11992
11993
11994
11995
11996
11997
11998
11999
12000
12001
12002
12003
12004
12005
12006
12007
12008
12009
12010
12011
12012
12013
12014
12015
12016
12017
12018
12019
12020
12021
12022
12023
12024
12025
12026
12027
12028
12029
12030
12031
12032
12033
12034
12035
12036
12037
12038
12039
12040
12041
12042
12043
12044
12045
12046
12047
12048
12049
12050
12051
12052
12053
12054
12055
12056
12057
12058
12059
12060
12061
12062
12063
12064
12065
12066
12067
12068
12069
12070
12071
12072
12073
12074
12075
12076
12077
12078
12079
12080
12081
12082
12083
12084
12085
12086
12087
12088
12089
12090
12091
12092
12093
12094
12095
12096
12097
12098
12099
12100
12101
12102
12103
12104
12105
12106
12107
12108
12109
12110
12111
12112
12113
12114
12115
12116
12117
12118
12119
12120
12121
12122
12123
12124
12125
12126
12127
12128
12129
12130
12131
12132
12133
12134
12135
12136
12137
12138
12139
12140
12141
12142
12143
12144
12145
12146
12147
12148
12149
12150
12151
12152
12153
12154
12155
12156
12157
12158
12159
12160
12161
12162
12163
12164
12165
12166
12167
12168
12169
12170
12171
12172
12173
12174
12175
12176
12177
12178
12179
12180
12181
12182
12183
12184
12185
12186
12187
12188
12189
12190
12191
12192
12193
12194
12195
12196
12197
12198
12199
12200
12201
12202
12203
12204
12205
12206
12207
12208
12209
12210
12211
12212
12213
12214
12215
12216
12217
12218
12219
12220
12221
12222
12223
12224
12225
12226
12227
12228
12229
12230
12231
12232
12233
12234
12235
12236
12237
12238
12239
12240
12241
12242
12243
12244
12245
12246
12247
12248
12249
12250
12251
12252
12253
12254
12255
12256
12257
12258
12259
12260
12261
12262
12263
12264
12265
12266
12267
12268
12269
12270
12271
12272
12273
12274
12275
12276
12277
12278
12279
12280
12281
12282
12283
12284
12285
12286
12287
12288
12289
12290
12291
12292
12293
12294
12295
12296
12297
12298
12299
12300
12301
12302
12303
12304
12305
12306
12307
12308
12309
12310
12311
12312
12313
12314
12315
12316
12317
12318
12319
12320
12321
12322
12323
12324
12325
12326
12327
12328
12329
12330
12331
12332
12333
12334
12335
12336
12337
12338
12339
12340
12341
12342
12343
12344
12345
12346
12347
12348
12349
12350
12351
12352
12353
12354
12355
12356
12357
12358
12359
12360
12361
12362
12363
12364
12365
12366
12367
12368
12369
12370
12371
12372
12373
12374
12375
12376
12377
12378
12379
12380
12381
12382
12383
12384
12385
12386
12387
12388
12389
12390
12391
12392
12393
12394
12395
12396
12397
12398
12399
12400
12401
12402
12403
12404
12405
12406
12407
12408
12409
12410
12411
12412
12413
12414
12415
12416
12417
12418
12419
12420
12421
12422
12423
12424
12425
12426
12427
12428
12429
12430
12431
12432
12433
12434
12435
12436
12437
12438
12439
12440
12441
12442
12443
12444
12445
12446
12447
12448
12449
12450
12451
12452
12453
12454
12455
12456
12457
12458
12459
12460
12461
12462
12463
12464
12465
12466
12467
12468
12469
12470
12471
12472
12473
12474
12475
12476
12477
12478
12479
12480
12481
12482
12483
12484
12485
12486
12487
12488
12489
12490
12491
12492
12493
12494
12495
12496
12497
12498
12499
12500
12501
12502
12503
12504
12505
12506
12507
12508
12509
12510
12511
12512
12513
12514
12515
12516
12517
12518
12519
12520
12521
12522
12523
12524
12525
12526
12527
12528
12529
12530
12531
12532
12533
12534
12535
12536
12537
12538
12539
12540
12541
12542
12543
12544
12545
12546
12547
12548
12549
12550
12551
12552
12553
12554
12555
12556
12557
12558
12559
12560
12561
12562
12563
12564
12565
12566
12567
12568
12569
12570
12571
12572
12573
12574
12575
12576
12577
12578
12579
12580
12581
12582
12583
12584
12585
12586
12587
12588
12589
12590
12591
12592
12593
12594
12595
12596
12597
12598
12599
12600
12601
12602
12603
12604
12605
12606
12607
12608
12609
12610
12611
12612
12613
12614
12615
12616
12617
12618
12619
12620
12621
12622
12623
12624
12625
12626
12627
12628
12629
12630
12631
12632
12633
12634
12635
12636
12637
12638
12639
12640
12641
12642
12643
12644
12645
12646
12647
12648
12649
12650
12651
12652
12653
12654
12655
12656
12657
12658
12659
12660
12661
12662
12663
12664
12665
12666
12667
12668
12669
12670
12671
12672
12673
12674
12675
12676
12677
12678
12679
12680
12681
12682
12683
12684
12685
12686
12687
12688
12689
12690
12691
12692
12693
12694
12695
12696
12697
12698
12699
12700
12701
12702
12703
12704
12705
12706
12707
12708
12709
12710
12711
12712
12713
12714
12715
12716
12717
12718
12719
12720
12721
12722
12723
12724
12725
12726
12727
12728
12729
12730
12731
12732
12733
12734
12735
12736
12737
12738
12739
12740
12741
12742
12743
12744
12745
12746
12747
12748
12749
12750
12751
12752
12753
12754
12755
12756
12757
12758
12759
12760
12761
12762
12763
12764
12765
12766
12767
12768
12769
12770
12771
12772
12773
12774
12775
12776
12777
12778
12779
12780
12781
12782
12783
12784
12785
12786
12787
12788
12789
12790
12791
12792
12793
12794
12795
12796
12797
12798
12799
12800
12801
12802
12803
12804
12805
12806
12807
12808
12809
12810
12811
12812
12813
12814
12815
12816
12817
12818
12819
12820
12821
12822
12823
12824
12825
12826
12827
12828
12829
12830
12831
12832
12833
12834
12835
12836
12837
12838
12839
12840
12841
12842
12843
12844
12845
12846
12847
12848
12849
12850
12851
12852
12853
12854
12855
12856
12857
12858
12859
12860
12861
12862
12863
12864
12865
12866
12867
12868
12869
12870
12871
12872
12873
12874
12875
12876
12877
12878
12879
12880
12881
12882
12883
12884
12885
12886
12887
12888
12889
12890
12891
12892
12893
12894
12895
12896
12897
12898
12899
12900
12901
12902
12903
12904
12905
12906
12907
12908
12909
12910
12911
12912
12913
12914
12915
12916
12917
12918
12919
12920
12921
12922
12923
12924
12925
12926
12927
12928
12929
12930
12931
12932
12933
12934
12935
12936
12937
12938
12939
12940
12941
12942
12943
12944
12945
12946
12947
12948
12949
12950
12951
12952
12953
12954
12955
12956
12957
12958
12959
12960
12961
12962
12963
12964
12965
12966
12967
12968
12969
12970
12971
12972
12973
12974
12975
12976
12977
12978
12979
12980
12981
12982
12983
12984
12985
12986
12987
12988
12989
12990
12991
12992
12993
12994
12995
12996
12997
12998
12999
13000
13001
13002
13003
13004
13005
13006
13007
13008
13009
13010
13011
13012
13013
13014
13015
13016
13017
13018
13019
13020
13021
13022
13023
13024
13025
13026
13027
13028
13029
13030
13031
13032
13033
13034
13035
13036
13037
13038
13039
13040
13041
13042
13043
13044
13045
13046
13047
13048
13049
13050
13051
13052
13053
13054
13055
13056
13057
13058
13059
13060
13061
13062
13063
13064
13065
13066
13067
13068
13069
13070
13071
13072
13073
13074
13075
13076
13077
13078
13079
13080
13081
13082
13083
13084
13085
13086
13087
13088
13089
13090
13091
13092
13093
13094
13095
13096
13097
13098
13099
13100
13101
13102
13103
13104
13105
13106
13107
13108
13109
13110
13111
13112
13113
13114
13115
13116
13117
13118
13119
13120
13121
13122
13123
13124
13125
13126
13127
13128
13129
13130
13131
13132
13133
13134
13135
13136
13137
13138
13139
13140
13141
13142
13143
13144
13145
13146
13147
13148
13149
13150
13151
13152
13153
13154
13155
13156
13157
13158
13159
13160
13161
13162
13163
13164
13165
13166
13167
13168
13169
13170
13171
13172
13173
13174
13175
13176
13177
13178
13179
13180
13181
13182
13183
13184
13185
13186
13187
13188
13189
13190
13191
13192
13193
13194
13195
13196
13197
13198
13199
13200
13201
13202
13203
13204
13205
13206
13207
13208
13209
13210
13211
13212
13213
13214
13215
13216
13217
13218
13219
13220
13221
13222
13223
13224
13225
13226
13227
13228
13229
13230
13231
13232
13233
13234
13235
13236
13237
13238
13239
13240
13241
13242
13243
13244
13245
13246
13247
13248
13249
13250
13251
13252
13253
13254
13255
13256
13257
13258
13259
13260
13261
13262
13263
13264
13265
13266
13267
13268
13269
13270
13271
13272
13273
13274
13275
13276
13277
13278
13279
13280
13281
13282
13283
13284
13285
13286
13287
13288
13289
13290
13291
13292
13293
13294
13295
13296
13297
13298
13299
13300
13301
13302
13303
13304
13305
13306
13307
13308
13309
13310
13311
13312
13313
13314
13315
13316
13317
13318
13319
13320
13321
13322
13323
13324
13325
13326
13327
13328
13329
13330
13331
13332
13333
13334
13335
13336
13337
13338
13339
13340
13341
13342
13343
13344
13345
13346
13347
13348
13349
13350
13351
13352
13353
13354
13355
13356
13357
13358
13359
13360
13361
13362
13363
13364
13365
13366
13367
13368
13369
13370
13371
13372
13373
13374
13375
13376
13377
13378
13379
13380
13381
13382
13383
13384
13385
13386
13387
13388
13389
13390
13391
13392
13393
13394
13395
13396
13397
13398
13399
13400
13401
13402
13403
13404
13405
13406
13407
13408
13409
13410
13411
13412
13413
13414
13415
13416
13417
13418
13419
13420
13421
13422
13423
13424
13425
13426
13427
13428
13429
13430
13431
13432
13433
13434
13435
13436
13437
13438
13439
13440
13441
13442
13443
13444
13445
13446
13447
13448
13449
13450
13451
13452
13453
13454
13455
13456
13457
13458
13459
13460
13461
13462
13463
13464
13465
13466
13467
13468
13469
13470
13471
13472
13473
13474
13475
13476
13477
13478
13479
13480
13481
13482
13483
13484
13485
13486
13487
13488
13489
13490
13491
13492
13493
13494
13495
13496
13497
13498
13499
13500
13501
13502
13503
13504
13505
13506
13507
13508
13509
13510
13511
13512
13513
13514
13515
13516
13517
13518
13519
13520
13521
13522
13523
13524
13525
13526
13527
13528
13529
13530
13531
13532
13533
13534
13535
13536
13537
13538
13539
13540
13541
13542
13543
13544
13545
13546
13547
13548
13549
13550
13551
13552
13553
13554
13555
13556
13557
13558
13559
13560
13561
13562
13563
13564
13565
13566
13567
13568
13569
13570
13571
13572
13573
13574
13575
13576
13577
13578
13579
13580
13581
13582
13583
13584
13585
13586
13587
13588
13589
13590
13591
13592
13593
13594
13595
13596
13597
13598
13599
13600
13601
13602
13603
13604
13605
13606
13607
13608
13609
13610
13611
13612
13613
13614
13615
13616
13617
13618
13619
13620
13621
13622
13623
13624
13625
13626
13627
13628
13629
13630
13631
13632
13633
13634
13635
13636
13637
13638
13639
13640
13641
13642
13643
13644
13645
13646
13647
13648
13649
13650
13651
13652
13653
13654
13655
13656
13657
13658
13659
13660
13661
13662
13663
13664
13665
13666
13667
13668
13669
13670
13671
13672
13673
13674
13675
13676
13677
13678
13679
13680
13681
13682
13683
13684
13685
13686
13687
13688
13689
13690
13691
13692
13693
13694
13695
13696
13697
13698
13699
13700
13701
13702
13703
13704
13705
13706
13707
13708
13709
13710
13711
13712
13713
13714
13715
13716
13717
13718
13719
13720
13721
13722
13723
13724
13725
13726
13727
13728
13729
13730
13731
13732
13733
13734
13735
13736
13737
13738
13739
13740
13741
13742
13743
13744
13745
13746
13747
13748
13749
13750
13751
13752
13753
13754
13755
13756
13757
13758
13759
13760
13761
13762
13763
13764
13765
13766
13767
13768
13769
13770
13771
13772
13773
13774
13775
13776
13777
13778
13779
13780
13781
13782
13783
13784
13785
13786
13787
13788
13789
13790
13791
13792
13793
13794
13795
13796
13797
13798
13799
13800
13801
13802
13803
13804
13805
13806
13807
13808
13809
13810
13811
13812
13813
13814
13815
13816
13817
13818
13819
13820
13821
13822
13823
13824
13825
13826
13827
13828
13829
13830
13831
13832
13833
13834
13835
13836
13837
13838
13839
13840
13841
13842
13843
13844
13845
13846
13847
13848
13849
13850
13851
13852
13853
13854
13855
13856
13857
13858
13859
13860
13861
13862
13863
13864
13865
13866
13867
13868
13869
13870
13871
13872
13873
13874
13875
13876
13877
13878
13879
13880
13881
13882
13883
13884
13885
13886
13887
13888
13889
13890
13891
13892
13893
13894
13895
13896
13897
13898
13899
13900
13901
13902
13903
13904
13905
13906
13907
13908
13909
13910
13911
13912
13913
13914
13915
13916
13917
13918
13919
13920
13921
13922
13923
13924
13925
13926
13927
13928
13929
13930
13931
13932
13933
13934
13935
13936
13937
13938
13939
13940
13941
13942
13943
13944
13945
13946
13947
13948
13949
13950
13951
13952
13953
13954
13955
13956
13957
13958
13959
13960
13961
13962
13963
13964
13965
13966
13967
13968
13969
13970
13971
13972
13973
13974
13975
13976
13977
13978
13979
13980
13981
13982
13983
13984
13985
13986
13987
13988
13989
13990
13991
13992
13993
13994
13995
13996
13997
13998
13999
14000
14001
14002
14003
14004
14005
14006
14007
14008
14009
14010
14011
14012
14013
14014
14015
14016
14017
14018
14019
14020
14021
14022
14023
14024
14025
14026
14027
14028
14029
14030
14031
14032
14033
14034
14035
14036
14037
14038
14039
14040
14041
14042
14043
14044
14045
14046
14047
14048
14049
14050
14051
14052
14053
14054
14055
14056
14057
14058
14059
14060
14061
14062
14063
14064
14065
14066
14067
14068
14069
14070
14071
14072
14073
14074
14075
14076
14077
14078
14079
14080
14081
14082
14083
14084
14085
14086
14087
14088
14089
14090
14091
14092
14093
14094
14095
14096
14097
14098
14099
14100
14101
14102
14103
14104
14105
14106
14107
14108
14109
14110
14111
14112
14113
14114
14115
14116
14117
14118
14119
14120
14121
14122
14123
14124
14125
14126
14127
14128
14129
14130
14131
14132
14133
14134
14135
14136
14137
14138
14139
14140
14141
14142
14143
14144
14145
14146
14147
14148
14149
14150
14151
14152
14153
14154
14155
14156
14157
14158
14159
14160
14161
14162
14163
14164
14165
14166
14167
14168
14169
14170
14171
14172
14173
14174
14175
14176
14177
14178
14179
14180
14181
14182
14183
14184
14185
14186
14187
14188
14189
14190
14191
14192
14193
14194
14195
14196
14197
14198
14199
14200
14201
14202
14203
14204
14205
14206
14207
14208
14209
14210
14211
14212
14213
14214
14215
14216
14217
14218
14219
14220
14221
14222
14223
14224
14225
14226
14227
14228
14229
14230
14231
14232
14233
14234
14235
14236
14237
14238
14239
14240
14241
14242
14243
14244
14245
14246
14247
14248
14249
14250
14251
14252
14253
14254
14255
14256
14257
14258
14259
14260
14261
14262
14263
14264
14265
14266
14267
14268
14269
14270
14271
14272
14273
14274
14275
14276
14277
14278
14279
14280
14281
14282
14283
14284
14285
14286
14287
14288
14289
14290
14291
14292
14293
14294
14295
14296
14297
14298
14299
14300
14301
14302
14303
14304
14305
14306
14307
14308
14309
14310
14311
14312
14313
14314
14315
14316
14317
14318
14319
14320
14321
14322
14323
14324
14325
14326
14327
14328
14329
14330
14331
14332
14333
14334
14335
14336
14337
14338
14339
14340
14341
14342
14343
14344
14345
14346
14347
14348
14349
14350
14351
14352
14353
14354
14355
14356
14357
14358
14359
14360
14361
14362
14363
14364
14365
14366
14367
14368
14369
14370
14371
14372
14373
14374
14375
14376
14377
14378
14379
14380
14381
14382
14383
14384
14385
14386
14387
14388
14389
14390
14391
14392
14393
14394
14395
14396
14397
14398
14399
14400
14401
14402
14403
14404
14405
14406
14407
14408
14409
14410
14411
14412
14413
14414
14415
14416
14417
14418
14419
14420
14421
14422
14423
14424
14425
14426
14427
14428
14429
14430
14431
14432
14433
14434
14435
14436
14437
14438
14439
14440
14441
14442
14443
14444
14445
14446
14447
14448
14449
14450
14451
14452
14453
14454
14455
14456
14457
14458
14459
14460
14461
14462
14463
14464
14465
14466
14467
14468
14469
14470
14471
14472
14473
14474
14475
14476
14477
14478
14479
14480
14481
14482
14483
14484
14485
14486
14487
14488
14489
14490
14491
14492
14493
14494
14495
14496
14497
14498
14499
14500
14501
14502
14503
14504
14505
14506
14507
14508
14509
14510
14511
14512
14513
14514
14515
14516
14517
14518
14519
14520
14521
14522
14523
14524
14525
14526
14527
14528
14529
14530
14531
14532
14533
14534
14535
14536
14537
14538
14539
14540
14541
14542
14543
14544
14545
14546
14547
14548
14549
14550
14551
14552
14553
14554
14555
14556
14557
14558
14559
14560
14561
14562
14563
14564
14565
14566
14567
14568
14569
14570
14571
14572
14573
14574
14575
14576
14577
14578
14579
14580
14581
14582
14583
14584
14585
14586
14587
14588
14589
14590
14591
14592
14593
14594
14595
14596
14597
14598
14599
14600
14601
14602
14603
14604
14605
14606
14607
14608
14609
14610
14611
14612
14613
14614
14615
14616
14617
14618
14619
14620
14621
14622
14623
14624
14625
14626
14627
14628
14629
14630
14631
14632
14633
14634
14635
14636
14637
14638
14639
14640
14641
14642
14643
14644
14645
14646
14647
14648
14649
14650
14651
14652
14653
14654
14655
14656
14657
14658
14659
14660
14661
14662
14663
14664
14665
14666
14667
14668
14669
14670
14671
14672
14673
14674
14675
14676
14677
14678
14679
14680
14681
14682
14683
14684
14685
14686
14687
14688
14689
14690
14691
14692
14693
14694
14695
14696
14697
14698
14699
14700
14701
14702
14703
14704
14705
14706
14707
14708
14709
14710
14711
14712
14713
14714
14715
14716
14717
14718
14719
14720
14721
14722
14723
14724
14725
14726
14727
14728
14729
14730
14731
14732
14733
14734
14735
14736
14737
14738
14739
14740
14741
14742
14743
14744
14745
14746
14747
14748
14749
14750
14751
14752
14753
14754
14755
14756
14757
14758
14759
14760
14761
14762
14763
14764
14765
14766
14767
14768
14769
14770
14771
14772
14773
14774
14775
14776
14777
14778
14779
14780
14781
14782
14783
14784
14785
14786
14787
14788
14789
14790
14791
14792
14793
14794
14795
14796
14797
14798
14799
14800
14801
14802
14803
14804
14805
14806
14807
14808
14809
14810
14811
14812
14813
14814
14815
14816
14817
14818
14819
14820
14821
14822
14823
14824
14825
14826
14827
14828
14829
14830
14831
14832
14833
14834
14835
14836
14837
14838
14839
14840
14841
14842
14843
14844
14845
14846
14847
14848
14849
14850
14851
14852
14853
14854
14855
14856
14857
14858
14859
14860
14861
14862
14863
14864
14865
14866
14867
14868
14869
14870
14871
14872
14873
14874
14875
14876
14877
14878
14879
14880
14881
14882
14883
14884
14885
14886
14887
14888
14889
14890
14891
14892
14893
14894
14895
14896
14897
14898
14899
14900
14901
14902
14903
14904
14905
14906
14907
14908
14909
14910
14911
14912
14913
14914
14915
14916
14917
14918
14919
14920
14921
14922
14923
14924
14925
14926
14927
14928
14929
14930
14931
14932
14933
14934
14935
14936
14937
14938
14939
14940
14941
14942
14943
14944
14945
14946
14947
14948
14949
14950
14951
14952
14953
14954
14955
14956
14957
14958
14959
14960
14961
14962
14963
14964
14965
14966
14967
14968
14969
14970
14971
14972
14973
14974
14975
14976
14977
14978
14979
14980
14981
14982
14983
14984
14985
14986
14987
14988
14989
14990
14991
14992
14993
14994
14995
14996
14997
14998
14999
15000
15001
15002
15003
15004
15005
15006
15007
15008
15009
15010
15011
15012
15013
15014
15015
15016
15017
15018
15019
15020
15021
15022
15023
15024
15025
15026
15027
15028
15029
15030
15031
15032
15033
15034
15035
15036
15037
15038
15039
15040
15041
15042
15043
15044
15045
15046
15047
15048
15049
15050
15051
15052
15053
15054
15055
15056
15057
15058
15059
15060
15061
15062
15063
15064
15065
15066
15067
15068
15069
15070
15071
15072
15073
15074
15075
15076
15077
15078
15079
15080
15081
15082
15083
15084
15085
15086
15087
15088
15089
15090
15091
15092
15093
15094
15095
15096
15097
15098
15099
15100
15101
15102
15103
15104
15105
15106
15107
15108
15109
15110
15111
15112
15113
15114
15115
15116
15117
15118
15119
15120
15121
15122
15123
15124
15125
15126
15127
15128
15129
15130
15131
15132
15133
15134
15135
15136
15137
15138
15139
15140
15141
15142
15143
15144
15145
15146
15147
15148
15149
15150
15151
15152
15153
15154
15155
15156
15157
15158
15159
15160
15161
15162
15163
15164
15165
15166
15167
15168
15169
15170
15171
15172
15173
15174
15175
15176
15177
15178
15179
15180
15181
15182
15183
15184
15185
15186
15187
15188
15189
15190
15191
15192
15193
15194
15195
15196
15197
15198
15199
15200
15201
15202
15203
15204
15205
15206
15207
15208
15209
15210
15211
15212
15213
15214
15215
15216
15217
15218
15219
15220
15221
15222
15223
15224
15225
15226
15227
15228
15229
15230
15231
15232
15233
15234
15235
15236
15237
15238
15239
15240
15241
15242
15243
15244
15245
15246
15247
15248
15249
15250
15251
15252
15253
15254
15255
15256
15257
15258
15259
15260
15261
15262
15263
15264
15265
15266
15267
15268
15269
15270
15271
15272
15273
15274
15275
15276
15277
15278
15279
15280
15281
15282
15283
15284
15285
15286
15287
15288
15289
15290
15291
15292
15293
15294
15295
15296
15297
15298
15299
15300
15301
15302
15303
15304
15305
15306
15307
15308
15309
15310
15311
15312
15313
15314
15315
15316
15317
15318
15319
15320
15321
15322
15323
15324
15325
15326
15327
15328
15329
15330
15331
15332
15333
15334
15335
15336
15337
15338
15339
15340
15341
15342
15343
15344
15345
15346
15347
15348
15349
15350
15351
15352
15353
15354
15355
15356
15357
15358
15359
15360
15361
15362
15363
15364
15365
15366
15367
15368
15369
15370
15371
15372
15373
15374
15375
15376
15377
15378
15379
15380
15381
15382
15383
15384
15385
15386
15387
15388
15389
15390
15391
15392
15393
15394
15395
15396
15397
15398
15399
15400
15401
15402
15403
15404
15405
15406
15407
15408
15409
15410
15411
15412
15413
15414
15415
15416
15417
15418
15419
15420
15421
15422
15423
15424
15425
15426
15427
15428
15429
15430
15431
15432
15433
15434
15435
15436
15437
15438
15439
15440
15441
15442
15443
15444
15445
15446
15447
15448
15449
15450
15451
15452
15453
15454
15455
15456
15457
15458
15459
15460
15461
15462
15463
15464
15465
15466
15467
15468
15469
15470
15471
15472
15473
15474
15475
15476
15477
15478
15479
15480
15481
15482
15483
15484
15485
15486
15487
15488
15489
15490
15491
15492
15493
15494
15495
15496
15497
15498
15499
15500
15501
15502
15503
15504
15505
15506
15507
15508
15509
15510
15511
15512
15513
15514
15515
15516
15517
15518
15519
15520
15521
15522
15523
15524
15525
15526
15527
15528
15529
15530
15531
15532
15533
15534
15535
15536
15537
15538
15539
15540
15541
15542
15543
15544
15545
15546
15547
15548
15549
15550
15551
15552
15553
15554
15555
15556
15557
15558
15559
15560
15561
15562
15563
15564
15565
15566
15567
15568
15569
15570
15571
15572
15573
15574
15575
15576
15577
15578
15579
15580
15581
15582
15583
15584
15585
15586
15587
15588
15589
15590
15591
15592
15593
15594
15595
15596
15597
15598
15599
15600
15601
15602
15603
15604
15605
15606
15607
15608
15609
15610
15611
15612
15613
15614
15615
15616
15617
15618
15619
15620
15621
15622
15623
15624
15625
15626
15627
15628
15629
15630
15631
15632
15633
15634
15635
15636
15637
15638
15639
15640
15641
15642
15643
15644
15645
15646
15647
15648
15649
15650
15651
15652
15653
15654
15655
15656
15657
15658
15659
15660
15661
15662
15663
15664
15665
15666
15667
15668
15669
15670
15671
15672
15673
15674
15675
15676
15677
15678
15679
15680
15681
15682
15683
15684
15685
15686
15687
15688
15689
15690
15691
15692
15693
15694
15695
15696
15697
15698
15699
15700
15701
15702
15703
15704
15705
15706
15707
15708
15709
15710
15711
15712
15713
15714
15715
15716
15717
15718
15719
15720
15721
15722
15723
15724
15725
15726
15727
15728
15729
15730
15731
15732
15733
15734
15735
15736
15737
15738
15739
15740
15741
15742
15743
15744
15745
15746
15747
15748
15749
15750
15751
15752
15753
15754
15755
15756
15757
15758
15759
15760
15761
15762
15763
15764
15765
15766
15767
15768
15769
15770
15771
15772
15773
15774
15775
15776
15777
15778
15779
15780
15781
15782
15783
15784
15785
15786
15787
15788
15789
15790
15791
15792
15793
15794
15795
15796
15797
15798
15799
15800
15801
15802
15803
15804
15805
15806
15807
15808
15809
15810
15811
15812
15813
15814
15815
15816
15817
15818
15819
15820
15821
15822
15823
15824
15825
15826
15827
15828
15829
15830
15831
15832
15833
15834
15835
15836
15837
15838
15839
15840
15841
15842
15843
15844
15845
15846
15847
15848
15849
15850
15851
15852
15853
15854
15855
15856
15857
15858
15859
15860
15861
15862
15863
15864
15865
15866
15867
15868
15869
15870
15871
15872
15873
15874
15875
15876
15877
15878
15879
15880
15881
15882
15883
15884
15885
15886
15887
15888
15889
15890
15891
15892
15893
15894
15895
15896
15897
15898
15899
15900
15901
15902
15903
15904
15905
15906
15907
15908
15909
15910
15911
15912
15913
15914
15915
15916
15917
15918
15919
15920
15921
15922
15923
15924
15925
15926
15927
15928
15929
15930
15931
15932
15933
15934
15935
15936
15937
15938
15939
15940
15941
15942
15943
15944
15945
15946
15947
15948
15949
15950
15951
15952
15953
15954
15955
15956
15957
15958
15959
15960
15961
15962
15963
15964
15965
15966
15967
15968
15969
15970
15971
15972
15973
15974
15975
15976
15977
15978
15979
15980
15981
15982
15983
15984
15985
15986
15987
15988
15989
15990
15991
15992
15993
15994
15995
15996
15997
15998
15999
16000
16001
16002
16003
16004
16005
16006
16007
16008
16009
16010
16011
16012
16013
16014
16015
16016
16017
16018
16019
16020
16021
16022
16023
16024
16025
16026
16027
16028
16029
16030
16031
16032
16033
16034
16035
16036
16037
16038
16039
16040
16041
16042
16043
16044
16045
16046
16047
16048
16049
16050
16051
16052
16053
16054
16055
16056
16057
16058
16059
16060
16061
16062
16063
16064
16065
16066
16067
16068
16069
16070
16071
16072
16073
16074
16075
16076
16077
16078
16079
16080
16081
16082
16083
16084
16085
16086
16087
16088
16089
16090
16091
16092
16093
16094
16095
16096
16097
16098
16099
16100
16101
16102
16103
16104
16105
16106
16107
16108
16109
16110
16111
16112
16113
16114
16115
16116
16117
16118
16119
16120
16121
16122
16123
16124
16125
16126
16127
16128
16129
16130
16131
16132
16133
16134
16135
16136
16137
16138
16139
16140
16141
16142
16143
16144
16145
16146
16147
16148
16149
16150
16151
16152
16153
16154
16155
16156
16157
16158
16159
16160
16161
16162
16163
16164
16165
16166
16167
16168
16169
16170
16171
16172
16173
16174
16175
16176
16177
16178
16179
16180
16181
16182
16183
16184
16185
16186
16187
16188
16189
16190
16191
16192
16193
16194
16195
16196
16197
16198
16199
16200
16201
16202
16203
16204
16205
16206
16207
16208
16209
16210
16211
16212
16213
16214
16215
16216
16217
16218
16219
16220
16221
16222
16223
16224
16225
16226
16227
16228
16229
16230
16231
16232
16233
16234
16235
16236
16237
16238
16239
16240
16241
16242
16243
16244
16245
16246
16247
16248
16249
16250
16251
16252
16253
16254
16255
16256
16257
16258
16259
16260
16261
16262
16263
16264
16265
16266
16267
16268
16269
16270
16271
16272
16273
16274
16275
16276
16277
16278
16279
16280
16281
16282
16283
16284
16285
16286
16287
16288
16289
16290
16291
16292
16293
16294
16295
16296
16297
16298
16299
16300
16301
16302
16303
16304
16305
16306
16307
16308
16309
16310
16311
16312
16313
16314
16315
16316
16317
16318
16319
16320
16321
16322
16323
16324
16325
16326
16327
16328
16329
16330
16331
16332
16333
16334
16335
16336
16337
16338
16339
16340
16341
16342
16343
16344
16345
16346
16347
16348
16349
16350
16351
16352
16353
16354
16355
16356
16357
16358
16359
16360
16361
16362
16363
16364
16365
16366
16367
16368
16369
16370
16371
16372
16373
16374
16375
16376
16377
16378
16379
16380
16381
16382
16383
16384
16385
16386
16387
16388
16389
16390
16391
16392
16393
16394
16395
16396
16397
16398
16399
16400
16401
16402
16403
16404
16405
16406
16407
16408
16409
16410
16411
16412
16413
16414
16415
16416
16417
16418
16419
16420
16421
16422
16423
16424
16425
16426
16427
16428
16429
16430
16431
16432
16433
16434
16435
16436
16437
16438
16439
16440
16441
16442
16443
16444
16445
16446
16447
16448
16449
16450
16451
16452
16453
16454
16455
16456
16457
16458
16459
16460
16461
16462
16463
16464
16465
16466
16467
16468
16469
16470
16471
16472
16473
16474
16475
16476
16477
16478
16479
16480
16481
16482
16483
16484
16485
16486
16487
16488
16489
16490
16491
16492
16493
16494
16495
16496
16497
16498
16499
16500
16501
16502
16503
16504
16505
16506
16507
16508
16509
16510
16511
16512
16513
16514
16515
16516
16517
16518
16519
16520
16521
16522
16523
16524
16525
16526
16527
16528
16529
16530
16531
16532
16533
16534
16535
16536
16537
16538
16539
16540
16541
16542
16543
16544
16545
16546
16547
16548
16549
16550
16551
16552
16553
16554
16555
16556
16557
16558
16559
16560
16561
16562
16563
16564
16565
16566
16567
16568
16569
16570
16571
16572
16573
16574
16575
16576
16577
16578
16579
16580
16581
16582
16583
16584
16585
16586
16587
16588
16589
16590
16591
16592
16593
16594
16595
16596
16597
16598
16599
16600
16601
16602
16603
16604
16605
16606
16607
16608
16609
16610
16611
16612
16613
16614
16615
16616
16617
16618
16619
16620
16621
16622
16623
16624
16625
16626
16627
16628
16629
16630
16631
16632
16633
16634
16635
16636
16637
16638
16639
16640
16641
16642
16643
16644
16645
16646
16647
16648
16649
16650
16651
16652
16653
16654
16655
16656
16657
16658
16659
16660
16661
16662
16663
16664
16665
16666
16667
16668
16669
16670
16671
16672
16673
16674
16675
16676
16677
16678
16679
16680
16681
16682
16683
16684
16685
16686
16687
16688
16689
16690
16691
16692
16693
16694
16695
16696
16697
16698
16699
16700
16701
16702
16703
16704
16705
16706
16707
16708
16709
16710
16711
16712
16713
16714
16715
16716
16717
16718
16719
16720
16721
16722
16723
16724
16725
16726
16727
16728
16729
16730
16731
16732
16733
16734
16735
16736
16737
16738
16739
16740
16741
16742
16743
16744
16745
16746
16747
16748
16749
16750
16751
16752
16753
16754
16755
16756
16757
16758
16759
16760
16761
16762
16763
16764
16765
16766
16767
16768
16769
16770
16771
16772
16773
16774
16775
16776
16777
16778
16779
16780
16781
16782
16783
16784
16785
16786
16787
16788
16789
16790
16791
16792
16793
16794
16795
16796
16797
16798
16799
16800
16801
16802
16803
16804
16805
16806
16807
16808
16809
16810
16811
16812
16813
16814
16815
16816
16817
16818
16819
16820
16821
16822
16823
16824
16825
16826
16827
16828
16829
16830
16831
16832
16833
16834
16835
16836
16837
16838
16839
16840
16841
16842
16843
16844
16845
16846
16847
16848
16849
16850
16851
16852
16853
16854
16855
16856
16857
16858
16859
16860
16861
16862
16863
16864
16865
16866
16867
16868
16869
16870
16871
16872
16873
16874
16875
16876
16877
16878
16879
16880
16881
16882
16883
16884
16885
16886
16887
16888
16889
16890
16891
16892
16893
16894
16895
16896
16897
16898
16899
16900
16901
16902
16903
16904
16905
16906
16907
16908
16909
16910
16911
16912
16913
16914
16915
16916
16917
16918
16919
16920
16921
16922
16923
16924
16925
16926
16927
16928
16929
16930
16931
16932
16933
16934
16935
16936
16937
16938
16939
16940
16941
16942
16943
16944
16945
16946
16947
16948
16949
16950
16951
16952
16953
16954
16955
16956
16957
16958
16959
16960
16961
16962
16963
16964
16965
16966
16967
16968
16969
16970
16971
16972
16973
16974
16975
16976
16977
16978
16979
16980
16981
16982
16983
16984
16985
16986
16987
16988
16989
16990
16991
16992
16993
16994
16995
16996
16997
16998
16999
17000
17001
17002
17003
17004
17005
17006
17007
17008
17009
17010
17011
17012
17013
17014
17015
17016
17017
17018
17019
17020
17021
17022
17023
17024
17025
17026
17027
17028
17029
17030
17031
17032
17033
17034
17035
17036
17037
17038
17039
17040
17041
17042
17043
17044
17045
17046
17047
17048
17049
17050
17051
17052
17053
17054
17055
17056
17057
17058
17059
17060
17061
17062
17063
17064
17065
17066
17067
17068
17069
17070
17071
17072
17073
17074
17075
17076
17077
17078
17079
17080
17081
17082
17083
17084
17085
17086
17087
17088
17089
17090
17091
17092
17093
17094
17095
17096
17097
17098
17099
17100
17101
17102
17103
17104
17105
17106
17107
17108
17109
17110
17111
17112
17113
17114
17115
17116
17117
17118
17119
17120
17121
17122
17123
17124
17125
17126
17127
17128
17129
17130
17131
17132
17133
17134
17135
17136
17137
17138
17139
17140
17141
17142
17143
17144
17145
17146
17147
17148
17149
17150
17151
17152
17153
17154
17155
17156
17157
17158
17159
17160
17161
17162
17163
17164
17165
17166
17167
17168
17169
17170
17171
17172
17173
17174
17175
17176
17177
17178
17179
17180
17181
17182
17183
17184
17185
17186
17187
17188
17189
17190
17191
17192
17193
17194
17195
17196
17197
17198
17199
17200
17201
17202
17203
17204
17205
17206
17207
17208
17209
17210
17211
17212
17213
17214
17215
17216
17217
17218
17219
17220
17221
17222
17223
17224
17225
17226
17227
17228
17229
17230
17231
17232
17233
17234
17235
17236
17237
17238
17239
17240
17241
17242
17243
17244
17245
17246
17247
17248
17249
17250
17251
17252
17253
17254
17255
17256
17257
17258
17259
17260
17261
17262
17263
17264
17265
17266
17267
17268
17269
17270
17271
17272
17273
17274
17275
17276
17277
17278
17279
17280
17281
17282
17283
17284
17285
17286
17287
17288
17289
17290
17291
17292
17293
17294
17295
17296
17297
17298
17299
17300
17301
17302
17303
17304
17305
17306
17307
17308
17309
17310
17311
17312
17313
17314
17315
17316
17317
17318
17319
17320
17321
17322
17323
17324
17325
17326
17327
17328
17329
17330
17331
17332
17333
17334
17335
17336
17337
17338
17339
17340
17341
17342
17343
17344
17345
17346
17347
17348
17349
17350
17351
17352
17353
17354
17355
17356
17357
17358
17359
17360
17361
17362
17363
17364
17365
17366
17367
17368
17369
17370
17371
17372
17373
17374
17375
17376
17377
17378
17379
17380
17381
17382
17383
17384
17385
17386
17387
17388
17389
17390
17391
17392
17393
17394
17395
17396
17397
17398
17399
17400
17401
17402
17403
17404
17405
17406
17407
17408
17409
17410
17411
17412
17413
17414
17415
17416
17417
17418
17419
17420
17421
17422
17423
17424
17425
17426
17427
17428
17429
17430
17431
17432
17433
17434
17435
17436
17437
17438
17439
17440
17441
17442
17443
17444
17445
17446
17447
17448
17449
17450
17451
17452
17453
17454
17455
17456
17457
17458
17459
17460
17461
17462
17463
17464
17465
17466
17467
17468
17469
17470
17471
17472
17473
17474
17475
17476
17477
17478
17479
17480
17481
17482
17483
17484
17485
17486
17487
17488
17489
17490
17491
17492
17493
17494
17495
17496
17497
17498
17499
17500
17501
17502
17503
17504
17505
17506
17507
17508
17509
17510
17511
17512
17513
17514
17515
17516
17517
17518
17519
17520
17521
17522
17523
17524
17525
17526
17527
17528
17529
17530
17531
17532
17533
17534
17535
17536
17537
17538
17539
17540
17541
17542
17543
17544
17545
17546
17547
17548
17549
17550
17551
17552
17553
17554
17555
17556
17557
17558
17559
17560
17561
17562
17563
17564
17565
17566
17567
17568
17569
17570
17571
17572
17573
17574
17575
17576
17577
17578
17579
17580
17581
17582
17583
17584
17585
17586
17587
17588
17589
17590
17591
17592
17593
17594
17595
17596
17597
17598
17599
17600
17601
17602
17603
17604
17605
17606
17607
17608
17609
17610
17611
17612
17613
17614
17615
17616
17617
17618
17619
17620
17621
17622
17623
17624
17625
17626
17627
17628
17629
17630
17631
17632
17633
17634
17635
17636
17637
17638
17639
17640
17641
17642
17643
17644
17645
17646
17647
17648
17649
17650
17651
17652
17653
17654
17655
17656
17657
17658
17659
17660
17661
17662
17663
17664
17665
17666
17667
17668
17669
17670
17671
17672
17673
17674
17675
17676
17677
17678
17679
17680
17681
17682
17683
17684
17685
17686
17687
17688
17689
17690
17691
17692
17693
17694
17695
17696
17697
17698
17699
17700
17701
17702
17703
17704
17705
17706
17707
17708
17709
17710
17711
17712
17713
17714
17715
17716
17717
17718
17719
17720
17721
17722
17723
17724
17725
17726
17727
17728
17729
17730
17731
17732
17733
17734
17735
17736
17737
17738
17739
17740
17741
17742
17743
17744
17745
17746
17747
17748
17749
17750
17751
17752
17753
17754
17755
17756
17757
17758
17759
17760
17761
17762
17763
17764
17765
17766
17767
17768
17769
17770
17771
17772
17773
17774
17775
17776
17777
17778
17779
17780
17781
17782
17783
17784
17785
17786
17787
17788
17789
17790
17791
17792
17793
17794
17795
17796
17797
17798
17799
17800
17801
17802
17803
17804
17805
17806
17807
17808
17809
17810
17811
17812
17813
17814
17815
17816
17817
17818
17819
17820
17821
17822
17823
17824
17825
17826
17827
17828
17829
17830
17831
17832
17833
17834
17835
17836
17837
17838
17839
17840
17841
17842
17843
17844
17845
17846
17847
17848
17849
17850
17851
17852
17853
17854
17855
17856
17857
17858
17859
17860
17861
17862
17863
17864
17865
17866
17867
17868
17869
17870
17871
17872
17873
17874
17875
17876
17877
17878
17879
17880
17881
17882
17883
17884
17885
17886
17887
17888
17889
17890
17891
17892
17893
17894
17895
17896
17897
17898
17899
17900
17901
17902
17903
17904
17905
17906
17907
17908
17909
17910
17911
17912
17913
17914
17915
17916
17917
17918
17919
17920
17921
17922
17923
17924
17925
17926
17927
17928
17929
17930
17931
17932
17933
17934
17935
17936
17937
17938
17939
17940
17941
17942
17943
17944
17945
17946
17947
17948
17949
17950
17951
17952
17953
17954
17955
17956
17957
17958
17959
17960
17961
17962
17963
17964
17965
17966
17967
17968
17969
17970
17971
17972
17973
17974
17975
17976
17977
17978
17979
17980
17981
17982
17983
17984
17985
17986
17987
17988
17989
17990
17991
17992
17993
17994
17995
17996
17997
17998
17999
18000
18001
18002
18003
18004
18005
18006
18007
18008
18009
18010
18011
18012
18013
18014
18015
18016
18017
18018
18019
18020
18021
18022
18023
18024
18025
18026
18027
18028
18029
18030
18031
18032
18033
18034
18035
18036
18037
18038
18039
18040
18041
18042
18043
18044
18045
18046
18047
18048
18049
18050
18051
18052
18053
18054
18055
18056
18057
18058
18059
18060
18061
18062
18063
18064
18065
18066
18067
18068
18069
18070
18071
18072
18073
18074
18075
18076
18077
18078
18079
18080
18081
18082
18083
18084
18085
18086
18087
18088
18089
18090
18091
18092
18093
18094
18095
18096
18097
18098
18099
18100
18101
18102
18103
18104
18105
18106
18107
18108
18109
18110
18111
18112
18113
18114
18115
18116
18117
18118
18119
18120
18121
18122
18123
18124
18125
18126
18127
18128
18129
18130
18131
18132
18133
18134
18135
18136
18137
18138
18139
18140
18141
18142
18143
18144
18145
18146
18147
18148
18149
18150
18151
18152
18153
18154
18155
18156
18157
18158
18159
18160
18161
18162
18163
18164
18165
18166
18167
18168
18169
18170
18171
18172
18173
18174
18175
18176
18177
18178
18179
18180
18181
18182
18183
18184
18185
18186
18187
18188
18189
18190
18191
18192
18193
18194
18195
18196
18197
18198
18199
18200
18201
18202
18203
18204
18205
18206
18207
18208
18209
18210
18211
18212
18213
18214
18215
18216
18217
18218
18219
18220
18221
18222
18223
18224
18225
18226
18227
18228
18229
18230
18231
18232
18233
18234
18235
18236
18237
18238
18239
18240
18241
18242
18243
18244
18245
18246
18247
18248
18249
18250
18251
18252
18253
18254
18255
18256
18257
18258
18259
18260
18261
18262
18263
18264
18265
18266
18267
18268
18269
18270
18271
18272
18273
18274
18275
18276
18277
18278
18279
18280
18281
18282
18283
18284
18285
18286
18287
18288
18289
18290
18291
18292
18293
18294
18295
18296
18297
18298
18299
18300
18301
18302
18303
18304
18305
18306
18307
18308
18309
18310
18311
18312
18313
18314
18315
18316
18317
18318
18319
18320
18321
18322
18323
18324
18325
18326
18327
18328
18329
18330
18331
18332
18333
18334
18335
18336
18337
18338
18339
18340
18341
18342
18343
18344
18345
18346
18347
18348
18349
18350
18351
18352
18353
18354
18355
18356
18357
18358
18359
18360
18361
18362
18363
18364
18365
18366
18367
18368
18369
18370
18371
18372
18373
18374
18375
18376
18377
18378
18379
18380
18381
18382
18383
18384
18385
18386
18387
18388
18389
18390
18391
18392
18393
18394
18395
18396
18397
18398
18399
18400
18401
18402
18403
18404
18405
18406
18407
18408
18409
18410
18411
18412
18413
18414
18415
18416
18417
18418
18419
18420
18421
18422
18423
18424
18425
18426
18427
18428
18429
18430
18431
18432
18433
18434
18435
18436
18437
18438
18439
18440
18441
18442
18443
18444
18445
18446
18447
18448
18449
18450
18451
18452
18453
18454
18455
18456
18457
18458
18459
18460
18461
18462
18463
18464
18465
18466
18467
18468
18469
18470
18471
18472
18473
18474
18475
18476
18477
18478
18479
18480
18481
18482
18483
18484
18485
18486
18487
18488
18489
18490
18491
18492
18493
18494
18495
18496
18497
18498
18499
18500
18501
18502
18503
18504
18505
18506
18507
18508
18509
18510
18511
18512
18513
18514
18515
18516
18517
18518
18519
18520
18521
18522
18523
18524
18525
18526
18527
18528
18529
18530
18531
18532
18533
18534
18535
18536
18537
18538
18539
18540
18541
18542
18543
18544
18545
18546
18547
18548
18549
18550
18551
18552
18553
18554
18555
18556
18557
18558
18559
18560
18561
18562
18563
18564
18565
18566
18567
18568
18569
18570
18571
18572
18573
18574
18575
18576
18577
18578
18579
18580
18581
18582
18583
18584
18585
18586
18587
18588
18589
18590
18591
18592
18593
18594
18595
18596
18597
18598
18599
18600
18601
18602
18603
18604
18605
18606
18607
18608
18609
18610
18611
18612
18613
18614
18615
18616
18617
18618
18619
18620
18621
18622
18623
18624
18625
18626
18627
18628
18629
18630
18631
18632
18633
18634
18635
18636
18637
18638
18639
18640
18641
18642
18643
18644
18645
18646
18647
18648
18649
18650
18651
18652
18653
18654
18655
18656
18657
18658
18659
18660
18661
18662
18663
18664
18665
18666
18667
18668
18669
18670
18671
18672
18673
18674
18675
18676
18677
18678
18679
18680
18681
18682
18683
18684
18685
18686
18687
18688
18689
18690
18691
18692
18693
18694
18695
18696
18697
18698
18699
18700
18701
18702
18703
18704
18705
18706
18707
18708
18709
18710
18711
18712
18713
18714
18715
18716
18717
18718
18719
18720
18721
18722
18723
18724
18725
18726
18727
18728
18729
18730
18731
18732
18733
18734
18735
18736
18737
18738
18739
18740
18741
18742
18743
18744
18745
18746
18747
18748
18749
18750
18751
18752
18753
18754
18755
18756
18757
18758
18759
18760
18761
18762
18763
18764
18765
18766
18767
18768
18769
18770
18771
18772
18773
18774
18775
18776
18777
18778
18779
18780
18781
18782
18783
18784
18785
18786
18787
18788
18789
18790
18791
18792
18793
18794
18795
18796
18797
18798
18799
18800
18801
18802
18803
18804
18805
18806
18807
18808
18809
18810
18811
18812
18813
18814
18815
18816
18817
18818
18819
18820
18821
18822
18823
18824
18825
18826
18827
18828
18829
18830
18831
18832
18833
18834
18835
18836
18837
18838
18839
18840
18841
18842
18843
18844
18845
18846
18847
18848
18849
18850
18851
18852
18853
18854
18855
18856
18857
18858
18859
18860
18861
18862
18863
18864
18865
18866
18867
18868
18869
18870
18871
18872
18873
18874
18875
18876
18877
18878
18879
18880
18881
18882
18883
18884
18885
18886
18887
18888
18889
18890
18891
18892
18893
18894
18895
18896
18897
18898
18899
18900
18901
18902
18903
18904
18905
18906
18907
18908
18909
18910
18911
18912
18913
18914
18915
18916
18917
18918
18919
18920
18921
18922
18923
18924
18925
18926
18927
18928
18929
18930
18931
18932
18933
18934
18935
18936
18937
18938
18939
18940
18941
18942
18943
18944
18945
18946
18947
18948
18949
18950
18951
18952
18953
18954
18955
18956
18957
18958
18959
18960
18961
18962
18963
18964
18965
18966
18967
18968
18969
18970
18971
18972
18973
18974
18975
18976
18977
18978
18979
18980
18981
18982
18983
18984
18985
18986
18987
18988
18989
18990
18991
18992
18993
18994
18995
18996
18997
18998
18999
19000
19001
19002
19003
19004
19005
19006
19007
19008
19009
19010
19011
19012
19013
19014
19015
19016
19017
19018
19019
19020
19021
19022
19023
19024
19025
19026
19027
19028
19029
19030
19031
19032
19033
19034
19035
19036
19037
19038
19039
19040
19041
19042
19043
19044
19045
19046
19047
19048
19049
19050
19051
19052
19053
19054
19055
19056
19057
19058
19059
19060
19061
19062
19063
19064
19065
19066
19067
19068
19069
19070
19071
19072
19073
19074
19075
19076
19077
19078
19079
19080
19081
19082
19083
19084
19085
19086
19087
19088
19089
19090
19091
19092
19093
19094
19095
19096
19097
19098
19099
19100
19101
19102
19103
19104
19105
19106
19107
19108
19109
19110
19111
19112
19113
19114
19115
19116
19117
19118
19119
19120
19121
19122
19123
19124
19125
19126
19127
19128
19129
19130
19131
19132
19133
19134
19135
19136
19137
19138
19139
19140
19141
19142
19143
19144
19145
19146
19147
19148
19149
19150
19151
19152
19153
19154
19155
19156
19157
19158
19159
19160
19161
19162
19163
19164
19165
19166
19167
19168
19169
19170
19171
19172
19173
19174
19175
19176
19177
19178
19179
19180
19181
19182
19183
19184
19185
19186
19187
19188
19189
19190
19191
19192
19193
19194
19195
19196
19197
19198
19199
19200
19201
19202
19203
19204
19205
19206
19207
19208
19209
19210
19211
19212
19213
19214
19215
19216
19217
19218
19219
19220
19221
19222
19223
19224
19225
19226
19227
19228
19229
19230
19231
19232
19233
19234
19235
19236
19237
19238
19239
19240
19241
19242
19243
19244
19245
19246
19247
19248
19249
19250
19251
19252
19253
19254
19255
19256
19257
19258
19259
19260
19261
19262
19263
19264
19265
19266
19267
19268
19269
19270
19271
19272
19273
19274
19275
19276
19277
19278
19279
19280
19281
19282
19283
19284
19285
19286
19287
19288
19289
19290
19291
19292
19293
19294
19295
19296
19297
19298
19299
19300
19301
19302
19303
19304
19305
19306
19307
19308
19309
19310
19311
19312
19313
19314
19315
19316
19317
19318
19319
19320
19321
19322
19323
19324
19325
19326
19327
19328
19329
19330
19331
19332
19333
19334
19335
19336
19337
19338
19339
19340
19341
19342
19343
19344
19345
19346
19347
19348
19349
19350
19351
19352
19353
19354
19355
19356
19357
19358
19359
19360
19361
19362
19363
19364
19365
19366
19367
19368
19369
19370
19371
19372
19373
19374
19375
19376
19377
19378
19379
19380
19381
19382
19383
19384
19385
19386
19387
19388
19389
19390
19391
19392
19393
19394
19395
19396
19397
19398
19399
19400
19401
19402
19403
19404
19405
19406
19407
19408
19409
19410
19411
19412
19413
19414
19415
19416
19417
19418
19419
19420
19421
19422
19423
19424
19425
19426
19427
19428
19429
19430
19431
19432
19433
19434
19435
19436
19437
19438
19439
19440
19441
19442
19443
19444
19445
19446
19447
19448
19449
19450
19451
19452
19453
19454
19455
19456
19457
19458
19459
19460
19461
19462
19463
19464
19465
19466
19467
19468
19469
19470
19471
19472
19473
19474
19475
19476
19477
19478
19479
19480
19481
19482
19483
19484
19485
19486
19487
19488
19489
19490
19491
19492
19493
19494
19495
19496
19497
19498
19499
19500
19501
19502
19503
19504
19505
19506
19507
19508
19509
19510
19511
19512
19513
19514
19515
19516
19517
19518
19519
19520
19521
19522
19523
19524
19525
19526
19527
19528
19529
19530
19531
19532
19533
19534
19535
19536
19537
19538
19539
19540
19541
19542
19543
19544
19545
19546
19547
19548
19549
19550
19551
19552
19553
19554
19555
19556
19557
19558
19559
19560
19561
19562
19563
19564
19565
19566
19567
19568
19569
19570
19571
19572
19573
19574
19575
19576
19577
19578
19579
19580
19581
19582
19583
19584
19585
19586
19587
19588
19589
19590
19591
19592
19593
19594
19595
19596
19597
19598
19599
19600
19601
19602
19603
19604
19605
19606
19607
19608
19609
19610
19611
19612
19613
19614
19615
19616
19617
19618
19619
19620
19621
19622
19623
19624
19625
19626
19627
19628
19629
19630
19631
19632
19633
19634
19635
19636
19637
19638
19639
19640
19641
19642
19643
19644
19645
19646
19647
19648
19649
19650
19651
19652
19653
19654
19655
19656
19657
19658
19659
19660
19661
19662
19663
19664
19665
19666
19667
19668
19669
19670
19671
19672
19673
19674
19675
19676
19677
19678
19679
19680
19681
19682
19683
19684
19685
19686
19687
19688
19689
19690
19691
19692
19693
19694
19695
19696
19697
19698
19699
19700
19701
19702
19703
19704
19705
19706
19707
19708
19709
19710
19711
19712
19713
19714
19715
19716
19717
19718
19719
19720
19721
19722
19723
19724
19725
19726
19727
19728
19729
19730
19731
19732
19733
19734
19735
19736
19737
19738
19739
19740
19741
19742
19743
19744
19745
19746
19747
19748
19749
19750
19751
19752
19753
19754
19755
19756
19757
19758
19759
19760
19761
19762
19763
19764
19765
19766
19767
19768
19769
19770
19771
19772
19773
19774
19775
19776
19777
19778
19779
19780
19781
19782
19783
19784
19785
19786
19787
19788
19789
19790
19791
19792
19793
19794
19795
19796
19797
19798
19799
19800
19801
19802
19803
19804
19805
19806
19807
19808
19809
19810
19811
19812
19813
19814
19815
19816
19817
19818
19819
19820
19821
19822
19823
19824
19825
19826
19827
19828
19829
19830
19831
19832
19833
19834
19835
19836
19837
19838
19839
19840
19841
19842
19843
19844
19845
19846
19847
19848
19849
19850
19851
19852
19853
19854
19855
19856
19857
19858
19859
19860
19861
19862
19863
19864
19865
19866
19867
19868
19869
19870
19871
19872
19873
19874
19875
19876
19877
19878
19879
19880
19881
19882
19883
19884
19885
19886
19887
19888
19889
19890
19891
19892
19893
19894
19895
19896
19897
19898
19899
19900
19901
19902
19903
19904
19905
19906
19907
19908
19909
19910
19911
19912
19913
19914
19915
19916
19917
19918
19919
19920
19921
19922
19923
19924
19925
19926
19927
19928
19929
19930
19931
19932
19933
19934
19935
19936
19937
19938
19939
19940
19941
19942
19943
19944
19945
19946
19947
19948
19949
19950
19951
19952
19953
19954
19955
19956
19957
19958
19959
19960
19961
19962
19963
19964
19965
19966
19967
19968
19969
19970
19971
19972
19973
19974
19975
19976
19977
19978
19979
19980
19981
19982
19983
19984
19985
19986
19987
19988
19989
19990
19991
19992
19993
19994
19995
19996
19997
19998
19999
20000
20001
20002
20003
20004
20005
20006
20007
20008
20009
20010
20011
20012
20013
20014
20015
20016
20017
20018
20019
20020
20021
20022
20023
20024
20025
20026
20027
20028
20029
20030
20031
20032
20033
20034
20035
20036
20037
20038
20039
20040
20041
20042
20043
20044
20045
20046
20047
20048
20049
20050
20051
20052
20053
20054
20055
20056
20057
20058
20059
20060
20061
20062
20063
20064
20065
20066
20067
20068
20069
20070
20071
20072
20073
20074
20075
20076
20077
20078
20079
20080
20081
20082
20083
20084
20085
20086
20087
20088
20089
20090
20091
20092
20093
20094
20095
20096
20097
20098
20099
20100
20101
20102
20103
20104
20105
20106
20107
20108
20109
20110
20111
20112
20113
20114
20115
20116
20117
20118
20119
20120
20121
20122
20123
20124
20125
20126
20127
20128
20129
20130
20131
20132
20133
20134
20135
20136
20137
20138
20139
20140
20141
20142
20143
20144
20145
20146
20147
20148
20149
20150
20151
20152
20153
20154
20155
20156
20157
20158
20159
20160
20161
20162
20163
20164
20165
20166
20167
20168
20169
20170
20171
20172
20173
\input texinfo @c -*-texinfo-*-

@c ONE SENTENCE PER LINE
@c ---------------------
@c For main printed text in this file, to allow easy tracking of history
@c with Git, we are following a one-sentence-per-line convention.
@c
@c Since the manual is long, this is being done gradually from the start.

@c %**start of header
@setfilename gnuastro.info
@settitle GNU Astronomy Utilities
@documentencoding UTF-8
@allowcodebreaks false
@c@afourpaper

@c %**end of header
@include version.texi
@include formath.texi

@c So dashes and underscores can be used in HTMLs
@allowcodebreaks true

@c So function output type is printed on first line
@deftypefnnewline on

@c Use section titles in cross references, not node titles.
@xrefautomaticsectiontitle on

@c For the indexes:
@syncodeindex vr cp
@syncodeindex pg cp

@c Copyright information:
@copying
This book documents version @value{VERSION} of the GNU Astronomy Utilities (Gnuastro).
Gnuastro provides various programs and libraries for astronomical data manipulation and analysis.

Copyright @copyright{} 2015-2021, Free Software Foundation, Inc.

@quotation
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled ``GNU Free Documentation License''.
@end quotation
@end copying

@c To include in the info directory.
@dircategory Astronomy
@direntry
* Gnuastro: (gnuastro).       GNU Astronomy Utilities.
* libgnuastro: (gnuastro)Gnuastro library. Full Gnuastro library doc.

* help-gnuastro: (gnuastro)help-gnuastro mailing list. Getting help.

* bug-gnuastro: (gnuastro)Report a bug. How to report bugs.

* Arithmetic: (gnuastro)Arithmetic. Arithmetic operations on pixels.
* astarithmetic: (gnuastro)Invoking astarithmetic. Options to Arithmetic.

* BuildProgram: (gnuastro)BuildProgram. Compile and run programs using Gnuastro's library.
* astbuildprog: (gnuastro)Invoking astbuildprog. Options to BuildProgram.

* ConvertType: (gnuastro)ConvertType. Convert different file types.
* astconvertt: (gnuastro)Invoking astconvertt. Options to ConvertType.

* Convolve: (gnuastro)Convolve. Convolve an input file with kernel.
* astconvolve: (gnuastro)Invoking astconvolve. Options to Convolve.

* CosmicCalculator: (gnuastro)CosmicCalculator. For cosmological params.
* astcosmiccal: (gnuastro)Invoking astcosmiccal. Options to CosmicCalculator.

* Crop: (gnuastro)Crop. Crop region(s) from image(s).
* astcrop: (gnuastro)Invoking astcrop. Options to Crop.

* Fits: (gnuastro)Fits. View and manipulate FITS extensions and keywords.
* astfits: (gnuastro)Invoking astfits. Options to Fits.

* MakeCatalog: (gnuastro)MakeCatalog. Make a catalog from labeled image.
* astmkcatalog: (gnuastro)Invoking astmkcatalog. Options to MakeCatalog.

* MakeNoise: (gnuastro)MakeNoise. Make (add) noise to an image.
* astmknoise: (gnuastro)Invoking astmknoise. Options to MakeNoise.

* MakeProfiles: (gnuastro)MakeProfiles. Make mock profiles.
* astmkprof: (gnuastro)Invoking astmkprof. Options to MakeProfiles.

* Match: (gnuastro)Match. Match two separate catalogs.
* astmatch: (gnuastro)Invoking astmatch. Options to Match.

* NoiseChisel: (gnuastro)NoiseChisel. Detect signal in noise.
* astnoisechisel: (gnuastro)Invoking astnoisechisel. Options to NoiseChisel.

* Segment: (gnuastro)Segment. Segment detections based on signal structure.
* astsegment: (gnuastro)Invoking astsegment. Options to Segment.

* Query: (gnuastro)Query. Access remote databases for downloading data.
* astquery: (gnuastro)Invoking astquery. Options to Query.

* Statistics: (gnuastro)Statistics. Get image Statistics.
* aststatistics: (gnuastro)Invoking aststatistics. Options to Statistics.

* Table: (gnuastro)Table. Read and write FITS binary or ASCII tables.
* asttable: (gnuastro)Invoking asttable. Options to Table.

* Warp: (gnuastro)Warp. Warp a dataset to a new grid.
* astwarp: (gnuastro)Invoking astwarp. Options to Warp.

* astscript-sort-by-night: (gnuastro)Invoking astscript-sort-by-night. Options to this script.

@end direntry




















@c Print title information:
@titlepage
@title GNU Astronomy Utilities
@subtitle Astronomical data manipulation and analysis programs and libraries
@subtitle for version @value{VERSION}, @value{UPDATED}
@author Mohammad Akhlaghi

@page
Gnuastro (source code, book and web page) authors (sorted by number of
commits):
@quotation
@include authors.texi
@end quotation
@vskip 0pt plus 1filll
@insertcopying

@page
@quotation
@*
@*
@*
@*
@*
@*
@*
@*
@*
For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man's knowledge of that world.
And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert's special skill, and in his personal knowledge and authority; a faith that so well fits our `post-rationalist' and `post-critical' age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.
@author Karl Popper. The logic of scientific discovery. 1959.
@end quotation

@end titlepage

@shortcontents
@contents











@c Online version top information.
@ifnottex
@node Top, Introduction, (dir), (dir)
@top GNU Astronomy Utilities

@insertcopying

@ifhtml
To navigate easily in this web page, you can use the @code{Next}, @code{Previous}, @code{Up} and @code{Contents} links at the top and bottom of each page.
@code{Next} and @code{Previous} will take you to the next or previous topic at the same level, for example from chapter 1 to chapter 2 or vice versa.
To go to the sections or subsections, click on the menu entries that appear whenever a title has sub-components.
@end ifhtml

@end ifnottex

@menu
* Introduction::                General introduction.
* Tutorials::                   Tutorials or Cookbooks.
* Installation::                Requirements and installation.
* Common program behavior::     Common behavior in all programs.
* Data containers::             Tools to operate on extensions and tables.
* Data manipulation::           Tools for basic image manipulation.
* Data analysis::               Analyze images.
* Modeling and fittings::       Make and fit models.
* High-level calculations::     Physical calculations.
* Library::                     Gnuastro's library of useful functions.
* Developing::                  The development environment.
* Gnuastro programs list::      List and short summary of Gnuastro.
* Other useful software::       Installing other useful software.
* GNU Free Doc. License::       Full text of the GNU Free Documentation License.
* GNU General Public License::  Full text of the GNU General Public License.
* Index::                       Index of terms.

@detailmenu
 --- The Detailed Node Listing ---

Introduction

* Quick start::                 A quick start to installation.
* Science and its tools::       Some philosophy and history.
* Your rights::                 User rights.
* Naming convention::           About names of programs in Gnuastro.
* Version numbering::           Understanding version numbers.
* New to GNU/Linux?::           Suggested GNU/Linux distribution.
* Report a bug::                Search and report the bug you found.
* Suggest new feature::         How to suggest a new feature.
* Announcements::               How to stay up to date with Gnuastro.
* Conventions::                 Conventions used in this book.
* Acknowledgments::             People who helped in the production.

Version numbering

* GNU Astronomy Utilities 1.0::  Plans for version 1.0 release

New to GNU/Linux?

* Command-line interface::      Introduction to the command-line

Tutorials

* Sufi simulates a detection::  Simulating a detection.
* General program usage tutorial::  Tutorial on many programs in a generic scenario.
* Detecting large extended targets::  NoiseChisel for huge extended targets.

Sufi simulates a detection

* General program usage tutorial::

General program usage tutorial

* Calling Gnuastro's programs::  Easy way to find Gnuastro's programs.
* Accessing documentation::     Access to manual of programs you are running.
* Setup and data download::     Setup this template and download datasets.
* Dataset inspection and cropping::  Crop the flat region to use in next steps.
* Angular coverage on the sky::  Measure the field size on the sky.
* Cosmological coverage::       Measure the field size at different redshifts.
* Building custom programs with the library::  Easy way to build new programs.
* Option management and configuration files::  Dealing with options and configuring them.
* Warping to a new pixel grid::  Transforming/warping the dataset.
* NoiseChisel and Multiextension FITS files::  Running NoiseChisel and having multiple HDUs.
* NoiseChisel optimization for detection::  Check NoiseChisel's operation and improve it.
* NoiseChisel optimization for storage::  Dramatically decrease output's volume.
* Segmentation and making a catalog::  Finding true peaks and creating a catalog.
* Working with catalogs estimating colors::  Estimating colors using the catalogs.
* Column statistics color-magnitude diagram::  Visualizing column correlations.
* Aperture photometry::         Doing photometry on a fixed aperture.
* Matching catalogs::           Easily find corresponding rows from two catalogs.
* Finding reddest clumps and visual inspection::  Selecting some targets and inspecting them.
* Writing scripts to automate the steps::  Scripts will greatly help in re-doing things fast.
* Citing and acknowledging Gnuastro::  How to cite and acknowledge Gnuastro in your papers.

Detecting large extended targets

* Downloading and validating input data::  How to get and check the input data.
* NoiseChisel optimization::    Detect the extended and diffuse wings.
* Achieved surface brightness level::  Calculate the outer surface brightness.

Downloading and validating input data

* NoiseChisel optimization::    Optimize NoiseChisel to dig very deep.
* Achieved surface brightness level::  Measure how much you detected.

Installation

* Dependencies::                Necessary packages for Gnuastro.
* Downloading the source::      Ways to download the source code.
* Build and install::           Configure, build and install Gnuastro.

Dependencies

* Mandatory dependencies::      Gnuastro will not install without these.
* Optional dependencies::       Adding more functionality.
* Bootstrapping dependencies::  If you have the version controlled source.
* Dependencies from package managers::  Installing from OS package managers.

Mandatory dependencies

* GNU Scientific Library::      Installing GSL.
* CFITSIO::                     C interface to the FITS standard.
* WCSLIB::                      C interface to the WCS standard of FITS.

Downloading the source

* Release tarball::             Download a stable official release.
* Version controlled source::   Get and use the version controlled source.

Version controlled source

* Bootstrapping::               Adding all the automatically generated files.
* Synchronizing::               Keep your local clone up to date.

Build and install

* Configuring::                 Configure Gnuastro
* Separate build and source directories::  Keeping derivative/build files separate.
* Tests::                       Run tests to see if it is working.
* A4 print book::               Customize the print book.
* Known issues::                Issues you might encounter.

Configuring

* Gnuastro configure options::  Configure options particular to Gnuastro.
* Installation directory::      Specify the directory to install.
* Executable names::            Changing executable names.
* Configure and build in RAM::  For minimal use of HDD or SSD, and clean source.

Common program behavior

* Command-line::                How to use the command-line.
* Configuration files::         Values for unspecified variables.
* Getting help::                Getting more information on the go.
* Installed scripts::           Installed Bash scripts, not compiled programs.
* Multi-threaded operations::   How threads are managed in Gnuastro.
* Numeric data types::          Different types and how to specify them.
* Memory management::           How memory is allocated (in RAM or HDD/SSD).
* Tables::                      Recognized table formats.
* Tessellation::                Tile the dataset into non-overlapping bins.
* Automatic output::            About automatic output names.
* Output FITS files::           Common properties when outputs are FITS.

Command-line

* Arguments and options::       Different ways to specify inputs and configuration.
* Common options::              Options that are shared between all programs.
* Standard input::              Using output of another program as input.

Arguments and options

* Arguments::                   For specifying the main input files/operations.
* Options::                     For configuring the behavior of the program.

Common options

* Input output options::        Common input/output options.
* Processing options::          Options for common processing steps.
* Operating mode options::      Common operating mode options.

Configuration files

* Configuration file format::   ASCII format of configuration file.
* Configuration file precedence::  Precedence of configuration files.
* Current directory and User wide::  Local and user configuration files.
* System wide::                 System wide configuration files.

Getting help

* --usage::                     View option names and value formats.
* --help::                      List all options with description.
* Man pages::                   Man pages generated from --help.
* Info::                        View complete book in terminal.
* help-gnuastro mailing list::  Contacting experienced users.

Multi-threaded operations

* A note on threads::           Caution and suggestion on using threads.
* How to run simultaneous operations::  How to run things simultaneously.

Tables

* Recognized table formats::    Table formats that are recognized in Gnuastro.
* Gnuastro text table format::  Gnuastro's convention for plain text tables.
* Selecting table columns::     Identify/select certain columns from a table

Recognized table formats

* Gnuastro text table format::  Reading plain text tables

Data containers

* Fits::                        View and manipulate extensions and keywords.
* Sort FITS files by night::    Installed script to sort FITS files by obs night.
* ConvertType::                 Convert data to various formats.
* Table::                       Read and Write FITS tables to plain text.
* Query::                       Import data from external databases.

Fits

* Invoking astfits::            Arguments and options to Header.

Invoking Fits

* HDU information and manipulation::  Learn about the HDUs and move them.
* Keyword manipulation::        Manipulate metadata keywords in an HDU.

Sort FITS files by night

* Invoking astscript-sort-by-night::  Inputs and outputs to this script.

ConvertType

* Recognized file formats::     Recognized file formats
* Color::                       Some explanations on color.
* Invoking astconvertt::        Options and arguments to ConvertType.

Table

* Column arithmetic::           How to do operations on table columns.
* Invoking asttable::           Options and arguments to Table.

Query

* Available databases::         List of available databases to Query.
* Invoking astquery::           Inputs, outputs and configuration of Query.

Data manipulation

* Crop::                        Crop region(s) from a dataset.
* Arithmetic::                  Arithmetic on input data.
* Convolve::                    Convolve an image with a kernel.
* Warp::                        Warp/Transform an image to a different grid.

Crop

* Crop modes::                  Basic modes to define crop region.
* Crop section syntax::         How to define a section to crop.
* Blank pixels::                Pixels with no value.
* Invoking astcrop::            Calling Crop on the command-line

Invoking Crop

* Crop options::                A list of all the options with explanation.
* Crop output::                 The outputs of Crop.

Arithmetic

* Reverse polish notation::     The current notation style for Arithmetic
* Arithmetic operators::        List of operators known to Arithmetic
* Invoking astarithmetic::      How to run Arithmetic: options and output

Convolve

* Spatial domain convolution::  Only using the input image values.
* Frequency domain and Fourier operations::  Using frequencies in input.
* Spatial vs. Frequency domain::  When to use which?
* Convolution kernel::          How to specify the convolution kernel.
* Invoking astconvolve::        Options and argument to Convolve.

Spatial domain convolution

* Convolution process::         More basic explanations.
* Edges in the spatial domain::  Dealing with the edges of an image.

Frequency domain and Fourier operations

* Fourier series historical background::  Historical background.
* Circles and the complex plane::  Interpreting complex numbers.
* Fourier series::              Fourier Series definition.
* Fourier transform::           Fourier Transform definition.
* Dirac delta and comb::        Dirac delta and Dirac comb.
* Convolution theorem::         Derivation of Convolution theorem.
* Sampling theorem::            Sampling theorem (Nyquist frequency).
* Discrete Fourier transform::  Derivation and explanation of DFT.
* Fourier operations in two dimensions::  Extend to 2D images.
* Edges in the frequency domain::  Interpretation of edge effects.

Warp

* Warping basics::              Basics of coordinate transformation.
* Merging multiple warpings::   How to merge multiple matrices.
* Resampling::                  Warping an image is re-sampling it.
* Invoking astwarp::            Arguments and options for Warp.

Data analysis

* Statistics::                  Calculate dataset statistics.
* NoiseChisel::                 Detect objects in an image.
* Segment::                     Segment detections based on signal structure.
* MakeCatalog::                 Catalog from input and labeled images.
* Match::                       Match two datasets.

Statistics

* Histogram and Cumulative Frequency Plot::  Basic definitions.
* 2D Histograms::               Plotting the distribution of two variables.
* Sigma clipping::              Definition of @mymath{\sigma}-clipping.
* Sky value::                   Definition and derivation of the Sky value.
* Invoking aststatistics::      Arguments and options to Statistics.

2D Histograms

* 2D histogram as a table::     Format and usage in table format.
* 2D histogram as an image::    Format and usage in image format

Sky value

* Sky value definition::        Definition of the Sky/reference value.
* Sky value misconceptions::    Wrong methods to estimate the Sky value.
* Quantifying signal in a tile::  Method to estimate the presence of signal.

NoiseChisel

* NoiseChisel changes after publication::  Updates since published papers.
* Invoking astnoisechisel::     Options and arguments for NoiseChisel.

Invoking NoiseChisel

* NoiseChisel input::           NoiseChisel's input options.
* Detection options::           Configure detection in NoiseChisel.
* NoiseChisel output::          NoiseChisel's output options and format.

Segment

* Invoking astsegment::         Inputs, outputs and options to Segment

Invoking Segment

* Segment input::               Input files and options.
* Segmentation options::        Parameters of the segmentation process.
* Segment output::              Outputs of Segment

MakeCatalog

* Detection and catalog production::  Discussing why/how to treat these separately
* Quantifying measurement limits::  For comparing different catalogs.
* Measuring elliptical parameters::  Estimating elliptical parameters.
* Adding new columns to MakeCatalog::  How to add new columns.
* Invoking astmkcatalog::       Options and arguments to MakeCatalog.

Invoking MakeCatalog

* MakeCatalog inputs and basic settings::  Input files and basic settings.
* Upper-limit settings::        Settings for upper-limit measurements.
* MakeCatalog measurements::    Available measurements in MakeCatalog.
* MakeCatalog output::          File names of MakeCatalog's output table.

Match

* Invoking astmatch::           Inputs, outputs and options of Match

Modeling and fittings

* MakeProfiles::                Making mock galaxies and stars.
* MakeNoise::                   Make (add) noise to an image.

MakeProfiles

* Modeling basics::             Astronomical modeling basics.
* If convolving afterwards::    Considerations for convolving later.
* Brightness flux magnitude::   About these measures of energy.
* Profile magnitude::           Definition of total profile magnitude.
* Invoking astmkprof::          Inputs and Options for MakeProfiles.

Modeling basics

* Defining an ellipse and ellipsoid::  Definition of these important shapes.
* PSF::                         Radial profiles for the PSF.
* Stars::                       Making mock star profiles.
* Galaxies::                    Radial profiles for galaxies.
* Sampling from a function::    Sample a function on a pixelated canvas.
* Oversampling::                Oversampling the model.

Invoking MakeProfiles

* MakeProfiles catalog::        Required catalog properties.
* MakeProfiles profile settings::  Configuration parameters for all profiles.
* MakeProfiles output dataset::  The canvas/dataset to build profiles over.
* MakeProfiles log file::       A description of the optional log file.

MakeNoise

* Noise basics::                Noise concepts and definitions.
* Invoking astmknoise::         Options and arguments to MakeNoise.

Noise basics

* Photon counting noise::       Poisson noise
* Instrumental noise::          Readout, dark current and other sources.
* Final noised pixel value::    How the final noised value is calculated.
* Generating random numbers::   How random numbers are generated.

High-level calculations

* CosmicCalculator::            Calculate cosmological variables

CosmicCalculator

* Distance on a 2D curved space::  Distances in 2D for simplicity
* Extending distance concepts to 3D::  Going to 3D (our real universe).
* Invoking astcosmiccal::       How to run CosmicCalculator

Invoking CosmicCalculator

* CosmicCalculator input options::  Options to specify input conditions.
* CosmicCalculator basic cosmology calculations::  Distance modulus, distances, etc.
* CosmicCalculator spectral line calculations::  How they get affected by redshift.

Library

* Review of library fundamentals::  Guide on libraries and linking.
* BuildProgram::                Link and run source files with this library.
* Gnuastro library::            Description of all library functions.
* Library demo programs::       Demonstration for using libraries.

Review of library fundamentals

* Headers::                     Header files included in source.
* Linking::                     Linking the compiled source files into one.
* Summary and example on libraries::  A summary and example on using libraries.

BuildProgram

* Invoking astbuildprog::       Options and examples for using this program.

Gnuastro library

* Configuration information::   General information about library config.
* Multithreaded programming::   Tools for easy multi-threaded operations.
* Library data types::          Definitions and functions for types.
* Pointers::                    Wrappers for easily working with pointers.
* Library blank values::        Blank values and functions to deal with them.
* Library data container::      General data container in Gnuastro.
* Dimensions::                  Dealing with coordinates and dimensions.
* Linked lists::                Various types of linked lists.
* Array input output::          Reading and writing images or cubes.
* Table input output::          Reading and writing table columns.
* FITS files::                  Working with FITS data.
* File input output::           Reading and writing to various file formats.
* World Coordinate System::     Dealing with the world coordinate system.
* Arithmetic on datasets::      Arithmetic operations on a dataset.
* Tessellation library::        Functions for working on tiles.
* Bounding box::                Finding the bounding box.
* Polygons::                    Working with the vertices of a polygon.
* Qsort functions::             Helper functions for Qsort.
* K-d tree::                    Space partitioning in K dimensions.
* Permutations::                Re-order (or permute) the values in a dataset.
* Matching::                    Matching catalogs based on position.
* Statistical operations::      Functions for basic statistics.
* Binary datasets::             Datasets that can only have values of 0 or 1.
* Labeled datasets::            Working with Segmented/labeled datasets.
* Convolution functions::       Library functions to do convolution.
* Interpolation::               Interpolate (over blank values possibly).
* Git wrappers::                Wrappers for functions in libgit2.
* Unit conversion library (@file{units.h})::  Convert between units.
* Spectral lines library::      Functions for operating on Spectral lines.
* Cosmology library::           Cosmological calculations.

Multithreaded programming (@file{threads.h})

* Implementation of pthread_barrier::  Some systems don't have pthread_barrier
* Gnuastro's thread related functions::  Functions for managing threads.

Data container (@file{data.h})

* Generic data container::      Definition of Gnuastro's generic container.
* Dataset allocation::          Allocate, initialize and free a dataset.
* Arrays of datasets::          Functions to help with array of datasets.
* Copying datasets::            Functions to copy a dataset to a new one.

Linked lists (@file{list.h})

* List of strings::             Simply linked list of strings.
* List of int32_t::             Simply linked list of int32_ts.
* List of size_t::              Simply linked list of size_ts.
* List of float::               Simply linked list of floats.
* List of double::              Simply linked list of doubles.
* List of void::                Simply linked list of void * pointers.
* Ordered list of size_t::      Simply linked, ordered list of size_t.
* Doubly linked ordered list of size_t::  Definition and functions.
* List of gal_data_t::          Simply linked list of Gnuastro's generic data type.

FITS files (@file{fits.h})

* FITS macros errors filenames::  General macros, errors and checking names.
* CFITSIO and Gnuastro types::  Conversion between FITS and Gnuastro types.
* FITS HDUs::                   Opening and getting information about HDUs.
* FITS header keywords::        Reading and writing FITS header keywords.
* FITS arrays::                 Reading and writing FITS images/arrays.
* FITS tables::                 Reading and writing FITS tables.

File input output

* Text files::                  Reading and writing from/to plain text files.
* TIFF files::                  Reading and writing from/to TIFF files.
* JPEG files::                  Reading and writing from/to JPEG files.
* EPS files::                   Writing to EPS files.
* PDF files::                   Writing to PDF files.

Tessellation library (@file{tile.h})

* Independent tiles::           Work on or check independent tiles.
* Tile grid::                   Cover a full dataset with non-overlapping tiles.

Library demo programs

* Library demo - reading a image::  Read a FITS image into memory.
* Library demo - inspecting neighbors::  Inspect the neighbors of a pixel.
* Library demo - multi-threaded operation::  Doing an operation on threads.
* Library demo - reading and writing table columns::  Simple Column I/O.

Developing

* Why C::                       Why Gnuastro is designed in C.
* Program design philosophy::   General ideas behind the package structure.
* Coding conventions::          Gnuastro coding conventions.
* Program source::              Conventions for the code.
* Documentation::               Documentation is an integral part of Gnuastro.
* Building and debugging::      Build and possibly debug during development.
* Test scripts::                Understanding the test scripts.
* Developer's checklist::       Checklist to finalize your changes.
* Gnuastro project webpage::    Central hub for Gnuastro activities.
* Developing mailing lists::    Stay up to date with Gnuastro's development.
* Contributing to Gnuastro::    Share your changes with all users.

Program source

* Mandatory source code files::  Description of files common to all programs.
* The TEMPLATE program::        Template for easy creation of a new program.

Contributing to Gnuastro

* Copyright assignment::        Copyright has to be assigned to the FSF.
* Commit guidelines::           Guidelines for commit messages.
* Production workflow::         Submitting your commits (work) for inclusion.
* Forking tutorial::            Tutorial on workflow steps with Git.

Other useful software

* SAO ds9::                     Viewing FITS images.
* PGPLOT::                      Plotting directly in C

SAO ds9

* Viewing multiextension FITS images::  Configure SAO ds9 for multiextension images.

@end detailmenu
@end menu

@node Introduction, Tutorials, Top, Top
@chapter Introduction

@cindex GNU coding standards
@cindex GNU Astronomy Utilities (Gnuastro)
GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of separate programs and libraries for the manipulation and analysis of astronomical data.
All the programs share the same basic command-line user interface for the comfort of both the users and developers.
Gnuastro is written to comply fully with the GNU coding standards, so it integrates smoothly with the GNU/Linux operating system.
This also means that astronomers can expect a fully familiar experience in the source code, building, installation and command-line interaction of Gnuastro, just as in all the other GNU software that they use.
The official and always up-to-date version of this book (or manual) is freely available under @ref{GNU Free Doc. License} in various formats (PDF, HTML, plain text, Info, and as its Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.

For users who are new to the GNU/Linux environment, unless otherwise specified, most of the topics in @ref{Installation} and @ref{Common program behavior} are common to all GNU software; for example installation, managing command-line options, and getting help (also see @ref{New to GNU/Linux?}).
So if you are new to this empowering environment, we encourage you to go through these chapters carefully.
They can be a starting point from which you can continue to learn more from each program's own manual and fully benefit from and enjoy this wonderful environment.
Gnuastro also comes with a large set of libraries, so you can write your own programs using Gnuastro's building blocks, see @ref{Review of library fundamentals} for an introduction.

In Gnuastro, no change to any program or library is committed to its history before it has been fully documented here first.
As discussed in @ref{Science and its tools}, this is a founding principle of Gnuastro.

@menu
* Quick start::                 A quick start to installation.
* Science and its tools::       Some philosophy and history.
* Your rights::                 User rights.
* Naming convention::           About names of programs in Gnuastro.
* Version numbering::           Understanding version numbers.
* New to GNU/Linux?::           Suggested GNU/Linux distribution.
* Report a bug::                Search and report the bug you found.
* Suggest new feature::         How to suggest a new feature.
* Announcements::               How to stay up to date with Gnuastro.
* Conventions::                 Conventions used in this book.
* Acknowledgments::             People who helped in the production.
@end menu

@node Quick start, Science and its tools, Introduction, Introduction
@section Quick start

@cindex Test
@cindex Gzip
@cindex Lzip
@cindex Check
@cindex Build
@cindex Compile
@cindex GNU Tar
@cindex Uncompress source
@cindex Source, uncompress
The latest official release tarball is always available as @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz, @file{gnuastro-latest.tar.gz}}.
For better compression (faster download) and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html, Lzip} compressed tarball is also available at @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz, @file{gnuastro-latest.tar.lz}}; see @ref{Release tarball} for more details on the tarball release@footnote{The Gzip library and program are commonly available on most systems.
However, Gnuastro recommends Lzip as described above and the beta-releases are also only distributed in @file{tar.lz}.
You can download and install Lzip's source (in @file{.tar.gz} format) from its web page and follow the same process as below: Lzip has no dependencies, so simply decompress, then run @command{./configure}, @command{make}, @command{sudo make install}.}.
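
For example, the commands below are a minimal sketch of that process (the version number in the tarball's name is hypothetical; replace it with the latest release on Lzip's web page):

@example
## Hypothetical version number; use the latest release.
$ tar xf lzip-1.22.tar.gz
$ cd lzip-1.22
$ ./configure
$ make
$ sudo make install
@end example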

Let's assume the downloaded tarball is in the @file{TOPGNUASTRO} directory.
The first two commands below go into that directory and decompress the source.
If you downloaded the @file{tar.lz} and your Tar implementation doesn't recognize Lzip (so the second command fails), run the third and fourth commands instead@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz} tarball, you can merge the separate calls to Lzip and Tar (shown in the main body of text) into one command by directly piping the output of Lzip into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz | tar -xf -}}.
Note that lines starting with @code{##} don't need to be typed.

@example
## Go into the download directory.
$ cd TOPGNUASTRO

## Also works on `tar.gz'. GNU Tar recognizes both formats.
$ tar xf gnuastro-latest.tar.lz

## Only when previous command fails.
$ lzip -d gnuastro-latest.tar.lz
$ tar xf gnuastro-latest.tar
@end example

Gnuastro has three mandatory dependencies and some optional dependencies for extra functionality, see @ref{Dependencies} for the full list.
In @ref{Dependencies from package managers} we have prepared the command to easily install Gnuastro's dependencies using the package manager of some operating systems.
When the mandatory dependencies are ready, you can configure, compile, check and install Gnuastro on your system with the following commands.

@example
$ cd gnuastro-X.X                  # Replace X.X with version number.
$ ./configure
$ make -j8                         # Replace 8 with no. CPU threads.
$ make check
$ sudo make install
@end example

@noindent
See @ref{Known issues} if you encounter any complications.
For each program there is an `Invoking ProgramName' sub-section in this book which explains how the program should be run on the command-line (for example @ref{Invoking asttable}).
You can read the same section on the command-line by running @command{$ info astprogname} (for example @command{info asttable}).
The `Invoking ProgramName' sub-section starts with a few examples of each program and goes on to explain the invocation details.
See @ref{Getting help} for the various ways you can get help.
@ref{Tutorials} gives some real-life examples of how these programs can be used together.








@node Science and its tools, Your rights, Quick start, Introduction
@section Science and its tools

The history of science indicates that there are always inevitably unseen faults, hidden assumptions, simplifications and approximations in all our theoretical models, data acquisition and analysis techniques.
It is precisely these that will ultimately allow future generations to advance the existing experimental and theoretical knowledge through their new solutions and corrections.

In the past, scientists would gather data and process them individually to achieve an analysis, thus having much more intricate knowledge of the data and the analysis.
The theoretical models also required few (if any) simulations to compare with the data.
Today both methods are becoming increasingly dependent on pre-written software.
Scientists are dissociating themselves from the intricacies of reducing raw observational data in experimentation or from bringing the theoretical models to life in simulations.
These `intricacies' are precisely those unseen faults, hidden assumptions, simplifications and approximations that define scientific progress.

@quotation
@cindex Anscombe F. J.
Unfortunately, most persons who have recourse to a computer for statistical analysis of data are not much interested either in computer programming or in statistical method, being primarily concerned with their own proper business.
Hence the common use of library programs and various statistical packages. ... It's time that was changed.
@author F.J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
@end quotation

@cindex Anscombe's quartet
@cindex Statistical analysis
@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet} demonstrates how four data sets with widely different shapes (when plotted) give nearly identical output from standard regression techniques.
Anscombe uses this (now famous) quartet, which was introduced in the paper quoted above, to argue that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}''.
Echoing Anscombe's concern after 44 years, some of the highly recognized statisticians of our time (Leek, McShane, Gelman, Colquhoun, Nuijten and Goodman) wrote in Nature that:

@quotation
We need to appreciate that data analysis is not purely computational and algorithmic -- it is a human behaviour....Researchers who hunt hard enough will turn up a result that fits statistical criteria -- but their discovery will probably be a false positive.
@author Five ways to fix statistics, Nature, 551, Nov 2017.
@end quotation

Users of statistical (scientific) methods (software) are therefore not passive (objective) agents in their results.
It is thus necessary to actually understand the method, not just use it as a black box.
The subjective experience gained by frequently using a method/software is not sufficient to claim an understanding of how the tool/method works and how relevant it is to the data and analysis.
This kind of subjective experience is prone to serious misunderstandings about the data, what the software/statistical-method really does (especially as it gets more complicated), and thus the scientific interpretation of the result.
This attitude is further encouraged through non-free software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly written (or non-existent) scientific software manuals, and non-reproducible papers@footnote{Where the authors omit many of the analysis/processing ``details'' from the paper by arguing that they would make the paper too long/unreadable.
However, software engineers have been dealing with such issues for a long time.
There are thus software management solutions that allow us to supplement papers with all the details necessary to exactly reproduce the result.
For example see @url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} and @url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{http://akhlaghi.org/reproducible-science.html, general discussion}.}.
This approach to scientific software and methods only helps in producing dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in his personal knowledge and authority}''@footnote{Karl Popper. The logic of scientific discovery. 1959.
Larger quote is given at the start of the PDF (for print) version of this book.}.

@quotation
@cindex Douglas Rushkoff
Program or be programmed.
Choose the former, and you gain access to the control panel of civilization.
Choose the latter, and it could be the last real choice you get to make.
@author Douglas Rushkoff. Program or be programmed, O/R Books (2010).
@end quotation

It is obviously impractical for any one human being to gain the intricate knowledge explained above for every step of an analysis.
On the other hand, scientific data can be large and numerous, for example images produced by telescopes in astronomy.
This requires efficient algorithms.
To make things worse, natural scientists have generally not been trained in the advanced software techniques, paradigms and architecture that are taught in computer science or engineering courses and thus used in most software.
The GNU Astronomy Utilities are an effort to tackle this issue.

Gnuastro is not just software: this book is as important to the idea behind Gnuastro as the source code (software).
This book has tried to learn from the success of the ``Numerical Recipes'' book in educating those who are not software engineers and computer scientists but still heavy users of computational algorithms, like astronomers.
There are two major differences.

The first difference is that Gnuastro's code and the background information are segregated: the code is kept within the actual Gnuastro software source code and the underlying explanations are given here in this book.
In the source code, every non-trivial step is heavily commented and correlated with this book; it follows the same logic as this book, and all the programs follow a similar internal data, function and file structure, see @ref{Program source}.
Complementing the code, this book focuses on thoroughly explaining the concepts behind those codes (history, mathematics, science, software and usage advice when necessary) along with detailed instructions on how to run the programs.
At the expense of frustrating ``professionals'' or ``experts'', this book and the comments in the code also intentionally avoid jargon and abbreviations.
The source code and this book are thus intimately linked, and when considered as a single entity can be thought of as a real (an actual software accompanying the algorithms) ``Numerical Recipes'' for astronomy.

@cindex GNU free documentation license
@cindex GNU General Public License (GPL)
The second major, and arguably more important, difference is that ``Numerical Recipes'' does not allow you to distribute any code that you have learned from it.
In other words, it does not allow you to release your software's source code if you have used their codes; you can only publicly release binaries (a black box) to the community.
Therefore, while it empowers the privileged individual who has access to it, it exacerbates social ignorance.
Exactly at the opposite end of the spectrum, Gnuastro's source code is released under the GNU General Public License (GPL) and this book is released under the GNU Free Documentation License.
You are therefore free to distribute any software you create using parts of Gnuastro's source code or text, or figures from this book; see @ref{Your rights}.

With these principles in mind, Gnuastro's developers aim to impose the minimum requirements on you (in computer science, engineering and even the mathematics behind the tools) to understand and modify any step of Gnuastro if you feel the need to do so, see @ref{Why C} and @ref{Program design philosophy}.

@cindex Brahe, Tycho
@cindex Galileo, Galilei
Without prior familiarity and experience with optics, it is hard to imagine how Galileo could have come up with the idea of modifying the Dutch military telescope optics to use in astronomy.
Astronomical objects could not be seen with the Dutch military design of the telescope.
In other words, it is unlikely that Galileo could have asked a random optician to make modifications (not understood by Galileo) to the Dutch design, to do something no astronomer of the time took seriously.
In the paradigm of the day, what could be the purpose of enlarging geometric spheres (planets) or points (stars)?
In that paradigm only the position and movement of the heavenly bodies was important, and that had already been accurately studied (recently by Tycho Brahe).

At the beginning of his ``The Sidereal Messenger'' (published in 1610), he cautions the readers on this issue and, @emph{before} describing his results/observations, Galileo instructs us on how to build a suitable instrument.
Without a detailed description of @emph{how} he made his tools and did his observations, no reasonable person would believe his results.
Before he actually saw the moons of Jupiter, the mountains on the Moon or the crescent of Venus, Galileo was ``evasive''@footnote{Galileo G. (Translated by Maurice A. Finocchiaro). @emph{The Essential Galileo}. Hackett Publishing Company, first edition, 2008.} to Kepler.
Science is defined by its tools/methods, @emph{not} its raw results@footnote{For example, take the following two results on the age of the universe: roughly 14 billion years (suggested by the current consensus of the standard model of cosmology) and less than 10,000 years (suggested from some interpretations of the Bible).
Both these numbers are @emph{results}.
What distinguishes these two results is the tools/methods that were used to derive them.
Therefore, as the term ``scientific method'' also signifies, a scientific statement is defined by its @emph{method}, not its result.}.

The same is true today: science cannot progress with a black box, or poorly released code.
The source code of a research project is the new (abstractified) communication language in science, understandable by humans @emph{and} computers.
Source code (in any programming language) is a language/notation designed to express all the details that would be too tedious/long/frustrating to report in spoken languages like English, similar to mathematical notation.

Today, the quality of the source code that goes into a scientific result (and the distribution of that code) is as critical to scientific vitality and integrity as the quality of the written language/English used in publishing/distributing its paper.
A scientific paper will not even be reviewed by any respectable journal if it is written in poor language/English.
A similar level of quality assessment is thus increasingly becoming necessary regarding the codes/methods used to derive the results of a scientific paper.

@cindex Ken Thompson
@cindex Stroustrup, Bjarne
Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without understanding software, you are reduced to believing in magic}''.
Ken Thompson (the designer of the Unix operating system) says ``@emph{I abhor a system designed for the `user' if that word is a coded pejorative meaning `stupid and unsophisticated'}.''
Certainly no scientist (user of a scientific software) would want to be considered a believer in magic, or stupid and unsophisticated.

This can happen when scientists get too distant from the raw data and methods, and are mainly discussing results.
In other words, when they feel they have tamed Nature into their own high-level (abstract) models (creations), and are mainly concerned with scaling up, or industrializing those results.
Roughly five years before special relativity, and about two decades before quantum mechanics fundamentally changed Physics, Lord Kelvin is quoted as saying:

@quotation
@cindex Lord Kelvin
@cindex William Thomson
There is nothing new to be discovered in physics now.
All that remains is more and more precise measurement.
@author William Thomson (Lord Kelvin), 1900
@end quotation

@noindent
A few years earlier, Albert A. Michelson made the following statement:

@quotation
@cindex Albert A. Michelson
@cindex Michelson, Albert A.
The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote....
Our future discoveries must be looked for in the sixth place of decimals.
@author Albert A. Michelson, dedication of Ryerson Physics Lab, U. Chicago 1894
@end quotation

@cindex Puzzle solving scientist
@cindex Scientist, puzzle solver
If scientists are considered to be more than mere ``puzzle'' solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific Revolutions}, University of Chicago Press, 1962.} (simply adding to the decimals of existing values or observing a feature in 10, 100, or 100000 more galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just passively sit back and uncritically repeat the previous (observational or theoretical) methods/tools on new data.
Today there is a wealth of raw telescope images ready (mostly for free) at the fingertips of anyone who is interested and has a fast enough internet connection to download them.
The only thing lacking is new ways to analyze this data and dig out the treasures hidden in them that are invisible to existing methods and techniques.

@quotation
@cindex Jaynes E. T.
New data that we insist on analyzing in terms of old ideas (that is, old models which are not questioned) cannot lead us out of the old ideas.
However many data we record and analyze, we may just keep repeating the same old errors, missing the same crucially important things that the experiment was competent to find.
@author Jaynes, Probability theory, the logic of science. Cambridge U. Press (2003).
@end quotation




@node Your rights, Naming convention, Science and its tools, Introduction
@section Your rights

@cindex GNU Texinfo
The paragraphs below, in this section, belong to the GNU Texinfo@footnote{Texinfo is the GNU documentation system.
It is used to create this book in all the various formats.} manual and are not written by us!
The name ``Texinfo'' is just changed to ``GNU Astronomy Utilities'' or ``Gnuastro'' because they are released under the same licenses, and it is beautifully written to inform you of your rights.

@cindex Free software
@cindex Copyright
@cindex Public domain
GNU Astronomy Utilities is ``free software''; this means that everyone is free to use it and free to redistribute it on certain conditions.
Gnuastro is not in the public domain; it is copyrighted and there are restrictions on its distribution, but these restrictions are designed to permit everything that a good cooperating citizen would want to do.
What is not allowed is to try to prevent others from further sharing any version of Gnuastro that they might get from you.

Specifically, we want to make sure that you have the right to give away copies of the programs that relate to Gnuastro, that you receive the source code or else can get it if you want it, that you can change these programs or use pieces of them in new free programs, and that you know you can do these things.

To make sure that everyone has such rights, we have to forbid you to deprive anyone else of these rights.
For example, if you distribute copies of the Gnuastro related programs, you must give the recipients all the rights that you have.
You must make sure that they, too, receive or can get the source code.
And you must tell them their rights.

Also, for our own protection, we must make certain that everyone finds out that there is no warranty for the programs that relate to Gnuastro.
If these programs are modified by someone else and passed on, we want their recipients to know that what they have is not what we distributed, so that any problems introduced by others will not reflect on our reputation.

@cindex GNU General Public License (GPL)
@cindex GNU Free Documentation License
The full text of the licenses for the Gnuastro book and software can be respectively found in @ref{GNU General Public License}@footnote{Also available in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free Doc. License}@footnote{Also available in @url{http://www.gnu.org/copyleft/fdl.html}}.



@node Naming convention, Version numbering, Your rights, Introduction
@section Naming convention

@cindex Names, programs
@cindex Program names
Gnuastro is a package of independent programs and a collection of libraries; here we are mainly concerned with the programs.
Each program has an official name which consists of one or two words, describing what it does.
Two-word names are printed with no space between the words, for example NoiseChisel or Crop.
On the command-line, you can run them with their executable names, which start with @file{ast} and might be an abbreviation of the official name, for example @file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.

@pindex ProgramName
@pindex @file{astprogname}
We will use ``ProgramName'' for a generic official program name and @file{astprogname} for a generic executable name.
In this book, the programs are classified based on what they do and thoroughly explained.
An alphabetical list of the programs that are installed on your system with this installation is given in @ref{Gnuastro programs list}.
That list also contains the executable names and version numbers along with a one line description.



@node Version numbering, New to GNU/Linux?, Naming convention, Introduction
@section Version numbering

@cindex Version number
@cindex Number, version
@cindex Major version number
@cindex Minor version number
@cindex Mailing list: info-gnuastro
Gnuastro can have two formats of version numbers, for official and unofficial releases.
Official Gnuastro releases are announced on the @command{info-gnuastro} mailing list; they have a version control tag in Gnuastro's development history, and their version numbers are formatted like ``@file{A.B}''.
@file{A} is a major version number, marking a significant planned achievement (for example see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor version number, see below for more on the distinction.
Note that the numbers are not decimals, so version 2.34 is much more recent than version 2.5, which is not equal to 2.50.

Gnuastro also allows a unique version number for unofficial releases.
Unofficial releases can mark any point in Gnuastro's development history.
This is done to allow astronomers to easily use any point in the version controlled history for their data-analysis and research publication.
See @ref{Version controlled source} for a complete introduction.
This section is not just for developers; it is intended to be straightforward and easy to read, so please have a look if you are interested in the cutting-edge.
This unofficial version number is a meaningful and easy to read string of characters, unique to that particular point of history.
With this feature, users can easily stay up to date with the most recent bug fixes and additions that are committed between official releases.

The unofficial version number is formatted like: @file{A.B.C-D}.
@file{A} and @file{B} are the most recent official version number.
@file{C} is the number of commits that have been made after version @file{A.B}.
@file{D} is the first 4 or 5 characters of the commit hash number@footnote{Each point in Gnuastro's history is uniquely identified with a 40 character long hash which is created from its contents and previous history for example: @code{5b17501d8f29ba3cd610673261e6e2229c846d35}.
So the string @file{D} in the version for this commit could be @file{5b17}, or @file{5b175}.}.
Therefore, the unofficial version number `@code{3.92.8-29c8}' corresponds to the 8th commit after the official version @code{3.92}, and its commit hash begins with @code{29c8}.
The unofficial version number is sortable (unlike the raw hash) and, as shown above, is descriptive of the state of the unofficial release.
Of course an official release is preferred for publication (since its tarballs are easily available and it has gone through more tests, making it more stable), so if an official release is announced prior to your publication's final review, please consider updating to the official release.
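
For example, all Gnuastro programs accept the standard @option{--version} option, so you can always check exactly which (official or unofficial) version you are running; the output below is only an illustration using the hypothetical version number above, and only its first line is shown:

@example
$ astcrop --version
astcrop (GNU Astronomy Utilities) 3.92.8-29c8
@end example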

The major version number is set by a major goal which is defined by the developers and user community beforehand, for example see @ref{GNU Astronomy Utilities 1.0}.
The incremental work done in minor releases commonly consists of small steps toward achieving the major goal.
Therefore, there is no limit on the number of minor releases, and the difference between the (hypothetical) versions 2.927 and 3.0 can be a small (negligible to the user) improvement that finalizes the defined goals.

@menu
* GNU Astronomy Utilities 1.0::  Plans for version 1.0 release
@end menu

@node GNU Astronomy Utilities 1.0,  , Version numbering, Version numbering
@subsection GNU Astronomy Utilities 1.0
@cindex Gnuastro major version number
Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a complete system for data manipulation and analysis at least similar to IRAF@footnote{@url{http://iraf.noao.edu/}}.
So an astronomer can take all the standard data analysis steps (starting from raw data to the final reduced product and standard post-reduction tools) with the various programs in Gnuastro.

@cindex Shell script
The maintainers of each camera or detector on a telescope can provide a completely transparent shell script or Makefile to the observer for data analysis.
This script can set configuration files for all the required programs to work with that particular camera.
The script can then run the proper programs in the proper sequence.
The user/observer can easily follow the standard shell script to understand (and modify) each step and the parameters used.
Bash (or any other modern GNU/Linux shell) is powerful and made for this gluing job.
This will simultaneously improve performance and transparency.
Shell scripts (or Makefiles) are also basic constructs that are easy to learn and readily available as part of the Unix-like operating systems.
If there is no program to do a desired step, Gnuastro's libraries can be used to build specific programs.
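
As a purely illustrative sketch (the file names and option values below are hypothetical assumptions for one camera, not a tested recipe), such a script might look like this:

@example
#!/bin/bash
# Hypothetical per-camera reduction sketch.
set -e

# Detect the signal in the calibrated image.
astnoisechisel calibrated.fits --output=detected.fits

# Segment the detections, then make a catalog of positions
# and magnitudes.
astsegment detected.fits --output=segmented.fits
astmkcatalog segmented.fits --ra --dec --magnitude \
             --output=catalog.fits
@end example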

The main factor is that all observatories or projects can freely contribute to Gnuastro and all simultaneously benefit from it (since it doesn't belong to any particular one of them), much like how for-profit organizations (for example Red Hat, Intel and many others) are major contributors to free and open source software for their shared benefit.
Gnuastro's copyright has been fully awarded to GNU, so it doesn't belong to any particular astronomer or astronomical facility or project.





@node New to GNU/Linux?, Report a bug, Version numbering, Introduction
@section New to GNU/Linux?

Some astronomers initially install and use a GNU/Linux operating system because their necessary tools can only be installed in this environment.
However, the transition is not necessarily easy.
To encourage you in investing the patience and time to make this transition, and actually enjoy it, we will first start with a basic introduction to GNU/Linux operating systems.
Afterwards, in @ref{Command-line interface} we'll discuss the wonderful benefits of the command-line interface, how it beautifully complements the graphical user interface, and why it is worth the (apparently steep) learning curve.
Finally a complete chapter (@ref{Tutorials}) is devoted to real world scenarios of using Gnuastro (on the command-line).
Therefore, if you don't yet feel comfortable with the command-line, we strongly recommend going through that chapter after finishing this section.

You might have already noticed that we are not using the name ``Linux'', but ``GNU/Linux''.
Please take the time to have a look at the following essays and FAQs for a complete understanding of this very important distinction.

@itemize

@item
@url{https://www.gnu.org/gnu/gnu-users-never-heard-of-gnu.html}

@item
@url{https://www.gnu.org/gnu/linux-and-gnu.html}

@item
@url{https://www.gnu.org/gnu/why-gnu-linux.html}

@item
@url{https://www.gnu.org/gnu/gnu-linux-faq.html}

@end itemize

@cindex Linux
@cindex GNU/Linux
@cindex GNU C library
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
In short, the Linux kernel@footnote{In Unix-like operating systems, the kernel connects software and hardware worlds.} is built using the GNU C library (glibc) and GNU Compiler Collection (GCC).
The Linux kernel alone is just a means for other software to access the hardware resources; by itself, it is useless: to say ``running Linux'' is like saying ``driving your carburetor''.

To have an operating system, you need lower-level (to build the kernel), and higher-level (to use it) software packages.
The majority of such software in most Unix-like operating systems are GNU software: ``the whole system is basically GNU with Linux loaded''.
Therefore to acknowledge GNU's instrumental role in the creation and usage of the Linux kernel and the operating systems that use it, we should call these operating systems ``GNU/Linux''.


@menu
* Command-line interface::      Introduction to the command-line
@end menu

@node Command-line interface,  , New to GNU/Linux?, New to GNU/Linux?
@subsection Command-line interface
@cindex Shell
@cindex Graphic user interface
@cindex Command-line user interface
@cindex GUI: graphic user interface
@cindex CLI: command-line user interface
One aspect of Gnuastro that might be a little troubling to new GNU/Linux users is that (at least for the time being) it only has a command-line user interface (CLI).
This might be contrary to the mostly graphical user interface (GUI) experience with proprietary operating systems.
Since the various actions available aren't always on the screen, the command-line interface can be complicated, intimidating, and frustrating for a first-time user.
This is understandable and also experienced by anyone who started using the computer (from childhood) in a graphical user interface (this includes most of Gnuastro's authors).
Here we hope to convince you of the unique benefits of this interface which can greatly enhance your productivity while complementing your GUI experience.

@cindex GNOME 3
Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based operating systems now have an advanced and useful GUI.
Since the GUI was created long after the command-line, some wrongly consider the command line to be obsolete.
Both interfaces are useful for different tasks.
For example you can't view an image, video, PDF document or web page on the command-line.
On the other hand you can't reproduce your results easily in the GUI.
Therefore they should not be regarded as rivals, but as complementary user interfaces; here we will outline how the CLI can be useful in scientific programs.

You can think of the GUI as a veneer over the CLI to facilitate a small subset of all the possible CLI operations.
Each click you do on the GUI can be thought of as internally running a different CLI command.
So asymptotically (if a good designer could design a GUI that shows you all the possibilities to click on) the GUI is only as powerful as the command-line.
In practice, such graphical designers are very hard to find for every program, so the GUI operations are always a subset of the internal CLI commands.
For programs that are only made for the GUI, this results in lots of potentially useful operations not being included.
It also results in `interface design' becoming a crucially important part of any GUI program.
Scientists don't usually have enough resources to hire a graphical designer; furthermore, GUI code is far more complex than CLI code, which is harmful for scientific software, see @ref{Science and its tools}.

@cindex GUI: repeating operations
For programs that have a GUI, one action on the GUI (moving and clicking a mouse, or tapping a touchscreen) might be more efficient and easier than its CLI counterpart (typing the program name and your desired configuration).
However, if you have to repeat that same action more than once, the GUI will soon become frustrating and prone to errors.
Unless the designers of a particular program decided to design such a system for a particular GUI action, there is no general way to run any possible series of actions automatically on the GUI.

@cindex GNU Bash
@cindex Reproducible results
@cindex CLI: repeating operations
On the command-line, you can run any series of actions (coming from various CLI-capable programs), which you have decided yourself, in any possible permutation, with one command@footnote{By writing a shell script and running it, for example see the tutorials in @ref{Tutorials}.}.
This allows for much more creativity and exact reproducibility that is not possible for a GUI user.
For technical and scientific operations, where the same operation (using various programs) has to be done on a large set of data files, this is crucially important.
Exact reproducibility is also a foundational principle of scientific results.
The most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash; we strongly encourage you to put aside several hours and go through this beautifully explained web page: @url{https://flossmanuals.net/command-line/}.
You don't need to read or even fully understand the whole thing; a general knowledge of the first few chapters is enough to get you going.
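
As a small taste of this power, the single (hypothetical) command below runs Crop with the same region on every FITS file in the current directory; the file names and pixel ranges are only illustrative assumptions:

@example
$ for f in *.fits; do astcrop $f --mode=img \
                              --section=100:250,100:250; done
@end example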

Since the operations in the GUI are limited and visible, reading a manual is not that important in the GUI (most programs don't even have any!).
However, to give you the creative power explained above, with a CLI program it is best if you first read the manual of any program you are using.
You don't need to memorize any details; only an understanding of the generalities is needed.
Once you start working, there are easier ways to remember a particular option or operation detail, see @ref{Getting help}.

@cindex GNU Emacs
@cindex Virtual console
To experience the command-line in its full glory and not in the GUI terminal emulator, press the following keys together: @key{CTRL+ALT+F4}@footnote{Instead of @key{F4}, you can use any of the keys from @key{F1} to @key{F6} for different virtual consoles depending on your GNU/Linux distribution, try them all out.
You can also run a separate GUI from within this console if you want to.} to access the virtual console.
To return to your GUI, press the same keys above, replacing @key{F4} with @key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux distribution).
In the virtual console, the GUI, with all its distracting colors and information, is gone, enabling you to focus entirely on your actual work.

@cindex Resource heavy operations
For operations that use a lot of your system's resources (processing a large number of large astronomical images for example), the virtual console is the place to run them.
This is because the GUI is not competing with your research work for your system's RAM and CPU.
Since the virtual consoles are completely independent, you can even log out of your GUI environment to give even more of your hardware resources to the programs you are running and thus reduce the operating time.

@cindex Secure shell
@cindex SSH
@cindex Remote operation
Since it uses far fewer system resources, the CLI is also convenient for remote access to your computer.
Using secure shell (SSH) you can log in securely to your system (similar to the virtual console) from anywhere, even if the connection speeds are low.
There are apps for smartphones and tablets which allow you to do this.
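
For example, logging into a (hypothetical) remote computer is a single command; everything you type afterwards runs on that remote system:

@example
$ ssh yourname@@example.com
@end example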





@node Report a bug, Suggest new feature, New to GNU/Linux?, Introduction
@section Report a bug

@cindex Bug
@cindex Wrong output
@cindex Software bug
@cindex Output, wrong
@cindex Wrong results
@cindex Results, wrong
@cindex Halted program
@cindex Program crashing
@cindex Inconsistent results
According to Wikipedia ``a software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways''.
So when you see that a program is crashing, not reading your input correctly, giving the wrong results, or not writing your output correctly, you have found a bug.
In such cases, it is best if you report the bug to the developers.
The programs will also inform you if known impossible situations occur (which are caused by something unexpected) and will ask you to report the issue.

@cindex Bug reporting
Prior to actually filing a bug report, it is best to search previous reports.
The issue might have already been found and even solved.
The best place to check if your bug has already been discussed is the bugs tracker on @ref{Gnuastro project webpage} at @url{https://savannah.gnu.org/bugs/?group=gnuastro}.
In the top search fields (under ``Display Criteria'') set the ``Open/Closed'' drop-down menu to ``Any'' and choose the respective program or general category of the bug in ``Category'' and click the ``Apply'' button.
The results colored green have already been solved and the status of those colored in red is shown in the table.

@cindex Version control
Recently corrected bugs are probably not yet publicly released because they are scheduled for the next Gnuastro stable release.
If the bug is solved but not yet released and it is an urgent issue for you, you can get the version controlled source and compile that, see @ref{Version controlled source}.

To solve the issue as readily as possible, please follow the guidelines below in your bug report.
The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How To Ask Questions The Smart Way} essays also provide some good generic advice for all software (don't contact their authors for Gnuastro's problems).
Mastering the art of giving good bug reports (like asking good questions) can greatly enhance your experience with any free and open source software.
So investing the time to read through these essays will greatly reduce your frustration with a large range of software (not just Gnuastro) when something doesn't work the way you feel it is supposed to.

@table @strong

@item Be descriptive
Please provide as many details as possible and be very descriptive.
Explain what you expected and what the output was: it might be that your expectation was wrong.
Also please clearly state which sections of the Gnuastro book (this book), or other references you have studied to understand the problem.
This can be useful in correcting the book (adding links to likely places where users will check).
But more importantly, it will be encouraging for the developers, since you are showing how serious you are about the problem and that you have actually put some thought into it.
``To be able to ask a question clearly is two-thirds of the way to getting it answered.'' -- John Ruskin (1819-1900).

@item Individual and independent bug reports
If you have found multiple bugs, please send them as separate (and independent) bugs (as much as possible).
This will significantly help us in managing and resolving them sooner.

@cindex Reproducible bug reports
@item Reproducible bug reports
If we cannot exactly reproduce your bug, then it is very hard to resolve it.
So please send us a minimal working example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}} along with the description; a sketch of such commands is given just after this list.
For example when running a program, please send us the full command-line text and the output with the @option{-P} option, see @ref{Operating mode options}.
If the problem only occurs for a certain input, also send us that input file.
In case the input FITS file is large, please use Crop to crop the problematic section and make it as small as possible, so it can easily be uploaded and downloaded without wasting the archive's storage, see @ref{Crop}.
@end table
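
@noindent
For example, the two (hypothetical) commands below sketch how such a minimal working example might be prepared: the first crops the problematic region into a small file, and the second prints the full configuration that the program would run with (to include in your report):

@example
$ astcrop problem.fits --mode=img --section=100:300,100:300 \
          --output=bug-sample.fits
$ astnoisechisel bug-sample.fits -P
@end example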

@noindent
There are generally two ways to inform us of bugs:

@itemize
@cindex Mailing list archives
@cindex Mailing list: bug-gnuastro
@cindex @code{bug-gnuastro@@gnu.org}
@item
Send a mail to @code{bug-gnuastro@@gnu.org}.
Any mail you send to this address will be distributed through the bug-gnuastro mailing list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}.
This is the simplest way to send us bug reports.
The developers will then register the bug into the project web page (next choice) for you.

@cindex Gnuastro project page
@cindex Support request manager
@cindex Submit new tracker item
@cindex Anonymous bug submission
@item
Use the Gnuastro project web page at @url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways to get to the submission page as listed below.
Fill in the form as described below and submit it (see @ref{Gnuastro project webpage} for more on the project web page).

@itemize

@item
Using the top horizontal menu items, immediately under the top page title.
Hovering your mouse on ``Support'' will open a drop-down list.
Select ``Submit new''.

@item
In the main body of the page, under the ``Communication tools'' section, click on ``Submit new item''.

@end itemize
@end itemize

@cindex Tracker
@cindex Bug tracker
@cindex Task tracker
@cindex Viewing trackers
Once an issue has been registered in the mailing list or web page, the developers will add it to either the ``Bug Tracker'' or ``Task Manager'' trackers of the Gnuastro project web page.
These two trackers can only be edited by the Gnuastro project developers, but they can be browsed by anyone, so you can follow the progress on your bug.
You are most welcome to join us in developing Gnuastro, and fixing the bug you have found may be a good starting point.
Gnuastro is designed to be easy for anyone to develop (see @ref{Science and its tools}) and there is a full chapter devoted to developing it: @ref{Developing}.


@node Suggest new feature, Announcements, Report a bug, Introduction
@section Suggest new feature

@cindex Feature requests
@cindex Additions to Gnuastro
We would always be happy to hear of suggested new features.
For every program there are already lists of features that we are planning to add.
You can see the current list of plans from the Gnuastro project web page at @url{https://savannah.gnu.org/projects/gnuastro/} and following @clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the top of the page immediately under the title, see @ref{Gnuastro project webpage}.
If you want to request a feature for an existing program, click on the ``Display Criteria'' above the list and under ``Category'', choose that particular program.
Under ``Category'' you can also see the existing suggestions for new programs or other cases like installation, documentation or libraries.
Also be sure to set the ``Open/Closed'' value to ``Any''.

If the feature you want to suggest is not already listed in the task manager, then follow the steps that are fully described in @ref{Report a bug}.
Please keep in mind that the developers are all busy with their own astronomical research, implementing the existing ``task''s and resolving bugs.
Gnuastro is a volunteer effort and none of the developers are paid for their hard work.
So, although we will try our best, please don't expect your suggested feature to be immediately included (with the next release of Gnuastro).

The best person to implement the exciting new feature you have in mind is you, since you have the motivation and need.
In fact, Gnuastro is designed to make it as easy as possible for you to hack into it (add new features, change existing ones and so on), see @ref{Science and its tools}.
Please have a look at the chapter devoted to developing (@ref{Developing}) and start implementing your desired feature.
Once you have added it, you can use it for your own work and, if you feel you want others to benefit from your work, you can request for it to become part of Gnuastro.
You can then join the developers and start maintaining your own part of Gnuastro.
If you choose to take this path of action, please contact us beforehand (@ref{Report a bug}) so we can avoid possible duplicate activities and get interested people in contact.

@cartouche
@noindent
@strong{Gnuastro is a collection of low level programs:} As described in @ref{Program design philosophy}, a founding principle of Gnuastro is that each library or program should be basic and low-level.
High level jobs should be done by running the separate programs or separate functions in succession through a shell script, or by calling the libraries from higher level functions; see the examples in @ref{Tutorials}.
So when making the suggestions please consider how your desired job can best be broken into separate steps and modularized.
@end cartouche



@node Announcements, Conventions, Suggest new feature, Introduction
@section Announcements

@cindex Announcements
@cindex Mailing list: info-gnuastro
Gnuastro has a dedicated mailing list for making announcements (@code{info-gnuastro}).
Anyone can subscribe to this mailing list.
Anytime there is a new stable or test release, an email will be circulated there.
The email contains a summary of the overall changes along with a detailed list (from the @file{NEWS} file).
This mailing list is thus the best way to stay up to date with new releases and easily learn about updated/new features or dependencies (see @ref{Dependencies}).

To subscribe to this list, please visit @url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}.
Traffic (number of mails per unit time) in this list is designed to be low: only a handful of mails per year.
Previous announcements are available on @url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.




@node Conventions, Acknowledgments, Announcements, Introduction
@section Conventions
In this book we have the following conventions:

@itemize

@item
All commands that are to be run on the shell (command-line) prompt as the user start with a @command{$}.
In case they must be run as a super-user or system administrator, they will start with a single @command{#}.
If the command is on a separate line and the next line @code{is also in the code type face}, but doesn't have any of the @command{$} or @command{#} signs, then it is the output of the command after it is run.
As a user, you don't need to type those lines.
A line that starts with @command{##} is just a comment for explaining the command to a human reader and must not be typed.

@item
If the command becomes larger than the page width, a @key{\} is inserted in the code.
If you are typing the code by hand on the command-line, you don't need to use multiple lines or add the extra space characters, so you can omit them.
If you want to copy and paste these examples (highly discouraged!) then the @key{\} should stay.

The @key{\} character is a shell escape character, commonly used to make characters that have a special meaning for the shell lose that meaning (the shell will not treat them specially if there is a @key{\} behind them).
When it is the last character in a line (the next character is a new-line character), the new-line character loses its meaning and the shell sees it as a simple white-space character, enabling you to use multiple lines to write your commands; see the example after this list.

@end itemize
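
For example, in the (hypothetical) command below, the first line is a comment and the @key{\} lets the command continue on the next line:

@example
## Crop a region of the image (the file name is hypothetical).
$ astcrop image.fits --mode=img \
          --section=100:200,100:200
@end example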



@node Acknowledgments,  , Conventions, Introduction
@section Acknowledgments

Gnuastro would not have been possible without scholarships and grants from several funding institutions.
We thus ask that, if you used Gnuastro in any of your papers/reports, you add the proper citation and acknowledge the funding agencies/projects.
For details of which papers to cite (they may be different for different programs) and to get the acknowledgment statement to include in your paper, please run the relevant programs with the common @option{--cite} option like the example commands below (for more on @option{--cite}, please see @ref{Operating mode options}).

@example
$ astnoisechisel --cite
$ astmkcatalog --cite
@end example

Here, we'll acknowledge all the institutions (and their grants) along with the people who helped make Gnuastro possible.
The full list of Gnuastro authors is available at the start of this book and the @file{AUTHORS} file in the source code (both are generated automatically from the version controlled history).
The plain text file @file{THANKS}, which is also distributed along with the source code, contains the list of people and institutions who played an indirect role in Gnuastro (who have not committed any code in the Gnuastro version controlled history).

The Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD degree in Tohoku University Astronomical Institute had an instrumental role in the long term learning and planning that made the idea of Gnuastro possible.
The very critical view points of Professor Takashi Ichikawa (Mohammad's adviser) were also instrumental in the initial ideas and creation of Gnuastro.
Afterwards, the European Research Council (ERC) advanced grant 339659-MUSICOS (Principal investigator: Roland Bacon) was vital in the growth and expansion of Gnuastro.
Working with Roland at the Centre de Recherche Astrophysique de Lyon (CRAL), enabled a thorough re-write of the core functionality of all libraries and programs, turning Gnuastro into the large collection of generic programs and libraries it is today.
Work on improving Gnuastro and making it mature is now continuing primarily in the Instituto de Astrofisica de Canarias (IAC) and in particular in collaboration with Johan Knapen and Ignacio Trujillo.

@c To the developers: please keep this in the same order as the THANKS file
@c (alphabetical, except for the names in the paragraph above).
In general, we would like to gratefully thank the following people for their useful and constructive comments and suggestions (in alphabetical order by family name):
Valentina Abril-melgarejo,
Marjan Akbari,
Carlos Allende Prieto,
Hamed Altafi,
Roland Bacon,
Roberto Baena Gall@'e,
Zahra Bagheri,
Karl Berry,
Leindert Boogaard,
Nicolas Bouch@'e,
Stefan Br@"uns,
Fernando Buitrago,
Adrian Bunk,
Rosa Calvi,
Mark Calabretta,
Nushkia Chamba,
Benjamin Clement,
Nima Dehdilani,
Antonio Diaz Diaz,
Alexey Dokuchaev,
Pierre-Alain Duc,
Elham Eftekhari,
Gaspar Galaz,
Th@'er@`ese Godefroy,
Madusha Gunawardhana,
Bruno Haible,
Stephen Hamer,
Takashi Ichikawa,
Ra@'ul Infante Sainz,
Brandon Invergo,
Oryna Ivashtenko,
Aur@'elien Jarno,
Lee Kelvin,
Brandon Kelly,
Mohammad-Reza Khellat,
Johan Knapen,
Geoffry Krouchi,
Floriane Leclercq,
Alan Lefor,
Sebasti@'an Luna Valero,
Guillaume Mahler,
Raphael Morales,
Juan Molina Tobar,
Francesco Montanari,
Dmitrii Oparin,
Bertrand Pain,
William Pence,
Mamta Pommier,
Marcel Popescu,
Bob Proulx,
Joseph Putko,
Samane Raji,
Teymoor Saifollahi,
Joanna Sakowska,
Elham Saremi,
Yahya Sefidbakht,
Alejandro Serrano Borlaff,
Zahra Sharbaf,
David Shupe,
Jenny Sorce,
Lee Spitler,
Richard Stallman,
Michael Stein,
Ole Streicher,
Alfred M. Szmidt,
Michel Tallon,
Juan C. Tello,
@'Eric Thi@'ebaut,
Ignacio Trujillo,
David Valls-Gabaud,
Aaron Watkins,
Michael H.F. Wilkinson,
Christopher Willmer,
Sara Yousefi Taemeh,
Johannes Zabl.
The GNU French Translation Team is also managing the French version of the top Gnuastro web page, which we highly appreciate.
Finally, we should thank all the (sometimes anonymous) people in various online forums who patiently answered all our small (but important) technical questions.

All work on Gnuastro has been voluntary, but the authors are most grateful to the following institutions (in chronological order) for hosting/supporting us in our research.
Where necessary, these institutions have disclaimed any ownership of the parts of Gnuastro that were developed there, thus ensuring the freedom of Gnuastro for the future (see @ref{Copyright assignment}).
We highly appreciate their support for free software, and thus free science, and therefore a free society.

@quotation
Tohoku University Astronomical Institute, Sendai, Japan.@*
University of Salento, Lecce, Italy.@*
Centre de Recherche Astrophysique de Lyon (CRAL), Lyon, France.@*
Instituto de Astrofisica de Canarias (IAC), Tenerife, Spain.@*
Google Summer of Code 2020.
@end quotation













@node Tutorials, Installation, Introduction, Top
@chapter Tutorials

@cindex Tutorial
@cindex Cookbook
To help new users have a smooth and easy start with Gnuastro, in this chapter several thoroughly elaborated tutorials, or cookbooks, are provided.
These tutorials demonstrate the capabilities of different Gnuastro programs and libraries, along with tips and guidelines for the best practices of using them in various realistic situations.

We strongly recommend going through these tutorials to get a good feeling of how the programs are related (built in a modular design to be used together in a pipeline), very similar to the core Unix-based programs that they were modeled on.
These tutorials will therefore greatly help you use Gnuastro's programs (and generally, the Unix-like command-line environment) effectively for your research.

In @ref{Sufi simulates a detection}, we'll start with a fictional@footnote{This fictional tutorial is not intended to be a historical reference (the historical facts within it used Wikipedia as a reference).
This form of presenting a tutorial was influenced by the PGF/TikZ and Beamer manuals.
They are both packages in @TeX{} and @LaTeX{}; the first is a high-level vector graphic programming environment, while with the second you can make presentation slides.
On a similar topic, there are also some nice words of wisdom for Unix-like systems called @url{http://catb.org/esr/writings/unix-koans, Rootless Root}.
These also have a similar style, but they use a mythical figure named Master Foo.
If you already have some experience in Unix-like systems, you will definitely find these Unix Koans entertaining/educative.} tutorial explaining how Abd al-rahman Sufi (903 -- 986 A.D., to whom the first recorded description of ``nebulous'' objects in the heavens is attributed) could have used some of Gnuastro's programs for a realistic simulation of his observations, and to see if his detection of nebulous objects was trustworthy.
Because all conditions are under control in a simulated/mock dataset, such datasets can be a valuable tool to inspect the limitations of your data analysis and processing.
But they need to be as realistic as possible, so the first tutorial is dedicated to this important step of an analysis.

The next two tutorials (@ref{General program usage tutorial} and @ref{Detecting large extended targets}) use real input datasets from some of the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky Survey (SDSS) respectively.
Their aim is to demonstrate some real-world problems that many astronomers often face, and how they can be solved with Gnuastro's programs.

The ultimate aim of @ref{General program usage tutorial} is to detect galaxies in a deep HST image, measure their positions and brightness and select those with the strongest colors.
In the process, it takes many detours to introduce you to the useful capabilities of many of the programs.
So please be patient in reading it.
If you don't have much time and can only try one of the tutorials, we recommend this one.

@cindex PSF
@cindex Point spread function
@ref{Detecting large extended targets} deals with a major problem in astronomy: effectively detecting the faint outer wings of bright (and large) nearby galaxies to extremely low surface brightness levels (roughly one quarter of the local noise level in the example discussed).
Besides the interesting scientific questions in these low-surface brightness features, failure to properly detect them will bias the measurements of the background objects and the survey's noise estimates.
This is an important issue, especially in wide surveys, because bright/large galaxies and stars@footnote{Stars also have similarly large and extended wings due to the point spread function, see @ref{PSF}.} cover a significant fraction of the survey area.

In these tutorials, we have intentionally avoided too many cross references to make them easier to read.
For more information about a particular program, you can visit the section with the same name as the program in this book.
Each program section in the subsequent chapters starts by explaining the general concepts behind what it does, for example see @ref{Convolve}.
If you only want practical information on running a program, for example its options/configuration, input(s) and output(s), please consult the subsection titled ``Invoking ProgramName'', for example see @ref{Invoking astnoisechisel}.
For an explanation of the conventions we use in the example codes through the book, please see @ref{Conventions}.

@menu
* Sufi simulates a detection::  Simulating a detection.
* General program usage tutorial::  Tutorial on many programs in generic scenario.
* Detecting large extended targets::  NoiseChisel for huge extended targets.
@end menu


@node Sufi simulates a detection, General program usage tutorial, Tutorials, Tutorials
@section Sufi simulates a detection

@cindex Azophi
@cindex Abd al-rahman Sufi
@cindex Sufi, Abd al-rahman
It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986 A.D.)@footnote{In Latin Sufi is known as Azophi.
He was an Iranian astronomer.
His manuscript ``Book of fixed stars'' contains the first recorded observations of the Andromeda galaxy, the Large Magellanic Cloud and seven other non-stellar or `nebulous' objects.}  is in Shiraz as a guest astronomer.
He had come there to use the advanced 123 centimeter astrolabe for his studies on the Ecliptic.
However, something was bothering him for a long time.
While mapping the constellations, there were several non-stellar objects that he had detected in the sky, one of them was in the Andromeda constellation.
During a trip he had to Yemen, Sufi had seen another such object in the southern skies looking over the Indian ocean.
He wasn't sure if such cloud-like non-stellar objects (which he was the first to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or if they were only the result of some bias in his observations.
Could such diffuse objects actually be detected at all with his detection technique?

@cindex Almagest
@cindex Claudius Ptolemy
@cindex Ptolemy, Claudius
He still had a few hours left until nightfall (when he would continue his studies on the ecliptic) so he decided to find an answer to this question.
He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D) Almagest and had made lots of corrections to it, in particular in measuring the brightness.
Using that same experience, he was able to measure a magnitude for the objects and wanted to simulate his observation to see if a simulated object with the same brightness and size could be detected in simulated noise with the same detection technique.
The general outline of the steps he wants to take are:

@enumerate

@item
Make some mock profiles in an over-sampled image.
The initial mock image has to be over-sampled prior to convolution or other forms of transformation in the image.
Through his experiences, Sufi knew that this is because the image of heavenly bodies is actually transformed by the atmosphere or other sources outside the atmosphere (for example gravitational lenses) prior to being sampled on an image.
Since that transformation occurs on a continuous grid, to best approximate it, he should do all the work on a finer pixel grid.
In the end he can re-sample the result to the initially desired grid size.

@item
@cindex PSF
Convolve the image with a point spread function (PSF, see @ref{PSF}) that is over-sampled to the same resolution as the mock image.
Since he wants to finish in a reasonable time and the PSF kernel will be very large due to oversampling, he has to use frequency domain convolution which has the side effect of dimming the edges of the image.
So in the first step above he also has to build the image to be larger by at least half the width of the PSF convolution kernel on each edge.

@item
With all the transformations complete, the image should be re-sampled to the same pixel size as his detector.

@item
He should crop away those extra pixels on all edges to eliminate the frequency domain convolution artifacts in the final product.

@item
He should add noise to the (until now, noise-less) mock image.
After all, all observations have noise associated with them.

@end enumerate

Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in Isfahan (where he worked) and had installed it on his computer a year before.
It had tools to do all the steps above.
He had used MakeProfiles before, but wasn't sure which columns he had chosen in his user or system-wide configuration files for which parameters, see @ref{Configuration files}.
So to start his simulation, Sufi runs MakeProfiles with the @option{-P} option to make sure what columns in a catalog MakeProfiles currently recognizes and the output image parameters.
In particular, Sufi is interested in the recognized columns (shown below).

@example
$ astmkprof -P

[[[ ... Truncated lines ... ]]]

# Output:
 type         float32     # Type of output: e.g., int16, float32, etc...
 mergedsize   1000,1000   # Number of pixels along first FITS axis.
 oversample   5           # Scale of oversampling (>0 and odd).

[[[ ... Truncated lines ... ]]]

# Columns, by info (see `--searchin'), or number (starting from 1):
 ccol         2           # Coordinate columns (one call for each dimension).
 ccol         3           # Coordinate columns (one call for each dimension).
 fcol         4           # sersic (1), moffat (2), gaussian (3),
                          # point (4), flat (5), circumference (6).
 rcol         5           # Effective radius or FWHM in pixels.
 ncol         6           # Sersic index or Moffat beta.
 pcol         7           # Position angle.
 qcol         8           # Axis ratio.
 mcol         9           # Magnitude.
 tcol         10          # Truncation in units of radius or pixels.

[[[ ... Truncated lines ... ]]]

@end example

@noindent
In Gnuastro, column counting starts from 1, so the columns are ordered such that the first column (number 1) can be an ID he specifies for each object (which MakeProfiles ignores); each subsequent column is used for another property of the profile.
It is also possible to use column names for the values of these options and change these defaults, but Sufi preferred to stick to the defaults.
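
For example, if the catalog were instead a table with named columns, the defaults could (hypothetically) be overridden like this, where @code{RA}, @code{DEC} and @code{MAG} are assumed column names:

@example
$ astmkprof catalog.fits --ccol=RA --ccol=DEC --mcol=MAG
@end example
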
Fortunately MakeProfiles also has the capability to make the PSF which is to be used on the mock image; using the @option{--prepforconv} option, he can also make the mock image larger by the correct amount, with all the sources shifted by the correct amount.

For his initial check he decides to simulate the nebula in the Andromeda constellation.
The night he was observing, the PSF had a FWHM of about 5 pixels, so as the first row (profile), he defines the PSF parameters and sets the radius column (@code{rcol} above, fifth column) to @code{5.000}; he also chooses a Moffat function for its functional form.
Remembering how diffuse the nebula in the Andromeda constellation was, he decides to simulate it with a mock S@'{e}rsic index 1.0 profile.
He wants the output to be 499 pixels by 499 pixels, so he can put the center of the mock profile in the central pixel of the image (note that an even number doesn't have a central element).

Looking at his drawings of it, he decides a reasonable effective radius for it would be 40 pixels on this image pixel scale, he sets the axis ratio and position angle to approximately correct values too and finally he sets the total magnitude of the profile to 3.44 which he had accurately measured.
Sufi also decides to truncate both the mock profile and PSF at 5 times the respective radius parameters.
In the end he decides to put four stars on the four corners of the image at very low magnitudes as a visual scale.
While he was preparing the catalog, one of his students approached him and was also following the steps.

Using all the information above, he creates the catalog of mock profiles he wants in a file named @file{cat.txt} (short for catalog) using his favorite text editor and stores it in a directory named @file{simulationtest} in his home directory.
[The @command{cat} command prints the contents of a file; its name is short for ``concatenation''.
So please copy-paste the lines after ``@command{cat cat.txt}'' into @file{cat.txt} when the editor opens in the commands below; note that there are 7 lines, the first starting with @key{#}.
Also be careful when copying from the PDF format; the Info, web, or text formats shouldn't have any problem.]

@example
$ mkdir ~/simulationtest
$ cd ~/simulationtest
$ pwd
/home/rahman/simulationtest
$ emacs cat.txt
$ ls
cat.txt
$ cat cat.txt
# Column 4: PROFILE_NAME [,str6] Radial profile's functional name
 1  0.0000   0.0000  moffat  5.000  4.765  0.0000  1.000  30.000  5.000
 2  250.00   250.00  sersic  40.00  1.000  -25.00  0.400  3.4400  5.000
 3  50.000   50.000  point   0.000  0.000  0.0000  0.000  6.0000  0.000
 4  450.00   50.000  point   0.000  0.000  0.0000  0.000  6.5000  0.000
 5  50.000   450.00  point   0.000  0.000  0.0000  0.000  7.0000  0.000
 6  450.00   450.00  point   0.000  0.000  0.0000  0.000  7.5000  0.000
@end example

@noindent
The zero point magnitude for his observation was 18.
Now he has all the necessary parameters and runs MakeProfiles with the following command:

@example

$ astmkprof --prepforconv --mergedsize=499,499 --zeropoint=18.0 cat.txt
MakeProfiles started on Sat Oct  6 16:26:56 953
  - 6 profiles read from cat.txt
  - Random number generator (RNG) type: mt19937
  - Using 8 threads.
  ---- row 2 complete, 5 left to go
  ---- row 3 complete, 4 left to go
  ---- row 4 complete, 3 left to go
  ---- row 5 complete, 2 left to go
  ---- ./0_cat.fits created.
  ---- row 0 complete, 1 left to go
  ---- row 1 complete, 0 left to go
  - ./cat.fits created.                                0.041651 seconds
MakeProfiles finished in 0.267234 seconds

$ ls
0_cat.fits  cat.fits  cat.txt
@end example

@cindex Oversample
@noindent
The file @file{0_cat.fits} is the PSF Sufi had asked for, and @file{cat.fits} is the image containing the main objects in the catalog.
The size of @file{cat.fits} was surprising for the student: instead of 499 by 499 (as requested), it was 2615 by 2615 pixels (from the command below):

@example
$ astfits cat.fits -h1 | grep NAXIS
@end example

@noindent
So Sufi explained why oversampling is important in modeling, especially for parts of the image where the flux changes significantly over a pixel.
Recall that when you oversample the model (for example by 5 times), for every desired pixel, you get 25 pixels (@mymath{5\times5}).
Sufi then explained that after convolving (next step below) we will down-sample the image to get our originally desired size/resolution.

Sufi then opened @file{cat.fits} [you can use any FITS viewer, for example @command{ds9}].
After seeing the image, the student complained that only the large elliptical model for the Andromeda nebula can be seen in the center.
He couldn't see the four stars that we had also requested in the catalog.
So Sufi had to explain that the stars are there in the image, but the reason they aren't visible when looking at the whole image at once is that they only cover a single pixel!
To prove it, he centered the image around the coordinates 2308 and 2308, where one of the stars is located in the over-sampled image [you can do this in @command{ds9} by selecting ``Pan'' in the ``Edit'' menu, then clicking around that position].
Sufi then zoomed in to that region and soon, the star's non-zero pixel could be clearly seen.

Sufi explained that the stars will take the shape of the PSF (cover an area of more than one pixel) after convolution.
If we didn't have an atmosphere and we didn't need an aperture, then stars would only cover a single pixel with normal CCD resolutions.
So Sufi convolved the image with this command:

@example
$ astconvolve --kernel=0_cat.fits cat.fits
Convolve started on Sat Oct  6 16:35:32 953
  - Using 8 CPU threads.
  - Input: cat.fits (hdu: 1)
  - Kernel: 0_cat.fits (hdu: 1)
  - Input and Kernel images padded.                    0.075541 seconds
  - Images converted to frequency domain.              6.728407 seconds
  - Multiplied in the frequency domain.                0.040659 seconds
  - Converted back to the spatial domain.              3.465344 seconds
  - Padded parts removed.                              0.016767 seconds
  - Output: cat_convolved.fits
Convolve finished in:  10.422161 seconds

$ ls
0_cat.fits  cat_convolved.fits  cat.fits  cat.txt
@end example
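
@noindent
As an optional sanity check (convolution with a normalized kernel shouldn't change the total sum of the pixel values), the two sums reported by the commands below should be almost identical; this is just an illustrative check with Gnuastro's Statistics program, so the outputs aren't shown:

@example
$ aststatistics cat.fits --sum
$ aststatistics cat_convolved.fits --sum
@end example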

@noindent
When convolution finished, Sufi opened @file{cat_convolved.fits} and the four stars could be easily seen now.
It was interesting for the student that all the flux in that single pixel is now distributed over so many pixels (the sum of all the pixels in each convolved star is actually equal to the value of the single pixel before convolution).
Sufi explained how a PSF with a larger FWHM would make the points even wider than this (distributing their flux in a larger area).
With the convolved image ready, they were prepared to re-sample it to the original pixel scale Sufi had planned [from the @command{$ astmkprof -P} command above, recall that MakeProfiles had over-sampled the image by 5 times].
Sufi explained the basic concepts of warping the image to his student and ran Warp with the following command:

@example
$ astwarp --scale=1/5 --centeroncorner cat_convolved.fits
Warp started on Sat Oct  6 16:51:59 953
 Using 8 CPU threads.
 Input: cat_convolved.fits (hdu: 1)
 matrix:
        0.2000   0.0000   0.4000
        0.0000   0.2000   0.4000
        0.0000   0.0000   1.0000

$ ls
0_cat.fits          cat_convolved_scaled.fits     cat.txt
cat_convolved.fits  cat.fits

$ astfits -p cat_convolved_scaled.fits | grep NAXIS
NAXIS   =                    2 / number of data axes
NAXIS1  =                  523 / length of data axis 1
NAXIS2  =                  523 / length of data axis 2
@end example

@noindent
@file{cat_convolved_scaled.fits} now has the correct pixel scale.
However, the image is still larger than what we had wanted; it is 523 (@mymath{499+12+12}) by 523 pixels.
The student is slightly confused, so Sufi also re-samples the PSF with the same scale by running

@example
$ astwarp --scale=1/5 --centeroncorner 0_cat.fits
$ astfits -p 0_cat_scaled.fits | grep NAXIS
NAXIS   =                    2 / number of data axes
NAXIS1  =                   25 / length of data axis 1
NAXIS2  =                   25 / length of data axis 2
@end example

@noindent
Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how frequency space convolution will dim the edges and that is why he added the @option{--prepforconv} option to MakeProfiles, see @ref{If convolving afterwards}.
Now that convolution is done, Sufi can remove those extra pixels using Crop with the command below.
Crop's @option{--section} option accepts coordinates inclusively and counting from 1 (according to the FITS standard), so the crop region's first pixel has to be 13, not 12.

@example
$ astcrop cat_convolved_scaled.fits --section=13:*-12,13:*-12    \
          --mode=img --zeroisnotblank
Crop started on Sat Oct  6 17:03:24 953
  - Read metadata of 1 image.                          0.001304 seconds
  ---- ...nvolved_scaled_cropped.fits created: 1 input.
Crop finished in:  0.027204 seconds

$ ls
0_cat.fits          cat_convolved_scaled_cropped.fits  cat.fits
cat_convolved.fits  cat_convolved_scaled.fits          cat.txt
@end example

@noindent
Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499} pixels and the mock Andromeda galaxy is centered on the central pixel (open the image in a FITS viewer and confirm this by zooming into the center, note that an even-width image wouldn't have a central pixel).
This is the same dimensions as Sufi had desired in the beginning.
All this trouble was certainly worth it because now there is no dimming on the edges of the image and the profile centers are more accurately sampled.

The final step to simulate a real observation would be to add noise to the image.
Sufi set the zero point magnitude to the same value that he set when making the mock profiles; looking again at his observation log, he had measured that the background flux near the nebula had a magnitude of 7 that night.
So using these values he ran MakeNoise:

@example
$ astmknoise --zeropoint=18 --background=7 --output=out.fits    \
             cat_convolved_scaled_cropped.fits
MakeNoise started on Sat Oct  6 17:05:06 953
  - Generator type: ranlxs1
  - Generator seed: 1428318100
MakeNoise finished in:  0.033491 (seconds)

$ ls
0_cat.fits         cat_convolved_scaled_cropped.fits cat.fits  out.fits
cat_convolved.fits cat_convolved_scaled.fits         cat.txt
@end example

@noindent
The @file{out.fits} file now contains the noised image of the mock catalog Sufi had asked for.
Seeing how the @option{--output} option allows the user to specify the name of the output file, the student was confused and wanted to know why Sufi hadn't used it before.
Sufi then explained that for intermediate steps it is best to rely on the automatic output, see @ref{Automatic output}.
Doing so will give all the intermediate files the same basic name structure, so in the end you can simply remove them all with the shell's capabilities.
So Sufi decided to show this to the student by making a shell script from the commands he had used before.

The command-line shell has the capability to read all the separate input commands from a file.
This is useful when you want to do the same thing multiple times, with only the names of the files or minor parameters changing between the different instances.
Using the shell's history (by pressing the up keyboard key) Sufi reviewed all the commands and then he retrieved the last 5 commands with the @command{$ history 5} command.
He selected all those lines he had input and put them in a text file named @file{mymock.sh}.
Then he defined the @code{edge} and @code{base} shell variables for easier customization later.
Finally, before every command, he added some comments (lines starting with @key{#}) for future readability.

@example
edge=12
base=cat

# Stop running next commands if one fails.
set -e

# Remove any (possibly) existing output (from previous runs)
# before starting.
rm -f out.fits

# Run MakeProfiles to create an oversampled FITS image.
astmkprof --prepforconv --mergedsize=499,499 --zeropoint=18.0 \
          "$base".txt

# Convolve the created image with the kernel.
astconvolve --kernel=0_"$base".fits "$base".fits

# Scale the image back to the intended resolution.
astwarp --scale=1/5 --centeroncorner "$base"_convolved.fits

# Crop the edges out (dimmed during convolution). ‘--section’ accepts
# inclusive coordinates, so the section must start one pixel after
# the last edge pixel.
st_edge=$(( edge + 1 ))
astcrop "$base"_convolved_scaled.fits --zeroisnotblank \
        --mode=img --section=$st_edge:*-$edge,$st_edge:*-$edge

# Add noise to the image.
astmknoise --zeropoint=18 --background=7 --output=out.fits \
           "$base"_convolved_scaled_cropped.fits

# Remove all the temporary files.
rm 0*.fits "$base"*.fits
@end example

@cindex Comments
He used this chance to remind the student of the importance of comments in code or shell scripts: when writing the code, you have a good mental picture of what you are doing, so writing comments might seem superfluous and excessive.
However, in one month when you want to re-use the script, you have lost that mental picture and remembering it can be time-consuming and frustrating.
The importance of comments is further amplified when you want to share the script with a friend/colleague.
So it is good to accompany any script/code with useful comments while you are writing it (while you still have a good mental picture of what/why you are doing it).

@cindex Gedit
@cindex GNU Emacs
Sufi then explained to the eager student that you define a variable by giving it a name, followed by an @code{=} sign and the value you want.
Then you can reference that variable from anywhere in the script by calling its name with a @code{$} prefix.
So in the script whenever you see @code{$base}, the value we defined for it above is used.
If you use advanced editors like GNU Emacs or even simpler ones like Gedit (part of the GNOME graphical user interface) the variables will become a different color which can really help in understanding the script.
We have put all the @code{$base} variables in double quotation marks (@code{"}) so the variable name and the following text do not get mixed, the shell is going to ignore the @code{"} after replacing the variable value.
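
As a trivial illustration that you can try directly on the command-line, the value assigned to a variable is substituted wherever the variable is referenced:

@example
$ base=cat
$ echo "$base".txt
cat.txt
@end example
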
To make the script executable, Sufi ran the following command:

@example
$ chmod +x mymock.sh
@end example

@noindent
Then finally, Sufi ran the script, simply by calling its file name:

@example
$ ./mymock.sh
@end example

After the script finished, the only file remaining is the @file{out.fits} file that Sufi had wanted in the beginning.
Sufi then explained to the student how he could run this script anywhere that he has a catalog if the script is in the same directory.
The only thing the student had to modify in the script was the name of the catalog (the value of the @code{base} variable in the start of the script) and the value to the @code{edge} variable if he changed the PSF size.
The student was also happy to hear that he won't need to make it executable again when he makes changes later; it will remain executable unless he explicitly changes the executable flag with @command{chmod}.

The student was really excited, since now, through simple shell scripting, he could really speed up his work and run any command in any fashion he likes, allowing him to be much more creative in his work.
Until now he was using the graphical user interface, which doesn't have such a facility, and doing repetitive things with it was really frustrating; sometimes he would even make mistakes.
So he left to go and try scripting on his own computer.

Sufi could now get back to his own work and see if the simulated nebula which resembled the one in the Andromeda constellation could be detected or not.
Although it was extremely faint@footnote{The brightness of a diffuse object is added over all its pixels to give its final magnitude, see @ref{Brightness flux magnitude}.
So although the magnitude 3.44 (of the mock nebula) is orders of magnitude brighter than 6 (of the stars), the central galaxy is much fainter.
Put another way, the brightness is distributed over a large area in the case of a nebula.}, fortunately it passed his detection tests and he wrote it in the draft manuscript that would later become ``Book of fixed stars''.
He still had to check the other nebula he saw from Yemen and several other such objects, but they could wait until tomorrow (thanks to the shell script, he only has to define a new catalog).
It was nearly sunset and they had to begin preparing for the night's measurements on the ecliptic.



@node General program usage tutorial, Detecting large extended targets, Sufi simulates a detection, Tutorials
@section General program usage tutorial

@cindex Hubble Space Telescope (HST)
@cindex Colors, broad-band photometry
Measuring colors of astronomical objects in broad-band or narrow-band images is one of the most basic and common steps in astronomical analysis.
Here, we will use Gnuastro's programs to get a physical scale (area at certain redshifts) of the field we are studying, detect objects in a Hubble Space Telescope (HST) image, measure their colors, identify the ones with the strongest colors, and visually inspect those objects and their spatial positions in the image.
After this tutorial, you can also try the @ref{Detecting large extended targets} tutorial which goes into a little more detail on detecting very low surface brightness signal.

During the tutorial, we will take many detours to explain, and practically demonstrate, the many capabilities of Gnuastro's programs.
In the end you will see that the things you learned during this tutorial are much more generic than this particular problem and can be used in solving a wide variety of problems involving the analysis of data (images or tables).
So please don't rush, and go through the steps patiently to optimally master Gnuastro.

@cindex XDF survey
@cindex eXtreme Deep Field (XDF) survey
In this tutorial, we'll use the HST @url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field} (XDF) dataset.
Like almost all astronomical surveys, this dataset is free for download and usable by the public.
You will need the following tools in this tutorial: Gnuastro, SAO DS9@footnote{See @ref{SAO ds9}, available at @url{http://ds9.si.edu/site/Home.html}}, GNU Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (the most common implementation is GNU AWK@footnote{@url{https://www.gnu.org/software/gawk}}).

This tutorial was first prepared for the ``Exploring the Ultra-Low Surface Brightness Universe'' workshop (November 2017) at the ISSI in Bern, Switzerland.
It was further extended in the ``4th Indo-French Astronomy School'' (July 2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA in Lyon, France.
We are very grateful to the organizers of these workshops and the attendees for the very fruitful discussions and suggestions that made this tutorial possible.

@cartouche
@noindent
@strong{Write the example commands manually:} Try to type the example commands on your terminal manually and use the history feature of your command-line (by pressing the ``up'' button to retrieve previous commands).
Don't simply copy and paste the commands shown here.
This will help simulate future situations when you are processing your own datasets.
@end cartouche


@menu
* Calling Gnuastro's programs::  Easy way to find Gnuastro's programs.
* Accessing documentation::     Access to manual of programs you are running.
* Setup and data download::     Setup this template and download datasets.
* Dataset inspection and cropping::  Crop the flat region to use in next steps.
* Angular coverage on the sky::  Measure the field size on the sky.
* Cosmological coverage::       Measure the field size at different redshifts.
* Building custom programs with the library::  Easy way to build new programs.
* Option management and configuration files::  Dealing with options and configuring them.
* Warping to a new pixel grid::  Transforming/warping the dataset.
* NoiseChisel and Multiextension FITS files::  Running NoiseChisel and having multiple HDUs.
* NoiseChisel optimization for detection::  Check NoiseChisel's operation and improve it.
* NoiseChisel optimization for storage::  Dramatically decrease output's volume.
* Segmentation and making a catalog::  Finding true peaks and creating a catalog.
* Working with catalogs estimating colors::  Estimating colors using the catalogs.
* Column statistics color-magnitude diagram::  Visualizing column correlations.
* Aperture photometry::         Doing photometry on a fixed aperture.
* Matching catalogs::           Easily find corresponding rows from two catalogs.
* Finding reddest clumps and visual inspection::  Selecting some targets and inspecting them.
* Writing scripts to automate the steps::  Scripts will greatly help in re-doing things fast.
* Citing and acknowledging Gnuastro::  How to cite and acknowledge Gnuastro in your papers.
@end menu

@node Calling Gnuastro's programs, Accessing documentation, General program usage tutorial, General program usage tutorial
@subsection Calling Gnuastro's programs
A handy feature of Gnuastro is that all program names start with @code{ast}.
This will allow your command-line processor to easily list and auto-complete Gnuastro's programs for you.
Try typing the following command (press @key{TAB} key when you see @code{<TAB>}) to see the list:

@example
$ ast<TAB><TAB>
@end example

@noindent
Any program that starts with @code{ast} (including all Gnuastro programs) will be shown.
By choosing the subsequent characters of your desired program and pressing @key{<TAB><TAB>} again, the list will narrow down and the program name will auto-complete once your input characters are unambiguous.
In short, you often don't need to type the full name of the program you want to run.

@node Accessing documentation, Setup and data download, Calling Gnuastro's programs, General program usage tutorial
@subsection Accessing documentation

Gnuastro contains a large number of programs and it is natural to forget the details of each program's options or inputs and outputs.
Therefore, before starting the analysis steps of this tutorial, let's review how you can access this book to refresh your memory any time you want, without having to take your hands off the keyboard.

When you install Gnuastro, this book is also installed on your system along with all the programs and libraries, so you don't need an internet connection to access/read it.
Also, by accessing this book as described below, you can be sure that it corresponds to your installed version of Gnuastro.

@cindex GNU Info
GNU Info@footnote{GNU Info is already available on almost all Unix-like operating systems.} is the program in charge of displaying the manual on the command-line (for more, see @ref{Info}).
To see this whole book on your command-line, please run the following command and press subsequent keys.
Info has its own mini-environment, therefore we'll show the keys that must be pressed in the mini-environment after a @code{->} sign.
You can ignore anything after the @code{#} sign in the middle of a line; it is only there for your information.

@example
$ info gnuastro                # Open the top of the manual.
-> <SPACE>                     # All the book chapters.
-> <SPACE>                     # Continue down: show sections.
-> <SPACE> ...                 # Keep pressing space to go down.
-> q                           # Quit Info, return to the command-line.
@end example

The thing that greatly simplifies navigation in Info is the links (regions with an underline).
You can immediately go to the next link in the page with the @key{<TAB>} key and press @key{<ENTER>} on it to go into that part of the manual.
Try the commands above again, but this time also use @key{<TAB>} to go to the links and press @key{<ENTER>} on them to go to the respective section of the book.
Then follow a few more links and go deeper into the book.
To return to the previous page, press @key{l} (small L).
If you are searching for a specific phrase in the whole book (for example an option name), press @key{s} and type your search phrase and end it with an @key{<ENTER>}.
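In summary, these are the navigation keys described above that you will use most often inside Info (in the same notation as before):

@example
-> <TAB>                   # Go to the next link in the page.
-> <ENTER>                 # Follow the link under the cursor.
-> l                       # Return to the previous page.
-> s                       # Search the whole manual for a phrase.
-> q                       # Quit Info.
@end example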

You don't need to start from the top of the manual every time.
For example, to get to @ref{Invoking astnoisechisel}, run the following command.
In general, all programs have such an ``Invoking ProgramName'' section in this book.
These sections are specifically for the description of inputs, outputs and configuration options of each program.
You can access them directly for each program by giving its executable name to Info.

@example
$ info astnoisechisel
@end example

The other sections don't have such shortcuts.
To directly access them from the command-line, you need to tell Info to look into Gnuastro's manual, then look for the specific section (an unambiguous title is necessary).
For example, if you only want to review/remember NoiseChisel's @ref{Detection options}, just run the following command.
Note how case is irrelevant for Info when calling a title in this manner.

@example
$ info gnuastro "Detection options"
@end example

In general, Info is a powerful and convenient way to access this whole book with detailed information about the programs you are running.
If you are not already familiar with it, please run the following command and just read along and do what it says to learn it.
Don't stop until you feel sufficiently fluent in it.
Please invest the half hour or so necessary to start using Info comfortably.
It will greatly improve your productivity and you will start reaping the rewards of this investment very soon.

@example
$ info info
@end example

As a good scientist you need to feel comfortable playing with the features/options, and be critical of default values rather than using them blindly.
On the other hand, our human memory is limited, so it is important to be able to easily access any part of this book fast and remember the option names, what they do and their acceptable values.

If you just want the option names and a short description, calling the program with the @option{--help} option might also be a good solution like the first example below.
If you know a few characters of the option name, you can feed the output to @command{grep} like the second or third example commands.

@example
$ astnoisechisel --help
$ astnoisechisel --help | grep quant
$ astnoisechisel --help | grep check
@end example

@node Setup and data download, Dataset inspection and cropping, Accessing documentation, General program usage tutorial
@subsection Setup and data download

The first step in the analysis of the tutorial is to download the necessary input datasets.
First, to keep things clean, let's create a @file{gnuastro-tutorial} directory and continue all future steps in it:

@example
$ mkdir gnuastro-tutorial
$ cd gnuastro-tutorial
@end example

We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3, Wide Field Camera 3} dataset.
If you already have them in another directory (for example @file{XDFDIR}, with the same FITS file names), you can set the @file{download} directory to be a symbolic link to @file{XDFDIR} with a command like this:

@example
$ ln -s XDFDIR download
@end example

@noindent
Otherwise, if these images aren't already present on your system, you can make a @file{download} directory and download them into it.

@example
$ mkdir download
$ cd download
$ xdfurl=http://archive.stsci.edu/pub/hlsp/xdf
$ wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_f105w_v1_sci.fits
$ wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_f125w_v1_sci.fits
$ wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
$ cd ..
@end example

@noindent
In this tutorial, we'll just use these three filters.
Later, you may need to download more filters.
To do that, you can use the shell's @code{for} loop to download them all in series (one after the other@footnote{Note that you only have one connection to the internet, so downloading in parallel will actually be slower than downloading in series.}) with one command like the one below for the WFC3 filters.
Put this command instead of the three @code{wget} commands above.
Recall that all the extra spaces, back-slashes (@code{\}), and new lines are only for readability; they can be dropped if you type the whole command on one line of the terminal.

@example
$ for f in f105w f125w f140w f160w; do \
    wget $xdfurl/hlsp_xdf_hst_wfc3ir-60mas_hudf_"$f"_v1_sci.fits; \
  done
@end example


@node Dataset inspection and cropping, Angular coverage on the sky, Setup and data download, General program usage tutorial
@subsection Dataset inspection and cropping

First, let's visually inspect the datasets we downloaded in @ref{Setup and data download}.
Let's take F160W image as an example.
Do the steps below with the other image(s) too (and later with any dataset that you want to work on).
It is very important to get a good visual feeling of the dataset you intend to use.
Also, note how SAO DS9 (used here for visual inspection of FITS images) doesn't follow the GNU style of options where ``long'' and ``short'' options are preceded by @option{--} and @option{-} respectively (for example @option{--width} and @option{-w}, see @ref{Options}).

Run the command below to see the F160W image with DS9.
DS9's @option{-zscale} scaling is good for visually highlighting the low surface brightness regions, and as the name suggests, @option{-zoom to fit} will fit the whole dataset in the window.
If the window is too small, expand it with your mouse, then press the ``zoom'' button on the top row of buttons above the image.
Afterwards, in the bottom row of buttons, press ``zoom fit''.
You can also zoom in and out by scrolling your mouse or the respective operation on your touch-pad when your cursor/pointer is over the image.

@example
$ ds9 download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits     \
      -zscale -zoom to fit
@end example

As you hover your mouse over the image, notice how the ``Value'' and positional fields on the top of the ds9 window get updated.
The first thing you might notice is that when you hover the mouse over the regions with no data, they have a value of zero.
The next thing might be that the dataset actually has two ``depth''s (see @ref{Quantifying measurement limits}).
Recall that this is a combined/reduced image of many exposures, and the parts that have more exposures are deeper.
In particular, the exposure time of the deep inner region is more than 4 times that of the shallower outer parts.

To simplify the analysis in this tutorial, we'll only be working on the deep field, so let's crop it out of the full dataset.
Fortunately the XDF survey web page (above) contains the vertices of the deep flat WFC3-IR field.
With Gnuastro's Crop program@footnote{To learn more about the crop program see @ref{Crop}.}, you can use those vertices to cutout this deep region from the larger image.
But before that, to keep things organized, let's make a directory called @file{flat-ir} and keep the flat (single-depth) regions in that directory (with an `@file{xdf-}' prefix for shorter and easier filenames).

@example
$ mkdir flat-ir
$ astcrop --mode=wcs -h0 --output=flat-ir/xdf-f105w.fits \
          --polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
                     53.134517,-27.787144 : 53.161906,-27.807208" \
          download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f105w_v1_sci.fits

$ astcrop --mode=wcs -h0 --output=flat-ir/xdf-f125w.fits \
          --polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
                     53.134517,-27.787144 : 53.161906,-27.807208" \
          download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f125w_v1_sci.fits

$ astcrop --mode=wcs -h0 --output=flat-ir/xdf-f160w.fits \
          --polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
                     53.134517,-27.787144 : 53.161906,-27.807208" \
          download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@end example

The only thing varying in the three calls to Gnuastro's Crop program is the filter name!
Note how everything else is the same.
In such cases, you should generally avoid repeating a command manually: it is prone to many bugs and very hard to read (are you sure a @code{7} wasn't mistyped as an @code{8} in one of the repeats?).
To simplify the command, and later allow work on more filters, we can use the shell's @code{for} loop as shown below.
Notice how the places where the filter names (@file{f105w}, @file{f125w} and @file{f160w}) were used above have been replaced with @file{$f} (the shell variable that @code{for} will update in every loop) below.

@example
$ rm flat-ir/*.fits
$ for f in f105w f125w f160w; do \
    astcrop --mode=wcs -h0 --output=flat-ir/xdf-$f.fits \
            --polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
                       53.134517,-27.787144 : 53.161906,-27.807208" \
            download/hlsp_xdf_hst_wfc3ir-60mas_hudf_"$f"_v1_sci.fits; \
  done
@end example

Please open these images and inspect them with the same @command{ds9} command you used above.
You will see how it is nicely flat now and doesn't have varying depths.
Another important result of this crop is that regions with no data now have a NaN (Not-a-Number, or a blank value) value.
In the downloaded files, such regions had a value of zero.
However, zero is a number, and is thus meaningful, especially when you later want to run NoiseChisel@footnote{As you will see below, unlike most other detection algorithms, NoiseChisel detects objects from their faintest parts; it doesn't start with their high signal-to-noise ratio peaks.
Since the Sky is already subtracted in many images and noise fluctuates around zero, zero is commonly higher than the initial threshold applied.
Therefore, not ignoring zero-valued pixels in this image would cause them to become part of the detections!}.
Generally, when you want to ignore some pixels in a dataset, and avoid higher-level ambiguities or complications, it is always best to give them blank values (not zero, or some other absurdly large or small number).
Gnuastro has the Arithmetic program for such cases, and we'll introduce it later in this tutorial.
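As a quick preview, a command of the following shape (just a sketch for now, using a hypothetical @file{image.fits}) would use Arithmetic's @code{where} operator to set all zero-valued pixels of an image to NaN:

@example
$ astarithmetic image.fits image.fits 0 eq nan where \
                --output=image-masked.fits
@end example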

@node Angular coverage on the sky, Cosmological coverage, Dataset inspection and cropping, General program usage tutorial
@subsection Angular coverage on the sky

@cindex @code{CDELT}
@cindex Coordinate scales
@cindex Scales, coordinate
This is the deepest image we currently have of the sky.
The first thing that comes to mind may be this: ``How large is this field on the sky?''.
You can get a fast and crude answer with Gnuastro's Fits program using this command:

@example
astfits flat-ir/xdf-f160w.fits --skycoverage
@end example

It will print the sky coverage in two formats (all numbers are in units of degrees for this image): 1) the image's central RA and Dec and full width around that center, 2) the range of RA and Dec covered by this image.
You can use these values in various online query systems.
You can also use this option to automatically calculate the area covered by this image.
With the @option{--quiet} option, the printed output of @option{--skycoverage} will not contain human-readable text, making it easier for further processing:

@example
astfits flat-ir/xdf-f160w.fits --skycoverage --quiet
@end example

The second row is the coverage range along RA and Dec (compare with the outputs before using @option{--quiet}).
We can thus simply subtract the first column from the second, and multiply that with the difference of the fourth and third columns, to calculate the image area.
We'll also multiply each difference by 60 to have the area in squared arc-minutes.

@example
astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
        | awk 'NR==2@{print ($2-$1)*60*($4-$3)*60@}'
@end example

The returned value is @mymath{9.06711} arcmin@mymath{^2}.
@strong{However, this method ignores the fact that many of the image pixels are blank!}
In other words, the image does cover this area, but there is no data in more than half of the pixels.
So let's calculate the area coverage over-which we actually have data.

The FITS world coordinate system (WCS) meta data standard contains the key to answering this question.
Run the following command to see all the FITS keywords (metadata) of one of the images (almost identical to the other images, because they are all scaled to the same region of sky):

@example
astfits flat-ir/xdf-f160w.fits -h1
@end example

Look into the keywords grouped under the `@code{World Coordinate System (WCS)}' title.
These keywords define how the image relates to the outside world.
In particular, the @code{CDELT*} keywords (@code{CDELT1} and @code{CDELT2} in this 2D image) contain the ``Coordinate DELTa'' (or change in coordinate units) corresponding to a change of one pixel.
But what are the units of each ``world'' coordinate?
The @code{CUNIT*} keywords (for ``Coordinate UNIT'') have the answer.
In this case, both @code{CUNIT1} and @code{CUNIT2} have a value of @code{deg}, so both ``world'' coordinates are in units of degrees.
We can thus conclude that the value of @code{CDELT*} is in units of degrees-per-pixel@footnote{With the FITS @code{CDELT} convention, rotation (@code{PC} or @code{CD} keywords) and scales (@code{CDELT}) are separated.
In the FITS standard the @code{CDELT} keywords are optional.
When @code{CDELT} keywords aren't present, the @code{PC} matrix is assumed to contain @emph{both} the coordinate rotation and scales.
Note that not all FITS writers use the @code{CDELT} convention.
So you might not find the @code{CDELT} keywords in the WCS meta data of some FITS files.
However, all Gnuastro programs (which use the default FITS keyword writing format of WCSLIB) write their output WCS with the @code{CDELT} convention, even if the input doesn't have it.
If your dataset doesn't use the @code{CDELT} convention, you can feed it to any (simple) Gnuastro program (for example Arithmetic) and the output will have the @code{CDELT} keyword.
See Section 8 of the @url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS standard} for more}.

With the commands below, we'll use @code{CDELT} (along with the image size) to find the answer of our initial question: ``how much of the sky does this image cover?''.
The lines starting with @code{##} are just comments for you to read and understand each command.
Don't type them on the terminal.
The commands are intentionally repetitive in some places to better understand each step and also to demonstrate the beauty of command-line features like history, variables, pipes and loops (which you will commonly use as you master the command-line).

@cartouche
@noindent
@strong{Use shell history:} Don't forget to make effective use of your shell's history: you don't have to re-type previous commands to add something to them.
This is especially convenient when you just want to make a small change to your previous command.
Press the ``up'' key on your keyboard (possibly multiple times) to see your previous command(s) and modify them accordingly.
@end cartouche

@example
## If your system language uses ',' (not '.') as decimal separator.
$ export LANG=C

## See the general statistics of non-blank pixel values.
$ aststatistics flat-ir/xdf-f160w.fits

## We only want the number of non-blank pixels.
$ aststatistics flat-ir/xdf-f160w.fits --number

## Keep the result of the command above in the shell variable `n'.
$ n=$(aststatistics flat-ir/xdf-f160w.fits --number)

## See what is stored in the shell variable `n'.
$ echo $n

## Show all the FITS keywords of this image.
$ astfits flat-ir/xdf-f160w.fits -h1

## The resolution (in degrees/pixel) is in the `CDELT' keywords.
## Only show lines that contain these characters, by feeding
## the output of the previous command to the `grep' program.
$ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT

## Since the resolution of both dimensions is (approximately) equal,
## we'll only use one of them (CDELT1).
$ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT1

## To extract the value (third token in the line above), we'll
## feed the output to AWK. Note that the first two tokens are
## `CDELT1' and `='.
$ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT1 | awk '@{print $3@}'

## Save it as the shell variable `r'.
$ r=$(astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT1   \
              | awk '@{print $3@}')

## Print the values of `n' and `r'.
$ echo $n $r

## Use the number of pixels (first number passed to AWK) and
## length of each pixel's edge (second number passed to AWK)
## to estimate the area of the field in arc-minutes squared.
$ echo $n $r | awk '@{print $1 * ($2*60)^2@}'
@end example

The output of the last command (area of this field) is 4.03817 (or approximately 4.04) arc-minutes squared.
Just for comparison, this is roughly 175 times smaller than the Moon's angular area (with a diameter of 30 arc-minutes, or half a degree).
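You can check that comparison with a one-line AWK command (the Moon's angular area is @mymath{\pi r^2}, with @mymath{r=15} arc-minutes):

@example
$ echo 4.04 | awk '@{pi=atan2(0,-1); print pi*15^2/$1@}'
@end example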

Some FITS writers don't use the @code{CDELT} convention, making it hard to use the steps above.
In such cases, you can extract the pixel scale with the @option{--pixelscale} option of Gnuastro's Fits program like the command below.
Like the @option{--skycoverage} option above, you can also use the @option{--quiet} option to allow easy usage of the values in scripts.

@example
$ astfits flat-ir/xdf-f160w.fits --pixelscale
@end example

@cindex GNU AWK
@cartouche
@noindent
@strong{AWK for table/value processing:} As you saw above, AWK is a powerful and simple tool for text processing.
You will see it often in shell scripts.
GNU AWK (the most common implementation) comes with a free and wonderful @url{https://www.gnu.org/software/gawk/manual/, book} in the same format as this book which will allow you to master it nicely.
Just like this manual, you can also access GNU AWK's manual on the command-line whenever necessary without taking your hands off the keyboard.
Just run @code{info awk}.
@end cartouche

@cartouche
@noindent
@cindex Locale
@cindex @code{LANG}
@cindex @code{LC_ALL}
@cindex Decimal separator
@cindex Language of command-line
@strong{Your locale doesn't use `.' as decimal separator:} the input/output of some core operating system tools like @command{awk} or @command{seq} depend on the @url{https://en.wikipedia.org/wiki/Locale_(computer_software), system locale}.
For example, in Spanish and some other languages the decimal separator (the symbol used to separate the integer and fractional parts of a number) is a comma.
Therefore in systems that have Spanish as their default locale, @command{seq} will print half of unity as `@code{0,5}' (instead of `@code{0.5}').
This can cause problems for parsing the printed numbers in other programs.
You can check your current locale with the @code{locale} command.
You can test your default decimal separator with this command:

@example
seq 0.5 1
@end example

To avoid these kinds of locale-specific problems (for example another program not being able to read `@code{0,5}' as half of unity), you can change the locale by setting the @code{LANG} environment variable (or the more generic @code{LC_ALL}).
You can do it only for a single command (the first one below), or all commands within the running session (the second command below):

@example
## Change the locale to the standard, only for this 'seq' command.
$ LANG=C seq 0.5 1

## Change the locale to the standard, for all commands after it.
$ export LANG=C
@end example

If you want to change it generally for all future sessions, you can put the second command in your shell's startup file.
For more on startup files, please see @ref{Installation directory}.
@end cartouche


@node Cosmological coverage, Building custom programs with the library, Angular coverage on the sky, General program usage tutorial
@subsection Cosmological coverage
Having found the angular coverage of the dataset in @ref{Angular coverage on the sky}, we can now use Gnuastro to answer a more physically motivated question: ``How large is this area at different redshifts?''.
To get a feeling of the tangential area that this field covers at redshift 2, you can use Gnuastro's CosmicCalculator program (@ref{CosmicCalculator}).
In particular, you need the tangential distance covered by 1 arc-second as raw output.
Combined with the field's area that was measured before, we can calculate the tangential area in squared Mega Parsecs (@mymath{Mpc^2}).

@example
## If your system language uses ',' (not '.') as decimal separator.
$ export LANG=C

## Print general cosmological properties at redshift 2 (for example).
$ astcosmiccal -z2

## When given a "Specific calculation" option, CosmicCalculator
## will just print that particular calculation. To see all such
## calculations, add a `--help' token to the previous command
## (under the same title). Note that with `--help', no processing
## is done, so you can always simply append it to remember
## something without modifying the command you want to run.
$ astcosmiccal -z2 --help

## Only print the "Tangential dist. covered by 1arcsec at z (kpc)"
## calculation, in units of kpc/arc-second.
$ astcosmiccal -z2 --arcsectandist

## But it's easier to use the short version of this option (which
## can be appended to other short options).
$ astcosmiccal -sz2

## Convert this distance to kpc^2/arcmin^2 and save in `k'.
$ k=$(astcosmiccal -sz2 | awk '@{print ($1*60)^2@}')

## Re-calculate the area of the dataset in arcmin^2.
$ n=$(aststatistics flat-ir/xdf-f160w.fits --number)
$ r=$(astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT1   \
              | awk '@{print $3@}')
$ a=$(echo $n $r | awk '@{print $1 * ($2^2) * 3600@}')

## Multiply `k' and `a' and divide by 10^6 for value in Mpc^2.
$ echo $k $a | awk '@{print $1 * $2 / 1e6@}'
@end example

@noindent
At redshift 2, this field therefore covers approximately 1.07 @mymath{Mpc^2}.
If you would like to see how this tangential area changes with redshift, you can use a shell loop like below.

@example
$ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do        \
    k=$(astcosmiccal -sz$z);                                  \
    echo $z $k $a | awk '@{print $1, ($2*60)^2 * $3 / 1e6@}';   \
  done
@end example

@noindent
Fortunately, there is a useful tool/program for printing a sequence of numbers, aptly named @code{seq}.
You can use it instead of typing all the different redshifts in this example.
For example the loop below will calculate and print the tangential coverage of this field across a larger range of redshifts (0.1 to 5) and with finer increments of 0.1.

@example
## If your system language uses ',' (not '.') as decimal separator.
$ export LANG=C

## The loop over the redshifts
$ for z in $(seq 0.1 0.1 5); do                                  \
    k=$(astcosmiccal -z$z --arcsectandist);                      \
    echo $z $k $a | awk '@{print $1, ($2*60)^2 * $3 / 1e6@}';   \
  done
@end example


@node Building custom programs with the library, Option management and configuration files, Cosmological coverage, General program usage tutorial
@subsection Building custom programs with the library
In @ref{Cosmological coverage}, we repeated a certain calculation/output of a program multiple times using the shell's @code{for} loop.
This simple way of repeating a calculation is great when it is only needed occasionally.
However, if you commonly need this calculation, possibly for a larger number of redshifts and at a higher precision, the command above can be slow (try it out to see).

This slowness of the repeated calls to a generic program (like CosmicCalculator), is because it can have a lot of overhead on each call.
To be generic and easy to operate, it has to parse the command-line and all configuration files (see @ref{Option management and configuration files}) which contain human-readable characters and need a lot of pre-processing to be ready for processing by the computer.
Afterwards, CosmicCalculator has to check the sanity of its inputs and check which of its many options you have asked for.
All this pre-processing takes as much time as the high-level calculation you are requesting, and it has to be re-done for every redshift in your loop.

To greatly speed up the processing, you can directly access the core work-horse of CosmicCalculator without all that overhead by designing your custom program for this job.
Using Gnuastro's library, you can write your own tiny program particularly designed for this exact calculation (and nothing else!).
To do that, copy and paste the following C program in a file called @file{myprogram.c}.

@example
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/cosmology.h>

int
main(void)
@{
  double area=4.03817;          /* Area of field (arcmin^2). */
  double z, adist, tandist;     /* Temporary variables.      */

  /* Constants from Planck 2018 (arXiv:1807.06209, Table 2). */
  double H0=67.66, olambda=0.6889, omatter=0.3111, oradiation=0;

  /* Do the same thing for all redshifts (z) between 0.1 and 5. */
  for(z=0.1; z<5; z+=0.1)
    @{
      /* Calculate the angular diameter distance. */
      adist=gal_cosmology_angular_distance(z, H0, olambda,
                                           omatter, oradiation);

      /* Calculate the tangential distance of one arcsecond. */
      tandist = adist * 1000 * M_PI / 3600 / 180;

      /* Print the redshift and area. */
      printf("%-5.2f %g\n", z, pow(tandist * 60,2) * area / 1e6);
    @}

  /* Tell the system that everything finished successfully. */
  return EXIT_SUCCESS;
@}
@end example

@noindent
Then run the following command to compile your program and run it.

@example
$ astbuildprog myprogram.c
@end example

@noindent
In the command above, you used Gnuastro's BuildProgram program.
Its job is to greatly simplify the compilation, linking and running of simple C programs that use Gnuastro's library (like this one).
BuildProgram is designed to manage Gnuastro's dependencies, compile and link your custom program and then run it.

Did you notice how your custom program was much faster than the repeated calls to CosmicCalculator in the previous section? You might also have noticed that a new file called @file{myprogram} was created in the directory.
This is the compiled program that was created and run by the command above (it's in binary machine code format, no longer human-readable).
You can run it again to get the same results with a command like this:

@example
$ ./myprogram
@end example
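One way to see the speed difference for yourself is the shell's standard @command{time} keyword (the exact numbers will of course depend on your system):

@example
## Time the compiled program (all the redshifts at once).
$ time ./myprogram > /dev/null

## For comparison, time a single call to the generic program.
$ time astcosmiccal -sz2 > /dev/null
@end example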

The efficiency of your custom @file{myprogram} compared to repeated calls to CosmicCalculator is because in the latter, the requested processing is comparable to the necessary overheads.
For other programs that take large input datasets and do complicated processing on them, the overhead is usually negligible compared to the processing.
In such cases, the libraries are only useful if you want a different/new processing compared to the functionalities in Gnuastro's existing programs.

Gnuastro has a large library which is used extensively by all the programs.
In other words, the library is like the skeleton of Gnuastro.
For the full list of available functions classified by context, please see @ref{Gnuastro library}.
Gnuastro's library and BuildProgram are created to make it easy for you to use these powerful features as you like.
This gives you a high level of creativity, while also providing efficiency and robustness.
Several other complete working examples (involving images and tables) of Gnuastro's libraries can be seen in @ref{Library demo programs}.

But for this tutorial, let's stop discussing the libraries at this point and get back to Gnuastro's already-built programs, which don't require any programming.
But before continuing, let's clean up the files we don't need any more:

@example
$ rm myprogram*
@end example


@node Option management and configuration files, Warping to a new pixel grid, Building custom programs with the library, General program usage tutorial
@subsection Option management and configuration files
None of Gnuastro's programs keep a default value internally within their code.
However, when you ran CosmicCalculator only with the @option{-z2} option (not specifying the cosmological parameters) in @ref{Cosmological coverage}, it completed its processing and printed results.
Where did the cosmological parameters (like the matter density and so on) that are necessary for its calculations come from? Fast reply: the values come from a configuration file (see @ref{Configuration file precedence}).

CosmicCalculator is a small program with a limited set of parameters/options.
Therefore, let's use it to discuss configuration files in Gnuastro (for more, you can always see @ref{Configuration files}).
Configuration files are an important part of all Gnuastro's programs, especially the ones with a large number of options, so it's important to understand this part well.

Once you get comfortable with configuration files here, you can make good use of them in all Gnuastro programs (for example, NoiseChisel).
For example, to do optimal detection on various datasets, you can have configuration files for different noise properties.
The configuration of each program (besides its version) is vital for the reproducibility of your results, so it is important to manage them properly.

As we saw above, the full list of the options in all Gnuastro programs can be seen with the @option{--help} option.
Try calling it with CosmicCalculator as shown below.
Note how options are grouped by context to make it easier to find your desired option.
However, in each group, options are ordered alphabetically.

@example
$ astcosmiccal --help
@end example

@noindent
The options that need a value have an @key{=} sign after their long version and @code{FLT}, @code{INT} or @code{STR} for floating point numbers, integer numbers, and strings (filenames for example) respectively.
All options have a long format and some have a short format (a single character), for more see @ref{Options}.

When you are using a program, it is often necessary to check an option's value just before the program starts its processing (in other words, after it has parsed the command-line options and all configuration files).
You can see the values of all options that need one with the @option{--printparams} or @code{-P} option.
@option{--printparams} is common to all programs (see @ref{Common options}).
In the command below, try replacing @code{-P} with @option{--printparams} to see how both do the same operation.

@example
$ astcosmiccal -P
@end example

Let's say you want a different Hubble constant.
Try running the following command (just adding @option{--H0=70} after the command above) to see how the Hubble constant in the output of the command above has changed.

@example
$ astcosmiccal -P --H0=70
@end example

@noindent
Afterwards, delete the @option{-P} and add a @option{-z2} to see the
calculations with the new cosmology (or configuration).

@example
$ astcosmiccal --H0=70 -z2
@end example

From the output of the @code{--help} option, note how the option for Hubble constant has both short (@code{-H}) and long (@code{--H0}) formats.
One final note is that the equal (@key{=}) sign is not mandatory.
In the short format, the value can stick to the actual option (the short option name is just one character after all, thus easily identifiable) and in the long format, a white-space character is also enough.

@example
$ astcosmiccal -H70    -z2
$ astcosmiccal --H0 70 -z2 --arcsectandist
@end example

@noindent
When an option doesn't need a value, and has a short format (like @option{--arcsectandist}), you can easily append it @emph{before} other short options.
So the last command above can also be written as:

@example
$ astcosmiccal --H0 70 -sz2
@end example

Let's assume that in one project, you want to only use rounded cosmological parameters (H0 of 70km/s/Mpc and matter density of 0.3).
You should therefore run CosmicCalculator like this:

@example
$ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 -z2
@end example

But having to type these extra options every time you run CosmicCalculator will be prone to errors (typos in particular), frustrating and slow.
Therefore in Gnuastro, you can put all the options and their values in a ``Configuration file'' and tell the programs to read the option values from there.

Let's create a configuration file.
With your favorite text editor, make a file named @file{my-cosmology.conf} (or @file{my-cosmology.txt}; the suffix doesn't matter, but a more descriptive suffix like @file{.conf} is recommended).
Then put the following lines inside of it.
One space between the option name and its value is enough (the values below are simply aligned vertically for easier reading).
Also note that you can only use long option names in configuration files.

@example
H0       70
olambda  0.7
omatter  0.3
@end example

@noindent
You can now tell CosmicCalculator to read this file for option values immediately using the @option{--config} option as shown below.
Do you see how the output of the following command corresponds to the option values in @file{my-cosmology.conf}, and is therefore identical to the previous command?

@example
$ astcosmiccal --config=my-cosmology.conf -z2
@end example

But still, having to type @option{--config=my-cosmology.conf} every time is annoying, isn't it?
If you need this cosmology every time you are working in a specific directory, you can use Gnuastro's default configuration file names and avoid having to type it manually.

The default configuration files (that are checked if they exist) must be placed in the hidden @file{.gnuastro} sub-directory (in the same directory you are running the program).
Their file name (within @file{.gnuastro}) must also be the same as the program's executable name.
So in the case of CosmicCalculator, the default configuration file in a given directory is @file{.gnuastro/astcosmiccal.conf}.

Let's do this.
We'll first make a directory for our custom cosmology, then build a @file{.gnuastro} within it.
Finally, we'll copy the custom configuration file there:

@example
$ mkdir my-cosmology
$ mkdir my-cosmology/.gnuastro
$ mv my-cosmology.conf my-cosmology/.gnuastro/astcosmiccal.conf
@end example

Once you run CosmicCalculator within @file{my-cosmology} (as shown below), you will see how your custom cosmology has been implemented without having to type anything extra on the command-line.

@example
$ cd my-cosmology
$ astcosmiccal -P
$ cd ..
@end example

To further simplify the process, you can use the @option{--setdirconf} option.
If you are already in your desired working directory, calling this option with the others will automatically write the final values (along with descriptions) in @file{.gnuastro/astcosmiccal.conf}.
For example try the commands below:

@example
$ mkdir my-cosmology2
$ cd my-cosmology2
$ astcosmiccal -P
$ astcosmiccal --H0 70 --olambda=0.7 --omatter=0.3 --setdirconf
$ astcosmiccal -P
$ cd ..
@end example

Gnuastro's programs also have default configuration files for a specific user (when run in any directory).
This allows you to set a special behavior every time a program is run by a specific user.
Only the directory and filename differ from the above, the rest of the process is similar to before.
Finally, there are also system-wide configuration files that can be used to define the option values for all users on a system.
See @ref{Configuration file precedence} for a more detailed discussion.
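For example, the user-level counterpart of @option{--setdirconf} is @option{--setusrconf}; as a sketch, setting this cosmology for your user (effective in any directory) would look like this:

@example
$ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 --setusrconf
@end example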

We'll stop the discussion on configuration files here, but you can always read about them in @ref{Configuration files}.
Before continuing the tutorial, let's delete the two extra directories that we don't need any more:

@example
$ rm -rf my-cosmology*
@end example


@node Warping to a new pixel grid, NoiseChisel and Multiextension FITS files, Option management and configuration files, General program usage tutorial
@subsection Warping to a new pixel grid
We are now ready to start processing the downloaded images.
The XDF datasets we are using here are already aligned to the same pixel grid.
However, warping to a different/matched pixel grid is commonly needed before higher-level analysis when you are using datasets from different instruments.
So let's have a look at Gnuastro's warping features here.

Gnuastro's Warp program should be used for warping the pixel-grid (see @ref{Warp}).
For example, try rotating one of the images by 20 degrees:

@example
$ astwarp flat-ir/xdf-f160w.fits --rotate=20
@end example

@noindent
Open the output (@file{xdf-f160w_rotated.fits}) and see how it is rotated.
If your final image is already aligned with RA and Dec, you can simply use the @option{--align} option and let Warp calculate the necessary rotation and apply it.
For example, try aligning the rotated image back to the standard orientation (just note that because of the two rotations, the NaN parts of the image are larger now):

@example
$ astwarp xdf-f160w_rotated.fits --align
@end example

Warp can generally be used for many kinds of pixel grid manipulation (warping), not just rotations.
For example the outputs of the commands below will respectively have larger pixels (the new resolution being one quarter of the original), be shifted by 2.8 pixels (a sub-pixel translation), be sheared by 0.2, and be tilted (projected).
Run each of them and open the output file to see the effect, they will become handy for you in the future.

@example
$ astwarp flat-ir/xdf-f160w.fits --scale=0.25
$ astwarp flat-ir/xdf-f160w.fits --translate=2.8
$ astwarp flat-ir/xdf-f160w.fits --shear=0.2
$ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005
@end example

@noindent
If you need to do multiple warps, you can combine them in one call to Warp.
For example to first rotate the image, then scale it, run this command:

@example
$ astwarp flat-ir/xdf-f160w.fits --rotate=20 --scale=0.25
@end example

If you have multiple warps, do them all in one command.
Don't warp them in separate commands because the correlated noise will become too strong.
As you see in the matrix that is printed when you run Warp, it merges all the warps into a single warping matrix (see @ref{Merging multiple warpings}) and simply applies that (mixes the pixel values) just once.
However, if you run Warp multiple times, the pixels will be mixed multiple times, creating a strong artificial blur/smoothing, or stronger correlated noise.

Recall that the merging of multiple warps is done through matrix multiplication, therefore order matters in the separate operations.
At a lower level, through Warp's @option{--matrix} option, you can directly request your desired final warp and don't have to break it up into different warps like above (see @ref{Invoking astwarp}).
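For example (a minimal sketch; see @ref{Invoking astwarp} for the exact syntax), a diagonal matrix reproducing the @option{--scale=0.25} call above would be:

@example
$ astwarp flat-ir/xdf-f160w.fits --matrix=0.25,0,0,0.25
@end example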

Fortunately these datasets are already aligned to the same pixel grid, so you don't actually need the files that were just generated; you can safely delete them all with the following command.
Here, you see why we put the processed outputs that we need later into a separate directory.
In this way, the top directory can be used for temporary files for testing that you can simply delete with a generic command like below.

@example
$ rm *.fits
@end example


@node NoiseChisel and Multiextension FITS files, NoiseChisel optimization for detection, Warping to a new pixel grid, General program usage tutorial
@subsection NoiseChisel and Multiextension FITS files
Having completed a review of the basics in the previous sections, we are now ready to separate the signal (galaxies or stars) from the background noise in the image.
We will be using the results of @ref{Dataset inspection and cropping}, so be sure you already have them.
Gnuastro has NoiseChisel for this job.
But NoiseChisel's output is a multi-extension FITS file, therefore to better understand how to use NoiseChisel, let's take a look at multi-extension FITS files and how you can interact with them.

In the FITS format, each extension contains a separate dataset (image in this case).
You can get basic information about the extensions in a FITS file with Gnuastro's Fits program (see @ref{Fits}).
To start with, let's run NoiseChisel without any options, then use Gnuastro's FITS program to inspect the number of extensions in this file.

@example
$ astnoisechisel flat-ir/xdf-f160w.fits
$ astfits xdf-f160w_detected.fits
@end example

From the output list, we see that NoiseChisel's output contains 5 extensions and that the first (counting from zero, with the name @code{NOISECHISEL-CONFIG}) is empty: it has a value of @code{0} in the last column (which shows its size).
The first extension in all the outputs of Gnuastro's programs only contains meta-data: data about/describing the datasets within (all) the output's extensions.
This is recommended by the FITS standard, see @ref{Fits} for more.
In the case of Gnuastro's programs, this generic zero-th/meta-data extension (for the whole file) contains all the configuration options of the program that created the file.

The second extension of NoiseChisel's output (numbered 1, named @code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
The third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary image with only two possible values for all pixels: 0 for noise and 1 for signal.
Since it only has two values, to avoid taking too much space on your computer, its numeric datatype is an unsigned 8-bit integer (or @code{uint8})@footnote{To learn more about numeric data types see @ref{Numeric data types}.}.
The fourth and fifth (@code{SKY} and @code{SKY_STD}) extensions, have the Sky and its standard deviation values for the input on a tile grid and were calculated over the undetected regions (for more on the importance of the Sky value, see @ref{Sky value}).

Metadata regarding how the analysis was done (or a dataset was created) is very important for higher-level analysis and reproducibility.
Therefore, let's first take a closer look at the @code{NOISECHISEL-CONFIG} extension.
If you give Gnuastro's Fits program a specific HDU/extension, it will print the header keywords (metadata) of that extension.
You can specify the HDU/extension either by its counter (starting from 0), or by its name.
Therefore, the two commands below are identical for this file:

@example
$ astfits xdf-f160w_detected.fits -h0
$ astfits xdf-f160w_detected.fits -hNOISECHISEL-CONFIG
@end example

The first group of FITS header keywords are standard keywords (from @code{SIMPLE} and @code{BITPIX} up to the first empty line).
They are required by the FITS standard and must be present in any FITS extension.
The second group contains the input file and all the options with their values in that run of NoiseChisel.
Finally, the last group contains the date and version information of Gnuastro and its dependencies.
The ``versions and date'' group of keywords are present in all Gnuastro's FITS extension outputs, for more see @ref{Output FITS files}.

Note that if a keyword name is larger than 8 characters, it is preceded by a @code{HIERARCH} keyword and that all keyword names are in capital letters.
Therefore, if you want to see only one keyword's value by feeding the output to Grep, you should ask Grep to ignore case with its @option{-i} option (short name for @option{--ignore-case}).
For example, below we'll check the value given to the @option{--snminarea} option; note how we don't need Grep's @option{-i} option when its input comes from @command{astnoisechisel -P}, since the option name is already in lower-case there.
The extra white spaces in the first command are only to help in readability; you can ignore them when typing.

@example
$ astnoisechisel -P                   | grep    snminarea
$ astfits xdf-f160w_detected.fits -h0 | grep -i snminarea
@end example

@noindent
The metadata (that is stored in the output) can later be used to exactly reproduce/understand your result, even if you have lost/forgot the command you used to create the file.
This feature is present in all of Gnuastro's programs, not just NoiseChisel.

@cindex DS9
@cindex GNOME
@cindex SAO DS9
Let's continue with the extensions in NoiseChisel's output that contain a dataset by visually inspecting them (here, we'll use SAO DS9).
Since the file contains multiple related extensions, the easiest way to view all of them in DS9 is to open the file as a ``Multi-extension data cube'' with the @option{-mecube} option as shown below@footnote{You can configure your graphic user interface to open DS9 in multi-extension cube mode by default when using the GUI (double clicking on the file).
If your graphic user interface is GNOME (another GNU software, it is most common in GNU/Linux operating systems), a full description is given in @ref{Viewing multiextension FITS images}}.

@example
$ ds9 -mecube xdf-f160w_detected.fits -zscale -zoom to fit
@end example

A ``cube'' window opens along with DS9's main window.
The buttons and horizontal scroll bar in this small new window can be used to navigate between the extensions.
In this mode, all DS9's settings (for example zoom or color-bar) will be identical between the extensions.
Try zooming in to one part and flipping through the extensions to see how the galaxies were detected, along with the Sky and Sky standard deviation values for that region.
Just bear in mind that NoiseChisel's job is @emph{only} detection (separating signal from noise); we'll do segmentation on this result later to find the individual galaxies/peaks over the detected pixels.

Each HDU/extension in a FITS file is an independent dataset (image or table) which you can delete from the FITS file, or copy/cut to another file.
For example, with the command below, you can copy NoiseChisel's @code{DETECTIONS} HDU/extension to another file:

@example
$ astfits xdf-f160w_detected.fits --copy=DETECTIONS -odetections.fits
@end example

There are also similar options to conveniently cut (@option{--cut}: copy, then remove from the input) or delete (@option{--remove}) HDUs from a FITS file.
See @ref{HDU information and manipulation} for more.
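For example, since @file{detections.fits} above is just a disposable copy, you can safely try @option{--remove} on it (note that it modifies the file in place):

@example
$ astfits detections.fits --remove=DETECTIONS
@end example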



@node NoiseChisel optimization for detection, NoiseChisel optimization for storage, NoiseChisel and Multiextension FITS files, General program usage tutorial
@subsection NoiseChisel optimization for detection
In @ref{NoiseChisel and Multiextension FITS files}, we ran NoiseChisel and reviewed NoiseChisel's output format.
Now that you have a better feeling for multi-extension FITS files, let's optimize NoiseChisel for this particular dataset.

One good way to see if you have missed any signal (small galaxies, or the wings of brighter galaxies) is to mask all the detected pixels and inspect the noise pixels.
For this, you can use Gnuastro's Arithmetic program (in particular its @code{where} operator, see @ref{Arithmetic operators}).
The command below will produce @file{mask-det.fits}.
In it, all the pixels in the @code{INPUT-NO-SKY} extension that are flagged 1 in the @code{DETECTIONS} extension (dominated by signal, not noise) will be set to NaN.

Since the various extensions are in the same file, for each dataset we need the file and extension name.
To make the command easier to read/write/understand, let's use shell variables: `@code{in}' will be used for the Sky-subtracted input image and `@code{det}' will be used for the detection map.
Recall that a shell variable's value can be retrieved by adding a @code{$} before its name; also note that the double quotation marks are necessary here because the variables' values contain white-space characters.

@example
$ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
$ det="xdf-f160w_detected.fits -hDETECTIONS"
$ astarithmetic $in $det nan where --output=mask-det.fits
@end example

@noindent
To invert the result (only keep the detected pixels), you can flip the detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after the second @code{$det}:

@example
$ astarithmetic $in $det not nan where --output=mask-sky.fits
@end example

@cindex Correlated noise
@cindex Noise, correlated
Look again at the @code{DETECTIONS} extension, in particular the long worm-like structure around @footnote{To find a particular coordinate easily in DS9, you can do this: Click on the ``Edit'' menu, and select ``Region''.
Then click on any random part of the image to see a circle show up in that location (this is the ``region'').
Double-click on the region and a ``Circle'' window will open.
If you have celestial coordinates, keep the default ``fk5'' in the scroll-down menu after the ``Center''.
But if you have pixel/image coordinates, click on the ``fk5'' and select ``Image''.
Now you can set the ``Center'' coordinates of the region (@code{1650} and @code{1470} in this case) by manually typing them in the two boxes in front of ``Center''.
Finally, when everything is ready, click on the ``Apply'' button and your region will go over your requested coordinates.
You can zoom out (to see the whole image) and visually find it.} pixel 1650 (X) and 1470 (Y).
These types of long wiggly structures show that we have dug too deep into the noise, and are a signature of correlated noise.
Correlated noise is created when we warp (for example rotate) individual exposures (that are each slightly offset compared to each other) into the same pixel grid before adding them into one deeper image.
During the warping, nearby pixels are mixed and the effect of this mixing on the noise (which is in every pixel) is called ``correlated noise''.
Correlated noise is a form of convolution and it slightly smooths the image.

In terms of the number of exposures (and thus correlated noise), the XDF dataset is by no means an ordinary dataset: it is the result of warping and adding roughly 80 separate exposures, which can create strong correlated noise/smoothing.
In common surveys the number of exposures is usually 10 or less.
Therefore the default parameters need to be slightly customized for this dataset.
See Figure 2 of @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]} and the discussion on @option{--detgrowquant} there for more on how NoiseChisel ``grow''s the detected objects and the patterns caused by correlated noise.

Let's tweak NoiseChisel's configuration a little to get a better result on this dataset.
Don't forget that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}'' (Anscombe 1973, see @ref{Science and its tools}).
A good scientist must have a good understanding of her tools to make a meaningful analysis.
So don't hesitate to play with the default configuration and to review the manual when you have a new dataset (from a new instrument) in front of you.
Robust data analysis is an art, therefore a good scientist must first be a good artist.
Once you have found the good configuration for that particular noise pattern (instrument) you can safely use it for all new data that have a similar noise pattern.

NoiseChisel can produce ``Check images'' to help you visualize and inspect how each step is done.
You can see all the check images it can produce with this command.

@example
$ astnoisechisel --help | grep check
@end example

Let's check the overall detection process to get a better feeling of what NoiseChisel is doing, with the following command.
To learn NoiseChisel in more detail, please see @ref{NoiseChisel}, @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
@end example

The check images/tables are also multi-extension FITS files.
As you saw from the command above, when check datasets are requested, NoiseChisel won't go to the end.
It will abort as soon as all the extensions of the check image are ready.
Please list the extensions of the output with @command{astfits} and then open it with @command{ds9}, as we did above.
If you have read the paper, you will see why there are so many extensions in the check image.

@example
$ astfits xdf-f160w_detcheck.fits
$ ds9 -mecube xdf-f160w_detcheck.fits -zscale -zoom to fit
@end example

In order to understand the parameters and their biases (especially as you are starting to use Gnuastro, or running it on a new dataset), it is @emph{strongly} encouraged to play with the different parameters and use the respective check images to see which step is affected by your changes and how; for example, see @ref{Detecting large extended targets}.

@cindex FWHM
Let's focus on one step: the @code{OPENED_AND_LABELED} extension shows the initial detection step of NoiseChisel.
We see the seeds of that correlated noise structure with many small detections (a relatively early stage in the processing).
Such connections at the lowest surface brightness limits usually occur when the dataset is too smoothed, the threshold is too low, or the final ``growth'' is too much.

As you see from the 2nd (@code{CONVOLVED}) extension, the first operation that NoiseChisel does on the data is to slightly smooth it.
However, the natural correlated noise of this dataset is already one level of artificial smoothing, so further smoothing it with the default kernel may be the culprit.
To see the effect, let's use a sharper kernel as a first step to convolve/smooth the input.

By default NoiseChisel uses a Gaussian with full-width-half-maximum (FWHM) of 2 pixels.
We can use Gnuastro's MakeProfiles to build a kernel with a FWHM of 1.5 pixels (truncated at 5 times the FWHM, like the default) using the following command.
MakeProfiles is a powerful tool that can build any number of mock profiles on one image or independently; to learn more about its features and capabilities, see @ref{MakeProfiles}.

@example
$ astmkprof --kernel=gaussian,1.5,5 --oversample=1
@end example

@noindent
Please open the output @file{kernel.fits} and have a look (it is very small and sharp).
We can now tell NoiseChisel to use this instead of the default kernel with the following command (we'll keep @option{--checkdetection} to continue checking the detection steps):

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                 --checkdetection
@end example

Open the output @file{xdf-f160w_detcheck.fits} as a multi-extension FITS file and go to the last extension (@code{DETECTIONS-FINAL}; it has the same pixels as the final NoiseChisel output without @option{--checkdetection}).
Look again at the position mentioned above (1650,1470): the long wiggly structure is gone.
This shows we are making progress :-).

Looking at the new @code{OPENED_AND_LABELED} extension, we see that the thin connections between smaller peaks have now significantly decreased.
Going two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see that during the process of finding false pseudo-detections, too many holes have been filled: do you see how many of the brighter galaxies are connected? At this stage all holes are filled, irrespective of their size.

Now look two extensions ahead (in the first @code{PSEUDOS-FOR-SN}): you can see that there aren't many pseudo-detections, because of all those extended filled holes.
If you look closely, you can see the number of pseudo-detections in the printed outputs of NoiseChisel (around 6400).
This is another side-effect of correlated noise.
To address it, we should slightly increase the pseudo-detection threshold with @option{--dthresh} (before changing it, run NoiseChisel with @option{-P} to see its default value).
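For example, a quick way to filter the printed configuration for this option (the @option{-P} option prints the final values of all options and exits):

@example
$ astnoisechisel -P | grep dthresh
@end example

@noindent
Now, let's run NoiseChisel with a slightly larger @option{--dthresh}: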

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
                 --dthresh=0.1 --checkdetection
@end example

Before visually inspecting the check image, you can already see the effect of this small change in NoiseChisel's command-line output: notice how the number of pseudo-detections has increased to more than 7100!
Open the check image now and have a look: you can see how the pseudo-detections are distributed much more evenly in the blank sky regions of the @code{PSEUDOS-FOR-SN} extension.

@cartouche
@noindent
@strong{Maximize the number of pseudo-detections:} When using NoiseChisel on datasets with a new noise-pattern (for example going to a radio astronomy image, or a shallow ground-based image), play with @option{--dthresh} until you get a maximal number of pseudo-detections: the total number of pseudo-detections is printed on the command-line when you run NoiseChisel, so you don't even need to open a FITS viewer.

In this particular case, try @option{--dthresh=0.2} and you will see that the total printed number decreases to around 6700 (recall that with @option{--dthresh=0.1}, it was roughly 7100).
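That is, the same command as before with only this value changed:

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
                 --dthresh=0.2 --checkdetection
@end example

@noindent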
So for this type of very deep HST images, we should set @option{--dthresh=0.1}.
@end cartouche

As discussed in Section 3.1.5 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, the signal-to-noise ratio of pseudo-detections is critical to identifying/removing false detections.
Getting it right is very important for an optimal detection (where you want to successfully detect the faintest and smallest objects in the image).
Let's have a look at their signal-to-noise distribution with @option{--checksn}.

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                 --dthresh=0.1 --checkdetection --checksn
@end example

The output (@file{xdf-f160w_detsn.fits}) contains two extensions with two-column tables of pseudo-detection signal-to-noise ratios: one for pseudo-detections over the undetected regions (@code{SKY_PSEUDODET_SN}) and one for those over the detections (@code{DET_PSEUDODET_SN}).
With the first command below you can see the HDUs of this file, and with the second you can see the information of the table in the first HDU (which is the default when you don't use @option{--hdu}):

@example
$ astfits xdf-f160w_detsn.fits
$ asttable xdf-f160w_detsn.fits -i
@end example

@noindent
You can see the table columns with the first command below and get a feeling of the signal-to-noise value distribution with the second command (the Table and Statistics programs will be discussed later in the tutorial):

@example
$ asttable xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN
$ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2
... [output truncated] ...
Histogram:
 |           *
 |          ***
 |         ******
 |        *********
 |        **********
 |       *************
 |      *****************
 |     ********************
 |    **************************
 |   ********************************
 |*******************************************************   * **       *
 |----------------------------------------------------------------------
@end example

The correlated noise is again visible in the signal-to-noise distribution of sky pseudo-detections!
Do you see how skewed this distribution is?
In an image with less correlated noise, this distribution would be much more symmetric.
A small change in the quantile will translate into a big change in the S/N value.
For example see the difference between the three 0.99, 0.95 and 0.90 quantiles with this command:

@example
$ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2      \
                --quantile=0.99 --quantile=0.95 --quantile=0.90
@end example

We get a change of almost 2 units (which is very significant).
If you run NoiseChisel with @option{-P}, you'll see that the default signal-to-noise quantile @option{--snquant} is 0.99.
In effect, with this option you specify the purity level you want (the acceptable level of contamination by false detections).
With the @command{aststatistics} command above, you see that allowing a small number of extra false detections (impurity) in the final result causes a big increase in completeness (you can detect more lower signal-to-noise true detections).
So let's loosen up our desired purity level, remove the check-image options, and then mask the detected pixels like before to see if we have missed anything.

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                 --dthresh=0.1 --snquant=0.95
$ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
$ det="xdf-f160w_detected.fits -hDETECTIONS"
$ astarithmetic $in $det nan where --output=mask-det.fits
@end example

Overall it seems good, but if you play a little with the color-bar and look closely at the noise, you'll see a few very sharp, but faint, objects that have not been detected.
For example the object around pixel (456, 1662).
Despite its high-valued pixels, this object was lost because erosion ignores the precise pixel values.
Losing small/sharp objects like this only happens for under-sampled datasets like those of HST (where the pixel size is larger than the FWHM of the point spread function).
So this won't happen on ground-based images.

To address this problem of sharp objects, we can use NoiseChisel's @option{--noerodequant} option.
All pixels above this quantile will not be eroded, thus allowing us to preserve small/sharp objects (that cover a small area, but have a lot of signal in them).
Check its default value, then run NoiseChisel like below and make the mask again.

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits     \
                 --noerodequant=0.95 --dthresh=0.1 --snquant=0.95
@end example

This seems to be fine and the object above is now detected.
We'll stop the configuration here, but please feel free to keep looking into the data to see if you can improve it even more.

Once you have found the proper customization for the type of images you will be using, you don't need to change them any more.
The same configuration can be used for any dataset that has been similarly produced (and has a similar noise pattern).
But entering all these options on every call to NoiseChisel is annoying and prone to bugs (for example, mistakenly typing the wrong value).
To simplify things, we'll make a configuration file in a visible @file{config} directory.
Then we'll define the hidden @file{.gnuastro} directory (that all Gnuastro's programs will look into for configuration files) as a symbolic link to the @file{config} directory.
Finally, we'll write the finalized values of the options into NoiseChisel's standard configuration file within that directory.
We'll also put the kernel in a separate directory to keep the top directory clean of any files we later need.

@example
$ mkdir kernel config
$ ln -s config/ .gnuastro
$ mv kernel.fits kernel/noisechisel.fits
$ echo "kernel kernel/noisechisel.fits" > config/astnoisechisel.conf
$ echo "noerodequant 0.95"             >> config/astnoisechisel.conf
$ echo "dthresh      0.1"              >> config/astnoisechisel.conf
$ echo "snquant      0.95"             >> config/astnoisechisel.conf
@end example
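
@noindent
Before continuing, it is a good idea to confirm that NoiseChisel is actually reading these values from the new configuration file (@option{-P} prints the final values of all options after parsing every configuration file; here, we simply filter the ones we set with @command{grep}):

@example
$ astnoisechisel -P | grep -e kernel -e noerodequant \
                      -e dthresh -e snquant
@end example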

@noindent
We are now ready to finally run NoiseChisel on the three filters and keep the outputs in a dedicated directory (which we'll call @file{nc} for simplicity).

@example
$ rm *.fits
$ mkdir nc
$ for f in f105w f125w f160w; do \
    astnoisechisel flat-ir/xdf-$f.fits --output=nc/xdf-$f.fits
  done
@end example


@node NoiseChisel optimization for storage, Segmentation and making a catalog, NoiseChisel optimization for detection, General program usage tutorial
@subsection NoiseChisel optimization for storage

As we showed before (in @ref{NoiseChisel and Multiextension FITS files}), NoiseChisel's output is a multi-extension FITS file with several images of the same size as the input.
As the input datasets get larger this output can become hard to manage and waste a lot of storage space.
Fortunately there is a solution to this problem (which is also useful for Segment's outputs).

In this small section we'll take a short detour to show this feature.
Please note that the outputs generated here are not needed for the rest of the tutorial.
But first, let's have a look at the contents/HDUs and volume of NoiseChisel's output from @ref{NoiseChisel optimization for detection} (fast answer: it's larger than 100 megabytes):

@example
$ astfits nc/xdf-f160w.fits
$ ls -lh nc/xdf-f160w.fits
@end example

Two options can drastically decrease NoiseChisel's output file size: 1) With the @option{--rawoutput} option, NoiseChisel won't create a Sky-subtracted image.
After all, it is redundant: you can always regenerate it by subtracting the @code{SKY} extension from the input image (which you have in your database) using the Arithmetic program.
2) With @option{--oneelempertile}, you can tell NoiseChisel to store its Sky and Sky standard deviation results with one pixel per tile (instead of many pixels per tile).
So let's run NoiseChisel with these options, then have another look at the HDUs and the overall file size:

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --oneelempertile --rawoutput \
                 --output=nc-for-storage.fits
$ astfits nc-for-storage.fits
$ ls -lh nc-for-storage.fits
@end example

@noindent
See how @file{nc-for-storage.fits} has four HDUs, while @file{nc/xdf-f160w.fits} had five HDUs?
As explained above, the missing extension is @code{INPUT-NO-SKY}.
Also, look at the sizes of the @code{SKY} and @code{SKY_STD} HDUs: unlike before, they aren't the same size as @code{DETECTIONS}; they only have one pixel for each tile (group of pixels in the raw input).
Finally, you see that @file{nc-for-storage.fits} is just under 8 megabytes (while @file{nc/xdf-f160w.fits} was 100 megabytes)!
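
@noindent
By the way, if you ever need the Sky-subtracted image again, producing it is a single Arithmetic call (a minimal sketch; we use the full-sized @code{SKY} extension of the default output from before, since the one-pixel-per-tile Sky image of @file{nc-for-storage.fits} would first have to be expanded over its tiles):

@example
$ in="flat-ir/xdf-f160w.fits -h1"
$ sky="nc/xdf-f160w.fits -hSKY"
$ astarithmetic $in $sky - --output=input-no-sky.fits
$ rm input-no-sky.fits
@end example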

But we are not finished!
You can be even more efficient in storing, archiving, or transferring NoiseChisel's output by compressing this file.
Try the command below to see how NoiseChisel's output has now shrunk to about 250 kilobytes, while keeping all the necessary information of the original 100-megabyte output.

@example
$ gzip --best nc-for-storage.fits
$ ls -lh nc-for-storage.fits.gz
@end example

We can get this wonderful level of compression because the large @code{DETECTIONS} extension of NoiseChisel's output is binary, with only two values: 0 and 1.
Compression algorithms are highly optimized in such scenarios.

You can open @file{nc-for-storage.fits.gz} directly in SAO DS9 or feed it to any of Gnuastro's programs without having to decompress it.
Higher-level programs that take NoiseChisel's output (for example Segment or MakeCatalog) can also deal with this compressed image where the Sky and its Standard deviation are one pixel-per-tile.
You just have to give the ``values'' image as a separate option; for more, see @ref{Segment} and @ref{MakeCatalog}.

Segment (the program we will introduce in the next section for identifying sub-structure) also has similar features to optimize its output for storage.
Since this file was only created for a fast detour demonstration, let's keep our top directory clean and move to the next step:

@example
rm nc-for-storage.fits.gz
@end example



@node Segmentation and making a catalog, Working with catalogs estimating colors, NoiseChisel optimization for storage, General program usage tutorial
@subsection Segmentation and making a catalog
The main output of NoiseChisel is the binary detection map (the @code{DETECTIONS} extension, see @ref{NoiseChisel optimization for detection}), which only has two values: 1 or 0.
This is useful when studying the noise or background properties, but hardly of any use when you actually want to study the targets/galaxies in the image, especially in such a deep field where almost everything is connected.
To find the galaxies over the detections, we'll use Gnuastro's @ref{Segment} program:

@example
$ mkdir seg
$ astsegment nc/xdf-f160w.fits -oseg/xdf-f160w.fits
$ astsegment nc/xdf-f125w.fits -oseg/xdf-f125w.fits
$ astsegment nc/xdf-f105w.fits -oseg/xdf-f105w.fits
@end example

Segment's operation is very much like NoiseChisel's (in fact, prior to version 0.6, it was part of NoiseChisel).
For example, the output is a multi-extension FITS file, it has check images, and it uses the undetected regions as a reference.
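
Just like with NoiseChisel, you can list the check images Segment can produce (a quick sketch, mirroring the @command{grep} command we used on NoiseChisel earlier):

@example
$ astsegment --help | grep check
@end example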
Please have a look at Segment's multi-extension output with @command{ds9} to get a good feeling of what it has done.

@example
$ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit
@end example

Like NoiseChisel, the first extension is the input.
The @code{CLUMPS} extension shows the true ``clumps'' with values that are @mymath{\ge1}, and the diffuse regions labeled as @mymath{-1}.
Please flip between the first extension and the clumps extension and zoom-in on some of the clumps to get a feeling of what they are.
In the @code{OBJECTS} extension, we see that the large detections of NoiseChisel (that may have contained many galaxies) are now broken up into separate labels.
Play with the color-bar and hover your mouse over the various detections to see their different labels.

The clumps are not affected by the hard-to-deblend, low signal-to-noise diffuse regions, so they are more robust for calculating the colors (compared to objects).
From this step onward, we'll continue with clumps.

Having localized the regions of interest in the dataset, we are ready to do measurements on them with @ref{MakeCatalog}.
Besides the IDs, we want to measure (in this order) the Right Ascension (with @option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}), and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
Furthermore, as mentioned above, we also want measurements on clumps, so we also need to call @option{--clumpscat}.
The following command will make these measurements on Segment's F160W output and write them in a catalog for each object and clump in a FITS table.

@example
$ mkdir cat
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
               --zeropoint=25.94 --clumpscat --output=cat/xdf-f160w.fits
@end example

@noindent
From the printed statements on the command-line, you see that MakeCatalog read all the extensions in Segment's output for the various measurements it needed.
To calculate colors, we also need magnitude measurements on the other filters.
So let's repeat the command above on them, just changing the file names and zeropoint (which we got from the XDF survey web page):

@example
$ astmkcatalog seg/xdf-f125w.fits --ids --ra --dec --magnitude --sn \
               --zeropoint=26.23 --clumpscat --output=cat/xdf-f125w.fits

$ astmkcatalog seg/xdf-f105w.fits --ids --ra --dec --magnitude --sn \
               --zeropoint=26.27 --clumpscat --output=cat/xdf-f105w.fits
@end example

However, the galaxy properties might differ between the filters (which is the whole purpose behind observing in different filters!).
Also, the noise properties and depth of the datasets differ.
You can see the effect of these factors in the resulting clump catalogs, with Gnuastro's Table program.
We'll go deep into working with tables in the next section, but in summary: the @option{-i} option will print information about the columns and number of rows.
To see the column values, just remove the @option{-i} option.
In the output of each command below, look at the @code{Number of rows:}, and note that they are different.

@example
$ asttable cat/xdf-f105w.fits -hCLUMPS -i
$ asttable cat/xdf-f125w.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example

Matching the catalogs is possible (for example with @ref{Match}).
However, the measurements in each catalog were also done on different pixels: the clump labels can/will differ from one filter to another for the same object.
Please open them and focus on one object to see for yourself.
This can bias the result if you match catalogs.
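
For example, with the commands below you can print the object/clump labels and positions of the first ten rows in both filters (a quick sketch; @command{head} simply truncates the output), to see how the rows don't correspond one-to-one:

@example
$ asttable cat/xdf-f105w.fits -hCLUMPS -cHOST_OBJ_ID,ID,RA,DEC | head
$ asttable cat/xdf-f160w.fits -hCLUMPS -cHOST_OBJ_ID,ID,RA,DEC | head
@end example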

An accurate color calculation can only be done when magnitudes are measured from the same pixels on both images.
Fortunately, in these images the point spread functions (PSFs) are very similar, allowing us to do this directly@footnote{When the PSFs of two images differ significantly, you would have to PSF-match the images before using the same pixels for measurements.}.
You can do this with MakeCatalog, and this is one of the reasons that NoiseChisel and Segment don't generate a catalog at all (to give you the freedom of selecting the pixels to do catalog measurements on).

The F160W image is deeper, thus providing better detection/segmentation, and redder, thus observing smaller/older stars and representing more of the mass in the galaxies.
We will thus use the F160W filter as a reference and use its segment labels to identify which pixels to use for which objects/clumps.
But we will do the measurements on the Sky-subtracted F105W and F125W images (using MakeCatalog's @option{--valuesfile} option) as shown below.
Notice that the only difference between these calls and the call that generated the raw F160W catalog (excluding the zero point and the output name) is the @option{--valuesfile} option.

@example
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
               --valuesfile=nc/xdf-f125w.fits --zeropoint=26.23 \
               --clumpscat --output=cat/xdf-f125w-on-f160w-lab.fits

$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
               --valuesfile=nc/xdf-f105w.fits --zeropoint=26.27 \
               --clumpscat --output=cat/xdf-f105w-on-f160w-lab.fits
@end example

After running the commands above, look into what MakeCatalog printed on the command-line.
You can see that (as requested) the object and clump labels for both were taken from the respective extensions in @file{seg/xdf-f160w.fits}, while the values and Sky standard deviation were taken from @file{nc/xdf-f105w.fits} and @file{nc/xdf-f125w.fits}.
Since we used the same labeled image on both filters, the number of rows in both catalogs are now identical.
Let's have a look:

@example
$ asttable cat/xdf-f105w-on-f160w-lab.fits -hCLUMPS -i
$ asttable cat/xdf-f125w-on-f160w-lab.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example

Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in the FITS headers, or lines starting with @code{#} in plain text) contain some important information about the input datasets and other useful info (for example pixel area or per-pixel surface brightness limit).
You can see them with this command:

@example
$ astfits cat/xdf-f160w.fits -h1 | grep COMMENT
@end example


@node Working with catalogs estimating colors, Column statistics color-magnitude diagram, Segmentation and making a catalog, General program usage tutorial
@subsection Working with catalogs (estimating colors)
In the previous step we generated catalogs of objects and clumps over our dataset (see @ref{Segmentation and making a catalog}).
The catalogs are available in the two extensions of the single FITS file@footnote{MakeCatalog can also output plain text tables.
However, in the plain text format you can only have one table per file.
Therefore, if you also request measurements on clumps, two plain text tables will be created (suffixed with @file{_o.txt} and @file{_c.txt}).}.
Let's see the extensions and their basic properties with the Fits program:

@example
$ astfits  cat/xdf-f160w.fits              # Extension information
@end example

Let's inspect the table in each extension with Gnuastro's Table program (see @ref{Table}).
We could also have used @option{-hOBJECTS} and @option{-hCLUMPS} instead of @option{-h1} and @option{-h2} respectively.
The numbers are just used here to convey that both names and numbers are possible; in the next commands, we'll just use names.

@example
$ asttable cat/xdf-f160w.fits -h1 --info   # Objects catalog info.
$ asttable cat/xdf-f160w.fits -h1          # Objects catalog columns.
$ asttable cat/xdf-f160w.fits -h2 -i       # Clumps catalog info.
$ asttable cat/xdf-f160w.fits -h2          # Clumps catalog columns.
@end example

@noindent
As you see above, when given a specific table (file name and extension), Table will print the full contents of all the columns.
To see the basic metadata about each column (for example name, units and comments), simply append a @option{--info} (or @option{-i}) to the command.

To print the contents of special column(s), just give the column number(s) (counting from @code{1}) or the column name(s) (if they have one) to the @option{--column} (or @option{-c}) option.
For example, if you just want the magnitude and signal-to-noise ratio of the clumps (in the clumps catalog), you can get them with any of the following commands:

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --column=5,6
$ asttable cat/xdf-f160w.fits -hCLUMPS -c5,SN
$ asttable cat/xdf-f160w.fits -hCLUMPS -c5         -c6
$ asttable cat/xdf-f160w.fits -hCLUMPS -cMAGNITUDE -cSN
@end example

@noindent
Similar to HDUs, when the columns have names, always use the name: it is so common to mistype numbers or forget the order later!
Using column names instead of numbers has many advantages:
@enumerate
@item
You don't have to worry about the order of columns in the table.
@item
It acts as documentation in the script.
@item
Column meta-data (including a name) aren't just limited to FITS tables and can also be used in plain text tables, see @ref{Gnuastro text table format}.
@end enumerate

@noindent
Table also has tools to limit the displayed rows.
For example, with the first command below only rows with a magnitude in the range of 28 to 29 will be shown.
With the second command, you can further limit the displayed rows to those with an S/N larger than 10 (a range between 10 and infinity).
You can also sort the output rows, only show the top (or bottom) N rows, and so on; for more, see @ref{Table}.

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --range=MAGNITUDE,28:29
$ asttable cat/xdf-f160w.fits -hCLUMPS \
           --range=MAGNITUDE,28:29 --range=SN,10:inf
@end example

Now that you are comfortable in viewing table columns and rows, let's look into merging columns of multiple tables into one table (which is necessary for measuring the color of the clumps).
Since @file{cat/xdf-f160w.fits} and @file{cat/xdf-f105w-on-f160w-lab.fits} have exactly the same number of rows and the rows correspond to the same clump, let's merge them to have one table with magnitudes in both filters.

We can merge columns with the @option{--catcolumnfile} option like below.
You give this option a file name (which is assumed to be a table that has the same number of rows as the main input), and all the table's columns will be concatenated/appended to the main table.
So please try it out with the commands below.
We'll first look at the metadata of the first table (only the @code{CLUMPS} extension).
With the second command, we'll concatenate the two tables and write them into @file{two-in-one.fits}, and finally, we'll check the new catalog's metadata.

@example
$ asttable cat/xdf-f160w.fits -i -hCLUMPS
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=two-in-one.fits \
           --catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
           --catcolumnhdu=CLUMPS
$ asttable two-in-one.fits -i
@end example

By comparing the two metadata, we see that both tables have the same number of rows.
But what might have attracted your attention more, is that @file{two-in-one.fits} has double the number of columns (as expected, after all, you merged both tables into one file, and didn't ask for any specific column).
In fact you can concatenate any number of other tables in one command, for example:

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one.fits \
           --catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
           --catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
           --catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS
$ asttable three-in-one.fits -i
@end example

As you see, to avoid confusion in column names, Table has intentionally appended a @code{-1} to the column names of the first concatenated table (so for example we have the original @code{RA} column, and another one called @code{RA-1}).
Similarly a @code{-2} has been added for the columns of the second concatenated table.

However, this example clearly shows a problem with this full concatenation: some columns are identical (for example @code{HOST_OBJ_ID} and @code{HOST_OBJ_ID-1}), or not needed (for example @code{RA-1} and @code{DEC-1} which are not necessary here).
In such cases, you can use @option{--catcolumns} to only concatenate certain columns, not the whole table.
For example this command:

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=two-in-one-2.fits \
           --catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
           --catcolumnhdu=CLUMPS --catcolumns=MAGNITUDE
$ asttable two-in-one-2.fits -i
@end example

You see that we have now only appended the @code{MAGNITUDE} column of @file{cat/xdf-f125w-on-f160w-lab.fits}.
This is what we needed to be able to later subtract the magnitudes.
Let's go ahead and add the F105W magnitudes also with the command below.
Note how we need to call @option{--catcolumnhdu} once for every table that should be appended, but we only call @option{--catcolumns} once (assuming all the tables that should be appended have this column).

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one-2.fits \
           --catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
           --catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
           --catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS \
           --catcolumns=MAGNITUDE
$ asttable three-in-one-2.fits -i
@end example

But we aren't finished yet!
There is a very big problem: it's not immediately clear which one of the @code{MAGNITUDE}, @code{MAGNITUDE-1} or @code{MAGNITUDE-2} columns belongs to which filter!
Right now, you know this because you just ran this command.
But in one hour, you'll start doubting yourself and will be forced to go through your command history, trying to figure out if you added F105W first, or F125W.
You should never torture your future-self (or your colleagues) like this!
So, let's rename these confusing columns in the matched catalog.

Fortunately, with the @option{--colmetadata} option, you can correct the column metadata of the final table (just before it is written).
It takes four values:
1) the original column name or number,
2) the new column name,
3) the column unit and
4) the column comments.
Since the comments are usually human-friendly sentences and contain space characters, you should put them in double quotation marks like below.
For example, by adding three calls of this option to the previous command, we write the filter name into each magnitude column's name and description.

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one-3.fits \
        --catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
        --catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
        --catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS \
        --catcolumns=MAGNITUDE \
        --colmetadata=MAGNITUDE,MAG-F160W,log,"Magnitude in F160W." \
        --colmetadata=MAGNITUDE-1,MAG-F125W,log,"Magnitude in F125W." \
        --colmetadata=MAGNITUDE-2,MAG-F105W,log,"Magnitude in F105W."
$ asttable three-in-one-3.fits -i
@end example

We now have all three magnitudes in one table and can start doing arithmetic on them (to estimate colors, which are just a subtraction of magnitudes).
To use column arithmetic, simply call the column selection option (@option{--column} or @option{-c}), put the value in single quotations and start the value with @code{arith} (followed by a space) like the example below.
Column arithmetic uses the same ``reverse polish notation'' as the Arithmetic program (see @ref{Reverse polish notation}), with almost all the same operators (see @ref{Arithmetic operators}), and some column-specific operators (that aren't available for images).
In column-arithmetic, you can identify columns by number (prefixed with a @code{$}) or name, for more see @ref{Column arithmetic}.

So let's estimate one color from @file{three-in-one-3.fits} using column arithmetic.
All the commands below will produce the same output; try each of them and focus on the differences in how the columns are specified.
Note that column arithmetic can be mixed with other ways to choose output columns (the @code{-c} option).

@example
$ asttable three-in-one-3.fits -ocolor-cat.fits \
           -c1,2,3,4,'arith $7 $5 -'

$ asttable three-in-one-3.fits -ocolor-cat.fits \
           -c1,2,RA,DEC,'arith MAG-F125W MAG-F160W -'

$ asttable three-in-one-3.fits -ocolor-cat.fits -c1,2 \
           -cRA,DEC --column='arith MAG-F125W MAG-F160W -'
@end example

This example again highlights the important point of using column names: if you didn't see the previous commands, you have no way of making sense of the first command: what is in columns 5 and 7? Why not subtract columns 3 and 4 from each other?
Do you see how cryptic the first one is?
Then look at the last one: even if you have no idea how this table was created, you immediately understand the desired operation.
@strong{When you have column names, please use them.}
If your table doesn't have column names, give them names with the @option{--colmetadata} (described above) as you are creating them.
But how about the metadata for the column you just created with column arithmetic?
Have a look at the column metadata of the table produced above:

@example
$ asttable color-cat.fits -i
@end example

The name of the column produced by column arithmetic is @code{ARITH_1}!
This is natural: Table has no idea what the new column represents!
You could have multiplied two columns, or done much more complex transformations with many columns.
@emph{Metadata can't be set automatically, your (the human) input is necessary.}
To add metadata, you can use @option{--colmetadata} like before:

@example
$ asttable three-in-one-3.fits -ocolor-cat.fits -c1,2,RA,DEC \
         --column='arith MAG-F105W MAG-F160W -' \
         --colmetadata=ARITH_1,F105W-F160W,log,"Magnitude difference"
$ asttable color-cat.fits -i
@end example

We are now ready to make our final table.
We want it to have the magnitudes in all three filters, as well as the three possible colors.
Recall that, by convention in astronomy, colors are defined by subtracting the redder filter's magnitude from the bluer filter's magnitude.
In this way a larger color value corresponds to a redder object.
So from the three magnitudes, we can produce three colors (as shown below).
Also, because this is the final table we are creating here and want to use later, we'll store it in @file{cat/}, give it a clear name, and use the @option{--range} option to only keep rows with a signal-to-noise ratio (the @code{SN} column, from the F160W filter) above 5.

@example
$ asttable three-in-one-3.fits --range=SN,5,inf -c1,2,RA,DEC,SN \
         -cMAG-F160W,MAG-F125W,MAG-F105W \
         -c'arith MAG-F125W MAG-F160W -' \
         -c'arith MAG-F105W MAG-F125W -' \
         -c'arith MAG-F105W MAG-F160W -' \
         --colmetadata=SN,SN-F160W,ratio,"F160W signal to noise ratio" \
         --colmetadata=ARITH_1,F125W-F160W,log,"Color F125W and F160W" \
         --colmetadata=ARITH_2,F105W-F125W,log,"Color F105W and F125W" \
         --colmetadata=ARITH_3,F105W-F160W,log,"Color F105W and F160W" \
         --output=cat/mags-with-color.fits
$ asttable cat/mags-with-color.fits -i
@end example

The table now has all the columns we need and it has the proper metadata to let us safely use it later (without frustrating over column orders!) or passing it to colleagues.

Let's finish this section of the tutorial with a useful tip on modifying column metadata.
Above, updating/changing column metadata was done with the @option{--colmetadata} in the same command that produced the newly created Table file.
But in many situations, the table is already made and you just want to update the metadata of one column.
In such cases, using @option{--colmetadata} is overkill (wasting CPU, RAM and time if the table is large), because it will load the full table data and metadata into memory, just to change the metadata and write it all back into a file.

In scenarios when the table's data doesn't need to be changed and you just want to set or update the metadata, it is much more efficient to use basic FITS keyword editing.
For example, in the FITS standard, column names are stored in the @code{TTYPE} header keywords, so let's have a look:

@example
$ asttable two-in-one.fits -i
$ astfits two-in-one.fits -h1 | grep TTYPE
@end example

Changing/updating the column names is as easy as updating the values to these keywords.
You don't need to touch the actual data!
With the command below, we'll just update the @code{MAGNITUDE} and @code{MAGNITUDE-1} columns (which are respectively stored in the @code{TTYPE5} and @code{TTYPE11} keywords) by modifying the keyword values and checking the effect by listing the column metadata again:

@example
$ astfits two-in-one.fits -h1 \
          --update=TTYPE5,MAG-F160W \
          --update=TTYPE11,MAG-F125W
$ asttable two-in-one.fits -i
@end example

You can see that the column names have indeed been changed without touching any of the data.
You can do the same for the column units or comments by modifying the keywords starting with @code{TUNIT} or @code{TCOMM}.
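
For example, here is a minimal sketch that sets the unit of the fifth column (stored in @code{TUNIT5}; we use the same ``log'' unit we gave with @option{--colmetadata} before) and checks the result:

@example
$ astfits two-in-one.fits -h1 --update=TUNIT5,log
$ asttable two-in-one.fits -i
@end example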

Generally, Gnuastro's Table program is very useful for data analysis, and what you have seen so far is just the tip of the iceberg.
But to avoid making the tutorial even longer, we'll stop reviewing the features here, for more, please see @ref{Table}.
Before continuing, let's just delete all the temporary FITS tables we placed in the top project directory:

@example
rm *.fits
@end example





@node Column statistics color-magnitude diagram, Aperture photometry, Working with catalogs estimating colors, General program usage tutorial
@subsection Column statistics (color-magnitude diagram)
In @ref{Working with catalogs estimating colors} we created a single catalog containing the magnitudes of our desired clumps in all three filters, and their colors.
To start with, let's inspect the distribution of the three colors with the Statistics program.

@example
$ aststatistics cat/mags-with-color.fits -cF105W-F125W
$ aststatistics cat/mags-with-color.fits -cF105W-F160W
$ aststatistics cat/mags-with-color.fits -cF125W-F160W
@end example

This tiny and cute ASCII histogram (and the general information printed above it) gives you a crude (but very useful and fast) feeling for the distribution.
You can later use Gnuastro's Statistics program with the @option{--histogram} option to build a much more fine-grained histogram as a table to feed into your favorite plotting program for a much more accurate/appealing plot (for example with PGFPlots in @LaTeX{}).
If you just want a specific measure, for example the mean, median and standard deviation, you can ask for them specifically, like below:

@example
$ aststatistics cat/mags-with-color.fits -cF105W-F160W \
                --mean --median --std
@end example

@cindex Color-magnitude diagram
The basic statistics we measured above were just on one column.
In many scenarios this is fine, but things get much more exciting if you look at the correlation of two columns with each other.
For example, let's create the color-magnitude diagram for our measured targets.

@cindex Scatter plot
@cindex 2D histogram
@cindex Plot, scatter
@cindex Histogram, 2D
In many papers, the color-magnitude diagram is usually plotted as a scatter plot.
However, scatter plots have a major limitation when there are a lot of points and they cluster together in one region of the plot: the possible correlation in that dense region is lost (because the points fall over each other).
In such cases, it's much better to use a 2D histogram.
In a 2D histogram, the full range in both columns is divided into discrete 2D bins (or pixels!) and we count how many objects fall in that 2D bin.

Since a 2D histogram is a pixelated space, we can simply save it as a FITS image and view it in a FITS viewer.
Let's do this in the command below.
As is common with color-magnitude plots, we'll put the redder magnitude on the horizontal axis and the color on the vertical axis.
We'll set both dimensions to have 100 bins (with @option{--numbins} for the horizontal and @option{--numbins2} for the vertical).
Also, to avoid strong outliers in any of the dimensions, we'll manually set the range of each dimension with the @option{--greaterequal}, @option{--greaterequal2}, @option{--lessthan} and @option{--lessthan2} options.

@example
$ aststatistics cat/mags-with-color.fits -cMAG-F160W,F105W-F160W \
                --histogram2d=image --manualbinrange \
                --numbins=100  --greaterequal=22  --lessthan=30 \
                --numbins2=100 --greaterequal2=-1 --lessthan2=3 \
                --output=cmd.fits
@end example

@noindent
You can now open this FITS file as a normal FITS image, for example with the command below.
Try hovering/zooming over the pixels: not only will you see the number of objects from our catalog that fall in each bin/pixel, but you will also see that pixel's @code{F160W} magnitude and color (in the same place you usually see RA and Dec when hovering over an astronomical image).

@example
$ ds9 cmd.fits -cmap sls -zoom to fit
@end example

Having a 2D histogram as a FITS image with WCS has many great advantages.
For example, just like FITS images of the night sky, you can ``match'' many 2D histograms that were created independently.
You can add two histograms with each other, or you can use advanced features of FITS viewers to find structure in the correlation of your columns.
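
For example, if you had built a second 2D histogram on exactly the same grid (the hypothetical @file{cmd-other.fits} below), stacking the two would be a single Arithmetic call:

@example
$ astarithmetic cmd.fits cmd-other.fits + --output=cmd-sum.fits
@end example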

@noindent
With the first command below, you can activate the grid feature of DS9 to actually see the coordinate grid, as well as values on each line.
With the second command, DS9 will even read the labels of the axes and use them to generate an almost publication-ready plot.

@example
$ ds9 cmd.fits -cmap sls -zoom to fit -grid yes
$ ds9 cmd.fits -cmap sls -zoom to fit -grid yes -grid type publication
@end example

If you are happy with the grid, coloring and so on, you can also use DS9 to save this as a JPEG image to directly use in your documents/slides with these extra DS9 options (DS9 will write the image to @file{cmd-2d.jpeg} and quit immediately afterwards):

@example
$ ds9 cmd.fits -cmap sls -zoom 4 -grid yes -grid type publication \
      -saveimage cmd-2d.jpeg -quit
@end example

@cindex PGFPlots (@LaTeX{} package)
This is good for a fast progress update.
But for your paper or more official report, you want to show something with higher quality.
For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same font as your text, sharp grids and many other elegant/powerful features (like over-plotting interesting points, lines, and more).
But to load the 2D histogram into PGFPlots first you need to convert the FITS image into a more standard format, for example PDF.
We'll use Gnuastro's @ref{ConvertType} for this, and use the @code{sls-inverse} color map (which will map the pixels with a value of zero to white):

@example
$ astconvertt cmd.fits --colormap=sls-inverse --borderwidth=0 -ocmd.pdf
@end example

@noindent
Below you can see a minimally working example of how to add axis numbers, labels and a grid to the PDF generated above.
First, let's create a new @file{report} directory to keep the @LaTeX{} outputs, then put the minimal report's source in a file called @file{report.tex}.
Notice the @code{xmin}, @code{xmax}, @code{ymin}, @code{ymax} values and how they are the same as the range specified above.

@example
$ mkdir report
$ mv cmd.pdf report/
$ cat report/report.tex
\documentclass@{article@}
\usepackage@{pgfplots@}
\dimendef\prevdepth=0
\begin@{document@}

You can write all you want here...\par

\begin@{tikzpicture@}
  \begin@{axis@}[
      enlargelimits=false,
      grid,
      axis on top,
      width=\linewidth,
      height=\linewidth,
      xlabel=@{Magnitude (F160W)@},
      ylabel=@{Color (F105W-F160W)@}]

    \addplot graphics[xmin=22, xmax=30, ymin=-1, ymax=3] @{cmd.pdf@};
  \end@{axis@}
\end@{tikzpicture@}
\end@{document@}
@end example

@noindent
Run these commands to build your PDF (assuming you have @LaTeX{} and PGFPlots).

@example
$ cd report
$ pdflatex report.tex
@end example

Open the newly created @file{report.pdf} and enjoy the exquisite quality.
The improved quality, blending in with the text, vector-graphics resolution and other features make this plot pleasing to the eye, and let your readers focus on the main point of your scientific argument.
PGFPlots can also build the PDF of the plot separately from the rest of the paper/report; see @ref{2D histogram as a table} for the necessary changes in the preamble.

We won't go much deeper into the Statistics program here, but there is so much more you can do with it.
After finishing the tutorial, see @ref{Statistics}.





@node Aperture photometry, Matching catalogs, Column statistics color-magnitude diagram, General program usage tutorial
@subsection Aperture photometry
The colors we calculated in @ref{Working with catalogs estimating colors} used a different segmentation map for each object.
This might not satisfy some science cases that need the flux within a fixed area/aperture.
Fortunately Gnuastro's modular programs make it very easy to do this type of measurement (photometry).
To do this, we can ignore the labeled images of NoiseChisel or Segment and just build our own labeled image!
That labeled image can then be given to MakeCatalog.

@cindex GNU AWK
To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see @ref{MakeProfiles}).
But first we need a list of positions (aperture photometry needs a-priori knowledge of your target positions).
So we'll first read the clump positions from the F160W catalog, then use AWK to set the other parameters of each profile to be a fixed circle of radius 5 pixels (recall that we want all apertures to have an identical size/area in this scenario).

@example
$ rm *.fits *.txt
$ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC \
           | awk '!/^#/@{print NR, $1, $2, 5, 5, 0, 0, 1, NR, 1@}' \
           > apertures.txt
$ cat apertures.txt
@end example

We can now feed this catalog into MakeProfiles using the command below to build the apertures over the image.
The most important option for this particular job is @option{--mforflatpix}, it tells MakeProfiles that the values in the magnitude column should be used for each pixel of a flat profile.
Without it, MakeProfiles would build the profiles such that the @emph{sum} of the pixels of each profile would have a @emph{magnitude} (in log-scale) of the value given in that column (what you would expect when simulating a galaxy for example).
See @ref{Invoking astmkprof} for details on the options.

@example
$ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits \
            --clearcanvas --replace --type=int16 --mforflatpix \
            --mode=wcs
@end example

The first thing you might notice in the printed information is that the profiles are not built in order.
This is because MakeProfiles works in parallel, and parallel CPU operations are asynchronous.
You can try running MakeProfiles with one thread (using @option{--numthreads=1}) to see how order is respected in that case, at the cost of speed (note that a multi-threaded run is much faster when more mathematically-complicated profiles are built, like S@'ersic profiles).
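
For example, the same command as above, forced onto a single thread:

@example
$ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits \
            --clearcanvas --replace --type=int16 --mforflatpix \
            --mode=wcs --numthreads=1
@end example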

Open @file{apertures.fits} with a FITS viewer and look around at the circles placed over the targets.
Also open the input image and Segment's clumps image and compare them with the positions of these circles.
Where the apertures overlap, you will notice that one label has replaced the other (because of the @option{--replace} option).
In the future, MakeCatalog will be able to work with overlapping labels, but currently it doesn't.
If you are interested, please join us in completing Gnuastro with added improvements like this (see task 14750 @footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).

We can now feed the @file{apertures.fits} labeled image into MakeCatalog instead of Segment's output as shown below.
In comparison with the previous MakeCatalog call, you will notice that there is no more @option{--clumpscat} option, since there is no separate ``clump'' image now; each aperture is treated as a separate ``object''.

@example
$ astmkcatalog apertures.fits -h1 --zeropoint=26.27 \
               --valuesfile=nc/xdf-f105w.fits \
               --ids --ra --dec --magnitude --sn \
               --output=cat/xdf-f105w-aper.fits
@end example

This catalog has the same number of rows as the catalog produced from clumps in @ref{Working with catalogs estimating colors}.
Therefore similar to how we found colors, you can compare the aperture and clump magnitudes for example.

You can also change the filter name and zero point magnitude and run this command again to have the fixed-aperture magnitude in the F160W filter, and thus measure colors on apertures.
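
For example, a sketch for the F160W filter (re-using the F160W zero point from before):

@example
$ astmkcatalog apertures.fits -h1 --zeropoint=25.94 \
               --valuesfile=nc/xdf-f160w.fits \
               --ids --ra --dec --magnitude --sn \
               --output=cat/xdf-f160w-aper.fits
@end example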



@node Matching catalogs, Finding reddest clumps and visual inspection, Aperture photometry, General program usage tutorial
@subsection Matching catalogs

In the example above, we had the luxury of generating the catalogs ourselves, and were thus able to generate them in a way that the rows match.
But this isn't generally the case.
In many situations, you need to use catalogs from many different telescopes, or catalogs with high-level calculations that you can't simply regenerate with the same pixels without spending a lot of time or using heavy computation.
In such cases, when each catalog has the coordinates of its own objects, you can use the coordinates to match the rows with Gnuastro's Match program (see @ref{Match}).

As the name suggests, Gnuastro's Match program will match rows based on distance (or aperture in 2D) in one, two, or three columns.
For this tutorial, let's try matching the two catalogs that weren't created from the same labeled images, recall how each has a different number of rows:

@example
$ asttable cat/xdf-f105w.fits -hCLUMPS -i
$ asttable cat/xdf-f160w.fits -hCLUMPS -i
@end example

You give Match two catalogs (from the two different filters we derived above) as arguments, along with the HDUs containing them (if they are FITS files) with the @option{--hdu} and @option{--hdu2} options.
The @option{--ccol1} and @option{--ccol2} options specify the coordinate-columns which should be matched with which in the two catalogs.
With @option{--aperture} you specify the acceptable error (radius in 2D), in the same units as the columns.

@example
$ astmatch cat/xdf-f160w.fits           cat/xdf-f105w.fits \
           --hdu=CLUMPS                 --hdu2=CLUMPS \
           --ccol1=RA,DEC               --ccol2=RA,DEC \
           --aperture=0.5/3600 \
           --output=matched.fits
$ astfits matched.fits
@end example

From the second command, you see that the output has two extensions and that both have the same number of rows.
The rows in each extension are the matched rows of the respective input table: those in the first HDU come from the first input and those in the second HDU come from the second.
However, their order may be different from the input tables because the rows match: the first row in the first HDU matches the first row in the second HDU, and so on.
You can also see which objects didn't match with the @option{--notmatched} option, like below.
Note how each extension of the output now has a different number of rows.

@example
$ astmatch cat/xdf-f160w.fits           cat/xdf-f105w.fits \
           --hdu=CLUMPS                 --hdu2=CLUMPS \
           --ccol1=RA,DEC               --ccol2=RA,DEC \
           --aperture=0.5/3600 \
           --output=not-matched.fits    --notmatched
$ astfits not-matched.fits
@end example

The @option{--outcols} of Match is a very convenient feature: you can use it to specify which columns from the two catalogs you want in the output (merge two input catalogs into one).
If the first character is an `@key{a}', the respective matched column (number or name, similar to Table above) in the first catalog will be written in the output table.
When the first character is a `@key{b}', the respective column from the second catalog will be written in the output.
Also, if the first character is followed by @code{_all}, then all the columns from the respective catalog will be put in the output.

@example
$ astmatch cat/xdf-f160w.fits           cat/xdf-f105w.fits \
           --hdu=CLUMPS                 --hdu2=CLUMPS \
           --ccol1=RA,DEC               --ccol2=RA,DEC \
           --aperture=0.35/3600 \
           --outcols=a_all,bMAGNITUDE,bSN \
           --output=matched.fits
$ astfits matched.fits
@end example





@node Finding reddest clumps and visual inspection, Writing scripts to automate the steps, Matching catalogs, General program usage tutorial
@subsection Finding reddest clumps and visual inspection
@cindex GNU AWK
As a final step, let's go back to the original clumps-based color measurement we generated in @ref{Working with catalogs estimating colors}.
We'll find the objects with the strongest color and make a cutout to inspect them visually and finally, we'll see how they are located on the image.
With the command below, we'll select the reddest objects (those with a color larger than 1.5):

@example
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf
@end example

@noindent
You can see how many they are by piping it to @code{wc -l}:

@example
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf | wc -l
@end example

Let's crop the F160W image around each of these objects, but we first need a unique identifier for them.
We'll define this identifier using the object and clump labels (with an underscore between them) and feed the output of the command above to AWK to generate a catalog.
Note that since we are making a plain text table, we'll define the necessary (for the string-type first column) metadata manually (see @ref{Gnuastro text table format}).

@example
$ echo "# Column 1: ID [name, str10] Object ID" > reddest.txt
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf \
           | awk '@{printf("%d_%-10d %f %f\n", $1, $2, $3, $4)@}' \
           >> reddest.txt
@end example

We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what these objects look like.
To keep things clean, we'll make a directory called @file{crop-red} and ask Crop to save the crops in this directory.
We'll also add a @file{-f160w.fits} suffix to the crops (to remind us which filter they came from).
The width of the crops will be 15 arc-seconds (or 15/3600 degrees, since the units of the WCS are degrees).

@example
$ mkdir crop-red
$ astcrop flat-ir/xdf-f160w.fits --mode=wcs --namecol=ID \
          --catalog=reddest.txt --width=15/3600,15/3600  \
          --suffix=-f160w.fits --output=crop-red
@end example

You can see all the cropped FITS files in the @file{crop-red} directory.
Like the MakeProfiles command in @ref{Aperture photometry}, you might notice that the crops aren't made in order.
This is because each crop is independent of the rest; therefore crops are done in parallel, and parallel operations are asynchronous.
In the command above, you can change @file{f160w} to @file{f105w} to make the crops in both filters.
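
For example, assuming you made the F105W crop in @ref{Dataset inspection and cropping} (so @file{flat-ir/xdf-f105w.fits} exists), the F105W crops would be made with:

@example
$ astcrop flat-ir/xdf-f105w.fits --mode=wcs --namecol=ID \
          --catalog=reddest.txt --width=15/3600,15/3600  \
          --suffix=-f105w.fits --output=crop-red
@end example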

To view the crops more easily (not having to open ds9 for each image), you can convert the FITS crops into the JPEG format with a shell loop like below.

@example
$ cd crop-red
$ for f in *.fits; do                                                  \
    astconvertt $f --fluxlow=-0.001 --fluxhigh=0.005 --invert -ojpg;   \
  done
$ cd ..
$ ls crop-red/
@end example

You can now use your favorite graphical image viewer to flip through the images more easily, or import them into your papers/reports.

@cindex GNU Parallel
The @code{for} loop above converts the images in series: each file is converted only after the previous one is complete.
If you have @url{https://www.gnu.org/s/parallel, GNU Parallel}, you can greatly speed up this conversion.
GNU Parallel will run the separate commands simultaneously on different CPU threads in parallel.
For more information on efficiently using your threads, see @ref{Multi-threaded operations}.
Here is a replacement for the shell @code{for} loop above using GNU Parallel.

@example
$ cd crop-red
$ parallel astconvertt --fluxlow=-0.001 --fluxhigh=0.005 --invert   \
           -ojpg ::: *.fits
$ cd ..
@end example

@noindent
Did you notice how much faster this one was? When possible, it is always helpful to run your analysis in parallel.
But many operations are not as simple as this one.
For such cases, you can use @url{https://en.wikipedia.org/wiki/Make_(software), Make}, which greatly helps in designing reproducible workflows.
A complete introduction to Make is beyond the scope of this tutorial, but a minimal sketch is shown below just to give a flavor.
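
In the sketch below (a GNU Make @file{Makefile}; the file and variable names are arbitrary), each JPEG depends on its FITS file, so Make only re-builds the out-of-date JPEGs, and with its @option{-j} option it runs independent recipes in parallel.

@example
$ cat Makefile
# Build a JPEG from every FITS file in this directory. Recipe
# lines must start with a TAB character: `$<' is the prerequisite
# (the FITS file) and `$@@' is the target (the JPEG file).
jpgs := $(patsubst %.fits,%.jpg,$(wildcard *.fits))
all: $(jpgs)
%.jpg: %.fits
	astconvertt $< --fluxlow=-0.001 --fluxhigh=0.005 --invert -o$@@
$ make -j8
@end example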

@cindex DS9
@cindex SAO DS9
As the final action, let's see how these objects are positioned over the dataset.
DS9 has a ``regions'' feature for exactly this purpose: you just have to convert your catalog into a ``region file'' that DS9 can load.
To do that, you can use AWK again as shown below.

@example
$ awk 'BEGIN@{print "# Region file format: DS9 version 4.1";      \
             print "global color=green width=2";                 \
             print "fk5";@}                                       \
       !/^#/@{printf "circle(%s,%s,1\") # text=@{%s@}\n",$2,$3,$1;@}'\
      reddest.txt > reddest.reg
@end example

This region file can be loaded into DS9 with its @option{-regions} option to display over any image (that has a world coordinate system).
In the example below, we'll open Segment's output and load the regions over all the extensions (to see the image and the respective clump):

@example
$ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit    \
      -regions load all reddest.reg
@end example

@node Writing scripts to automate the steps, Citing and acknowledging Gnuastro, Finding reddest clumps and visual inspection, General program usage tutorial
@subsection Writing scripts to automate the steps

In the previous sub-sections, we went through a series of steps like downloading the necessary datasets (in @ref{Setup and data download}), detecting the objects in the image, and finally selecting a particular subset of them to inspect visually (in @ref{Finding reddest clumps and visual inspection}).
To benefit most effectively from this subsection, please go through the previous sub-sections; if you haven't actually done them, we recommend doing/running them before continuing here.

@cindex @command{history}
@cindex Shell history
Each of the sub-sections above involved several commands on the command-line.
Therefore, if you want to reproduce the previous results (for example, to change only one part and see its effect), you'll have to go through all the sections above and read them again.
If you ran the commands recently, you may also have them in the history of your shell (command-line environment).
You can see many of your previous commands on the shell (even if you have closed the terminal) with the @command{history} command, like this:

@example
$ history
@end example

@cindex GNU Bash
Try it in your terminal to see for yourself.
By default in GNU Bash, it shows the last 500 commands.
You can also save this ``history'' of previous commands to a file using shell redirection (to keep it beyond your next 500 commands), with this command:

@example
$ history > my-previous-commands.txt
@end example

This is a good way to temporarily keep track of every single command you ran.
But in the middle of all the useful commands, you will have many extra commands, like tests that you did before/after the good output of a step (that you decided to continue working on), or an unrelated job you had to do in the middle of this project.
Because of these impurities, after a few days (when you have forgotten the context: tests you didn't end up using, or unrelated jobs), reading this full history will be very frustrating.

Keeping the final commands that were used in each step of an analysis is a common problem for anyone who is doing something serious with the computer.
But simply keeping the most important commands in a text file is not enough: the small steps in the middle (like making a directory to keep the outputs of one step) are also important.
In other words, the only way you can be sure that you are under control of your processing (and actually understand how you produced your final result) is to run the commands automatically.

@cindex Shell script
@cindex Script, shell
Fortunately, typing commands interactively with your fingers isn't the only way to operate the shell.
The shell can also take its orders/commands from a plain-text file, which is called a @emph{script}.
When given a script, the shell will read it line-by-line as if you had typed it manually.

@cindex Redirection in shell
@cindex Shell redirection
Let's continue with an example: try typing the commands below in your shell.
With these commands we are making a text file (@file{a.txt}) containing a simple @mymath{3\times3} matrix, converting it to a FITS image and computing its basic statistics.
After the first three commands, open @file{a.txt} with a text editor to actually see the values we wrote in it; after the fourth, open the FITS file to see the matrix as an image.
@file{a.txt} is created through the shell's redirection feature: `@code{>}' overwrites the existing contents of a file, and `@code{>>}' appends the new contents after the old contents.

@example
$ echo "1 1 1" > a.txt
$ echo "1 2 1" >> a.txt
$ echo "1 1 1" >> a.txt
$ astconvertt a.txt --output=a.fits
$ aststatistics a.fits
@end example

To automate this series of commands, you should put them in a text file.
But that text file must have two special features:
1) It should tell the shell what program should interpret the script.
2) The operating system should know that the file can be directly executed.

@cindex Shebang
@cindex Hashbang
For the first, Unix-like operating systems define the @emph{shebang} concept (also known as @emph{sha-bang} or @emph{hashbang}).
In the shebang convention, the first two characters of a file should be `@code{#!}'.
When a file starting with these two characters is executed, the operating system will run it with the program whose path follows them.
In this case, we want to write a shell script, and the most common shell program is GNU Bash, which is installed in @file{/bin/bash}.
So the first line of your script should be `@code{#!/bin/bash}'@footnote{
When the script is to be run by the same shell that is calling it (like this script), the shebang is optional.
But it is still recommended, because it ensures that even if the user isn't using GNU Bash, the script will be run in GNU Bash: given the differences between various shells, writing truly portable shell scripts that can be run by many shell programs/implementations isn't easy (sometimes not possible!).}.

It may happen (rarely) that GNU Bash is in another location on your system.
In other cases, you may prefer to use a non-standard version of Bash installed in another location (that has higher priority in your @code{PATH}, see @ref{Installation directory}).
In such cases, you can use the `@code{#!/usr/bin/env bash}' shebang instead.
Through the @code{env} program, this shebang will look in your @code{PATH} and use the first @command{bash} it finds to run your script.
But for simplicity in the rest of the tutorial, we'll continue with the `@code{#!/bin/bash}' shebang.

Using your favorite text editor, make a new empty file, let's call it @file{my-first-script.sh}.
Write the GNU Bash shebang (above) as its first line.
After the shebang, copy the series of commands we ran above.
Just note that the `@code{$}' sign at the start of every line above is the prompt of the interactive shell (you never actually typed it, remember?).
Therefore, commands in a shell script should not start with a `@code{$}'.
Once you add the commands, close the text editor and run the @command{cat} command to confirm its contents.
It should look like the example below.
Recall that you should only type the line that starts with a `@code{$}'; the lines without a `@code{$}' are printed automatically on the command-line (they are the contents of your script).

@example
$ cat my-first-script.sh
#!/bin/bash
echo "1 1 1" > a.txt
echo "1 2 1" >> a.txt
echo "1 1 1" >> a.txt
astconvertt a.txt --output=a.fits
aststatistics a.fits
@end example

@cindex File flags
@cindex Flags, file
The script contents are now ready, but to run it, you should activate the script file's @emph{executable flag}.
In Unix-like operating systems, every file has three types of flags: @emph{read} (or @code{r}), @emph{write} (or @code{w}) and @emph{execute} (or @code{x}).
To toggle a file's flags, you should use the @command{chmod} (for ``change mode'') command.
To activate a flag, you put a `@code{+}' before the flag character (for example @code{+x}).
To deactivate it, you put a `@code{-}' (for example @code{-x}).
In this case, you want to activate the script's executable flag, so you should run

@example
$ chmod +x my-first-script.sh
@end example
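
You can inspect a file's current flags with @command{ls -l}: run it before and after the @command{chmod} command above and compare the first column (after activating the executable flag, @code{x} characters will appear in it; the other details of the output will differ on your system).

@example
$ ls -l my-first-script.sh
@end example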

Your script is now ready to run/execute the series of commands.
To run it, you should call it while specifying its location in the file system.
Since you are currently in the same directory as the script, it is easiest to use relative addressing like below (where `@code{./}' means the current directory).
But before running your script, first delete the two @file{a.txt} and @file{a.fits} files that were created when you interactively ran the commands.

@example
$ rm a.txt a.fits
$ ls
$ ./my-first-script.sh
$ ls
@end example

@noindent
The script prints the statistics immediately, having silently done all the previous steps for you.
With the last @command{ls}, you see that it automatically re-built the @file{a.txt} and @file{a.fits} files; open them and have a look at their contents.
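
As an aside, the executable flag isn't the only way to run a script: you can always give the script as an argument to the shell program itself (a standard shell feature, mentioned here just as an alternative):

@example
$ bash ./my-first-script.sh
@end example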

An extremely useful feature of shell scripts is that the shell will ignore anything after a `@code{#}' character.
You can thus add descriptions/comments to the commands and make them much more useful for the future.
For example, after adding comments, your script might look like this:

@example
$ cat my-first-script.sh
#!/bin/bash

# This script is my first attempt at learning to write shell scripts.
# As a simple series of commands, I am just building a small FITS
# image, and calculating its basic statistics.

# Write the matrix into a file.
echo "1 1 1" > a.txt
echo "1 2 1" >> a.txt
echo "1 1 1" >> a.txt

# Convert the matrix to a FITS image.
astconvertt a.txt --output=a.fits

# Calculate the statistics of the FITS image.
aststatistics a.fits
@end example

@noindent
Isn't this much easier to read now?
Comments help to provide human-friendly context to the raw commands.
At the time you make a script, comments may seem like an extra effort and slow you down.
But in one year, you will forget almost everything about your script and you will appreciate the effort so much!
Think of the comments as an email to your future-self and always put a well-written description of the context/purpose (most importantly, things that aren't directly clear by reading the commands) in your scripts.

The example above was a very basic and mostly redundant series of commands, just to show the basic concepts behind scripts.
You can put any (arbitrarily long and complex) series of commands in a script by following the two rules: 1) add a shebang, and 2) enable the executable flag.
In fact, as you continue your own research projects, you will find that any time you are dealing with more than two or three commands, keeping them in a script (and modifying and re-running that script) is much easier and more future-proof than typing the commands directly on the command-line and relying on things like @command{history}.

As a more realistic example, let's have a look at a script that will do the steps of @ref{Setup and data download} and @ref{Dataset inspection and cropping}.
In particular, note how often we use variables to avoid repeating fixed strings of characters (usually file/directory names).
This greatly helps in scaling up your project, and avoiding hard-to-find bugs that are caused by typos in those fixed strings.

@example
$ cat gnuastro-tutorial-1.sh
#!/bin/bash


# Download the input datasets
# ---------------------------
#
# The default file names have this format (where `FILTER' differs for
# each filter):
#   hlsp_xdf_hst_wfc3ir-60mas_hudf_FILTER_v1_sci.fits
# To make the script easier to read, a prefix and suffix variable are
# used to sandwich the filter name into one short line.
downloaddir=download
xdfsuffix=_v1_sci.fits
xdfprefix=hlsp_xdf_hst_wfc3ir-60mas_hudf_
xdfurl=http://archive.stsci.edu/pub/hlsp/xdf

# The file name and full URLs of the input data.
f105w_in=$xdfprefix"f105w"$xdfsuffix
f160w_in=$xdfprefix"f160w"$xdfsuffix
f105w_full=$xdfurl/$f105w_in
f160w_full=$xdfurl/$f160w_in

# Go into the download directory and download the images there,
# then come back up to the top running directory.
mkdir $downloaddir
cd $downloaddir
wget $f105w_full
wget $f160w_full
cd ..


# Only work on the deep region
# ----------------------------
#
# To help in readability, each vertex of the deep/flat field is stored
# as a separate variable. They are then merged into one variable to
# define the polygon.
flatdir=flat-ir
vertice1="53.187414,-27.779152"
vertice2="53.159507,-27.759633"
vertice3="53.134517,-27.787144"
vertice4="53.161906,-27.807208"
f105w_flat=$flatdir/xdf-f105w.fits
f160w_flat=$flatdir/xdf-f160w.fits
deep_polygon="$vertice1:$vertice2:$vertice3:$vertice4"

mkdir $flatdir
astcrop --mode=wcs -h0 --output=$f105w_flat \
        --polygon=$deep_polygon $downloaddir/$f105w_in
astcrop --mode=wcs -h0 --output=$f160w_flat \
        --polygon=$deep_polygon $downloaddir/$f160w_in
@end example

The first thing you may notice is that even if you have already downloaded the input images, this script will always try to re-download them.
Also, if you re-run the script, @command{mkdir} will print an error message that the download directory already exists.
Therefore, the script above isn't too useful as it stands, and some modifications are necessary to make it more generally robust.
Here are some general tips that are often very useful when writing scripts:

@table @strong
@item Stop script if a command crashes
By default, if a command in a script crashes (aborts and fails to do what it was meant to do), the script will continue onto the next command.
In GNU Bash, you can tell the shell to stop a script in the case of a crash by adding this line at the start of your script:

@example
set -e
@end example

@item Check if a file/directory exists to avoid re-creating it
Conditionals are a very useful feature in scripts.
One common conditional is to check if a file exists or not.
Assuming the file's name is @file{FILENAME}, you can check its existence (to avoid re-doing the commands that build it) like this:
@example
if [ -f FILENAME ]; then
  echo "FILENAME exists"
else
  # Some commands to generate the file
  echo "done" > FILENAME
fi
@end example
To check the existence of a directory instead of a file, use @code{-d} instead of @code{-f}.
To negate a conditional, use `@code{!}'; also note that conditionals can be written in one line (useful when they are short).

One common scenario where you'll need to check the existence of a directory is when you are making it: the default @command{mkdir} command will crash if the desired directory already exists.
On some systems (including GNU/Linux distributions), @command{mkdir} has options to deal with such cases (for example @option{-p}). But if you want your script to be portable, it's best to do the check yourself, like below:

@example
if ! [ -d DIRNAME ]; then mkdir DIRNAME; fi
@end example
@end table

@noindent
Taking these tips into consideration, we can write a better version of the script above that includes checks at every step to avoid repeating steps/commands.
Please compare this script with the previous one carefully to spot the differences.
These are very important points that you will definitely encounter during your own research, and knowing them can greatly help your productivity, so pay close attention (even to the comments).

@example
$ cat gnuastro-tutorial-2.sh
#!/bin/bash
set -e


# Download the input datasets
# ---------------------------
#
# The default file names have this format (where `FILTER' differs for
# each filter):
#   hlsp_xdf_hst_wfc3ir-60mas_hudf_FILTER_v1_sci.fits
# To make the script easier to read, a prefix and suffix variable are
# used to sandwich the filter name into one short line.
downloaddir=download
xdfsuffix=_v1_sci.fits
xdfprefix=hlsp_xdf_hst_wfc3ir-60mas_hudf_
xdfurl=http://archive.stsci.edu/pub/hlsp/xdf

# The file name and full URLs of the input data.
f105w_in=$xdfprefix"f105w"$xdfsuffix
f160w_in=$xdfprefix"f160w"$xdfsuffix
f105w_full=$xdfurl/$f105w_in
f160w_full=$xdfurl/$f160w_in

# Go into the download directory and download the images there,
# then come back up to the top running directory.
if ! [ -d $downloaddir ]; then mkdir $downloaddir; fi
cd $downloaddir
if ! [ -f $f105w_in ]; then wget $f105w_full; fi
if ! [ -f $f160w_in ]; then wget $f160w_full; fi
cd ..


# Only work on the deep region
# ----------------------------
#
# To help in readability, each vertex of the deep/flat field is stored
# as a separate variable. They are then merged into one variable to
# define the polygon.
flatdir=flat-ir
vertice1="53.187414,-27.779152"
vertice2="53.159507,-27.759633"
vertice3="53.134517,-27.787144"
vertice4="53.161906,-27.807208"
f105w_flat=$flatdir/xdf-f105w.fits
f160w_flat=$flatdir/xdf-f160w.fits
deep_polygon="$vertice1:$vertice2:$vertice3:$vertice4"

if ! [ -d $flatdir ]; then mkdir $flatdir; fi
if ! [ -f $f105w_flat ]; then
    astcrop --mode=wcs -h0 --output=$f105w_flat \
            --polygon=$deep_polygon $downloaddir/$f105w_in
fi
if ! [ -f $f160w_flat ]; then
    astcrop --mode=wcs -h0 --output=$f160w_flat \
            --polygon=$deep_polygon $downloaddir/$f160w_in
fi
@end example

@node Citing and acknowledging Gnuastro,  , Writing scripts to automate the steps, General program usage tutorial
@subsection Citing and acknowledging Gnuastro
In conclusion, we hope this extended tutorial has been a good starting point to help in your exciting research.
If this book or any of the programs in Gnuastro have been useful for your research, please cite the respective papers, and acknowledge the funding agencies that made all of this possible.
Without citations, we won't be able to secure future funding to continue working on Gnuastro or improving it, so please take software citation seriously (for all the scientific software you use, not just Gnuastro).

To help you in this aspect as well, all Gnuastro programs have a @option{--cite} option to facilitate citation and acknowledgment.
Just note that different programs may require citing different or additional papers, so please try it on every program that you used, for example:

@example
$ astmkcatalog --cite
$ astnoisechisel --cite
@end example








@node Detecting large extended targets,  , General program usage tutorial, Tutorials
@section Detecting large extended targets

The outer wings of large and extended objects can sink into the noise very gradually and can have a large variety of shapes (for example due to tidal interactions).
Therefore separating the outer boundaries of the galaxies from the noise can be particularly tricky.
Besides causing an under-estimation of the target's total brightness, failure to detect such faint wings will also bias the noise measurements, thereby hampering the accuracy of any measurement on the dataset.
Therefore even if they don't constitute a significant fraction of the target's light, or aren't your primary target, these regions must not be ignored.
In this tutorial, we'll walk you through the strategy of detecting such targets using @ref{NoiseChisel}.

@cartouche
@noindent
@strong{Don't start with this tutorial:} If you haven't already completed @ref{General program usage tutorial}, we strongly recommend going through that tutorial before starting this one.
Basic features like access to this book on the command-line, the configuration files of Gnuastro's programs, benefiting from the modular nature of the programs, viewing multi-extension FITS files, or using NoiseChisel's outputs are discussed in more detail there.
@end cartouche

@cindex M51
@cindex NGC5195
@cindex SDSS, Sloan Digital Sky Survey
@cindex Sloan Digital Sky Survey, SDSS
We'll try to detect the faint tidal wings of the beautiful M51 group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
We'll use a dataset/image from the public @url{http://www.sdss.org/, Sloan Digital Sky Survey}, or SDSS.
Due to its more peculiar low surface brightness structure/features, we'll focus on the group's dwarf companion galaxy, NGC 5195.


@menu
* Downloading and validating input data::  How to get and check the input data.
* NoiseChisel optimization::    Detect the extended and diffuse wings.
* Achieved surface brightness level::  Calculate the outer surface brightness.
@end menu

@node Downloading and validating input data, NoiseChisel optimization, Detecting large extended targets, Detecting large extended targets
@subsection Downloading and validating input data

To get the image, you can use SDSS's @url{https://dr12.sdss.org/fields, Simple field search} tool.
As long as it is covered by the SDSS, you can find an image containing your desired target either by providing a standard name (if it has one), or its coordinates.
To access the dataset we will use here, write @code{NGC5195} in the ``Object Name'' field and press the ``Submit'' button.

@cartouche
@noindent
@strong{Type the example commands:} Try to type the example commands on your terminal and use the history feature of your command-line (by pressing the ``up'' button to retrieve previous commands).
Don't simply copy and paste the commands shown here.
This will help simulate future situations when you are processing your own datasets.
@end cartouche

@cindex GNU Wget
You can see the list of available filters under the color image.
For this demonstration, we'll use the r-band filter image.
By clicking on the ``r-band FITS'' link, you can download the image.
Alternatively, you can just run the following command to download it with GNU Wget@footnote{To make the command easier to view on screen or in a page, we have defined the top URL of the image as the @code{topurl} shell variable.
You can just replace the value of this variable with @code{$topurl} in the @command{wget} command.}.
To keep things clean, let's also put it in a directory called @file{ngc5195}.
With the @option{-O} option, we are asking Wget to save the downloaded file with a more manageable name: @file{r.fits.bz2} (the filter name is enough, since the directory name already reminds us that this is an image of NGC 5195).

@example
$ mkdir ngc5195
$ cd ngc5195
$ topurl=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
$ wget $topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -Or.fits.bz2
@end example

When you want to reproduce a previous result (a known analysis, on a known dataset, to get a known result: like the case here!) it is important to verify that the file is correct: that the input file hasn't changed (on the remote server, or in your own archive), or there was no downloading problem.
Otherwise, if the data have changed on your server/archive and you use the same script, you will get a different result, causing a lot of confusion!

@cindex Checksum
@cindex SHA-1 checksum
@cindex Verification, checksum
One good way to verify the contents of a file is to store its @emph{checksum} in your analysis script and check it before any other operation.
A checksum algorithm looks into the contents of a file and calculates a fixed-length string from them.
If any change (even in a single bit or byte) is made within the file, the resulting string will change; for more, see @url{https://en.wikipedia.org/wiki/Checksum, Wikipedia}.
There are many common algorithms, but a simple one is the @url{https://en.wikipedia.org/wiki/SHA-1, SHA-1 algorithm} (Secure Hash Algorithm 1), which you can calculate easily with the command below (the second line is the output, and the checksum is the first/long string: it is independent of the file name):

@example
$ sha1sum r.fits.bz2
5fb06a572c6107c72cbc5eb8a9329f536c7e7f65  r.fits.bz2
@end example

If the checksum on your computer is different from this, either the file has been incorrectly downloaded (most probable), or it has changed on SDSS servers (very unlikely@footnote{If your checksum is different, try uncompressing the file with the @command{bunzip2} command after this, and open the resulting FITS file.
If it opens and you see the image of M51 and NGC5195, then there was no download problem, and the file has indeed changed on the SDSS servers!
In this case, please contact us at @code{bug-gnuastro@@gnu.org}.}).
To get a better feeling of checksums, open your favorite text editor and make a test file by writing something in it.
Save it and calculate the text file's SHA-1 checksum with @command{sha1sum}.
Try renaming that file, and you'll see the checksum hasn't changed (checksums only look into the contents, not the name/location of the file).
Then open the file with your text editor again, make a change and re-calculate its checksum: you'll see the checksum string has changed.
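
For example, here is the same experiment done purely on the command-line (the file name and contents here are arbitrary):

@example
$ echo "A test line."  > test.txt
$ sha1sum test.txt
$ mv test.txt renamed.txt
$ sha1sum renamed.txt     # Same checksum: only the name changed.
$ echo "Another line." >> renamed.txt
$ sha1sum renamed.txt     # New checksum: the contents changed.
@end example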

It's always good to keep this short checksum string with your project's scripts and validate your input data before using them.
You can do this with a shell conditional like this:

@example
filename=r.fits.bz2
expected=5fb06a572c6107c72cbc5eb8a9329f536c7e7f65
sum=$(sha1sum $filename | awk '@{print $1@}')
if [ $sum = $expected ]; then
  echo "$filename: validated"
else
  echo "$filename: wrong checksum!"
  exit 1
fi
@end example

@cindex Bzip2
@noindent
Now that we know you have the same data that we wrote this tutorial with, let's continue.
The SDSS server keeps the files in the Bzip2 compressed format (with a @file{.bz2} suffix).
So we'll first decompress it with the following command to use it as a normal FITS file.
By convention, compression programs delete the original file (the compressed file when uncompressing, or the uncompressed file when compressing).
To keep the original file, you can use the @option{--keep} or @option{-k} option, which is available in most compression programs for this job.
Here, we don't need the compressed file any more, so we'll just let @command{bunzip2} delete it for us and keep the directory clean.

@example
$ bunzip2 r.fits.bz2
@end example
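
Had we wanted to keep the compressed file also, the command would have looked like this (shown only for reference; we don't need it in this tutorial):

@example
$ bunzip2 --keep r.fits.bz2
@end example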



@node NoiseChisel optimization, Achieved surface brightness level, Downloading and validating input data, Detecting large extended targets
@subsection NoiseChisel optimization
In @ref{Detecting large extended targets} we downloaded the single exposure SDSS image.
Let's see how NoiseChisel operates on it with its default parameters:

@example
$ astnoisechisel r.fits -h0
@end example

As described in @ref{NoiseChisel and Multiextension FITS files}, NoiseChisel's default output is a multi-extension FITS file.
Open the output @file{r_detected.fits} file and have a look at the extensions: the 0-th extension is only meta-data and contains NoiseChisel's configuration parameters.
The rest are the Sky-subtracted input, the detection map, Sky values and Sky standard deviation.

@example
$ ds9 -mecube r_detected.fits -zscale -zoom to fit
@end example

Flipping through the extensions in a FITS viewer, you will see that the first image (Sky-subtracted image) looks reasonable: there are no major artifacts due to bad Sky subtraction compared to the input.
The second extension also seems reasonable with a large detection map that covers the whole of NGC5195, but also extends towards the bottom of the image where we actually see faint and diffuse signal in the input image.

Now try flipping between the @code{DETECTIONS} and @code{SKY} extensions.
In the @code{SKY} extension, you'll notice that there is still significant signal beyond the detected pixels.
You can tell that this signal belongs to the galaxy because the far-right side of the image (away from M51) is dark (has lower values) and the brighter parts in the Sky image (with larger values) are just under the detections and follow a similar pattern.

The fact that signal from the galaxy remains in the @code{SKY} HDU shows that NoiseChisel can be optimized for a much better result.
The @code{SKY} extension must not contain any light around the galaxy.
Generally, any time your target is much larger than the tile size and the signal is very diffuse and extended at low signal-to-noise values (like this case), this @emph{will} happen.
Therefore, when there are large objects in the dataset, @strong{the best place} to check the accuracy of your detection is the estimated Sky image.

When dominated by the background, noise has a symmetric distribution.
However, signal is not symmetric (we don't have negative signal).
Therefore when non-constant@footnote{by constant, we mean that it has a single value in the region we are measuring.} signal is present in a noisy dataset, the distribution will be positively skewed.
For a demonstration, see Figure 1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
This skewness is a good measure of how much faint signal we have in the distribution.
The skewness can be accurately measured by the difference in the mean and median (assuming no strong outliers): the more distant they are, the more skewed the dataset is.
For more see @ref{Quantifying signal in a tile}.
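
You can get a first feeling for this on the image we are using here: with the command below, Statistics will print the mean and median of all the pixels.
Because the M51 group's diffuse signal covers many pixels of this image, the mean should be noticeably larger than the median.

@example
$ aststatistics r.fits -h0 --mean --median
@end example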

However, skewness is only a proxy for signal when the signal has structure (varies per pixel).
Therefore, when it is approximately constant over a whole tile, or sub-set of the image, the constant signal's effect is just to shift the symmetric center of the noise distribution to the positive, and there won't be any skewness (no major difference between the mean and median).
This positive@footnote{In processed images, where the Sky value can be over-estimated, this constant shift can be negative.} shift that preserves the symmetric distribution is the Sky value.
When there is a gradient over the dataset, different tiles will have different constant shifts/Sky-values, for example see Figure 11 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.

To make this very large diffuse/flat signal detectable, you will therefore need a larger tile to contain a larger change in the values within it (and improve number statistics, for less scatter when measuring the mean and median).
So let's play with the tessellation a little to see how it affects the result.
In Gnuastro, you can see the option values (@option{--tilesize} in this case) by adding the @option{-P} option to your last command.
Try running NoiseChisel with @option{-P} to see its default tile size.
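
For example (using @command{grep} to only print the line we are interested in):

@example
$ astnoisechisel r.fits -h0 -P | grep tilesize
@end example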

You can clearly see that the default tile size is indeed much smaller than this (huge) galaxy and its tidal features.
As a result, NoiseChisel was unable to identify the skewness within the tiles under the outer parts of M51 and NGC 5195, so the threshold has been over-estimated on those tiles.
To see which tiles were used for estimating the quantile threshold (no skewness was measured), you can use NoiseChisel's @option{--checkqthresh} option:

@example
$ astnoisechisel r.fits -h0 --checkqthresh
@end example

Did you see how NoiseChisel aborted after finding and applying the quantile thresholds?
When you call any of NoiseChisel's @option{--check*} options, by default, it will abort as soon as all the check steps have been written in the check file (a multi-extension FITS file).
This allows you to focus on the problem you wanted to check as soon as possible (you can disable this feature with the @option{--continueaftercheck} option).

To optimize the threshold-related settings for this image, let's play with this quantile threshold check image a little.
Don't forget that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}'' (Anscombe 1973, see @ref{Science and its tools}).
A good scientist must have a good understanding of her tools to make a meaningful analysis.
So don't hesitate in playing with the default configuration and reviewing the manual when you have a new dataset (from a new instrument) in front of you.
Robust data analysis is an art, therefore a good scientist must first be a good artist.
So let's open the check image as a multi-extension cube:

@example
$ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to fit
@end example

The first extension (called @code{CONVOLVED}) of @file{r_qthresh.fits} is the convolved input image on which the thresholds are defined (and later applied).
For more on the effect of convolution and thresholding, see Sections 3.1.1 and 3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
The second extension (@code{QTHRESH_ERODE}) has a blank/white value for all the pixels of any tile that was identified as having significant signal.
The other tiles have the measured threshold over them.
The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND}) are the other two quantile thresholds that are necessary in NoiseChisel's later steps.
Every step in this file is repeated on the three thresholds.

Play a little with the color bar of the @code{QTHRESH_ERODE} extension: you can clearly see how the non-blank tiles around NGC 5195 have a gradient.
As one line of attack against discarding too much signal below the threshold, NoiseChisel rejects outlier tiles.
Go forward by three extensions to @code{VALUE1_NO_OUTLIER} and you will see that many of the tiles over the galaxy have been removed in this step.
For more on the outlier rejection algorithm, see the latter half of @ref{Quantifying signal in a tile}.

Even though much of the galaxy's footprint has been rejected as outliers, there are still tiles with signal remaining:
play with the DS9 color-bar and you still see a gradient near the outer tidal feature of the galaxy.
Before trying to correct this, let's look at the other extensions of this check image.
We will use a @code{*} as a wild-card that can be 1, 2 or 3.
In the @code{THRESH*_INTERP} extensions, you see that all the blank tiles have been interpolated using their nearest neighbors (the relevant option here is @option{--interpnumngb}).
In the following @code{THRESH*_SMOOTH} extensions, you can see the tile values after smoothing (configured with @option{--smoothwidth} option).
Finally, in @code{QTHRESH-APPLIED}, you see the thresholded image: pixels with a value of 1 will be eroded later, but pixels with a value of 2 will pass the erosion step untouched.

Let's get back to the problem of optimizing the result.
You have two strategies for detecting the outskirts of the merging galaxies:
1) Increase the tile size to get more accurate measurements of skewness.
2) Strengthen the outlier rejection parameters to discard more of the tiles with signal.
Fortunately, this image has a sufficiently large region on its right side that the galaxy doesn't extend to.
So we can use the more robust first solution.
In situations where this doesn't happen (for example if the field of view in this image was shifted to the left to have more of M51 and less sky) you are limited to a combination of the two solutions or just to the second solution.

@cartouche
@noindent
@strong{Skipping convolution for faster tests:} The slowest step of NoiseChisel is the convolution of the input dataset.
Therefore when your dataset is large (unlike the one in this test), and you are not changing the input dataset or kernel in multiple runs (as in the tests of this tutorial), it is faster to do the convolution separately once (using @ref{Convolve}) and use NoiseChisel's @option{--convolved} option to directly feed the convolved image and avoid convolution.
For more on @option{--convolved}, see @ref{NoiseChisel input}.
@end cartouche

To better identify the skewness caused by the flat NGC 5195 and M51 tidal features on the tiles under it, we have to choose a larger tile size.
Let's try a tile size of 100 by 100 pixels and inspect the check image.

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --checkqthresh
$ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to fit
@end example

You can clearly see the effect of this increased tile size: the tiles are much larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that all the tiles are nicely grouped on the right side of the image (the farthest from M51, where we don't see a gradient in @code{QTHRESH_ERODE}).
Things look good now, so let's remove @option{--checkqthresh} and let NoiseChisel proceed with its detection.

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example

The detected pixels of the @code{DETECTIONS} extension have expanded a little, but not dramatically.
Also, the gradient in the @code{SKY} image is almost fully removed (and doesn't fall over M51 anymore).
However, on the bottom-right of the M51 detection, we see many holes gradually increasing in size.
This hints that there is still signal out there.
Let's check the next series of detection steps by adding the @option{--checkdetection} option this time:

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --checkdetection
$ ds9 -mecube r_detcheck.fits -zscale -cmap sls -zoom to fit
@end example

@cindex Erosion (image processing)
The output now has 16 extensions, showing every step that is taken by NoiseChisel.
The first and second (@code{INPUT} and @code{CONVOLVED}) are clear from their names.
The third (@code{THRESHOLDED}) is the thresholded image after finding the quantile threshold (last extension of the output of @code{--checkqthresh}).
The fourth HDU (@code{ERODED}) is new: it is the namesake of NoiseChisel, eroding pixels that are above the threshold.
By erosion, we mean that all pixels with a value of @code{1} (above the threshold) that are touching a pixel with a value of @code{0} (below the threshold) will be flipped to zero (or ``carved'' out)@footnote{Pixels with a value of @code{2} are very high signal-to-noise pixels; they are not eroded, to preserve sharp and bright sources.}.
You can see its effect directly by going back and forth between the @code{THRESHOLDED} and @code{ERODED} extensions.

@cindex Dilation (image processing)
In the fifth extension (@code{OPENED-AND-LABELED}) the image is ``opened'', which is a name for eroding once, then dilating (dilation is the inverse of erosion).
This is good to remove thin connections that are only due to noise.
Each separate connected group of pixels is also given its unique label here.
Do you see how just beyond the large M51 detection, there are many smaller detections that get smaller as you go more distant?
This hints at the solution: the default number of erosions is too much.
Let's see how many erosions take place by default (by adding @command{-P | grep erode} to the previous command):

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 -P | grep erode
@end example

@noindent
We see that the value of @code{erode} is @code{2}.
The default NoiseChisel parameters are primarily targeted to processed images (where there is correlated noise due to all the processing that has gone into the warping and stacking of raw images, see @ref{NoiseChisel optimization for detection}).
In those scenarios 2 erosions are commonly necessary.
But here, we have a single-exposure image where there is no correlated noise (the pixels aren't mixed).
So let's see how things change with only one erosion:

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
                 --checkdetection
$ ds9 -mecube r_detcheck.fits -zscale -cmap sls -zoom to fit
@end example

Looking at the @code{OPENED-AND-LABELED} extension again, we see that the main/large detection is now much larger than before.
While the immediately-outer connected regions are still present, they have decreased dramatically, so we can move on to the next step.

After the @code{OPENED-AND-LABELED} extension, NoiseChisel goes onto finding false detections using the undetected pixels.
The process is fully described in Section 3.1.5 (Defining and Removing False Detections) of arXiv:@url{https://arxiv.org/pdf/1505.01664.pdf,1505.01664}.
Please compare the extensions to what you read there and things will be very clear.
In the last HDU (@code{DETECTION-FINAL}), we have the final detected pixels that will be used to estimate the Sky and its Standard deviation.
We see that the main detection has indeed been detected very far out, so let's see how the full NoiseChisel will estimate the Sky and its standard deviation (by removing @code{--checkdetection}):

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example

The @code{DETECTIONS} extension of @file{r_detected.fits} closely follows the @code{DETECTION-FINAL} extension of the check image (looks good!).
If you go ahead to the @code{SKY} extension, things still look good.
But it can still be improved.

Look at the @code{DETECTIONS} again: you will see that the right-ward edges of M51's detected pixels have many ``holes'' that are fully surrounded by signal (value of @code{1}), and that the signal stretches out into the noise very thinly (the size of the holes increases as we go out).
This suggests that there is still undetected signal and that we can still dig deeper into the noise.

With the @option{--detgrowquant} option, NoiseChisel will ``grow'' the detections into the noise.
Its value is the ultimate limit of the growth in units of quantile (between 0 and 1).
Therefore @option{--detgrowquant=1} means no growth and @option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is usually too much and will cover the whole image!).
See Figure 2 of arXiv:@url{https://arxiv.org/pdf/1909.11230.pdf,1909.11230} for more on this option.
Try running the previous command with various values (from 0.6 to higher values) to see this option's effect on this dataset.
For this particularly huge galaxy (with signal that extends very gradually into the noise), we'll set it to @code{0.75}:

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
                 --detgrowquant=0.75
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example

Beyond this level (smaller @option{--detgrowquant} values), you will see many of the smaller background galaxies (towards the right side of the image) starting to create thin spider-leg-like features, showing that we are following correlated noise too far.
Please try it for yourself, for example by changing the value to @code{0.6}:
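
@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
                 --detgrowquant=0.6
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example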

When you look at the @code{DETECTIONS} extension produced with @option{--detgrowquant=0.75}, you see the wings of the galaxy detected much farther out, but you also see many holes that are clearly just caused by noise.
After growing the objects, NoiseChisel also allows you to fill such holes when they are smaller than a certain size through the @option{--detgrowmaxholesize} option.
In this case, a maximum area/size of 10,000 pixels seems to be good:

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
                 --detgrowquant=0.75 --detgrowmaxholesize=10000
$ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
@end example

When looking at the raw input image (which is very ``shallow'': less than a minute exposure!), you don't see anything so far out of the galaxy.
You might just think to yourself that ``this is all noise, I have just dug too deep and I'm following systematics''! If you feel like this, have a look at the deep images of this system in @url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour deep image of this system (with a 12-inch telescope): @url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from this Reddit discussion: @url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
In these deeper images you clearly see how the outer edges of the M51 group follow this exact structure. Below, in @ref{Achieved surface brightness level}, we'll measure the exact level.

As the gradient in the @code{SKY} extension shows, and the deep images cited above confirm, the galaxy's signal extends even beyond this.
But this is already far deeper than what most (if not all) other tools can detect.
Therefore, we'll stop configuring NoiseChisel at this point in the tutorial and let you play with the other options a little more, while reading more about it in the papers (@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}) and @ref{NoiseChisel}.
When you do find a better configuration, feel free to contact us with feedback.
Don't forget that good data analysis is an art, so like a sculptor, master your chisel for a good result.

To avoid typing all these options every time you run NoiseChisel on this image, you can use Gnuastro's configuration files, see @ref{Configuration files}.
For an applied example of setting/using them, see @ref{Option management and configuration files}.
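
For example, here is a sketch of one way to do it for this tutorial: the @file{.gnuastro/astnoisechisel.conf} file in the current directory is one of the locations that Gnuastro's programs check on startup (see @ref{Configuration files} for the exact rules and format).

@example
$ mkdir .gnuastro
$ cat > .gnuastro/astnoisechisel.conf <<EOF
# NoiseChisel settings for this single-exposure SDSS image.
tilesize           100,100
erode              1
detgrowquant       0.75
detgrowmaxholesize 10000
EOF
$ astnoisechisel r.fits -h0    # Same as the long command above.
@end example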

@cartouche
@noindent
@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use the configuration derived above on another instrument's image @emph{blindly}.
If you are unsure, just use the default values.
As you saw above, the reason we chose this particular configuration for NoiseChisel to detect the wings of the M51 group was strongly influenced by the noise properties of this particular image.
Remember @ref{NoiseChisel optimization for detection}, where we looked into the very deep XDF image which had strong correlated noise?

As long as your other images have similar noise properties (from the same data-reduction step of the same instrument), you can use your configuration on any of them.
But for images from other instruments, please follow a similar logic to what was presented in these tutorials to find the optimal configuration.
@end cartouche

@cartouche
@noindent
@strong{Smart NoiseChisel:} As you saw during this section, there is a clear logic behind the optimal parameter value for each dataset.
Therefore, we plan to add capabilities to (optionally) automate some of the choices made here based on the actual dataset; please join us in doing this if you are interested.
However, given the many problems in existing ``smart'' solutions, such automatic changes to the configuration may cause more problems than they solve.
So even when they are implemented, we would strongly recommend quality checks for a robust analysis.
@end cartouche

@node Achieved surface brightness level,  , NoiseChisel optimization, Detecting large extended targets
@subsection Achieved surface brightness level
In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel for a single-exposure SDSS image of the M51 group.
Let's measure how deep we carved the signal out of noise.
For this measurement, we'll need to estimate the average flux on the outer edges of the detection.
Fortunately all this can be done with a few simple commands (and no higher-level language mini-environments like Python or IRAF) using @ref{Arithmetic} and @ref{MakeCatalog}.

@cindex Opening
First, let's separate each detected region, or give a unique label/counter to all the connected pixels of NoiseChisel's detection map:

@example
$ det="r_detected.fits -hDETECTIONS"
$ astarithmetic $det 2 connected-components -olabeled.fits
@end example

You can find the label of the main galaxy visually (by opening the image and hovering your mouse over the M51 group's label).
But to have a little more fun, let's do this automatically (which is necessary in a general scenario).
The M51 group detection is by far the largest detection in this image; this allows us to find its ID/label easily.
We'll first run MakeCatalog to find the area of all the labels, then we'll use Table to find the ID of the largest object and keep it as a shell variable (@code{id}):

@example
# Run MakeCatalog to find the area of each label.
$ astmkcatalog labeled.fits --ids --geoarea -h1 -ocat.fits

## Sort the table by the area column.
$ asttable cat.fits --sort=AREA_FULL

## The largest object, is the last one, so we'll use '--tail'.
$ asttable cat.fits --sort=AREA_FULL --tail=1

## We only want the ID, so let's only ask for that column:
$ asttable cat.fits --sort=AREA_FULL --tail=1 --column=OBJ_ID

## Now, let's put this result in a variable (instead of printing)
$ id=$(asttable cat.fits --sort=AREA_FULL --tail=1 --column=OBJ_ID)

## Just to confirm everything is fine.
$ echo $id
@end example

To separate the outer edges of the detections, we'll need to ``erode'' the M51 group detection.
We'll erode three times (to have more pixels and thus less scatter), using a maximum connectivity of 2 (8-connected neighbors).
We'll then save the output in @file{eroded.fits}.

@example
$ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
                -oeroded.fits
@end example

@noindent
We can now set all the 1-valued pixels of @file{eroded.fits} to 0 in @file{labeled.fits}, by adding Arithmetic's @code{where} operator to the previous command.
We'll need the pixels of the M51 group in @file{labeled.fits} two times: once to do the erosion, and another time to find the outer pixel layer.
To do this efficiently (and more readably), we'll use the @code{set-i} operator.
In the command below, it will save/set/name the pixels of the M51 group as `@code{i}'.
In this way we can use them any number of times afterwards, while only reading the image from disk and finding M51's pixels once.

@example
$ astarithmetic labeled.fits $id eq set-i i \
                i 2 erode 2 erode 2 erode 0 where -oedge.fits
@end example

Open the image and have a look.
You'll see that the detected edge of the M51 group is now clearly visible.
You can use @file{edge.fits} to mark (set to blank) this boundary on the input image and get a visual feeling of how far it extends:

@example
$ astarithmetic r.fits edge.fits nan where -oedge-masked.fits -h0
@end example

To quantify how deep we have detected the low-surface-brightness regions (in units of signal-to-noise ratio), we'll use the command below.
In short, it divides the value of each non-zero pixel of @file{edge.fits} in the Sky-subtracted input (first extension of NoiseChisel's output) by the Sky standard deviation of that same pixel.
This will give us a signal-to-noise ratio image.
The mean value of this image shows the level of surface brightness that we have achieved.

You can also break the command below into multiple calls to Arithmetic and create temporary files to understand it better.
However, if you have a look at @ref{Reverse polish notation} and @ref{Arithmetic operators}, you should be able to easily understand what your computer does when you run this command@footnote{@file{edge.fits} (extension @code{1}) is a binary (0 or 1 valued) image.
Applying the @code{not} operator on it, just flips all its pixels.
Through the @code{where} operator, we are setting all the newly 1-valued pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY}) to NaN/blank.
In the second line, we are dividing all the non-blank values by @file{r_detected.fits} (extension @code{SKY_STD}).
This gives the signal-to-noise ratio for each of the pixels on the boundary.
Finally, with the @code{meanvalue} operator, we are taking the mean value of all the non-blank pixels and reporting that as a single number.}.

@example
$ edge="edge.fits -h1"
$ skystd="r_detected.fits -hSKY_STD"
$ skysub="r_detected.fits -hINPUT-NO-SKY"
$ astarithmetic $skysub $skystd / $edge not nan where \
                meanvalue --quiet
@end example

@cindex Surface brightness
We have thus detected the wings of the M51 group down to roughly 1/3rd of the noise level in this image! But the signal-to-noise ratio is a relative measurement.
Let's also measure the depth of our detection in absolute surface brightness units, or magnitudes per square arc-second (see @ref{Brightness flux magnitude}).
Fortunately Gnuastro's MakeCatalog does this operation easily.
SDSS image pixel values are calibrated in units of ``nanomaggy'', so the zero point magnitude is 22.5@footnote{From @url{https://www.sdss.org/dr12/algorithms/magnitudes}}.

@example
$ astmkcatalog edge.fits -h1 --valuesfile=r_detected.fits \
               --zeropoint=22.5 --ids --surfacebrightness
$ asttable edge_cat.fits
@end example

We have thus reached an outer surface brightness of @mymath{25.69} magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this single exposure SDSS image!
In interpreting this value, you should just keep in mind that NoiseChisel works based on the contiguity of signal in the pixels.
Therefore the larger the object (with a similarly diffuse emission), the deeper NoiseChisel can carve it out of the noise.
In other words, this reported depth, is the depth we have reached for this object in this dataset, processed with this particular NoiseChisel configuration.
If the M51 group in this image was larger/smaller than this (the field of view was smaller/larger), or if the image was from a different instrument, or if we had used a different configuration, we would go deeper/shallower.

To continue your analysis of such datasets with extended emission, you can use @ref{Segment} to identify all the ``clumps'' over the diffuse regions: background galaxies and foreground stars.

@example
$ astsegment r_detected.fits --output=r_segmented.fits
$ ds9 -mecube r_segmented.fits -zscale -cmap sls -zoom to fit
@end example

@cindex DS9
@cindex SAO DS9
Open the output @file{r_segmented.fits} as a multi-extension data cube like before and flip through the first and second extensions to see the detected clumps (all pixels with a value larger than 1).
To optimize the parameters and make sure you have detected what you wanted, we recommend visually inspecting the detected clumps on the input image.

For visual inspection, you can make a simple shell script like below.
It will first call MakeCatalog to estimate the positions of the clumps, then make an SAO ds9 region file and open ds9 with the image and region file.
Recall that in a shell script, the numeric variables (like @code{$1}, @code{$2}, and @code{$3} in the example below) represent the arguments given to the script.
But when used in the AWK arguments, they refer to column numbers.

To create the shell script, using your favorite text editor, put the contents below into a file called @file{check-clumps.sh}.
Recall that everything after a @code{#} is just comments to help you understand the command (so read them!).
Also note that if you are copying from the PDF version of this book, fix the single quotes in the AWK command.

@example
#! /bin/bash
set -e	   # Stop execution when there is an error.
set -u	   # Stop execution when a variable is not initialized.

# Run MakeCatalog to write the coordinates into a FITS table.
# Default output is `$1_cat.fits'.
astmkcatalog $1.fits --clumpscat --ids --ra --dec

# Use Gnuastro's Table program to read the RA and Dec columns of the
# clumps catalog (in the `CLUMPS' extension). Then pipe the columns
# to AWK for saving as a DS9 region file.
asttable $1"_cat.fits" -hCLUMPS -cRA,DEC                               \
         | awk 'BEGIN @{ print "# Region file format: DS9 version 4.1"; \
                        print "global color=green width=1";            \
                        print "fk5" @}                                  \
                @{ printf "circle(%s,%s,1\")\n", $1, $2 @}' > $1.reg

# Show the image (with the requested color scale) and the region file.
ds9 -geometry 1800x3000 -mecube $1.fits -zoom to fit                   \
    -scale limits $2 $3 -regions load all $1.reg

# Clean up (delete intermediate files).
rm $1"_cat.fits" $1.reg
@end example

@noindent
Finally, you just have to activate the script's executable flag with the command below.
This will enable you to directly/easily call the script as a command.

@example
$ chmod +x check-clumps.sh
@end example

@cindex AWK
@cindex GNU AWK
This script doesn't expect the @file{.fits} suffix of the input's filename as the first argument.
This is because the script produces intermediate files (a catalog and DS9 region file, which are later deleted).
However, we don't want multiple instances of the script (on different files in the same directory) to collide (read/write to the same intermediate files).
Therefore, we have used suffixes added to the input's name to identify the intermediate files.
Note how all the @code{$1} instances in the commands (not within the AWK command@footnote{In AWK, @code{$1} refers to the first column, while in the shell script, it refers to the first argument.}) are followed by a suffix.
If you want to keep the intermediate files, put a @code{#} at the start of the last line.

The few, but high-valued, bright pixels in the central parts of the galaxies can hinder easy visual inspection of the fainter parts of the image.
With the second and third arguments to this script, you can set the numerical values of the color map (the first of the two is the minimum/black value and the second is the maximum/white value).
You can call this script with any@footnote{Some modifications are necessary based on the input dataset: depending on the dynamic range, you have to adjust the second and third arguments.
But more importantly, depending on the dataset's world coordinate system, you have to change the size of the circle regions (the @code{1"} radius in the AWK command).
Otherwise the circle regions can be too small/large.} output of Segment (when @option{--rawoutput} is @emph{not} used) with a command like this:

@example
$ ./check-clumps.sh r_segmented -0.1 2
@end example

Go ahead and run this command.
You will see the intermediate processing being done and finally it opens SAO DS9 for you with the regions superimposed on all the extensions of Segment's output.
The script will only finish (and give you control of the command-line) when you close DS9.
If you need access to the command-line before closing DS9, append a @code{&} to the end of the command above.
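@noindent
In other words:

@example
$ ./check-clumps.sh r_segmented -0.1 2 &
@end example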

@cindex Purity
@cindex Completeness
While DS9 is open, slide the dynamic range (values for black and white, or minimum/maximum values in different color schemes) and zoom into various regions of the M51 group to see if you are satisfied with the detected clumps.
Don't forget that through the ``Cube'' window that is opened along with DS9, you can flip through the extensions and see the actual clumps also.
The questions you should be asking yourself are these: 1) Which real clumps (as you visually @emph{feel}) have been missed? In other words, is the @emph{completeness} good? 2) Are there any clumps which you @emph{feel} are false? In other words, is the @emph{purity} good?

Note that completeness and purity are not independent of each other: they are anti-correlated; the higher your purity, the lower your completeness, and vice versa.
You can see this by playing with the purity level using the @option{--snquant} option.
To see its default value, run Segment again as shown above, but with the @code{-P} option.
Then increase/decrease it for higher/lower purity and check the result as before.
You will see that if you want the best purity, you have to sacrifice completeness and vice versa.
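For example, you can check the default value, then re-run Segment with a stricter threshold (the @option{--snquant} value of 0.99 below is only illustrative):

@example
$ astsegment r_detected.fits -P | grep snquant   # Default value.
$ astsegment r_detected.fits --snquant=0.99 \
             --output=r_seg-pure.fits
@end example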

One interesting region to inspect in this image is the many bright peaks around the central parts of M51.
Zoom into that region and inspect how many of them have actually been detected as true clumps.
Do you have a good balance between completeness and purity? Also look out far into the wings of the group and inspect the completeness and purity there.

An easier way to inspect completeness (and only completeness) is to mask all the pixels detected as clumps and visually inspect the remaining pixels.
You can do this using Arithmetic in a command like below.
For easy reading of the command, we'll define shell variables for the input image and the clump labels, and save the output in @file{clumps-masked.fits}.

@example
$ in="r_segmented.fits -hINPUT"
$ clumps="r_segmented.fits -hCLUMPS"
$ astarithmetic $in $clumps 0 gt nan where -oclumps-masked.fits
@end example

Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks that have been missed, especially as you go farther away from the group center and into the diffuse wings.
This is due to the fact that with this configuration, we have focused more on the sharper clumps.
To put the focus more on diffuse clumps, you can use a wider convolution kernel.
Using a larger kernel can also help in detecting the existing clumps to fainter levels (thus better separating them from the surrounding diffuse signal).

You can make any kernel easily using the @option{--kernel} option in @ref{MakeProfiles}.
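For example, here is a sketch of building a slightly wider Gaussian kernel with MakeProfiles and feeding it to Segment (the FWHM of 3 pixels and truncation at 5 times the FWHM are only illustrative guesses, tune them for your dataset):

@example
$ astmkprof --kernel=gaussian,3,5 --oversample=1 \
            --output=kernel.fits
$ astsegment r_detected.fits --kernel=kernel.fits \
             --output=r_seg-wide.fits
@end example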
But note that a larger kernel is also going to wash out many of the sharp/small clumps close to the center of M51 and also some smaller peaks on the wings.
Please continue playing with Segment's configuration to obtain a more complete result (while keeping reasonable purity).
We'll finish the discussion on finding true clumps at this point.

The properties of the clumps within M51, or of the background objects, can then easily be measured using @ref{MakeCatalog}.
To measure the properties of the background objects (detected as clumps over the diffuse region), you shouldn't mask the diffuse region.
When measuring clump properties with @ref{MakeCatalog} and the @option{--clumpscat} option, the ambient flux (from the diffuse region) is calculated and subtracted.
If the diffuse region is masked, its effect on the clump brightness cannot be calculated and subtracted.
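For example, a minimal sketch of such a measurement (the columns here are just an illustrative subset; see @ref{MakeCatalog} for all the available measurements):

@example
$ astmkcatalog r_segmented.fits --clumpscat --ids --ra --dec \
               --magnitude --zeropoint=22.5
$ asttable r_segmented_cat.fits -hCLUMPS
@end example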

To keep this tutorial short, we'll stop here.
See @ref{Segmentation and making a catalog} and @ref{Segment} for more on using Segment, producing catalogs with MakeCatalog and using those catalogs.









@node Installation, Common program behavior, Tutorials, Top
@chapter Installation

@c This link is put here because the `Quick start' section of the first
@c chapter is not the most eye-catching part of the manual and some users
@c were seen to follow this ``Installation'' chapter title in search of the
@c tarball and fast instructions.
@cindex Installation
The latest released version of Gnuastro source code is always available at the following URL:

@url{http://ftpmirror.gnu.org/gnuastro/gnuastro-latest.tar.gz}

@noindent
@ref{Quick start} describes the commands necessary to configure, build, and install Gnuastro on your system.
This chapter will be useful in cases where the simple procedure above is not sufficient: for example, when your system lacks a mandatory or optional dependency (in other words, you can't pass the @command{$ ./configure} step), when you want greater customization, when you want to build and install Gnuastro from other points in its history, or when you want a higher level of control over the installation.
Thus if you were happy with downloading the tarball and following @ref{Quick start}, then you can safely ignore this chapter and come back to it in the future if you need more customization.

@ref{Dependencies} describes the mandatory, optional and bootstrapping dependencies of Gnuastro.
Only the first group is mandatory when you are building Gnuastro from a tarball (see @ref{Release tarball}).
They are very basic and low-level tools used in most astronomical software, so you might already have them installed; if not, they are very easy to install as described for each.
@ref{Downloading the source} discusses the two ways you can obtain the source code: as a tarball (a significant snapshot in Gnuastro's history), or the full history@footnote{@ref{Bootstrapping dependencies} are required if you clone the full history.}.
The latter allows you to build Gnuastro at any random point in its history (for example to get bug fixes or new features that are not released as a tarball yet).

The building and installation of Gnuastro is heavily customizable; to learn more, see @ref{Build and install}.
This section is essentially a thorough explanation of the steps in @ref{Quick start}.
It discusses ways you can influence the building and installation.
If you encounter any problems in the installation process, it is probably already explained in @ref{Known issues}.
@ref{Other useful software} discusses the installation and usage of some other free software that is not directly required by Gnuastro, but may be useful in conjunction with it.


@menu
* Dependencies::                Necessary packages for Gnuastro.
* Downloading the source::      Ways to download the source code.
* Build and install::           Configure, build and install Gnuastro.
@end menu




@node Dependencies, Downloading the source, Installation, Installation
@section Dependencies

A minimal set of dependencies is mandatory for building Gnuastro from the standard tarball release.
If they are not present, you cannot pass Gnuastro's configuration step.
The mandatory dependencies are therefore very basic (low-level) tools which are easy to obtain, build and install; see @ref{Mandatory dependencies} for a full discussion.

If you have the packages of @ref{Optional dependencies}, Gnuastro will have additional functionality (for example converting FITS images to JPEG or PDF).
If you are installing from a tarball as explained in @ref{Quick start}, you can stop reading after this section.
If you are cloning the version controlled source (see @ref{Version controlled source}), an additional bootstrapping step is required before configuration; its dependencies are explained in @ref{Bootstrapping dependencies}.

Your operating system's package manager is an easy and convenient way to download and install the dependencies that are already pre-built for your operating system.
In @ref{Dependencies from package managers}, we'll list some common operating system package manager commands to install the optional and mandatory dependencies.

@menu
* Mandatory dependencies::      Gnuastro will not install without these.
* Optional dependencies::       Adding more functionality.
* Bootstrapping dependencies::  If you have the version controlled source.
* Dependencies from package managers::  Installing from OS package managers.
@end menu

@node Mandatory dependencies, Optional dependencies, Dependencies, Dependencies
@subsection Mandatory dependencies

@cindex Dependencies, Gnuastro
@cindex GNU build system
The mandatory Gnuastro dependencies are very basic and low-level tools.
They all follow the same basic GNU-based build system (like that shown in @ref{Quick start}), so even if you don't have them, installing them should be pretty straightforward.
In this section we explain each program and any specific note that might be necessary in the installation.


@menu
* GNU Scientific Library::      Installing GSL.
* CFITSIO::                     C interface to the FITS standard.
* WCSLIB::                      C interface to the WCS standard of FITS.
@end menu

@node GNU Scientific Library, CFITSIO, Mandatory dependencies, Mandatory dependencies
@subsubsection GNU Scientific library

@cindex GNU Scientific Library
The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is a large collection of functions that are very useful in scientific applications, for example integration, random number generation, and Fast Fourier Transform among many others.
To install GSL from source, you can run the following commands after you have downloaded @url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz, @file{gsl-latest.tar.gz}}:

@example
$ tar xf gsl-latest.tar.gz
$ cd gsl-X.X                     # Replace X.X with version number.
$ ./configure
$ make -j8                       # Replace 8 with no. CPU threads.
$ make check
$ sudo make install
@end example

@node CFITSIO, WCSLIB, GNU Scientific Library, Mandatory dependencies
@subsubsection CFITSIO

@cindex CFITSIO
@cindex FITS standard
@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can get to the pixels in a FITS image while remaining faithful to the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}.
It is written by William Pence, the principal author of the FITS standard@footnote{Pence, W.D. et al. Definition of the Flexible Image Transport System (FITS), version 3.0. (2010) Astronomy and Astrophysics, Volume 524, id.A42, 40 pp.}, and is regularly updated.
It thereby sets the definitions for all other software packages that use FITS images.

@vindex --enable-reentrant
@cindex Reentrancy, multiple file opening
@cindex Multiple file opening, reentrancy
Some GNU/Linux distributions have CFITSIO in their package managers; if it is available and updated, you can use it.
One problem that might occur is that the distribution may not have configured CFITSIO with the @option{--enable-reentrant} option.
This option allows CFITSIO to open a file in multiple threads, and can thus provide great speed improvements.
If CFITSIO was not configured with this option, any program which needs this capability will warn you and abort when you ask for multiple threads (see @ref{Multi-threaded operations}).

To install CFITSIO from source, we strongly recommend that you have a look through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and understand the options you can pass to @command{$ ./configure} (there aren't too many).
This is a very basic package for most astronomical software and it is best that you configure it nicely for your system.
Once you download the source and unpack it, the configuration shown in the commands below should be enough for most purposes.
Don't forget to read chapter two of the manual though; for example, the second option is only for 64-bit systems.
The manual also explains how to check if it has been installed correctly.

CFITSIO comes with two executable files called @command{fpack} and @command{funpack}.
From their manual: they ``are standalone programs for compressing and uncompressing images and tables that are stored in the FITS (Flexible Image Transport System) data format.
They are analogous to the gzip and gunzip compression programs except that they are optimized for the types of astronomical images that are often stored in FITS format''.
The commands below will compile and install them on your system along with CFITSIO.
They are not essential for Gnuastro, since they are just wrappers for functions within CFITSIO, but they can come in handy.
The @command{make utils} command is only available for CFITSIO versions above 3.39; it will build these executable files along with several other executable test files, which are deleted in the commands below before the installation (otherwise the test files would also be installed).

The commands necessary to decompress, build and install CFITSIO from source are described below.
Let's assume you have downloaded @url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz, @file{cfitsio_latest.tar.gz}} and are in the same directory:

@example
$ tar xf cfitsio_latest.tar.gz
$ cd cfitsio-X.XX                   # Replace X.XX with version
$ ./configure --prefix=/usr/local --enable-sse2 --enable-reentrant
$ make
$ make utils
$ ./testprog > testprog.lis
$ diff testprog.lis testprog.out    # Should have no output
$ cmp testprog.fit testprog.std     # Should have no output
$ rm cookbook fitscopy imcopy smem speed testprog
$ sudo make install
@end example





@node WCSLIB,  , CFITSIO, Mandatory dependencies
@subsubsection WCSLIB

@cindex WCS
@cindex WCSLIB
@cindex World Coordinate System
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and maintained by one of the authors of the World Coordinate System (WCS) definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of world coordinates in FITS.
Astronomy and Astrophysics, 395, 1061-1075.}, Mark Calabretta.
It may already be built and ready in your distribution's package management system.
However, here the installation from source is explained; for the advantages of installing from source, please see @ref{Mandatory dependencies}.
To install WCSLIB you will need to have CFITSIO already installed, see @ref{CFITSIO}.

@vindex --without-pgplot
WCSLIB also has plotting capabilities which use PGPLOT (a plotting library for C).
If you want to use those capabilities in WCSLIB, @ref{PGPLOT} provides the PGPLOT installation instructions.
However, PGPLOT is old@footnote{As of early June 2016, its most recent version was uploaded in February 2001.}, so its installation is not easy; there are also many great modern WCS plotting tools (mostly written in Python).
Hence, if you will not be using those plotting functions in WCSLIB, you can configure it with the @option{--without-pgplot} option as shown below.

If you have the cURL library@footnote{@url{https://curl.haxx.se}} on your system and you installed CFITSIO version 3.42 or later, you will also need to link with the cURL library at configure time (through the @code{-lcurl} option as shown below).
CFITSIO uses the cURL library for its HTTPS (or HTTP Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}}) support and if it is present on your system, CFITSIO will depend on it.
Therefore, if the @command{./configure} command below fails (because you don't have the cURL library), remove this option and re-run it.

Let's assume you have downloaded @url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2, @file{wcslib.tar.bz2}} and are in the same directory; to configure, build, check and install WCSLIB, follow the steps below.
@example
$ tar xf wcslib.tar.bz2

## In the `cd' command, replace `X.X' with version number.
$ cd wcslib-X.X

## If `./configure' fails, remove `-lcurl' and run again.
$ ./configure LIBS="-pthread -lcurl -lm" --without-pgplot     \
              --disable-fortran
$ make
$ make check
$ sudo make install
@end example



@node Optional dependencies, Bootstrapping dependencies, Mandatory dependencies, Dependencies
@subsection Optional dependencies

The libraries listed here are only used for very specific applications, therefore they are optional and Gnuastro can be built without them (with only those specific features disabled).
Since these are pretty low-level tools, they are not too hard to install from source, but you can also use your operating system's package manager to easily install all of them.
For more, see @ref{Dependencies from package managers}.

@cindex GPL Ghostscript
If the @command{./configure} script can't find any of these optional dependencies, it will notify you of the operation(s) you can't do due to not having them.
If you continue the build and request an operation that uses a missing library, Gnuastro's programs will warn that the optional library was missing at build-time and abort.
Since Gnuastro was built without that library, installing the library afterwards won't help.
The only way is to re-build Gnuastro from scratch (after the library has been installed).
However, for program dependencies (like cURL or Ghostscript) things are easier: you can also install them after building Gnuastro.
This is because libraries are used to build the internal structure of Gnuastro's executables.
However, a program dependency is called by Gnuastro's programs at run-time and has no effect on their internal structure.
So if a dependency program becomes available later, it will be used next time it is requested.

@table @asis

@item GNU Libtool
@cindex GNU Libtool
Libtool is a program that simplifies the management of the libraries used to build an executable (a program).
GNU Libtool has some added functionality compared to other implementations.
If GNU Libtool isn't present on your system at configuration time, a warning will be printed and @ref{BuildProgram} won't be built or installed.
The configure script will look into your search path (@code{PATH}) for GNU Libtool through the following executable names: @command{libtool} (acceptable only if it is the GNU implementation) or @command{glibtool}.
See @ref{Installation directory} for more on @code{PATH}.
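For example, to check whether the @command{libtool} in your @code{PATH} is the GNU implementation, you can look at its version output; GNU Libtool identifies itself as shown in this sketch (the version number here is just an example):

@example
$ libtool --version
libtool (GNU libtool) 2.4.6
...
@end example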

GNU Libtool (the binary/executable file) is a low-level program that is probably already present on your system, and if not, is available in your operating system's package manager@footnote{Note that we want the binary/executable Libtool program which can be run on the command-line.
In Debian-based operating systems, which separate various parts of a package, you want @code{libtool-bin}; the @code{libtool} package won't contain the executable program.}.
If you want to install GNU Libtool's latest version from source, please visit its @url{https://www.gnu.org/software/libtool/, web page}.

Gnuastro's tarball is shipped with an internal implementation of GNU Libtool.
Even if you have GNU Libtool, Gnuastro's internal implementation is used for the building and installation of Gnuastro.
As a result, you can still build, install and use Gnuastro even if you don't have GNU Libtool installed on your system.
However, this internal Libtool does not get installed.
Therefore, after Gnuastro's installation, if you want to use @ref{BuildProgram} to compile and link your own C source code which uses the @ref{Gnuastro library}, you need to have GNU Libtool available on your system (independent of Gnuastro).
See @ref{Review of library fundamentals} to learn more about libraries.

@item libgit2
@cindex Git
@pindex libgit2
@cindex Version control systems
Git is one of the most common version control systems (see @ref{Version controlled source}).
When @file{libgit2} is present, and Gnuastro's programs are run within a version controlled directory, outputs will contain the version number of the working directory's repository for future reproducibility.
See the @command{COMMIT} keyword header in @ref{Output FITS files} for a discussion.
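For example, after running a Gnuastro program within a version controlled directory, you could confirm the keyword's presence with a sketch like this (@file{out.fits} is a hypothetical output name):

@example
$ astfits out.fits -h1 | grep COMMIT
@end example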

@item libjpeg
@pindex libjpeg
@cindex JPEG format
libjpeg is only used by ConvertType to read from and write to JPEG images, see @ref{Recognized file formats}.
@url{http://www.ijg.org/, libjpeg} is a very basic library that provides tools to read and write JPEG images, most Unix-like graphic programs and libraries use it.
Therefore you most probably already have it installed.
@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative to libjpeg.
It uses Single Instruction Multiple Data (SIMD) instructions on ARM-based systems that significantly decrease the processing time of JPEG compression and decompression algorithms.

@item libtiff
@pindex libtiff
@cindex TIFF format
libtiff is used by ConvertType and the libraries to read TIFF images, see @ref{Recognized file formats}.
@url{http://www.simplesystems.org/libtiff/, libtiff} is a very basic library that provides tools to read and write TIFF images, most Unix-like operating system graphic programs and libraries use it.
Therefore even if you don't have it installed, it must be easily available in your package manager.

@item cURL
@cindex cURL (downloading tool)
cURL's executable (@command{curl}) is called by @ref{Query} for submitting queries to remote datasets and retrieving the results.
It isn't necessary for the build of Gnuastro from source (only a warning will be printed if it can't be found at configure time), so if you don't have it at build-time there is no problem.
Just be sure to have it when you run @command{astquery}, otherwise you'll get an error about not finding @command{curl}.
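For example, here is a rough sketch of a Query command (the dataset name, coordinates and search radius are only illustrative assumptions; see @ref{Query} for the actual interface):

@example
$ astquery gaia --dataset=edr3 --center=202.47,47.20 \
           --radius=0.05 --output=gaia.fits
@end example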

@item GPL Ghostscript
@cindex GPL Ghostscript
GPL Ghostscript's executable (@command{gs}) is called by ConvertType to compile a PDF file from a source PostScript file, see @ref{ConvertType}.
Therefore its headers (and libraries) are not needed.

@end table




@node Bootstrapping dependencies, Dependencies from package managers, Optional dependencies, Dependencies
@subsection Bootstrapping dependencies

Bootstrapping is only necessary if you have decided to obtain the full version controlled history of Gnuastro, see @ref{Version controlled source} and @ref{Bootstrapping}.
Using the version controlled source enables you to always be up to date with the most recent development work of Gnuastro (bug fixes, new functionalities, improved algorithms, etc).
If you have downloaded a tarball (see @ref{Downloading the source}), then you can ignore this subsection.

To successfully run the bootstrapping process, there are some additional dependencies to those discussed in the previous subsections.
These are low-level tools that are used by a large collection of programs on Unix-like operating systems; therefore, they are most probably already available on your system.
If they are not already installed, you should be able to easily find them in any GNU/Linux distribution package management system (@command{apt-get}, @command{yum}, @command{pacman}, etc).
The short names in parentheses in @command{typewriter} font after the package name can be used to search for them in your package manager.
For the GNU Portability Library, GNU Autoconf Archive and @TeX{} Live, it is recommended to use the instructions here, not your operating system's package manager.

@table @asis

@item GNU Portability Library (Gnulib)
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
To ensure portability for a wider range of operating systems (those that don't include GNU C library, namely glibc), Gnuastro depends on the GNU portability library, or Gnulib.
Gnulib keeps a copy of all the functions in glibc, implemented (as much as possible) to be portable to other operating systems.
The @file{bootstrap} script can automatically clone Gnulib (as a @file{gnulib/} directory inside Gnuastro), however, as described in @ref{Bootstrapping} this is not recommended.

The recommended way to bootstrap Gnuastro is to first clone Gnulib and the Autoconf archives (see below) into a local directory outside of Gnuastro.
Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or Autoconf archives, @file{DEVDIR} can be a directory that you don't backup.
In this way the large number of files in these projects won't slow down your backup process or take bandwidth (if you backup to a remote server).}  (which you can set to any directory).
Currently in Gnuastro, both Gnulib and Autoconf archives have to be cloned in the same top directory@footnote{If you already have the Autoconf archives in a separate directory, or can't clone it in the same directory as Gnulib, or you have it with another directory name (not @file{autoconf-archive/}), you can follow this short step.
Set @file{AUTOCONFARCHIVES} to your desired address.
Then define a symbolic link in @file{DEVDIR} with the following command so Gnuastro's bootstrap script can find it:@*@command{$ ln -s $AUTOCONFARCHIVES $DEVDIR/autoconf-archive}.} like the case here@footnote{If your internet connection is active, but Git complains about the network, it might be due to your network setup not recognizing the git protocol.
In that case use the following URL for the HTTP protocol instead (for Autoconf archives, replace the name): @command{http://git.sv.gnu.org/r/gnulib.git}}:

@example
$ DEVDIR=/home/yourname/Development
$ cd $DEVDIR
$ git clone git://git.sv.gnu.org/gnulib.git
$ git clone git://git.sv.gnu.org/autoconf-archive.git
@end example

@noindent
You now have the full version controlled source of these two repositories in separate directories.
Both these packages are regularly updated, so every once in a while, you can run @command{$ git pull} within them to get any possible updates.
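@noindent
For example:

@example
$ cd $DEVDIR/gnulib           && git pull
$ cd $DEVDIR/autoconf-archive && git pull
@end example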

@item GNU Automake (@command{automake})
@cindex GNU Automake
GNU Automake will build the @file{Makefile.in} files in each sub-directory using the (hand-written) @file{Makefile.am} files.
The @file{Makefile.in}s are subsequently used to generate the @file{Makefile}s when the user runs @command{./configure} before building.

@item GNU Autoconf (@command{autoconf})
@cindex GNU Autoconf
GNU Autoconf will build the @file{configure} script using the configurations we have defined (hand-written) in @file{configure.ac}.

@item GNU Autoconf Archive
@cindex GNU Autoconf Archive
These are a large collection of tests that can be called to run at @command{./configure} time.
See the explanation under GNU Portability Library above for instructions on obtaining it and keeping it up to date.

@item GNU Libtool (@command{libtool})
@cindex GNU Libtool
GNU Libtool is in charge of building all the libraries in Gnuastro.
The libraries contain functions that are used by more than one program and are installed for use in other programs.
They are thus put in a separate directory (@file{lib/}).

@item GNU help2man (@command{help2man})
@cindex GNU help2man
GNU help2man is used to convert the output of the @option{--help} option
(@ref{--help}) to the traditional Man page (@ref{Man pages}).

@item @LaTeX{} and some @TeX{} packages
@cindex @LaTeX{}
@cindex @TeX{} Live
Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ package).
For easy maintenance, the @LaTeX{} source of those figures is version controlled, not the actual figures.
So the @file{./bootstrap} script will run @LaTeX{} to build the figures.
The best way to install @LaTeX{} and all the necessary packages is through @url{https://www.tug.org/texlive/, @TeX{} live} which is a package manager for @TeX{} related tools that is independent of any operating system.
It is thus preferred to the @TeX{} Live versions distributed by your operating system.

To install @TeX{} Live, go to the web page and download the appropriate installer by following the ``download'' link.
Note that by default the full package repository will be downloaded and installed (around 4 gigabytes), which can take @emph{very} long to download and to update later.
However, most packages are not needed by everyone; it is easier, faster and better to install only the ``Basic scheme'' (consisting of only the most basic @TeX{} and @LaTeX{} packages, which is less than 200 megabytes)@footnote{You can also download the DVD ISO file at a later time to keep as a backup for when you don't have an internet connection and need a package.}.

After the installation, be sure to set the environment variables as suggested in the end of the outputs.
Any time you need a package you don't have, simply install it with a command like below (similar to how you install software from your operating system's package manager)@footnote{After running @TeX{}, or @LaTeX{}, you might get a warning complaining about a @file{missingfile}.
Run `@command{tlmgr info missingfile}' to see the package(s) containing that file which you can install.}.
To install all the necessary @TeX{} packages for a successful Gnuastro bootstrap, run this command:

@example
$ su
# tlmgr install epsf jknapltx caption biblatex biber iftex \
                etoolbox logreq xstring xkeyval pgf ms     \
                xcolor pgfplots times rsfs ps2eps epspdf
@end example

@item ImageMagick (@command{imagemagick})
@cindex ImageMagick
ImageMagick is a wonderful and robust program for image manipulation on the command-line.
@file{bootstrap} uses it to convert the book images into the formats necessary for the various book formats.

@end table



@node Dependencies from package managers,  , Bootstrapping dependencies, Dependencies
@subsection Dependencies from package managers

@cindex Package managers
@cindex Source code building
@cindex Building from source
@cindex Compiling from source
@cindex Source code compilation
@cindex Distributions, GNU/Linux
The most basic way to install a package on your system is to build the packages from source yourself.
Alternatively, you can use your operating system's package manager to download pre-compiled files and install them.
The latter choice is easier and faster.
However, we recommend that you build the @ref{Mandatory dependencies} yourself from source (all necessary commands and links are given in the respective section).
Here are some basic reasons behind this recommendation.

@enumerate

@item
Your operating system's pre-built software might not be the most recent release.
For example, Gnuastro itself is also packaged in some package managers.
For the list see: @url{https://repology.org/project/gnuastro/versions}.
You will notice that Gnuastro's version in some operating systems is more than 10 versions old!
It is the same for all the dependencies of Gnuastro.

@item
For each package, Gnuastro might perform better with (or require) certain configuration options that your distribution's package managers didn't add for you.
If present, these configuration options are explained during the installation of each in the sections below (for example in @ref{CFITSIO}).
When the proper configuration has not been set, the programs should complain and inform you.

@item
For the libraries, package managers might separate the binary file from the header files, which can cause confusion; see @ref{Known issues}.

@item
Like any other tool, the science you derive from Gnuastro's tools highly depends on these lower-level dependencies, so generally it is much better to have a close connection with them.
By reading their manuals, installing them and staying up to date with changes/bugs in them, your scientific results and understanding (of what is going on, and thus how you interpret your scientific results) will also correspondingly improve.
@end enumerate

Based on your package manager, you can use any of the following commands to install the mandatory and optional dependencies.
If your package manager isn't included in the list below, please send us the respective command, so we can add it.

As discussed above, we recommend installing the @emph{mandatory} dependencies manually from source (see @ref{Mandatory dependencies}).
Therefore, in each command below, first the optional dependencies are given.
The mandatory dependencies are included after an empty line.
If you would also like to install the mandatory dependencies with your package manager, just ignore the empty line.

For better archivability and compression ratios, Gnuastro's recommended tarball compression format is with the @url{http://lzip.nongnu.org/lzip.html, Lzip} program, see @ref{Release tarball}.
Therefore, the package manager commands below also contain Lzip.

@table @asis
@item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, etc)
@cindex Debian
@cindex Ubuntu
@cindex Linux Mint
@cindex @command{apt-get}
@cindex Advanced Packaging Tool (APT, Debian)
@url{https://en.wikipedia.org/wiki/Debian,Debian} is one of the oldest
GNU/Linux
distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
It thus has a very extended user community and a robust internal structure and standards.
All of it is free software and based on the work of volunteers around the world.
Many distributions are thus derived from it, for example Ubuntu and Linux Mint.
This arguably makes Debian-based OSs the largest, and most used, class of GNU/Linux distributions.
All of them use Debian's Advanced Packaging Tool (APT, for example @command{apt-get}) for managing packages.
@example
$ sudo apt-get install ghostscript libtool-bin libjpeg-dev  \
                       libtiff-dev libgit2-dev curl lzip    \
                                                            \
                       libgsl0-dev libcfitsio-dev wcslib-dev
@end example

@noindent
Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in Debian (and thus some of its derivative operating systems).
Just make sure it is the most recent version.

@item @command{dnf}
@itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific Linux, etc)
@cindex RHEL
@cindex Fedora
@cindex CentOS
@cindex Red Hat
@cindex @command{dnf}
@cindex @command{yum}
@cindex Scientific Linux
@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL) is released by Red Hat Inc.
RHEL requires paid subscriptions for use of its binaries and support.
But since it is free software, many other teams use its code to spin off their own distributions based on RHEL.
Red Hat-based GNU/Linux distributions initially used the ``Yellowdog Updater, Modified'' (YUM) package manager, which has been replaced by ``Dandified yum'' (DNF).
If the latter isn't available on your system, you can use @command{yum} instead of @command{dnf} in the command below.
@example
$ sudo dnf install ghostscript libtool libjpeg-devel     \
                   libtiff-devel libgit2-devel lzip curl \
                                                         \
                   gsl-devel cfitsio-devel wcslib-devel
@end example

@item @command{brew} (macOS)
@cindex macOS
@cindex Homebrew
@cindex MacPorts
@cindex @command{brew}
@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system used on Apple devices.
macOS does not come with a package manager pre-installed, but several widely used, third-party package managers exist, such as Homebrew or MacPorts.
Both are free software.
Currently we have only tested Gnuastro's installation with Homebrew as described below.

If not already installed, first obtain Homebrew by following the instructions at @url{https://brew.sh}.
Homebrew manages packages in different `taps'.
To install WCSLIB (discussed in @ref{Mandatory dependencies}) via Homebrew you will need to @command{tap} into @command{brewsci/science} first (the tap may change in the future, but can be found by calling @command{brew search wcslib}).
@example
$ brew install ghostscript libtool libjpeg libtiff \
               libgit2 curl lzip                   \
                                                   \
               gsl cfitsio
$ brew tap brewsci/science
$ brew install wcslib
@end example

@item @command{pacman} (Arch Linux)
@cindex Arch Linux
@cindex @command{pacman}
@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller GNU/Linux distribution, which follows the KISS principle (``keep it simple, stupid'') as a general guideline.
It ``focuses on elegance, code correctness, minimalism and simplicity, and expects the user to be willing to make some effort to understand the system's operation''.
Arch Linux uses ``Package manager'' (Pacman) to manage its packages/components.
@example
$ sudo pacman -S ghostscript libtool libjpeg libtiff \
                 libgit2 curl lzip                   \
                                                     \
                 gsl cfitsio wcslib
@end example

@item @command{zypper} (openSUSE and SUSE Linux Enterprise Server)
@cindex openSUSE
@cindex SUSE Linux Enterprise Server
@cindex @command{zypper}, OpenSUSE package manager
SUSE Linux Enterprise Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the commercial offering that shares code and tools with the community-developed openSUSE distribution.
Many additional packages are offered in the Build Service@footnote{@url{https://build.opensuse.org}}.
openSUSE and SLES use @command{zypper} (command-line) and YaST (GUI) for managing repositories and packages.

@example
$ sudo zypper install ghostscript_any libtool pkgconfig   \
              libcurl-devel libgit2-devel libjpeg62-devel \
              libtiff-devel curl                          \
                                                          \
              gsl-devel cfitsio-devel wcslib-devel
@end example
@noindent
When building Gnuastro, run the configure script with the following @code{CPPFLAGS} environment variable:

@example
$ ./configure CPPFLAGS="-I/usr/include/cfitsio"
@end example

@c Gnuastro is @url{https://software.opensuse.org/package/gnuastro,packaged}
@c in @command{zypper}. Just make sure it is the most recent version.
@end table

Usually, when libraries are installed by operating system package managers, there should be no problems when configuring and building other programs from source (that depend on the libraries: Gnuastro in this case).
However, in some special conditions, problems may pop up during the configuration, building, or checking/running of Gnuastro's programs.
The most common of such problems and their solution are discussed below.

@cartouche
@noindent
@strong{Not finding library during configuration:} If a library is installed, but during Gnuastro's @command{configure} step the library isn't found, then configure Gnuastro like the command below (correcting @file{/path/to/lib}).
For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ ./configure LDFLAGS="-L/path/to/lib"
@end example
@end cartouche

@cartouche
@noindent
@strong{Not finding header (.h) files while building:} If a library is installed, but during Gnuastro's @command{make} step, the library's header (file with a @file{.h} suffix) isn't found, then configure Gnuastro like the command below (correcting @file{/path/to/include}).
For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ ./configure CPPFLAGS="-I/path/to/include"
@end example
@end cartouche

@cartouche
@noindent
@strong{Gnuastro's programs don't run during check or after install:}
If a library is installed, but the programs don't run due to linking problems, set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is installed in @file{/path/to/installed}).
For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
@end example
@end cartouche









@node Downloading the source, Build and install, Dependencies, Installation
@section Downloading the source

Gnuastro's source code can be downloaded in two ways.
The first is a tarball, ready to be configured and installed on your system (as described in @ref{Quick start}); see @ref{Release tarball}.
If you want official stable releases, this is the best, easiest and most common option.
Alternatively, you can clone the version controlled history of Gnuastro, run one extra bootstrapping step and then follow the same steps as the tarball.
This will give you access to all the most recent work that will be included in the next release along with the full project history.
The process is thoroughly introduced in @ref{Version controlled source}.



@menu
* Release tarball::             Download a stable official release.
* Version controlled source::   Get and use the version controlled source.
@end menu

@node Release tarball, Version controlled source, Downloading the source, Downloading the source
@subsection Release tarball

A release tarball (commonly compressed) is the most common way of obtaining free and open source software.
A tarball is a snapshot of one particular moment in the Gnuastro development history along with all the necessary files to configure, build, and install Gnuastro easily (see @ref{Quick start}).
It is very straightforward and needs the least set of dependencies (see @ref{Mandatory dependencies}).
Gnuastro has tarballs for official stable releases and pre-releases for testing.
See @ref{Version numbering} for more on the two types of releases and the formats of the version numbers.
The URLs for each type of release are given below.

@table @asis

@item Official stable releases (@url{http://ftp.gnu.org/gnu/gnuastro}):
This URL hosts the official stable releases of Gnuastro.
Always use the most recent version (see @ref{Version numbering}).
By clicking on the ``Last modified'' title of the second column, the files will be sorted by their date, which you can also use to find the latest version.
It is recommended to use a mirror to download these tarballs, please visit @url{http://ftpmirror.gnu.org/gnuastro/} and see below.

@item Pre-release tarballs (@url{http://alpha.gnu.org/gnu/gnuastro}):
This URL contains unofficial pre-release versions of Gnuastro.
The pre-release versions of Gnuastro here are for enthusiasts to try out before an official release.
If there are problems or bugs, the testers will inform the developers to fix them before the next official release.
See @ref{Version numbering} to understand how the version numbers here are formatted.
If you want to remain even more up-to-date with the developing activities, please clone the version controlled source as described in @ref{Version controlled source}.

@end table

@cindex Gzip
@cindex Lzip
Gnuastro's official/stable tarball is released with two formats: Gzip (with suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}).
The pre-release tarballs (after version 0.3) are released only as an Lzip tarball.
Gzip is a very well-known and widely used compression program created by GNU and available in most systems.
However, Lzip provides a better compression ratio and more robust archival capacity.
For example Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively, see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip web page} for more.
Lzip might not be pre-installed on your operating system; if so, installing it from your operating system's package manager or from source is very easy and fast (it is a very small program).
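For example, once Lzip is installed, either of the commands below should expand the Lzip tarball (recent GNU Tar versions auto-detect the format when Lzip is present; the explicit pipe should work with any @command{tar}):

@example
$ tar xf gnuastro-X.X.tar.lz
$ lzip -cd gnuastro-X.X.tar.lz | tar -xf -
@end example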

The GNU FTP server is mirrored (has backups) in various locations on the globe (@url{http://www.gnu.org/order/ftp.html}).
You can use the closest mirror to your location for a faster download.
Note that only some mirrors keep track of the pre-release (alpha) tarballs.
Also note that if you want to download immediately after an announcement (see @ref{Announcements}), the mirrors might need some time to synchronize with the main GNU FTP server.


@node Version controlled source,  , Release tarball, Downloading the source
@subsection Version controlled source

@cindex Git
@cindex Version control
The publicly distributed Gnuastro tarball (for example @file{gnuastro-X.X.tar.gz}) does not contain the revision history; it is only a snapshot of the source code at one significant instant of Gnuastro's history (specified by the version number, see @ref{Version numbering}), ready to be configured and built.
To develop successfully, the revision history of the code can be very useful for tracking when something was added or changed; it may also contain some updates that are not yet officially released.

We use Git for the version control of Gnuastro.
For those who are not familiar with it, we recommend the @url{https://git-scm.com/book/en, ProGit book}.
The whole book is publicly available for online reading and downloading and does a wonderful job at explaining the concepts and best practices.

Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory (can be any directory, change the value below).
The full version controlled history of Gnuastro can be cloned in @file{TOPGNUASTRO/gnuastro} by running the following commands@footnote{If your internet connection is active, but Git complains about the network, it might be due to your network setup not recognizing the Git protocol.
In that case use the following URL which uses the HTTP protocol instead: @command{http://git.sv.gnu.org/r/gnuastro.git}}:

@example
$ TOPGNUASTRO=/home/yourname/Research/projects/
$ cd $TOPGNUASTRO
$ git clone git://git.sv.gnu.org/gnuastro.git
@end example

@noindent
The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written (version controlled) source code for Gnuastro's programs, libraries, this book and the tests.
All are divided into sub-directories with standard and very descriptive names.
The version controlled files in the top cloned directory are either mainly in capital letters (for example @file{THANKS} and @file{README}) or mainly in lower-case letters (for example @file{configure.ac} and @file{Makefile.am}).
The former are non-programming, standard writing for human readers containing high-level information about the whole package.
The latter are instructions to customize the GNU build system for Gnuastro.
For more on Gnuastro's source code structure, please see @ref{Developing}.
We won't go any deeper here.

The cloned Gnuastro source cannot immediately be configured, compiled, or installed since it only contains hand-written files, not automatically generated or imported files which do all the hard work of the build process.
See @ref{Bootstrapping} for the process of generating and importing those files (it's not too hard!).
Once you have bootstrapped Gnuastro, you can run the standard procedures (in @ref{Quick start}).
Very soon after you have cloned it, Gnuastro's main @file{master} branch will be updated on the main repository (since the developers are actively working on Gnuastro), for the best practices in keeping your local history in sync with the main repository see @ref{Synchronizing}.





@menu
* Bootstrapping::               Adding all the automatically generated files.
* Synchronizing::               Keep your local clone up to date.
@end menu

@node Bootstrapping, Synchronizing, Version controlled source, Version controlled source
@subsubsection Bootstrapping

@cindex Bootstrapping
@cindex GNU Autoconf Archive
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
@cindex Automatically created build files
@noindent
The version controlled source code lacks the source files that we have not written or are automatically built.
These automatically generated files are included in the distributed tarball of each release (for example @file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy to immediately configure, build, and install Gnuastro.
However from the perspective of version control, they are just bloatware and sources of confusion (since they are not changed by Gnuastro developers).

The process of automatically building and importing necessary files into the cloned directory is known as @emph{bootstrapping}.
All the instructions for an automatic bootstrapping are available in @file{bootstrap} and configured using @file{bootstrap.conf}.
@file{bootstrap} and @file{COPYING} (which contains the software copyright notice) are the only files not written by Gnuastro developers but under version control to enable simple bootstrapping and legal information on usage immediately after cloning.
@file{bootstrap} is maintained by the GNU Portability Library (Gnulib) and this file is an identical copy, so do not make any changes in this file since it will be replaced when Gnulib releases an update.
Make all your changes in @file{bootstrap.conf}.

The bootstrapping process has its own separate set of dependencies, the full list is given in @ref{Bootstrapping dependencies}.
They are generally very low-level and used by a very large set of commonly used programs, so they are probably already installed on your system.
The simplest way to bootstrap Gnuastro is to run the bootstrap script within your cloned Gnuastro directory as shown below.
However, please read the next paragraph before doing so (see @ref{Version controlled source} for @file{TOPGNUASTRO}).

@example
$ cd TOPGNUASTRO/gnuastro
$ ./bootstrap                      # Requires internet connection
@end example

Without any options, @file{bootstrap} will clone Gnulib within your cloned Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the necessary Autoconf archives macros.
So if you run bootstrap like this, you will need an internet connection every time you decide to bootstrap.
Also, Gnulib is a large package and cloning it can be slow.
It will also keep the full Gnulib repository within your Gnuastro repository, so if another one of your projects also needs Gnulib, and you insist on running bootstrap like this, you will have two copies.
In case you regularly backup your important files, Gnulib will also slow down the backup process.
Therefore while the simple invocation above can be used with no problem, it is not recommended.
To do better, see the next paragraph.

The recommended way to get these two packages is thoroughly discussed in @ref{Bootstrapping dependencies} (in short: clone them in the separate @file{DEVDIR/} directory).
The following commands will take you into the cloned Gnuastro directory and run the @file{bootstrap} script, while telling it to copy some files (instead of making symbolic links, with the @option{--copy} option, this is not mandatory@footnote{The @option{--copy} option is recommended because some backup systems might do strange things with symbolic links.}) and where to look for Gnulib (with the @option{--gnulib-srcdir} option).
Please note that the address given to @option{--gnulib-srcdir} has to be an absolute address (so don't use @file{~} or @file{../} for example).

@example
$ cd $TOPGNUASTRO/gnuastro
$ ./bootstrap --copy --gnulib-srcdir=$DEVDIR/gnulib
@end example


@cindex GNU Texinfo
@cindex GNU Libtool
@cindex GNU Autoconf
@cindex GNU Automake
@cindex GNU C library
@cindex GNU build system
Since Gnulib and Autoconf archives are now available in your local directories, you don't need an internet connection every time you decide to remove all untracked files and redo the bootstrap (see box below).
You can also use the same command on any other project that uses Gnulib.
All the necessary GNU C library functions, Autoconf macros and Automake inputs are now available along with the book figures.
The standard GNU build system (@ref{Quick start}) will do the rest of the job.

@cartouche
@noindent
@strong{Undoing the bootstrap:}
During the development, it might happen that you want to remove all the automatically generated and imported files.
In other words, you might want to reverse the bootstrap process.
Fortunately Git has a good program for this job: @command{git clean}.
Run the following command and every file that is not version controlled will be removed.

@example
git clean -fxd
@end example

@noindent
It is best to commit any recent change before running this command.
You might have created new files since the last commit and if they haven't been committed, they will all be gone forever (using @command{rm}).
To get a list of the non-version controlled files instead of deleting them, add the @option{n} option to @command{git clean}, so it becomes @option{-fxdn}.
@end cartouche

Besides the @file{bootstrap} and @file{bootstrap.conf}, the @file{bootstrapped/} directory and @file{README-hacking} file are also related to the bootstrapping process.
The former hosts all the imported (bootstrapped) directories.
Thus, in the version controlled source, it only contains a @file{README} file, but in the distributed tarball it also contains sub-directories filled with all bootstrapped files.
@file{README-hacking} contains a summary of the bootstrapping process discussed in this section.
It is a necessary reference when you haven't built this book yet.
It is thus not distributed in the Gnuastro tarball.


@node Synchronizing,  , Bootstrapping, Version controlled source
@subsubsection Synchronizing

The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed: you mainly need it after you have cloned Gnuastro (once) and whenever you want to re-import the files from Gnulib, or Autoconf archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is defined for you to check if significant (for Gnuastro) updates are made in these repositories, since the last time you pulled from them.} (not too common).
However, Gnuastro developers are constantly working on Gnuastro and are pushing their changes to the official repository.
Therefore, your local Gnuastro clone will soon be outdated.
Gnuastro has two mailing lists dedicated to its developing activities (see @ref{Developing mailing lists}).
Subscribing to them can help you decide when to synchronize with the official repository.

To pull all the most recent work in Gnuastro, run the following command from the top Gnuastro directory.
If you don't already have a built system, ignore @command{make distclean}.
The separate steps are described in detail afterwards.

@example
$ make distclean && git pull && autoreconf -f
@end example

@noindent
You can also run the commands separately:

@example
$ make distclean
$ git pull
$ autoreconf -f
@end example

@cindex GNU Autoconf
@cindex Mailing list: info-gnuastro
@cindex @code{info-gnuastro@@gnu.org}
If Gnuastro was already built in this directory, you don't want some outputs from the previous version being mixed with outputs from the newly pulled work.
Therefore, the first step is to clean/delete all the built files with @command{make distclean}.
Fortunately, the GNU build system allows the separation of source and built files (in separate directories).
This is a great feature for keeping your source directory clean, and you can use it to avoid the cleaning step altogether.
Gnuastro comes with a script (with some useful options) for this job; it is especially useful if you regularly pull recent changes, see @ref{Separate build and source directories}.

After the pull, we must re-configure Gnuastro with @command{autoreconf -f} (part of GNU Autoconf).
It will update the @file{./configure} script and all the @file{Makefile.in}@footnote{In the GNU build system, @command{./configure} will use the @file{Makefile.in} files to create the necessary @file{Makefile} files that are later read by @command{make} to build the package.} files based on the hand-written configurations (in @file{configure.ac} and the @file{Makefile.am} files).
After running @command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up; you can ignore it.

The most important reason for re-building Gnuastro's build system is to generate/update the version number for your updated Gnuastro snapshot.
This generated version number will include the commit information (see @ref{Version numbering}).
The version number is included in nearly all outputs of Gnuastro's programs, therefore it is vital for reproducing an old result.

As a summary, be sure to run `@command{autoreconf -f}' after every change in the Git history.
This includes synchronization with the main server or even a commit you have made yourself.

If you would like to see what has changed since you last synchronized your local clone, you can take the following steps instead of the simple command above (don't type anything after @code{#}):

@example
$ git checkout master             # Confirm if you are on master.
$ git fetch origin                # Fetch all new commits from server.
$ git log master..origin/master   # See all the new commit messages.
$ git merge origin/master         # Update your master branch.
$ autoreconf -f                   # Update the build system.
@end example

@noindent
By default @command{git log} prints the most recent commit first; add the @option{--reverse} option to see the changes chronologically.
To see exactly what has been changed in the source code along with the commit message, add a @option{-p} option to the @command{git log}.
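@noindent
For example, to see the new commits in chronological order, along with the changes each one makes (assuming you have already fetched as above):

@example
$ git log --reverse -p master..origin/master
@end example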

If you want to make changes in the code, have a look at @ref{Developing} to get started easily.
Be sure to commit your changes in a separate branch (keep your @code{master} branch to follow the official repository) and re-run @command{autoreconf -f} after the commit.
If you intend to send your work to us, you can safely use your commit since it will be ultimately recorded in Gnuastro's official history.
If not, please upload your separate branch to a public hosting service, for example @url{https://gitlab.com, GitLab}, and link to it in your report/paper.
Alternatively, run @command{make distcheck} and upload the output @file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible web page so your results can be considered scientific (reproducible) later.
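For example, a minimal sketch of preparing and publishing such a branch might look like this (the branch name @code{my-changes} and the remote name @code{my-fork} are hypothetical; the remote should point to a fork you have already created on a hosting service):

@example
$ git checkout -b my-changes        # Make and switch to new branch.
$ git commit -a -m "Describe fix"   # Commit your changes on it.
$ git push my-fork my-changes       # Upload it to your public fork.
@end example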













@node Build and install,  , Downloading the source, Installation
@section Build and install

This section is basically a longer explanation to the sequence of commands given in @ref{Quick start}.
If you didn't have any problems during the @ref{Quick start} steps, want all of Gnuastro's programs installed, don't want to change the executable names during or after installation, have root access to install the programs in the default system-wide directory, and are happy with the Letter paper size of the printed book (in summary, if you don't feel like going into the details when everything is working), you can safely skip this section.

If you have any of the above problems or you want to understand the details for a better control over your build and install, read along.
The dependencies which you will need prior to configuring, building and installing Gnuastro are explained in @ref{Dependencies}.
The first three steps in @ref{Quick start} need no extra explanation, so we will skip them and start with an explanation of Gnuastro-specific configuration options and a discussion of the installation directory in @ref{Configuring}, followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and @ref{Known issues}, which explains solutions to problems you might encounter during the installation steps.


@menu
* Configuring::                 Configure Gnuastro
* Separate build and source directories::  Keeping derivative/built files separate.
* Tests::                       Run tests to see if it is working.
* A4 print book::               Customize the print book.
* Known issues::                Issues you might encounter.
@end menu





@node Configuring, Separate build and source directories, Build and install, Build and install
@subsection Configuring

@pindex ./configure
@cindex Configuring
The @command{$ ./configure} step is the most important step in the build and install process.
All the required packages, libraries, headers and environment variables are checked in this step.
The behaviors of @command{make} and @command{make install} can also be set through command-line options to this command.

@cindex Configure options
@cindex Customizing installation
@cindex Installation, customizing
The configure script accepts various arguments and options which enable the final user to highly customize whatever she is building.
The options to configure are generally very similar to normal program options explained in @ref{Arguments and options}.
Similar to all GNU programs, you can get a full list of the options along with a short explanation by running

@example
$ ./configure --help
@end example

@noindent
@cindex GNU Autoconf
A complete explanation is also included in the @file{INSTALL} file.
Note that this file was written by the authors of GNU Autoconf (which builds the @file{configure} script), therefore it is common for all programs which use the @command{$ ./configure} script for building and installing, not just Gnuastro.
Here, we only discuss cases where you don't have super-user access to the system, or want to change the executable names.
But before that, we review the configure options that are particular to Gnuastro.

@menu
* Gnuastro configure options::  Configure options particular to Gnuastro.
* Installation directory::      Specify the directory to install.
* Executable names::            Changing executable names.
* Configure and build in RAM::  For minimal use of HDD or SSD, and clean source.
@end menu

@node Gnuastro configure options, Installation directory, Configuring, Configuring
@subsubsection Gnuastro configure options

@cindex @command{./configure} options
@cindex Configure options particular to Gnuastro
Most of the options to configure (which have to do with building) are similar for every program which uses this script.
Here the options that are particular to Gnuastro are discussed.
The next topics explain the usage of other configure options which can be applied to any program using the GNU build system (through the configure script).

@vtable @option

@item --enable-debug
@cindex Valgrind
@cindex Debugging
@cindex GNU Debugger
Compile/build Gnuastro with debugging information, no optimization and without shared libraries.

In order to allow more efficient programs when using Gnuastro (after the installation), by default Gnuastro is built with third-level optimization (a very high level) and no debugging information.
By default, libraries are also built for static @emph{and} shared linking (see @ref{Linking}).
However, when there are crashes or unexpected behavior, these three features can hinder the process of localizing the problem.
This configuration option is identical to manually calling the configuration script with @code{CFLAGS="-g -O0" --disable-shared}.

In the (rare) situations where you need to do your debugging on the shared libraries, don't use this option.
Instead run the configure script by explicitly setting @code{CFLAGS} like this:
@example
$ ./configure CFLAGS="-g -O0"
@end example

@item --enable-check-with-valgrind
@cindex Valgrind
Do the @command{make check} tests through Valgrind.
Therefore, if any crashes or memory-related issues (segmentation faults in particular) occur in the tests, the output of Valgrind will also be put in the @file{tests/test-suite.log} file without having to manually modify the check scripts.
This option will also activate Gnuastro's debug mode (see the @option{--enable-debug} configure-time option described above).

Valgrind is free software.
It is a program for easy checking of memory-related issues in programs.
It runs a program within its own controlled environment and can thus identify the exact line-number in the program's source where a memory-related issue occurs.
However, it can significantly slow down the tests.
So this option is only useful when a segmentation fault is found during @command{make check}.

@item --enable-progname
Only build and install @file{progname} along with any other program that is enabled in this fashion.
@file{progname} is the name of the executable without the @file{ast} prefix; for example, @file{crop} for Crop (whose executable name is @file{astcrop}).

Note that by default all the programs will be installed.
This option (and the @option{--disable-progname} options) is only relevant when you don't want to install all the programs.
Therefore, if this option is called for any of the programs in Gnuastro, any program which is not explicitly enabled will not be built or installed.
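For example, to only build and install Crop and NoiseChisel (and nothing else), you can configure Gnuastro like this:

@example
$ ./configure --enable-crop --enable-noisechisel
@end example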

@item --disable-progname
@itemx --enable-progname=no
Do not build or install the program named @file{progname}.
This is very similar to the @option{--enable-progname}, but will build and install all the other programs except this one.
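For example, to build and install everything except Convolve:

@example
$ ./configure --disable-convolve
@end example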

@cartouche
@noindent
@strong{Note:} If some programs are enabled and some are disabled, it is equivalent to simply enabling those that were enabled.
Listing the disabled programs is redundant.
@end cartouche

@item --enable-gnulibcheck
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
Enable checks on the GNU Portability Library (Gnulib).
Gnulib is used by Gnuastro to enable users of non-GNU based operating systems (that don't use GNU C library or glibc) to compile and use the advanced features that this library provides.
We make extensive use of such functions.
If you give this option to @command{$ ./configure}, when you run @command{$ make check}, first the functions in Gnulib will be tested, then the Gnuastro executables.
If your operating system does not support glibc or has an older version of it and you have problems in the build process (@command{$ make}), you can give this flag to configure to see if the problem is caused by Gnulib not supporting your operating system or Gnuastro, see @ref{Known issues}.
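In short, the scenario above corresponds to a build like this:

@example
$ ./configure --enable-gnulibcheck
$ make
$ make check
@end example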

@item --disable-guide-message
@itemx --enable-guide-message=no
Do not print a guiding message during the GNU Build process of @ref{Quick start}.
By default, after each step, a message is printed suggesting the next command to run.
Therefore, after @command{./configure}, it will suggest running @command{make}.
After @command{make}, it will suggest running @command{make check} and so on.
If Gnuastro is configured with this option, for example
@example
$ ./configure --disable-guide-message
@end example
@noindent
then these messages will not be printed after any step (like most other programs).
For people who are not yet fully accustomed to this build system, these guidelines can be very useful and encouraging.
However, if you find those messages annoying, use this option.

@item --without-libgit2
@cindex Git
@pindex libgit2
@cindex Version control systems
Build Gnuastro without libgit2 (for including Git commit hashes in output files), see @ref{Optional dependencies}.
libgit2 is an optional dependency; with this option, Gnuastro will ignore any libgit2 that may already be on the system.

@item --without-libjpeg
@pindex libjpeg
@cindex JPEG format
Build Gnuastro without libjpeg (for reading/writing to JPEG files), see @ref{Optional dependencies}.
libjpeg is an optional dependency; with this option, Gnuastro will ignore any libjpeg that may already be on the system.

@item --without-libtiff
@pindex libtiff
@cindex TIFF format
Build Gnuastro without libtiff (for reading/writing to TIFF files), see @ref{Optional dependencies}.
libtiff is an optional dependency; with this option, Gnuastro will ignore any libtiff that may already be on the system.

@end vtable

The tests of some programs might depend on the outputs of the tests of other programs.
For example, MakeProfiles is one of the first programs to be tested when you run @command{$ make check}.
MakeProfiles' test outputs (FITS images) are inputs to many other programs (which in turn provide inputs for other programs).
Therefore, if you don't enable MakeProfiles for example, the tests for many of the other programs will be skipped.
To avoid this, in one run you can enable all the programs, build them, and run the tests (without installing).
If everything works correctly, you can run configure again with only the programs you want, then build and install them directly (without running the tests again).



@node Installation directory, Executable names, Gnuastro configure options, Configuring
@subsubsection Installation directory

@vindex --prefix
@cindex Superuser, not possible
@cindex Root access, not possible
@cindex No access to super-user install
@cindex Install with no super-user access
One of the most commonly used options to @file{./configure} is @option{--prefix}; it is used to define the directory that will host all the installed files (the ``prefix'' in their final absolute file names).
A common example is when you are using a server and don't have administrator or root access: without @option{--prefix}, the installation will fail, because the default installation directory is only writable by the administrator.
However, once you prepare your startup file to look into the proper place (as discussed thoroughly below), you can easily use this option to benefit from any software you want to install, without having to ask the system administrators, or having to use a possibly different version that is already installed on the server.

The most basic way to run an executable is to explicitly write its full file name (including all the directory information) and run it.
One example is running the configuration script with the @command{$ ./configure} command (see @ref{Quick start}).
By giving a specific directory (the current directory or @file{./}), we are explicitly telling the shell to look in the current directory for an executable file named `@file{configure}'.
Directly specifying the directory is thus useful for executables in the current (or nearby) directories.
However, when the program (an executable file) is to be used a lot, specifying all those directories will become a significant burden.
For example, the @file{ls} executable lists the contents in a given directory and it is (usually) installed in the @file{/usr/bin/} directory by the operating system maintainers.
Therefore, if using the full address was the only way to access an executable, each time you wanted a listing of a directory, you would have to run the following command (which is very inconvenient, both in writing and in remembering the various directories).

@example
$ /usr/bin/ls
@end example

@cindex Shell variables
@cindex Environment variables
To address this problem, we have the @file{PATH} environment variable.
To understand it better, we will start with a short introduction to shell variables.
Shell variable values are basically treated as strings of characters.
For example, it doesn't matter if the value is a name (string of @emph{alphabetic} characters), or a number (string of @emph{numeric} characters), or both.
You can define a variable and a value for it by running
@example
$ myvariable1=a_test_value
$ myvariable2="a test value"
@end example
@noindent
As you see above, if the value contains white space characters, you have to put the whole value (including white space characters) in double quotes (@key{"}).
You can see the value it represents by running
@example
$ echo $myvariable1
$ echo $myvariable2
@end example
@noindent
@cindex Environment
@cindex Environment variables
If a variable has no value or it wasn't defined, the last command will only print an empty line.
A variable defined like this will be known as long as this shell or terminal is running.
Other terminals will have no idea it existed.
The main advantage of shell variables is that if they are exported@footnote{By running @command{$ export myvariable=a_test_value} instead of the simpler case in the text}, subsequent programs that are run within that shell can access their value.
So by changing their value, you can change the ``environment'' of a program which uses them.
The shell variables which are accessed by programs are therefore known as ``environment variables''@footnote{You can use shell variables for other actions too, for example to temporarily keep some names or run loops on some files.}.
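For example, the small demonstration below shows that a child shell (started with @command{sh -c}) only sees the exported variable (so only @code{exported} is printed):

@example
$ myvar=not-exported
$ export myexp=exported
$ sh -c 'echo $myvar $myexp'
exported
@end example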
You can see the full list of exported variables that your shell recognizes by running:

@example
$ printenv
@end example

@cindex @file{HOME}
@cindex @file{HOME/.local/}
@cindex Environment variable, @code{HOME}
@file{HOME} is one commonly used environment variable; it is the top directory of the currently logged-in user.
Try finding it in the output of the command above.
It is used so often that the shell has a special expansion (alternative) for it: `@file{~}'.
Whenever you see file names starting with the tilde sign, it actually represents the value of the @file{HOME} environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
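For example, the two commands below should print the same directory (the output shown here is hypothetical):

@example
$ echo $HOME
/home/name
$ echo ~
/home/name
@end example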

@vindex PATH
@pindex ./configure
@cindex Setting @code{PATH}
@cindex Default executable search directory
@cindex Search directory for executables
Another one of the most commonly used environment variables is @file{PATH}.
Its value is a list of directories (separated by a colon, or `@key{:}') that the system searches for executable names.
When the address of the executable is not explicitly given (like @file{./configure} above), the system will look for the executable in the directories specified by @file{PATH}.
If you have a computer nearby, try running the following command to see which directories your system will look into when it is searching for executable (binary) files, one example is printed here (notice how @file{/usr/bin}, in the @file{ls} example above, is one of the directories in @command{PATH}):

@example
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin
@end example

By default @file{PATH} usually contains system-wide directories, which are readable (but not writable) by all users, like the above example.
Therefore if you don't have root (or administrator) access, you need to add another directory to @file{PATH} which you actually have write access to.
The standard directory where you can keep installed files (not just executables) for your own user is the @file{~/.local/} directory.
The names of hidden files start with a `@key{.}' (dot), so this directory will not show up in your common command-line listings, or on the graphical user interface.
You can use any other directory, but this is the most recognized.

The top installation directory will be used to keep all the package's components: programs (executables), libraries, include (header) files, shared data (like manuals), or configuration files (see @ref{Review of library fundamentals} for a thorough introduction to headers and linking).
So it commonly has some of the following sub-directories for each class of installed components respectively: @file{bin/}, @file{lib/}, @file{include/}, @file{man/}, @file{share/}, @file{etc/}.
Since the @file{PATH} variable is only used for executables, you can add the @file{~/.local/bin} directory (which keeps the executables/programs or more generally, ``binary'' files) to @file{PATH} with the following command.
As defined below, first the existing value of @file{PATH} is used, then your given directory is added to its end and the combined value is put back in @file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was added).

@example
$ PATH=$PATH:~/.local/bin
@end example

@cindex GNU Bash
@cindex Startup scripts
@cindex Scripts, startup
Any executable that you installed in @file{~/.local/bin} will now be usable without having to remember and write its full address.
However, as soon as you leave/close your current terminal session, this modified @file{PATH} variable will be forgotten.
Adding the directories which contain executables to the @file{PATH} environment variable each time you start a terminal is also very inconvenient and prone to errors.
Fortunately, there are standard `startup files' defined by your shell precisely for this (and other) purposes.
There is a special startup file for every significant starting step:

@table @asis

@cindex GNU Bash
@item @file{/etc/profile} and everything in @file{/etc/profile.d/}
These startup scripts are called when your whole system starts (for example after you turn on your computer).
Therefore you need administrator or root privileges to access or modify them.

@item @file{~/.bash_profile}
If you are using (GNU) Bash as your shell, the commands in this file are run when you log in to your account @emph{through Bash}.
This most commonly happens when you log in through the virtual console (where there is no graphical user interface).

@item @file{~/.bashrc}
If you are using (GNU) Bash as your shell, the commands here will be run each time you start a terminal and are already logged in.
For example, when you open your terminal emulator in the graphic user interface.

@end table

For security reasons, it is highly recommended to directly type in your @file{HOME} directory value by hand in startup files instead of using variables.
So in the following, let's assume your user name is `@file{name}' (so @file{~} may be replaced with @file{/home/name}).
To add @file{~/.local/bin} to your @file{PATH} automatically on any startup file, you have to ``export'' the new value of @command{PATH} in the startup file that is most relevant to you by adding this line:

@example
export PATH=$PATH:/home/name/.local/bin
@end example
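If you don't want to open the startup file in an editor, a line like the one below will append it for you (assuming GNU Bash and @file{~/.bashrc} as the startup file; the @command{source} command loads it into the current session):

@example
$ echo 'export PATH=$PATH:/home/name/.local/bin' >> ~/.bashrc
$ source ~/.bashrc
@end example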

@cindex GNU build system
@cindex Install directory
@cindex Directory, install
Now that you know your system will look into @file{~/.local/bin} for executables, you can tell Gnuastro's configure script to install everything in the top @file{~/.local} directory using the @option{--prefix} option.
When you subsequently run @command{$ make install}, all the install-able files will be put in their respective directory under @file{~/.local/} (the executables in @file{~/.local/bin}, the compiled library files in @file{~/.local/lib}, the library header files in @file{~/.local/include} and so on, to learn more about these different files, please see @ref{Review of library fundamentals}).
Note that tilde (`@key{~}') expansion will not happen if you put a `@key{=}' between @option{--prefix} and @file{~/.local}@footnote{If you insist on using `@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided the @key{=} character here which is optional in GNU-style options, see @ref{Options}.

@example
$ ./configure --prefix ~/.local
@end example

@cindex @file{MANPATH}
@cindex @file{INFOPATH}
@cindex @file{LD_LIBRARY_PATH}
@cindex Library search directory
@cindex Default library search directory
You can install everything (including libraries like GSL, CFITSIO, or WCSLIB which are Gnuastro's mandatory dependencies, see @ref{Mandatory dependencies}) locally by configuring them as above.
However, recall that @command{PATH} is only for executable files, not libraries, and that libraries can also depend on other libraries.
For example, WCSLIB depends on CFITSIO, and Gnuastro needs both.
Therefore, when you install a library in a non-standard directory, you have to guide the programs that depend on it to the necessary library and header file directories.
To do that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS} environment variables respectively.
This can be done while calling @file{./configure} as shown below:

@example
$ ./configure LDFLAGS=-L/home/name/.local/lib            \
              CPPFLAGS=-I/home/name/.local/include       \
              --prefix ~/.local
@end example

It can be annoying and error-prone to do this when configuring every software that depends on such libraries.
Hence, you can define these two variables in the most relevant startup file (discussed above).
The convention for these variables doesn't use a colon to separate values (as @command{PATH}-like variables do); they use white-space characters, and each value is prefixed with a compiler option@footnote{These variables are ultimately used as options while building the programs, so every value has to be an option name followed by a value, as discussed in @ref{Options}.}: note the @option{-L} and @option{-I} above (see @ref{Options}); for @option{-I} see @ref{Headers}, and for @option{-L}, see @ref{Linking}.
Therefore, we have to keep the value in double quotes to preserve the white-space characters, and add the following two lines to the startup file of choice:

@example
export LDFLAGS="$LDFLAGS -L/home/name/.local/lib"
export CPPFLAGS="$CPPFLAGS -I/home/name/.local/include"
@end example

@cindex Dynamic libraries
Dynamic libraries are linked to the executable every time you run a program that depends on them (see @ref{Linking} to fully understand this important concept).
Hence dynamic libraries also require a special path variable called @command{LD_LIBRARY_PATH} (same formatting as @command{PATH}).
To use programs that depend on these libraries, you need to add @file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable by adding the following line to the relevant start-up file:

@example
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/.local/lib
@end example

If you also want to access the Info (see @ref{Info}) and man pages (see @ref{Man pages}) documentations add @file{~/.local/share/info} and @file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the following convention: ``If the value of @command{INFOPATH} ends with a colon [or it isn't defined] ..., the initial list of directories is constructed by appending the build-time default to the value of @command{INFOPATH}.'' So when installing in a non-standard directory and if @command{INFOPATH} was not initially defined, add a colon to the end of @command{INFOPATH} as shown below, otherwise Info will not be able to find system-wide installed documentation:@*@command{echo 'export INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that this is only an internal convention of Info, do not use it for other @command{*PATH} variables.} and @command{MANPATH} environment variables respectively.
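For example (following the same installation prefix as above, and noting the trailing colon in @command{INFOPATH} that is described in the footnote):

@example
export MANPATH=$MANPATH:/home/name/.local/share/man
export INFOPATH=$INFOPATH:/home/name/.local/share/info:
@end example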

@cindex Search directory order
@cindex Order in search directory
A final note is that order matters in the directories that are searched for all the variables discussed above.
In the examples above, the new directory was added after the system specified directories.
So if the program, library, or manual page is found in the system-wide directories, the user directory is no longer searched.
If you want to search your local installation first, put the new directory before the already existing list, like the example below.

@example
export LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
@end example

@noindent
This is good when a library, for example CFITSIO, is already present on the system, but the system-wide install wasn't configured with the correct configuration flags (see @ref{CFITSIO}), or you want to use a newer version and you don't have administrator or root access to update it on the whole system/server.
If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first (like above), the linker will first find the CFITSIO you installed for yourself and link with it.
It thus will never reach the system-wide installation.

There are important security problems with using local installations first: all important system-wide executables and libraries (important executables like @command{ls} and @command{cp}, or libraries like the C library) can be replaced by non-secure versions with the same file names and put in the customized directory (@file{~/.local} in this example).
So if you choose to search in your customized directory first, please @emph{be sure} to keep it clean from executables or libraries with the same names as important system programs or libraries.

@cartouche
@noindent
@strong{Summary:} When you are using a server which doesn't give you administrator/root access AND you would like to give priority to your own built programs and libraries, not the version that is (possibly already) present on the server, add these lines to your startup file.
See above for which startup file is best for your case and for a detailed explanation on each.
Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for example `@file{/home/your-id}'):

@example
export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
export LDFLAGS="-L/YOUR-HOME-DIR/.local/lib $LDFLAGS"
export MANPATH="/YOUR-HOME-DIR/.local/share/man/:$MANPATH"
export CPPFLAGS="-I/YOUR-HOME-DIR/.local/include $CPPFLAGS"
export INFOPATH="/YOUR-HOME-DIR/.local/share/info/:$INFOPATH"
export LD_LIBRARY_PATH="/YOUR-HOME-DIR/.local/lib:$LD_LIBRARY_PATH"
@end example

@noindent
Afterwards, you just need to add an extra @option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command of the software that you intend to install.
Everything else will be the same as a standard build and install, see @ref{Quick start}.
@end cartouche

@node Executable names, Configure and build in RAM, Installation directory, Configuring
@subsubsection Executable names

@cindex Executable names
@cindex Names of executables
At first sight, the names of the executables for each program might seem to be uncommonly long, for example @command{astnoisechisel} or @command{astcrop}.
We could have chosen terse (and cryptic) names like most programs do.
We chose this complete naming convention (something like the commands in @TeX{}) so you don't have to spend too much time remembering what the name of a specific program was.
Such complete names also enable you to easily search for the programs.

@cindex Shell auto-complete
@cindex Auto-complete in the shell
To facilitate typing the names in, we suggest using the shell auto-complete.
With this facility you can find the executable you want very easily.
It is very similar to file name completion in the shell.
For example, simply by typing the letters below (where @key{[TAB]} stands for the Tab key on your keyboard)

@example
$ ast[TAB][TAB]
@end example

@noindent
you will get the list of all the available executables that start with @command{ast} in your @command{PATH} environment variable directories.
So, all the Gnuastro executables installed on your system will be listed.
Typing the next letter of the specific program you want, along with a Tab, will narrow down this list until you get to your desired program.

@cindex Names, customize
@cindex Customize executable names
In case all of this does not convince you and you still want to type short names, some suggestions are given below.
Keep in mind, though, that if you are writing a shell script that you might want to pass on to others, it is best to use the standard names, because other users might not have adopted the same customizations.
The long names also serve as a form of documentation in such scripts.
A similar reasoning can be given for option names in scripts: it is good practice to always use the long formats of the options in shell scripts, see @ref{Options}.

@cindex Symbolic link
The simplest solution is making a symbolic link to the actual executable.
For example, let's assume you want to type @file{ic} to run Crop instead of @file{astcrop}.
Assuming you installed the Gnuastro executables in @file{/usr/local/bin} (the default), you can do this simply by running the following command as root:

@example
# ln -s /usr/local/bin/astcrop /usr/local/bin/ic
@end example

@noindent
In case you update Gnuastro and a new version of Crop is installed, the
default executable name is the same, so your custom symbolic link still
works.
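@noindent
If you would rather not touch the installation directory (or don't have root access), a shell alias in your startup file does a similar job; this is just a sketch, assuming GNU Bash:

@example
alias ic=astcrop
@end example

@noindent
Note that aliases are only expanded in interactive shells, so (unlike the symbolic link above) they won't be available in your shell scripts.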

@vindex --program-prefix
@vindex --program-suffix
@vindex --program-transform-name
The installed executable names can also be set using options to @command{$ ./configure}, see @ref{Configuring}.
GNU Autoconf (which configures Gnuastro for your particular system) allows the builder to change the names of the installed programs with three options: @option{--program-prefix}, @option{--program-suffix} and @option{--program-transform-name}.
The first two add a fixed prefix or suffix to all the installed program names.
This will actually make all the names longer!
You can use it to add version numbers to the program names, in order to simultaneously have two installed versions of a program.
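For example, a hypothetical build that appends the version number to every installed program name (so Crop is installed as @file{astcrop-0.14}) would be configured like this:

@example
$ ./configure --program-suffix=-0.14
@end example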

@cindex SED, stream editor
@cindex Stream editor, SED
The third option allows you to transform the executable names at install time using the SED program.
SED is a very useful `stream editor', and there are various resources on the internet for learning to use it effectively.
However, we should caution that these configure options only change the names of the installed executables; on every re-install (an update, for example), you have to give the same option again to keep your custom names.
Also note that the documentation and configuration files keep their standard names either way.

@cindex Removing @file{ast} from executables
For example, let's assume that typing @file{ast} on every invocation of every program is really annoying you! You can remove this prefix from all the executables at configure time by adding this option:

@example
$ ./configure --program-transform-name='s/ast//'
@end example



@node Configure and build in RAM,  , Executable names, Configuring
@subsubsection Configure and build in RAM

@cindex File I/O
@cindex Input/Output, file
Gnuastro's configure and build process (the GNU build system) involves the creation, reading, and modification of a large number of files (input/output, or I/O).
Therefore file I/O issues can directly affect the work of developers who need to configure and build Gnuastro numerous times.
Some of these issues are listed below:

@itemize
@cindex HDD
@cindex SSD
@item
I/O will cause wear and tear on both the HDDs (mechanical failures) and
SSDs (decreasing the lifetime).

@cindex Backup
@item
Having the built files mixed with the source files can greatly complicate backing up (synchronization) of the source files, since it involves the management of a large number of small files that are regularly changed.
Backup software can of course be configured to ignore the built files and directories.
However, since the built files are mixed with the source files and can have a large variety, this will require a high level of customization.
@end itemize

@cindex tmpfs file system
@cindex file systems, tmpfs
One solution to address both these problems is to use the @url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}.
Any file in tmpfs is actually stored in the RAM (and possibly swap), not on HDDs or SSDs.
The RAM is built for extensive and fast I/O.
Therefore the large number of file I/Os associated with configuring and building will not harm the HDDs or SSDs.
Due to the volatile nature of RAM, files in the tmpfs file system will be permanently lost after a power-off.
Since all configured and built files are derivative files (not files that have been directly written by hand), this is not a problem, and this feature can be considered an automatic cleanup.

@cindex Linux kernel
@cindex GNU C library
@cindex GNU build system
The modern GNU C library (and thus the Linux kernel) defines the @file{/dev/shm} directory for this purpose in the RAM (POSIX shared memory).
To build in it, you can use the GNU build system's ability to build in a separate directory (not necessarily in the source directory) as shown below.
Just set @file{SRCDIR} as the address of Gnuastro's top source directory (for example, the unpacked tarball).

@example
$ mkdir /dev/shm/tmp-gnuastro-build
$ cd /dev/shm/tmp-gnuastro-build
$ SRCDIR/configure --srcdir=SRCDIR
$ make
@end example
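To confirm that @file{/dev/shm} is indeed a tmpfs file system on your computer (and to see how much space it has), you can use the @command{df} command; the numbers below are only an example:

@example
$ df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G   86M  7.7G   2% /dev/shm
@end example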

Gnuastro comes with a script to simplify this process of configuring and building in a different directory (a ``clean'' build), for more see @ref{Separate build and source directories}.

@node Separate build and source directories, Tests, Configuring, Build and install
@subsection Separate build and source directories

The simple steps of @ref{Quick start} will mix the source and built files.
This can cause inconvenience for developers or enthusiasts following the most recent work (see @ref{Version controlled source}).
The current section is mainly focused on this latter group of Gnuastro users.
If you just install Gnuastro on major releases (following @ref{Announcements}), you can safely ignore this section.

@cindex GNU build system
When it is necessary to keep the source (which is under version control), but not the derivative (built) files (after checking or installing), the best solution is to keep the source and the built files in separate directories.
One application of this is already discussed in @ref{Configure and build in RAM}.

To facilitate this process of configuring and building in a separate directory, Gnuastro comes with the @file{developer-build} script.
It is available in the top source directory and is @emph{not} installed.
It will make a build directory under a given top-level directory (given to @option{--top-build-dir}) and build Gnuastro there.
It thus keeps the source completely separated from the built files.
For easy access to the built files, it also makes a symbolic link to the build directory, called @file{build}, in the top source directory.

When run without any options, default values will be used for its configuration.
As with Gnuastro's programs, you can inspect the default values with @option{-P} (or @option{--printparams}, the output just looks a little different here).
The default top-level build directory is @file{/dev/shm}: the shared memory directory in RAM on GNU/Linux systems as described in @ref{Configure and build in RAM}.
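For example, after unpacking or cloning Gnuastro, you can see those default values like this:

@example
$ ./developer-build -P
@end example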

@cindex Debug
Besides these, it also has some features to facilitate the job of developers or bleeding edge users like the @option{--debug} option to do a fast build, with debug information, no optimization, and no shared libraries.
Here is the full list of options you can feed to this script to configure its operations.

@cartouche
@noindent
@strong{Not all of Gnuastro's common program behavior is usable here:}
@file{developer-build} is just a non-installed script with a very limited scope as described above.
For example, it doesn't have all the common option behaviors, or configuration files.
@end cartouche

@cartouche
@noindent
@strong{White space between option and value:} @file{developer-build}
doesn't accept an @key{=} sign between the options and their values.
It also needs at least one character between the option and its value.
Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while @option{-n4}, @option{-n=4}, or @option{--numthreads=4} aren't.
Finally multiple short option names cannot be merged: for example you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not acceptable.
@end cartouche

@cartouche
@noindent
@strong{Reusable for other packages:} This script can be used in any software which is configured and built using the GNU Build System.
Just copy it in the top source directory of that software and run it from there.
@end cartouche

@table @option

@item -b STR
@itemx --top-build-dir STR
The top-level build directory, under which a directory for this build will be made.
If this option isn't called, the top build directory is @file{/dev/shm} (only available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).

@item -V
@itemx --version
Print the version string of Gnuastro that will be used in the build.
This string will be appended to the directory name containing the built files.

@item -a
@itemx --autoreconf
Run @command{autoreconf -f} before building the package.
In Gnuastro, this is necessary when a new commit has been made to the project history.
In Gnuastro's build system, the Git description will be used as the version, see @ref{Version numbering} and @ref{Synchronizing}.

@item -c
@itemx --clean
@cindex GNU Autoreconf
Delete the contents of the build directory (clean it) before starting the configuration and building of this run.

This is useful when you have recently pulled changes from the main Git repository, or committed a change yourself and run @command{autoreconf -f}, see @ref{Synchronizing}.
After running GNU Autoconf, the version will be updated and you need to do a clean build.
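For example, a typical call after pulling new commits would combine this option with @option{--autoreconf}:

@example
$ ./developer-build -a -c
@end example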

@item -d
@itemx --debug
@cindex Valgrind
@cindex GNU Debugger (GDB)
Build with debugging flags (for example to use in GNU Debugger, also known as GDB, or Valgrind), disable optimization and also the building of shared libraries.
This is equivalent to running the configure script as below:

@example
$ ./configure --enable-debug
@end example

Besides all the debugging advantages of building with this option, it will also significantly speed up the build (at the cost of slower built programs).
So when you are testing something small or working on the build system itself, it will be much faster to test your work with this option.

@item -v
@itemx --valgrind
@cindex Valgrind
Build all @command{make check} tests within Valgrind.
For more, see the description of @option{--enable-check-with-valgrind} in @ref{Gnuastro configure options}.

@item -j INT
@itemx --jobs INT
The maximum number of threads/jobs for Make to build at any moment.
As the name suggests (Make has an identical option), the number given to this option is directly passed on to any call of Make with its @option{-j} option.

@item -C
@itemx --check
After finishing the build, also run @command{make check}.
By default, @command{make check} isn't run because the developer usually has their own checks to work on (for example defined in @file{tests/during-dev.sh}).

@item -i
@itemx --install
After finishing the build, also run @command{make install}.

@item -D
@itemx --dist
Run @code{make dist-lzip pdf} to build a distribution tarball (in @file{.tar.lz} format) and a PDF manual.
This can be useful for archiving, or sending to colleagues who don't use Git for an easy build and manual.

@item -u STR
@itemx --upload STR
Activate the @option{--dist} (@option{-D}) option, then use secure copy (@command{scp}, part of the SSH tools) to copy the tarball and PDF to the @file{src} and @file{pdf} sub-directories of the specified server and its directory (value to this option).
For example, @command{--upload my-server:dir} will copy the tarball into @file{dir/src}, and the PDF manual into @file{dir/pdf}, on the @code{my-server} server.
It will then make a symbolic link in the top server directory to the tarball that is called @file{gnuastro-latest.tar.lz}.

@item -p
@itemx --publish
Short for @option{--autoreconf --clean --debug --check --upload STR}.
@option{--debug} is added because it will greatly speed up the build.
It will have no effect on the produced tarball.
This is good when you have made a commit and are ready to publish it on your server (if nothing crashes).
Recall that if any of the previous steps fail, the script aborts.
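For example (with the same hypothetical server and directory as in @option{--upload} above):

@example
$ ./developer-build -p my-server:dir
@end example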

@item -I
@itemx --install-archive
Short for @option{--autoreconf --clean --check --install --dist}.
This is useful when you actually want to install the commit you just made (if the build and checks succeed).
It will also produce a distribution tarball and PDF manual for easy access to the installed tarball on your system at a later time.

Ideally, Gnuastro's Git version history makes it easy for a prepared system to revert back to a different point in history.
But Gnuastro also needs to be bootstrapped, and your collaborators might (and usually do!) find it too much of a burden to do the bootstrapping themselves.
So it is convenient to have a tarball and PDF manual of the version you have installed (and are using in your research) readily available.

@item -h
@itemx --help
@itemx -P
@itemx --printparams
Print a description of this script along with all the options and their
current values.

@end table



@node Tests, A4 print book, Separate build and source directories, Build and install
@subsection Tests

@cindex @command{make check}
@cindex @file{mock.fits}
@cindex Tests, running
@cindex Checking tests
After successfully building (compiling) the programs with the @command{$ make} command you can check the installation before installing.
To run the tests, run

@example
$ make check
@end example

For every program, tests are designed to check some of its possible operations.
Running the command above will run those tests and give you a final report.
If everything is OK and you have built all the programs, all the tests should pass.
In case any of the tests fail, please have a look at @ref{Known issues}; if that still doesn't fix your problem, look at the @file{./tests/test-suite.log} file to see if the source of the error is something particular to your system or more general.
If you feel it is general, please contact us, because it might be a bug.
Note that the tests of some programs depend on the outputs of other programs' tests, so if you have not built some programs, their dependent tests might be skipped or fail.
Prior to releasing every distribution, all these tests are checked.
If you have a reasonably modern terminal, the outputs of the successful tests will be colored green and the failed ones will be colored red.

These scripts can also act as a good set of examples for you to see how the programs are run.
All the tests are in the @file{tests/} directory.
The tests for each program are shell scripts (ending with @file{.sh}) in a sub-directory of this directory with the same name as the program.
See @ref{Test scripts} for more detailed information about these scripts in case you want to inspect them.




@node A4 print book, Known issues, Tests, Build and install
@subsection A4 print book

@cindex A4 print book
@cindex Modifying print book
@cindex A4 paper size
@cindex US letter paper size
@cindex Paper size, A4
@cindex Paper size, US letter
The default print version of this book is provided in the letter paper size.
If you would like to have the print version of this book on paper and you are living in a country which uses A4, then you can rebuild the book.
The great thing about the GNU build system is that the book source code which is in Texinfo is also distributed with the program source code, enabling you to do such customization (hacking).

@cindex GNU Texinfo
In order to change the paper size, you will need to have GNU Texinfo installed.
Open @file{doc/gnuastro.texi} with any text editor.
This is the source file that created this book.
In the first few lines you will see this line:

@example
@@c@@afourpaper
@end example

@noindent
In Texinfo, a line is commented with @code{@@c}.
Therefore, un-comment this line by deleting the first two characters such that it changes to:

@example
@@afourpaper
@end example
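@noindent
Alternatively, if you prefer the command-line, the same edit can be done with GNU SED's in-place mode (just a sketch of the same change, run from the top source directory):

@example
$ sed -i -e 's/^@@c@@afourpaper/@@afourpaper/' doc/gnuastro.texi
@end example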

@noindent
Save the file and close it (not necessary if you used the SED command above).
You can now run the following command

@example
$ make pdf
@end example

@noindent
and the new PDF book will be available in @file{SRCdir/doc/gnuastro.pdf}.
By changing the @command{pdf} in @command{$ make pdf} to @command{ps} or @command{dvi} you can have the book in those formats.
Note that you can do this for any book that is in Texinfo format; if it doesn't have an @code{@@afourpaper} line, you can add one close to the top of the Texinfo source file.




@node Known issues,  , A4 print book, Build and install
@subsection Known issues

Depending on your operating system and the version of the compiler you are using, you might confront some known problems during the configuration (@command{$ ./configure}), compilation (@command{$ make}) and tests (@command{$ make check}).
Here, their solutions are discussed.

@itemize
@cindex Configuration, not finding library
@cindex Development packages
@item
@command{$ ./configure}: @emph{Configure complains about not finding a library even though you have installed it.}
The possible solution is based on how you installed the package:

@itemize
@item
From your distribution's package manager.
Most probably this is because your distribution has separated the header files of a library from the library parts.
Please also install the `development' packages for those libraries too.
Just add a @file{-dev} or @file{-devel} to the end of the package name and re-run the package manager.
This will not happen if you install the libraries from source.
When installed from source, the headers are also installed.

@item
@cindex @command{LDFLAGS}
From source.
Then your linker is not looking where you installed the library.
If you followed the instructions in this chapter, all the libraries will be installed in @file{/usr/local/lib}.
So you have to tell your linker to look in this directory.
To do so, configure Gnuastro like this:

@example
$ ./configure LDFLAGS="-L/usr/local/lib"
@end example

If you want to use the libraries for your other programming projects, then
export this environment variable in a start-up script similar to the case
for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation
directory}.
@end itemize

@item
@vindex --enable-gnulibcheck
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
@command{$ make}: @emph{Complains about an unknown function on a non-GNU based operating system.}
In this case, please run @command{$ ./configure} with the @option{--enable-gnulibcheck} option to see if the problem is from the GNU Portability Library (Gnulib) not supporting your system or if there is a problem in Gnuastro, see @ref{Gnuastro configure options}.
If the problem is not in Gnulib and after all its tests you get the same complaint from @command{make}, then please contact us at @file{bug-gnuastro@@gnu.org}.
The cause is probably that a function we have used is not supported by your operating system, and we didn't include it in the source tarball.
If the function is available in Gnulib, it can be fixed immediately.

@item
@cindex @command{CPPFLAGS}
@command{$ make}: @emph{Can't find the headers (.h files) of installed libraries.}
Your C pre-processor (CPP) isn't looking in the right place.
To fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below (assuming the library is installed in @file{/usr/local/include}):

@example
$ ./configure CPPFLAGS="-I/usr/local/include"
@end example

If you want to use the libraries for your other programming projects, then export this environment variable in a start-up script similar to the case for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation directory}.

@cindex Tests, only one passes
@cindex @file{LD_LIBRARY_PATH}
@item
@command{$ make check}: @emph{Only the first couple of tests pass, all the rest fail or get skipped.}  It is highly likely that when searching for shared libraries, your system doesn't look into the @file{/usr/local/lib} directory (or wherever you installed Gnuastro or its dependencies).
To make sure it is added to the list of directories, add the following line to your @file{~/.bashrc} file and restart your terminal.
Don't forget to change @file{/usr/local/lib} if the libraries are installed in other (non-standard) directories.

@example
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
@end example

You can also add more directories by using a colon `@code{:}' to separate them.
See @ref{Installation directory} and @ref{Linking} to learn more on the @code{PATH} variables and dynamic linking respectively.

@cindex GPL Ghostscript
@item
@command{$ make check}: @emph{The tests relying on external programs (for example @file{fitstopdf.sh}) fail.}
This is probably because the version of the external program is too old for the tests we have performed.
Please update the program to a more recent version.
For example, to create a PDF image, you will need GPL Ghostscript; older versions do not work (we have successfully tested version 9.15) and might cause a failure in the test result.
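You can check the version of your Ghostscript installation with:

@example
$ gs --version
@end example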

@item
@cindex @TeX{}
@cindex GNU Texinfo
@command{$ make pdf}: @emph{The PDF book cannot be made.}
To make a PDF book, you need to have the GNU Texinfo program (like any program, the more recent the better).
A working @TeX{} program is also necessary, which you can get from TeX Live@footnote{@url{https://www.tug.org/texlive/}}.

@item
@cindex GNU Libtool
After @code{make check}: do not copy the programs' executables to another (for example, the installation) directory manually (using @command{cp}, or @command{mv} for example).
In the default configuration@footnote{If you configure Gnuastro with the @option{--disable-shared} option, then the libraries will be statically linked to the programs and this problem won't exist, see @ref{Linking}.}, the program binaries need to link with Gnuastro's shared library which is also built and installed with the programs.
Therefore, to run successfully before and after installation, linking modifications need to be made by GNU Libtool at installation time.
@command{make install} does this internally, but a simple copy might give linking errors when you run it.
If you need to copy the executables, you can do so after installation.

@end itemize

@noindent
If your problem was not listed above, please file a bug report (@ref{Report a bug}).



















@node Common program behavior, Data containers, Installation, Top
@chapter Common program behavior

All the programs in Gnuastro share a set of common behavior mainly to do with user interaction to facilitate their usage and development.
This includes how to feed input datasets into the programs, how to configure them, specifying the outputs, numerical data types, treating columns of information in tables, etc.
This chapter is devoted to describing this common behavior in all programs.
Because the behaviors discussed here are common to several programs, they are not repeated in each program's description.

In @ref{Command-line}, a very general description of running the programs on the command-line is discussed, like difference between arguments and options, as well as options that are common/shared between all programs.
None of Gnuastro's programs keep any internal configuration values (values for their different operational steps); they read their configuration primarily from the command-line, then from specific configuration files in directory, user, or system-wide settings.
Using these configuration files can greatly help reproducible and robust usage of Gnuastro, see @ref{Configuration files} for more.

It is not possible to always have the different options and configurations of each program on the top of your head.
It is very natural to forget the options of a program, their current default values, or how it should be run and what it did.
Gnuastro's programs have several ways to help you refresh your memory at different levels (just an option name, a short description, or fast access to the relevant section of the manual).
See @ref{Getting help} for more on benefiting from this very convenient feature.

Many of the programs use the multi-threaded character of modern CPUs, in @ref{Multi-threaded operations} we'll discuss how you can configure this behavior, along with some tips on making best use of them.
In @ref{Numeric data types}, we'll review the various types to store numbers in your datasets: setting the proper type for the usage context@footnote{For example, if the values in your dataset can only be integers between 0 and 65000, store them in an unsigned 16-bit type, not a 64-bit floating point type (which is the default in most systems).
It takes four times less space and is much faster to process.} can greatly improve the file size and also speed of reading, writing or processing them.

We'll then look into the recognized table formats in @ref{Tables} and how large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
Finally, we'll take a look at the behavior regarding output files: @ref{Automatic output} describes how the programs set a default name for their output when you don't give one explicitly (using @option{--output}).
When the output is a FITS file, all the programs also store some very useful information in the header that is discussed in @ref{Output FITS files}.

@menu
* Command-line::                How to use the command-line.
* Configuration files::         Values for unspecified variables.
* Getting help::                Getting more information on the go.
* Installed scripts::           Installed Bash scripts, not compiled programs.
* Multi-threaded operations::   How threads are managed in Gnuastro.
* Numeric data types::          Different types and how to specify them.
* Memory management::           How memory is allocated (in RAM or HDD/SSD).
* Tables::                      Recognized table formats.
* Tessellation::                Tile the dataset into non-overlapping bins.
* Automatic output::            About automatic output names.
* Output FITS files::           Common properties when outputs are FITS.
@end menu

@node Command-line, Configuration files, Common program behavior, Common program behavior
@section Command-line

Gnuastro's programs are customized through the standard Unix-like command-line environment and GNU style command-line options.
Both are very common in many Unix-like operating system programs.
In @ref{Arguments and options} we'll start with the difference between arguments and options and elaborate on the GNU style of options.
Afterwards, in @ref{Common options}, we'll go into the detailed list of all the options that are common to all the programs in Gnuastro.

@menu
* Arguments and options::       Different ways to specify inputs and configuration.
* Common options::              Options that are shared between all programs.
* Standard input::              Using output of another program as input.
@end menu

@node Arguments and options, Common options, Command-line, Command-line
@subsection Arguments and options

@cindex Shell
@cindex Options to programs
@cindex Command-line options
@cindex Arguments to programs
@cindex Command-line arguments
When you type a command on the command-line, it is passed onto the shell (a generic name for the program that manages the command-line) as a string of characters.
As an example, see the ``Invoking ProgramName'' sections in this manual for some examples of commands with each program, like @ref{Invoking asttable}, @ref{Invoking astfits}, or @ref{Invoking aststatistics}.

The shell then breaks up your string into separate @emph{tokens} or @emph{words} using any @emph{metacharacters} (like white-space, tab, @command{|}, @command{>} or @command{;}) that are in the string.
On the command-line, the first thing you usually enter is the name of the program you want to run.
After that, you can specify two types of tokens: @emph{arguments} and @emph{options}.
In the GNU-style, arguments are those tokens that are not preceded by any hyphens (@command{-}, see @ref{Arguments}).
Here is one example:

@example
$ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs udf.fits
@end example

In the example above, we are running @ref{Crop} to crop a region of width 10 arc-seconds centered at the given RA and Dec from the input Hubble Ultra-Deep Field (UDF) FITS image.
Here, the argument is @file{udf.fits}.
Arguments are most commonly the input file names containing your data.
Options start with one or two hyphens, followed by an identifier for the option (the option's name, for example, @option{--center}, @option{-w}, @option{--mode} in the example above) and its value (anything after the option name, or the optional @key{=} character).
Through options you can configure how the program runs (interprets the data you provided).

@vindex --help
@vindex --usage
@cindex Mandatory arguments
Arguments can be mandatory or optional and unlike options, they don't have any identifiers.
Hence, when there are multiple arguments, their order might also matter (for example in @command{cp}, which is used for copying one file to another location).
The outputs of @option{--usage} and @option{--help} show which arguments are optional and which are mandatory, see @ref{--usage}.
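
For example, assuming you have Crop installed, the command below uses the @option{--usage} option described above to print a single-line summary of Crop's mandatory/optional arguments and option names:

@example
$ astcrop --usage
@end example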

As their name suggests, @emph{options} can be considered to be optional and most of the time, you don't have to worry about what order you specify them in.
When the order does matter, or the option can be invoked multiple times, it is explicitly mentioned in the ``Invoking ProgramName'' section of each program (this is a very important aspect of an option).

@cindex Metacharacters on the command-line
In case your arguments or option values contain any of the shell's metacharacters, you have to quote them.
If there is only one such character, you can use a backslash (@command{\}) before it.
If there are multiple, it might be easier to simply put your whole argument or option value inside of double quotes (@command{"}).
In such cases, everything inside the double quotes will be seen as one token or word.
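
For example, if a (hypothetical) file name contains a white-space character, either of the commands below will pass it to the program as a single token:

@example
$ astfits my\ image.fits
$ astfits "my image.fits"
@end example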

@cindex HDU
@cindex Header data unit
For example, let's say you want to specify the header data unit (HDU) of your FITS file using a complex expression like `@command{3; images(exposure > 100)}'.
If you simply add these after the @option{--hdu} (@option{-h}) option, the programs in Gnuastro will read the value to the HDU option as `@command{3}' and run.
Then, the shell will attempt to run a separate command `@command{images(exposure > 100)}' and complain about a syntax error.
This is because the semicolon (@command{;}) is an `end of command' character in the shell.
To solve this problem you can simply put double quotes around the whole string you want to pass to @option{--hdu} as seen below:
@example
$ astcrop --hdu="3; images(exposure > 100)" image.fits
@end example





@menu
* Arguments::                   For specifying the main input files/operations.
* Options::                     For configuring the behavior of the program.
@end menu

@node Arguments, Options, Arguments and options, Arguments and options
@subsubsection Arguments
In Gnuastro, arguments are almost exclusively used as the input data file names.
Please consult the first few paragraphs of the ``Invoking ProgramName'' section for each program for a description of what it expects as input, how many arguments (or input datasets) it accepts, and in what order.
Everything particular about how a program treats arguments is explained under the ``Invoking ProgramName'' section for that program.

Generally, if there is a standard file name extension for a particular format, that filename extension is used to separate the kinds of arguments.
The list below shows the data formats that are recognized in Gnuastro's programs based on their file name endings.
Any argument that doesn't end with the specified extensions below is considered to be a text file (usually catalogs, see @ref{Tables}).
In some cases, a program can accept specific formats, for example @ref{ConvertType} also accepts @file{.jpg} images.

@cindex Astronomical data suffixes
@cindex Suffixes, astronomical data
@itemize

@item
@file{.fits}: The standard file name ending of a FITS image.

@item
@file{.fit}: Alternative (3 character) FITS suffix.

@item
@file{.fits.Z}: A FITS image compressed with @command{compress}.

@item
@file{.fits.gz}: A FITS image compressed with GNU zip (gzip).

@item
@file{.fits.fz}: A FITS image compressed with @command{fpack}.

@item
@file{.imh}: IRAF format image file.

@end itemize

Throughout this book and in the command-line outputs, whenever we want to generalize all such astronomical data formats in a text place holder, we will use @file{ASTRdata}; we will assume that the extension is also part of this name.
Any file ending with these names is directly passed on to CFITSIO to read.
Therefore you don't necessarily have to have these files on your computer: they can also be located on an FTP or HTTP server, see the CFITSIO manual for more information.

CFITSIO has its own error reporting techniques: if your input file(s) cannot be opened or read, those errors will be printed prior to the final error by Gnuastro.



@node Options,  , Arguments, Arguments and options
@subsubsection Options

@cindex GNU style options
@cindex Options, GNU style
@cindex Options, short (@option{-}) and long (@option{--})
In all GNU/Linux applications, command-line options allow configuring the behavior of a program for each particular execution on a particular input dataset.
A single option can be called in two ways: @emph{long} or @emph{short}.
All options in Gnuastro accept the long format, which has two hyphens and can have many characters (for example @option{--hdu}).
Short options only have one hyphen (@key{-}) followed by one character (for example @option{-h}).
You can see some examples in the list of options in @ref{Common options} or those for each program's ``Invoking ProgramName'' section.
Both formats are shown for options that support both: first the short format, then the long.

Usually, the short options are for when you are writing on the command-line and want to save keystrokes and time.
The long options are good for shell scripts, where you aren't usually rushing.
Long options provide a level of documentation, since they are more descriptive and less cryptic.
Usually after a few months of not running a program, the short options will be forgotten and reading your previously written script will not be easy.

@cindex On/Off options
@cindex Options, on/off
Some options need to be given a value if they are called and some don't.
You can think of the latter type of options as on/off options.
These two types of options can be distinguished using the output of the @option{--help} and @option{--usage} options, which are common to all GNU software, see @ref{Getting help}.
In Gnuastro we use the following strings to specify when the option needs a value and what format that value should be in.
More specific tests will be done in the program and if the values are out of range (for example negative when the program only wants a positive value), an error will be reported.

@vtable @option

@item INT
The value is read as an integer.

@item FLT
The value is read as a float.
Depending on the context, there are generally two kinds: general floats, and fractions (which have to be less than or equal to unity).

@item STR
The value is read as a string of characters (for example, a file name)
or other particular settings like an HDU name, see below.

@end vtable

@noindent
@cindex Values to options
@cindex Option values
To specify a value in the short format, simply put the value after the option; since the short options are only one character long, you don't have to type anything between the option and its value.
For the long format you either need white space or an @option{=} sign.
For example, @option{-h2}, @option{-h 2}, @option{--hdu 2} and @option{--hdu=2} are all equivalent.
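
For example, with a hypothetical @file{image.fits}, all the commands below are equivalent ways of asking Statistics (see @ref{Statistics}) to use the second extension (recall that HDU counting starts from 0):

@example
$ aststatistics image.fits -h1
$ aststatistics image.fits -h 1
$ aststatistics image.fits --hdu 1
$ aststatistics image.fits --hdu=1
@end example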

The short format of on/off options (those that don't need values) can be concatenated for example these two hypothetical sequences of options are equivalent: @option{-a -b -c4} and @option{-abc4}.
As an example, consider the following command to run Crop:
@example
$ astcrop -Dr3 --width 3 catalog.txt --deccol=4 ASTRdata
@end example
@noindent
The @command{$} is the shell prompt, @command{astcrop} is the program name.
There are two arguments (@command{catalog.txt} and @command{ASTRdata}) and four options, two of them given in short format (@option{-D}, @option{-r}) and two in long format (@option{--width} and @option{--deccol}).
Three of them require a value and one (@option{-D}) is an on/off option.

@vindex --printparams
@cindex Options, abbreviation
@cindex Long option abbreviation
If an abbreviation is unique between all the options of a program, the long option names can be abbreviated.
For example, instead of typing @option{--printparams}, typing @option{--print} or maybe even @option{--pri} will be enough; if there are conflicts, the program will warn you and show you the alternatives.
Finally, if you want the argument parser to stop parsing arguments beyond a certain point, you can use two dashes: @option{--}.
No text on the command-line beyond these two dashes will be parsed.

@cindex Repeated options
@cindex Options, repeated
Gnuastro has two types of options with values; those that only take a single value are the most common type.
If these options are repeated or called more than once on the command-line, the value of the last call will be assigned to the option.
This is very useful when you are testing/experimenting.
Let's say you want to make a small modification to one option value.
You can simply type the option with a new value at the end of the command and see how the program behaves.
If you are satisfied with the change, you can remove the original option for human readability.
If the change wasn't satisfactory, you can remove the one you just added and not worry about forgetting the original value.
Without this capability, you would have to memorize or save the original value somewhere else, run the command, and then change the value again, which is not at all convenient and can potentially cause lots of bugs.
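
For example, in the second (hypothetical) command below, only the last @option{--width} is used (the coordinates and file name are just placeholders):

@example
$ astcrop image.fits --mode=wcs --center=1.23,4.56 --width=10/3600
$ astcrop image.fits --mode=wcs --center=1.23,4.56 --width=10/3600 \
          --width=20/3600
@end example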

On the other hand, some options can be called multiple times in one run of a program and can thus take multiple values (for example, see the @option{--column} option in @ref{Invoking asttable}).
In these cases, the order of stored values is the same order that you specified on the command-line.
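
For example, in the hypothetical command below, Table will output two columns in the same order as the repeated @option{--column} calls:

@example
$ asttable cat.fits --column=RA --column=DEC
@end example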

@cindex Configuration files
@cindex Default option values
Gnuastro's programs don't keep any internal default values, so some options are mandatory and if they don't have a value, the program will complain and abort.
Most programs have many such options and typing them by hand on every call is impractical.
To facilitate the user experience, after parsing the command-line, Gnuastro's programs read special configuration files to get the necessary values for the options you haven't specified on the command-line.
These configuration files are fully described in @ref{Configuration files}.

@cartouche
@noindent
@cindex Tilde expansion as option values
@strong{CAUTION:} In specifying a file address, if you want to use the shell's tilde expansion (@command{~}) to specify your home directory, leave at least one space between the option name and your value.
For example use @command{-o ~/test}, @command{--output ~/test} or @command{--output= ~/test}.
Calling them with @command{-o~/test} or @command{--output=~/test} will disable shell expansion.
@end cartouche
@cartouche
@noindent
@strong{CAUTION:} If you forget to specify a value for an option which requires one, and that option is the last one, Gnuastro will warn you.
But if it is in the middle of the command, it will take the text of the next option or argument as the value which can cause undefined behavior.
@end cartouche
@cartouche
@noindent
@cindex Counting from zero.
@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in others 1.
You can assume by default that counting starts from 1, if it starts from 0 for a special option, it will be explicitly mentioned.
@end cartouche

@node Common options, Standard input, Arguments and options, Command-line
@subsection Common options

@cindex Options common to all programs
@cindex Gnuastro common options
To facilitate the job of the users and developers, all the programs in Gnuastro share a set of basic command-line options that are common to many of the programs.
The full list is classified as @ref{Input output options}, @ref{Processing options}, and @ref{Operating mode options}.
In some programs, some of the options are irrelevant, but still recognized (you won't get an unrecognized option error, but the value isn't used).
Unless otherwise mentioned, these options are identical between all programs.

@menu
* Input output options::        Common input/output options.
* Processing options::          Options for common processing steps.
* Operating mode options::      Common operating mode options.
@end menu

@node Input output options, Processing options, Common options, Common options
@subsubsection Input/Output options

These options concern the inputs and outputs of the various
programs.

@vtable @option

@cindex Timeout
@cindex Standard input
@item --stdintimeout
Number of micro-seconds to wait for writing/typing in the @emph{first line} of standard input from the command-line (see @ref{Standard input}).
This is only relevant for programs that also accept input from the standard input, @emph{and} you want to manually write/type the contents on the terminal.
When the standard input is already connected to a pipe (output of another program), there won't be any waiting (hence no timeout, thus making this option redundant).

If the first line-break (for example with the @key{ENTER} key) is not provided before the timeout, the program will abort with an error that no input was given.
Note that this time interval is @emph{only} for the first line that you type.
Once the first line is given, the program will assume that more data will come and accept rest of your inputs without any time limit.
You need to specify the ending of the standard input, for example by pressing @key{CTRL-D} after a new line.

Note that any input you write/type into a program on the command-line with Standard input will be discarded (lost) once the program is finished.
It is only recoverable manually from your command-line (where you actually typed) as long as the terminal is open.
So only use this feature when you are sure that you don't need the dataset (or have a copy of it somewhere else).


@cindex HDU
@cindex Header data unit
@item -h STR/INT
@itemx --hdu=STR/INT
The name or number of the desired Header Data Unit, or HDU, in the FITS image.
A FITS file can store multiple HDUs or extensions, each with either an image or a table or nothing at all (only a header).
Note that counting of the extensions starts from 0 (zero), not 1 (one).
Counting from 0 is forced on us by CFITSIO which directly reads the value you give with this option (see @ref{CFITSIO}).
When specifying the name, case is not important so @command{IMAGE}, @command{image} or @command{ImAgE} are equivalent.

CFITSIO has many capabilities to help you find the extension you want, far beyond the simple extension number and name.
See CFITSIO manual's ``HDU Location Specification'' section for a very complete explanation with several examples.
A @code{#} is appended to the string you specify for the HDU@footnote{With the @code{#} character, CFITSIO will only read the desired HDU into your memory, not all the existing HDUs in the FITS file.} and the result is put in square brackets and appended to the FITS file name before calling CFITSIO to read the contents of the HDU; this is done for all the programs in Gnuastro.
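
For example, assuming a hypothetical @file{image.fits} whose second extension is an image named @code{SCI}, the two commands below are equivalent:

@example
$ aststatistics image.fits --hdu=1
$ aststatistics image.fits --hdu=SCI
@end example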

@item -s STR
@itemx --searchin=STR
Where to match/search for columns when the column identifier wasn't a number, see @ref{Selecting table columns}.
The acceptable values are @command{name}, @command{unit}, or @command{comment}.
This option is only relevant for programs that take table columns as input.

@item -I
@itemx --ignorecase
Ignore case while matching/searching column meta-data (in the field specified by the @option{--searchin}).
The FITS standard suggests treating the column names as case insensitive, which is strongly recommended here also, but is not enforced.
This option is only relevant for programs that take table columns as input.

This option is not relevant to @ref{BuildProgram}, hence in that program the short option @option{-I} is used for include directories, not to ignore case.

@item -o STR
@itemx --output=STR
The name of the output file or directory. With this option the automatic output names explained in @ref{Automatic output} are ignored.

@item -T STR
@itemx --type=STR
The data type of the output depending on the program context.
This option isn't applicable to some programs like @ref{Fits} and will be ignored by them.
The different acceptable values to this option are fully described in @ref{Numeric data types}.

@item -D
@itemx --dontdelete
By default, if the output file already exists, Gnuastro's programs will silently delete it and put their own outputs in its place.
When this option is activated, if the output file already exists, the programs will not delete it, will warn you, and will abort.

@item -K
@itemx --keepinputdir
In automatic output names, don't remove the directory information of the input file names.
As explained in @ref{Automatic output}, if no output name is specified (with @option{--output}), then the output name will be made in the existing directory based on your input's file name (ignoring the directory of the input).
If you call this option, the directory information of the input will be kept and the automatically generated output name will be in the same directory as the input (usually with a suffix added).
Note that this is only relevant if you are running the program in a different directory than the input data.
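
For example, assuming NoiseChisel's automatic output for @file{/path/to/image.fits} would be named @file{image_detected.fits} (see @ref{Automatic output}), the sketch below shows where each command's output would be written:

@example
$ astnoisechisel /path/to/image.fits      # ./image_detected.fits
$ astnoisechisel /path/to/image.fits -K   # /path/to/image_detected.fits
@end example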

@item -t STR
@itemx --tableformat=STR
The output table's type.
This option is only relevant when the output is a table and its format cannot be deduced from its filename.
For example, if a name ending in @file{.fits} was given to @option{--output}, then the program knows you want a FITS table.
But there are two types of FITS tables: FITS ASCII, and FITS binary.
Thus, with this option, the program is able to identify which type you want.
The currently recognized values to this option are:

@table @command
@item txt
A plain text table with white-space characters between the columns (see
@ref{Gnuastro text table format}).
@item fits-ascii
A FITS ASCII table (see @ref{Recognized table formats}).
@item fits-binary
A FITS binary table (see @ref{Recognized table formats}).
@end table
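
For example, the hypothetical command below uses Table (see @ref{Table}) to read a plain text table and write it as a FITS binary table:

@example
$ asttable table.txt --output=table.fits --tableformat=fits-binary
@end example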

@end vtable


@node Processing options, Operating mode options, Input output options, Common options
@subsubsection Processing options

Some processing steps are common to several programs, so they are defined as common options to all programs.
Note that this class of common options is thus necessarily less common between all the programs than those described in @ref{Input output options}, or @ref{Operating mode options} options.
Also, if they are irrelevant for a program, these options will not display in the @option{--help} output of the program.

@table @option

@item --minmapsize=INT
The minimum size (in bytes) to memory-map a processing/internal array as a file (on the non-volatile HDD/SSD), and not use the system's RAM.
Before using this option, please read @ref{Memory management}.
By default processing arrays will only be memory-mapped to a file when the RAM is full.
With this option, you can force the memory-mapping, even when there is enough RAM.
To ensure this default behavior, the pre-defined value to this option is an extremely large value (larger than any existing RAM).

Please note that using a non-volatile file (on the HDD/SSD) instead of RAM can significantly increase the program's running time, especially on HDDs (where read/write is slower).
Also, note that the number of memory-mapped files that your kernel can support is limited.
So when this option is necessary, it is best to give it values larger than 1 megabyte (@option{--minmapsize=1000000}).
You can then decrease it for a specific program's invocation on a large input after you see memory issues arise (for example an error, or the program not aborting and fully consuming your memory).
If you see randomly named files remaining in the directory that holds the memory-mapped files (see @ref{Memory management}) after the program finishes normally, please send us a bug report so we can address the problem, see @ref{Report a bug}.

@cartouche
@noindent
@strong{Limited number of memory-mapped files:} The operating system kernels usually support a limited number of memory-mapped files.
Therefore never set @option{--minmapsize} to zero or a small number of bytes (which would create too many files).
If the kernel capacity is exceeded, the program will crash.
@end cartouche
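
For example, the hypothetical command below forces any internal array larger than roughly 100 megabytes to be memory-mapped to a file, even when there is available RAM:

@example
$ astarithmetic image.fits 2 x -h1 --minmapsize=100000000
@end example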

@item --quietmmap
Don't print any message when an array is stored in non-volatile memory
(HDD/SSD) and not RAM, see the description of @option{--minmapsize} (above)
for more.

@item -Z INT[,INT[,...]]
@itemx --tilesize=INT[,INT[,...]]
The size of regular tiles for tessellation, see @ref{Tessellation}.
For each dimension an integer length (in units of data-elements or pixels) is necessary.
If the number of input dimensions is different from the number of values given to this option, the program will stop with an error.
Values must be separated by commas (@key{,}) and can also be fractions (for example @code{4/2}).
If they are fractions, the result must be an integer, otherwise an error will be printed.
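
For example, the hypothetical command below estimates the median on 30 by 30 pixel tiles of a 2D image:

@example
$ aststatistics image.fits -h1 --median --ontile --tilesize=30,30
@end example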

@item -M INT[,INT[,...]]
@itemx --numchannels=INT[,INT[,...]]
The number of channels for larger input tessellation, see @ref{Tessellation}.
The number and types of acceptable values are similar to @option{--tilesize}.
The only difference is that instead of length, the integer values given to this option represent the @emph{number} of channels, not their size.

@item -F FLT
@itemx --remainderfrac=FLT
The fraction of remainder size along all dimensions to add to the first tile.
See @ref{Tessellation} for a complete description.
This option is only relevant if @option{--tilesize} is not exactly divisible by the input dataset's size in a dimension.
If the remainder size is larger than this fraction (compared to @option{--tilesize}), then the remainder size will be added with one regular tile size and divided between two tiles at the start and end of the given dimension.

@item --workoverch
Ignore the channel borders for the high-level job of the given application.
As a result, while the channel borders are respected in defining the small tiles (such that no tile will cross a channel border), the higher-level program operation will ignore them, see @ref{Tessellation}.

@item --checktiles
Make a FITS file with the same dimensions as the input but each pixel is replaced with the ID of the tile that it is associated with.
Note that the tile IDs start from 0.
See @ref{Tessellation} for more on Tiling an image in Gnuastro.

@item --oneelempertile
When showing the tile values (for example with @option{--checktiles}, or when the program's output is tessellated) only use one element for each tile.
This can be useful when only the relative values given to each tile compared to the rest are important or need to be checked.
Since the tiles usually have a large number of pixels within them, the output will be much smaller, and so easier to read, write, store, or send.

Note that when the full input size in any dimension is not exactly divisible by the given @option{--tilesize} in that dimension, the edge tile(s) will have different sizes (in units of the input's size), see @option{--remainderfrac}.
But with this option, all displayed values are going to have the (same) size of one data-element.
Hence, in such cases, the image proportions are going to be slightly different with this option.

If your input image is not exactly divisible by the tile size and you want one value per tile for some higher-level processing, all is not lost though.
You can see how many pixels were within each tile (for example to weight the values or discard some for later processing) with Gnuastro's Statistics (see @ref{Statistics}) as shown below.
The output FITS file is going to have two extensions, one with the median calculated on each tile and one with the number of elements that each tile covers.
You can then use the @code{where} operator in @ref{Arithmetic} to set the values of all tiles that don't have the regular area to a blank value.

@example
$ aststatistics --median --number --ontile input.fits    \
                --oneelempertile --output=o.fits
$ REGULAR_AREA=1600    # Check second extension of `o.fits'.
$ astarithmetic o.fits o.fits $REGULAR_AREA ne nan where \
                -h1 -h2
@end example

Note that if @file{input.fits} also has blank values, then the median on
tiles with blank values will also be ignored with the command above (which
is desirable).


@item --interponlyblank
When values are to be interpolated, only change the values of the blank
elements; keep the non-blank elements untouched.

@item --interpmetric=STR
@cindex Radial metric
@cindex Taxicab metric
@cindex Manhattan metric
@cindex Metric: Manhattan, Taxicab, Radial
The metric to use for finding nearest neighbors.
Currently it only accepts the Manhattan (or taxicab) metric with @code{manhattan}, or the radial metric with @code{radial}.

The Manhattan distance between two points is defined with @mymath{|\Delta{x}|+|\Delta{y}|}.
Thus the Manhattan metric has the advantage of being fast (it only involves addition and absolute values), but at the expense of being less accurate.
The radial distance is the standard definition of distance in a Euclidean space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}.
It is accurate, but the multiplication and square root can slow down the processing.
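
For example, for two points separated by @mymath{\Delta{x}=3} and @mymath{\Delta{y}=4} pixels, the Manhattan distance is @mymath{3+4=7}, while the radial distance is @mymath{\sqrt{9+16}=5}.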

@item --interpnumngb=INT
The number of nearby non-blank neighbors to use for interpolation.
@end table

@node Operating mode options,  , Processing options, Common options
@subsubsection Operating mode options

Another group of options that are common to all the programs in Gnuastro are those to do with the general operation of the programs.
The explanations for those that are not limited to Gnuastro, but are common to all GNU programs, start with (GNU option).

@vtable @option

@item --
(GNU option) Stop parsing the command-line.
This option can be useful in scripts or when using the shell history.
Suppose you have a long list of options, and want to see if removing some of them (to read from configuration files, see @ref{Configuration files}) can give a better result.
If the ones you want to remove are the last ones on the command-line, you don't have to delete them: just add @option{--} before them, and if you don't get what you want, you can remove the @option{--} and get the same initial result.

@item --usage
(GNU option) Only print the options and arguments and abort.
This is very useful when you know what the options do, and have just forgotten their long/short identifiers, see @ref{--usage}.

@item -?
@itemx --help
(GNU option) Print all options with an explanation and abort.
Adding this option will print all the options in their short and long formats, also displaying which ones need a value if they are called (with an @option{=} after the long format followed by a string specifying the format, see @ref{Options}).
A short explanation is also given for what the option is for.
The program will quit immediately after the message is printed and will not do any form of processing, see @ref{--help}.

@item -V
@itemx --version
(GNU option) Print a short message, showing the full name, version, copyright information and program authors and abort.
On the first line, it will print the official name (not executable name) and version number of the program.
Following this is a blank line and the copyright information.
The program will not run.

@item -q
@itemx --quiet
Don't report steps.
All the programs in Gnuastro that have multiple major steps will report their steps for you to follow while they are operating.
If you do not want to see these reports, you can call this option and only error/warning messages will be printed.
If the steps are done very fast (depending on the properties of your input) disabling these reports will also decrease running time.

@item --cite
Print all necessary information to cite and acknowledge Gnuastro in your published papers.
With this option, the programs will print the Bib@TeX{} entry to include in your paper for Gnuastro in general, and the particular program's paper (if that program comes with a separate paper).
It will also print the necessary acknowledgment statement to add in the respective section of your paper and it will abort.
For a more complete explanation, please see @ref{Acknowledgments}.

Citations and acknowledgments are vital for the continued work on Gnuastro.
Gnuastro started, and is continued, based on separate research projects.
So if you find any of the tools offered in Gnuastro to be useful in your research, please use the output of this command to cite and acknowledge the program (and Gnuastro) in your research paper.
Thank you.

Gnuastro is still new; there is no separate paper devoted only to Gnuastro yet.
Therefore, currently the paper to cite for Gnuastro is the paper for NoiseChisel, which is the first published paper introducing Gnuastro to the astronomical community.
Upon reaching a certain point, a paper completely devoted to describing Gnuastro's many functionalities will be published, see @ref{GNU Astronomy Utilities 1.0}.

@item -P
@itemx --printparams
With this option, Gnuastro's programs will read your command-line options and all the configuration files.
If there is no problem (like a missing parameter, or a value in the wrong format or range), then immediately before actually running, the programs will print the full list of option names, values and descriptions, sorted and grouped by context, and abort.
They will also report the version number, the date they were configured on your system, and the time the report was printed.

As an example, you can give your full command-line options, and even the input and output file names, and finally just add @option{-P} to check if all the parameters are set as you intend.
If everything is OK, you can just run the same command (easily retrieved from the shell history with the up arrow key) and simply remove the last two characters that specified this option.

No program will actually start its processing when this option is called.
The otherwise mandatory arguments for each program (for example input image or catalog files) are no longer required when you call this option.
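
For example, building on the Crop command shown in @ref{Arguments and options}, appending @option{-P} as below will only print the final values of all of Crop's options and abort:

@example
$ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs \
          udf.fits -P
@end example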

@item --config=STR
Parse @option{STR} as a configuration file immediately when this option is confronted (see @ref{Configuration files}).
The @option{--config} option can be called multiple times in one run of any Gnuastro program on the command-line or in the configuration files.
In any case, it will be immediately read (before parsing the rest of the options on the command-line, or lines in a configuration file).

Note that by definition, options on the command-line still take precedence over those in any configuration file, including the file(s) given to this option if they are called before it.
Also see @option{--lastconfig} and @option{--onlyversion} on how this option can be used for reproducible results.
You can use @option{--checkconfig} (below) to check/confirm the parsing of configuration files.

@item --checkconfig
Print options and their values, within the command-line or configuration files, as they are parsed (see @ref{Configuration file precedence}).
If an option has already been set, or is ignored by the program, this option will also inform you with special values like @code{--ALREADY-SET--}.
Only options that are parsed after this option are printed, so to see the parsing of all input options, it is recommended to put this option immediately after the program name before any other options.

@cindex Debug
This is a very good option to confirm where the value of each option has been defined, in scenarios where there are multiple configuration files (for debugging).

@item -S
@itemx --setdirconf
Update the current directory configuration file for the Gnuastro program and quit.
The full set of command-line and configuration file options will be parsed and options with a value will be written in the current directory configuration file for this program (see @ref{Configuration files}).
If the configuration file or its directory doesn't exist, it will be created.
If a configuration file exists, it will be replaced (after it, and all other configuration files, have been read).
In any case, the program will not run.

This is the recommended method@footnote{Alternatively, you can use your favorite text editor.} to edit/set the configuration file for all future calls to Gnuastro's programs.
It will internally check if your values are in the correct range and type and save them according to the configuration file format, see @ref{Configuration file format}.
So if there are unreasonable values to some options, the program will notify you and abort before writing the final configuration file.

When this option is called, the otherwise mandatory arguments, for
example input image or catalog file(s), are no longer mandatory (since
the program will not run).

@item -U
@itemx --setusrconf
Update the user configuration file and quit (see @ref{Configuration files}).
See explanation under @option{--setdirconf} for more details.

@item --lastconfig
This is the last configuration file that must be read.
When this option is confronted in any stage of reading the options (on the command-line or in a configuration file), no other configuration file will be parsed, see @ref{Configuration file precedence} and @ref{Current directory and User wide}.
Like all on/off options, on the command-line, this option doesn't take any values.
But in a configuration file, it takes the values of @option{0} or @option{1}, see @ref{Configuration file format}.
If it is present in a configuration file with a value of @option{0}, then all later occurrences of this option will be ignored.


@item --onlyversion=STR
Only run the program if Gnuastro's version is exactly equal to @option{STR} (see @ref{Version numbering}).
Note that it is not compared as a number, but as a string of characters; so @option{0}, @option{0.0} and @option{0.00} are different.
If the running Gnuastro version is different, then this option will report an error and abort as soon as it is confronted on the command-line or in a configuration file.
If the running Gnuastro version is the same as @option{STR}, then the program will run as if this option was not called.

This is useful if you want your results to be exactly reproducible and not mistakenly run with an updated/newer or older version of the program.
Besides internal algorithmic/behavior changes in programs, the existence of options or their names might change between versions (especially in these earlier versions of Gnuastro).

Hence, when using this option (probably in a script or in a configuration file), be sure to call it before other options.
The benefit is that, when the version differs, the other options won't be parsed and you, or your collaborators/users, won't get errors saying an option in your configuration doesn't exist in the running version of the program.

Here is one example of how this option can be used in conjunction with the @option{--lastconfig} option.
Let's assume that you were satisfied with the results of this command: @command{astnoisechisel image.fits --snquant=0.95} (along with various options set in various configuration files).
You can save the state of NoiseChisel and reproduce that exact result on @file{image.fits} later by following these steps (the extra spaces, and @key{\}, are only for easy readability, if you want to try it out, only one space between each token is enough).

@example
$ echo "onlyversion X.XX"             > reproducible.conf
$ echo "lastconfig 1"                >> reproducible.conf
$ astnoisechisel image.fits --snquant=0.95 -P            \
                                     >> reproducible.conf
@end example

@option{--onlyversion} was available from Gnuastro 0.0, so putting it immediately at the start of a configuration file will ensure that later, you (or others using different version) won't get a non-recognized option error in case an option was added/removed.
@option{--lastconfig} will inform the installed NoiseChisel to not parse any other configuration files.
This is done because we don't want the user's user-wide or system wide option values affecting our results.
Finally, with the third command, which has a @option{-P} (short for @option{--printparams}), NoiseChisel will print all the option values visible to it (in all the configuration files) and the shell will append them to @file{reproducible.conf}.
Hence, you don't have to worry about remembering the (possibly) different options in the different configuration files.

Afterwards, if you run NoiseChisel as shown below (telling it to read this configuration file with the @option{--config} option), you can be sure that there will either be an error (for version mismatch) or it will produce exactly the same result that you got before.

@example
$ astnoisechisel --config=reproducible.conf
@end example

@item --log
Some programs can generate extra information about their outputs in a log file.
When this option is called in those programs, the log file will also be printed.
If the program doesn't generate a log file, this option is ignored.

@cartouche
@noindent
@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed name.
Therefore if two simultaneous calls (with @option{--log}) of a program are made in the same directory, the program will try to write to the same file.
This can cause problems like an unreasonable log file, undefined behavior, or a crash.
@end cartouche

@cindex CPU threads, set number
@cindex Number of CPU threads to use
@item -N INT
@itemx --numthreads=INT
Use @option{INT} CPU threads when running a Gnuastro program (see @ref{Multi-threaded operations}).
If the value is zero (@code{0}), or this option is not given on the command-line or any configuration file, the value will be determined at run-time: the maximum number of threads available to the system when you run a Gnuastro program.

Note that multi-threaded programming is only relevant to some programs.
In others, this option will be ignored.
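
For example, the hypothetical command below limits NoiseChisel to two threads:

@example
$ astnoisechisel image.fits --numthreads=2
@end example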

@end vtable





@node Standard input,  , Common options, Command-line
@subsection Standard input

@cindex Standard input
@cindex Stream: standard input
The most common way to feed the primary/first input dataset into a program is to give its filename as an argument (discussed in @ref{Arguments}).
When you want to run a series of programs in sequence, this means that you have to keep the output of each program in a separate file and re-type that file's name in the next command.
This can be very slow and frustrating (for example, mis-typing a file's name).

@cindex Standard output stream
@cindex Stream: standard output
To solve the problem, the founders of Unix defined pipes to directly feed the output of one program (its ``Standard output'' stream) into the ``standard input'' of a next program.
This removes the need to make temporary files between separate processes and became one of the best demonstrations of the Unix-way, or Unix philosophy.

Every program has three streams identifying where it reads/writes non-file inputs/outputs: @emph{Standard input}, @emph{Standard output}, and @emph{Standard error}.
When a program is called alone, all three are directed to the terminal that you are using.
If it needs an input, it will prompt you for one and you can type it in.
Or, it prints its results in the terminal for you to see.

For example, say you have a FITS table/catalog containing the B and V band magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of galaxies along with many other columns.
If you want to see only these two columns in your terminal, you can use Gnuastro's @ref{Table} program like below:

@example
$ asttable cat.fits -cMAG_B,MAG_V
@end example

Through the Unix pipe mechanism, when the shell confronts the pipe character (@key{|}), it connects the standard output of the program before the pipe, to the standard input of the program after it.
So it is literally a ``pipe'': everything that you would see printed by the first program on the command (without any pipe), is now passed to the second program (and not seen by you).

@cindex AWK
@cindex GNU AWK
To continue the previous example, let's say you want to see the B-V color.
To do this, you can pipe Table's output to AWK (a wonderful tool for processing things like plain text tables):

@example
$ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}'
@end example

But understanding the distribution by visually inspecting all the numbers under each other is not very useful! You can therefore feed this single-column output into @ref{Statistics} to get a general feeling of the distribution with the same command:

@example
$ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}' | aststatistics
@end example

Gnuastro's programs that accept input from standard input only look into the Standard input stream if there is no first argument.
In other words, arguments take precedence over Standard input.
When no argument is provided, the programs check if the standard input stream is already full or not (output from another program is waiting to be used).
If data is present in the standard input stream, it is used.

When the standard input is empty, the program will wait @option{--stdintimeout} micro-seconds for you to manually enter the first line (ending with a new-line character, or the @key{ENTER} key, see @ref{Input output options}).
If it detects the first line in this time, there is no more time limit, and you can manually write/type all the lines for as long as it takes.
To inform the program that Standard input has finished, press @key{CTRL-D} after a new line.
If the program doesn't catch the first line before the time-out finishes, it will abort with an error saying that no input was provided.
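
For example, in the sketch below, three numbers are written into the standard input of Statistics through a pipe (as a single-column table), so no time-out is involved:

@example
$ printf "1\n2\n3\n" | aststatistics
@end example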

@cartouche
@noindent
@strong{Manual input in Standard input is discarded:}
Be careful that when you manually fill the Standard input, the data will be discarded once the program finishes and reproducing the result will be impossible.
Therefore this form of providing input is only good for temporary tests.
@end cartouche

@cartouche
@noindent
@strong{Standard input currently only for plain text:}
Currently Standard input only works for plain text inputs like the example above.
We will later allow FITS files into the programs through standard input also.
@end cartouche


@node Configuration files, Getting help, Command-line, Common program behavior
@section Configuration files

@cindex @file{etc}
@cindex Configuration files
@cindex Necessary parameters
@cindex Default option values
@cindex File system Hierarchy Standard
Each program needs a certain number of parameters to run.
Supplying all the necessary parameters each time you run the program is very frustrating and prone to errors.
Therefore all the programs read the values for the necessary options you have not given on the command-line from one of several plain text files (which you can view and edit with any text editor).
These files are known as configuration files and are usually kept in a directory named @file{etc/} according to the file system hierarchy
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard}}.

@vindex --output
@vindex --numthreads
@cindex CPU threads, number
@cindex Internal default value
@cindex Number of CPU threads to use
The thing to have in mind is that none of the programs in Gnuastro keep any internal default value.
All the values must either be stored in one of the configuration files or explicitly called in the command-line.
In case the necessary parameters are not given through any of these methods, the program will print a missing option error and abort.
The only exception to this is @option{--numthreads}, whose default value is determined at run-time using the number of threads available to your system, see @ref{Multi-threaded operations}.
Of course, you can still provide a default value for the number of threads at any of the levels below, but if you don't, the program will not abort.
Also note that through automatic output name generation, the value to the @option{--output} option is also not mandatory on the command-line or in the configuration files for all programs which don't rely on that value as an input@footnote{One example of a program which uses the value given to @option{--output} as an input is ConvertType, this value specifies the type of the output through the value to @option{--output}, see @ref{Invoking astconvertt}.}, see @ref{Automatic output}.



@menu
* Configuration file format::   ASCII format of configuration file.
* Configuration file precedence::  Precedence of configuration files.
* Current directory and User wide::  Local and user configuration files.
* System wide::                 System wide configuration files.
@end menu

@node Configuration file format, Configuration file precedence, Configuration files, Configuration files
@subsection Configuration file format

@cindex Configuration file suffix
The configuration files for each program have the standard program executable name with a `@file{.conf}' suffix.
When you download the source code, you can find them in the same directory as the source code of each program, see @ref{Program source}.

@cindex White space character
@cindex Configuration file format
Any line in the configuration file whose first non-white character is a @key{#} is considered to be a comment and is ignored.
An empty line is also similarly ignored.
The long name of the option should be used as an identifier.
The parameter name and parameter value have to be separated by any number of `white-space' characters: space, tab or vertical tab.
By default several space characters are used.
If the value of an option has space characters (most commonly for the @option{hdu} option), then the full value can be enclosed in double quotation signs (@key{"}, similar to the example in @ref{Arguments and options}).
If it is an option without a value in the @option{--help} output (on/off option, see @ref{Options}), then the value should be @option{1} if it is to be `on' and @option{0} otherwise.

In each non-commented and non-blank line, any text after the first two words (option identifier and value) is ignored.
If an option identifier is not recognized in the configuration file, the name of the file, the line number of the unrecognized option, and the unrecognized identifier name will be reported and the program will abort.
If a parameter is repeated more than once in the configuration files, accepts only one value, and is not set on the command-line, then only the first value will be used; the rest will be ignored.
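
For example, the lines below are a sketch of a hypothetical configuration file following the rules above (note how any text after the option's value is ignored):

@example
# Example configuration file for a Gnuastro program.
hdu         1        Extension to use (option with a value).
dontdelete  1        On/off option that is activated.
@end example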

@cindex Writing configuration files
@cindex Automatic configuration file writing
@cindex Configuration files, writing
You can build or edit any of the directories and the configuration files yourself using any text editor.
However, it is recommended to use the @option{--setdirconf} and @option{--setusrconf} options to set default values for the current directory or this user, see @ref{Operating mode options}.
With these options, the values you give will be checked before writing in the configuration file.
They will also print a set of commented lines guiding the reader, and will classify the options based on their context, writing them in their logical order to be more understandable.


@node Configuration file precedence, Current directory and User wide, Configuration file format, Configuration files
@subsection Configuration file precedence

@cindex Configuration file precedence
@cindex Configuration file directories
@cindex Precedence, configuration files
The option values in all the programs of Gnuastro will be filled in the following order.
If an option only takes one value which is given in an earlier step, any value for that option in a later step will be ignored.
Note that if the @option{lastconfig} option is specified in any step below, no other configuration files will be parsed (see @ref{Operating mode options}).

@enumerate
@item
Command-line options, for a particular run of ProgramName.

@item
@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current directory.

@item
@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the current directory.

@item
@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the user's home directory (see @ref{Current directory and User wide}).

@item
@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in the user's home directory (see @ref{Current directory and User wide}).

@item
@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the system-wide installation directory (see @ref{System wide} for @file{prefix}).

@item
@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the system-wide installation directory (see @ref{System wide} for @file{prefix}).

@end enumerate

The basic idea behind setting this progressive state of checking for parameter values is that separate users of a computer or separate folders in a user's file system might need different values for some parameters.

@cartouche
@noindent
@strong{Checking the order:}
You can confirm/check the order of parsing configuration files using the @option{--checkconfig} option with any Gnuastro program, see @ref{Operating mode options}.
Just be sure to place this option immediately after the program name, before any other option.
@end cartouche

As you see above, there can also be a configuration file containing the common options in all the programs: @file{gnuastro.conf} (see @ref{Common options}).
If options specific to one program are specified in this file, there will be unrecognized option errors, or unexpected behavior if the option has different behavior in another program.
On the other hand, there is no problem with @file{astprogname.conf} containing common options@footnote{As an example, the @option{--setdirconf} and @option{--setusrconf} options will also write the common options they have read in their produced @file{astprogname.conf}.}.

@cartouche
@noindent
@strong{Manipulating the order:} You can manipulate this order or add new files with the following two options which are fully described in
@ref{Operating mode options}:
@table @option
@item --config
Allows you to define any file to be parsed as a configuration file on the command-line or within any other configuration file.
Recall that the file given to @option{--config} is parsed immediately when this option is confronted (on the command-line or in a configuration file).

@item --lastconfig
Allows you to stop the parsing of subsequent configuration files.
Note that if this option is given in a configuration file, it will be fully read, so its position in the configuration doesn't matter (unlike @option{--config}).
@end table
@end cartouche

One example of benefiting from these configuration files can be this: raw telescope images usually have their main image extension in the second FITS extension, while processed FITS images usually only have one extension.
If your system-wide default input extension is 0 (the first), then when you want to work with the former group of data you have to explicitly mention it to the programs every time.
With this progressive state of default values to check, you can set different default values for the different directories that you would like to run Gnuastro in for your different purposes, so you won't have to worry about this issue any more.
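
For example, the hypothetical commands below set a default HDU of 1 for all Gnuastro programs that are run within the current directory:

@example
$ mkdir -p .gnuastro
$ echo "hdu 1" > .gnuastro/gnuastro.conf
@end example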

The same can be said about the @file{gnuastro.conf} files: by specifying a behavior in this single file, all Gnuastro programs in the respective directory, user, or system-wide steps will behave similarly.
For example to keep the input's directory when no specific output is given (see @ref{Automatic output}), or to not delete an existing file if it has the same name as a given output (see @ref{Input output options}).


@node Current directory and User wide, System wide, Configuration file precedence, Configuration files
@subsection Current directory and User wide

@cindex @file{$HOME}
@cindex @file{./.gnuastro/}
@cindex @file{$HOME/.local/etc/}
For the current (local) and user-wide directories, the configuration files are stored in the hidden sub-directories named @file{.gnuastro/} and @file{$HOME/.local/etc/} respectively.
Unless you have changed it, the @file{$HOME} environment variable should point to your home directory.
You can check it by running @command{$ echo $HOME}.
Each time you run any of the programs in Gnuastro, this environment variable is read and its value is used in the address above.
So if you suddenly see that your home configuration files are not being read, probably you (or some other program) have changed the value of this environment variable.

@vindex --setdirconf
@vindex --setusrconf
Although it might cause confusion like the above, this dependence on the @file{HOME} environment variable enables you to temporarily use a different directory as your home directory.
This can come in handy in complicated situations.
To set the user or current directory configuration files based on your command-line input, you can use the @option{--setdirconf} or @option{--setusrconf}, see @ref{Operating mode options}.



@node System wide,  , Current directory and User wide, Configuration files
@subsection System wide

@cindex @file{prefix/etc/}
@cindex System wide configuration files
@cindex Configuration files, system wide
When Gnuastro is installed, the configuration files that are shipped with the distribution are copied into the (possibly system wide) @file{prefix/etc/} directory.
For more details on @file{prefix}, see @ref{Installation directory} (by default it is: @file{/usr/local}).
This directory is the final place (with the lowest priority) that the programs in Gnuastro will check to retrieve parameter values.

If you remove an option and its value from the system wide configuration files, you either have to specify it in more immediate configuration files or set it each time in the command-line.
Recall that none of the programs in Gnuastro keep any internal default values and will abort if they don't find a value for the necessary parameters (except the number of threads and output file name).
So even if you don't expect to use a given option frequently, it is safest to keep it available in this system-wide configuration file.

Note that in case you install Gnuastro from your distribution's repositories, @file{prefix} will either be set to @file{/} (the root directory) or @file{/usr}, so you can find the system wide configuration variables in @file{/etc/} or @file{/usr/etc/}.
The prefix of @file{/usr/local/} is conventionally used for programs you install from source yourself, as in @ref{Quick start}.










@node Getting help, Installed scripts, Configuration files, Common program behavior
@section Getting help

@cindex Help
@cindex Book formats
@cindex Remembering options
@cindex Convenient book formats
Probably the first time you read this book, it is either in the PDF or HTML formats.
These two formats are very convenient when you are only reading, but not when you are actually working.
Later on, when you start to use the programs and you are deep in the middle of your work, some of the details will inevitably be forgotten.
Going to find the PDF file (printed or digital) or the HTML web page is a major distraction.

@cindex Online help
@cindex Command-line help
GNU software has a unique set of tools for aiding your memory on the command-line, where you are working, depending on how much of it you need to remember.
In the past, such command-line help was known as ``online'' help, because they were literally provided to you `on' the command `line'.
However, nowadays the word ``online'' refers to something on the internet, so that term will not be used.
With this type of help, you can resume your exciting research without taking your hands off the keyboard.

@cindex Installed help methods
Another major advantage of such command-line based help routines is that they are installed with the software in your computer, therefore they are always in sync with the executable you are actually running.
Three of them are actually part of the executable.
You don't have to worry about the version of the book or program.
If you rely on external help (a PDF in your personal print or digital archive, or HTML from the official web page) you have to check whether their versions match your installed program.

If you only need to remember the short or long names of the options, @option{--usage} is advised.
If it is what the options do, then @option{--help} is a great tool.
Man pages are also provided for those who are used to this older system of documentation.
This full book is also available to you on the command-line in Info format.
If none of these seems to resolve the problems, there is a mailing list which enables you to get in touch with experienced Gnuastro users.
In the subsections below each of these methods are reviewed.


@menu
* --usage::                     View option names and value formats.
* --help::                      List all options with description.
* Man pages::                   Man pages generated from --help.
* Info::                        View complete book in terminal.
* help-gnuastro mailing list::  Contacting experienced users.
@end menu

@node --usage, --help, Getting help, Getting help
@subsection @option{--usage}
@vindex --usage
@cindex Usage pattern
@cindex Mandatory arguments
@cindex Optional and mandatory tokens
If you give this option, the program will not run.
It will only print a very concise message showing the options and arguments.
Everything within square brackets (@option{[]}) is optional.
For example, here are the first and last two lines of Crop's @option{--usage} output:

@example
$ astcrop --usage
Usage: astcrop [-Do?IPqSVW] [-d INT] [-h INT] [-r INT] [-w INT]
            [-x INT] [-y INT] [-c INT] [-p STR] [-N INT] [--deccol=INT]
            ....
            [--setusrconf] [--usage] [--version] [--wcsmode]
            [ASCIIcatalog] FITSimage(s).fits
@end example

There are no explanations on the options, just their short and long names shown separately.
After the program name, the short format of all the options that don't require a value (on/off options) is displayed.
Those that do require a value then follow in separate brackets, each displaying the format of the input they want, see @ref{Options}.
Since all options are optional, they are shown in square brackets, but arguments can also be optional.
For example, in the output above, a catalog name is optional and is only required in some modes.
This is a standard method of displaying optional arguments for all GNU software.

@node --help, Man pages, --usage, Getting help
@subsection @option{--help}

@vindex --help
If the command-line includes this option, the program will not be run.
It will print a complete list of all available options along with a short explanation.
The options are also grouped by their context.
Within each context, the options are sorted alphabetically.
Since the options are shown in detail afterwards, the first line of the @option{--help} output shows the arguments and if they are optional or not, similar to @ref{--usage}.

In the @option{--help} output of all programs in Gnuastro, the options for each program are classified based on context.
The first two contexts are always options to do with the input and output respectively.
For example, input image extensions or supplementary input files.
The last class of options is also fixed in all of Gnuastro: it shows the operating mode options.
Most of these options are already explained in @ref{Operating mode options}.

@cindex Long outputs
@cindex Redirection of output
@cindex Command-line, long outputs
The help message will sometimes be longer than the vertical size of your terminal.
If you are using a graphical user interface terminal emulator, you can scroll the terminal with your mouse, but we promised no mice distractions! So here are some suggestions:

@itemize
@item
@cindex Scroll command-line
@cindex Command-line scroll
@cindex @key{Shift + PageUP} and @key{Shift + PageDown}
@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll down.
For most help output this should be enough.
The problem is that it is limited by the number of lines that your terminal keeps in memory and that you can't scroll by lines, only by whole screens.

@item
@cindex Pipe
@cindex @command{less}
Pipe to @command{less}.
A pipe is a form of shell re-direction.
The @command{less} tool in Unix-like systems was made exactly for such outputs of any length.
You can pipe (@command{|}) the output of any program that is longer than the screen to it and then you can scroll through (up and down) with its many tools.
For example:
@example
$ astnoisechisel --help | less
@end example
@noindent
Once you have gone through the text, you can quit @command{less} by pressing the @key{q} key.


@item
@cindex Save output to file
@cindex Redirection of output
Redirect to a file.
This is a less convenient way, because you will then have to open the file in a text editor!
You can do this with the shell redirection tool (@command{>}):
@example
$ astnoisechisel --help > filename.txt
@end example
@end itemize

@cindex GNU Grep
@cindex Searching text
@cindex Command-line searching text
In case you have a special keyword you are looking for in the help, you don't have to go through the full list.
GNU Grep is made for this job.
For example, if you only want the list of Crop's options whose @option{--help} description contains the word ``axis'', you can run the following command:

@example
$ astcrop --help | grep axis
@end example

@cindex @code{ARGP_HELP_FMT}
@cindex Argp argument parser
@cindex Customize @option{--help} output
@cindex @option{--help} output customization
If the output of this option does not fit nicely within the confines of your terminal, GNU does enable you to customize its output through the environment variable @code{ARGP_HELP_FMT}: with it, you can set various parameters which specify the formatting of the help messages.
For example, if your terminal is wider than 70 characters (say 100) and you feel there is too much empty space between the long options and the short explanation, you can change these formats by giving values to this environment variable before running the program with @option{--help}.
You can define this environment variable in this manner:
@example
$ export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
@end example
@cindex @file{.bashrc}
This will affect all GNU programs using GNU C library's @file{argp.h} facilities as long as the environment variable is in memory.
You can see the full list of these formatting parameters in the ``Argp User Customization'' part of the GNU C library manual.
If you are more comfortable reading the @option{--help} outputs of all GNU software in your customized format, you can add your customization (similar to the line above, without the @command{$} sign) to your @file{~/.bashrc} file.
This is a standard option for all GNU software.
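
For example, for the customization above, the line to add to your @file{~/.bashrc} would simply be:

@example
export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
@end example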

@node Man pages, Info, --help, Getting help
@subsection Man pages
@cindex Man pages
Man pages were the traditional Unix method of providing command-line documentation for a program.
With GNU Info (see @ref{Info}), the usage of this method of documentation is highly discouraged, because Info provides a much easier environment to navigate and read.

However, some operating systems require a man page for installed packages, and some people are still used to this method of command-line help.
Therefore, the programs in Gnuastro also have man pages, which are automatically generated from the outputs of @option{--version} and @option{--help} using the GNU help2man program.
So if you run
@example
$ man programname
@end example
@noindent
you will be provided with a man page listing the options in the
standard manner.





@node Info, help-gnuastro mailing list, Man pages, Getting help
@subsection Info

@cindex GNU Info
@cindex Command-line, viewing full book
Info is the standard documentation format for all GNU software.
It is a very useful command-line document viewing format, fully equipped with links between the various pages and menus and search capabilities.
As explained before, the best thing about it is that it is available for you the moment you need to refresh your memory on any command-line tool in the middle of your work without having to take your hands off the keyboard.
This complete book is available in Info format and can be accessed from anywhere on the command-line.

To open the Info format documentation of any installed program or library on your system that has an Info book, you can simply run the command below (change @command{executablename} to the executable name of the program or library):

@example
$ info executablename
@end example

@noindent
@cindex Learning GNU Info
@cindex GNU software documentation
In case you are not already familiar with it, run @command{$ info info}.
It does a fantastic job of explaining all its capabilities itself.
It is very short and you will become sufficiently fluent in about half an hour.
Since all GNU software documentation is also provided in Info, your whole GNU/Linux life will significantly improve.

@cindex GNU Emacs
@cindex GNU C library
Once you've become an efficient navigator in Info, you can go to any part of this book or any other GNU software or library manual, no matter how long it is, in a matter of seconds.
It also blends nicely with GNU Emacs (a text editor): you can search manuals while you are writing your document or programs without taking your hands off the keyboard; this is most useful for libraries like the GNU C library.
To be able to access all the Info manuals installed on your GNU/Linux system within Emacs, type @key{Ctrl-H + i}.

To see this whole book from the beginning in Info, you can run

@example
$ info gnuastro
@end example

@noindent
If you run Info with the particular program executable name, for
example @file{astcrop} or @file{astnoisechisel}:

@example
$ info astprogramname
@end example

@noindent
you will be taken to the section titled ``Invoking ProgramName'' which explains the inputs and outputs along with the command-line options for that program.
Finally, if you run Info with the official program name, for example Crop or NoiseChisel:

@example
$ info ProgramName
@end example

@noindent
you will be taken to the top section which introduces the program.
Note that in all cases, Info is not case sensitive.



@node help-gnuastro mailing list,  , Info, Getting help
@subsection help-gnuastro mailing list

@cindex help-gnuastro mailing list
@cindex Mailing list: help-gnuastro
Gnuastro maintains the help-gnuastro mailing list for users to ask any questions related to Gnuastro.
The experienced Gnuastro users and some of its developers are subscribed to this mailing list and your email will be sent to them immediately.
However, when contacting this mailing list please keep in mind that they are possibly very busy and might not be able to answer immediately.

@cindex Mailing list archives
@cindex @code{help-gnuastro@@gnu.org}
To ask a question from this mailing list, send a mail to @code{help-gnuastro@@gnu.org}.
Anyone can view the mailing list archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}.
It is best that before sending a mail, you search the archives to see if anyone has asked a question similar to yours.
If you want to make a suggestion or report a bug, please don't send a mail to this mailing list.
We have other mailing lists and tools for those purposes, see @ref{Report a bug} or @ref{Suggest new feature}.










@node Installed scripts, Multi-threaded operations, Getting help, Common program behavior
@section Installed scripts

Gnuastro's programs (introduced in previous chapters) are designed to be highly modular and thus mainly contain lower-level operations on the data.
However, in many contexts, higher-level operations (for example a sequence of calls to multiple Gnuastro programs, or a special way of running a program and using the outputs) are also very similar between various projects.

To facilitate data analysis in these higher-level steps, Gnuastro also installs some scripts on your system with the @code{astscript-} prefix (in contrast to the other programs that only have the @code{ast} prefix).

@cindex GNU Bash
Like all of Gnuastro's source code, these scripts are also heavily commented.
They are written in GNU Bash, which doesn't need compilation.
Therefore, if you open the installed scripts in a text editor, you can actually read them@footnote{Gnuastro's installed programs (those only starting with @code{ast}) aren't human-readable.
They are written in C and are thus compiled (translated into optimized binary instructions that are given directly to your CPU).
Because they don't need an interpreter like Bash on every run, they are much faster and more independent than scripts.
To read the source code of the programs, look into the @file{bin/progname} directory of Gnuastro's source (@ref{Downloading the source}).
If you would like to read more about why C was chosen for the programs, please see @ref{Why C}.}.
Bash is the same language that is mainly used when typing on the command-line.
Because of these factors, Bash is much more widely known and used than C (the language of other Gnuastro programs).
Gnuastro's installed scripts also do higher-level operations, so customizing these scripts for a special project will be more common than customizing the programs.
You can always inspect them (to customize, check, or educate yourself) with this command (just replace @code{emacs} with your favorite text editor):

@example
$ emacs $(which astscript-NAME)
@end example

These scripts also accept options and are in many ways similar to the programs (see @ref{Common options}) with some minor differences:

@itemize
@item
Currently they don't accept configuration files themselves.
However, the configuration files of the Gnuastro programs they call are indeed parsed and used by those programs.

As a result, they don't have the following options: @option{--checkconfig}, @option{--config}, @option{--lastconfig}, @option{--onlyversion}, @option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.

@item
They don't directly allocate any memory, so there is no @option{--minmapsize}.

@item
They don't have an independent @option{--usage} option: when called with @option{--usage}, they just recommend running @option{--help}.

@item
The output of @option{--help} is not configurable like the programs (see @ref{--help}).

@item
@cindex GNU AWK
@cindex GNU SED
The scripts will commonly use your installed Bash and other basic command-line tools (for example AWK or SED).
Different systems have different versions and implementations of these basic tools (for example GNU/Linux systems use GNU AWK and GNU SED, which are far more advanced and up to date than the minimalist AWK and SED of most other systems).
Therefore, unexpected errors in these tools might come up when you run these scripts.
We will try our best to write these scripts in a portable way.
However, if you do confront such strange errors, please submit a bug report so we fix it (see @ref{Report a bug}).

@end itemize
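
For example, to see the list of options of any installed script (with the same @code{NAME} placeholder as above), you can run:

@example
$ astscript-NAME --help
@end example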










@node Multi-threaded operations, Numeric data types, Installed scripts, Common program behavior
@section Multi-threaded operations

@pindex nproc
@cindex pthread
@cindex CPU threads
@cindex GNU Coreutils
@cindex Using CPU threads
@cindex CPU, using all threads
@cindex Multi-threaded programs
@cindex Using multiple CPU cores
@cindex Simultaneous multithreading
Some of the programs benefit significantly when you use all the threads your computer's CPU has to offer to your operating system.
The number of threads available can be larger than the number of physical (hardware) cores in the CPU (this is known as simultaneous multithreading).
For example, in Intel's CPUs (those that implement its Hyper-threading technology) the number of threads is usually double the number of physical cores in your CPU.
On a GNU/Linux system, the number of available threads can be found with the @command{nproc} command (part of GNU Coreutils).

@vindex --numthreads
@cindex Number of threads available
@cindex Available number of threads
@cindex Internally stored option value
Gnuastro's programs can find the number of threads available to your system internally at run-time (when you execute the program).
However, if a value is given to the @option{--numthreads} option, the given number will be used, see @ref{Operating mode options} and @ref{Configuration files} for ways to use this option.
Thus @option{--numthreads} is the only common option in Gnuastro's programs with a value that doesn't have to be specified anywhere on the command-line or in the configuration files.
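
For example, to limit a program to 4 threads for one particular run (a minimal sketch; replace the program and input with your own):

@example
$ astnoisechisel --numthreads=4 image.fits
@end example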

@menu
* A note on threads::           Caution and suggestion on using threads.
* How to run simultaneous operations::  How to run things simultaneously.
@end menu

@node A note on threads, How to run simultaneous operations, Multi-threaded operations, Multi-threaded operations
@subsection A note on threads

@cindex Using multiple threads
@cindex Best use of CPU threads
@cindex Efficient use of CPU threads
Spinning off threads is not necessarily the most efficient way to run an application.
Creating a new thread isn't a cheap operation for the operating system.
It is most useful when the input data are fixed and you want the same operation to be done on parts of it.
For example one input image to Crop and multiple crops from various parts of it.
In this fashion, the image is loaded into memory once, all the crops are divided between the number of threads internally and each thread cuts out those parts which are assigned to it from the same image.
On the other hand, if you have multiple images and you want to crop the same region(s) out of all of them, it is much more efficient to set @option{--numthreads=1} (so no threads spin off) and run Crop multiple times simultaneously, see @ref{How to run simultaneous operations}.

@cindex Wall-clock time
You can check the boost in speed by running a program on one of the data sets once with the maximum number of threads, and once more (with everything else the same) using only a single thread.
You will notice that the wall-clock time (reported by most programs at their end) of the former is longer than that of the latter divided by the number of physical CPU cores (not threads) available to your operating system.
Asymptotically these two times can be equal (most of the time they aren't).
So limiting the programs to use only one thread and running them independently on the number of available threads will be more efficient.

@cindex System Cache
@cindex Cache, system
Note that the operating system keeps a cache of recently processed data, so usually the second time you process an identical data set (independent of the number of threads used), you will get faster results.
In order to make an unbiased comparison, you first have to clean the system's cache between the two runs with the following command:

@example
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
@end example
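
For example, an unbiased comparison of the two scenarios above might look like the sketch below (on a hypothetical @file{image.fits}), using the shell's @command{time} keyword to report the wall-clock time:

@example
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
$ time astnoisechisel image.fits
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
$ time astnoisechisel image.fits --numthreads=1
@end example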

@cartouche
@noindent
@strong{SUMMARY: Should I use multiple threads?} Depends:
@itemize
@item
If you only have @strong{one} data set (image in most cases!), then yes, the more threads you use (with a maximum of the number of threads available to your OS) the faster you will get your results.

@item
If you want to run the same operation on @strong{multiple} data sets, it is best to set the number of threads to 1 and use Make, or GNU Parallel, as explained in @ref{How to run simultaneous operations}.
@end itemize
@end cartouche





@node How to run simultaneous operations,  , A note on threads, Multi-threaded operations
@subsection How to run simultaneous operations

There are two@footnote{A third way would be to open multiple terminal emulator windows in your GUI, type the commands separately on each and press @key{Enter} once on each terminal, but this is far too frustrating, tedious and prone to errors.
It's therefore not a realistic solution when tens, hundreds or thousands of operations (your research targets, multiplied by the operations you do on each) are to be done.} approaches to simultaneously execute a program: using GNU Parallel or Make (GNU Make is the most common implementation).
The first is very useful when you only want to do one job multiple times and want to get back to your work without actually keeping the command you ran.
The second is usually for more important operations, with lots of dependencies between the different products (for example a full scientific research).

@table @asis

@item GNU Parallel
@cindex GNU Parallel
When you only want to run multiple instances of a command on different threads and get on with the rest of your work, the best method is to use GNU Parallel.
Surprisingly, GNU Parallel is one of the few GNU packages that has no Info documentation, only a man page, see @ref{Info}.
So to see the documentation after installing it, please run

@example
$ man parallel
@end example
@noindent
As an example, let's assume we want to crop a region centered on the pixel coordinates (500, 600) with the default width from all the FITS images in the @file{./data} directory ending with @file{sci.fits}, into the current directory.
To do this, you can run:

@example
$ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: \
  ./data/*sci.fits
@end example

@noindent
GNU Parallel can help in many more situations; this is one of the simplest, see the man page for many other examples.
For absolute beginners: the backslash (@command{\}) is only a line breaker to fit nicely in the page.
If you type the whole command in one line, you should remove it.

@item Make
@cindex Make
Make is a program for building ``targets'' (e.g., files) using ``recipes'' (a set of operations) when their known ``prerequisites'' (other files) have been updated.
It elegantly allows you to define dependency structures for building your final output and updating it efficiently when the inputs change.
It is the most common infrastructure for building software today.

Scientific research methodology is very similar to software development: you start by testing a hypothesis on a small sample of objects/targets with a simple set of steps.
As you are able to get promising results, you improve the method and use it on a larger, more general, sample.
In the process, you will confront many issues that have to be corrected (bugs in software development jargon).
Make is a wonderful tool for managing this style of development.
It has been used to make reproducible papers, for example see @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).

@cindex GNU Make
GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most common implementation, which (similar to nearly all GNU programs) comes with a wonderful manual@footnote{@url{https://www.gnu.org/software/make/manual/}}.
Make is very basic and simple, and thus the manual is short (the most important parts are in roughly the first 100 pages) and easy to read/understand.

Make comes with a @option{--jobs} (@option{-j}) option which allows you to specify the maximum number of jobs that can be done simultaneously.
For example, if you have 8 threads available to your operating system, you can run:

@example
$ make -j8
@end example

With this command, Make will process your @file{Makefile} and create all the targets (can be thousands of FITS images for example) simultaneously on 8 threads, while fully respecting their dependencies (only building a file/target when its prerequisites are successfully built).
Make is thus strongly recommended for managing scientific research where robustness, archiving, reproducibility and speed@footnote{Besides its multi-threaded capabilities, Make will only re-build those targets that depend on a change you have made, not the whole work.
For example, if you have set the prerequisites properly, you can easily test the changing of a parameter on your paper's results without having to re-do everything (which is much faster).
This allows you to be much more productive in easily checking various ideas/assumptions of the different stages of your research and thus produce a more robust result for your exciting science.} are important.
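
As a minimal sketch of this logic (with hypothetical file names, mirroring the GNU Parallel example above), the @file{Makefile} below builds one @file{*-crop.fits} target from each @file{*.fits} prerequisite in @file{./data}; recall that the recipe line must be indented with a tab:

@example
crops := $(patsubst data/%.fits,%-crop.fits,$(wildcard data/*.fits))

all: $(crops)

%-crop.fits: data/%.fits
        astcrop --numthreads=1 --xc=500 --yc=600 $< --output=$@@
@end example

With this, running @command{make -j8} will produce all the crops on 8 threads, and a later @command{make} will only re-build the crops whose input images have changed.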

@end table





@node Numeric data types, Memory management, Multi-threaded operations, Common program behavior
@section Numeric data types

@cindex Bit
@cindex Type
At the lowest level, the computer stores everything in terms of @code{1} or @code{0}.
For example, each program in Gnuastro, or each astronomical image you take with the telescope is actually a string of millions of these zeros and ones.
The space required to keep a zero or one is the smallest unit of storage, and is known as a @emph{bit}.
However, understanding and manipulating this string of bits is extremely hard for most people.
Therefore, different standards are defined to package the bits into separate @emph{type}s with a fixed interpretation of the bits in each package.

@cindex Byte
@cindex Signed integer
@cindex Unsigned integer
@cindex Integer, Signed
To store numbers, the most basic standard/type is for integers (@mymath{..., -2, -1, 0, 1, 2, ...}).
The common integer types are 8, 16, 32, and 64 bits wide (more bits will give larger limits).
Each bit corresponds to a power of 2 and they are summed to create the final number.
In the integer types, for each width there are two standards for reading the bits: signed and unsigned.
In the `signed' convention, one bit is reserved for the sign (stating that the integer is positive or negative).
The `unsigned' integers use that bit in the actual number and thus contain only positive numbers (starting from zero).

Therefore, with the same number of bits, signed and unsigned integers can represent the same number of distinct values, but the positive limit of the @code{unsigned} types is double that of their @code{signed} counterparts with the same width (at the expense of not having negative numbers).
When the context of your work doesn't involve negative numbers (for example counting, where negative is not defined), it is best to use the @code{unsigned} types.
For the full numerical range of all integer types, see below.

Another standard for converting a given number of bits to numbers is the floating point standard; this standard can @emph{approximately} store any real number with a given precision.
There are two common floating point types: 32-bit and 64-bit, for single and double precision floating point numbers respectively.
The former is sufficient for data with less than 8 significant decimal digits (most astronomical data), while the latter is good for less than 16 significant decimal digits.
The representation of real numbers as bits is much more complex than integers.
If you are interested to learn more about it, you can start with the @url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.

Practically, you can use Gnuastro's Arithmetic program to convert/change the type of an image/datacube (see @ref{Arithmetic}), or Gnuastro Table program to convert a table column's data type (see @ref{Column arithmetic}).
Conversion of a dataset's type is necessary in some contexts.
For example, the program/library that you intend to feed the data into may only accept floating point values, but you have an integer image/column.
Another situation where conversion can be helpful is when you know that your data only has values that fit within @code{int8} or @code{uint16}, but it is currently stored in the @code{float64} type.

The important thing to consider is that operations involving wider, floating point, or signed types can be significantly slower than smaller-width, integer, or unsigned types respectively.
Note that besides speed, a wider type also requires much more storage space (by 4 or 8 times).
Therefore, when you confront such situations and want to store/archive/transfer the data, it is best to use the most efficient type.
For example if your dataset (image or table column) only has positive integers less than 65535, store it as an unsigned 16-bit integer for faster processing, faster transfer, and less storage space.
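
For example, with Gnuastro's Arithmetic program, such a conversion is a single command: the sketch below assumes a hypothetical @file{image.fits} (in HDU 1) whose values fit in the unsigned 16-bit range.

@example
$ astarithmetic image.fits uint16 -h1 --output=image-u16.fits
@end example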

The short and long names for the recognized numeric data types in Gnuastro are listed below.
Both short and long names can be used when you want to specify a type.
For example, as a value to the common option @option{--type} (see @ref{Input output options}), or in the information comment lines of @ref{Gnuastro text table format}.
The ranges listed below are inclusive.

@table @code
@item u8
@itemx uint8
8-bit unsigned integers, range:@*
@mymath{[0\rm{\ to\ }2^8-1]} or @mymath{[0\rm{\ to\ }255]}.

@item i8
@itemx int8
8-bit signed integers, range:@*
@mymath{[-2^7\rm{\ to\ }2^7-1]} or @mymath{[-128\rm{\ to\ }127]}.

@item u16
@itemx uint16
16-bit unsigned integers, range:@*
@mymath{[0\rm{\ to\ }2^{16}-1]} or @mymath{[0\rm{\ to\ }65535]}.

@item i16
@itemx int16
16-bit signed integers, range:@* @mymath{[-2^{15}\rm{\ to\ }2^{15}-1]} or
@mymath{[-32768\rm{\ to\ }32767]}.

@item u32
@itemx uint32
32-bit unsigned integers, range:@* @mymath{[0\rm{\ to\ }2^{32}-1]} or
@mymath{[0\rm{\ to\ }4294967295]}.

@item i32
@itemx int32
32-bit signed integers, range:@* @mymath{[-2^{31}\rm{\ to\ }2^{31}-1]} or
@mymath{[-2147483648\rm{\ to\ }2147483647]}.

@item u64
@itemx uint64
64-bit unsigned integers, range:@* @mymath{[0\rm{\ to\ }2^{64}-1]} or
@mymath{[0\rm{\ to\ }18446744073709551615]}.

@item i64
@itemx int64
64-bit signed integers, range:@* @mymath{[-2^{63}\rm{\ to\ }2^{63}-1]} or
@mymath{[-9223372036854775808\rm{\ to\ }9223372036854775807]}.

@item f32
@itemx float32
32-bit (single-precision) floating point types.
The maximum (minimum is its negative) possible value is @mymath{3.402823\times10^{38}}.
Single-precision floating points can accurately represent a floating point number up to @mymath{\sim7.2} significant decimals.
Given the heavy noise in astronomical data, this is usually more than sufficient for storing results.

@item f64
@itemx float64
64-bit (double-precision) floating point types.
The maximum (minimum is its negative) possible value is @mymath{\sim10^{308}}.
Double-precision floating points can accurately represent a floating point number up to @mymath{\sim15.9} significant decimals.
This is usually good for processing (mixing) the data internally, for example a sum of single precision data (and later storing the result as @code{float32}).
@end table

@cartouche
@noindent
@strong{Some file formats don't recognize all types.} For example the FITS standard (see @ref{Fits}) does not define @code{uint64} in binary tables or images.
When a type is not acceptable for output into a given file format, the respective Gnuastro program or library will let you know and abort.
On the command-line, you can convert the numerical type of an image, or table column into another type with @ref{Arithmetic} or @ref{Table} respectively.
If you are writing your own program, you can use the @code{gal_data_copy_to_new_type()} function in Gnuastro's library, see @ref{Copying datasets}.
@end cartouche



@node Memory management, Tables, Numeric data types, Common program behavior
@section Memory management

@cindex Memory management
@cindex Non-volatile memory
@cindex Memory, non-volatile
In this section we'll review how Gnuastro manages your input data in your system's memory.
Knowing this can help you optimize your usage (in speed and memory consumption) when the data volume is large and approaches, or exceeds, your available RAM (usually in various calls to multiple programs simultaneously).
But before diving into the details, let's have a short basic introduction to memory in general and in particular the types of memory most relevant to this discussion.

Input datasets (that are later fed into programs for analysis) are commonly first stored in @emph{non-volatile memory}.
This is a type of memory that doesn't need a constant power supply to keep the data, and is therefore primarily aimed at long-term storage, like HDDs or SSDs.
So data in this type of storage is preserved when you turn off your computer.
But by its nature, non-volatile memory is much slower, in reading or writing, than the speed at which CPUs can process data.
Thus relying on this type of memory alone would create a bad bottleneck in the input/output (I/O) phase of any processing.

@cindex RAM
@cindex Volatile memory
@cindex Memory, volatile
The first step to decrease this bottleneck is to have a faster storage space, but with a much more limited storage volume.
For this type of storage, computers have a Random Access Memory (or RAM).
RAM is classified as a @emph{volatile memory} because it needs a constant flow of electricity to keep the information.
In other words, the moment power is cut off, all the stored information in your RAM is gone (hence the `volatile' name).
But thanks to that constant supply of power, it can access any random address with equal (and very high!) speed.

Hence, the general/simplistic way that programs deal with memory is the following (this is general to almost all programs, not just Gnuastro's):
@enumerate
@item
Load/copy the input data from the non-volatile memory into RAM.
@item
Use the copy of the data in RAM as input for all the internal processing, as well as for the intermediate data that is necessary during the processing.
@item
Finally, when the analysis is complete, write the final output data back into non-volatile memory, and free/delete all the used space in the RAM (the initial copy and all the intermediate data).
@end enumerate
Usually the RAM is most important for the data of the intermediate steps (that you never see as a user of a program!).

When the input dataset(s) to a program are small (compared to the available space in your system's RAM at the moment it is run) Gnuastro's programs and libraries follow the standard series of steps above.
The only exception is that deleting the intermediate data is not only done at the end of the program.
As soon as an intermediate dataset is no longer necessary for the next internal steps, the space it occupied is deleted/freed.
This allows Gnuastro programs to minimize their usage of your system's RAM over the full running time.

The situation gets complicated when the datasets are large (compared to your available RAM when the program is run).
For example, if a dataset is half the size of your system's available RAM, and the program's internal analysis needs three or more intermediately processed copies of it at one moment in its analysis, there won't be enough RAM to keep those higher-level intermediate data.
In such cases, programs that don't do any memory management will crash.
But fortunately Gnuastro's programs do have a memory management plan for such situations.

@cindex Memory-mapped file
When the necessary amount of space for an intermediate dataset cannot be allocated in the RAM, Gnuastro's programs will not use the RAM at all.
They will use the ``memory-mapped file'' concept in modern operating systems to create a randomly-named file in your non-volatile memory and use that instead of the RAM.
That file will have the exact size (in bytes) of that intermediate dataset.
Any time the program needs that intermediate dataset, the operating system will directly go to that file, and bypass your RAM.
As soon as that file is no longer necessary for the analysis, it will be deleted.
But as mentioned above, non-volatile memory has much slower I/O speed than the RAM.
Hence in such situations, the programs will become noticeably slower (sometimes by a factor of 10 or more, depending on your non-volatile memory speed).

Because of the drop in I/O speed (and thus the speed of your running program), the moment that any to-be-allocated dataset is memory-mapped, Gnuastro's programs and libraries will notify you with a descriptive statement like the one below (this can happen in any phase of their analysis).
It shows the location of the memory-mapped file and its size, complemented with a small description of the cause, a pointer to this section of the book for more information on how to deal with it (if necessary), and what to do to suppress the message.

@example
astarithmetic: ./gnuastro_mmap/Fu7Dhs: temporary memory-mapped file
(XXXXXXXXXXX bytes) created for intermediate data that is not stored
in RAM (see the "Memory management" section of Gnuastro's manual for
optimizing your project's memory management, and thus speed). To
disable this warning, please use the option '--quietmmap'
@end example

@noindent
Finally, when the intermediate dataset is no longer necessary, the program will automatically delete it and notify you with a statement like this:

@example
astarithmetic: ./gnuastro_mmap/B1QgVf: deleted
@end example

@noindent
To disable these messages, you can run the program with @code{--quietmmap}, or set the @code{quietmmap} variable in the allocating library function to be non-zero.

An important component of these messages is the name of the memory-mapped file.
Knowing the name (and whether it has been deleted) is important if the program crashes for any reason: internally (for example a parameter is given wrongly) or externally (for example you mistakenly kill the running job).
In the event of a crash, the memory-mapped files will not be deleted and you have to delete them manually: they are usually large and can quickly fill your storage after successive crashes.

This brings us to managing the memory-mapped files in your non-volatile memory.
In other words: knowing where they are saved, or intentionally placing them in different places of your file system, or deleting them when necessary.
As the examples above show, memory-mapped files are stored in a sub-directory of the running directory called @file{gnuastro_mmap}.
If this directory doesn't exist, Gnuastro will automatically create it when memory mapping becomes necessary.
Alternatively, it may happen that the @file{gnuastro_mmap} sub-directory exists and isn't writable, or it can't be created.
In such cases, the memory-mapped file for each dataset will be created in the running directory with a @file{gnuastro_mmap_} prefix.

Therefore one easy way to delete all memory-mapped files in case of a crash is to delete everything within the sub-directory (first command below), or all files starting with this prefix (second command below):

@example
rm -f gnuastro_mmap/*
rm -f gnuastro_mmap_*
@end example

A much more common issue in dealing with memory-mapped files is their location.
For example you may be running a program in a partition that is hosted by an HDD.
But you also have another partition on an SSD (which has much faster I/O).
So you want your memory-mapped files to be created on the SSD to speed up your processing.
In this scenario, you want your project source directory to only contain your plain-text scripts and you want your project's built products (even the temporary memory-mapped files) to be built in a different location because they are large; thus I/O speed becomes important.

To host the memory-mapped files in another location (with fast I/O), you can make @file{gnuastro_mmap} a symbolic link to it.
For example, let's assume you want your memory-mapped files to be stored in @file{/path/to/dir/for/mmap}.
All you have to do is to run the following command before your Gnuastro analysis command(s):

@example
ln -s /path/to/dir/for/mmap gnuastro_mmap
@end example

The programs will delete a memory-mapped file when it is no longer needed, but they won't delete the @file{gnuastro_mmap} directory that hosts them.
So if your project involves many Gnuastro programs (possibly called in parallel) and you want your memory-mapped files to be in a different location, you just have to make the symbolic link above once at the start, and all the programs will use it if necessary.

Another memory-management scenario that may happen is this: you don't want a Gnuastro program to allocate internal datasets in the RAM at all.
For example the speed of your Gnuastro-related project doesn't matter at that moment, and you have higher-priority jobs that are being run at the same time which need to have RAM available.
In such cases, you can use the @option{--minmapsize} option that is available in all Gnuastro programs (see @ref{Processing options}).
Any intermediate dataset that has a size larger than the value of this option will be memory-mapped, even if there is space available in your RAM.
For example if you want any dataset larger than 100 megabytes to be memory-mapped, use @option{--minmapsize=100000000} (8 zeros!).
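
For example, a minimal sketch of such a run on a hypothetical @file{image.fits} would be:

@example
$ astarithmetic image.fits sqrt --hdu=1 --minmapsize=100000000
@end example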

@cindex Linux kernel
@cindex Kernel, Linux
You shouldn't set the value of @option{--minmapsize} to be too small; otherwise even small intermediate values (that are usually very numerous) in the program will be memory-mapped.
However, the kernel can only host a limited number of memory-mapped files at any moment (from all running programs combined).
For example in the default@footnote{If you need to host more memory-mapped files at one moment, you need to build your own customized Linux kernel.} Linux kernel on GNU/Linux operating systems this limit is roughly 64000.
If the total number of memory-mapped files exceeds this number, all the programs using them will crash.
Gnuastro's programs will warn you if your given value is too small and may cause a problem later.

Actually, the default behavior for Gnuastro's programs (to only use memory-mapped files when there isn't enough RAM) is a side-effect of @option{--minmapsize}.
The pre-defined value of this option is an extremely large value in the lowest-level Gnuastro configuration file (the installed @file{gnuastro.conf} described in @ref{Configuration file precedence}).
This value is larger than the largest possible available RAM.
You can check it by running any Gnuastro program with the @option{-P} option.
Because no dataset will be larger than this, by default the programs will first attempt to use the RAM for temporary storage.
But if writing in the RAM fails (for any reason, mainly due to lack of available space), then a memory-mapped file will be created.
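
For example, you can inspect this pre-defined value (as read from the installed configuration file) with a command like this:

@example
$ astarithmetic -P | grep minmapsize
@end example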





@node Tables, Tessellation, Memory management, Common program behavior
@section Tables

``A table is a collection of related data held in a structured format within a database.
It consists of columns, and rows.'' (from Wikipedia).
Each column in the table contains the values of one property and each row is a collection of properties (columns) for one target object.
For example, let's assume you have just run MakeCatalog (see @ref{MakeCatalog}) on an image to measure some properties for the labeled regions (which might be detected galaxies for example) in the image.
For each labeled region (detected galaxy), there will be a @emph{row} which groups its measured properties as @emph{columns}, one column for each property.
One such property can be the object's magnitude, which is derived from the sum of its pixel values, or its center, which can be defined as the light-weighted average position of those pixels.
Many such properties can be derived from the raw pixel values and their position, see @ref{Invoking astmkcatalog} for a long list.

As a summary, for each labeled region (or, galaxy) we have one @emph{row} and for each measured property we have one @emph{column}.
This high-level structure is usually the first step for higher-level analysis, for example finding the stellar mass or photometric redshift from magnitudes in multiple colors.
Thus, tables are not just outputs of programs; in fact, it is much more common for tables to be inputs of programs.
For example, to make a mock galaxy image, you need to feed the properties of each galaxy into @ref{MakeProfiles} for it to do the inverse of the process above and make a simulated image from a catalog, see @ref{Sufi simulates a detection}.
In other cases, you can feed a table into @ref{Crop} and it will crop out regions centered on the positions within the table, see @ref{Finding reddest clumps and visual inspection}.
So to end this relatively long introduction: tables play a very important role in astronomy, and generally in all branches of data analysis.

In @ref{Recognized table formats} the currently recognized table formats in Gnuastro are discussed.
You can use any of these tables as input or ask for them to be built as output.
The most common type of table format is a simple plain text file, with each row on one line and columns separated by white space characters; this format is easy to read/write by eye/hand.
To give it the full functionality of more specific table types like the FITS tables, Gnuastro has a special convention which you can use to give each column a name, type, unit, and comments, while still being readable by other plain text table readers.
This convention is described in @ref{Gnuastro text table format}.

When tables are input to a program, the program reading it needs to know which column(s) it should use for its desired purposes.
Gnuastro's programs all follow a similar convention for the way you can select columns in a table.
They are thoroughly discussed in @ref{Selecting table columns}.


@menu
* Recognized table formats::    Table formats that are recognized in Gnuastro.
* Gnuastro text table format::  Gnuastro's convention for plain text tables.
* Selecting table columns::     Identify/select certain columns from a table
@end menu

@node Recognized table formats, Gnuastro text table format, Tables, Tables
@subsection Recognized table formats

The list of table formats that Gnuastro can currently read from and write to are described below.
Each has its own advantages and disadvantages, so a short review of each format is also provided to help you make the best choice based on how you want to define your input tables or later use your output tables.

@table @asis

@item Plain text table
This is the most basic and simplest way to create, view, or edit the table by hand on a text editor.
The other formats described below are less eye-friendly and have a more formal structure (for easier computer readability).
It is fully described in @ref{Gnuastro text table format}.

@cindex FITS Tables
@cindex Tables FITS
@cindex ASCII table, FITS
@item FITS ASCII tables
The FITS ASCII table extension is fully in ASCII encoding and thus easily readable on any text editor (assuming it is the only extension in the FITS file).
If the FITS file also contains binary extensions (for example an image or binary table extensions), then there will be many hard to print characters.
The FITS ASCII format doesn't have new line characters to separate rows.
In the FITS ASCII table standard, each row is defined as a fixed number of characters (value to the @code{NAXIS1} keyword), so to visually inspect it properly, you would have to adjust your text editor's width to this value.
All columns start at given character positions and have a fixed width (number of characters).

Numbers in a FITS ASCII table are printed in ASCII format; they are not in the binary format that the CPU uses.
Hence, they can take more space in memory, lose their precision, and take longer to read into memory.
If you are dealing with integer type columns (see @ref{Numeric data types}), another issue with FITS ASCII tables is that the type information for the column will be lost (there is only one integer type in FITS ASCII tables).
One problem with the binary format, on the other hand, is that it isn't portable (different CPUs/compilers have different standards for translating the zeros and ones).
But since ASCII characters are defined on a byte and are well recognized, they are better for portability on those various systems.
Gnuastro's plain text table format described below is much more portable and easier to read/write/interpret by humans manually.

Generally, as the name implies, this format is useful for when your table mainly contains ASCII columns (for example file names, or descriptions).
They can be useful when you need to include columns with structured ASCII information along with other extensions in one FITS file.
In such cases, you can also consider header keywords (see @ref{Fits}).

@cindex Binary table, FITS
@item FITS binary tables
The FITS binary table is the FITS standard's solution to the issues discussed with keeping numbers in ASCII format as described under the FITS ASCII table title above.
Only columns defined as a string type (a string of ASCII characters) are readable in a text editor.
The portability problem with binary formats discussed above is mostly solved thanks to the portability of CFITSIO (see @ref{CFITSIO}) and the very long history of the FITS format which has been widely used since the 1970s.

In the case of most numbers, storing them in binary format is more memory efficient than ASCII format.
For example, to store @code{-25.72034} in ASCII format, you need 9 bytes/characters.
But if you keep this same number (to the approximate precision possible) as a 4-byte (32-bit) floating point number, you can keep/transmit it with less than half the amount of memory.
When catalogs contain thousands/millions of rows in tens/hundreds of columns, this can lead to significant improvements in memory/band-width usage.
Moreover, since the CPU does its operations in the binary formats, reading the table in and writing it out is also much faster than an ASCII table.

When you are dealing with integer numbers, the savings can be even better: for example, if you know all of the values in a column are positive and less than @code{255}, you can use the @code{unsigned char} type, which only takes one byte! If they are between @code{-128} and @code{127}, then you can use the (signed) @code{char} type.
So if you are thoughtful about the limits of your integer columns, you can greatly reduce the size of your file and also the speed at which it is read/written.
This can be very useful when sharing your results with collaborators or publishing them.
To decrease the file size even more you can name your output as ending in @file{.fits.gz} so it is also compressed after creation.
Just note that compression/decompressing is CPU intensive and can slow down the writing/reading of the file.
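
For example, with Gnuastro's Table program and its column arithmetic, such a conversion might look like the sketch below (assuming a hypothetical catalog @file{cat.fits} with a column named @code{COUNT} whose values fit in 8 bits; see @ref{Column arithmetic} for the exact syntax):

@example
$ asttable cat.fits -c'arith COUNT uint8' --output=cat-u8.fits
@end example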

Fortunately the FITS Binary table format also accepts ASCII strings as column types (along with the various numerical types).
So your dataset can also contain non-numerical columns.

@end table


@node Gnuastro text table format, Selecting table columns, Recognized table formats, Tables
@subsection Gnuastro text table format

Plain text files are the most generic, portable, and easiest way to (manually) create, (visually) inspect, or (manually) edit a table.
In this format, the ending of a row is defined by the new-line character (a line on a text editor).
So when you view it on a text editor, every row will occupy one line.
The delimiters (or characters separating the columns) are white space characters (space, horizontal tab, vertical tab) and a comma (@key{,}).
The only further requirement is that all rows/lines must have the same number of columns.

The columns don't have to be exactly under each other and the rows can be arbitrarily long with different lengths.
For example the following contents in a file would be interpreted as a table with 4 columns and 2 rows, with each element interpreted as a @code{double} type (see @ref{Numeric data types}).

@example
1     2.234948   128   39.8923e8
2 , 4.454        792     72.98348e7
@end example

However, the example above has no other information about the columns (it is just raw data, with no meta-data).
To use this table, you have to remember what the numbers in each column represent.
Also, when you want to select columns, you have to count their position within the table.
This can become frustrating and prone to errors (getting the columns wrong), especially as the number of columns increases.
It is also bad for sending to a colleague, because they will find it hard to remember/use the columns properly.

To solve these problems in Gnuastro's programs/libraries, you aren't limited to using the column's number, see @ref{Selecting table columns}.
If the columns have names, units, or comments you can also select your columns based on searches/matches in these fields, for example see @ref{Table}.
Also, with the bare-minimum format above, you can't guide the program reading the table on how to read the numbers.
As an example, the first and third columns above can be read as integer types: the first column might be an ID and the third can be the number of pixels an object occupies in an image.
So there is no need to read these two columns as a @code{double} type (which takes more memory, and is slower).

In the bare-minimum example above, you also can't use strings of characters, for example the names of filters, or some other identifier that includes non-numerical characters.
In the absence of any information, only numbers can be read robustly.
Assuming we read columns with non-numerical characters as strings, there would still be the problem that the strings might contain space (or other delimiter) characters in some rows.
So, each `word' in the string will be interpreted as a column and the program will abort with an error that the rows don't have the same number of columns.

To correct for these limitations, Gnuastro defines the following convention for storing the table meta-data along with the raw data in one plain text file.
The format is primarily designed for ease of reading/writing by eye/fingers, but is also structured enough to be read by a program.

When the first non-white character in a line is @key{#}, or there are no non-white characters in it, then the line will not be considered as a row of data in the table (this is a pretty standard convention in many programs, and higher level languages).
In the former case, the line is interpreted as a @emph{comment}.
If the comment line starts with `@code{# Column N:}', then it is assumed to contain information about column @code{N} (a number, counting from 1).
Comment lines that don't start with this pattern are ignored and you can use them to include any further information you want to store with the table in the text file.
A column information comment is assumed to have the following format:

@example
# Column N: NAME [UNIT, TYPE, BLANK] COMMENT
@end example

@cindex NaN
@noindent
Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted as the column name (so it can contain anything except the `@key{[}' character).
Anything between the `@key{]}' and the end of the line is defined as a comment.
Within the brackets, anything before the first `@key{,}' is the units (physical units, for example km/s, or erg/s), anything before the second `@key{,}' is the short type identifier (see below, and @ref{Numeric data types}).

Finally (still within the brackets), any non-white characters after the second `@key{,}' are interpreted as the blank value for that column (see @ref{Blank pixels}).
The blank value can either be in the same type as the column (for example @code{-99} for a signed integer column), or any string (for example @code{NaN} in that same column).
In both cases, the values will be stored in memory as Gnuastro's fixed blank values for each type.
For floating point types, Gnuastro's internal blank value is IEEE NaN (Not-a-Number).
For signed integers, it is the smallest possible value and for unsigned integers, it is the largest possible value.

When a formatting problem occurs (for example you have specified the wrong type code, see below), or the column was already given meta-data in a previous comment, or the column number is larger than the actual number of columns in the table (the non-commented or empty lines), then the comment information line will be ignored.

When a comment information line can be used, the leading and trailing white space characters will be stripped from all of the elements.
For example in this line:

@example
# Column 5:  column name   [km/s,    f32,-99] Redshift as speed
@end example

The @code{NAME} field will be `@code{column name}' and the @code{TYPE} field will be `@code{f32}'.
Note how the white space characters before and after each string are stripped, while those in the middle remain.
Also, white space characters after the commas aren't mandatory.
Hence, in the example above, the @code{BLANK} field will be given the value of `@code{-99}'.

Except for the column number (@code{N}), the rest of the fields are optional.
Also, the column information comments don't have to be in order.
In other words, the information for column @mymath{N+m} (@mymath{m>0}) can be given in a line before column @mymath{N}.
Also, you don't have to specify information for all columns.
Those columns that don't have this information will be interpreted with the default settings (like the case above: values are double precision floating point, and the column has no name, unit, or comment).
So these lines are all acceptable for any table (the first one, with nothing but the column number is redundant):

@example
# Column 5:
# Column 1: ID [,i8] The Clump ID.
# Column 3: mag_f160w [AB mag, f32] Magnitude from the F160W filter
@end example

@noindent
The data type of the column should be specified with one of the following values:

@itemize
@item
For a numeric column, you can use any of the numeric types (and their
recognized identifiers) described in @ref{Numeric data types}.
@item
`@code{strN}': for strings.
The @code{N} value identifies the length of the string (how many characters it has).
The start of the string on each row is the first non-delimiter character of the column that has the string type.
The next @code{N} characters will be interpreted as a string and all leading and trailing white space will be removed.

If the next column's characters are closer than @code{N} characters to the start of the string column in that line/row, they will be considered part of the string column.
If there is a new-line character before the ending of the space given to the string column (in other words, the string column is the last column), then reading of the string will stop, even if the @code{N} characters are not complete yet.
See @file{tests/table/table.txt} for one example.
Therefore, the only time you have to pay attention to the positioning and spaces given to the string column is when it is not the last column in the table.

The only limitation in this format is that trailing and leading white space characters will be removed from the columns that are read.
In most cases, this is the desired behavior, but if trailing and leading white-spaces are critically important to your analysis, define your own starting and ending characters and remove them after the table has been read.
For example in the sample table below, the two `@key{|}' characters (which are arbitrary) will remain in the value of the second column and you can remove them manually later.
If only one of the leading or trailing white spaces is important for your work, you can use just one of the `@key{|}'s.

@example
# Column 1: ID [label, u8]
# Column 2: Notes [no unit, str50]
1    leading and trailing white space is ignored here    2.3442e10
2   |         but they will be preserved here        |   8.2964e11
@end example

@end itemize
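
Putting all these conventions together, below is a small hypothetical example file (the column names, units and values are arbitrary).
Note how the string column is not the last column, so its data must not extend beyond the 10 characters given to it:

@example
# Column 1: ID        [counter, u8]       Object identifier.
# Column 2: NAME      [name, str10]       Object name.
# Column 3: MAG_F160W [AB mag, f32, -99]  Magnitude in F160W.
1   NGC2020      18.32
2   NGC2021      -99
@end example

@noindent
Assuming it is saved as @file{mytable.txt}, you can check how the meta-data are read with the Table program (see @ref{Table}):

@example
$ asttable mytable.txt --information
@end example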

Note that the FITS binary table standard does not define the @code{unsigned int} and @code{unsigned long} types, so if you want to convert your tables to FITS binary tables, use other types.
Also, note that in the FITS ASCII table, there is only one integer type (@code{long}).
So if you convert a Gnuastro plain text table to a FITS ASCII table with the @ref{Table} program, the type information for integers will be lost.
Conversely, if integer types are important to you, you have to manually set them when reading a FITS ASCII table (for example with the Table program when reading/converting into a file, or with the @file{gnuastro/table.h} library functions when reading into memory).


@node Selecting table columns,  , Gnuastro text table format, Tables
@subsection Selecting table columns

At the lowest level, the only defining aspect of a column in a table is its number, or position.
But selecting columns purely by number is not very convenient and, especially when the tables are large, it can be very frustrating and prone to errors.
Hence, table file formats (for example see @ref{Recognized table formats}) have ways to store additional information about the columns (meta-data).
Some of the most common pieces of information about each column are its @emph{name}, the @emph{units} of data in it, and a @emph{comment} for longer/informal description of the column's data.

To facilitate research with Gnuastro, you can select columns by matching, or searching in these three fields, besides the low-level column number.
To view the full list of information on the columns in the table, you can use the Table program (see @ref{Table}) with the command below (replace @file{table-file} with the filename of your table; if it is a FITS file, you might also need to specify the HDU/extension which contains the table):

@example
$ asttable --information table-file
@end example

Gnuastro's programs need the columns for different purposes, for example in Crop, you specify the columns containing the central coordinates of the crop centers with the @option{--coordcol} option (see @ref{Crop options}).
On the other hand, in MakeProfiles, to specify the column containing the profile position angles, you must use the @option{--pcol} option (see @ref{MakeProfiles catalog}).
Thus, there can be no unified common option name to select columns for all programs (different columns have different purposes).
However, when the program expects a column for a specific context, the option names end in the @option{col} suffix like the examples above.
These options accept values in integer (column number), or string (metadata match/search) format.

If the value can be parsed as a positive integer, it will be seen as the low-level column number.
Note that column counting starts from 1, so if you ask for column 0, the respective program will abort with an error.
When the value can't be interpreted as an integer number, it will be seen as a string of characters which will be used to match/search in the table's meta-data.
The meta-data field which the value will be compared with can be selected through the @option{--searchin} option, see @ref{Input output options}.
@option{--searchin} can take three values: @code{name}, @code{unit}, @code{comment}.
The matching will be done following this convention:

@itemize
@item
If the value is enclosed in two slashes (for example @command{-x/RA_/}, or @option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a regular expression with the same convention as GNU AWK.
GNU AWK has a very well written @url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html, chapter} describing regular expressions, so we will not continue discussing them here.
Regular expressions are a very powerful tool in matching text and useful in many contexts.
We thus strongly encourage you to review that chapter; it can greatly improve the quality of your work in many contexts, not just when searching column meta-data in Gnuastro.

@item
When the string isn't enclosed between `@key{/}'s, any column that exactly matches the given value in the given field will be selected.
@end itemize

Note that in both cases, you can ignore the case of alphabetic characters with the @option{--ignorecase} option, see @ref{Input output options}.
Also, in both cases, multiple columns may be selected with a single value to such an option.
In this case, the selected columns will be in the same order as they appear in the table.
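
For example, the commands below show the three ways of selecting columns with the Table program (see @ref{Table}); the file @file{table.fits} and its column names are hypothetical:

@example
## Select the first column by number:
$ asttable table.fits --column=1

## Select a column by an exact name match:
$ asttable table.fits --column=RA

## Select all columns with `MAG_' in their name (AWK regex):
$ asttable table.fits --column=/MAG_/
@end example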





@node Tessellation, Automatic output, Tables, Common program behavior
@section Tessellation

It is sometimes necessary to classify the elements in a dataset (for example pixels in an image) into a grid of individual, non-overlapping tiles.
For example when background sky gradients are present in an image, you can define a tile grid over the image.
When the tile sizes are set properly, the background's variation over each tile will be negligible, allowing you to measure (and subtract) it.
In other cases (for example spatial domain convolution in Gnuastro, see @ref{Convolve}), it might simply be for speed of processing: each tile can be processed independently on a separate CPU thread.
In the arts and mathematics, this process is formally known as @url{https://en.wikipedia.org/wiki/Tessellation, tessellation}.

The size of the regular tiles (in units of data-elements, or pixels in an image) can be defined with the @option{--tilesize} option.
It takes multiple numbers (separated by a comma) which will be the length along the respective dimension (in FORTRAN/FITS dimension order).
Divisions are also acceptable, but must result in an integer.
For example @option{--tilesize=30,40} can be used for an image (a 2D dataset).
The regular tile size along the first FITS axis (horizontal when viewed in SAO ds9) will be 30 pixels and along the second it will be 40 pixels.
Ideally, @option{--tilesize} should be selected such that all tiles in the image have exactly the same size.
In other words, the dataset length in each dimension should be divisible by the tile size in that dimension.

However, this is not always possible: the dataset can be any size and every pixel in it is valuable.
In such cases, Gnuastro will look at the significance of the remainder length; if it is not significant (for example one or two pixels), it will just increase the size of the first tile in the respective dimension and allow the rest of the tiles to have the required size.
When the remainder is significant (for example one pixel less than the size along that dimension), the remainder will be added to one regular tile's size and the large tile will be cut in half and put in the two ends of the grid/tessellation.
In this way, all the tiles in the central regions of the dataset will have the regular tile sizes and the tiles on the edge will be slightly larger/smaller depending on the remainder significance.
The fraction which defines the remainder significance along all dimensions can be set through @option{--remainderfrac}.

The best tile size is directly related to the spatial scale of the feature you want to study (for example, the gradient over the image).
In practice we assume that the gradient is not present over each tile.
So if there is a strong gradient (for example in long wavelength ground based images) or the image is of a crowded area without much blank sky, you have to choose a smaller tile size.
A larger tile will contain more pixels and so the scatter in the results will be smaller (better statistics).

@cindex CCD
@cindex Amplifier
@cindex Bias current
@cindex Subaru Telescope
@cindex Hyper Suprime-Cam
@cindex Hubble Space Telescope (HST)
For raw image processing, a single tessellation/grid is not sufficient.
Raw images are the unprocessed outputs of the camera detectors.
Modern detectors usually have multiple readout channels each with its own amplifier.
For example the Hubble Space Telescope Advanced Camera for Surveys (ACS) has four amplifiers over its full detector area, dividing the square field of view into four smaller squares.
Ground based image detectors are not exempt: for example, each CCD of the Subaru Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have the same height as the CCD and divide its width into four parts.

@cindex Channel
The bias current on each amplifier is different, and initial bias subtraction is not perfect.
So even after subtracting the measured bias current, you can usually still identify the boundaries of different amplifiers by eye.
See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an example.
This causes the final reduced data to have non-uniform amplifier-shaped regions with higher or lower background flux values.
Such systematic biases will then propagate to all subsequent measurements we do on the data (for example photometry and subsequent stellar mass and star formation rate measurements in the case of galaxies).

Therefore an accurate analysis requires a two layer tessellation: the top layer contains larger tiles, each covering one amplifier channel.
For clarity we'll call these larger tiles ``channels''.
The number of channels along each dimension is defined through the @option{--numchannels} option.
Each channel is then covered by its own individual smaller tessellation (with tile sizes determined by the @option{--tilesize} option).
This will allow independent analysis of two adjacent pixels from different channels if necessary.
If the image is processed or the detector only has one amplifier, you can set the number of channels in both dimensions to 1.

The final tessellation can be inspected on the image with the @option{--checktiles} option that is available to all programs which use tessellation for localized operations.
When this option is called, a FITS file with a @file{_tiled.fits} suffix will be created along with the outputs, see @ref{Automatic output}.
Each pixel in this image has the number of the tile that covers it.
If the number of channels in any dimension is larger than unity, you will notice that the tile IDs are defined such that the first channel is covered first, then the second and so on.
For the full list of processing-related common options (including tessellation options), please see @ref{Processing options}.
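
For example, the sketch below (using NoiseChisel, with hypothetical file name and values) defines a grid of 40x40 pixel tiles over two channels along the first FITS axis, and writes the tile IDs into a file ending in @file{_tiled.fits} for inspection:

@example
$ astnoisechisel image.fits --tilesize=40,40 \
                 --numchannels=2,1 --checktiles
@end example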





@node Automatic output, Output FITS files, Tessellation, Common program behavior
@section Automatic output

@cindex Standard input
@cindex Automatic output file names
@cindex Output file names, automatic
@cindex Setting output file names automatically
All the programs in Gnuastro are designed such that specifying an output file or directory (based on the program context) is optional.
When no output name is explicitly given (with @option{--output}, see @ref{Input output options}), the programs will automatically set an output name based on the input name(s) and what the program does.
For example when you are using ConvertType to save a FITS image named @file{dataset.fits} to a JPEG image and don't specify a name for it, the JPEG output file will be named @file{dataset.jpg}.
When the input is from the standard input (for example a pipe, see @ref{Standard input}), and @option{--output} isn't given, the output name will be the program's name (for example @file{converttype.jpg}).

@vindex --keepinputdir
Another very important part of the automatic output generation is that all the directory information of the input file name is stripped off of it.
This feature can be disabled with the @option{--keepinputdir} option, see @ref{Input output options}.
It is the default because astronomical data are usually very large and organized in dedicated directories with special file names.
In some cases, the user might not have write permissions in those directories@footnote{In fact, even if the data is stored on your own computer, it is advised to only grant write permissions to the super user or root.
This way, you won't accidentally delete or modify your valuable data!}.

Let's assume that we are working on a report and want to process the FITS images from two projects (ABC and DEF), which are stored in the sub-directories named @file{ABCproject/} and @file{DEFproject/} of our top data directory (@file{/mnt/data}).
The following shell commands show how one image from the former is first converted to a JPEG image through ConvertType and then the objects from an image in the latter project are detected using NoiseChisel.
The text after the @command{#} sign are comments (not typed!).

@example
$ pwd                                               # Current location
/home/usrname/research/report
$ ls                                         # List directory contents
ABC01.jpg
$ ls /mnt/data/ABCproject                                  # Archive 1
ABC01.fits ABC02.fits ABC03.fits
$ ls /mnt/data/DEFproject                                  # Archive 2
DEF01.fits DEF02.fits DEF03.fits
$ astconvertt /mnt/data/ABCproject/ABC02.fits --output=jpg    # Prog 1
$ ls
ABC01.jpg ABC02.jpg
$ astnoisechisel /mnt/data/DEFproject/DEF01.fits              # Prog 2
$ ls
ABC01.jpg ABC02.jpg DEF01_detected.fits
@end example
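
Continuing the example above, if you actually want the output to be written next to the input (and you have write permission there), you can add @option{--keepinputdir}; the commands below are only a sketch of that scenario:

@example
$ astnoisechisel /mnt/data/DEFproject/DEF02.fits --keepinputdir
$ ls /mnt/data/DEFproject                            # Archive 2
DEF01.fits DEF02.fits DEF02_detected.fits DEF03.fits
@end example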





@node Output FITS files,  , Automatic output, Common program behavior
@section Output FITS files

@cindex FITS
@cindex Output FITS headers
@cindex CFITSIO version on outputs
The output of many of Gnuastro's programs are (or can be) FITS files.
The FITS format has many useful features for storing scientific datasets (cubes, images and tables) along with robust features for archivability.
For more on this standard, please see @ref{Fits}.

As a community convention described in @ref{Fits}, the first extension of all FITS files produced by Gnuastro's programs only contains the meta-data that is intended for the file's extension(s).
For a Gnuastro program, this generic meta-data (that is stored as FITS keyword records) is its configuration when it produced this dataset: file name(s) of input(s) and option names, values and comments.
Note that when the configuration is too trivial (for example in the program @ref{Table}, where the only configuration is the input file name), no meta-data is written in this extension.

FITS keywords have the following limitations with regard to generic option names and values, as described below:

@itemize
@item
If a keyword (option name) is longer than 8 characters, the first word in the record (80 character line) is @code{HIERARCH} which is followed by the keyword name.

@item
Values can be at most 75 characters, but for strings, this changes to 73 (because of the two extra @key{'} characters that are necessary).
However, if the value is a file name, containing slash (@key{/}) characters to separate directories, Gnuastro will break the value into multiple keywords.

@item
Keyword names are not case-sensitive, so they are stored in all capital letters.
Therefore, if you want to use Grep to inspect these keywords, use the @option{-i} option, like the example below.

@example
$ astfits image_detected.fits -h0 | grep -i snquant
@end example
@end itemize

The keywords above are classified (separated by an empty line and title) as a group titled ``ProgramName configuration''.
This meta-data extension, as well as all the other extensions (which contain data), also contain a final group of keywords to keep the basic date and version information of Gnuastro, its dependencies and the pipeline that is using Gnuastro (if it is under version control).

@table @command

@item DATE
The creation time of the FITS file.
This date is written directly by CFITSIO and is in UT format.

@item COMMIT
Git's commit description from the running directory of Gnuastro's programs.
If the running directory is not version controlled or @file{libgit2} isn't installed (see @ref{Optional dependencies}) then this keyword will not be present.
The printed value is equivalent to the output of the following command:

@example
git describe --dirty --always
@end example

If the running directory contains non-committed work, then the stored value will have a `@command{-dirty}' suffix.
This can be very helpful to let you know that the data is not ready to be shared with collaborators or submitted to a journal.
You should only share results that are produced after all your work is committed (safely stored in the version controlled history and thus reproducible).

At first sight, version control appears to be mainly a tool for software developers.
However, progress in scientific research is almost identical to progress in software development: first you have a rough idea that starts with a handful of easy steps.
But as the first results appear to be promising, you will have to extend, or generalize, it to make it more robust and work in all the situations your research covers, not just your first test samples.
Slowly you will find wrong assumptions or bad implementations that need to be fixed (`bugs' in software development parlance).
Finally, when you submit the research to your collaborators or a journal, many comments and suggestions will come in, and you have to address them.

Software developers have created version control systems precisely for this kind of activity.
Each significant moment in the project's history is called a ``commit'', see @ref{Version controlled source}.
A snapshot of the project in each ``commit'' is safely stored away, so you can revert back to it at a later time, or check changes/progress.
This way, you can be sure that your work is reproducible and track the progress and history.
With version control, experimentation in the project's analysis is greatly facilitated, since you can easily revert back if a brainstorm test procedure fails.

One important feature of version control is that the research result (FITS image, table, report or paper) can be stamped with the unique commit information that produced it.
This information will enable you to exactly reproduce that same result later, even if you have made changes/progress.
For one example of a research paper's reproduction pipeline, please see the @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of the @url{https://arxiv.org/abs/1505.01664, paper} describing @ref{NoiseChisel}.

@item CFITSIO
The version of CFITSIO used (see @ref{CFITSIO}).

@item WCSLIB
The version of WCSLIB used (see @ref{WCSLIB}).
Note that older versions of WCSLIB do not report the version internally.
So this is only available if you are using more recent WCSLIB versions.

@item GSL
The version of GNU Scientific Library that was used, see @ref{GNU Scientific Library}.

@item GNUASTRO
The version of Gnuastro used (see @ref{Version numbering}).
@end table

Here is one example of the last few lines of a typical output.

@example
              / Versions and date
DATE    = '...'                / file creation date
COMMIT  = 'v0-8-g547f6eb'      / Commit description in running dir.
CFITSIO = '3.45    '           / CFITSIO version.
WCSLIB  = '5.19    '           / WCSLIB version.
GSL     = '2.5     '           / GNU Scientific Library version.
GNUASTRO= '0.7     '           / GNU Astronomy Utilities version.
END
@end example














@node Data containers, Data manipulation, Common program behavior, Top
@chapter Data containers

@cindex File operations
@cindex Operations on files
@cindex General file operations
The most low-level and basic property of a dataset is how it is stored.
To process, archive and transmit the data, you need a container to store it first.
From the start of the computer age, different formats have been defined to store data, optimized for particular applications.
One format/container can never be useful for all applications: the storage defines the application and vice-versa.
In astronomy, the Flexible Image Transport System (FITS) standard has become the most common format of data storage and transmission.
It has many useful features, for example multiple sub-containers (also known as extensions or header data units, HDUs) within one file, or support for tables as well as images.
Each HDU can store an independent dataset and its corresponding meta-data.
Therefore, Gnuastro has one program (see @ref{Fits}) specifically designed to manipulate FITS HDUs and the meta-data (header keywords) in each HDU.

Your astronomical research does not just involve data analysis (where the FITS format is very useful).
For example you want to demonstrate your raw and processed FITS images or spectra as figures within slides, reports, or papers.
The FITS format is not defined for such applications.
Thus, Gnuastro also comes with the ConvertType program (see @ref{ConvertType}) which can be used to convert a FITS image to and from (where possible) other formats like plain text and JPEG (which allow two way conversion), along with EPS and PDF (which can only be created from FITS, not the other way round).

Finally, the FITS format is not just for images, it can also store tables.
Binary tables in particular can be very efficient in storing catalogs that have more than a few tens of columns and rows.
However, unlike images (where all elements/pixels have one data type), tables contain multiple columns and each column can have different properties: independent data types (see @ref{Numeric data types}) and meta-data.
In practice, each column can be viewed as a separate container that is grouped with others in the table.
The only shared property of the columns in a table is thus the number of elements they contain.
To allow easy inspection/manipulation of table columns, Gnuastro has the Table program (see @ref{Table}).
It can be used to select certain table columns in a FITS table and see them as a human readable output on the command-line, or to save them into another plain text or FITS table.

@menu
* Fits::                        View and manipulate extensions and keywords.
* Sort FITS files by night::    Installed script to sort FITS files by obs night.
* ConvertType::                 Convert data to various formats.
* Table::                       Read and Write FITS tables to plain text.
* Query::                       Import data from external databases.
@end menu





@node Fits, Sort FITS files by night, Data containers, Data containers
@section Fits

@cindex Vatican library
The ``Flexible Image Transport System'', or FITS, is by far the most common data container format in astronomy and in constant use since the 1970s.
Archiving (future usage, simplicity) has been one of the primary design principles of this format.
In the last few decades it has proved so useful and robust that the Vatican Library has also chosen FITS for its ``long-term digital preservation'' project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.

@cindex IAU, international astronomical union
Although the full name of the standard invokes the idea that it is only for images, it also contains complete and robust features for tables.
It started off in the 1970s and was formally published as a standard in 1981; it was adopted by the International Astronomical Union (IAU) in 1982 and an IAU working group to maintain its future was defined in 1988.
The FITS 2.0 and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0 draft has also been released recently, please see the @url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document web page} for the full text of all versions.
Also see the @url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard paper} for a nice introduction and history along with the full standard.

@cindex Meta-data
Many common image formats, for example a JPEG, only have one image/dataset per file, however one great advantage of the FITS standard is that it allows you to keep multiple datasets (images or tables along with their separate meta-data) in one file.
In the FITS standard, each data + metadata is known as an extension, or more formally a header data unit or HDU.
The HDUs in a file can be completely independent: you can have multiple images of different dimensions/sizes or tables as separate extensions in one file.
However, while the standard doesn't impose any constraints on the relation between the datasets, it is strongly encouraged to group data that are contextually related with each other in one file.
For example an image and the table/catalog of objects and their measured properties in that image.
Other examples can be images of one patch of sky in different colors (filters), or one raw telescope image along with its calibration data (tables or images).

As discussed above, the extensions in a FITS file can be completely independent.
To keep some information (meta-data) about the group of extensions in the FITS file, the community has adopted the following convention: put no data in the first extension, so it is just meta-data.
This extension can thus be used to store meta-data regarding the whole file (grouping of extensions).
Subsequent extensions may contain data along with their own separate meta-data.
All of Gnuastro's programs also follow this convention: the main output dataset(s) are placed in the second (or later) extension(s).
The first extension contains no data; the program's configuration (input file name, along with all its option values) is stored as its meta-data, see @ref{Output FITS files}.

The meta-data contain information about the data, for example which region of the sky an image corresponds to, the units of the data, what telescope, camera, and filter the data were taken with, its observation date, or the software that produced it and its configuration.
Without the meta-data, the raw dataset is practically just a collection of numbers and really hard to understand, or connect with the real world (other datasets).
It is thus strongly encouraged to supplement your data (at any level of processing) with as much meta-data about your processing/science as possible.

The meta-data of a FITS file is in ASCII format, which can be easily viewed or edited with a text editor or on the command-line.
Each meta-data element (known as a keyword generally) is composed of a name, value, units and comments (the last two are optional).
For example below you can see three FITS meta-data keywords for specifying the world coordinate system (WCS, or its location in the sky) of a dataset:

@example
LATPOLE =           -27.805089 / [deg] Native latitude of celestial pole
RADESYS = 'FK5'                / Equatorial coordinate system
EQUINOX =               2000.0 / [yr] Equinox of equatorial coordinates
@end example

However, there are some limitations which discourage viewing/editing the keywords with text editors.
For example there is a fixed length of 80 characters for each keyword (its name, value, units and comments) and there are no new-line characters, so on a text editor all the keywords are seen in one line.
Also, the meta-data keywords are immediately followed by the data, which are commonly in binary format and will show up as strange looking characters in a text editor, significantly slowing it down.

Gnuastro's Fits program was designed to allow easy manipulation of FITS extensions and meta-data keywords on the command-line while conforming fully with the FITS standard.
For example you can copy or cut (copy and remove) HDUs/extensions from one FITS file to another, or completely delete them.
It also has features to delete, add, or edit meta-data keywords within one HDU.

@menu
* Invoking astfits::            Arguments and options to Fits.
@end menu

@node Invoking astfits,  , Fits, Fits
@subsection Invoking Fits

Fits can print or manipulate the FITS file HDUs (extensions) and the meta-data keywords in a given HDU.
The executable name is @file{astfits} with the following general template:

@example
$ astfits [OPTION...] ASTRdata
@end example


@noindent
One line examples:

@example
## View general information about every extension:
$ astfits image.fits

## Print the header keywords in the second HDU (counting from 0):
$ astfits image.fits -h1

## Only print header keywords that contain `NAXIS':
$ astfits image.fits -h1 | grep NAXIS

## Only print the WCS standard PC matrix elements
$ astfits image.fits -h1 | grep 'PC._.'

## Copy a HDU from input.fits to out.fits:
$ astfits input.fits --copy=hdu-name --output=out.fits

## Update the OLDKEY keyword value to 153.034:
$ astfits --update=OLDKEY,153.034,"Old keyword comment" input.fits

## Delete one COMMENT keyword and add a new one:
$ astfits --delete=COMMENT --comment="Anything you like" input.fits

## Write two new keywords with different values and comments:
$ astfits input.fits --write=MYKEY1,20.00,"An example keyword" \
          --write=MYKEY2,fd
@end example

@cindex HDU
@cindex HEALPix
When no action is requested (and only a file name is given), Fits will print a list of information about the extension(s) in the file.
This information includes the HDU number, HDU name (@code{EXTNAME} keyword), type of data (see @ref{Numeric data types}), and the number of data elements it contains (size along each dimension for images, and rows and columns for tables).
Optionally, a comment column is printed for special situations (like a 2D HEALPix grid that is usually stored as a 1D dataset/table).
You can use this to get a general idea of the contents of the FITS file and what HDU to use for further processing, either with the Fits program or any other Gnuastro program.

Here is one example of information about a FITS file with four extensions: the first extension has no data, it is a purely meta-data HDU (commonly used to keep meta-data about the whole file, or grouping of extensions, see @ref{Fits}).
The second extension is an image with name @code{IMAGE} and single precision floating point type (@code{float32}, see @ref{Numeric data types}), it has 4287 pixels along its first (horizontal) axis and 4286 pixels along its second (vertical) axis.
The third extension is also an image with name @code{MASK}.
It is in 2-byte integer format (@code{int16}) which is commonly used to keep information about pixels (for example to identify which ones were saturated, or which ones had cosmic rays and so on), note how it has the same size as the @code{IMAGE} extension.
The fourth extension is a binary table called @code{CATALOG} which has 12371 rows and 5 columns (it probably contains information about the sources in the image).

@example
GNU Astronomy Utilities X.X
Run on Day Month DD HH:MM:SS YYYY
-----
HDU (extension) information: `image.fits'.
 Column 1: Index (counting from 0).
 Column 2: Name (`EXTNAME' in FITS standard).
 Column 3: Image data type or `table' format (ASCII or binary).
 Column 4: Size of data in HDU.
-----
0      n/a             uint8           0
1      IMAGE           float32         4287x4286
2      MASK            int16           4287x4286
3      CATALOG         table_binary    12371x5
@end example

If a specific HDU is identified on the command-line with the @option{--hdu} (or @option{-h} option) and no operation requested, then the full list of header keywords in that HDU will be printed (as if the @option{--printallkeys} was called, see below).
It is important to remember that this only occurs when @option{--hdu} is given on the command-line.
The @option{--hdu} value given in a configuration file will only be used when a specific operation on keywords is requested.
Therefore as described in the paragraphs above, when no explicit call to the @option{--hdu} option is made on the command-line and no operation is requested (on the command-line or configuration files), the basic information of each HDU/extension is printed.

The operating mode and input/output options to Fits are similar to the other programs and fully described in @ref{Common options}.
The options particular to Fits can be divided into two groups:
1) those related to modifying HDUs or extensions (see @ref{HDU information and manipulation}), and
2) those related to viewing/modifying meta-data keywords (see @ref{Keyword manipulation}).
These two classes of options cannot be called together in one run: you can either work on the extensions or meta-data keywords in any instance of Fits.

@menu
* HDU information and manipulation::  Learn about the HDUs and move them.
* Keyword manipulation::        Manipulate metadata keywords in a HDU
@end menu





@node HDU information and manipulation, Keyword manipulation, Invoking astfits, Invoking astfits
@subsubsection HDU information and manipulation
Each FITS file header data unit, or HDU (also known as an extension) is an independent dataset (data + meta-data).
Multiple HDUs can be stored in one FITS file, see @ref{Fits}.
The general HDU-related options to the Fits program are listed below as two general classes:
the first group below focuses on HDU information while the latter focuses on manipulating (moving or deleting) the HDUs.

The options below print information about the given HDU on the command-line.
Thus they cannot be called together in one command (each has its own independent output).

@table @option
@item -n
@itemx --numhdus
Print the number of extensions/HDUs in the given file.
Note that this option must be called alone and will only print a single number.
It is thus useful in scripts, for example when you need to check the number of extensions in a FITS file.

For a complete list of basic meta-data on the extensions in a FITS file, don't use any of the options in this section or in @ref{Keyword manipulation}.
For more, see @ref{Invoking astfits}.
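
For example, the minimal shell snippet below (with a hypothetical @file{image.fits}) stores the number of HDUs in a variable for later use in a script:

@example
$ nhdu=$(astfits image.fits --numhdus)
$ echo "image.fits contains $nhdu HDUs"
@end example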

@item --pixelscale
Print the HDU's pixel-scale (change in world coordinate for one pixel along each dimension) and pixel area or voxel volume.
Without the @option{--quiet} option, the output of @option{--pixelscale} has multiple lines and explanations, thus being more human-friendly.
It prints the file/HDU name, number of dimensions, and the units along with the actual pixel scales.
Also, when any of the units are in degrees, the pixel scales and area/volume are also printed in units of arc-seconds.
For 3D datasets, the pixel area (on each 2D slice of the 3D cube) is printed as well as the voxel volume.

However, in scripts (that are to be run automatically), this human-friendly format is annoying, so when called with the @option{--quiet} option, only the pixel-scale value(s) along each dimension is(are) printed in one line.
These numbers are followed by the pixel area (in the raw WCS units).
For 3D datasets, this will be area on each 2D slice.
Finally, for 3D datasets, a final number (the voxel volume) is printed.
As a summary, in @option{--quiet} mode, for 2D datasets three numbers are printed and for 3D datasets, 5 numbers are printed.
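
For example, with a hypothetical @file{image.fits}, you can compare the two output modes like this:

@example
## Human-friendly (multiple lines with explanations):
$ astfits image.fits --pixelscale

## Machine-friendly (one line of numbers, for scripts):
$ astfits image.fits --pixelscale --quiet
@end example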

@item --skycoverage
@cindex Image's sky coverage
@cindex Coverage of image over sky
Print the rectangular area (or 3D cube) covered by the given image/datacube HDU over the Sky in the WCS units.
The covered area is reported in two ways:
1) the center and full width in each dimension,
2) the minimum and maximum sky coordinates in each dimension.
This option is thus useful when you want to get a general feeling of a new image/dataset, or prepare the inputs to query external databases in the region of the image (for example with @ref{Query}).

If run without the @option{--quiet} option, the values are given with a human-friendly description.
For example here is the output of this option on an image taken near the star Castor:

@example
$ astfits castor.fits --skycoverage
Input file: castor.fits (hdu: 1)

Sky coverage by center and (full) width:
  Center: 113.9149075    31.93759664
  Width:  2.41762045     2.67945253

Sky coverage by range along dimensions:
  RA       112.7235592    115.1411797
  DEC      30.59262123    33.27207376
@end example

With the @option{--quiet} option, the values are more machine-friendly (easy to parse).
It has two lines, where the first line contains the center/width values and the second line shows the coordinate ranges in each dimension.

@example
$ astfits castor.fits --skycoverage --quiet
113.9149075     31.93759664     2.41762045      2.67945253
112.7235592     115.1411797     30.59262123     33.27207376
@end example

Note that this is a simple rectangle (cube in 3D) definition, so if the image is rotated in relation to the celestial coordinates a general polygon is necessary to exactly describe the coverage.
Hence when there is rotation, the reported area will be larger than the actual area containing data; you can visually see the area with the @option{--align} option of @ref{Warp}.

@item --datasum
@cindex @code{DATASUM}: FITS keyword
Calculate and print the given HDU's "datasum" to stdout.
The given HDU is specified with the @option{--hdu} (or @option{-h}) option.
This number is calculated by parsing all the bytes of the given HDU's data records (excluding keywords).
This option ignores any possibly existing @code{DATASUM} keyword in the HDU.
For more on @code{DATASUM} in the FITS standard, see @ref{Keyword manipulation} (under the @code{checksum} component of @option{--write}).

You can use this option to confirm that the data in two different HDUs (possibly with different keywords) is identical.
Its advantage over @option{--write=datasum} (which writes the @code{DATASUM} keyword into the given HDU) is that it doesn't require write permissions.
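
For example, the shell snippet below sketches such a comparison (the file names are hypothetical, we assume the HDUs of interest are the first extensions, and that the printed value is only the datasum number):

@example
$ sum1=$(astfits one.fits -h1 --datasum)
$ sum2=$(astfits two.fits -h1 --datasum)
$ if [ "$sum1" = "$sum2" ]; then echo "Same HDU data"; fi
@end example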
@end table

The following options manipulate (move/delete) the HDUs in one FITS file or to another FITS file.
These options may be called multiple times in one run.
If so, the extensions will be copied from the input FITS file to the output FITS file in the given order (on the command-line and also in configuration files, see @ref{Configuration file precedence}).
If the separate classes are called together in one run of Fits, then first @option{--copy} is run (on all specified HDUs), followed by @option{--cut} (again on all specified HDUs), and then @option{--remove} (on all specified HDUs).

The @option{--copy} and @option{--cut} options need an output FITS file (specified with the @option{--output} option).
If the output file exists, then the specified HDU will be copied following the last extension of the output file (the existing HDUs in it will be untouched).
Thus, after Fits finishes, the copied HDU will be the last HDU of the output file.
If no output file name is given, then automatic output will be used to store the HDUs given to this option (see @ref{Automatic output}).
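
For example, with a hypothetical input containing the @code{IMAGE} and @code{CATALOG} extensions of the listing in @ref{Invoking astfits}, the command below copies both (in the given order) into a new file:

@example
$ astfits image.fits --copy=IMAGE --copy=CATALOG \
          --output=selected.fits
@end example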

@table @option

@item -C STR
@itemx --copy=STR
Copy the specified extension into the output file, see explanations above.

@item -k STR
@itemx --cut=STR
Cut (copy to output, remove from input) the specified extension into the
output file, see explanations above.

@item -R STR
@itemx --remove=STR
Remove the specified HDU from the input file.

The first (zero-th) HDU cannot be removed with this option.
Consider using @option{--copy} or @option{--cut} in combination with @option{--primaryimghdu} to avoid having an empty zero-th HDU.
From CFITSIO: ``In the case of deleting the primary array (the first HDU in the file) then [it] will be replaced by a null primary array containing the minimum set of required keywords and no data.''.
So in practice, any existing data (array) and meta-data in the first extension will be removed, but the number of extensions in the file won't change.
This is because of the unique position the first FITS extension has in the FITS standard (for example it cannot be used to store tables).

@item --primaryimghdu
Copy or cut an image HDU to the zero-th HDU/extension of a file that doesn't yet exist.
This option is thus irrelevant if the output file already exists or the copied/cut extension is a FITS table.
For example with the commands below, first we make sure that @file{out.fits} doesn't exist, then we copy the first extension of @file{in.fits} to the zero-th extension of @file{out.fits}.

@example
$ rm -f out.fits
$ astfits in.fits --copy=1 --primaryimghdu --output=out.fits
@end example

If we hadn't used @option{--primaryimghdu}, then the zero-th extension of @file{out.fits} would have no data, and its second extension would host the copied image (just like any other output of Gnuastro).

@end table


@node Keyword manipulation,  , HDU information and manipulation, Invoking astfits
@subsubsection Keyword manipulation
The meta-data in each header data unit, or HDU (also known as extension, see @ref{Fits}) is stored as ``keyword''s.
Each keyword consists of a name, value, unit, and comments.
The Fits program (see @ref{Fits}) options related to viewing and manipulating keywords in a FITS HDU are described below.

To see the full list of keywords in a FITS HDU, you can use the @option{--printallkeys} option.
If any of the keywords are to be modified, the headers of the input file will be changed.
If you want to keep the original FITS file or HDU, it is easiest to create a copy first and then run Fits on that.
In the FITS standard, keywords are always uppercase.
So case does not matter in the input or output keyword names you specify.

Most of the options can accept multiple instances in one command.
For example you can specify multiple keywords to delete by calling @option{--delete} multiple times; since repeated keywords are allowed in a FITS header, you can even delete the same keyword multiple times.
The action of such options will start from the top-most keyword.

The precedence of operations is described below.
Note that while the order within each class of actions is preserved, the order between different classes is not.
So irrespective of the order in which you call @option{--delete} and @option{--update}, first all the delete operations will take effect, then the update operations.
@enumerate
@item
@option{--delete}
@item
@option{--rename}
@item
@option{--update}
@item
@option{--write}
@item
@option{--asis}
@item
@option{--history}
@item
@option{--comment}
@item
@option{--date}
@item
@option{--printallkeys}
@item
@option{--verify}
@item
@option{--copykeys}
@end enumerate
@noindent
All possible syntax errors will be reported before the keywords are actually written.
FITS errors during any of these actions will be reported, but Fits won't stop until all the operations are complete.
If @option{--quitonerror} is called, then Fits will immediately stop upon the first error.

@cindex GNU Grep
If you want to inspect only a certain set of header keywords, it is easiest to pipe the output of the Fits program to GNU Grep.
Grep is a very powerful and advanced tool to search strings which is precisely made for such situations.
For example if you only want to check the size of an image FITS HDU, you can run:

@example
$ astfits input.fits -h1 | grep NAXIS
@end example

@cartouche
@noindent
@strong{FITS STANDARD KEYWORDS:}
Some header keywords are necessary for later operations on a FITS file, for example BITPIX or NAXIS, see the FITS standard for their full list.
If you modify (for example remove or rename) such keywords, the FITS file extension might not be usable any more.
Also be careful with the world coordinate system keywords: if you modify or change their values, any future world coordinate system (like RA and Dec) measurements on the image will also change.
@end cartouche


@noindent
The keyword related options to the Fits program are fully described below.
@table @option

@item -d STR
@itemx --delete=STR
Delete one instance of the @option{STR} keyword from the FITS header.
Multiple instances of @option{--delete} can be given (possibly even for the same keyword, when it is repeated in the meta-data).
All keywords given will be removed from the headers in the same given order.
If the keyword doesn't exist, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
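
For example, the command below deletes two keywords in one run (the file and keyword names are hypothetical):

@example
$ astfits image.fits -h1 --delete=AUTHOR --delete=OBSERVER
@end example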

@item -r STR
@itemx --rename=STR
Rename a keyword to a new value.
@option{STR} contains both the existing and new names, which should be separated by either a comma (@key{,}) or a space character.
Note that if you use a space character, you have to put the value to this option within double quotation marks (@key{"}) so the space character is not interpreted as an option separator.
Multiple instances of @option{--rename} can be given in one command.
The keywords will be renamed in the specified order.
If the keyword doesn't exist, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
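
For example, both commands below rename a hypothetical @code{OLDKEY} to @code{NEWKEY}; note the double quotation marks in the second, where a space character is the separator:

@example
$ astfits image.fits -h1 --rename=OLDKEY,NEWKEY
$ astfits image.fits -h1 --rename="OLDKEY NEWKEY"
@end example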

@item -u STR
@itemx --update=STR
Update a keyword, its value, its comments and its units in the format described below.
If there are multiple instances of the keyword in the header, they will be changed from top to bottom (with multiple @option{--update} options).

@noindent
The format of the values to this option can best be specified with an
example:

@example
--update=KEYWORD,value,"comments for this keyword",unit
@end example

If there is a writing error, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.

@noindent
The value can be any numerical or string value@footnote{Some tricky situations arise with values like `@command{87095e5}', if this was intended to be a number it will be kept in the header as @code{8709500000} and there is no problem.
But this can also be a shortened Git commit hash.
In the latter case, it should be treated as a string and stored as it is written.
Commit hashes are very important in keeping the history of a file during your research and such values might arise without you noticing them in your reproduction pipeline.
One solution is to use @command{git describe} instead of the short hash alone.
A less recommended solution is to add a space after the commit hash and Fits will write the value as `@command{87095e5 }' in the header.
If you later compare the strings on the shell, the space character will be ignored by the shell in the latter solution and there will be no problem.}.
Other than the @code{KEYWORD}, all the other values are optional.
To leave a given token empty, follow the preceding comma (@key{,}) immediately with the next.
If any space character is present around the commas, it will be considered part of the respective token.
So if more than one token has space characters within it, the safest method to specify a value to this option is to put double quotation marks around each individual token that needs it.
Note that without double quotation marks, space characters will be seen as option separators and can lead to undefined behavior.
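
For example, the command below updates a hypothetical @code{EXPTIME} keyword, setting its value, comment and unit at the same time:

@example
$ astfits image.fits -h1 \
          --update=EXPTIME,300,"Total exposure time",s
@end example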

@item -w STR
@itemx --write=STR
Write a keyword to the header.
For the possible value input formats, comments and units for the keyword, see the @option{--update} option above.
The special names (first string) below will cause a special behavior:

@table @option

@item /
Write a ``title'' to the list of keywords.
A title consists of one blank line and another line that starts with several blank spaces followed by a slash (@key{/}).
The second string given to this option is the ``title'' or string printed after the slash.
For example with the command below you can add a ``title'' of `My keywords' after the existing keywords and add the subsequent @code{K1} and @code{K2} keywords under it (note that keyword names are not case sensitive).

@example
$ astfits test.fits -h1 --write=/,"My keywords" \
          --write=k1,1.23,"My first keyword"    \
          --write=k2,4.56,"My second keyword"
$ astfits test.fits -h1
[[[ ... truncated ... ]]]

                      / My keywords
K1      =                 1.23 / My first keyword
K2      =                 4.56 / My second keyword
END
@end example

Adding a ``title'' before each contextually separate group of header keywords greatly helps in readability and visual inspection of the keywords.
So generally, when you want to add new FITS keywords, it is good practice to also add a title before them.

The reason you need to use @key{/} as the keyword name for setting a title is that @key{/} is the first non-white character of a title line (as in the output above).

The title(s) is(are) written into the FITS file in the same order that @option{--write} is called.
Therefore in one run of the Fits program, you can specify many different titles (with their own keywords under them).
For example, the command below builds on the previous example and adds another group of keywords named @code{A1} and @code{A2}.

@example
$ astfits test.fits -h1 --write=/,"My keywords"   \
          --write=k1,1.23,"My first keyword"      \
          --write=k2,4.56,"My second keyword"     \
          --write=/,"My second group of keywords" \
          --write=a1,7.89,"First keyword"         \
          --write=a2,0.12,"Second keyword"
@end example

@item -a STR
@itemx --asis=STR
Write the given @code{STR} @emph{exactly} as it is, into the given FITS file header with no modifications.
If the contents of @code{STR} do not conform to the FITS standard for keywords, then it may (most probably: it will) corrupt your file and you may not be able to open it any more.
So please be @strong{very careful} with this option.
If you want to define the keyword from scratch, it is best to use the @option{--write} option (see below) and let CFITSIO worry about complying with the FITS standard.

The best way to use this option is when you want to add a keyword from one FITS file to another, unchanged and untouched.
Below is an example of such a case that can be very useful sometimes (on the command-line or in scripts):

@example
$ key=$(astfits firstimage.fits -h1 | grep KEYWORD)
$ astfits --asis="$key" secondimage.fits
@end example

@cindex GNU Bash
In particular note the double quotation signs (@key{"}) around the shell variable (@command{$key}).
This is because FITS keyword strings usually have lots of space characters; if this variable is not quoted, the shell will only give the first word in the full keyword to this option, which will definitely be a non-standard FITS keyword and will make it hard to work on the file afterwards.
See the ``Quoting'' section of the GNU Bash manual for more information if your keyword has the special characters @key{$}, @key{`}, or @key{\}.

You can also use @option{--asis} to copy multiple keywords from one file to another.
But the process will be a little more complicated, so we'll show it as the simple shell script below.
You can customize it in the first block of variable definitions:
1) set the names of the keywords you want to copy: it can be as many keys as you want, just put a `@code{\|}' between them.
2) The input FITS file (where keywords should be read from) and its HDU.
3) The output FITS file (where keywords should be written to) and its HDU.
4) Set a ``title'' for the newly added keywords in the output (so they are visually separate from the existing keywords in the output).

@example
#!/bin/bash

# Customizations (input, output and key names).
# NOTE: put a '\|' between each keyword name.
keys="KEYa\|KEYb\|KEYc\|KEYd"
ifits=original.fits;     ihdu=1
ofits=to_write.fits;     ohdu=1
title="Keys from $ifits (hdu $ihdu)"

# Read keywords from input and write in output.
oIFS=$IFS
IFS=''
c="astfits $ofits -h$ohdu --write=/,\"$title\""
astfits $ifits -h$ihdu \
        | grep $keys \
        | (while read line; \
             do c="$c --asis=\"$line\""; \
           done; eval $c); \
IFS=$oIFS
@end example

Since it is not too long, you can also simply put the variable values of the first block into the second and write it directly on the command-line (if it is just for one time).

@item checksum
@cindex CFITSIO
@cindex @code{DATASUM}: FITS keyword
@cindex @code{CHECKSUM}: FITS keyword
When nothing is given afterwards, the header integrity keywords @code{DATASUM} and @code{CHECKSUM} will be calculated and written/updated.
This calculation and writing is done entirely by CFITSIO.
They thus comply with the FITS standard 4.0@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}} that defines these keywords (its Appendix J).

If a value is given (e.g., @option{--write=checksum,MyOwnCheckSum}), then CFITSIO won't be called to calculate these two keywords and the value (as well as possible comment and unit) will be written just like any other keyword.
This is generally not recommended, but necessary in special circumstances (for example when the checksum needs to be manually updated).

@code{DATASUM} only depends on the data section of the HDU/extension, so it is not changed when you update the keywords.
But @code{CHECKSUM} also depends on the header and will not be valid if you make any further changes to the header.
This includes any further keyword modification options in the same call to the Fits program.
Therefore it is recommended to write these keywords as the last keywords that are written/modified in the extension.
You can use the @option{--verify} option (described below) to verify the values of these two keywords.
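
For example, with the (hypothetical) command below, a new keyword is written and the integrity keywords are updated in the same call; since the @option{--write} operations are done in the given order, the checksum is calculated after the new keyword is written:

@example
$ astfits test.fits -h1 --write=MYKEY,1.23,"My keyword" \
          --write=checksum
@end example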

@item datasum
Similar to @option{checksum}, but only write the @code{DATASUM} keyword (which doesn't depend on the header keywords, only the data).
@end table

@item -H STR
@itemx --history STR
Add a @code{HISTORY} keyword to the header with the given value.
A new @code{HISTORY} keyword will be created for every instance of this option.
If the string given to this option is longer than 70 characters, it will be separated into multiple keyword cards.
If there is an error, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.
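
For example, the (hypothetical) command below adds one @code{HISTORY} keyword describing a reduction step:

@example
$ astfits test.fits -h1 \
          --history="Flat-fielded with flat-norm.fits"
@end example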

@item -c STR
@itemx --comment STR
Add a @code{COMMENT} keyword to the header with the given value.
Similar to the explanation for @option{--history} above.

@item -t
@itemx --date
Put the current date and time in the header.
If the @code{DATE} keyword already exists in the header, it will be updated.
If there is a writing error, Fits will give a warning and return with a non-zero value, but will not stop.
To stop as soon as an error occurs, run with @option{--quitonerror}.

@item -p
@itemx --printallkeys
Print all the keywords in the specified FITS extension (HDU) on the command-line.
If this option is called along with any of the other keyword editing commands described above, all the editing commands take precedence over it.
Therefore, it will print the final keywords after all the editing has been done.

@item -v
@itemx --verify
Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of the FITS standard.
See the description under the @code{checksum} (under @option{--write}, above) for more on these keywords.

This option will print @code{Verified} for both keywords if they can be verified.
Otherwise, if they don't exist in the given HDU/extension, it will print @code{NOT-PRESENT}, and if they cannot be verified it will print @code{INCORRECT}.
In the latter case (when the keyword values exist but can't be verified), the Fits program will also return with a failure.

By default this option will also print a short description of the @code{DATASUM} and @code{CHECKSUM} keywords.
You can suppress this extra information with the @option{--quiet} option.
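
For example, a minimal verification call may look like this (with @option{--quiet}, only the verification results are printed):

@example
$ astfits test.fits -h1 --verify --quiet
@end example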

@item --copykeys=INT:INT
Copy the input's keyword records in the given range (inclusive) to the
output HDU (specified with the @option{--output} and @option{--outhdu}
options, for the filename and HDU/extension respectively).

The given string to this option must be two integers separated by a colon (@key{:}).
The first integer must be positive (counting of the keyword records starts from 1).
The second integer may be negative (zero is not acceptable) or an integer larger than the first.

A negative second integer means counting from the end.
So @code{-1} is the last copy-able keyword (not including the @code{END} keyword).

To see the header keywords of the input with a number before them, you can pipe the output of the Fits program (when it prints all the keywords in an extension) into the @command{cat} program as below:

@example
$ astfits input.fits -h1 | cat -n
@end example
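
For example, the command below is a sketch (with a hypothetical record range and file names) that copies keyword records 12 to 24 of @file{input.fits} (HDU 1) into the first HDU of @file{out.fits}:

@example
$ astfits input.fits -h1 --copykeys=12:24 \
          --output=out.fits --outhdu=1
@end example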

@item --outhdu
The HDU/extension into which the keywords of @option{--copykeys} should be written.

@item -Q
@itemx --quitonerror
Quit if any of the operations above are not successful.
By default, if an error occurs, Fits will warn the user of the faulty keyword and continue with the rest of the actions.

@item -s STR
@itemx --datetosec STR
@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
Interpret the value of the given keyword in the FITS date format (most generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding Unix epoch time (number of seconds that have passed since 00:00:00 Thursday, January 1st, 1970).
The @code{Thh:mm:ss.ddd...} section (specifying the time of day), and also the @code{.ddd...} (specifying the fraction of a second) are optional.
The value to this option must be the FITS keyword name that contains the requested date, for example @option{--datetosec=DATE-OBS}.

@cindex GNU C Library
This option can also interpret the older FITS date format (@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to the year.
In this case (following the GNU C Library), this option will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 correspond to the years 2000 to 2068.

This is a very useful option for operations on the FITS date values, for example sorting FITS files by their dates, or finding the time difference between two FITS files.
The advantage of working with the Unix epoch time is that you don't have to worry about calendar details (for example the number of days in different months, or leap years, etc).
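
For example, the small loop below is a minimal sketch for listing a set of FITS files sorted by their observation date (assuming the date is stored in the @code{DATE-OBS} keyword of HDU 1, and that with @option{--quiet} only the epoch time is printed):

@example
$ for f in *.fits; do
      echo "$(astfits $f -h1 --datetosec=DATE-OBS --quiet) $f"
  done | sort -n
@end example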

@item --wcsdistortion STR
@cindex WCS distortion
@cindex Distortion, WCS
@cindex SIP WCS distortion
@cindex TPV WCS distortion
If the input file has a WCS distortion, the output (file given with the @option{--output} option) will have the distortion given to this option (for example @code{SIP} or @code{TPV}).
With this option, the Fits program will read the minimal set of keywords and the data from the input HDU, then write them into the file given to @option{--output} with a newly created set of WCS-related keywords corresponding to the desired distortion standard.

If no @option{--output} file is specified, an automatically generated output name will be used, composed of the input's name with a @file{-DDD.fits} suffix, where @file{DDD} is the value given to this option (the desired output distortion); see @ref{Automatic output}.

Note that not all possible conversions between all standards are supported yet.
If the requested conversion is not supported, an informative error message will be printed.
If this happens, please let us know and we'll try our best to add the respective conversions.

For example with the command below, you can be sure that if @file{in.fits} has a distortion in its WCS, the distortion of @file{out.fits} will be in the SIP standard.

@example
$ astfits in.fits --wcsdistortion=SIP --output=out.fits
@end example
@end table




















@node Sort FITS files by night, ConvertType, Fits, Data containers
@section Sort FITS files by night

@cindex Calendar
FITS images usually contain (several) keywords for preserving important dates.
In particular, for lower-level data, this is usually the observation date and time (for example, stored in the @code{DATE-OBS} keyword value).
When analyzing observed datasets, many calibration steps (like the dark, bias or flat-field), are commonly calculated on a per-observing-night basis.

However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is based on the western (Gregorian) calendar.
Dates that are stored in this format are complicated for automatic processing: a night starts in the final hours of one calendar day, and extends to the early hours of the next calendar day.
As a result, to identify datasets from one night, we commonly need to search for two dates.
However calendar peculiarities can make this identification very difficult.
For example when an observation is done on the night separating two months (like the night starting on March 31st and going into April 1st), or two years (like the night starting on December 31st 2018 and going into January 1st, 2019).
To account for such situations, it is necessary to keep track of how many days are in a month, and leap years, etc.

@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
Gnuastro's @file{astscript-sort-by-night} script was created to help in such scenarios.
It uses @ref{Fits} to convert the FITS date format into the Unix epoch time (number of seconds since 00:00:00 of January 1st, 1970), using the @option{--datetosec} option.
The Unix epoch time is a single number (integer, if not given in sub-second precision), enabling easy comparison and sorting of dates after January 1st, 1970.

You can use this script as a basis for building a more customized sorting script.
Here are some examples:

@itemize
@item
If you need to copy the files, but only need a single extension (not the whole file), you can add a step just before the making of the symbolic links, or copies, and change it to only copy a certain extension of the FITS file using the Fits program's @option{--copy} option, see @ref{HDU information and manipulation}.
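
For example, such a step could look like the sketch below (with hypothetical file names, assuming the desired extension is HDU 1):

@example
$ astfits infile.fits --copy=1 --output=outfile.fits
@end example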

@item
If you need to classify the files with finer detail (for example the purpose of the dataset), you can add a step just before the making of the symbolic links, or copies, to specify a file-name prefix based on certain other keyword values in the files.
For example, the FITS files may have a keyword that specifies if the dataset is a science, bias, or flat-field image.
You can read that keyword and automatically add a @code{sci-}, @code{bias-}, or @code{flat-} prefix to the created file (after the @option{--prefix}).

For example, let's assume the observing mode is stored in the hypothetical @code{MODE} keyword, which can have three values of @code{BIAS-IMAGE}, @code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
With the step below, you can generate a mode-prefix, and add it to the generated link/copy names (just correct the filename and extension of the first line to the script's variables):

@example
modepref=$(astfits infile.fits -h1 \
                   | sed -e"s/'/ /g" \
                   | awk '$1=="MODE"@{ \
                       if($3=="BIAS-IMAGE") print "bias-"; \
                       else if($3=="SCIENCE-IMAGE") print "sci-"; \
                       else if($3=="FLAT-EXP") print "flat-"; \
                       else @{print $3, "NOT recognized"; \
                              exit 1@} @}')
@end example

@cindex GNU AWK
@cindex GNU Sed
Here is a description of it.
We first use @command{astfits} to print all the keywords in extension @code{1} of @file{infile.fits}.
In the FITS standard, string values (which we are assuming here) are placed in single quotes (@key{'}), which are annoying in this context/use-case.
Therefore, we pipe the output of @command{astfits} into @command{sed} to remove all such quotes (substituting them with a blank space).
The result is then piped to AWK for giving us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK to only consider the line where the first column is @code{MODE}.
There is an equal sign between the key name and value, so the value is the third column (@code{$3} in AWK).
We thus use a simple @code{if-else} structure to look into this value and print our custom prefix based on it.
The output of AWK is then stored in the @code{modepref} shell variable which you can add to the link/copy name.

With the solution above, the increment of the file counter for each night will be independent of the mode.
If you want the counter to be mode-dependent, you can add a different counter for each mode and use that counter instead of the generic counter for each night (based on the value of @code{modepref}).
But we'll leave the implementation of this step to you as an exercise.

@end itemize

@menu
* Invoking astscript-sort-by-night::  Inputs and outputs to this script.
@end menu

@node Invoking astscript-sort-by-night,  , Sort FITS files by night, Sort FITS files by night
@subsection Invoking astscript-sort-by-night

This installed script will read a FITS date formatted value from the given keyword, and classify the input FITS files into individual nights.
For more on installed scripts please see @ref{Installed scripts}.
This script can be used with the following general template:

@example
$ astscript-sort-by-night [OPTION...] FITS-files
@end example

@noindent
One line examples:

@example
## Use the DATE-OBS keyword
$ astscript-sort-by-night --key=DATE-OBS /path/to/data/*.fits

## Make links to the input files with the `img-' prefix
$ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
@end example

This script will look into a HDU/extension (@option{--hdu}) for a keyword (@option{--key}) in the given FITS files and interpret the value as a date.
The inputs will be separated by ``nights'' (from 9:00a.m. to the next day's 8:59:59a.m., spanning two calendar days; the exact hour can be set with @option{--hour}).

The default output is a list of all the input files along with the following two columns: night number and file number in that night (sorted by time).
With @option{--link}, a symbolic link will be made (one for each input) that contains the night number and the file's number in that night (sorted by time); see the description of @option{--link} for more.
When @option{--copy} is used, a copy of the inputs will be made instead of a symbolic link.

Below you can see one example where all the @file{target-*.fits} files in the @file{data} directory should be separated by observing night according to the @code{DATE-OBS} keyword value in their second extension (number @code{1}, recall that HDU counting starts from 0).
You can see the output after the @code{ls} command.

@example
$ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
$ ls
img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
@end example

The outputs can be placed in a different (already existing) directory by including that directory's name in the @option{--prefix} value, for example @option{--prefix=sorted/img-} will put them all under the @file{sorted} directory.

This script can be configured like all Gnuastro's programs (through command-line options, see @ref{Common options}), with some minor differences that are described in @ref{Installed scripts}.
The particular options to this script are listed below:

@table @option
@item -h STR
@itemx --hdu=STR
The HDU/extension to use in all the given FITS files.
All of the given FITS files must have this extension.

@item -k STR
@itemx --key=STR
The keyword name that contains the FITS date format to classify/sort by.

@item -H FLT
@itemx --hour=FLT
The hour that defines the next ``night''.
By default, all times before 9:00a.m. are considered to belong to the previous calendar night.
If a sub-hour value is necessary, it should be given in units of hours, for example @option{--hour=9.5} corresponds to 9:30a.m.

@cartouche
@noindent
@cindex Time zone
@cindex UTC (Universal time coordinate)
@cindex Universal time coordinate (UTC)
@strong{Dealing with time zones:}
The time that is recorded in @option{--key} may be in UTC (Universal Time Coordinate).
However, the organization of the images taken during the night depends on the local time.
It is possible to take this into account by setting the @option{--hour} option to the local time in UTC.

For example, consider a set of images taken in Auckland (New Zealand, UTC+12) during different nights.
If you want to classify these images by night, you have to know at which time (in UTC time) the Sun rises (or any other separator/definition of a different night).
In this particular example, you can use @option{--hour=21}: in Auckland a night finishes (roughly) at the local time of 9:00, which corresponds to 21:00 UTC.
@end cartouche
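
For example, the full call for the Auckland scenario above would be similar to this sketch (with hypothetical file names):

@example
$ astscript-sort-by-night --key=DATE-OBS --hour=21 \
                          /path/to/data/*.fits
@end example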

@item -l
@itemx --link
Create a symbolic link for each input FITS file.
This option cannot be used with @option{--copy}.
The link will have a standard name in the following format (variable parts are written in @code{CAPITAL} letters and described after it):

@example
PnN-I.fits
@end example

@table @code
@item P
This is the value given to @option{--prefix}.
By default, its value is @code{./} (to store the links in the directory this script was run in).
See the description of @option{--prefix} for more.
@item N
This is the night-counter: starting from 1.
@code{N} is just incremented by 1 for the next night, no matter how many nights (without any dataset) there are between two subsequent observing nights (it is just an identifier for each night which you can easily map to different calendar nights).
@item I
File counter in that night, sorted by time.
@end table

@item -c
@itemx --copy
Make a copy of each input FITS file with the standard naming convention described in @option{--link}.
With this option, instead of making a link, a copy is made.
This option cannot be used with @option{--link}.

@item -p STR
@itemx --prefix=STR
Prefix to prepend to the night-identifier of each newly created link or copy.
This option is thus only relevant with the @option{--copy} or @option{--link} options.
See the description of @option{--link} for how it is used.
For example, with @option{--prefix=img-}, all the created file names in the current directory will start with @code{img-}, making outputs like @file{img-n1-1.fits} or @file{img-n3-42.fits}.

@option{--prefix} can also be used to store the links/copies in another directory relative to the directory this script is being run in (it must already exist).
For example @code{--prefix=/path/to/processing/img-} will put all the links/copies in the @file{/path/to/processing} directory, and the files (in that directory) will all start with @file{img-}.
@end table


















@node ConvertType, Table, Sort FITS files by night, Data containers
@section ConvertType

@cindex Data format conversion
@cindex Converting data formats
@cindex Image format conversion
@cindex Converting image formats
@pindex @r{ConvertType (}astconvertt@r{)}
The FITS format used in astronomy was defined mainly for archiving, transmission, and processing.
In other situations, the data might be useful in other formats.
For example, when you are writing a paper or report, or if you are making slides for a talk, you can't use a FITS image.
Other image formats should be used.
In other cases you might want your pixel values in a table format as plain text for input to other programs that don't recognize FITS.
ConvertType is created for such situations.
The recognized formats will increase with future updates, based on need.

The conversion is not only one way (from FITS to other formats), it also works the other way (except for the EPS and PDF formats@footnote{Because EPS and PDF are vector, not raster/pixelated, formats.}).
So you can also convert a JPEG image or text file into a FITS image.
Basically, other than EPS/PDF, you can use any of the recognized formats as different color channel inputs to get any of the recognized outputs.
So before explaining the options and arguments (in @ref{Invoking astconvertt}), we'll start with a short description of the recognized file types in @ref{Recognized file formats}, followed by a short introduction to digital color in @ref{Color}.

@menu
* Recognized file formats::     Recognized file formats
* Color::                       Some explanations on color.
* Invoking astconvertt::        Options and arguments to ConvertType.
@end menu

@node Recognized file formats, Color, ConvertType, ConvertType
@subsection Recognized file formats

The various standards and the file name extensions recognized by ConvertType are listed below.
Currently Gnuastro uses the file name's suffix to identify the format.

@table @asis
@item FITS or IMH
@cindex IRAF
@cindex Astronomical data format
Astronomical data are commonly stored in the FITS format (or the older IRAF @file{.imh} format).
A list of file name suffixes that indicate a file is in this format is given in @ref{Arguments}.

Each image extension of a FITS file only has one value per pixel/element.
Therefore, when used as input, each input FITS image contributes as one color channel.
If you want multiple extensions in one FITS file for different color channels, you have to repeat the file name multiple times and use the @option{--hdu}, @option{--hdu2}, @option{--hdu3} or @option{--hdu4} options to specify the different extensions.

@item JPEG
@cindex JPEG format
@cindex Raster graphics
@cindex Pixelated graphics
The JPEG standard was created by the Joint Photographic Experts Group.
It is currently one of the most commonly used image formats.
Its major advantage is the compression algorithm that is defined by the standard.
Like the FITS standard, this is a raster graphics format, which means that it is pixelated.

A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK) color channels.
If you only want to convert one JPEG image into other formats, there is no problem, however, if you want to use it in combination with other input files, make sure that the final number of color channels does not exceed four.
If it does, then ConvertType will abort and notify you.

@cindex Suffixes, JPEG images
The file name endings that are recognized as a JPEG file for input are:
@file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG}, @file{.jpe}, @file{.jif}, @file{.jfif} and @file{.jfi}.

@item TIFF
@cindex TIFF format
TIFF (or Tagged Image File Format) was originally designed as a common format for scanners in the early 90s and since then it has grown to become very general.
In many aspects, the TIFF standard is similar to the FITS image standard: it can allow data of many types (see @ref{Numeric data types}), and also allows multiple images to be stored in a single file (each image in the file is called a `directory' in the TIFF standard).
However, unlike FITS, it can only store images, it has no constructs for tables.
Another (inconvenient) difference with the FITS standard is that keyword names are stored as numbers, not human-readable text.

However, outside of astronomy, because of its support of different numeric data types, many fields use TIFF images for accurate (for example 16-bit integer or floating point) imaging data.

Currently ConvertType can only read TIFF images; if you are interested in writing TIFF images, please get in touch with us.

@item EPS
@cindex EPS
@cindex PostScript
@cindex Vector graphics
@cindex Encapsulated PostScript
The Encapsulated PostScript (EPS) format is essentially a one page PostScript file which has a specified size.
PostScript also includes non-image data, for example lines and texts.
It is a fully functional programming language to describe a document.
Therefore in ConvertType, EPS is only an output format and cannot be used as input.
Contrary to the FITS or JPEG formats, PostScript is not a raster format, but is categorized as vector graphics.

@cindex PDF
@cindex Adobe systems
@cindex PostScript vs. PDF
@cindex Compiled PostScript
@cindex Portable Document format
@cindex Static document description format
The Portable Document Format (PDF) is currently the most common format for documents.
Some believe that PDF has replaced PostScript and that PostScript is now obsolete.
This view is wrong: a PostScript file is an actual plain-text file that can be edited like any program source with any text editor.
To be able to display its programmed content or print, it needs to pass through a processor or compiler.
A PDF file can be thought of as the processed output of the compiler on an input PostScript file.
PostScript, EPS and PDF were created and are registered by Adobe Systems.

@cindex @TeX{}
@cindex @LaTeX{}
With these features in mind, you can see that when you are compiling a document with @TeX{} or @LaTeX{}, an EPS file is much more low-level than a JPEG, giving you much greater control and therefore quality.
Since EPS also includes vector graphic constructs, ConvertType uses such lines to make a thin border around the image, greatly improving its appearance in the document.
No matter the resolution of the display or printer, these lines will always be clear and not pixelated.
In the future, addition of text might be included (for example labels or object IDs) on the EPS output.
However, this can be done better with tools within @TeX{} or @LaTeX{} such as PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.

@cindex Binary image
@cindex Saving binary image
@cindex Black and white image
If the final input image (possibly after all operations on the flux explained below) is a binary image or only has two colors of black and white (in segmentation maps for example), then PostScript has another great advantage compared to other formats.
It allows for 1-bit pixels (pixels with a value of 0 or 1), which can decrease the output file size by a factor of 8.
So if a gray-scale image is binary, ConvertType will exploit this property in the EPS and PDF (see below) outputs.

@cindex Suffixes, EPS format
The standard formats for an EPS file are @file{.eps}, @file{.EPS}, @file{.epsf} and @file{.epsi}.
The EPS outputs of ConvertType have the @file{.eps} suffix.

@item PDF
@cindex Suffixes, PDF format
@cindex GPL Ghostscript
As explained above, a PDF document is a static document description format, viewing its result is therefore much faster and more efficient than PostScript.
To create a PDF output, ConvertType will make a PostScript page description and convert that to PDF using GPL Ghostscript.
The suffixes recognized for a PDF file are: @file{.pdf}, @file{.PDF}.
If GPL Ghostscript cannot be run on the PostScript file, the intermediate PostScript file will be kept and a warning will be printed.

@item @option{blank}
@cindex @file{blank} color channel
This is not actually a file type! But it can be used to fill one color channel with a blank value.
If this argument is given for any color channel, that channel will not be used in the output.

@item Plain text
@cindex Plain text
@cindex Suffixes, plain text
Plain text files have the advantage that they can be viewed with any text editor or on the command-line.
Most programs also support input as plain text files.
As input, each plain text file is considered to contain one color channel.

In ConvertType, the recognized extensions for plain text files are @file{.txt} and @file{.dat}.
As described in @ref{Invoking astconvertt}, if you just give these extensions (and not a full filename) as output, then automatic output will be performed to determine the final output name (see @ref{Automatic output}).
Besides these, when the format of a file cannot be recognized from its name, ConvertType will fall back to plain text mode.
So you can use any name (even without an extension) for a plain text input or output.
Just note that when the suffix is not recognized, automatic output will not be performed.

The basic input/output on plain text images is very similar to how tables are read/written as described in @ref{Gnuastro text table format}.
Simply put, the restrictions are very loose, and there is a convention to define a name, units, data type (see @ref{Numeric data types}), and comments for the data in a commented line.
The only difference is that as a table, a text file can contain many datasets (columns), but as a 2D image, it can only contain one dataset.
As a result, only one information comment line is necessary for a 2D image, and instead of the starting `@code{# Column N}' (@code{N} is the column number), the information line for a 2D image must start with `@code{# Image 1}'.
When ConvertType is asked to output to plain text file, this information comment line is written before the image pixel values.

When converting an image to plain text, consider the fact that if the image is large, the number of columns in each line will become very large, possibly making it very hard to open in some text editors.
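
For example, a tiny 2 by 3 image might be stored in plain text as below.
This is only a sketch: the name, units and comment on the information line are hypothetical, following the column convention of @ref{Gnuastro text table format}.

@example
# Image 1: MYIMAGE [counts,f32] A tiny example image
1.23  4.56  7.89
0.12  3.45  6.78
@end example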

@item Standard output (command-line)
This is very similar to the plain text output, but instead of creating a file to keep the printed values, they are printed on the command line.
This can be very useful when you want to redirect the results directly to another program in one command with no intermediate file.
The only difference is that only the pixel values are printed (with no information comment line).
To print to the standard output, set the output name to `@file{stdout}'.
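
For example, with the sketch below (on a hypothetical small image), the pixel values are printed directly in the terminal:

@example
$ astconvertt small.fits --output=stdout
@end example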

@end table

@node Color, Invoking astconvertt, Recognized file formats, ConvertType
@subsection Color

@cindex RGB
@cindex CMYK
@cindex Image
@cindex Color
@cindex Pixels
@cindex Colormap
@cindex Primary colors
Color is defined by mixing various measurements/filters.
In digital monitors or common digital cameras, colors are displayed/stored by mixing the three basic colors of red, green and blue (RGB) with various proportions.
When printing on paper, standard printers use the cyan, magenta, yellow and key (CMYK, key=black) color space.
In other words, for each displayed/printed pixel of a color image, the dataset/image has three or four values.

@cindex Color channel
@cindex Channel, color
To store/show the three values for each pixel, cameras and monitors allocate a certain fraction of each pixel's area to red, green and blue filters.
These three filters are thus built into the hardware at the pixel level.
However, because measurement accuracy is very important in scientific instruments, and we want to do measurements (take images) with various/custom filters (without having to order a new expensive detector!), scientific detectors use the full area of the pixel to store one value for it in a single/mono channel dataset.
To make measurements in different filters, we just place a filter in the light path before the detector.
Therefore, the FITS format that is used to store astronomical datasets is inherently a mono-channel format (see @ref{Recognized file formats} or @ref{Fits}).

When a subject has been imaged in multiple filters, you can feed each different filter into the red, green and blue channels and obtain a colored visualization.
In ConvertType, you can do this by giving each separate single-channel dataset (for example in the FITS image format) as an argument (in the proper order), then asking for the output in a format that supports multi-channel datasets (for example JPEG or PDF, see the examples in @ref{Invoking astconvertt}).

@cindex Grayscale
@cindex Visualization
@cindex Colormap, HSV
@cindex Colormap, gray-scale
@cindex HSV: Hue Saturation Value
As discussed above, color is not defined when a dataset/image contains a single value for each pixel.
However, we interact with scientific datasets through monitors or printers (which allow multiple values per pixel and produce color with them).
As a result, there is a lot of freedom in visualizing a single-channel dataset.
The most basic is to use shades of black (because of its strong contrast with white).
This scheme is called grayscale.
To help in visualization, more complex mappings can be defined.
For example, the values can be scaled to a range of 0 to 360 and used as the ``Hue'' term of the @url{https://en.wikipedia.org/wiki/HSL_and_HSV, Hue-Saturation-Value} (HSV) color space (while fixing the ``Saturation'' and ``Value'' terms).
In ConvertType, you can use the @option{--colormap} option to choose between different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.

Since grayscale is a commonly used mapping of single-valued datasets, we'll continue with a closer look at how it is stored.
One way to represent a gray-scale image in different color spaces is to use the same proportions of the primary colors in each pixel.
This is the common way most FITS image viewers work: for each pixel, they fill all the channels with the single value.
While this is necessary for displaying a dataset, there are downsides when storing/saving this type of grayscale visualization (for example in a paper).

@itemize

@item
Three (for RGB) or four (for CMYK) values have to be stored for every pixel, which makes the output file very large (in terms of bytes).

@item
If printing, the printing errors of each color channel can make the printed image slightly more blurred than it actually is.

@end itemize

@cindex PNG standard
@cindex Single channel CMYK
To solve both these problems when storing grayscale visualization, the best way is to save a single-channel dataset into the black channel of the CMYK color space.
The JPEG standard is the only common standard that accepts the CMYK color space.

The JPEG and EPS standards set two sizes for the number of bits in each channel: 8-bit and 12-bit.
The former is by far the most common and is what is used in ConvertType.
Therefore, each channel should have values between 0 and @mymath{2^8-1=255}.
From this we see that each pixel in a gray-scale image is one byte (8 bits) long; in an RGB image it is 3 bytes long, and in CMYK it is 4 bytes long.
But thanks to the JPEG compression algorithms, when all the pixels of one channel have the same value, that channel is compressed to one pixel.
Therefore a grayscale image and a CMYK image that only has the K-channel filled have approximately the same file size.


@node Invoking astconvertt,  , Color, ConvertType
@subsection Invoking ConvertType

ConvertType will convert any recognized input file type to any specified output type.
The executable name is @file{astconvertt} with the following general template

@example
$ astconvertt [OPTION...] InputFile [InputFile2] ... [InputFile4]
@end example

@noindent
One line examples:

@example
## Convert an image in FITS to PDF:
$ astconvertt image.fits --output=pdf

## Similar to before, but use the Viridis color map:
$ astconvertt image.fits --colormap=viridis --output=pdf

## Convert an image in JPEG to FITS (with multiple extensions
## if its color):
$ astconvertt image.jpg -oimage.fits

## Use three plain text 2D arrays to create an RGB JPEG output:
$ astconvertt f1.txt f2.txt f3.fits -o.jpg

## Use two images and one blank for an RGB EPS output:
$ astconvertt M31_r.fits M31_g.fits blank -oeps

## Directly pass input from output of another program through Standard
## input (not a file).
$ cat 2darray.txt | astconvertt -oimg.fits
@end example

@noindent
The output's file format will be interpreted from the value given to the @option{--output} option.
It can either be given on the command-line or in any of the configuration files (see @ref{Configuration files}).
Note that if the output suffix is not recognized, it will default to plain text format, see @ref{Recognized file formats}.

@cindex Standard input
At most four input files (one for each color channel for formats that allow it) are allowed in ConvertType.
The first input dataset can either be a file or come from Standard input (see @ref{Standard input}).
The order of multiple input files is important.
After reading the input file(s) the number of color channels in all the inputs will be used to define which color space to use for the outputs and how each color channel is interpreted.

Some formats can allow more than one color channel (for example in the JPEG format, see @ref{Recognized file formats}).
If there is one input dataset (color channel) the output will be gray-scale, if three input datasets (color channels) are given, they are respectively considered to be the red, green and blue color channels.
Finally, if there are four color channels they will be cyan, magenta, yellow and black (CMYK colors).

The value to @option{--output} (or @option{-o}) can be either a full file name or just the suffix of the desired output format.
In the former case, it will be used for the output.
In the latter case, the name of the output file will be set based on the automatic output guidelines, see @ref{Automatic output}.
Note that the suffix can optionally start with a @file{.} (dot), so for example @option{--output=.jpg} and @option{--output=jpg} are equivalent.
See @ref{Recognized file formats}.

Besides the common set of options explained in @ref{Common options}, the options to ConvertType can be classified into input, output and flux related options.
The majority of the options are to do with the flux range.
Astronomical data usually have a very large dynamic range (difference between maximum and minimum value) and different subjects might be better demonstrated with a limited flux range.

@noindent
Input:
@table @option
@item -h STR/INT
@itemx --hdu=STR/INT
In ConvertType, it is possible to call the HDU option multiple times for the different input FITS or TIFF files in the same order that they are called on the command-line.
Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may contain multiple color channels (for example when the image is in RGB).

Except for the fact that multiple calls are possible, this option is identical to the common @option{--hdu} in @ref{Input output options}.
The number of calls to this option cannot be less than the number of input FITS or TIFF files; if there are more, the extra HDUs will be ignored.
Note that they will be read in the order described in @ref{Configuration file precedence}.

Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes numbers (counting from zero, similar to CFITSIO) for `directory' identification.
Hence the concept of names is not defined for the directories and the values to this option for TIFF files must be numbers.
@end table

@noindent
Output:
@table @option

@item -w FLT
@itemx --widthincm=FLT
The width of the output in centimeters.
This is only relevant for those formats that accept such a width (not plain text for example).
For most digital purposes, the number of pixels is far more important than the value to this parameter because you can adjust the absolute width (in inches or centimeters) in your document preparation program.

@item -b INT
@itemx --borderwidth=INT
@cindex Border on an image
The width of the border to be put around the EPS and PDF outputs in units of PostScript points.
There are 72 or 28.35 PostScript points in an inch or centimeter respectively.
In other words, there are roughly 3 PostScript points in every millimeter.
If you are planning to add a border, note that its apparent thickness is highly correlated with the value you give to the @option{--widthincm} parameter.

Unfortunately in the document structuring convention of the PostScript language, the ``bounding box'' has to be in units of PostScript points with no fractions allowed.
So the border values can only be specified as integers.
To have a final border that is thinner than one PostScript point in your document, you can ask for a larger width in ConvertType and then scale down the output EPS or PDF file in your document preparation program.
For example by setting @command{width} in your @command{includegraphics} command in @TeX{} or @LaTeX{}.
Since it is vector graphics, changes of size have no effect on the quality of your output (pixels don't get different values).

@item -x
@itemx --hex
@cindex ASCII85 encoding
@cindex Hexadecimal encoding
Use Hexadecimal encoding in creating EPS output.
By default the ASCII85 encoding is used, which offers a much better compression ratio (nearly 40 percent) than Hexadecimal encoding.
When converted to PDF (or included in @TeX{} or @LaTeX{} which is finally saved as a PDF file), an efficient binary encoding is used, which is far more efficient than both of them.
The choice of EPS encoding will thus have no effect on the final PDF.

However, if you plan to transfer the EPS files themselves (for example if you want to submit your paper to arXiv or journals in PostScript), their storage size might become important if you have large images or lots of small ones.

@item -u INT
@itemx --quality=INT
@cindex JPEG compression quality
@cindex Compression quality in JPEG
@cindex Quality of compression in JPEG
The quality (compression) of the output JPEG file with values from 0 to 100 (inclusive).
For other formats the value to this option is ignored.
Note that only in gray-scale (when one input color channel is given) will this actually be the exact quality (each pixel will correspond to one input value).
If it is in color mode, some degradation will occur.
While the JPEG standard does support lossless compression, it is not commonly supported.

@item --colormap=STR[,FLT,...]
The color map to visualize a single channel.
The first value given to this option is the name of the color map, which is shown below.
Some color maps can be configured.
In this case, the configuration parameters are optionally given as numbers following the name of the color map; for example, see @option{hsv}.
The table below contains the usable names of the color maps that are currently supported:

@table @option
@item gray
@itemx grey
@cindex Colorspace, gray-scale
Grayscale color map.
This color map doesn't have any parameters.
The full dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored in the requested format.

@item hsv
@cindex Colorspace, HSV
@cindex Hue, saturation, value
@cindex HSV: Hue Saturation Value
Hue, Saturation, Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color map.
If no values are given after the name (@option{--colormap=hsv}), the dataset will be scaled to 0 and 360 for hue covering the full spectrum of colors.
However, you can limit the range of hue (to show only a special color range) by explicitly requesting them after the name (for example @option{--colormap=hsv,20,240}).

The mapping of a single-channel dataset to HSV is done through the Hue and Value elements: Lower dataset elements have lower ``value'' @emph{and} lower ``hue''.
This creates darker colors for fainter parts, while also respecting the range of colors.
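
For example, the sketch below (on a hypothetical input) limits the hue range as described above:

@example
$ astconvertt image.fits --colormap=hsv,20,240 --output=pdf
@end example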

@item viridis
@cindex matplotlib
@cindex Colormap: Viridis
@cindex Viridis: Colormap
Viridis is the default colormap of the popular Matplotlib module of Python and available in many other visualization tools like PGFPlots.

@item sls
@cindex DS9
@cindex SAO DS9
@cindex SLS Color
@cindex Colormap: SLS
The SLS color range, taken from the commonly used @url{http://ds9.si.edu,SAO DS9}.
The advantage of this color range is that it starts with black, going into dark blue and finishes with the brighter colors of red and white.
So unlike the HSV color range, it includes black and white, and brighter colors (like yellow and red) show the larger values.

@item sls-inverse
@cindex Colormap: SLS-inverse
The inverse of the SLS color map (see above), where the lowest value corresponds to white and the highest value is black.
While SLS is good for visualizing on the monitor, SLS-inverse is good for printing.
@end table

@item --rgbtohsv
When there are three input channels and the output is in the FITS format, interpret the three input channels as red, green and blue channels (RGB) and convert them to the hue, saturation, value (HSV) color space.

The currently supported output formats of ConvertType don't have native support for HSV.
Therefore this option is only supported when the output is in FITS format and each of the hue, saturation and value arrays can be saved as one FITS extension in the output for further analysis (for example to select a certain color).

@end table

@noindent
Flux range:

@table @option

@item -c STR
@itemx --change=STR
@cindex Change converted pixel values
Change pixel values with the following format: @option{"from1:to1, from2:to2,..."}.
This option is very useful in displaying labeled pixels (not actual data images which have noise) like segmentation maps.
In labeled images, usually a group of pixels have a fixed integer value.
With this option, you can manipulate the labels before the image is displayed to get a better output for print, or to emphasize a particular set of labels and ignore the rest.
The labels in the images will be changed in the same order given.
By default, the pixel values are first converted and then truncated (see @option{--fluxlow} and @option{--fluxhigh}).

You can use any number for the values irrespective of your final output; your given values are stored and used in the double-precision floating point format.
So for example if your input image has labels from 1 to 20000 and you only want to display those with labels 957 and 11342 then you can run ConvertType with these options:

@example
$ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
   --fluxhigh=1e5 segmentationmap.fits --output=jpg
@end example

@noindent
While the output JPEG format is only 8 bit, this operation is done in an intermediate step which is stored in double precision floating point.
The pixel values are converted to 8-bit after all operations on the input fluxes have been completed.
By placing the value in double quotes you can use as many spaces as you like for better readability.

@item -C
@itemx --changeaftertrunc
Change pixel values (with @option{--change}) after truncation of the flux values; by default it is the opposite.

@item -L FLT
@itemx --fluxlow=FLT
The minimum flux (pixel value) to display in the output image, any pixel value below this value will be set to this value in the output.
If the value to this option is the same as @option{--fluxhigh}, then no flux truncation will be applied.
Note that when multiple channels are given, this value is used for all the color channels.

@item -H FLT
@itemx --fluxhigh=FLT
The maximum flux (pixel value) to display in the output image, see
@option{--fluxlow}.
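
For example, the sketch below (with a hypothetical input and flux range) clips all pixel values outside the range of 0 to 100 before writing the JPEG output:

@example
$ astconvertt image.fits --fluxlow=0 --fluxhigh=100 \
              --output=jpg
@end example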

@item -m INT
@itemx --maxbyte=INT
This is only used for the JPEG and EPS output formats which have an 8-bit space for each channel of each pixel.
The maximum value in each pixel can therefore be @mymath{2^8-1=255}.
With this option you can change (decrease) the maximum value.
By doing so you will decrease the dynamic range.
It can be useful if you plan to use those values for other purposes.

@item -A INT
@itemx --forcemin=INT
Enforce the value of @option{--fluxlow} (when it is given), even if it is smaller than the minimum of the dataset and the output is a format supporting color.
This is particularly useful when you are converting a number of images to a common image format like JPEG or PDF with a single command and want them all to have the same range of colors, independent of the contents of the dataset.
Note that if the minimum value is smaller than @option{--fluxlow}, then this option is redundant.

@cindex PDF
@cindex EPS
@cindex PostScript
By default, when the dataset only has two values, @emph{and} the output format is PDF or EPS, ConvertType will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
This can greatly help reduce the file size.
However, when @option{--fluxlow} or @option{--fluxhigh} are called, this optimization is disabled: even though there are only two values (the image is binary), the difference between them does not correspond to the full contrast of black and white.

@item -B INT
@itemx --forcemax=INT
Similar to @option{--forcemin}, but for the maximum.


@item -i
@itemx --invert
For 8-bit output types (JPEG, EPS, and PDF for example) the final stored value is inverted so white becomes black and vice versa.
The reason for this is that astronomical images usually have a very large area of blank sky in them; without inverting, a large area of the output would be black.
Note that this behavior is ideal for gray-scale images; if you want a color image, the colors are going to be mixed up.

@end table

@node Table, Query, ConvertType, Data containers
@section Table

Tables are the products of processing astronomical images and spectra.
For example in Gnuastro, MakeCatalog will process the defined pixels over an object and produce a catalog (see @ref{MakeCatalog}).
For each identified object, MakeCatalog can print its position on the image or sky, its total brightness, and much other information that is deducible from the given image.
Each one of these properties is a column in its output catalog (or table) and for each input object, we have a row.

When there are only a small number of objects (rows) and not too many properties (columns), a simple plain-text file is usually enough to store, transfer, or even use the produced data.
However, to be more efficient in all these aspects, astronomers have defined the FITS binary table standard to store data in a binary (0 and 1) format, not plain text.
This can offer major advantages in all those aspects: the file size will be greatly reduced and the reading and writing will be faster (because the RAM and CPU also work in binary).

The FITS standard also defines a standard for ASCII tables, where the data are stored in the human readable ASCII format, but within the FITS file structure.
These are mainly useful for keeping ASCII data along with images and possibly binary data as multiple (conceptually related) extensions within a FITS file.
The acceptable table formats are fully described in @ref{Tables}.

@cindex AWK
@cindex GNU AWK
Binary tables are not easily readable by human eyes.
There is no fixed/unified standard on how the zeros and ones should be interpreted.
The Unix-like operating systems have flourished because of a simple fact: communication between the various tools is based on human readable characters@footnote{In ``The art of Unix programming'', Eric Raymond makes this suggestion to programmers: ``When you feel the urge to design a complex binary file format, or a complex binary application protocol, it is generally wise to lie down until the feeling passes.''.
This is a great book and strongly recommended, give it a look if you want to truly enjoy your work/life in this environment.}.
So while the FITS table standards are very beneficial for the tools that recognize them, they are hard to use in the vast majority of available software.
This creates limitations for their generic use.

`Table' is Gnuastro's solution to this problem.
With Table, FITS tables (ASCII or binary) are directly accessible to power-users of Unix-like operating systems (those working on the command-line or shell, see @ref{Command-line interface}).
With Table, a FITS table (in binary or ASCII formats) is only one command away from AWK (or any other tool you want to use).
Just like a plain text file that you read with the @command{cat} command.
You can pipe the output of Table into any other tool for higher-level processing, see the examples in @ref{Invoking asttable} for some simple examples.

@menu
* Column arithmetic::           How to do operations on table columns.
* Invoking asttable::           Options and arguments to Table.
@end menu

@node Column arithmetic, Invoking asttable, Table, Table
@subsection Column arithmetic

After reading the requested columns from the input table, you can also do operations/arithmetic on the columns and save the resulting values as new column(s) in the output table (possibly in between other requested columns).
To enable column arithmetic, the first 6 characters of the value to @option{--column} (@code{-c}) should be the arithmetic activation word `@option{arith }' (note the space character in the end, after `@code{arith}').

After the activation word, you can use the reverse polish notation to identify the operators and their operands, see @ref{Reverse polish notation}.
Just note that white-space characters are used between the tokens of the arithmetic expression and that they are meaningful to the command-line environment.
Therefore the whole expression (including the activation word) has to be quoted on the command-line or in a shell script (see the examples below).

To identify a column you can directly use its name, or specify its number (counting from one, see @ref{Selecting table columns}).
When you are giving a column number, it is necessary to prefix the number with a @code{$}, similar to AWK.
Otherwise the number is not distinguishable from a constant number to use in the arithmetic operation.

For example with the command below, the first two columns of @file{table.fits} will be printed along with a third column that is the result of multiplying the first column with @mymath{10^{10}} (for example to convert wavelength from Meters to Angstroms).
Note that without the `@key{$}', it is not possible to distinguish between ``1'' as a column-counter, or as a constant number to use in the arithmetic operation.
Also note that because of the significance of @key{$} for the command-line environment, the single-quotes are used here (as in an AWK expression), not double-quotes.

@example
$ asttable table.fits -c1,2 -c'arith $1 1e10 x'
@end example

@cartouche
@noindent
@strong{Single quotes when string contains @key{$}}: On the command-line, or in shell-scripts, @key{$} is used to expand variables, for example @code{echo $PATH} prints the value (a string of characters) in the variable @code{PATH}, it will not simply print @code{$PATH}.
This operation is also permitted within double quotes, so @code{echo "$PATH"} will produce the same output.
This is good when printing values, for example in the command below, @code{$PATH} will expand to the value within it.

@example
$ echo "My path is: $PATH"
@end example

If you actually want to return the literal string @code{$PATH}, not the value in the @code{PATH} variable (like the scenario here in column arithmetic), you should put it in single quotes like below.
The printed value here will include the @code{$}; please try it to see for yourself and compare to the command above.

@example
$ echo 'My path is: $PATH'
@end example

Therefore, when your column arithmetic involves the @key{$} sign (to specify columns by number), quote your @code{arith } string with a single quotation mark.
Otherwise you can use both single or double quotes.
@end cartouche

Alternatively, if the columns have meta-data and the first two are respectively called @code{AWAV} and @code{SPECTRUM}, the command above is equivalent to the command below.
Note that the character `@key{$}' is no longer necessary in this scenario (because names will not be confused with numbers):

@example
$ asttable table.fits -cAWAV,SPECTRUM -c'arith AWAV 1e10 x'
@end example

Comparison of the two commands above clearly shows why it is recommended to use column names instead of numbers.
When the columns have descriptive names, the command/script actually becomes much more readable, describing the intent of the operation.
It is also independent of the low-level table structure: for the second command, the position of the @code{AWAV} and @code{SPECTRUM} columns in @file{table.fits} is irrelevant.

By nature, column arithmetic changes the values of the data within the column.
So the old column meta data can't be used any more.
By default the new column created for the arithmetic operation will be given generic metadata (for example its name will be @code{ARITH_1}, which is hardly useful!).
But metadata are critically important, and it is good practice to always have short, but descriptive, names for each column, its units, and also some comments for more explanation.
To add metadata to a column, you can use the @option{--colmetadata} option that is described in @ref{Invoking asttable}.

Finally, since an arithmetic expression is simply a value to @option{--column}, it doesn't necessarily have to be a separate option, so the commands above are also identical to the command below (note that this only has one @option{-c} option).
Just be very careful with the quoting!

@example
$ asttable table.fits -cAWAV,SPECTRUM,'arith AWAV 1e10 x'
@end example

Almost all the arithmetic operators of @ref{Arithmetic operators} are also supported for column arithmetic in Table.
In particular, the few that are not present in the Gnuastro library aren't yet supported.
For a list of the Gnuastro library arithmetic operators, please see the macros starting with @code{GAL_ARITHMETIC_OP} and ending with the operator name in @ref{Arithmetic on datasets}.
Besides the operators in @ref{Arithmetic operators}, several operators are only available in Table to use on table columns.



@cindex WCS: World Coordinate System
@cindex World Coordinate System (WCS)
@table @code
@item  sin
@itemx cos
@itemx tan
@cindex Trigonometry
Basic trigonometric functions.
They take one operand, in units of degrees.
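
For example, the sketch below (assuming a hypothetical @code{ANGLE} column, in degrees) appends its sine as a new column:

@example
$ asttable table.fits -cANGLE -c'arith ANGLE sin'
@end example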

@item  asin
@itemx acos
@itemx atan
Inverse trigonometric functions.
They take one operand and the returned values are in units of degrees.
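
For example, the command below should return 30 (in degrees):

@example
$ echo "0.5" | asttable -c'arith $1 asin'    # --> 30
@end example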

@item atan2
Inverse tangent (output in units of degrees) which uses the signs of the input coordinates to distinguish between the quadrants.
This operator therefore needs two operands: the first popped operand is assumed to be the X axis position of the point, and the second popped operand is its Y axis coordinate.

For example see the commands below for four points in the four quadrants (if you want to try them, you don't need to type/copy the parts after @key{#}).
The first point (2,2) is in the first quadrant, therefore the returned angle is 45 degrees.
But the second, third and fourth points are in the fourth, third and second quadrants respectively, and the returned angles correctly reflect each point's quadrant.

@example
$ echo " 2  2" | asttable -c'arith $2 $1 atan2'   # -->   45
$ echo " 2 -2" | asttable -c'arith $2 $1 atan2'   # -->  -45
$ echo "-2 -2" | asttable -c'arith $2 $1 atan2'   # --> -135
$ echo "-2  2" | asttable -c'arith $2 $1 atan2'   # -->  135
@end example

However, if you simply use the classic arc-tangent operator (@code{atan}) for the same points, the result will only be in two quadrants as you see below:

@example
echo " 2  2" | "$utility" -c'arith $2 $1 / atan'  # -->   45
echo " 2 -2" | "$utility" -c'arith $2 $1 / atan'  # -->  -45
echo "-2 -2" | "$utility" -c'arith $2 $1 / atan'  # -->   45
echo "-2  2" | "$utility" -c'arith $2 $1 / atan'  # -->  -45
@end example

@item  sinh
@itemx cosh
@itemx tanh
@cindex Hyperbolic functions
Hyperbolic sine, cosine, and tangent.
These operators take a single operand.

@item  asinh
@itemx acosh
@itemx atanh
Inverse hyperbolic sine, cosine, and tangent.
These operators take a single operand.
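
For example, the pair of commands below show a hyperbolic function and its inverse (note that, unlike the trigonometric operators above, the operands here are not in degrees):

@example
$ echo "1" | asttable -c'arith $1 sinh'          # --> 1.1752
$ echo "1.1752" | asttable -c'arith $1 asinh'    # --> 1
@end example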

@item wcstoimg
Convert the given WCS positions to image/dataset coordinates based on the number of dimensions in the WCS structure of the extension/HDU given to @option{--wcshdu} within the file given to @option{--wcsfile}.
It will output the same number of columns.
The first popped operand is the last FITS dimension.

For example the two commands below (which have the same output) will produce 5 columns.
The first three columns are the input table's ID, RA and Dec columns.
The fourth and fifth columns will be the pixel positions in @file{image.fits} that correspond to each RA and Dec.

@example
$ asttable table.fits -cID,RA,DEC,'arith RA DEC wcstoimg' \
           --wcsfile=image.fits
$ asttable table.fits -cID,RA -cDEC \
           -c'arith RA DEC wcstoimg' --wcsfile=image.fits
@end example

@item imgtowcs
Similar to @code{wcstoimg}, except that image/dataset coordinates are converted to WCS coordinates.
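
For example, assuming the input table has pixel-coordinate columns named @code{X} and @code{Y} (hypothetical names), a command like the one below could be used; the WCS is again read with @option{--wcsfile}:

@example
$ asttable table.fits -cID,'arith X Y imgtowcs' \
           --wcsfile=image.fits
@end example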

@item distance-flat
Return the distance between two points assuming they are on a flat surface.
Note that each point needs two coordinates, so this operator needs four operands (currently it only works for 2D spaces).
The first and second popped operands are considered to belong to one point and the third and fourth popped operands to the second point.

Each of the input points can be a single coordinate or a full table column (containing many points).
In other words, the following commands are all valid:

@example
$ asttable table.fits \
           -c'arith X1 Y1 X2 Y2 distance-flat'
$ asttable table.fits \
           -c'arith X Y 12.345 6.789 distance-flat'
$ asttable table.fits \
           -c'arith 12.345 6.789 X Y distance-flat'
@end example

In the first case, we are assuming that @file{table.fits} contains the four columns @code{X1}, @code{Y1}, @code{X2} and @code{Y2}.
The column returned by this operator will be the distance between the two points in each row, with coordinates (@code{X1}, @code{Y1}) and (@code{X2}, @code{Y2}).
In other words, for each row, the distance between its two points is calculated.
In the second and third cases (which are identical), it is assumed that @file{table.fits} has the two columns @code{X} and @code{Y}.
The column returned by this operator will be the distance of each row's point from the fixed point at (12.345, 6.789).

@item distance-on-sphere
Return the spherical angular distance (along a great circle, in degrees) between the given two points.
Note that each point needs two coordinates (in degrees), so this operator needs four operands.
The first and second popped operands are considered to belong to one point and the third and fourth popped operands to the second point.

Each of the input points can be a single coordinate or a full table column (containing many points).
In other words, the following commands are all valid:

@example
$ asttable table.fits \
           -c'arith RA1 DEC1 RA2 DEC2 distance-on-sphere'
$ asttable table.fits \
           -c'arith RA DEC 9.876 5.432 distance-on-sphere'
$ asttable table.fits \
           -c'arith 9.876 5.432 RA DEC distance-on-sphere'
@end example

In the first case, we are assuming that @file{table.fits} contains the four columns @code{RA1}, @code{DEC1}, @code{RA2} and @code{DEC2}.
The column returned by this operator will be the angular distance between the two points in each row, with coordinates (@code{RA1}, @code{DEC1}) and (@code{RA2}, @code{DEC2}).
In other words, for each row, the angular distance between its two points is calculated.
In the second and third cases (which are identical), it is assumed that @file{table.fits} has the two columns @code{RA} and @code{DEC}.
The column returned by this operator will be the angular distance of each row's point from the fixed point at (9.876, 5.432).

The distance (along a great circle) on a sphere between two points is calculated with the equation below, where @mymath{r_1}, @mymath{r_2}, @mymath{d_1} and @mymath{d_2} are the right ascensions and declinations of points 1 and 2.

@dispmath {\cos(d)=\sin(d_1)\sin(d_2)+\cos(d_1)\cos(d_2)\cos(r_1-r_2)}

@item ra-to-degree
Convert the hour-wise Right Ascension (RA) string, in the sexagesimal format of @code{_h_m_s} or @code{_:_:_}, to degrees.
Note that the input column has to have a string format.
In FITS tables, string columns are well-defined.
For plain-text tables, please follow the standards defined in @ref{Gnuastro text table format}, otherwise the string column won't be read.

@example
$ asttable catalog.fits -c'arith RA ra-to-degree'
$ asttable catalog.fits -c'arith $5 ra-to-degree'
@end example

@item dec-to-degree
Convert the sexagesimal Declination (Dec) string, in the format of @code{_d_m_s} or @code{_:_:_}, to degrees (a single floating point number).
For more details, please see the description of the @code{ra-to-degree} operator above.
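
For example, assuming a hypothetical Dec column called @code{DEC}:

@example
$ asttable catalog.fits -c'arith DEC dec-to-degree'
@end example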

@item degree-to-ra
@cindex Sexagesimal
@cindex Right Ascension
Convert degrees (a column with a single floating point number) to the Right Ascension, RA, string (in the sexagesimal format hours, minutes and seconds, written as @code{_h_m_s}).
The output will be a string column so no further mathematical operations can be done on it.
The output file can be in any format (for example FITS or plain-text).
If it is plain-text, the string column will be written following the standards described in @ref{Gnuastro text table format}.

@item degree-to-dec
@cindex Declination
Convert degrees (a column with a single floating point number) to the Declination, Dec, string (in the format of @code{_d_m_s}).
See the description of @code{degree-to-ra} above for more on the format of the output.
@end table


@node Invoking asttable,  , Column arithmetic, Table
@subsection Invoking Table

Table will read/write, select, convert, or show the information of the columns in FITS ASCII tables, FITS binary tables, and plain text table files (see @ref{Tables}).
Output columns can also be determined by number or regular expression matching of column names, units, or comments.
The executable name is @file{asttable} with the following general template

@example
$ asttable [OPTION...] InputFile
@end example

@noindent
One line examples:

@example
## Get the table column information (name, data type, or units):
$ asttable bintab.fits --information

## Print columns named RA and DEC, followed by all the columns where
## the name starts with "MAG_":
$ asttable bintab.fits --column=RA --column=DEC --column=/^MAG_/

## Similar to the above, but with one call to `--column' (or `-c'),
## also sort the rows by the input's photometric redshift (`Z_PHOT')
## column. To confirm the sort, you can add `Z_PHOT' to the columns
## to print.
$ asttable bintab.fits -cRA,DEC,/^MAG_/ --sort=Z_PHOT

## Similar to the above, but only print rows that have a photometric
## redshift between 2 and 3.
$ asttable bintab.fits -cRA,DEC,/^MAG_/ --range=Z_PHOT,2:3

## Only print rows with a value in the 10th column above 100000:
$ asttable bintab.fits --range=10,10e5,inf

## Only print the 2nd column, and the third column multiplied by 5,
## saving the resulting two columns in `table.fits'.
$ asttable bintab.fits -c2,'arith $2 5 x' -otable.fits

## Sort the output columns by the third column, save output:
$ asttable bintab.fits --sort=3 -ooutput.txt

## Subtract the first column from the second in `cat.fits' (can also
## be a text table) and keep the third and fourth columns.
$ asttable cat.txt -c'arith $2 $1 -',3,4 -ocat.fits

## Convert sexagesimal coordinates to degrees (same can be done in a
## large table given as argument).
$ echo "7h34m35.5498 31d53m14.352s" | asttable

## Convert RA and Dec in degrees to sexagesimal (same can be done in a
## large table given as argument).
echo "113.64812416667 31.88732" \
     | asttable -c'arith $1 degree-to-ra $2 degree-to-dec'
@end example

Table's input dataset can be given either as a file or from Standard input (see @ref{Standard input}).
In the absence of selected columns, all the input's columns and rows will be written to the output.
If an output file is explicitly requested (with @option{--output}), the output table will be written to it.
When no output file is explicitly requested, the output table will be written to the standard output.

If the specified output is a FITS file, the type of FITS table (binary or ASCII) will be determined from the @option{--tabletype} option.
If the output is not a FITS file, it will be printed as a plain text table (with space characters between the columns).
When the columns are accompanied by metadata (like column name, units, or comments), this information will also be printed in the plain text file before the table, as described in @ref{Gnuastro text table format}.

For the full list of options common to all Gnuastro programs please see @ref{Common options}.
Options can also be stored in directory, user or system-wide configuration files to avoid repeating them on the command-line, see @ref{Configuration files}.
Table does not follow the Automatic output convention that is common in most Gnuastro programs (see @ref{Automatic output}).
Thus, in the absence of an output file, the selected columns will be printed on the command-line with no column information, ready for redirecting to other tools like AWK or sort, similar to the examples above.

@cartouche
@noindent
@strong{Sexagesimal coordinates as floats in plain-text tables:}
When a column is determined to be a floating point type (32-bit or 64-bit) in a plain-text table, it can contain sexagesimal values in the format of `@code{_h_m_s}' (for RA) and `@code{_d_m_s}' (for Dec).
In this case, the string will be immediately converted to a single floating point number (in units of degrees) and stored in memory with the rest of the column or table.
Besides being useful in large tables, this feature also makes conversion of sexagesimal coordinates to degrees very easy, for example:

@example
echo "7h34m35.5498 31d53m14.352s" | asttable
@end example

@noindent
The inverse can also be done with the more general column arithmetic
operators:
@example
echo "113.64812416667 31.88732" \
     | asttable -c'arith $1 degree-to-ra $2 degree-to-dec'
@end example
@end cartouche

@table @option

@item -i
@itemx --information
Only print the column information in the specified table on the command-line and exit.
Each column's information (number, name, units, data type, and comments) will be printed as a row on the command-line.
Note that the FITS standard only requires the data type (see @ref{Numeric data types}), and in plain text tables, no meta-data/information is mandatory.
Gnuastro has its own convention in the comments of a plain text table to store and transfer this information as described in @ref{Gnuastro text table format}.

This option will take precedence over the @option{--column} option, so when it is called along with requested columns, the latter will be ignored.
This can be useful if you forget the identifier of a column after you have already typed some on the command-line.
You can simply add a @option{-i} and run Table to see the whole list and remember.
Then you can use the shell history (with the up arrow key on the keyboard) to retrieve the last command with all the previously typed columns present, delete @option{-i} and add the identifier you had forgotten.

@cindex AWK
@cindex GNU AWK
@item -c STR/INT
@itemx --column=STR/INT
Set the output columns, either by specifying the column number or name.
For more on selecting columns, see @ref{Selecting table columns}.
If a value of this option starts with `@code{arith }', this option will do the requested operations/arithmetic on the specified columns and output the result in that place (among other requested columns).
For more on column arithmetic see @ref{Column arithmetic}.

To ask for multiple columns, this option can be used in two ways: 1) multiple calls to this option, 2) using a comma between each column specifier in one call to this option.
These different solutions may be mixed in one call to Table: for example, @option{-cRA,DEC -cMAG}, @option{-cRA -cDEC -cMAG} and @option{-cRA,DEC,MAG} are all equivalent.
The order of the output columns will be the same order given to the option or in the configuration files (see @ref{Configuration file precedence}).

This option is not mandatory: if no specific columns are requested, all the input table's columns will be output.
When this option is called multiple times, it is possible to output one column more than once.

@item -w STR
@itemx --wcsfile=STR
FITS file that contains the WCS to be used in the @code{wcstoimg} and @code{imgtowcs} operators of @option{--column} (see above).
The extension name/number within the FITS file can be specified with @option{--wcshdu}.

If the value to this option is @option{none}, no WCS will be written in the output.

@item -W STR
@itemx --wcshdu=STR
FITS extension/HDU that contains the WCS to be used in the @code{wcstoimg} and @code{imgtowcs} operators of @option{--column} (see above).
The FITS file name can be specified with @option{--wcsfile}.
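
For example, a hypothetical call using both options could look like this (reading the WCS from extension 1 of @file{image.fits}):

@example
$ asttable table.fits -c'arith RA DEC wcstoimg' \
           --wcsfile=image.fits --wcshdu=1
@end example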

@item -L STR
@itemx --catcolumnfile=STR
Concatenate (or add, or append) the columns of this option's value (a filename) to the output columns.
This option may be called multiple times (to add columns from more than one file into the final output); the columns from each file will be added in the same order that this option is called.

By default all the columns of the given file will be appended, if you only want certain columns to be appended, use the @option{--catcolumns} option to specify their name or number (see @ref{Selecting table columns}).
Note that the columns given to @option{--catcolumns} must be present in all the given files (if this option is called more than once).

The concatenation is done after any column selection (for example with @option{--column}) or row selection (for example with @option{--range}) is applied to the main input table given to Table.
The number of rows in the file(s) given to this option has to be the same as the number of rows in the final output table (the table that would be produced if this option wasn't given).

If the file given to this option is a FITS file, it's necessary to also define the corresponding HDU/extension with @option{--catcolumnhdu}.
Also note that no operation (for example row selection or arithmetic) is applied to the table given to this option.

If the appended columns have a name, the column names of each file will be appended with a @code{-N}, where @code{N} is a counter starting from 1 for each appended file.
This is done because when concatenating columns from multiple tables (more than two) into one, they may have the same name, and it's not good practice to have multiple columns with the same name.
You can disable this feature with @option{--catcolumnrawname}.
To have full control over the concatenated column names, you can use the @option{--colmetadata} option described below.

For example, let's assume you have two catalogs of the same objects (with the same number of rows) in different filters: @file{f160w-cat.fits} has a @code{MAGNITUDE} column with the magnitude of each object in the @code{F160W} filter, and @file{f105w-cat.fits} has a similar @code{MAGNITUDE} column, but for the @code{F105W} filter.
You can use column concatenation like below to import the @code{MAGNITUDE} column from the @code{F105W} catalog into the @code{F160W} catalog, while giving each magnitude column a different name:

@example
asttable f160w-cat.fits --output=both.fits \
  --catcolumnfile=f105w-cat.fits --catcolumns=MAGNITUDE \
  --colmetadata=MAGNITUDE,MAG-F160W,log,"Magnitude in F160W" \
  --colmetadata=MAGNITUDE-1,MAG-F105W,log,"Magnitude in F105W"
@end example

@item -u STR/INT
@itemx --catcolumnhdu=STR/INT
The HDU/extension of the FITS file(s) that should be concatenated, or appended, with @option{--catcolumnfile}.
If @option{--catcolumnfile} is called more than once with more than one FITS file, it's necessary to call this option more than once.
The HDUs will be loaded in the same order as the FITS files given to @option{--catcolumnfile}.
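
For example, a sketch like the one below (with hypothetical file names) appends columns from two FITS tables, each read from its own HDU:

@example
$ asttable main.fits --catcolumnfile=a.fits --catcolumnhdu=1 \
           --catcolumnfile=b.fits --catcolumnhdu=2
@end example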

@item -C STR/INT
@itemx --catcolumns=STR/INT
The column(s) in the file(s) given to @option{--catcolumnfile} to append.
When this option is not given, all the columns will be concatenated.
See @option{--catcolumnfile} for more.

@item --catcolumnrawname
Don't modify the names of the concatenated (appended) columns, see description in @option{--catcolumnfile}.

@item -O
@itemx --colinfoinstdout
@cindex Standard output
Add column metadata when the output is printed on the standard output.
Usually the standard output is used for a fast visual check, or to pipe into another program for further processing.
So by default, metadata aren't included.

@item -r STR,FLT:FLT
@itemx --range=STR,FLT:FLT
Only output rows that have a value within the given range in the @code{STR} column (which can be a name or counter).
Note that the range is only inclusive on the lower limit.
For example with @option{--range=sn,5:20}, the output's columns will only contain rows that have a value in the @code{sn} column (not case-sensitive) that is greater than or equal to 5, and less than 20.

This option can be called multiple times (different ranges for different columns) in one run of the Table program.
This is very useful for selecting the final rows from multiple criteria/columns.

The chosen column doesn't have to be in the output columns.
This is good when you just want to select using one column's values, but don't need that column anymore afterwards.

For one example of using this option, see the example under
@option{--sigclip-median} in @ref{Invoking aststatistics}.
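
For another example, the command below (assuming hypothetical @code{MAG} and @code{SN} columns) will only output rows that satisfy both ranges:

@example
$ asttable cat.fits -cRA,DEC --range=MAG,18:22 --range=SN,5:20
@end example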

@item --inpolygon=STR1,STR2
Only return rows where the given coordinates are inside the polygon specified by the @option{--polygon} option.
The coordinate columns are the given @code{STR1} and @code{STR2} columns; they can be a column name or counter (see @ref{Selecting table columns}).

Note that the chosen columns don't have to be in the output columns (which are specified by the @option{--column} option).
For example, if we want to select rows within the polygon specified in @ref{Dataset inspection and cropping}, this option can be used like below (you can remove the double quotations and write it all in one line if you remove the white-spaces around the colons separating the polygon vertices):

@example
asttable table.fits --inpolygon=RA,DEC      \
         --polygon="53.187414,-27.779152    \
                    : 53.159507,-27.759633  \
                    : 53.134517,-27.787144  \
                    : 53.161906,-27.807208"
@end example

@cartouche
@noindent
@strong{Flat/Euclidean space:} The @option{--inpolygon} option assumes a flat/Euclidean space, so it is only correct for RA and Dec when the polygon size is very small, like the example above.
If your polygon is a degree or larger, it may not return correct results.
We are working on other options for this.
@end cartouche

@item --outpolygon=STR1,STR2
Only return rows where the given coordinates are outside the polygon specified by the @option{--polygon} option.
This option is very similar to the @option{--inpolygon} option, so see the description there for more.

@item --polygon=FLT:FLT,...
The polygon to use for the @option{--inpolygon} and @option{--outpolygon} options.
The values to this option are parsed in the same way as in the Crop program; see its description there for more: @ref{Crop options}.

@item -e STR,INT/FLT,...
@itemx --equal=STR,INT/FLT,...
Only output rows that are equal to the given number(s) in the given column.
The first argument is the column identifier (name or number, see @ref{Selecting table columns}), after that you can specify any number of values.
For example @option{--equal=ID,5,6,8} will only print the rows that have a value of 5, 6, or 8 in the @code{ID} column.
This option can also be called multiple times, so @option{--equal=ID,4,5 --equal=ID,6,7} has the same effect as @option{--equal=ID,4,5,6,7}.

The @option{--equal} and @option{--notequal} options also work when the given column has a string type.
In this case the given value to the option will also be parsed as a string, not as a number.
When dealing with string columns, be careful with trailing white space characters (the actual value may be adjusted to the right, left, or center of the column's width).
If you need to account for such white spaces, you can use shell quoting.
For example @code{--equal=NAME,"  myname "}.

@cartouche
@noindent
@strong{Equality and floating point numbers:} Floating point numbers are only approximate values (see @ref{Numeric data types}).
In this context, their equality depends on how the input table was originally stored (as a plain text table or as an ASCII/binary FITS table).
If you want to select floating point numbers, it is strongly recommended to use the @option{--range} option and set a very small interval around your desired number, don't use @option{--equal} or @option{--notequal}.
@end cartouche

@item -n STR,INT/FLT,...
@itemx --notequal=STR,INT/FLT,...
Only output rows that are @emph{not} equal to the given number(s) in the given column.
The first argument is the column identifier (name or number, see @ref{Selecting table columns}), after that you can specify any number of values.
For example @option{--notequal=ID,5,6,8} will only print the rows where the @code{ID} column doesn't have a value of 5, 6, or 8.
This option can also be called multiple times, so @option{--notequal=ID,4,5 --notequal=ID,6,7} has the same effect as @option{--notequal=ID,4,5,6,7}.

Be very careful if you want to use the non-equality with floating point numbers, see the special note under @option{--equal} for more.
This option also works when the given column has a string type, see the description under @option{--equal} (above) for more.

@item -s STR
@itemx --sort=STR
Sort the output rows based on the values in the @code{STR} column (can be a column name or number).
By default the sort is done in ascending/increasing order, to sort in a descending order, use @option{--descending}.

The chosen column doesn't have to be in the output columns.
This is good when you just want to sort using one column's values, but don't need that column anymore afterwards.

@item -d
@itemx --descending
When called with @option{--sort}, rows will be sorted in descending order.

@item -H INT
@itemx --head=INT
Only print the given number of rows from the @emph{top} of the final table.
Note that this option only affects the @emph{output} table.
For example if you use @option{--sort} or @option{--range}, the printed rows are the first ones @emph{after} applying the sort, or the range selection, to the full input.

@cindex GNU Coreutils
If the given value to @option{--head} is 0, the output columns won't have any rows, and if it's larger than the number of rows in the input table, all the rows are printed (this option is effectively ignored).
This behavior is taken from the @command{head} program in GNU Coreutils.
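
For example, assuming a hypothetical @code{MAG} column (where lower values are brighter), the command below will print the 10 brightest rows:

@example
$ asttable cat.fits --sort=MAG --head=10
@end example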

@item -t INT
@itemx --tail=INT
Only print the given number of rows from the @emph{bottom} of the final table.
See @option{--head} for more.

@item -b STR[,STR[,STR]]
@itemx --noblank=STR[,STR[,STR]]
Remove all rows in the given @emph{output} columns that have a blank value.
Like above, the columns can be specified by their name or number (counting from 1).
@option{--noblank} is applied just before writing the final table (after @option{--colmetadata} has finished).
So in case you changed the column metadata, or added new columns, you can use the new names, or the newly defined column numbers.

For example if @file{table.fits} has blank values (NaN in floating point types) in the @code{magnitude} and @code{sn} columns, with @code{--noblank=magnitude,sn}, the output will not contain any rows with blank values in these columns.
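
In other words, for that example, the full command would be the following:

@example
$ asttable table.fits --noblank=magnitude,sn
@end example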

@item -m STR/INT,STR[,STR[,STR]]
@itemx --colmetadata=STR/INT,STR[,STR[,STR]]
Update the specified column metadata in the output table.
This option is applied after all other column-related operations are complete, for example column arithmetic, or column concatenation.
The first value (before the first comma) given to this option is the column's identifier.
It can either be a counter (positive integer, counting from 1), or a name (the column's name in the output if this option wasn't called).

After the to-be-updated column is identified, at least one other string should be given, with a maximum of three strings.
The first string after the original name will be the selected column's new name.
The next (optional) string will be the selected column's unit and the third (optional) will be its comments.
If the two optional strings aren't given, the original column's units or comments will remain unchanged.
Some examples of this option are available in the tutorials, in particular @ref{Working with catalogs estimating colors}.
Here are some more specific examples:

@table @option

@item --colmetadata=MAGNITUDE,MAG_F160W
This will convert the name of the original @code{MAGNITUDE} column to @code{MAG_F160W}, leaving the unit and comments unchanged.

@item --colmetadata=3,MAG_F160W,mag
This will convert the name of the third column of the final output to @code{MAG_F160W} and its units to @code{mag}, while leaving the comments untouched.

@item --colmetadata=MAGNITUDE,MAG_F160W,mag,"Magnitude in F160W filter"
This will convert the name of the original @code{MAGNITUDE} column to @code{MAG_F160W}, its units to @code{mag}, and its comments to @code{Magnitude in F160W filter}.
Note the double quotations around the comment string; they are necessary to preserve the white-space characters within the column comment from the command-line into the program (otherwise, upon reaching a white-space character, the shell will consider this option to be finished and cause unexpected behavior).
@end table

If your table is large and generated by a script, you can first do all your operations on your table's data and write it into a temporary file (maybe called @file{temp.fits}).
Then, look into that file's metadata (with @command{asttable temp.fits -i}) to see the exact column positions and possible names, then add the necessary calls to this option to your previous call to @command{asttable}, so it writes proper metadata in the same run (for example in a script or Makefile).
Recall that when a name is given, this option will update the metadata of the first column that matches, so if you have multiple columns with the same name, you can call this option multiple times with the same first argument to change them all to different names.
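
For example, a sketch of such a workflow (with hypothetical column names and metadata) could look like this:

@example
$ asttable table.fits -c'arith $2 $1 -' -otemp.fits
$ asttable temp.fits -i      # Inspect the generic metadata.
$ asttable table.fits -c'arith $2 $1 -' -ocat.fits \
     --colmetadata=ARITH_1,DIFF,counts,"Difference of columns"
@end example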

Finally, if you already have a FITS table by other means (for example by downloading) and you merely want to update the column metadata and leave the data intact, it is much more efficient to directly modify the respective FITS header keywords with @code{astfits}, using the keyword manipulation features described in @ref{Keyword manipulation}.
@option{--colmetadata} is mainly intended for scenarios where you want to edit the data (not just the metadata): it will always load the full/partial dataset into memory, then write out the resulting dataset with the updated/corrected metadata.
@end table




















@node Query,  , Table, Data containers
@section Query

@cindex IVOA
@cindex Query
@cindex TAP (Table Access Protocol)
@cindex ADQL (Astronomical Data Query Language)
@cindex Astronomical Data Query Language (ADQL)
There are many astronomical databases available for downloading astronomical data.
Most follow the International Virtual Observatory Alliance (IVOA, @url{https://ivoa.net}) standards (and in particular the Table Access Protocol, or TAP@footnote{@url{https://ivoa.net/documents/TAP}}).
With TAP, it is possible to submit your queries via a command-line downloader (for example @command{curl}) to only get specific tables, targets (rows in a table) or measurements (columns in a table): you don't have to download the full table (which can be very large in some cases)!
These customizations are done through the Astronomical Data Query Language (ADQL@footnote{@url{https://ivoa.net/documents/ADQL}}).

Therefore, if you are sufficiently familiar with TAP and ADQL, you can easily custom-download any part of an online dataset.
However, you also need to keep a record of the URLs of each database, and in many cases the commands will become long, hard to type, and error-prone on the command-line.
On the other hand, most astronomers don't know TAP or ADQL at all, and are forced to go to the database's web page which is slow (it needs to download so many images, and has too much annoying information), requires manual interaction (further making it slow and buggy), and can't be automated.

Gnuastro's Query program is designed to be the middle-man in this process: it provides a simple high-level interface to let you specify your constraints on what you want to download.
It then internally constructs the command to download the data based on your inputs and runs it to download your desired data.
Query also prints the full command before it executes it (if not called with @option{--quiet}).
Also, if you ask for a FITS output table, the full command is written into its 0-th extension along with the other input parameters to Query (all Gnuastro programs generally keep their input configuration parameters as FITS keywords in the zero-th extension of their output).
You can see it with Gnuastro's Fits program, like below:

@example
$ astfits query-output.fits -h0
@end example

With the full command used to download the dataset, you only need a minimal knowledge of ADQL to do lower-level customizations on your downloaded dataset.
You can simply copy that command and change the parts of the query string you want: ADQL is very powerful!
For example you can ask the server to do mathematical operations on the columns and apply selections after those operations, or combine/match multiple datasets, and so on.
We will try to add high-level interfaces for such capabilities, but generally, don't limit yourself to the high-level operations (that can't cover everything!).

@menu
* Available databases::         List of available databases to Query.
* Invoking astquery::           Inputs, outputs and configuration of Query.
@end menu

@node Available databases, Invoking astquery, Query, Query
@subsection Available databases

The current list of databases supported by Query is given at the end of this section.
To get the list of available datasets within each database, you can use the @option{--information} option.
For example with the command below you can get a list of the roughly 100 datasets that are available within the ESA Gaia server with their description:

@example
$ astquery gaia --information
@end example

@noindent
However, other databases like VizieR host many more datasets (tens of thousands!).
Therefore it is very inconvenient to get the @emph{full} information every time you want to find your dataset of interest (the full metadata file of VizieR is more than 20Mb).
In such cases, you can limit the downloaded and displayed information with the @option{--limitinfo} option.
For example with the first command below, you can get all datasets relating to MUSE (an instrument on the Very Large Telescope), and with the second, those that include Roland Bacon (Principal Investigator of MUSE) as an author (@code{Bacon, R.}).
Recall that @option{-i} is the short format of @option{--information}.

@example
$ astquery vizier -i --limitinfo=MUSE
$ astquery vizier -i --limitinfo="Bacon R."
@end example

Once you find the recognized name of your desired dataset, you can see the column information of that dataset by adding the dataset name.
For example, with the command below you can see the column metadata in the @code{J/A+A/608/A2/udf10} dataset (one of the datasets in the search above):

@example
$ astquery vizier --dataset=J/A+A/608/A2/udf10 -i
@end example

@cindex SDSS DR12
For very popular datasets of a database, Query provides an easier-to-remember short name that you can feed to @option{--dataset}.
This short name will map to the officially recognized name of the dataset on the server.
In this mode, Query will also set positional columns accordingly.
For example most VizieR datasets have an @code{RAJ2000} column (the RA in the J2000 equinox), so it is the default RA column name for coordinate search (using @option{--center} or @option{--overlapwith}).
However, some datasets don't have this column (for example SDSS DR12).
So when you use the short name and Query knows about this dataset, it will internally set the coordinate columns that SDSS DR12 has: @code{RA_ICRS} and @code{DE_ICRS}.
Recall that you can always change the coordinate columns with @option{--ccol}.

For example in the VizieR and Gaia databases, the recognized names for the Gaia Early Data Release 3 data are respectively @code{I/350/gaiaedr3} and @code{gaiaedr3.gaia_source}.
These technical names can be hard to remember.
Therefore Query provides @code{gaiaedr3} (for VizieR) and @code{edr3} (for ESA's Gaia) shortcuts which you can give to @option{--dataset} instead.
They will be directly mapped to the fully recognized name by Query.
In the list below that describes the available databases, the available short names are also listed.

@cartouche
@noindent
@strong{Not all datasets support TAP:} Large databases like VizieR have TAP access for all their datasets.
However, smaller databases haven't implemented TAP for all their tables.
Therefore some datasets that are searchable in their web interface may not be available for a TAP search.
To see the full list of TAP-ed datasets in a database, use the @option{--information} (or @option{-i}) option with the database name, like the command below.

@example
$ astquery astron -i
@end example

@noindent
If your desired dataset isn't in this list, but has web-access, contact the database maintainers and ask them to add TAP access for it.
After they do it, you should see the name added to the output list of the command above.
@end cartouche

The list of databases recognized by Query (and their names in Query) is described below.
Since Query is a new member of the Gnuastro family (first available in Gnuastro 0.14), this list will hopefully grow significantly in the next releases.
If you have any particular datasets in mind, please let us know by sending an email to @code{bug-gnuastro@@gnu.org}.
If the dataset supports IVOA's TAP (Table Access Protocol), it should be very easy to add.


@table @code

@item astron
@cindex ASTRON
@cindex Radio astronomy
The ASTRON Virtual Observatory service (@url{https://vo.astron.nl}) is a database focused on radio astronomy data and images, primarily those collected by ASTRON itself.
A query to @code{astron} is submitted to @code{https://vo.astron.nl/__system__/tap/run/tap/sync}.

Here is the list of short names for dataset(s) in ASTRON's VO service:
@itemize
@item
@code{tgssadr --> tgssadr.main}
@end itemize

@item gaia
@cindex Gaia catalog
@cindex Catalog, Gaia
@cindex Database, Gaia
The Gaia project (@url{https://www.cosmos.esa.int/web/gaia}) database is a large collection of star positions on the celestial sphere, as well as peculiar velocities, parallaxes and magnitudes in some bands, among many others.
Besides scientific studies (like studying resolved stellar populations in the Galaxy and its halo), Gaia is also invaluable for raw data calibrations, like astrometry.
A query to @code{gaia} is submitted to @code{https://gea.esac.esa.int/tap-server/tap/sync}.

Here is the list of short names for popular datasets within Gaia:
@itemize
@item
@code{edr3 --> gaiaedr3.gaia_source}
@item
@code{dr2 --> gaiadr2.gaia_source}
@item
@code{dr1 --> gaiadr1.gaia_source}
@item
@code{tycho2 --> public.tycho2}
@item
@code{hipparcos --> public.hipparcos}
@end itemize

@item ned
@cindex NASA/IPAC Extragalactic Database (NED)
@cindex NED (NASA/IPAC Extragalactic Database)
The NASA/IPAC Extragalactic Database (NED, @url{http://ned.ipac.caltech.edu}) is a fusion database, integrating the information about extra-galactic sources from many large sky surveys into a single catalog.
It covers the full spectrum, from Gamma rays to radio frequencies and is updated when new data arrives.
A query to @code{ned} is submitted to @code{https://ned.ipac.caltech.edu/tap/sync}.

Currently NED only has its main dataset for TAP access (shown below), more datasets will be added for TAP access in the future.
@itemize
@item
@code{objdir --> NEDTAP.objdir}
@end itemize

@item vizier
@cindex VizieR
@cindex CDS, VizieR
@cindex Catalog, Vizier
@cindex Database, VizieR
VizieR (@url{https://vizier.u-strasbg.fr}) is arguably the largest catalog database in astronomy, containing more than 20,500 catalogs as of mid-January 2021.
Almost all published catalogs in major projects, and even the tables in many papers are archived and accessible here.
For example VizieR also has a full copy of the Gaia database mentioned above, with some additional standardized columns (like RA and Dec in J2000).

The current implementation of @option{--limitinfo} only looks into the description of the datasets, but since VizieR is so large, there is still a lot of room for improvement.
Until then, if @option{--limitinfo} isn't sufficient, you can use VizieR's own web-based search for your desired dataset: @url{http://cdsarc.u-strasbg.fr/viz-bin/cat}.


Because VizieR curates such a diverse set of data from tens of thousands of projects and aims for interoperability between them, the column names in VizieR may not be identical to the column names in the surveys' own databases (Gaia in the example above).
A query to @code{vizier} is submitted to @code{http://tapvizier.u-strasbg.fr/TAPVizieR/tap/sync}.

@cindex 2MASS All-Sky Catalog
@cindex AKARI/FIS All-Sky Survey
@cindex AllWISE Data Release
@cindex AAVSO Photometric All Sky Survey, DR9
@cindex CatWISE 2020 catalog
@cindex Dark Energy Survey data release 1
@cindex GAIA Data Release (2 or 3)
@cindex All-sky Survey of GALEX DR5
@cindex Naval Observatory Merged Astrometric Dataset
@cindex Pan-STARRS Data Release 1
@cindex SDSS Photometric Catalogue, Release 12
@cindex Whole-Sky USNO-B1.0 Catalog
@cindex U.S. Naval Observatory CCD Astrograph Catalog
@cindex Band-merged unWISE Catalog
@cindex WISE All-Sky data Release
Here is the list of short names for popular datasets within VizieR (sorted alphabetically by their short name).
Please feel free to suggest other major catalogs (covering a wide area or commonly used in your field).
For details on each dataset, with the necessary citations and links to web pages, look them up by their VizieR names at @url{https://vizier.u-strasbg.fr/viz-bin/VizieR}.
@itemize
@item
@code{2mass --> II/246/out} (2MASS All-Sky Catalog)
@item
@code{akarifis --> II/298/fis} (AKARI/FIS All-Sky Survey)
@item
@code{allwise --> II/328/allwise} (AllWISE Data Release)
@item
@code{apass9 --> II/336/apass9} (AAVSO Photometric All Sky Survey, DR9)
@item
@code{catwise --> II/365/catwise} (CatWISE 2020 catalog)
@item
@code{des1 --> II/357/des_dr1} (Dark Energy Survey data release 1)
@item
@code{gaiadr2 --> I/345/gaia2} (GAIA Data Release 2)
@item
@code{gaiaedr3 --> I/350/gaiaedr3} (GAIA Early Data Release 3)
@item
@code{galex5 --> II/312/ais} (All-sky Survey of GALEX DR5)
@item
@code{nomad --> I/297/out} (Naval Observatory Merged Astrometric Dataset)
@item
@code{panstarrs1 --> II/349/ps1} (Pan-STARRS Data Release 1).
@item
@code{ppmxl --> I/317/sample} (Positions and proper motions on the ICRS)
@item
@code{sdss12 --> V/147/sdss12} (SDSS Photometric Catalogue, Release 12)
@item
@code{usnob1 --> I/284/out} (Whole-Sky USNO-B1.0 Catalog)
@item
@code{ucac5 --> I/340/ucac5} (5th U.S. Naval Obs. CCD Astrograph Catalog)
@item
@code{unwise --> II/363/unwise} (Band-merged unWISE Catalog)
@item
@code{wise --> II/311/wise} (WISE All-Sky data Release)
@end itemize
@end table




@node Invoking astquery,  , Available databases, Query
@subsection Invoking Query

Query provides a high-level interface to downloading subsets of data from databases.
The executable name is @file{astquery} with the following general template

@example
$ astquery DATABASE-NAME [OPTION...] ...
@end example

@noindent
One line examples:

@example
## Information about all datasets in ESA's GAIA database:
$ astquery gaia --information

## Only show catalogs in VizieR that have 'MUSE' in their
## description. The '-i' is short for '--information'.
$ astquery vizier -i --limitinfo=MUSE

## List of columns in 'J/A+A/608/A2/udf10' (one of the above).
$ astquery vizier --dataset=J/A+A/608/A2/udf10 -i

## ID, RA and Dec of all Gaia sources within an image.
$ astquery gaia --dataset=edr3 --overlapwith=image.fits \
           -csource_id,ra,dec

## RA, Dec and Spectroscopic redshifts of objects in SDSS DR12
## spectroscopic redshift that overlap with 'image.fits'.
$ astquery vizier --dataset=sdss12 --overlapwith=image.fits \
           -cRA_ICRS,DE_ICRS,zsp --range=zsp,1e-10,inf

## All columns of all entries in the Gaia eDR3 catalog (hosted at
## VizieR) within 1 arc-minute of the given coordinate.
$ astquery vizier --dataset=I/350/gaiaedr3 --output=my-gaia.fits \
           --center=113.8729761,31.9027152 --radius=1/60

## Similar to above, but only ID, RA and Dec columns for objects with
## magnitude range 10 to 15. In VizieR, this column is called 'Gmag'.
$ astquery vizier --dataset=I/350/gaiaedr3 --output=my-gaia.fits \
           --center=113.8729761,31.9027152 --radius=1/60 \
           --range=Gmag,10:15 -cEDR3Name,RAJ2000,DEJ2000
@end example

Query takes a single argument which is the name of the database.
For the full list of available databases and accessing them, see @ref{Available databases}.
There are two methods to query the databases, each is more fully discussed in its option's description below.
@itemize
@item
@strong{Low-level:}
With @option{--query} you can directly give a raw query statement that is recognized by the database.
This is very low level and will require a good knowledge of the database's query language, but of course, it is much more powerful.
If this option is given, the raw string is directly passed to the server and all other constraints/options (for Query's high-level interface) are ignored.
@item
@strong{High-level:}
With the high-level options (like @option{--column}, @option{--center}, @option{--radius}, @option{--range} and other constraining options below), the low-level query will be constructed automatically for the particular database.
This method is only limited to the generic capabilities that Query provides for all servers.
So @option{--query} is more powerful, however, in this mode, you don't need any knowledge of the database's query language.
You can see the internally generated query on the terminal (if @option{--quiet} is not used) or in the 0-th extension of the output (if it's a FITS file).
This full command contains the internally generated query.
@end itemize

The name of the downloaded output file can be set with @option{--output}.
The requested output format can have any of the @ref{Recognized table formats} (currently @file{.txt} or @file{.fits}).
Like all Gnuastro programs, if the output is a FITS file, the zero-th/first HDU of the output will contain all the command-line options given to Query as well as the full command used to access the server.
When @option{--output} is not set, the output name will be in the format of @file{NAME-STRING.fits}, where @file{NAME} is the name of the database and @file{STRING} is a randomly selected 6-character set of numbers and alphabetic characters.
With this feature, a second run of @command{astquery} that isn't called with @option{--output} will not over-write an already downloaded one.
Generally, when calling Query more than once, it is recommended to set an output name for each call based on your project's context.

The outputs of Query will have a common output format, irrespective of the used database.
To achieve this, Query will ask the databases to provide a FITS table output (for larger tables, FITS can consume much less download volume).
After downloading is complete, the raw downloaded file will be read into memory once by Query, and written into the file given to @option{--output}.
The raw downloaded file will be deleted by default, but can be preserved with the @option{--keeprawdownload} option.
This strategy avoids unnecessary surprises depending on the database.
For example some databases can return a compressed FITS table, even though we ask for FITS.
But with the strategy above, the final output will always be an uncompressed FITS file.
The metadata that is added by Query (including the full download command) is also very useful for future usage of the downloaded data.
Unfortunately many databases don't write the input queries into their generated tables.

@table @option

@item --dry-run
Only print the final download command to contact the server, don't actually run it.
This option is good when you want to check the finally constructed query or download options given to the download program.
You may also want to use the constructed command as a base to do further customizations on it and run it yourself.
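
For example, to inspect the query that would be constructed for one of the one-line examples above, without contacting the server:

@example
$ astquery gaia --dataset=edr3 --center=113.8729761,31.9027152 \
           --radius=1/60 -csource_id,ra,dec --dry-run
@end example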

@item -k
@itemx --keeprawdownload
Don't delete the raw downloaded file from the database.
The name of the raw download will have the format @file{OUTPUT-raw-download.fits}, where @file{OUTPUT} is the base-name of the final output file (without a suffix).

@item -i
@itemx --information
Print the information of all datasets (tables) within a database, or of all columns within a dataset.
When @option{--dataset} is specified, the latter mode (all column information) is downloaded and printed; when it isn't defined, all dataset information (within the database) is printed.

Some databases (like VizieR) contain tens of thousands of datasets, so you can limit the downloaded and printed information for available databases with the @option{--limitinfo} option (described below).
Dataset descriptions are often large and contain a lot of text (unlike column descriptions).
Therefore when printing the information of all datasets within a database, the information (for example the dataset name) will be printed on separate lines before the description.
However, when printing column information, the output has the same format as a similar option in Table (see @ref{Invoking asttable}).

Important note to consider: the printed order of the datasets or columns is just for displaying in the printed output.
You cannot ask for datasets or columns based on the printed order; you need to use dataset or column names.

@item -L STR
@itemx --limitinfo=STR
Limit the information that is downloaded and displayed (with @option{--information}) to those that have the string given to this option in their description.
Note that @emph{this is case-sensitive}.
This option is only relevant when @option{--information} is also called.

Databases may have thousands (or tens of thousands) of datasets.
Therefore just the metadata (information) to show with @option{--information} can be tens of megabytes (for example the full VizieR metadata file is about 23Mb as of January 2021).
Once downloaded, it can also be hard to parse manually.
With @option{--limitinfo}, only the metadata of datasets that contain this string @emph{in their description} will be downloaded and displayed, greatly improving the speed of finding your desired dataset.

@item -Q "STR"
@itemx --query="STR"
Directly specify the query to be passed onto the database.
The queries will generally contain space and other meta-characters, so we recommend placing the query within quotations.
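
For example, the minimal sketch below asks the Gaia server for the first 5 rows of three columns in the @code{gaiaedr3.gaia_source} dataset (assuming the server accepts this standard ADQL statement):

@example
$ astquery gaia --query="SELECT TOP 5 source_id, ra, dec \
                         FROM gaiaedr3.gaia_source"
@end example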

@item -s STR
@itemx --dataset=STR
The dataset to query within the database (not compatible with @option{--query}).
This option is mandatory when neither @option{--query} nor @option{--information} are provided.
You can see the list of available datasets within a database using @option{--information} (possibly supplemented by @option{--limitinfo}).
The output of @option{--information} will contain the recognized name of the datasets within that database.
You can pass the recognized name directly to this option.
For more on finding and using your desired database, see @ref{Available databases}.

@item -c STR
@itemx --column=STR[,STR[,...]]
The column name(s) to retrieve from the dataset in the given order (not compatible with @option{--query}).
If not given, all the dataset's columns for the selected rows will be queried (which can be large!).
This option can take multiple values in one instance (for example @option{--column=ra,dec,mag}), or in multiple instances (for example @option{-cra -cdec -cmag}), or mixed (for example @option{-cra,dec -cmag}).

In case you don't know the full list of the dataset's column names a priori, and you don't want to download all the columns (which can greatly slow the download), you can use the @option{--information} option combined with the @option{--dataset} option, see @ref{Available databases}.

@item -H INT
@itemx --head=INT
Only ask for the first @code{INT} rows of the finally selected columns, not all the rows.
This can be good when your search can result in a large dataset, but before downloading the full volume, you want to see the top rows and get a feeling of what the whole dataset looks like.

@item -v STR
@itemx --overlapwith=STR
File name of a FITS file containing an image (in the HDU given by @option{--hdu}) to use for identifying the region to query in the given database and dataset.
Based on the image's WCS and pixel size, the sky coverage of the image is estimated and the values for @option{--center} and @option{--width} are calculated internally.
Hence this option cannot be used with @option{--center}, @option{--width} or @option{--radius}.
Also, since it internally generates the query, it can't be used with @option{--query}.

Note that if the image has WCS distortions and the reference point for the WCS is not within the image, the WCS will not be well-defined.
Therefore the resulting catalog may not overlap with the image, or may correspond to a larger/smaller area of the sky.

@item -C FLT,FLT
@itemx --center=FLT,FLT
The spatial center position (mostly RA and Dec) to use for the automatically generated query (not compatible with @option{--query}).
The given values will be compared to two columns in the dataset, and rows within a certain region around this center position will be requested and downloaded.
Pre-defined RA and Dec column names are defined in Query for every database; however, you can use @option{--ccol} to select other columns to use instead.
The region can either be a circle around the point (configured with @option{--radius}) or a box/rectangle around the point (configured with @option{--width}).

@item --ccol=STR,STR
The name of the coordinate-columns in the dataset to compare with the values given to @option{--center}.
Query will use its internal defaults for each dataset (for example @code{RAJ2000} and @code{DEJ2000} for VizieR data).
But each dataset is treated separately, and it isn't guaranteed that these columns exist in all datasets.
Also, more than one coordinate system/epoch may be present in a dataset, and you can use this option to construct your spatial constraint based on the other coordinate systems/epochs.
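
For example, based on the SDSS DR12 dataset mentioned in @ref{Available databases}, the coordinate columns could be set explicitly like this (a sketch; the column names are taken from that dataset):

@example
$ astquery vizier --dataset=sdss12 --ccol=RA_ICRS,DE_ICRS \
           --center=113.8729761,31.9027152 --radius=1/60
@end example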

@item -r FLT
@itemx --radius=FLT
The radius about the requested center to use for the automatically generated query (not compatible with @option{--query}).
The radius is in units of degrees, but you can use simple division with this option directly on the command-line.
For example if you want a radius of 20 arc-minutes or 20 arc-seconds, you can use @option{--radius=20/60} or @option{--radius=20/3600} respectively (which is much more human-friendly than @code{0.3333} or @code{0.005556}).

@item -w FLT[,FLT]
@itemx --width=FLT[,FLT]
The square (or rectangle) side length (width) about the requested center to use for the automatically generated query (not compatible with @option{--query}).
If only one value is given to @option{--width}, the region will be a square, but if two values are given, the width of the query box along each dimension will be different.
The value(s) is (are) in the same units as the coordinate column (see @option{--ccol}, usually RA and Dec which are degrees).
You can use simple division for each value directly on the command-line if you want relatively small (and more human-friendly) sizes.
For example if you want your box to be 1 arc-minutes along the RA and 2 arc-minutes along Dec, you can use @option{--width=1/60,2/60}.

@item -g STR,FLT,FLT
@itemx --range=STR,FLT,FLT
The column name and numerical range (inclusive) of acceptable values in that column (not compatible with @option{--query}).
This option can be called multiple times for applying range limits on many columns in one call (thus greatly reducing the download size).
For example when used on the ESA Gaia database, you can use @option{--range=phot_g_mean_mag,10:15} to only get rows that have a value between 10 and 15 (inclusive on both sides) in the @code{phot_g_mean_mag} column.

If you want all rows larger, or smaller, than a certain number, you can use @code{inf}, or @code{-inf} as the first or second values respectively.
For example, if you want objects with SDSS spectroscopic redshifts larger than 2 (from the VizieR @code{sdss12} dataset), you can use @option{--range=zsp,2,inf}.

If you don't want the interval to be inclusive on both sides, you can run @command{astquery} once and get the command that it executes.
Then you can edit it to be non-inclusive on your desired side.

@item -b STR[,STR]
@itemx --noblank=STR[,STR]
Only ask for rows that don't have a blank value in the @code{STR} column.
This option can be called many times, and each call can have multiple column names (separated by a comma: @key{,}).
For example if you want the retrieved rows to not have a blank value in columns @code{A}, @code{B}, @code{C} and @code{D}, you can use @command{--noblank=A -bB,C,D}.

@item --sort=STR[,STR]
Ask for the server to sort the downloaded data based on the given columns.
For example let's assume your desired catalog has column @code{Z} for redshift and column @code{MAG_R} for magnitude in the R band.
When you call @option{--sort=Z,MAG_R}, the rows will primarily be sorted by redshift, but if two objects have the same redshift, they will be sorted by magnitude.
You can add as many columns as you like for higher-level sorting.
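
For example, building on the SDSS DR12 one-line example near the start of this section, the rows can be sorted by the spectroscopic redshift on the server before download:

@example
$ astquery vizier --dataset=sdss12 --overlapwith=image.fits \
           -cRA_ICRS,DE_ICRS,zsp --sort=zsp
@end example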
@end table




















@node Data manipulation, Data analysis, Data containers, Top
@chapter Data manipulation

Images are one of the major formats of data that are used in astronomy.
This chapter describes the GNU Astronomy Utilities programs that are provided for manipulating them, for example cropping out a part of a larger image, convolving an image with a given kernel, or applying a geometric transformation to it.

@menu
* Crop::                        Crop region(s) from a dataset.
* Arithmetic::                  Arithmetic on input data.
* Convolve::                    Convolve an image with a kernel.
* Warp::                        Warp/Transform an image to a different grid.
@end menu

@node Crop, Arithmetic, Data manipulation, Data manipulation
@section Crop

@cindex Section of an image
@cindex Crop part of image
@cindex Postage stamp images
@cindex Large astronomical images
@pindex @r{Crop (}astcrop@r{)}
Astronomical images are often very large, filled with thousands of galaxies.
It often happens that you only want a section of the image, or you have a catalog of sources and you want to visually analyze them in small postage stamps.
Crop is made to do all these things.
When more than one crop is required, Crop will divide the crops between multiple threads to significantly reduce the run time.

@cindex Mosaicing
@cindex Image tiles
@cindex Image mosaic
@cindex COSMOS survey
@cindex Imaging surveys
@cindex Hubble Space Telescope (HST)
Astronomical surveys are usually extremely large.
So large in fact, that the whole survey will not fit into a reasonably sized file.
Because of this, surveys usually cut the final image into separate tiles and store each tile in a file.
For example the COSMOS survey's Hubble Space Telescope ACS F814W image consists of 81 separate FITS images, each with a volume of 1.7 Gigabytes.

@cindex Stitch multiple images
Even though the tile sizes are chosen to be large enough that too many galaxies/targets don't fall on the edges of the tiles, inevitably some do.
So when you simply crop the image of such targets from one tile, you will miss a large area of the surrounding sky (which is essential in estimating the noise).
Therefore in its WCS mode, Crop will stitch parts of the tiles that are relevant for a target (with the given width) from all the input images that cover that region into the output.
Of course, the tiles have to be present in the list of input files.

Besides cropping postage stamps around certain coordinates, Crop can also crop arbitrary polygons from an image (or a set of tiles by stitching the relevant parts of different tiles within the polygon), see @option{--polygon} in @ref{Invoking astcrop}.
Alternatively, it can crop out rectangular regions through the @option{--section} option from one image, see @ref{Crop section syntax}.

@menu
* Crop modes::                  Basic modes to define crop region.
* Crop section syntax::         How to define a section to crop.
* Blank pixels::                Pixels with no value.
* Invoking astcrop::            Calling Crop on the command-line
@end menu

@node Crop modes, Crop section syntax, Crop, Crop
@subsection Crop modes
In order to be comprehensive, intuitive, and easy to use, there are two ways to define the crop:

@enumerate
@item
From its center and side length.
For example if you already know the coordinates of an object and want to inspect it in an image or to generate postage stamps of a catalog containing many such coordinates.

@item
From the vertices of the crop region.
This can be useful for larger crops over many targets, for example to crop
out a uniformly deep, or contiguous, region of a large survey.
@end enumerate

Irrespective of how the crop region is defined, the coordinates to define the crop can be in Image (pixel) or World Coordinate System (WCS) standards.
All coordinates are read as floating point numbers (not integers, except for the @option{--section} option, see below).
By setting the @emph{mode} in Crop, you define the standard in which the given coordinates must be interpreted.
Here, the different ways to specify the crop region are discussed within each standard.
For the full list of options, please see @ref{Invoking astcrop}.

When the crop is defined by its center, the respective (integer) central pixel position will be found internally according to the FITS standard.
To have this pixel positioned in the center of the cropped region, the final cropped region will have an odd number of pixels (even if you give an even number to @option{--width} in image mode).

Furthermore, when the crop is defined by its center, Crop allows you to only keep crops that don't have any blank pixels in the vicinity of their center (your primary target).
This can be very convenient when your input catalog/coordinates originated from another survey/filter which is not fully covered by your input image.
To learn more about this feature, please see the description of the @option{--checkcenter} option in @ref{Invoking astcrop}.

@table @asis
@item Image coordinates
In image mode (@option{--mode=img}), Crop interprets the pixel coordinates and widths in units of the input data-elements (for example pixels in an image, not world coordinates).
In image mode, only one image may be input.
The output crop(s) can be defined in multiple ways as listed below.

@table @asis
@item Center of multiple crops (in a catalog)
The centers of (possibly multiple) crops are read from a text file.
In this mode, the columns identified with the @option{--coordcol} option are interpreted as the center of a crop with a width of @option{--width} pixels along each dimension.
The columns can contain any floating point value.
The value given to the @option{--output} option is seen as a directory which will host the (possibly multiple) separate crop files, see @ref{Crop output} for more.
For a tutorial using this feature, please see @ref{Finding reddest clumps and visual inspection}.

@item Center of a single crop (on the command-line)
The center of the crop is given on the command-line with the @option{--center} option.
The crop width is specified by the @option{--width} option along each dimension.
The given coordinates and width can be any floating point number.
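For example, the command below is a minimal sketch of this mode (the file name and numbers are only illustrative): it crops a 201 by 201 pixel box centered on pixel coordinate (123.4, 567.8):

@example
$ astcrop --mode=img --center=123.4,567.8 --width=201 \
          image.fits
@end example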

@item Vertices of a single crop
In Image mode there are two options to define the vertices of a region to crop: @option{--section} and @option{--polygon}.
The former is lower-level (it doesn't accept floating point vertices, and only a rectangular region can be defined); it is also only available in Image mode.
Please see @ref{Crop section syntax} for a full description of this method.

The latter option (@option{--polygon}) is a higher-level method to define any polygon (with any number of vertices) with floating point values.
Please see the description of this option in @ref{Invoking astcrop} for its syntax.
@end table

@item WCS coordinates
In WCS mode (@option{--mode=wcs}), the coordinates and widths are interpreted using the World Coordinate System (WCS, that must accompany the dataset), not pixel coordinates.
In WCS mode, Crop accepts multiple datasets as input.
When the cropped region (defined by its center or vertices) overlaps with multiple of the input images/tiles, the overlapping regions will be taken from the respective input (they will be stitched when necessary for each output crop).

In this mode, the input images do not necessarily have to be the same size; they just need to have the same orientation and pixel resolution.
Currently only orientation along the celestial coordinates is accepted; if your input has a different orientation, you can use Warp's @option{--align} option to align the image before cropping it (see @ref{Warp}).

Each individual input image/tile can even be smaller than the final crop.
In any case, any part of any of the input images which overlaps with the desired region will be used in the crop.
Note that if there is an overlap in the input images/tiles, the pixels from the last input image read are going to be used for the overlap.
Crop will not change pixel values, so it assumes your overlapping tiles were cut out from the same original image.
There are multiple ways to define your cropped region as listed below.

@table @asis

@item Center of multiple crops (in a catalog)
Similar to catalog inputs in Image mode (above), except that the values along each dimension are assumed to have the same units as the dataset's WCS information.
For example, the central RA and Dec value for each crop will be read from the first and second calls to the @option{--coordcol} option.
The width of the cropped box (in units of the WCS, or degrees in RA and Dec mode) must be specified with the @option{--width} option.

@item Center of a single crop (on the command-line)
You can specify the center of only one crop box with the @option{--center} option.
If it exists in the input images, it will be cropped similarly to the catalog mode (see above, also for @code{--width}).
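For example, the sketch below (with arbitrary coordinates and illustrative file names) crops a 2 arc-minute wide box around the given RA and Dec from whichever of the given tiles cover it:

@example
$ astcrop --mode=wcs --center=53.16,-27.78 --width=2/60 \
          tile-*.fits
@end example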

@item Vertices of a single crop
The @option{--polygon} option is a high-level method to define any convex polygon (with any number of vertices).
Please see the description of this option in @ref{Invoking astcrop} for its syntax.
@end table

@cartouche
@noindent
@strong{CAUTION:} In WCS mode, the image has to be aligned with the celestial coordinates, such that the first FITS axis is parallel (opposite direction) to the Right Ascension (RA) and the second FITS axis is parallel to the declination.
If these conditions aren't met for an image, Crop will warn you and abort.
You can use Warp's @option{--align} option to align the input image with these coordinates, see @ref{Warp}.
@end cartouche

@end table

As a summary, if you don't specify a catalog, you have to define the cropped region manually on the command-line.
In any case the mode is mandatory for Crop to be able to interpret the values given as coordinates or widths.


@node Crop section syntax, Blank pixels, Crop modes, Crop
@subsection Crop section syntax

@cindex Crop a given section of image
When in image mode, one of the methods to crop only one rectangular section from the input image is to use the @option{--section} option.
Crop has a powerful syntax to read the box parameters from a string of characters.
If you leave certain parts of the string to be empty, Crop can fill them for you based on the input image sizes.

@cindex Define section to crop
To define a box, you need the coordinates of two points: the first (@code{X1}, @code{Y1}) and the last (@code{X2}, @code{Y2}) pixel positions in the image, or four integer numbers in total.
The four coordinates can be specified with one string in this format: `@command{X1:X2,Y1:Y2}'.
This string is given to the @option{--section} option.
Therefore, the pixels along the first axis that are @mymath{\geq}@command{X1} and @mymath{\leq}@command{X2} will be included in the cropped image.
The same goes for the second axis.
Note that each different term will be read as an integer, not a float.

The reason it only accepts integers is that @option{--section} is a low-level option (which is also very fast!).
For a higher-level way to specify a region (any polygon, not just a box), please see the @option{--polygon} option in @ref{Crop options}.
Also note that in the FITS standard, pixel indexes along each axis start from unity (1), not zero (0).

@cindex Crop section format
You can omit any of the values and they will be filled automatically.
The left hand side of the colon (@command{:}) will be filled with @command{1}, and the right side with the image size.
So, @command{2:,:} will include the full range of pixels along the second axis, and only those with a first axis index larger than or equal to @command{2}.
If the colon is omitted for a dimension, then the full range is automatically used.
So the same string is also equal to @command{2:,} or @command{2:} or even @command{2}.
If you want such a case for the second axis, you should set it to: @command{,2}.

If you specify a negative value, it will be interpreted as an index before the start of the image's pixels: such positions are outside the image, along the bottom or left sides when viewed in SAO ds9.
In case you want to count from the top or right sides of the image, you can use an asterisk (@option{*}).
When confronted with a @option{*}, Crop will replace it with the maximum length of the image in that dimension.
So @command{*-10:*+10,*-20:*+20} means that the crop box will be @math{21\times41} pixels in size (recall that the ranges are inclusive) and will only include the top corner of the input image, with roughly 3/4 of the crop being covered by blank pixels, see @ref{Blank pixels}.
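For example, on a hypothetical @file{image.fits}, the crop described above can be done with the command below (here the quotes aren't strictly necessary, but they protect the string if you later add spaces):

@example
$ astcrop --section="*-10:*+10,*-20:*+20" image.fits
@end example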

If you feel more comfortable with space characters between the values, you can use as many space characters as you wish, just be careful to put your value in double quotes, for example @command{--section="5:200, 123:854"}.
If you forget the quotes, anything after the first space will not be seen by @option{--section} and you will most probably get an error because the rest of your string will be read as a filename (which probably doesn't exist).
See @ref{Command-line} for a description of how the command-line works.


@node Blank pixels, Invoking astcrop, Crop section syntax, Crop
@subsection Blank pixels

@cindex Blank pixel
The cropped box can potentially include pixels that are beyond the image range, for example when a target in the input catalog is very near the edge of the input image.
The parts of the cropped image that were not in the input image will be filled with one of the following two values, depending on the data type of the image.
In both cases, SAO ds9 will not color code those pixels.
@itemize
@item
If the data type of the image is a floating point type (float or double), IEEE NaN (Not a number) will be used.
@item
For integer types, pixels out of the image will be filled with the value of the @command{BLANK} keyword in the cropped image header.
The value assigned to it is the lowest value possible for that type, so you will probably never need it anyway.
Only for the unsigned character type (@command{BITPIX=8} in the FITS header) is the maximum value used instead: since the type is unsigned, its smallest value is zero, which is often meaningful.
@end itemize
You can ask for such blank regions to not be included in the output crop image using the @option{--noblank} option.
In such cases, there is no guarantee that the image size of your outputs is what you asked for.

Unfortunately, some survey images do not use the @command{BLANK} FITS keyword.
Instead, they just give all pixels outside of the survey area a value of zero.
So by default, when dealing with float or double image types, any values that are 0.0 are also regarded as blank regions.
This can be turned off with the @option{--zeroisnotblank} option.


@node Invoking astcrop,  , Blank pixels, Crop
@subsection Invoking Crop

Crop will crop a region from an image.
If in WCS mode, it will also stitch parts from separate images in the input files.
The executable name is @file{astcrop} with the following general template

@example
$ astcrop [OPTION...] [ASCIIcatalog] ASTRdata ...
@end example


@noindent
One line examples:

@example
## Crop all objects in cat.txt from image.fits:
$ astcrop --catalog=cat.txt image.fits

## Crop all objects in catalog (with RA,DEC) from all the files
## ending in `_drz.fits' in `/mnt/data/COSMOS/':
$ astcrop --mode=wcs --catalog=cat.txt /mnt/data/COSMOS/*_drz.fits

## Crop the outer 10 border pixels of the input image:
$ astcrop --section=10:*-10,10:*-10 --hdu=2 image.fits

## Crop region around RA and Dec of (189.16704, 62.218203):
$ astcrop --mode=wcs --center=189.16704,62.218203 goodsnorth.fits

## Crop region around pixel coordinate (568.342, 2091.719):
$ astcrop --mode=img --center=568.342,2091.719 --width=201 image.fits
@end example

@noindent
Crop has one mandatory argument which is the input image name(s), shown above with @file{ASTRdata ...}.
You can use shell expansions (for example @command{*}) for this if you have lots of images in WCS mode.
If the crop box centers are in a catalog, you can use the @option{--catalog} option.
In other cases, the parameters of the single cropped output must be given with command-line options.
See @ref{Crop output} for how the output file name(s) can be specified.
For the full list of general options to all Gnuastro programs (including Crop), please see @ref{Common options}.

Floating point numbers can be used to specify the crop region (except the @option{--section} option, see @ref{Crop section syntax}).
In such cases, the floating point values will be used to find the desired integer pixel indices based on the FITS standard.
Hence, Crop ultimately doesn't do any sub-pixel cropping (in other words, it doesn't change pixel values).
If you need such crops, you can use @ref{Warp} to first warp the image to a new pixel grid, then crop from that.
For example, let's assume you want a crop from pixels 12.982 to 80.982 along the first dimension.
You should first translate the image by @mymath{-0.482} (note that the edge of a pixel is at integer multiples of @mymath{0.5}).
So you should run Warp with @option{--translate=-0.482,0} and then crop the warped image with @option{--section=13:81}.
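These two steps might look like the sketch below (the file names are only illustrative):

@example
$ astwarp image.fits --translate=-0.482,0 \
          --output=translated.fits
$ astcrop translated.fits --mode=img --section=13:81
@end example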

There are two ways to define the cropped region: with its center or its vertices.
See @ref{Crop modes} for a full description.
In the former case, Crop can check if the central region of the cropped image is indeed filled with data or is blank (see @ref{Blank pixels}), and not produce any output when the center is blank, see the description under @option{--checkcenter} for more.

@cindex Asynchronous thread allocation
When in catalog mode, Crop will run in parallel unless you set @option{--numthreads=1}, see @ref{Multi-threaded operations}.
Note that when multiple outputs are created with threads, the outputs will not be created in the same order.
This is because the threads are asynchronous and thus not started in order.
This has no effect on each output, see @ref{Finding reddest clumps and visual inspection} for a tutorial on effectively using this feature.

@menu
* Crop options::                A list of all the options with explanation.
* Crop output::                 The outputs of Crop.
@end menu

@node Crop options, Crop output, Invoking astcrop, Invoking astcrop
@subsubsection Crop options

The options can be classified into the following contexts: Input, Output and operating mode options.
Options that are common to all Gnuastro programs are listed in @ref{Common options} and will not be repeated here.

When you are specifying the crop vertices yourself (through @option{--section}, or @option{--polygon}) on relatively small regions (depending on the resolution of your images), the outputs from image and WCS mode can be approximately equivalent.
However, as the crop sizes get large, the curved nature of the WCS coordinates has to be considered.
For example, when using @option{--section}, the right ascension of the bottom left and top left corners will not be equal.
If you only want regions within a given right ascension, use @option{--polygon} in WCS mode.

@noindent
Input image parameters:
@table @option

@item --hstartwcs=INT
@cindex CANDELS survey
Specify the first keyword card (line number) to start finding the input image world coordinate system information.
Distortions were only recently included in WCSLIB (from version 5).
Therefore, until recently, different telescopes would apply their own specific sets of WCS keywords and put them into the image header along with those that WCSLIB does recognize.
Now that WCSLIB recognizes most of the standard distortion parameters, the old keywords can confuse it and give completely wrong results.
This happens, for example, in the CANDELS-GOODS South images@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.

The two @option{--hstartwcs} and @option{--hendwcs} are thus provided so when using older datasets, you can specify what region in the FITS headers you want to use to read the WCS keywords.
Note that this is only relevant for reading the WCS information; basic data information like the image size is read separately.
These two options will only be considered when the value to @option{--hendwcs} is larger than that of @option{--hstartwcs}.
So if they are equal or @option{--hstartwcs} is larger than @option{--hendwcs}, then all the input keywords will be parsed to get the WCS information of the image.
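For example, the sketch below (the keyword card numbers and file name are only illustrative) makes Crop read the WCS only from keyword cards 20 to 150:

@example
$ astcrop --hstartwcs=20 --hendwcs=150 --mode=wcs \
          --center=53.16,-27.78 --width=2/60 image.fits
@end example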

@item --hendwcs=INT
Specify the last keyword card to read for specifying the image world coordinate system on the input images.
See @option{--hstartwcs}.

@end table

@noindent
Crop box parameters:
@table @option

@item -c FLT[,FLT[,...]]
@itemx --center=FLT[,FLT[,...]]
The central position of the crop in the input image.
The positions along each dimension must be separated by a comma (@key{,}) and fractions are also acceptable.
The number of values given to this option must be the same as the dimensions of the input dataset.
The width of the crop should be set with @code{--width}.
The units of the coordinates are read based on the value to the @option{--mode} option, see below.

@item -w FLT[,FLT[,...]]
@itemx --width=FLT[,FLT[,...]]
Width of the cropped region about its center.
@option{--width} may take either a single value (to be used for all dimensions) or multiple values (a specific value for each dimension).
If in WCS mode, value(s) given to this option will be read in the same units as the dataset's WCS information along this dimension.
The final output will have an odd number of pixels to allow easy identification of the pixel which keeps your requested coordinate (from @option{--center} or @option{--catalog}).

The @code{--width} option also accepts fractions.
For example if you want the width of your crop to be 3 by 5 arcseconds along RA and Dec respectively, you can call it with: @option{--width=3/3600,5/3600}.

If you want an even sided crop, you can run Crop afterwards with @option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which side you don't need), see @ref{Crop section syntax}.

@item -l STR
@itemx --polygon=STR
String of vertices to define a polygon to crop.
The vertices are used to define the polygon in the same order given to this option.
When the vertices are not necessarily ordered in the proper order (for example one vertex in a square comes after its diagonal opposite), you can add the @option{--polygonsort} option which will attempt to sort the vertices before cropping.
Note that for concave polygons, sorting is not recommended because there is no unique solution, for more, see the description under @option{--polygonsort}.

This option can be used both in the image and WCS modes, see @ref{Crop modes}.
The cropped image will be the size of the rectangular region that completely encompasses the polygon.
By default all the pixels that are outside of the polygon will be set as blank values (see @ref{Blank pixels}).
However, if @option{--polygonout} is called all pixels internal to the vertices will be set to blank.
In WCS-mode, you may provide many FITS images/tiles: Crop will stitch them to produce this cropped region, then apply the polygon.

The syntax for the polygon vertices is similar to, and simpler than, that for @option{--section}.
In short, the dimensions of each coordinate are separated by a comma (@key{,}) and each vertex is separated by a colon (@key{:}).
You can define as many vertices as you like.
If you would like to use space characters between the dimensions and vertices to make them more human-readable, then you have to put the value to this option in double quotation marks.

For example, let's assume you want to work on the deepest part of the WFC3/IR images of Hubble Space Telescope eXtreme Deep Field (HST-XDF).
@url{https://archive.stsci.edu/prepds/xdf/, According to the web page}@footnote{@url{https://archive.stsci.edu/prepds/xdf/}} the deepest part is contained within the coordinates:

@example
[ (53.187414,-27.779152), (53.159507,-27.759633),
  (53.134517,-27.787144), (53.161906,-27.807208) ]
@end example

They have provided mask images with only these pixels in the WFC3/IR images, but what if you also need to work on the same region in the full resolution ACS images? Also what if you want to use the CANDELS data for the shallow region? Running Crop with @option{--polygon} will easily pull out this region of the image for you, irrespective of the resolution.
If you have set the operating mode to WCS mode in your nearest configuration file (see @ref{Configuration files}), there is no need to call @option{--mode=wcs} on the command line.

@example
$ astcrop --mode=wcs desired-filter-image(s).fits           \
   --polygon="53.187414,-27.779152 : 53.159507,-27.759633 : \
              53.134517,-27.787144 : 53.161906,-27.807208"
@end example
@cindex SAO DS9
In other cases, you have an image and want to define the polygon yourself (it isn't already published like the example above).
As the number of vertices increases, checking the vertex coordinates on a FITS viewer (for example SAO ds9) and typing them in one by one can be very tedious and prone to typo errors.

You can take the following steps to avoid the frustration and possible typos: Open the image with ds9 and activate its ``region'' mode with @clicksequence{Edit@click{}Region}.
Then define the region as a polygon with @clicksequence{Region@click{}Shape@click{}Polygon}.
Click on the approximate center of the region you want and a small square will appear.
By clicking on the vertices of the square you can shrink or expand it, clicking and dragging anywhere on the edges will enable you to define a new vertex.
After the region has been nicely defined, save it as a file with @clicksequence{Region@click{}Save Regions}.
You can then select the name and address of the output file, keep the format as @command{REG} and press ``OK''.
In the next window, keep format as ``ds9'' and ``Coordinate System'' as ``fk5''.
A plain text file (let's call it @file{ds9.reg}) is now created.

You can now convert this plain text file to Crop's polygon format with this command (when typing on the command-line, ignore the ``@key{\}'' at the end of the first and second lines along with the extra spaces, these are only for nice printing):

@example
$ v=$(awk 'NR==4' ds9.reg | sed -e's/polygon(//'        \
           -e's/\([^,]*,[^,]*\),/\1:/g' -e's/)//' )
$ astcrop --mode=wcs image.fits --polygon=$v
@end example

@item --polygonout
Keep all the regions outside the polygon and mask the inner ones with blank pixels (see @ref{Blank pixels}).
This is practically the inverse of the default mode of treating polygons.
Note that this option only works when you have only provided one input image.
If multiple images are given (in WCS mode), then the full area covered by all the images would have to be produced, with the polygon excluded.
This can lead to a very large area if large surveys like COSMOS are used.
So Crop will abort and notify you.
In such cases, it is best to crop out the larger region you want, then mask the smaller region with this option.

@item --polygonsort
Sort the set of vertices given to the @option{--polygon} option.
For a convex polygon, the vertices will be sorted correctly; however, for a concave polygon there is no unique sorting, so be careful because the crop may not be what you expected.

@cindex Convex polygons
@cindex Concave polygons
@cindex Polygons, Convex
@cindex Polygons, Concave
Polygons come in two classes: convex and concave (or generally, non-convex!), see below for a demonstration.
Convex polygons are those where all inner angles are less than 180 degrees.
By contrast, a concave polygon is one where an inner angle may be more than 180 degrees.

@example
            Concave Polygon        Convex Polygon

             D --------C          D------------- C
              \        |        E /              |
               \E      |          \              |
               /       |           \             |
              A--------B             A ----------B
@end example

@item -s STR
@itemx --section=STR
Section of the input image which you want to be cropped.
See @ref{Crop section syntax} for a complete explanation on the syntax required for this input.

@item -x STR/INT
@itemx --coordcol=STR/INT
The column in a catalog to read as a coordinate.
The value can be either the column number (starting from 1), or a match/search in the table meta-data, see @ref{Selecting table columns}.
This option must be called multiple times, depending on the number of dimensions in the input dataset.
If it is called more than necessary, the extra columns (later calls to this option on the command-line or configuration files) will be ignored, see @ref{Configuration file precedence}.

@item -n STR/INT
@itemx --namecol=STR/INT
The column to use for the file name of each crop.
The value can be either the column number (starting from 1), or a match/search in the table meta-data, see @ref{Selecting table columns}.
This option can be used both in Image and WCS modes, and is not mandatory.
When a column is given to this option, the final crop base file name will be taken from the contents of this column.
The directory will be determined by the @option{--output} option (current directory if not given) and the value to @option{--suffix} will be appended.
When this column isn't given, the row number will be used instead.

@end table

@noindent
Output options:
@table @option

@item --checkcenter=FLT/INT
@cindex Check center of crop
Square box width of region in the center of the image to check for blank values.
If any of the pixels in this central region of a crop (defined by its center) are blank, then it will not be stored in an output file.
If the value to this option is zero, no checking is done.
This check is only applied when the cropped region(s) are defined by their center (not by the vertices, see @ref{Crop modes}).

The units of the value are interpreted based on the @option{--mode} value (in WCS or pixel units).
The size of the region that is ultimately checked (in pixels) will be an odd integer around the center (after conversion from WCS units, or when an even number of pixels is given to this option).
In WCS mode, the value can be given as fractions, for example if the WCS units are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 arcseconds.

Because survey regions don't often have a clean square or rectangle shape, some of the pixels on the sides of the survey FITS image don't commonly have any data and are blank (see @ref{Blank pixels}).
So when the catalog was not generated from the input image, it often happens that the image does not have data over some of the points.

When the given center of a crop falls in such regions or outside the dataset, and this option has a non-zero value, no crop will be created.
Therefore with this option, you can specify a width of a small box (3 pixels is often good enough) around the central pixel of the cropped image.
You can check which crops were created and which weren't from the command-line (if @option{--quiet} was not called, see @ref{Operating mode options}), or in Crop's log file (see @ref{Crop output}).
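For example, in the sketch below (the catalog and image names are illustrative), any crop whose central 3 by 3 pixel box contains a blank pixel will not be written:

@example
$ astcrop --mode=img --catalog=cat.txt --width=51 \
          --checkcenter=3 image.fits
@end example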

@item -p STR
@itemx --suffix=STR
The suffix (or post-fix) of the output files for when you want all the cropped images to have a special ending.
One case where this might be helpful is when besides the science images, you want the weight images (or exposure maps, which are also distributed with survey images) of the cropped regions too.
So in one run, you can set the input images to the science images and @option{--suffix=_s.fits}.
In the next run you can set the weight images as input and @option{--suffix=_w.fits}.
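Those two runs might look like the sketch below (the file names are only illustrative):

@example
$ astcrop --mode=wcs --catalog=cat.txt --width=2/60 \
          --suffix=_s.fits science.fits
$ astcrop --mode=wcs --catalog=cat.txt --width=2/60 \
          --suffix=_w.fits weight.fits
@end example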

@item --primaryimghdu
Write the output into the primary (0-th) HDU/extension of the output.
By default, like all Gnuastro's default outputs, no data is written in the primary extension because the FITS standard suggests keeping that extension free of data and only for meta data.

@item -b
@itemx --noblank
Pixels outside of the input image that are in the crop box will not be used.
By default they are filled with blank values (depending on type), see @ref{Blank pixels}.
This option only applies in Image mode, see @ref{Crop modes}.

@item -z
@itemx --zeroisnotblank
In float or double images, it is common to give the value of zero to blank pixels.
If the input image type is one of these two types, such pixels will also be considered as blank.
You can disable this behavior with this option, see @ref{Blank pixels}.

@end table

@noindent
Operating mode options:
@table @option

@item -O STR
@itemx --mode=STR
Operate in Image mode or WCS mode when the input coordinates could be interpreted as either image or WCS coordinates.
The value must either be @option{img} or @option{wcs}, see @ref{Crop modes} for a full description.

@end table





@node Crop output,  , Crop options, Invoking astcrop
@subsubsection Crop output

The string given to @option{--output} option will be interpreted depending
on how many crops were requested, see @ref{Crop modes}:

@itemize
@item
When a catalog is given, the value of the @option{--output} (see @ref{Common options}) will be read as the directory to store the output cropped images.
Hence if it doesn't already exist, Crop will abort with a ``No such file or directory'' error.

The crop file names will consist of two parts: a variable part (the row number of each target starting from 1) along with a fixed string which you can set with the @option{--suffix} option.
Optionally, you may also use the @option{--namecol} option to define a column in the input catalog to use as the file name instead of numbers.

@item
When only one crop is desired, the value to @option{--output} will be read as a file name.
If no output is specified or if it is a directory, the output file name will follow the automatic output names of Gnuastro, see @ref{Automatic output}: the @file{.fits} suffix of the input will be replaced with the string given to @option{--suffix}.
@end itemize

By default, as suggested by the FITS standard and implemented in all Gnuastro programs, the first/primary extension of the output files will only contain meta data.
The cropped images/cubes will be written into the 2nd HDU of their respective FITS file (which is actually counted as @code{1} because HDU counting starts from @code{0}).
However, if you want the cropped data to be written into the primary (0-th) HDU, run Crop with the @option{--primaryimghdu} option.

The header of each output cropped image will contain the names of the input image(s) it was cut from.
If a name is longer than the 70 character space that the FITS standard allows for header keyword values, the name will be cut into several keywords from the nearest slash (@key{/}).
The keywords have the following format: @command{ICFn_m} (for Crop File).
Where @command{n} is the number of the image used in this crop and @command{m} is the part of the name (it can be broken into multiple keywords).
Following the name is another keyword named @command{ICFnPIX} which shows the pixel range from that input image in the same syntax as @ref{Crop section syntax}.
So this string can be directly given to the @option{--section} option later.
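For example, with the sketch below (the section value, here assumed to be @command{13:81,500:700}, will differ for every crop), you can inspect the keyword with the Fits program and feed its value back into Crop:

@example
$ astfits cropped.fits -h1 | grep ICF1PIX
$ astcrop image.fits --mode=img --section=13:81,500:700
@end example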

Once done, a log file can be created in the current directory with the @code{--log} option.
This file will have three columns and the same number of rows as the number of cropped images.
There are also comments on the top of the log file explaining basic information about the run and descriptions for the columns.
A short description of the columns is also given below:

@enumerate
@item
The cropped image file name for that row.
@item
The number of input images that were used to create that image.
@item
A @code{0} if the central few pixels (value to the @option{--checkcenter} option) are blank and @code{1} if they aren't.
When the crop was not defined by its center (see @ref{Crop modes}), or @option{--checkcenter} was given a value of 0 (see @ref{Invoking astcrop}), the center will not be checked and this column will be given a value of @code{-1}.
@end enumerate


















@node Arithmetic, Convolve, Crop, Data manipulation
@section Arithmetic

It is commonly necessary to do operations on some or all of the elements of a dataset independently (pixels in an image).
For example, in the reduction of raw data it is necessary to subtract the Sky value (@ref{Sky value}) from each image.
Later (once the images are warped into a single grid using Warp for example, see @ref{Warp}), the images are co-added (the output pixel grid is the average of the pixels of the individual input images).
Arithmetic is Gnuastro's program for such operations on your datasets directly from the command-line.
It currently uses the reverse polish or post-fix notation (see @ref{Reverse polish notation}) and will work on the native data types of the input images/data to reduce CPU and RAM resources, see @ref{Numeric data types}.
For more information on how to run Arithmetic, please see @ref{Invoking astarithmetic}.


@menu
* Reverse polish notation::     The current notation style for Arithmetic
* Arithmetic operators::        List of operators known to Arithmetic
* Invoking astarithmetic::      How to run Arithmetic: options and output
@end menu

@node Reverse polish notation, Arithmetic operators, Arithmetic, Arithmetic
@subsection Reverse polish notation

@cindex Post-fix notation
@cindex Reverse Polish Notation
The most common notation for arithmetic operations is the @url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where the operator goes between the two operands, for example @mymath{4+5}.
While the infix notation is the preferred way in most programming languages, currently Gnuastro's programs (in particular Arithmetic and Table, when doing column arithmetic) do not use it.
This is because it requires parentheses, which would complicate the implementation of the code.
In the near future we do plan to also allow this notation@footnote{@url{https://savannah.gnu.org/task/index.php?13867}}, but for the time being (due to time constraints on the developers), arithmetic operations can only be done in the post-fix notation (also known as @url{https://en.wikipedia.org/wiki/Reverse_Polish_notation, reverse polish notation}).
The Wikipedia article provides some excellent explanation of this notation, but we will give a short summary here for self-sufficiency.

In the post-fix notation, the operator is placed after the operands; as we will see below, this removes the need for parentheses with most ordinary operators.
For example, instead of writing @command{5+6}, we write @command{5 6 +}.
To easily understand how this notation works, you can think of each operand as a node in a ``last-in-first-out'' stack.
Every time an operator is confronted, the operator pops the number of operands it needs from the top of the stack (so they don't exist in the stack any more), does its operation and pushes the result back on top of the stack.
So if you want the average of 5 and 6, you would write: @command{5 6 + 2 /}.
The operations that are done are:

@enumerate
@item
@command{5} is an operand, so it is pushed to the top of the stack (which is initially empty).
@item
@command{6} is an operand, so it is pushed to the top of the stack.
@item
@command{+} is a @emph{binary} operator, so it will pop the top two elements of the stack out of it, and perform addition on them (the order is @mymath{5+6} in the example above).
The result is @command{11} which is pushed to the top of the stack.
@item
@command{2} is an operand so push it onto the top of the stack.
@item
@command{/} is a binary operator, so pull out the top two elements of the stack (top-most is @command{2}, then @command{11}) and divide the second one by the first; the result (@command{5.5}) is pushed back on top of the stack.
@end enumerate

In the Arithmetic program, the operands can be FITS images or numbers (see @ref{Invoking astarithmetic}).
In Table's column arithmetic, they can be any column or a number (see @ref{Column arithmetic}).
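For example, the average of two hypothetical images can be computed with the same post-fix logic as the walk-through above (@option{-g1} sets the HDU of both inputs, see @ref{Invoking astarithmetic}):

@example
$ astarithmetic a.fits b.fits + 2 / -g1 \
                --output=average.fits
@end example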

With this notation, very complicated procedures can be created without the need for parentheses or worrying about precedence.
Even functions which take an arbitrary number of arguments can be defined in this notation.
This is a very powerful notation and is used in languages like PostScript@footnote{See the EPS and PDF part of @ref{Recognized file formats} for a little more on the PostScript language.}, which produces PDF files when compiled.





@node Arithmetic operators, Invoking astarithmetic, Reverse polish notation, Arithmetic
@subsection Arithmetic operators

The recognized operators in Arithmetic are listed below.
See @ref{Reverse polish notation} for more on how the operators and operands should be ordered on the command-line.
The operands to all operators can be a data array (for example a FITS image) or a number, the output will be an array or number according to the inputs.
For example a number multiplied by an array will produce an array.
The conditional operators will return pixel or numerical values of 0 (false) or 1 (true), stored in an @code{unsigned char} data type (see @ref{Numeric data types}).

@table @command

@item +
Addition, so ``@command{4 5 +}'' is equivalent to @mymath{4+5}.
For example in the command below, the value 20000 is added to each pixel's value in @file{image.fits}:
@example
$ astarithmetic 20000 image.fits +
@end example
You can also use this operator to sum the values of one pixel in two images (which have to be the same size).
For example in the commands below (which are identical, see paragraph after the commands), each pixel of @file{sum.fits} is the sum of the same pixel's values in @file{a.fits} and @file{b.fits}.
@example
$ astarithmetic a.fits b.fits + -h1 -h1 --output=sum.fits
$ astarithmetic a.fits b.fits + -g1     --output=sum.fits
@end example
The HDU/extension has to be specified for each image with @option{-h}.
However, if the HDUs are the same in all inputs, you can use @option{-g} to only specify the HDU once.

@item -
Subtraction, so ``@command{4 5 -}'' is equivalent to @mymath{4-5}.
Usage of this operator is similar to @command{+} operator, for example:
@example
$ astarithmetic 20000 image.fits -
$ astarithmetic a.fits b.fits - -g1 --output=sub.fits
@end example

@item x
Multiplication, so ``@command{4 5 x}'' is equivalent to @mymath{4\times5}.
For example in the command below, the value of each output pixel is 5 times its value in @file{image.fits}:
@example
$ astarithmetic image.fits 5 x
@end example
And you can multiply the value of each pixel in two images, like this:
@example
$ astarithmetic a.fits a.fits x -g1 --output=multip.fits
@end example

@item /
Division, so ``@command{4 5 /}'' is equivalent to @mymath{4/5}.
Like the multiplication, for example
@example
$ astarithmetic image.fits 5 -h1 /
$ astarithmetic a.fits b.fits / -g1 --output=div.fits
@end example

@item %
Modulo (remainder), so ``@command{3 2 %}'' is equivalent to @mymath{1}.
Note that the modulo operator only works on integer types (see @ref{Numeric data types}).
This operator is therefore not defined for most processed astronomical images that have floating-point values.
However it is useful in labeled images (for example @ref{Segment output}).
In such cases, each pixel is the integer label of the object it is associated with; hence with the example command below, each label will be replaced by its remainder after division by 5 (a value between 0 and 4), removing roughly one fifth of the objects (all objects with a label that is a multiple of 5 will be set to 0).
@example
$ astarithmetic label.fits 5 %
@end example

@item abs
Absolute value of first operand, so ``@command{4 abs}'' is equivalent to @mymath{|4|}.
For example the output of the command below will not have any negative pixels (all negative pixels will be multiplied by @mymath{-1} to become positive)
@example
$ astarithmetic image.fits abs
@end example


@item pow
First operand to the power of the second, so ``@command{4.3 5 pow}'' is equivalent to @mymath{4.3^{5}}.
For example with the command below all pixels will be squared
@example
$ astarithmetic image.fits 2 pow
@end example

@item sqrt
The square root of the first operand, so ``@command{5 sqrt}'' is equivalent to @mymath{\sqrt{5}}.
Since the square root is only defined for positive values, any negative-valued pixel will become NaN (blank).
The output will have a floating point type, but its precision is determined from the input: if the input is a 64-bit floating point, the output will also be 64-bit.
Otherwise, the output will be 32-bit floating point (see @ref{Numeric data types} for the respective precision).
Therefore if you require 64-bit precision in estimating the square root, convert the input to 64-bit floating point first, for example with @code{5 float64 sqrt}.
For example each pixel of the output of the command below will be the square root of that pixel in the input.
@example
$ astarithmetic image.fits sqrt
@end example

If you just want to scale an image with negative values using this operator (for better visual inspection, and the actual values don't matter for you), you can subtract its minimum value from the image, then take the square root:

@example
$ astarithmetic image.fits image.fits minvalue - sqrt -g1
@end example

Alternatively, to avoid reading the image into memory two times, you can use the @option{set-} operator to read it into the variable @option{i} and use @option{i} two times to speed up the operation (described below):

@example
$ astarithmetic image.fits set-i i i minvalue - sqrt
@end example

@item log
Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to @mymath{ln(4)}.
Negative pixels will become NaN, and the output type is determined from the input, see the explanation under @command{sqrt} for more on these features.
For example the command below will take the natural logarithm of every pixel in the input.
@example
$ astarithmetic image.fits log --output=log.fits
@end example

@item log10
Base-10 logarithm of first popped operand, so ``@command{4 log10}'' is equivalent to @mymath{log_{10}(4)}.
Negative pixels will become NaN, and the output type is determined from the input, see the explanation under @command{sqrt} for more on these features.
For example the command below will take the base-10 logarithm of every pixel in the input.
@example
$ astarithmetic image.fits log10
@end example

@item minvalue
Minimum value in the first popped operand, so ``@command{a.fits minvalue}'' will push the minimum pixel value in this image onto the stack.
When this operator acts on a single image, the output (operand that is put back on the stack) will no longer be an image, but a number.
The output of this operator has the same type as the input.
This operator is mainly intended for multi-element datasets (for example images or data cubes); if the popped operand is a number, it will just be returned without any change.

Note that when the final remaining/output operand is a single number, it is printed onto the standard output.
For example with the command below the minimum pixel value in @file{image.fits} will be printed in the terminal:
@example
$ astarithmetic image.fits minvalue
@end example

However, the output above also includes a lot of extra information that is not relevant in this context.
If you just want the final number, run Arithmetic in quiet mode:
@example
$ astarithmetic image.fits minvalue -q
@end example

Also see the description of @option{sqrt} for other example usages of this operator.

@item maxvalue
Maximum value of first operand in the same type, similar to @command{minvalue}, see the description there for more.
For example
@example
$ astarithmetic image.fits maxvalue -q
@end example

@item numbervalue
Number of non-blank elements in first operand in the @code{uint64} type (since it is always a positive integer, see @ref{Numeric data types}).
Its usage is similar to @command{minvalue}, for example
@example
$ astarithmetic image.fits numbervalue -q
@end example

@item sumvalue
Sum of non-blank elements in first operand in the @code{float32} type.
Its usage is similar to @command{minvalue}, for example
@example
$ astarithmetic image.fits sumvalue -q
@end example

@item meanvalue
Mean value of non-blank elements in first operand in the @code{float32} type.
Its usage is similar to @command{minvalue}, for example
@example
$ astarithmetic image.fits meanvalue -q
@end example

@item stdvalue
Standard deviation of non-blank elements in first operand in the @code{float32} type.
Its usage is similar to @command{minvalue}, for example
@example
$ astarithmetic image.fits stdvalue -q
@end example

@item medianvalue
Median of non-blank elements in first operand with the same type.
Its usage is similar to @command{minvalue}, for example
@example
$ astarithmetic image.fits medianvalue -q
@end example


@cindex NaN
@item min
For each pixel, find the minimum value in all given datasets.
The output will have the same type as the input.

The first popped operand to this operator must be a positive integer number which specifies how many further operands should be popped from the stack.
All the subsequently popped operands must have the same type and size.
This operator (and all the variable-operand operators similar to it that are discussed below) will work in multi-threaded mode unless Arithmetic is called with the @option{--numthreads=1} option, see @ref{Multi-threaded operations}.

Each pixel of the output of the @code{min} operator will be given the minimum value of the same pixel from all the popped operands/images.
For example the following command will produce an image with the same size and type as the three inputs, but each output pixel value will be the minimum of the same pixel's values in all three input images.

@example
$ astarithmetic a.fits b.fits c.fits 3 min
@end example

Important notes:
@itemize

@item
NaN/blank pixels will be ignored, see @ref{Blank pixels}.

@item
The output will have the same type as the inputs.
This is natural for the @command{min} and @command{max} operators, but for other similar operators (for example @command{sum}, or @command{average}) the per-pixel operations will be done in double precision floating point and then stored back in the input type.
Therefore, if the input was an integer, C's internal type conversion will be used.

@end itemize

@item max
For each pixel, find the maximum value in all given datasets.
The output will have the same type as the input.
This operator is called similarly to the @command{min} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 max -omax.fits
@end example


@item number
For each pixel count the number of non-blank pixels in all given datasets.
The output will be an unsigned 32-bit integer datatype (see @ref{Numeric data types}).
This operator is called similarly to the @command{min} operator; please see there for more.
Note that some datasets may have blank values (which are also ignored in all similar operators like @command{min}, @command{sum}, @command{mean} or @command{median}).
Hence final pixel values of this operator will not, in general, be equal to the number of inputs.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 number -onum.fits
@end example

@item sum
For each pixel, calculate the sum in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{min} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 sum -ostack-sum.fits
@end example

@item mean
For each pixel, calculate the mean in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{min} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 mean -ocoadd-mean.fits
@end example

@item std
For each pixel, find the standard deviation in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{min} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 std -ostd.fits
@end example

@item median
For each pixel, find the median in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{min} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 median \
                --output=stack-median.fits
@end example

@item quantile
For each pixel, find the quantile from all given datasets.
The output will have the same numeric data type and size as the input datasets.
Besides the input datasets, the quantile operator also needs a single parameter (the requested quantile).
The parameter should be the first popped operand, with a value between (and including) 0 and 1.
The second popped operand must be the number of datasets to use.

In the example below, the first-popped operand (@command{0.7}) is the quantile, the second-popped operand (@command{3}) is the number of datasets to pop.

@example
$ astarithmetic a.fits b.fits c.fits 3 0.7 quantile
@end example

@item sigclip-number
For each pixel, find the sigma-clipped number (after removing outliers) in all given datasets.
The output will have an unsigned 32-bit integer type (see @ref{Numeric data types}).

This operator will combine the specified number of inputs into a single output that contains the number of remaining elements after @mymath{\sigma}-clipping on each element/pixel (for more on @mymath{\sigma}-clipping, see @ref{Sigma clipping}).
This operator is very similar to @command{min}, with the exception that it expects two operands (parameters for sigma-clipping) before the total number of inputs.
The first popped operand is the termination criteria and the second is the multiple of @mymath{\sigma}.

For example in the command below, the first popped operand (@command{0.2}) is the sigma clipping termination criteria.
If the termination criteria is larger than, or equal to, 1 it is interpreted as the number of clips to do.
But if it is between 0 and 1, then it is the tolerance level on the standard deviation (see @ref{Sigma clipping}).
The second popped operand (@command{5}) is the multiple of sigma to use in sigma-clipping.
The third popped operand (@command{3}) is the number of datasets that will be used (similar to the first popped operand to @command{min}).

@example
$ astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-number
@end example

@item sigclip-median
For each pixel, find the sigma-clipped median in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{sigclip-number} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-median
@end example

@item sigclip-mean
For each pixel, find the sigma-clipped mean in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{sigclip-number} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-mean
@end example

@item sigclip-std
For each pixel, find the sigma-clipped standard deviation in all given datasets.
The output will have a single-precision (32-bit) floating point type.
This operator is called similarly to the @command{sigclip-number} operator; please see there for more.
For example
@example
$ astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-std
@end example

@item filter-mean
Apply mean filtering (or @url{https://en.wikipedia.org/wiki/Moving_average, moving average}) on the input dataset.
During mean filtering, each pixel (data element) is replaced by the mean value of all its surrounding pixels (excluding blank values).
The number of surrounding pixels in each dimension (to calculate the mean) is determined through the earlier operands that have been pushed onto the stack prior to the input dataset.
The number of necessary operands is determined by the dimensions of the input dataset (first popped operand).
The order of the dimensions on the command-line is the order in FITS format.
Here is one example:

@example
$ astarithmetic 5 4 image.fits filter-mean
@end example

@noindent
In this example, each pixel is replaced by the mean of a 5 by 4 box around it.
The box is 5 pixels along the first FITS dimension (horizontal when viewed in ds9) and 4 pixels along the second FITS dimension (vertical).

Each pixel will be placed in the center of the box that the mean is calculated on.
If the given width along a dimension is even, then the center is assumed to be between the pixels (not in the center of a pixel).
When the pixel is close to the edge, the pixels of the box that fall outside the image are ignored.
Therefore, on the edge, fewer points will be used in calculating the mean.

The final effect of mean filtering is to smooth the input image; it is essentially a convolution with a kernel that has identical values for all its pixels (is flat), see @ref{Convolution process}.

Note that blank pixels will also be affected by this operator: if there are any non-blank elements in the box surrounding a blank pixel, in the filtered image it will have the mean of the non-blank elements, and therefore it won't be blank any more.
If blank elements are important for your analysis, you can use the @code{isblank} operator with the @code{where} operator to set them back to blank after filtering.
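For example, the sketch below (assuming a floating point image; the box size and file names are only illustrative) does a 5 by 5 mean filtering, then sets all pixels that were blank in the input back to NaN:

@example
$ astarithmetic image.fits -h1 set-i \
                5 5 i filter-mean \
                i isblank nan where --output=filtered.fits
@end example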

@item filter-median
Apply @url{https://en.wikipedia.org/wiki/Median_filter, median filtering} on the input dataset.
This is very similar to @command{filter-mean}, except that instead of the mean value of the box pixels, the median value is used to replace a pixel value.
For more on how to use this operator, please see @command{filter-mean}.

The median is less susceptible to outliers compared to the mean.
As a result, after median filtering, the pixel values will be more discontinuous than after mean filtering.
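
For example, the command below applies median filtering over a 5 by 4 box:

@example
$ astarithmetic 5 4 image.fits filter-median
@end example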

@item filter-sigclip-mean
Apply a @mymath{\sigma}-clipped mean filtering onto the input dataset.
This is very similar to @code{filter-mean}, except that all outliers (identified by the @mymath{\sigma}-clipping algorithm) have been removed, see @ref{Sigma clipping} for more on the basics of this algorithm.
As described there, two extra input parameters are necessary for @mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the termination criteria.
@code{filter-sigclip-mean} therefore needs to pop two other operands from the stack after the dimensions of the box.

For example, the line below uses the same box size as the example of @code{filter-mean}.
However, all elements in the box that are iteratively beyond @mymath{3\sigma} of the distribution's median are removed from the final calculation of the mean until the change in @mymath{\sigma} is less than @mymath{0.2}.

@example
$ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-mean
@end example

The median (which needs a sorted dataset) is necessary for @mymath{\sigma}-clipping, therefore @code{filter-sigclip-mean} can be significantly slower than @code{filter-mean}.
However, if there are strong outliers in the dataset that you want to ignore (for example emission lines on a spectrum when finding the continuum), this is a much better solution.

@item filter-sigclip-median
Apply a @mymath{\sigma}-clipped median filtering onto the input dataset.
This operator and its necessary operands are almost identical to @code{filter-sigclip-mean}, except that after @mymath{\sigma}-clipping, the median value (which is less affected by outliers than the mean) is added back to the stack.
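
For example, with the same @mymath{\sigma}-clipping parameters and box size as the @code{filter-sigclip-mean} example above:

@example
$ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-median
@end example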

@item interpolate-medianngb
Interpolate the blank elements of the second popped operand with the median of nearest non-blank neighbors to each.
The number of the nearest non-blank neighbors used to calculate the median is given by the first popped operand.

The distance of the nearest non-blank neighbors is irrelevant in this interpolation.
The neighbors of each blank pixel will be parsed in expanding circular rings (for 2D images) or spherical surfaces (for a 3D cube) and each non-blank element over them is stored in memory.
When the requested number of non-blank neighbors have been found, their median is used to replace that blank element.
For example, the line below replaces each blank element with the median of the nearest 5 pixels.

@example
$ astarithmetic image.fits 5 interpolate-medianngb
@end example

When you want to interpolate blank regions such that each blank region has a single fixed value (for example the centers of saturated stars), this operator is not suitable: the pixels used to interpolate different parts of the region differ.
For such scenarios, you may use @code{interpolate-maxofregion} or @code{interpolate-minofregion} (described below).

@item interpolate-minngb
Similar to @code{interpolate-medianngb}, but will fill the blank values of the dataset with the minimum value of the nearest neighbors.

@item interpolate-maxngb
Similar to @code{interpolate-medianngb}, but will fill the blank values of the dataset with the maximum value of the nearest neighbors.
One useful application of this operator is to fill the saturated pixels of stars in images.
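
For example, with the command below, each blank pixel is replaced by the maximum of its nearest 5 non-blank neighbors (the number of neighbors is only for illustration):

@example
$ astarithmetic image.fits 5 interpolate-maxngb
@end example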

@item interpolate-minofregion
Interpolate all blank regions (consisting of many blank pixels that are touching) in the second popped operand with the minimum value of the pixels that are immediately bordering that region (a single value).
The first popped operand is the connectivity (see description in @command{connected-components}).

For example, with the command below, all the connected blank regions of @file{image.fits} will be filled.
Since it is an image (a 2D dataset), a connectivity of 2 means that the independent blank regions are defined by 8-connected neighbors.
If the connectivity were 1, the regions would be defined by 4-connectivity: blank regions that may only be touching on the corner of one pixel would be identified as separate regions.

@example
$ astarithmetic image.fits 2 interpolate-minofregion
@end example

@item interpolate-maxofregion
@cindex Saturated pixels
Similar to @code{interpolate-minofregion}, but the maximum is used to fill the blank regions.

This operator can be useful for filling the saturated pixels of stars, for example.
Recall that the @option{interpolate-maxngb} operator looks for the maximum value among a given number of neighboring pixels and is more useful in small noisy regions.
As the blank regions become larger, @option{interpolate-maxngb} can cause fragmentation within a connected blank region, because the nearest neighbors of one part of the blank region may not fall within the pixels searched for the other parts.
With this operator, the size of the blank region is irrelevant: all the pixels bordering the blank region are parsed and their maximum value is used for the whole region.
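
The usage is identical to @code{interpolate-minofregion}, for example:

@example
$ astarithmetic image.fits 2 interpolate-maxofregion
@end example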

@item collapse-sum
Collapse the given dataset (second popped operand) by summing all elements along the first popped operand (a dimension in the FITS standard: counting from one, from the fastest dimension).
The returned dataset has one dimension less compared to the input.

The output will have a double-precision floating point type irrespective of the input dataset's type.
Doing the operation in double-precision (64-bit) floating point will help the collapse (summation) be affected less by floating point errors.
But afterwards, single-precision floating points are usually enough in real (noisy) datasets.
So depending on the type of the input and its nature, it is recommended to use one of the type conversion operators on the returned dataset.

@cindex World Coordinate System (WCS)
If any WCS is present, the returned dataset will also lack the respective dimension in its WCS matrix.
Therefore, when the WCS is important for later processing, be sure that the input is aligned with the respective axes: all non-diagonal elements in the WCS matrix are zero.

@cindex Data cubes
@cindex 3D data-cubes
@cindex Cubes (3D data)
@cindex Narrow-band image
@cindex IFU: Integral Field Unit
@cindex Integral field unit (IFU)
One common application of this operator is the creation of pseudo broad-band or narrow-band 2D images from 3D data cubes.
For example integral field unit (IFU) data products that have two spatial dimensions (first two FITS dimensions) and one spectral dimension (third FITS dimension).
The command below will collapse the whole third dimension into a 2D array the size of the first two dimensions, and then convert the output to single-precision floating point (as discussed above).

@example
$ astarithmetic cube.fits 3 collapse-sum float32
@end example

@item collapse-mean
Similar to @option{collapse-sum}, but the returned dataset will be the mean value along the collapsed dimension, not the sum.
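
For example, to create a pseudo broad-band image as the mean (instead of the sum) over a cube's third dimension (the @code{float32} conversion is for the same reason discussed under @code{collapse-sum}):

@example
$ astarithmetic cube.fits 3 collapse-mean float32
@end example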

@item collapse-number
Similar to @option{collapse-sum}, but the returned dataset will be the number of non-blank values along the collapsed dimension.
The output will have a 32-bit signed integer type.
If the input dataset doesn't have blank values, all the elements in the returned dataset will have a single value (the length of the collapsed dimension).
Therefore this is mostly relevant when there are blank values in the dataset.
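
For example, the command below returns a 2D image where each pixel contains the number of non-blank elements along the third dimension of the cube:

@example
$ astarithmetic cube.fits 3 collapse-number
@end example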

@item collapse-min
Similar to @option{collapse-sum}, but the returned dataset will have the same numeric type as the input and will contain the minimum value for each pixel along the collapsed dimension.

@item collapse-max
Similar to @option{collapse-sum}, but the returned dataset will have the same numeric type as the input and will contain the maximum value for each pixel along the collapsed dimension.

@item add-dimension
Build a higher-dimensional dataset from all the input datasets stacked after one another (along the slowest dimension).
The first popped operand has to be a single number.
It is used by the operator to know how many operands it should pop from the stack (and the size of the output in the new dimension).
The rest of the operands must have the same size and numerical data type.
This operator currently only works for 2D input operands; please contact us if you want to use inputs with other dimensions.

The output's WCS (which should have a different dimensionality compared to the inputs) can be read from another file with the @option{--wcsfile} option.
If no file is specified for the WCS, the first dataset's WCS will be used; you can later add/change the necessary WCS keywords with the FITS keyword modification features of the Fits program (see @ref{Fits}).

If your datasets don't have the same type, you can use the type transformation operators of Arithmetic that are discussed below.
Just beware of overflow if you are transforming to a smaller type, see @ref{Numeric data types}.

For example if you want to put the three @file{img1.fits}, @file{img2.fits} and @file{img3.fits} images (each a 2D dataset) into one 3D datacube, you can use this command:

@example
$ astarithmetic img1.fits img2.fits img3.fits 3 add-dimension
@end example

@item unique
Remove all duplicate (and blank) elements from the first popped operand.
The unique elements of the dataset will be stored in a single-dimensional dataset.

Recall that by default, single-dimensional datasets are stored as a table column in the output.
But you can use @option{--onedasimage} or @option{--onedonstdout} to respectively store them as a single-dimensional FITS array/image, or to print them on the standard output.

Although you can use this operator on floating point datasets, due to floating-point errors it may give unreasonable values: for example the tenth digit after the decimal point is also considered in the comparison, even though it may be statistically meaningless, see @ref{Numeric data types}.
It is therefore better/recommended to use it on integer datasets, like the labeled images of @ref{Segment output} where each pixel has the integer label of the object/clump it is associated with.
For example let's assume you have cropped a region of a larger labeled image and want to find the labels/objects that are within the crop.
With this operator, this job is trivial:
@example
$ astarithmetic seg-crop.fits unique
@end example

@item erode
@cindex Erosion
Erode the foreground pixels (with value @code{1}) of the input dataset (second popped operand).
The first popped operand is the connectivity (see description in @command{connected-components}).
Erosion is simply a flipping of all foreground pixels (with value @code{1}) to background (with value @code{0}) that are ``touching'' background pixels.
``Touching'' is defined by the connectivity.

In effect, this operator ``carves off'' the outer borders of the foreground, making them thinner.
This operator assumes a binary dataset (all pixels are @code{0} or @code{1}).
For example, imagine that you have an astronomical image with a mean/sky value of 0 units and a standard deviation (@mymath{\sigma}) of 100 units and many galaxies in it.
With the first command below, you can apply a threshold of @mymath{2\sigma} on the image (by only keeping pixels that are greater than 200 using the @command{gt} operator).
The output of thresholding the image is a binary image (each pixel is either smaller or equal to the threshold or larger than it).
You can then erode the binary image with the second command below to remove very small false positives (one or two pixel peaks).
@example
$ astarithmetic image.fits 200 gt -obinary.fits
$ astarithmetic binary.fits 2 erode -oout.fits
@end example

In fact, you can merge these operations into one command thanks to the reverse polish notation (see @ref{Reverse polish notation}):
@example
$ astarithmetic image.fits 200 gt 2 erode -oout.fits
@end example

To see the effect of connectivity, try this:
@example
$ astarithmetic image.fits 200 gt 1 erode -oout-con-1.fits
@end example

@item dilate
@cindex Dilation
Dilate the foreground pixels (with value @code{1}) of the binary input dataset (second popped operand).
The first popped operand is the connectivity (see description in @command{connected-components}).
Dilation is simply a flipping of all background pixels (with value @code{0}) to foreground (with value @code{1}) that are ``touching'' foreground pixels.
``Touching'' is defined by the connectivity.
In effect, this expands the outer borders of the foreground.
This operator assumes a binary dataset (all pixels are @code{0} or @code{1}).
The usage is similar to @code{erode}, for example:
@example
$ astarithmetic binary.fits 2 dilate -oout.fits
@end example

@item connected-components
@cindex Connected components
Find the connected components in the input dataset (second popped operand).
The first popped operand is the connectivity used in the connected components algorithm.
The second popped operand is the dataset where connected components are to be found.
It is assumed to be a binary image (with values of 0 or 1).
It must have an 8-bit unsigned integer type, which is the format produced by the conditional operators.
This operator will return a labeled dataset where the non-zero pixels in the input will be labeled with a counter (starting from 1).

The connectivity is a number between 1 and the number of dimensions in the dataset (inclusive).
1 corresponds to the weakest (symmetric) connectivity between elements and the number of dimensions the strongest.
For example on a 2D image, a connectivity of 1 corresponds to 4-connected neighbors and 2 corresponds to 8-connected neighbors.

One example usage of this operator can be the identification of regions above a certain threshold, as in the command below.
With this command, Arithmetic will first separate all pixels greater than 100 into a binary image (where pixels with a value of 1 are above that value).
Afterwards, it will label all those that are connected.

@example
$ astarithmetic in.fits 100 gt 2 connected-components
@end example

If your input dataset doesn't have a binary type, but you know all its values are 0 or 1, you can use the @code{uint8} operator (below) to convert it to binary.

@item fill-holes
Flip background (0) pixels surrounded by foreground (1) in a binary dataset.
This operator takes two operands (similar to @code{connected-components}): the second is the binary (0 or 1 valued) dataset to fill holes in and the first popped operand is the connectivity (to define a hole).
Imagine that in your dataset there are some holes with zero value inside the objects with one value (for example the output of the thresholding example of @command{erode}) and you want to fill the holes:
@example
$ astarithmetic binary.fits 2 fill-holes
@end example

@item invert
Invert an unsigned integer dataset (won't work on other data types, see @ref{Numeric data types}).
This is the only operator that ignores blank values (which are set to be the maximum values in the unsigned integer types).

This is useful in cases where the target(s) has(have) been imaged in absorption as raw formats (which are unsigned integer types).
With this operator, each pixel value will be subtracted from the maximum value of the given type, thus ``inverting'' the image, so the target(s) can be treated as emission.
This can be useful when the higher-level analysis methods/tools only work on emission (positive skew in the noise, not negative).
@example
$ astarithmetic image.fits invert
@end example

@item lt
Less than: creates a binary output (values either 0 or 1) where each pixel will be 1 if the second popped operand is smaller than the first popped operand and 0 otherwise.
If both operands are images, then all the pixels will be compared with their counterparts in the other image.
For example:
@example
$ astarithmetic image1.fits image2.fits -g1 lt
@end example
If only one operand is an image, then all the pixels will be compared with the single value (number) of the other operand.
For example:
@example
$ astarithmetic image1.fits 1000 lt
@end example
Finally if both are numbers, then the output is also just one number (0 or 1).
@example
$ astarithmetic 4 5 lt
@end example


@item le
Less or equal: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is smaller or equal to the first.
For example:
@example
$ astarithmetic image1.fits 1000 le
@end example

@item gt
Greater than: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is greater than the first.
For example:
@example
$ astarithmetic image1.fits 1000 gt
@end example

@item ge
Greater or equal: similar to @code{lt} (`less than' operator), but returning 1 when the second popped operand is larger or equal to the first.
For example:
@example
$ astarithmetic image1.fits 1000 ge
@end example

@item eq
Equality: similar to @code{lt} (`less than' operator), but returning 1 when the two popped operands are equal (to double precision floating point accuracy).
@example
$ astarithmetic image1.fits 1000 eq
@end example

@item ne
Non-Equality: similar to @code{lt} (`less than' operator), but returning 1 when the two popped operands are @emph{not} equal (to double precision floating point accuracy).
@example
$ astarithmetic image1.fits 1000 ne
@end example

@item and
Logical AND: returns 1 if both operands have a non-zero value, and 0 otherwise (when either or both are zero).
Both operands have to be the same kind: either both images or both numbers; the output is mostly meaningful when the inputs are binary (with pixel values of 0 or 1).
@example
$ astarithmetic image1.fits image2.fits -g1 and
@end example

For example if you only want to see which pixels in an image have a value @emph{between} 50 (inclusive) and 200 (exclusive), you can use this command:
@example
$ astarithmetic image.fits set-i i 50 ge i 200 lt and
@end example

@item or
Logical OR: returns 1 if either of the operands is non-zero, and 0 only when both operands are zero.
Both operands have to be the same kind: either both images or both numbers.
The usage is similar to @code{and}.

For example if you only want to see which pixels in an image have a value @emph{outside of} the range between -100 (inclusive) and 200 (exclusive), you can use this command:
@example
$ astarithmetic image.fits set-i i -100 lt i 200 ge or
@end example

@item not
Logical NOT: returns 1 when the operand is 0 and 0 when the operand is non-zero.
The operand can be an image or number, for an image, it is applied to each pixel separately.
For example if you want to know which pixels are not blank, you can use @code{not} on the output of the @command{isblank} operator described below:
@example
$ astarithmetic image.fits isblank not
@end example

@cindex Blank pixel
@item isblank
Test for a blank value (see @ref{Blank pixels}).
In essence, this is very similar to the conditional operators: the output is either 1 or 0 (see the `less than' operator above).
The difference is that it only needs one operand.
For example:
@example
$ astarithmetic image.fits isblank
@end example
Because of the definition of a blank pixel, a blank value is not even equal to itself, so you cannot use the equal operator above to select blank pixels.
See the ``Blank pixels'' box below for more on Blank pixels in Arithmetic.

@item where
Change the input (pixel) value @emph{where}/if a certain condition holds.
The conditional operators above can be used to define the condition.
Three operands are required for @command{where}.
The input format is demonstrated in this simplified example:

@example
$ astarithmetic modify.fits binary.fits if-true.fits where
@end example

The value of any pixel in @file{modify.fits} that corresponds to a non-zero @emph{and} non-blank pixel of @file{binary.fits} will be changed to the value of the same pixel in @file{if-true.fits} (this may also be a number).
The 3rd and 2nd popped operands (@file{modify.fits} and @file{binary.fits} respectively, see @ref{Reverse polish notation}) have to have the same dimensions/size.
@file{if-true.fits} can be either a number, or have the same dimension/size as the other two.

The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or @code{unsigned char} in standard C) type (see @ref{Numeric data types}).
It is treated as a binary dataset (with only two values: zero and non-zero, hence the name @code{binary.fits} in this example).
However, commonly you won't be dealing with an actual FITS file of a condition/binary image.
You will probably define the condition in the same run based on some other reference image and use the conditional and logical operators above to make a true/false (or one/zero) image for you internally.
For example the case below:

@example
$ astarithmetic in.fits reference.fits 100 gt new.fits where
@end example

In the example above, any of the @file{in.fits} pixels that have a value greater than @command{100} in @file{reference.fits} will be replaced with the corresponding pixel in @file{new.fits}.
Effectively the @code{reference.fits 100 gt} part created the condition/binary image which was added to the stack (in memory) and later used by @code{where}.
The command above is thus equivalent to these two commands:

@example
$ astarithmetic reference.fits 100 gt --output=binary.fits
$ astarithmetic in.fits binary.fits new.fits where
@end example

Finally, the input operands are read and used independently, so you can use the same file more than once as any of the operands.

When the 1st popped operand to @code{where} (@file{if-true.fits}) is a single number, it may be a NaN value (or any blank value, depending on its type) like the example below (see @ref{Blank pixels}).
When the number is blank, it will be converted to the blank value of the type of the 3rd popped operand (@code{in.fits}).
Hence, in the example below, all the pixels in @file{reference.fits} that have a value greater than 100, will become blank in the natural data type of @file{in.fits} (even though NaN values are only defined for floating point types).

@example
$ astarithmetic in.fits reference.fits 100 gt nan where
@end example

@item bitand
Bitwise AND operator: only bits with values of 1 in both popped operands will get the value of 1, the rest will be set to 0.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitand} will give @code{00100000}.
Note that the bitwise operators only work on integer type datasets.
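
On the command-line the operands are written as ordinary decimal integers.
For example, 40 is @code{00101000} and 34 is @code{00100010} in binary, so the command below will print 32 (or @code{00100000}):

@example
$ astarithmetic -q 40 34 bitand
@end example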

@item bitor
Bitwise inclusive OR operator: The bits where at least one of the two popped operands has a 1 value get a value of 1, the others 0.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitor} will give @code{00101010}.
Note that the bitwise operators only work on integer type datasets.

@item bitxor
Bitwise exclusive OR operator: A bit will be 1 if it differs between the two popped operands.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitxor} will give @code{00001010}.
Note that the bitwise operators only work on integer type datasets.

@item lshift
Bitwise left shift operator: shift all the bits of the first operand to the left by a number of times given by the second operand.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 2 lshift} will give @code{10100000}.
This is equivalent to multiplication by 4.
Note that the bitwise operators only work on integer type datasets.
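
In decimal, the example above corresponds to the command below: 40 is @code{00101000} in binary, and the printed result is 160 (or @code{10100000}):

@example
$ astarithmetic -q 40 2 lshift
@end example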

@item rshift
Bitwise right shift operator: shift all the bits of the first operand to the right by a number of times given by the second operand.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 2 rshift} will give @code{00001010}.
Note that the bitwise operators only work on integer type datasets.

@item bitnot
Bitwise not (more formally known as one's complement) operator: flip all the bits of the popped operand (note that this is the only unary, or single operand, bitwise operator).
In other words, any bit with a value of @code{0} is changed to @code{1} and vice-versa.
For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 bitnot} will give @code{11010111}.
Note that the bitwise operators only work on integer type datasets/numbers.

@item uint8
Convert the type of the popped operand to 8-bit unsigned integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item int8
Convert the type of the popped operand to 8-bit signed integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item uint16
Convert the type of the popped operand to 16-bit unsigned integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item int16
Convert the type of the popped operand to 16-bit signed integer (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item uint32
Convert the type of the popped operand to 32-bit unsigned integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item int32
Convert the type of the popped operand to 32-bit signed integer type (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item uint64
Convert the type of the popped operand to 64-bit unsigned integer (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item float32
Convert the type of the popped operand to 32-bit (single precision) floating point (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item float64
Convert the type of the popped operand to 64-bit (double precision) floating point (see @ref{Numeric data types}).
The internal conversion of C will be used.

@item size
Size of the dataset along a given FITS/Fortran dimension (counting from 1).
The desired dimension should be the first popped operand and the dataset must be the second popped operand.
The output will be a single unsigned integer (dimensions cannot be negative).
For example, the following command will produce the size of the first extension/HDU (the default HDU) of @file{a.fits} along the second FITS axis.

@example
$ astarithmetic a.fits 2 size
@end example

@item makenew
Create a new dataset that only has zero values.
The number of dimensions is read as the first popped operand and the number of elements along each dimension are the next popped operands (in reverse of the popping order).
The type of the new dataset is unsigned 8-bit integer.
For example, if you want to create a new 100 by 200 pixel image, you can run this command:

@example
$ astarithmetic 100 200 2 makenew
@end example

@item set-AAA
Set the characters after the dash (@code{AAA} in the case shown here) as a name for the first popped operand on the stack.
The named dataset will be freed from memory as soon as it is no longer needed, or if the name is reset to refer to another dataset later in the command.
This operator thus enables re-usability of a dataset without having to re-read it from a file every time it is necessary during a process.
When a dataset is necessary more than once, this operator can thus help simplify reading/writing on the command-line (thus avoiding potential bugs), while also speeding up the processing.

Like all operators, this operator pops the top operand off of the main processing stack, but unlike other operators, it won't add anything back to the stack immediately.
It will keep the popped dataset in memory through a separate list of named datasets (not on the main stack).
That list will be used to add/copy any requested dataset to the main processing stack when the name is called.

The name to give the popped dataset is part of the operator's name.
For example the @code{set-a} operator of the command below gives the name ``@code{a}'' to the contents of @file{image.fits}.
This name is then used instead of the actual filename to multiply the dataset by two.

@example
$ astarithmetic image.fits set-a a 2 x
@end example

The name can be any string, but avoid strings ending with standard filename suffixes (for example @file{.fits})@footnote{A dataset name like @file{a.fits} (which can be set with @command{set-a.fits}) will cause confusion in the initial parser of Arithmetic.
It will assume this name is a FITS file, and if it is used multiple times, Arithmetic will abort, complaining that you haven't provided enough HDUs.}.

One example of the usefulness of this operator is in the @code{where} operator.
For example, let's assume you want to mask all pixels larger than @code{5} in @file{image.fits} (extension number 1) with a NaN value.
Without setting a name for the dataset, you have to read the file two times, in a command like this:

@example
$ astarithmetic image.fits image.fits 5 gt nan where -g1
@end example

But with this operator you can simply give @file{image.fits} the name @code{i} and simplify the command above to the more readable one below (which greatly helps when the filename is long):

@example
$ astarithmetic image.fits set-i   i i 5 gt nan where
@end example

@item tofile-AAA
Write the top operand on the operands stack into a file called @code{AAA} (can be any FITS file name) without changing the operands stack.
If you don't need the dataset any more and would like to free it, see the @code{tofilefree} operator below.

By default, any file that is given to this operator is deleted before Arithmetic actually starts working on the input datasets.
The deletion can be deactivated with the @option{--dontdelete} option (as in all Gnuastro programs, see @ref{Input output options}).
If the same FITS file is given to this operator multiple times, it will contain multiple extensions (in the same order that the operator was called).

For example the operator @command{tofile-check.fits} will write the top operand to @file{check.fits}.
Since it doesn't modify the operands stack, this operator is very convenient when you want to debug, or understand, a string of operators and operands given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to see what is happening behind the scenes without modifying the overall process.
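
For example, in the sketch below, the intermediate sum of the two inputs is written to @file{sum.fits} for inspection, while the stack continues unchanged to produce the final output (the sum multiplied by 2):

@example
$ astarithmetic in1.fits in2.fits + tofile-sum.fits \
                2 x -oout.fits
@end example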

@item tofilefree-AAA
Similar to the @code{tofile} operator, with the only difference that the dataset that is written to a file is popped from the operand stack and freed from memory (cannot be used any more).

@end table

@cartouche
@noindent
@strong{Blank pixels in Arithmetic:} Blank pixels in the image (see @ref{Blank pixels}) will be stored based on the data type.
When the input is floating point type, blank values are NaN.
One aspect of NaN values is that by definition they will fail on @emph{any} comparison.
Hence both equal and not-equal operators will fail when both their operands are NaN!
Therefore, the only way to guarantee selection of blank pixels is through the @command{isblank} operator explained above.

One way you can exploit this property of the NaN value to your advantage is when you want a fully zero-valued image (even over the blank pixels) based on an already existing image (with same size and world coordinate system settings).
The following command will produce this for you:

@example
$ astarithmetic input.fits nan eq --output=all-zeros.fits
@end example

@noindent
Note that on the command-line you can write NaN in any case (for example @command{NaN}, or @command{NAN} are also acceptable).
Reading NaN as a floating point number in Gnuastro isn't case-sensitive.
@end cartouche


@node Invoking astarithmetic,  , Arithmetic operators, Arithmetic
@subsection Invoking Arithmetic

Arithmetic will do pixel to pixel arithmetic operations on the individual pixels of input data and/or numbers.
For the full list of operators with explanations, please see @ref{Arithmetic operators}.
Any operand that only has a single element (a number, or a single-pixel FITS image) will be read as a number; the rest of the inputs must have the same dimensions.
The general template is:

@example
$ astarithmetic [OPTION...] ASTRdata1 [ASTRdata2] OPERATOR ...
@end example

@noindent
One line examples:

@example
## Calculate (10.32-3.84)^2.7 quietly (will just print 155.329):
$ astarithmetic -q 10.32 3.84 - 2.7 pow

## Inverse the input image (1/pixel):
$ astarithmetic 1 image.fits / --out=inverse.fits

## Multiply each pixel in image by -1:
$ astarithmetic image.fits -1 x --out=negative.fits

## Subtract extension 4 from extension 1 (counting from zero):
$ astarithmetic image.fits image.fits - --out=skysub.fits           \
                --hdu=1 --hdu=4

## Add two images, then divide them by 2 (2 is read as floating point):
## Note that without the '.0', the '2' will be read/used as an integer.
$ astarithmetic image1.fits image2.fits + 2.0 / --out=average.fits

## Use Arithmetic's average operator:
$ astarithmetic image1.fits image2.fits average --out=average.fits

## Calculate the median of three images in three separate extensions:
$ astarithmetic img1.fits img2.fits img3.fits median                \
                -h0 -h1 -h2 --out=median.fits
@end example

Arithmetic's notation for giving operands to operators is fully described in @ref{Reverse polish notation}.
The output dataset is the last remaining operand on the stack.
When the output dataset is a single number, it will be printed on the command-line.
When the output is an array, it will be stored as a file.

The name of the final file can be specified with the @option{--output} option, but if it is not given, Arithmetic will use ``automatic output'' on the name of the first FITS image encountered to generate an output file name, see @ref{Automatic output}.
By default, if the output file already exists, it will be deleted before Arithmetic starts operation.
However, this can be disabled with the @option{--dontdelete} option (see below).
At any point during Arithmetic's operation, you can also write the top operand on the stack to a file, using the @code{tofile} or @code{tofilefree} operators, see @ref{Arithmetic operators}.

By default, the world coordinate system (WCS) information of the output dataset will be taken from the first input image (that contains a WCS) on the command-line.
This can be modified with the @option{--wcsfile} and @option{--wcshdu} options described below.
When the @option{--quiet} option isn't given, the name and extension of the dataset used for the output's WCS is printed on the command-line.

Through operators like those starting with @code{collapse-}, the dimensionality of the inputs may not be the same as the outputs.
By default, when the output is 1D, Arithmetic will write it as a table, not an image/array.
The format of the output table (plain text, FITS ASCII, or FITS binary) can be set with the @option{--tableformat} option (see @ref{Input output options}).
You can disable this feature (write 1D arrays as FITS images/arrays, or to the standard output) with the @option{--onedasimage} or @option{--onedonstdout} options.

See @ref{Common options} for a review of the options in all Gnuastro programs.
Arithmetic just redefines the @option{--hdu} and @option{--dontdelete} options as explained below.

@table @option

@item -h INT/STR
@itemx --hdu INT/STR
The header data unit of the input FITS images, see @ref{Input output options}.
Unlike most options in Gnuastro (which will ultimately only have one value for this option), Arithmetic allows @option{--hdu} to be called multiple times and the value of each invocation will be stored separately (for the unlimited number of input images you would like to use).
Recall that for other programs this (common) option only takes a single value.
So in other programs, if you specify it multiple times on the command-line, only the last value will be used and in the configuration files, it will be ignored if it already has a value.

The order of the values to @option{--hdu} has to be in the same order as input FITS images.
Options are first read from the command-line (from left to right), then top-down in each configuration file, see @ref{Configuration file precedence}.

If the number of HDUs is less than the number of input images, Arithmetic will abort and notify you.
However, if there are more HDUs than FITS images, there is no problem: they will be used in the given order (every time a FITS image comes up on the stack) and the extra HDUs will be ignored in the end.
So there is no problem with having extra HDUs in the configuration files and by default several HDUs with a value of @option{0} are kept in the system-wide configuration file when you install Gnuastro.

@item -g INT/STR
@itemx --globalhdu INT/STR
Use the value given to this option as the HDU of all input FITS files.
This option is very convenient when you have many input files and the dataset of interest is in the same HDU of all the files.
When this option is called, any values given to the @option{--hdu} option (explained above) are ignored and will not be used.

@item -w STR
@itemx --wcsfile STR
Name of the FITS file containing the WCS structure that must be written to the output.
The HDU/extension should be specified with @option{--wcshdu}.

When this option is used, the respective WCS will be read before any processing is done on the command-line and directly used in the final output.
If the given file doesn't have any WCS, then the default WCS (first file on the command-line with WCS) will be used in the output.

This option will mostly be used when the default file (first of the set of inputs) is not the one containing your desired WCS.
But with this option, you can also use Arithmetic to rewrite/change the WCS of an existing FITS dataset from another file:

@example
$ astarithmetic data.fits --wcsfile=other.fits -ofinal.fits
@end example

@item -W STR
@itemx --wcshdu STR
HDU/extension to read the WCS within the file given to @option{--wcsfile}.
For more, see the description of @option{--wcsfile}.

@item -O
@itemx --onedasimage
When the final dataset to be written as output has only one dimension, write it as a FITS image/array.
By default, if the output is 1D, it will be written as a table, see above.

@item -s
@itemx --onedonstdout
When the final dataset to be written as output has only one dimension, print it on the standard output, not in a file.
By default, if the output is 1D, it will be written as a table, see above.

@item -D
@itemx --dontdelete
Don't delete the output file, or files given to the @code{tofile} or @code{tofilefree} operators, if they already exist.
Instead append the desired datasets to the extensions that already exist in the respective file.
Note it doesn't matter if the final output file name is given with the @option{--output} option, or determined automatically.

Arithmetic treats this option differently from its default operation in other Gnuastro programs (see @ref{Input output options}).
If the output file exists, when other Gnuastro programs are called with @option{--dontdelete}, they simply complain and abort.
But when Arithmetic is called with @option{--dontdelete}, it will append the dataset(s) to the existing extension(s) in the file.
@end table

Arithmetic accepts two kinds of input: images and numbers.
Images are considered to be any of the inputs that are file names of a recognized type (see @ref{Arguments}) and have more than one element/pixel.
Numbers on the command-line will be read into the smallest type (see @ref{Numeric data types}) that can store them, so @command{-2} will be read as a @code{char} type (which is signed on most systems and can thus keep negative values), @command{2500} will be read as an @code{unsigned short} (all positive numbers will be read as unsigned), while @code{3.1415926535897} will be read as a @code{double} and @code{3.14} will be read as a @code{float}.
To force a number to be read as float, put a @code{.} after it (possibly followed by a zero for easier readability), or add an @code{f} after it.
Hence while @command{5} will be read as an integer, @command{5.}, @command{5.0} or @command{5f} will be added to the stack as @code{float} (see @ref{Reverse polish notation}).
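
For example, in the first command below both operands are read as integers, so an integer division is done (printing 2); in the second, the @code{.} causes the 5 to be read as a floating point number, so the division is done in floating point and the result (2.5) keeps its fractional part:

@example
$ astarithmetic -q 5 2 /
$ astarithmetic -q 5. 2 /
@end example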

Unless otherwise stated (in @ref{Arithmetic operators}), the operators can deal with multiple numeric data types (see @ref{Numeric data types}).
For example in ``@command{a.fits b.fits +}'', the image types can be @code{long} and @code{float}.
In such cases, C's internal type conversion will be used.
The output type will be set to the higher-ranking type of the two inputs.
Unsigned integer types have smaller ranking than their signed counterparts and floating point types have higher ranking than the integer types.
So the internal C type conversions done in the example above are equivalent to this piece of C:

@example
size_t i;
long a[100];
float b[100], out[100];
for(i=0;i<100;++i) out[i]=a[i]+b[i];
@end example

@noindent
Relying on the default C type conversion significantly speeds up the processing and also requires less RAM (when using very large images).

Some operators can only work on integer types (of any length, for example the bitwise operators) while others only work on floating point types (currently only the @code{pow} operator).
In such cases, if the operand type(s) are different, an error will be printed.
Arithmetic also comes with internal type conversion operators which you can use to convert the data into the appropriate type, see @ref{Arithmetic operators}.

@cindex Options
The hyphen (@command{-}) can be used both to specify options (see @ref{Options}) and also to specify a negative number, which might be necessary in your arithmetic.
In order to enable you to do this, Arithmetic will first parse all the input strings: if the first character after a hyphen is a digit, then that hyphen is temporarily replaced by the vertical tab character (which is not commonly used), so the string will not be interpreted as an option.
After the arguments have been parsed, any vertical tabs are replaced back with a hyphen so these strings can be read as negative numbers.
Therefore, as long as the names of the files you want to work on don't start with a vertical tab followed by a digit, there is no problem.
An important consequence of this implementation is that you should not write negative fractions like this: @command{-.3}; instead, write them as @command{-0.3}.
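
For example, in the command below the @command{-2} is correctly read as a negative number (not an option), and the printed result is -10:

@example
$ astarithmetic -q 5 -2 x
@end example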

@cindex AWK
@cindex GNU AWK
Without any images, Arithmetic will act like a simple calculator and print the resulting output number on the standard output like the first example above.
If you really want such calculator operations on the command-line, AWK (GNU AWK is the most common implementation) is much faster, easier and much more powerful.
For example, the numerical one-line example above can be done with the following command.
In general AWK is a fantastic tool and GNU AWK has a wonderful manual (@url{https://www.gnu.org/software/gawk/manual/}).
So if you often confront situations like this, or have to work with large text tables/catalogs, be sure to check out AWK and simplify your life.

@example
$ echo "" | awk '@{print (10.32-3.84)^2.7@}'
155.329
@end example















@node Convolve, Warp, Arithmetic, Data manipulation
@section Convolve

@cindex Convolution
@cindex Neighborhood
@cindex Weighted average
@cindex Average, weighted
@cindex Kernel, convolution
On an image, convolution can be thought of as a process to blur or remove the contrast in an image.
If you are already familiar with the concept and just want to run Convolve, you can jump to @ref{Convolution kernel} and @ref{Invoking astconvolve} and skip the lengthy introduction on the basic definitions and concepts of convolution.

There are generally two methods to convolve an image.
The first and more intuitive one is in the ``spatial domain'' or using the actual image pixel values, see @ref{Spatial domain convolution}.
The second method is when we manipulate the ``frequency domain'', or work on the magnitudes of the different frequencies that constitute the image, see @ref{Frequency domain and Fourier operations}.
Understanding convolution in the spatial domain is more intuitive and thus recommended if you are just starting to learn about convolution.
Getting a good grasp of the frequency domain is a little more involved and needs some concentration and some mathematical proofs.
However, its reward is a faster operation and, more importantly, a very fundamental understanding of this very important operation.

@cindex Detection
@cindex Atmosphere
@cindex Blur image
@cindex Cosmic rays
@cindex Pixel mixing
@cindex Mixing pixel values
Convolution of an image will generally result in blurring the image because it mixes pixel values.
In other words, if the image has sharp differences in neighboring pixel values@footnote{In astronomy, the only major time we confront such sharp borders in signal are cosmic rays.
All other sources of signal in an image are already blurred by the atmosphere or the optics of the instrument.}, those sharp differences will become smoother.
This has very good consequences in detection of signal in noise for example.
In an actual observed image, the variation in neighboring pixel values due to noise can be very high.
But after convolution, those variations will decrease and we have a better hope in detecting the possible underlying signal.
Another case where convolution is extensively used is in mock images and modeling in general: convolution can be used to simulate the effect of the atmosphere or the optical system on the mock profiles that we create, see @ref{PSF}.
Convolution is a very interesting and important topic in any form of signal analysis (including astronomical observations).
So we have thoroughly@footnote{A mathematician will certainly consider this explanation incomplete and inaccurate.
However, this text is written for an understanding of the operations that are done on a real (not complex), discrete and noisy astronomical image, not any general form of abstract function.} explained the concepts behind it in the following sub-sections.

@menu
* Spatial domain convolution::  Only using the input image values.
* Frequency domain and Fourier operations::  Using frequencies in input.
* Spatial vs. Frequency domain::  When to use which?
* Convolution kernel::          How to specify the convolution kernel.
* Invoking astconvolve::        Options and argument to Convolve.
@end menu

@node Spatial domain convolution, Frequency domain and Fourier operations, Convolve, Convolve
@subsection Spatial domain convolution

The pixels in an input image represent different ``spatial'' positions, therefore when convolution is done only using the actual input pixel values, we name the process as being done in the ``Spatial domain''.
In particular this is in contrast to the ``frequency domain'' that we will discuss later in @ref{Frequency domain and Fourier operations}.
In the spatial domain (and in realistic situations where the image and the convolution kernel don't extend to infinity), convolution is the process of changing the value of one pixel to the @emph{weighted} average of all the pixels in its @emph{neighborhood}.

The `neighborhood' of each pixel (how many pixels in which direction) and
the `weight' function (how much each neighboring pixel should contribute
depending on its position) are given through a second image which is known
as a ``kernel''@footnote{Also known as filter, here we will use `kernel'.}.

@menu
* Convolution process::         More basic explanations.
* Edges in the spatial domain::  Dealing with the edges of an image.
@end menu

@node Convolution process, Edges in the spatial domain, Spatial domain convolution, Spatial domain convolution
@subsubsection Convolution process

In convolution, the kernel specifies the weight and positions of the neighbors of each pixel.
To find the convolved value of a pixel, the central pixel of the kernel is placed on that pixel.
The values of each overlapping pixel in the kernel and image are multiplied by each other and summed for all the kernel pixels.
To have one pixel in the center, the sides of the convolution kernel have to be an odd number.
This process effectively mixes the pixel values of each pixel with its neighbors, resulting in a blurred image compared to the sharper input image.

@cindex Linear spatial filtering
Formally, convolution is one kind of linear `spatial filtering' in image processing texts.
If we assume that the kernel has @mymath{2a+1} and @mymath{2b+1} pixels on each side, the convolved value of a pixel placed at @mymath{x} and @mymath{y} (@mymath{C_{x,y}}) can be calculated from the neighboring pixel values in the input image (@mymath{I}) and the kernel (@mymath{K}) from

@dispmath{C_{x,y}=\sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t}.}

@cindex Correlation
@cindex Convolution
Any pixel coordinate that is outside of the image in the equation above will be considered to be zero.
When the kernel is symmetric about its center the blurred image has the same orientation as the original image.
However, if the kernel is not symmetric, the image will be affected in the opposite manner, this is a natural consequence of the definition of spatial filtering.
In order to avoid this we can rotate the kernel about its center by 180 degrees so the convolved output can have the same original orientation.
Technically speaking, the process is only known as @emph{convolution} if the kernel is flipped; if it isn't, it is known as @emph{correlation}.

To be a weighted average, the sum of the weights (the pixels in the kernel) has to be unity.
This will have the consequence that the convolved image of an object and unconvolved object will have the same brightness (see @ref{Brightness flux magnitude}), which is natural, because convolution should not eat up the object photons, it only disperses them.
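
To make the process more concrete, here is a minimal C sketch of the sum above (only a rough illustration with hypothetical names, not Gnuastro's actual implementation).
The kernel is assumed to be already flipped, so the loop below directly computes the convolved image:

@example
#include <stddef.h>

/* Compute the weighted sum above for every pixel.  Pixels outside
   the image are taken to be zero, as in the equation.  The kernel
   sides must be odd: kw=2a+1 and kh=2b+1. */
void
spatial_convolve(const float *in, float *out, size_t iw, size_t ih,
                 const float *kern, size_t kw, size_t kh)
@{
  long x, y, s, t, xx, yy;
  long a = kw/2, b = kh/2;

  for(y=0; y<(long)ih; ++y)
    for(x=0; x<(long)iw; ++x)
      @{
        float sum = 0.0f;
        for(t=-b; t<=b; ++t)
          for(s=-a; s<=a; ++s)
            @{
              xx = x + s;               /* Neighbor position. */
              yy = y + t;
              if(xx>=0 && xx<(long)iw && yy>=0 && yy<(long)ih)
                sum += kern[(t+b)*kw + (s+a)] * in[yy*iw + xx];
            @}
        out[y*iw + x] = sum;
      @}
@}
@end example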





@node Edges in the spatial domain,  , Convolution process, Spatial domain convolution
@subsubsection Edges in the spatial domain

In purely `linear' spatial filtering (convolution), there are problems on the edges of the input image.
Here we will explain the problem in the spatial domain.
For a discussion of this problem from the frequency domain perspective, see @ref{Edges in the frequency domain}.
The problem originates from the fact that on the edges, in practice@footnote{Because we assumed the overlapping pixels outside the input image have a value of zero.}, the sum of the weights we use on the actual image pixels is not unity.
For example, as discussed above, a profile in the center of an image will have the same brightness before and after convolution.
However, for a partially imaged profile on the edge of the image, the brightness (sum of its pixel fluxes within the image, see @ref{Brightness flux magnitude}) will not be equal; some of the flux is going to be `eaten' by the edges.

If you ran @command{$ make check} on the source files of Gnuastro, you can see this effect by comparing @file{convolve_frequency.fits} with @file{convolve_spatial.fits} in the @file{./tests/} directory.
In the spatial domain, by default, no assumption will be made about pixels outside of the image or any blank pixels in the image.
The problem explained above will also occur on the sides of blank regions (see @ref{Blank pixels}).
The solution to this edge effect problem is only possible in the spatial domain.
For pixels near the edge, we have to abandon the assumption that the sum of the kernel pixels is unity during the convolution process@footnote{Of course, the sum of the kernel pixels still has to be unity in general.}.
So taking @mymath{W} as the sum of the kernel pixels that overlapped with non-blank and in-image pixels, the equation in @ref{Convolution process} will become:

@dispmath{C_{x,y}= { \sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t} \over W}.}

@noindent
In this manner, objects which are near the edges of the image or blank pixels will also have the same brightness (within the image) before and after convolution.
This correction is applied by default in Convolve when convolving in the spatial domain.
To disable it, you can use the @option{--noedgecorrection} option.
In the frequency domain, there is no way to avoid this loss of flux near the edges of the image, see @ref{Edges in the frequency domain} for an interpretation from the frequency domain perspective.

Note that the edge effect discussed here is different from the one in @ref{If convolving afterwards}.
In making mock images we want to simulate a real observation.
In a real observation the images of the galaxies on the sides of the CCD are first blurred by the atmosphere and instrument, then imaged.
So light from the parts of a galaxy which are immediately outside the CCD will affect the parts of the galaxy which are covered by the CCD.
Therefore in modeling the observation, we have to convolve an image that is larger than the input image by exactly half of the convolution kernel.
We can hence conclude that this correction for the edges is only useful when working on actual observed images (where we don't have any more data on the edges) and not in modeling.



@node Frequency domain and Fourier operations, Spatial vs. Frequency domain, Spatial domain convolution, Convolve
@subsection Frequency domain and Fourier operations

Getting a good grip on the frequency domain is usually not an easy job! So we have decided to give the issue a complete review here.
Convolution in the frequency domain (see @ref{Convolution theorem}) heavily relies on the concepts of Fourier transform (@ref{Fourier transform}) and Fourier series (@ref{Fourier series}) so we will be investigating these important operations first.
It has become something of a clich@'e for people to say that the Fourier series ``is a way to represent a (wave-like) function as the sum of simple sine waves'' (from Wikipedia).
However, sines themselves are abstract functions, so this statement really adds no extra layer of physical insight.

Before jumping head-first into the equations and proofs, we will begin with a historical background to see how the importance of frequencies actually roots in our ancient desire to see everything in terms of circles.
A short review of how the complex plane should be interpreted is then given.
Having paved the way with these two basics, we define the Fourier series and subsequently the Fourier transform.
The final aim is to explain discrete Fourier transform, however some very important concepts need to be solidified first: The Dirac comb, convolution theorem and sampling theorem.
So each of these topics are explained in their own separate sub-sub-section before going on to the discrete Fourier transform.
Finally we revisit (after @ref{Edges in the spatial domain}) the problem of convolution on the edges, but this time in the frequency domain.
Understanding the sampling theorem and the discrete Fourier transform is very important in order to be able to pull out valuable science from the discrete image pixels.
Therefore we have included the mathematical proofs and figures so you can have a clear understanding of these very important concepts.

@menu
* Fourier series historical background::  Historical background.
* Circles and the complex plane::  Interpreting complex numbers.
* Fourier series::              Fourier Series definition.
* Fourier transform::           Fourier Transform definition.
* Dirac delta and comb::        Dirac delta and Dirac comb.
* Convolution theorem::         Derivation of Convolution theorem.
* Sampling theorem::            Sampling theorem (Nyquist frequency).
* Discrete Fourier transform::  Derivation and explanation of DFT.
* Fourier operations in two dimensions::  Extend to 2D images.
* Edges in the frequency domain::  Interpretation of edge effects.
@end menu

@node Fourier series historical background, Circles and the complex plane, Frequency domain and Fourier operations, Frequency domain and Fourier operations
@subsubsection Fourier series historical background
Ever since the ancient times, the circle has been (and still is) the simplest shape for abstract comprehension.
All you need is a center point and a radius and you are done.
All the points on a circle are at a fixed distance from the center.
However, the moment you try to connect this elegantly simple and beautiful abstract construct (the circle) with the real world (for example compute its area or its circumference), things become really hard (ideally, impossible) because the irrational number @mymath{\pi} gets involved.

The key to understanding the Fourier series (and thus the Fourier transform and finally the discrete Fourier transform) is our ancient desire to express everything in terms of circles, arguably the simplest and most elegant abstract human construct.
Most people prefer to say the same thing in a more ahistorical manner: to break a function into sines and cosines.
As the term ``ancient'' in the previous sentence implies, Jean-Baptiste Joseph Fourier (1768 -- 1830 A.D.) was not the first person to do this.
The main reason we know this process by his name today is that he came up with an ingenious method to find the necessary coefficients (the radii of) and frequencies (the ``speeds'' of rotation on) the circles for any generic (integrable) function.

@float Figure,epicycle

@c Since these links are long, we had to write them like this so they don't
@c jump out of the text width.
@cindex Qutb al-Din al-Shirazi
@cindex al-Shirazi, Qutb al-Din
@image{gnuastro-figures/epicycles, 15.2cm, , Middle ages epicycles along with two demonstrations of breaking a generic function using epicycles.}
@caption{Epicycles and the Fourier series.
Left: A demonstration of Mercury's epicycles relative to the ``center of the world'' by Qutb al-Din al-Shirazi (1236 -- 1311 A.D.) retrieved @url{https://commons.wikimedia.org/wiki/File:Ghotb2.jpg, from Wikipedia}.
@url{https://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif, Middle} and
Right: How adding more epicycles (or terms in the Fourier series) will approximate functions.
The @url{https://commons.wikimedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif, right} animation is also available.}
@end float

Like most aspects of mathematics, this process of interpreting everything in terms of circles began for astronomical purposes: astronomers noticed that the orbits of Mars and the other outer planets did not appear to be simple circles (as everything in the heavens was supposed to be).
At some point during their orbits, the revolution of these planets would become slower, stop, go back a little (in what is known as retrograde motion) and then continue going forward again.

The correction proposed by Ptolemy (90 -- 168 A.D.) was the most agreed upon.
He put the planets on Epicycles or circles whose center itself rotates on a circle whose center is the earth.
Eventually, as observations became more and more precise, it was necessary to add more and more epicycles in order to explain the complex motions of the planets@footnote{See the Wikipedia page on ``Deferent and epicycle'' for a more complete historical review.}.
@ref{epicycle}(Left) shows an example depiction of the epicycles of Mercury in the late 13th century.

@cindex Aristarchus of Samos
Of course, we now know that if they had dethroned the Earth from its central position in the heavens and allowed the Sun to take its place, everything would become much simpler and closer to the truth.
But there wasn't enough observational evidence for changing the ``professional consensus'' of the time to this radical view suggested by a small minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be one of the first people to suggest the Sun being in the center of the universe.
This approach to science (that the standard model is defined by consensus) and the fact that this consensus might be completely wrong still applies equally well to our models of particle physics and cosmology today.}.
So the pre-Galilean astronomers chose to keep Earth in the center and find a correction to the models (while keeping the heavens a purely ``circular'' order).

The main reason we are giving this historical background (which might appear off-topic) is to show that while such ``approximations'' do work and are very useful for pragmatic purposes (like keeping the calendar through the movement of astronomical bodies), they offer no physical insight.
The astronomers who were involved with the Ptolemaic world view had to add a huge number of epicycles during the centuries after Ptolemy in order to explain more accurate observations.
Finally, the death knell of this world-view was Galileo's observations with his new instrument (the telescope).
So the physical insight, which is what astronomers and physicists are interested in (as opposed to mathematicians and engineers who just like proving, optimizing or calculating!), comes from being creative and not limiting ourselves to such approximations, even when they work.

@node Circles and the complex plane, Fourier series, Fourier series historical background, Frequency domain and Fourier operations
@subsubsection Circles and the complex plane
Before going on to the derivation, it is also useful to review how complex numbers and the complex plane relate to the circles we talked about above.
The two schematics in the middle and right of @ref{epicycle} show how a 1D function of time can be constructed using the 2D real-imaginary plane.
Seeing the animation in Wikipedia will really help in understanding this important concept.
At each point in time, we take the vertical coordinate of the point and use it to find the value of the function at that point in time.
@ref{iandtime} shows this relation with the axes marked.

@cindex Roger Cotes
@cindex Cotes, Roger
@cindex Caspar Wessel
@cindex Wessel, Caspar
@cindex Leonhard Euler
@cindex Euler, Leonhard
@cindex Abraham de Moivre
@cindex de Moivre, Abraham
Leonhard Euler@footnote{Other forms of this equation were known before Euler.
For example, in 1707 A.D. (the year of Euler's birth) Abraham de Moivre (1667 -- 1754 A.D.) showed that @mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}.
In 1714 A.D., Roger Cotes (1682 -- 1716 A.D., a colleague of Newton who proofread the second edition of Principia) showed that @mymath{ix=\ln(\cos{x}+i\sin{x})}.} (1707 -- 1783 A.D.) showed that the complex exponential (@mymath{e^{iv}} where @mymath{v} is real) is periodic and can be written as @mymath{e^{iv}=\cos{v}+i\sin{v}}.
Therefore @mymath{e^{i(v+2\pi)}=e^{iv}}.
Later, Caspar Wessel (mathematician and cartographer, 1745 -- 1818 A.D.) showed how complex numbers can be displayed as vectors on a plane.
Euler's identity might seem counter-intuitive at first, so we will try to explain it geometrically (for deeper physical insight).
On the real-imaginary 2D plane (like the left hand plot in each box of @ref{iandtime}), multiplying a number by @mymath{i} can be interpreted as rotating the point by @mymath{90} degrees (for example the value @mymath{3} on the real axis becomes @mymath{3i} on the imaginary axis).
On the other hand, @mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, therefore, defining @mymath{m\equiv nu}, we get:

@dispmath{e^{u}=\lim_{n\rightarrow\infty}\left(1+{1\over n}\right)^{nu}
               =\lim_{n\rightarrow\infty}\left(1+{u\over nu}\right)^{nu}
               =\lim_{m\rightarrow\infty}\left(1+{u\over m}\right)^{m}}

@noindent
Taking @mymath{u\equiv iv} the result can be written as a generic complex number (a function of @mymath{v}):

@dispmath{e^{iv}=\lim_{m\rightarrow\infty}\left(1+i{v\over
                m}\right)^{m}=a(v)+ib(v)}

@noindent
For @mymath{v=\pi}, a nice geometric animation of going to the limit can be seen @url{https://commons.wikimedia.org/wiki/File:ExpIPi.gif, on Wikipedia}.
We see that @mymath{\lim_{m\rightarrow\infty}a(\pi)=-1}, while @mymath{\lim_{m\rightarrow\infty}b(\pi)=0}, which gives the famous @mymath{e^{i\pi}=-1} equation.
The final value is the real number @mymath{-1}; however, the distance of the polygon points traversed as @mymath{m\rightarrow\infty} is half the circumference of a circle or @mymath{\pi}, showing how @mymath{v} in the equation above can be interpreted as an angle in units of radians and therefore how @mymath{a(v)=\cos(v)} and @mymath{b(v)=\sin(v)}.
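
To see this limit numerically, the short C program below (an illustration for this discussion only, not part of Gnuastro; the file name @file{euler.c} is arbitrary) evaluates @mymath{(1+i\pi/m)^m} for increasing @mymath{m}.
The printed real part approaches @mymath{-1} and the imaginary part approaches @mymath{0}.
You can compile and run it with @command{gcc euler.c -lm && ./a.out}.

@example
#include <stdio.h>
#include <complex.h>
#include <math.h>

int main(void)
@{
  long m;
  double complex z;
  for(m=1; m<=100000; m*=10)
    @{
      z=cpow(1.0 + I*M_PI/m, m);   /* (1+i*pi/m)^m */
      printf("m=%-7ld a(pi)=%-11.8f b(pi)=%.8f\n",
             m, creal(z), cimag(z));
    @}
  return 0;
@}
@end example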

Since @mymath{e^{iv}} is periodic (let's assume with a period of @mymath{T}), it is clearer to write it as @mymath{v\equiv{2{\pi}n\over T}t} (where @mymath{n} is an integer), so @mymath{e^{iv}=e^{i{2{\pi}n\over T}t}}.
The advantage of this notation is that the period (@mymath{T}) is clearly visible and the frequency (@mymath{2{\pi}n \over T}, in units of 1/cycle) is defined through the integer @mymath{n}.
In this notation, @mymath{t} is in units of ``cycle''s.

As we see from the examples in @ref{epicycle} and @ref{iandtime}, for each constituent frequency, we need a respective `magnitude' (the radius of the circle) in order to accurately approximate the desired 1D function.
The concepts of ``period'' and ``frequency'' are relatively easy to grasp when using temporal units like time because this is how we define them in every-day life.
However, in an image (astronomical data), we are dealing with spatial units like distance.
Therefore, by one ``period'' we mean the @emph{distance} at which the signal is identical and frequency is defined as the inverse of that spatial ``period''.
The complex circle of @ref{iandtime} can be thought of as the Moon rotating around the Earth, which is itself rotating around the Sun; the ``Real (signal)'' axis then shows the Moon's position as seen by a distant observer on the Sun as time goes by.
Because time is a scalar (it has no direction), @ref{iandtime} is easier to understand in units of time.
When thinking in spatial units, mentally replace the ``Time (sec)'' axis with ``Distance (meters)''.
Because position along an axis is a vector (it has a direction), visualizing the rotation of the imaginary circle together with the advance along the ``Distance (meters)'' axis is not as simple as with temporal units.

@float Figure,iandtime
@image{gnuastro-figures/iandtime, 15.2cm, , }
@caption{Relation between the real (signal), imaginary (@mymath{i\equiv\sqrt{-1}}) and time axes at two snapshots of time.}
@end float




@node Fourier series, Fourier transform, Circles and the complex plane, Frequency domain and Fourier operations
@subsubsection Fourier series
In astronomical images, our variable (brightness, or number of photo-electrons, or signal to be more generic) is recorded over the 2D spatial surface of a camera pixel.
However to make things easier to understand, here we will assume that the signal is recorded in 1D (assume one row of the 2D image pixels).
Also for this section and the next (@ref{Fourier transform}) we will be talking about the signal before it is digitized or pixelated.
Let's assume that we have the continuous function @mymath{f(l)} which is integrable in the interval @mymath{[l_0, l_0+L]} (always true in practical cases like images).
Take @mymath{l_0} as the position of the first pixel in the assumed row of the image and @mymath{L} as the width of the image along that row.
The units of @mymath{l_0} and @mymath{L} can be in any spatial units (for example meters) or an angular unit (like radians) multiplied by a fixed distance which is more common.

To approximate @mymath{f(l)} over this interval, we need to find a set of frequencies and their corresponding `magnitude's (see @ref{Circles and the complex plane}).
Therefore our aim is to show @mymath{f(l)} as the following sum of periodic functions:


@dispmath{
f(l)=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over L}l} }

@noindent
Note that the different frequencies (@mymath{2{\pi}n/L}, in units of cycles per meters for example) are not arbitrary.
They are all integer multiples of the fundamental frequency of @mymath{\omega_0=2\pi/L}.
Recall that @mymath{L} was the length of the signal we want to model.
Therefore, we see that the smallest possible frequency (or the frequency resolution) in the end depends on the length over which we observed the signal (@mymath{L}).
In the case of each dimension on an image, this is the size of the image in the respective dimension.
The frequencies have been defined in this ``harmonic'' fashion to ensure that the final sum is also periodic outside of the @mymath{[l_0, l_0+L]} interval.
At this point, you might be thinking that the sky is not periodic with the same period as your camera's field of view.
You are absolutely right! The important thing is that since your camera's observed region is the only region we are ``observing'' and will be using, the rest of the sky is irrelevant; so we can safely assume the sky is periodic outside of it.
However, this working assumption will haunt us later in @ref{Edges in the frequency domain}.

The frequencies are thus determined by definition.
So all we need to do is to find the coefficients (@mymath{c_n}), or magnitudes, or radii of the circles for each frequency which is identified with the integer @mymath{n}.
Fourier's approach was to multiply both sides with a fixed term:

@dispmath{
f(l)e^{-i{2{\pi}m\over L}l}=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}(n-m)\over L}l}
}

@noindent
where @mymath{m>0}@footnote{We could have assumed @mymath{m<0} and set the exponential to positive, but this is clearer.}.
We can then integrate both sides over the observation period:

@dispmath{
\int_{l_0}^{l_0+L}f(l)e^{-i{2{\pi}m\over L}l}dl
=\int_{l_0}^{l_0+L}\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}(n-m)\over L}l}dl=\displaystyle\sum_{n=-\infty}^{\infty}c_n\int_{l_0}^{l_0+L}e^{i{2{\pi}(n-m)\over L}l}dl
}

@noindent
Both @mymath{n} and @mymath{m} are integers.
Also, we know that a complex exponential is periodic, so after one period (@mymath{L}) it comes back to its starting point.
Therefore @mymath{\int_{l_0}^{l_0+L}e^{i{2{\pi}k\over L}l}dl=0} for any integer @mymath{k\neq0}.
However, when @mymath{k=0}, this integral becomes: @mymath{\int_{l_0}^{l_0+L}e^0dl=\int_{l_0}^{l_0+L}dl=L}.
Hence, since the integral will be zero for all @mymath{n{\neq}m}, we get:

@dispmath{
\displaystyle\sum_{n=-\infty}^{\infty}c_n\int_{l_0}^{l_0+L}e^{i{2{\pi}(n-m)\over L}l}dl=Lc_m }

@noindent
The origin of the axis is fundamentally an arbitrary position, so let's set it to the start of the image such that @mymath{l_0=0}.
We can thus find the ``magnitude'' of the frequency @mymath{2{\pi}m/L} within @mymath{f(l)} through the relation:

@dispmath{ c_m={1\over L}\int_{0}^{L}f(l)e^{-i{2{\pi}m\over L}l}dl }
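
As a simple sanity check of this relation, consider the input @mymath{f(l)=\cos(2{\pi}l/L)}, which is a single ``circle'' with frequency @mymath{2\pi/L}.
Writing the cosine in exponential form, @mymath{\cos(2{\pi}l/L)=(e^{i{2\pi\over L}l}+e^{-i{2\pi\over L}l})/2}, and inserting it in the integral above, only two terms survive:

@dispmath{c_1=c_{-1}={1\over2}, \quad\quad c_m=0 \quad {\rm when} \quad |m|\neq1}

@noindent
So the relation correctly recovers the single constituent frequency, with its magnitude split between the positive and negative frequencies.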



@node Fourier transform, Dirac delta and comb, Fourier series, Frequency domain and Fourier operations
@subsubsection Fourier transform
In @ref{Fourier series}, we had to assume that the function is periodic outside of the desired interval with a period of @mymath{L}.
Therefore, assuming that @mymath{L\rightarrow\infty} will allow us to work with any function.
However, with this approximation, the fundamental frequency (@mymath{\omega_0}) or the frequency resolution that we discussed in @ref{Fourier series} will tend to zero: @mymath{\omega_0\rightarrow0}.
In the equation to find @mymath{c_m}, every @mymath{m} represented a frequency (multiple of @mymath{\omega_0}) and the integration on @mymath{l} removes the dependence of the right side of the equation on @mymath{l}, making it only a function of @mymath{m} or frequency.
Let's define the following two variables:

@dispmath{\omega{\equiv}m\omega_0={2{\pi}m\over L}}

@dispmath{F(\omega){\equiv}Lc_m}

@noindent
The equation to find the coefficients of each frequency in
@ref{Fourier series} thus becomes:

@dispmath{ F(\omega)=\int_{-\infty}^{\infty}f(l)e^{-i{\omega}l}dl.}

@noindent
The function @mymath{F(\omega)} is thus the @emph{Fourier transform} of @mymath{f(l)} in the frequency domain.
So through this transformation, we can find (analyze) the magnitudes of the constituting frequencies or the value in the frequency space@footnote{As we discussed before, this `magnitude' can be interpreted as the radius of the circle rotating at this frequency in the epicyclic interpretation of the Fourier series, see @ref{epicycle} and @ref{iandtime}.}  of our spatial input function.
The great thing is that we can also do the reverse and later synthesize the input function from its Fourier transform.
Let's do it: with the approximations above, multiply the right side of the definition of the Fourier Series (@ref{Fourier series}) with @mymath{1=L/L=({\omega_0}L)/(2\pi)}:

@dispmath{ f(l)={1\over
2\pi}\displaystyle\sum_{n=-\infty}^{\infty}Lc_ne^{{2{\pi}in\over
L}l}\omega_0={1\over
2\pi}\displaystyle\sum_{n=-\infty}^{\infty}F(\omega)e^{i{\omega}l}\Delta\omega
}


@noindent
To find the rightmost side of this equation, we renamed @mymath{\omega_0} as @mymath{\Delta\omega} because it was our resolution, @mymath{2{\pi}n/L} was written as @mymath{\omega} and finally, @mymath{Lc_n} was written as @mymath{F(\omega)} as we defined above.
Now, as @mymath{L\rightarrow\infty}, @mymath{\Delta\omega\rightarrow0} so we can write:

@dispmath{ f(l)={1\over
  2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i{\omega}l}d\omega }

Together, these two equations provide us with a very powerful set of tools that we can use to process (analyze) and recreate (synthesize) the input signal.
Through the first equation, we can break up our input function into its constituent frequencies and analyze it, hence it is also known as @emph{analysis}.
Using the second equation, we can synthesize or make the input function from the known frequencies and their magnitudes.
Thus it is known as @emph{synthesis}.
Here, we symbolize the Fourier transform (analysis) and its inverse (synthesis) of a function @mymath{f(l)} and its Fourier Transform @mymath{F(\omega)} as @mymath{{\cal F}[f]} and @mymath{{\cal F}^{-1}[F]}.
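
As a concrete example (that will be useful again in @ref{Sampling theorem}), the Fourier transform of a Gaussian is itself a Gaussian: taking @mymath{f(l)=e^{-l^2/2\sigma^2}} and completing the square in the exponent of the analysis integral gives

@dispmath{F(\omega)=\int_{-\infty}^{\infty}e^{-l^2/2\sigma^2}e^{-i{\omega}l}dl=\sigma\sqrt{2\pi}\,e^{-\sigma^2\omega^2/2}}

@noindent
Note how the widths are inversely related: a narrow (small @mymath{\sigma}) spatial profile has a wide Fourier transform and vice versa.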


@node Dirac delta and comb, Convolution theorem, Fourier transform, Frequency domain and Fourier operations
@subsubsection Dirac delta and comb

The Dirac @mymath{\delta} (delta) function (also known as an impulse) is the way that we convert a continuous function into a discrete one.
It is defined to satisfy the following integral:

@dispmath{\int_{-\infty}^{\infty}\delta(l)dl=1}

@noindent
When integrated with another function, it gives that function's value at @mymath{l=0}:

@dispmath{\int_{-\infty}^{\infty}f(l)\delta(l)dl=f(0)}

@noindent
An impulse positioned at another point (say @mymath{l_0}) is written as @mymath{\delta(l-l_0)}:

@dispmath{\int_{-\infty}^{\infty}f(l)\delta(l-l_0)dl=f(l_0)}

@noindent
The Dirac @mymath{\delta} function also operates similarly if we use summations instead of integrals.
The Fourier transform of the delta function is:

@dispmath{{\cal F}[\delta(l)]=\int_{-\infty}^{\infty}\delta(l)e^{-i{\omega}l}dl=e^{-i{\omega}\cdot0}=1}

@dispmath{{\cal F}[\delta(l-l_0)]=\int_{-\infty}^{\infty}\delta(l-l_0)e^{-i{\omega}l}dl=e^{-i{\omega}l_0}}

@noindent
From the definition of the Dirac @mymath{\delta} we can also define a
Dirac comb (@mymath{{\rm III}_P}) or an impulse train with infinite
impulses separated by @mymath{P}:

@dispmath{
{\rm III}_P(l)\equiv\displaystyle\sum_{k=-\infty}^{\infty}\delta(l-kP) }


@noindent
@mymath{P} is chosen to represent ``pixel width'' later in @ref{Sampling theorem}.
Therefore the Dirac comb is periodic with a period of @mymath{P}.
We have intentionally used a different name for the period of the Dirac comb compared to the input signal's length of observation that we showed with @mymath{L} in @ref{Fourier series}.
This difference is highlighted here to avoid confusion later when these two periods are needed together in @ref{Discrete Fourier transform}.
The Fourier transform of the Dirac comb will be necessary in @ref{Sampling theorem}, so let's derive it.
By its definition, it is periodic, with a period of @mymath{P}, so the Fourier coefficients of its Fourier Series (@ref{Fourier series}) can be calculated within one period:

@dispmath{{\rm III}_P=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over
P}l}}

@noindent
We can now find the @mymath{c_n} from @ref{Fourier series}:

@dispmath{
c_n={1\over P}\int_{-P/2}^{P/2}\delta(l)e^{-i{2{\pi}n\over P}l}dl
={1\over P}\quad\quad \rightarrow \quad\quad
{\rm III}_P={1\over P}\displaystyle\sum_{n=-\infty}^{\infty}e^{i{2{\pi}n\over P}l}
}

@noindent
So we can write the Fourier transform of the Dirac comb as:

@dispmath{
{\cal F}[{\rm III}_P]=\int_{-\infty}^{\infty}{\rm III}_Pe^{-i{\omega}l}dl
={1\over P}\displaystyle\sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-i(\omega-{2{\pi}n\over P})l}dl={1\over P}\displaystyle\sum_{n=-\infty}^{\infty}\delta\left(\omega-{2{\pi}n\over P}\right)
}


@noindent
In the last step, we used the fact that the complex exponential is a periodic function, that @mymath{n} is an integer and that as we defined in @ref{Fourier transform}, @mymath{\omega{\equiv}m\omega_0}, where @mymath{m} was an integer.
The integral will be zero for any @mymath{\omega} that is not equal to @mymath{2{\pi}n/P}; a more complete explanation can be seen in @ref{Fourier series}.
Therefore, while in the spatial domain the impulses had a spacing of @mymath{P} (meters for example), in the frequency space the spacing between the different impulses is @mymath{2\pi/P} cycles per meter.


@node Convolution theorem, Sampling theorem, Dirac delta and comb, Frequency domain and Fourier operations
@subsubsection Convolution theorem

The convolution (shown with the @mymath{\ast} operator) of the two
functions @mymath{f(l)} and @mymath{h(l)} is defined as:

@dispmath{
c(l)\equiv[f{\ast}h](l)=\int_{-\infty}^{\infty}f(\tau)h(l-\tau)d\tau
}

@noindent
See @ref{Convolution process} for a more detailed physical (pixel based) interpretation of this definition.
The Fourier transform of convolution (@mymath{C(\omega)}) can be written as:

@dispmath{
  C(\omega)=\int_{-\infty}^{\infty}[f{\ast}h](l)e^{-i{\omega}l}dl=
  \int_{-\infty}^{\infty}f(\tau)\left[\int_{-\infty}^{\infty}h(l-\tau)e^{-i{\omega}l}dl\right]d\tau
}

@noindent
To solve the inner integral, let's define @mymath{s{\equiv}l-\tau}, so
that @mymath{ds=dl} and @mymath{l=s+\tau} then the inner integral
becomes:

@dispmath{
\int_{-\infty}^{\infty}h(l-\tau)e^{-i{\omega}l}dl=
\int_{-\infty}^{\infty}h(s)e^{-i{\omega}(s+\tau)}ds=e^{-i{\omega}\tau}\int_{-\infty}^{\infty}h(s)e^{-i{\omega}s}ds=H(\omega)e^{-i{\omega}\tau}
}

@noindent
where @mymath{H(\omega)} is the Fourier transform of @mymath{h(l)}.
Substituting this result for the inner integral above, we get:

@dispmath{
C(\omega)=H(\omega)\int_{-\infty}^{\infty}f(\tau)e^{-i{\omega}\tau}d\tau=H(\omega)F(\omega)=F(\omega)H(\omega)
}

@noindent
where @mymath{F(\omega)} is the Fourier transform of @mymath{f(l)}.
So multiplying the Fourier transform of two functions individually, we get the Fourier transform of their convolution.
The convolution theorem also holds in the reverse direction: a multiplication in the spatial domain corresponds to a convolution in the frequency domain.
Let's define:

@dispmath{D(\omega){\equiv}F(\omega){\ast}H(\omega)}

@noindent
Applying the inverse Fourier Transform or synthesis equation (@ref{Fourier transform}) to both sides and following the same steps above, we get:

@dispmath{d(l)=f(l)h(l)}

@noindent
where @mymath{d(l)} is the inverse Fourier transform of @mymath{D(\omega)}.
We can therefore re-write the two equations above formally as the convolution theorem:

@dispmath{
  {\cal F}[f{\ast}h]={\cal F}[f]{\cal F}[h]
}

@dispmath{
  {\cal F}[fh]={\cal F}[f]\ast{\cal F}[h]
}
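
If you like to verify such identities numerically, the following C sketch (just an illustration, independent of Gnuastro's actual implementation) performs a cyclic convolution of two small 1D arrays in the spatial domain, then confirms through a naive discrete Fourier transform (see @ref{Discrete Fourier transform}) that the transform of the convolution equals the product of the two transforms, up to round-off error.

@example
#include <stdio.h>
#include <complex.h>
#include <math.h>

#define X 8

/* Naive DFT: F_u = sum_x f_x e^(-2 pi i u x / X). */
static void
dft(const double *f, double complex *F)
@{
  int u, x;
  for(u=0; u<X; ++u)
    @{
      F[u]=0;
      for(x=0; x<X; ++x)
        F[u] += f[x]*cexp(-2*M_PI*I*u*x/X);
    @}
@}

int main(void)
@{
  int u, x, t;
  double f[X]=@{0, 1, 4, 1, 0, 0, 0, 0@};          /* Signal.        */
  double h[X]=@{0.25, 0.5, 0.25, 0, 0, 0, 0, 0@};  /* Kernel, sum=1. */
  double c[X];
  double complex F[X], H[X], C[X];

  /* Cyclic convolution in the spatial domain. */
  for(x=0; x<X; ++x)
    @{
      c[x]=0;
      for(t=0; t<X; ++t)
        c[x] += f[t]*h[(x-t+X)%X];
    @}

  dft(f,F);  dft(h,H);  dft(c,C);

  /* The difference |C_u - F_u*H_u| is ~1e-15 for all u. */
  for(u=0; u<X; ++u)
    printf("u=%d:  %.1e\n", u, cabs(C[u]-F[u]*H[u]));
  return 0;
@}
@end example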

Besides its usefulness in blurring an image by convolving it with a given kernel, the convolution theorem also enables us to do another very useful operation in data analysis: to match the blur (or PSF) between two images taken with different telescopes/cameras or under different atmospheric conditions.
This process is also known as de-convolution.
Let's take @mymath{f(l)} as the image with a narrower PSF (less blurry) and @mymath{c(l)} as the image with a wider PSF which appears more blurred.
Also let's take @mymath{h(l)} to represent the kernel that should be convolved with the sharper image to create the more blurry image.
Above, we proved the relation between these three images through the convolution theorem.
But there, we assumed that @mymath{f(l)} and @mymath{h(l)} are known (given) and the convolved image is desired.

In de-convolution, we have @mymath{f(l)} (the sharper image) and @mymath{[f{\ast}h](l)} (the more blurry image), and we want to find the kernel @mymath{h(l)}.
The solution is a direct result of the convolution theorem:

@dispmath{
  {\cal F}[h]={{\cal F}[f{\ast}h]\over {\cal F}[f]}
  \quad\quad
  {\rm or}
  \quad\quad
  h(l)={\cal F}^{-1}\left[{{\cal F}[f{\ast}h]\over {\cal F}[f]}\right]
}

While this works nicely on paper, it has two practical problems:

@itemize

@item
If @mymath{{\cal F}[f]} has any zero values, the division will be undefined at those frequencies, so the inverse Fourier transform will not produce a usable number there!

@item
If there is significant noise in the image, then the high frequencies of the noise are going to significantly reduce the quality of the final result.

@end itemize

A standard solution to both these problems is the Wiener de-convolution
algorithm@footnote{@url{https://en.wikipedia.org/wiki/Wiener_deconvolution}}.
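
Without going into the details (the Wikipedia page above has the full treatment), the essence of the Wiener approach can be sketched as replacing the direct division with a regularized one:

@dispmath{{\cal F}[h]={{\cal F}[f{\ast}h]\,\overline{{\cal F}[f]}\over |{\cal F}[f]|^2+\epsilon}}

@noindent
Here @mymath{\overline{{\cal F}[f]}} is the complex conjugate and @mymath{\epsilon} is a positive term related to the noise power (frequency-dependent in the full formulation).
When the noise is negligible (@mymath{\epsilon\rightarrow0}), this reduces to the direct division above, while at frequencies where @mymath{{\cal F}[f]} is small or zero, the result no longer blows up.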

@node Sampling theorem, Discrete Fourier transform, Convolution theorem, Frequency domain and Fourier operations
@subsubsection Sampling theorem

Our mathematical functions are continuous, however, our data collecting and measuring tools are discrete.
Here we want to give a mathematical formulation for digitizing the continuous mathematical functions so that later, we can retrieve the continuous function from the digitized recorded input.
Assuming that we have a continuous function @mymath{f(l)}, then we can define @mymath{f_s(l)} as the `sampled' @mymath{f(l)} through the Dirac comb (see @ref{Dirac delta and comb}):

@dispmath{
f_s(l)=f(l){\rm III}_P=\displaystyle\sum_{n=-\infty}^{\infty}f(l)\delta(l-nP)
}

@noindent
The discrete data-element @mymath{f_k} (for example, a pixel in an
image), where @mymath{k} is an integer, can thus be represented as:

@dispmath{f_k=\int_{-\infty}^{\infty}f(l)\delta(l-kP)dl=f(kP)}

Note that in practice, our discrete data points are not found in this fashion.
Each detector pixel (in an image for example) has an area and averages the signal it receives over that area, not a mathematical point as the Dirac @mymath{\delta} function defines.
However, as long as the variation in the signal over one detector pixel is not significant, this can be a good approximation.
Having put this issue to the side, we can now try to find the relation between the Fourier transforms of the un-sampled @mymath{f(l)} and the sampled @mymath{f_s(l)}.
For clearer notation, let's define:

@dispmath{F_s(\omega)\equiv{\cal F}[f_s]}

@dispmath{D(\omega)\equiv{\cal F}[{\rm III}_P]}

@noindent
Then using the Convolution theorem (see @ref{Convolution theorem}),
@mymath{F_s(\omega)} can be written as:

@dispmath{F_s(\omega)={\cal F}[f(l){\rm III}_P]=F(\omega){\ast}D(\omega)}

@noindent
Finally, from the definition of convolution and the Fourier transform
of the Dirac comb (see @ref{Dirac delta and comb}), we get:

@dispmath{
\eqalign{
F_s(\omega) &= \int_{-\infty}^{\infty}F(\mu)D(\omega-\mu)d\mu \cr
&= {1\over P}\displaystyle\sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty}F(\mu)\delta\left(\omega-\mu-{2{\pi}n\over P}\right)d\mu \cr
&= {1\over P}\displaystyle\sum_{n=-\infty}^{\infty}F\left(
   \omega-{2{\pi}n\over P}\right).\cr }
}

In @ref{samplingfreq}(left), @mymath{F(\omega)} was only a simple function.
However, the equation above shows that @mymath{F_s(\omega)} is the superposition of infinitely many copies of @mymath{F(\omega)} that have been shifted, see @ref{samplingfreq}(right).
From the equation, it is clear that the shift in each copy is @mymath{2\pi/P}.

@float Figure,samplingfreq
@image{gnuastro-figures/samplingfreq, 15.2cm, , }
@caption{Sampling causes infinite repetition in the frequency domain.
FT is an abbreviation for `Fourier transform'.
@mymath{\omega_m} represents the maximum frequency present in the input.
@mymath{F(\omega)} is only symmetric on both sides of 0 when the input is real (not complex).
In general @mymath{F(\omega)} is complex and thus cannot be simply plotted like this.
Here we have assumed a real Gaussian @mymath{f(t)} which has produced a Gaussian @mymath{F(\omega)}.}
@end float

The input @mymath{f(l)} can have any distribution of frequencies in it.
In the example of @ref{samplingfreq}(left), the input consisted of a range of frequencies equal to @mymath{\Delta\omega=2\omega_m}.
Fortunately as @ref{samplingfreq}(right) shows, the assumed pixel size (@mymath{P}) we used to sample this hypothetical function was such that @mymath{2\pi/P>\Delta\omega}.
The consequence is that each copy of @mymath{F(\omega)} has become completely separate from the surrounding copies.
Such a digitized (sampled) data set is thus called @emph{over-sampled}.
When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is just small enough to finely separate even the largest frequencies in the input signal and thus it is known as @emph{critically-sampled}.
Finally if @mymath{2\pi/P<\Delta\omega} we are dealing with an @emph{under-sampled} data set.
In an under-sampled data set, the separate copies of @mymath{F(\omega)} are going to overlap, and this will prevent us from recovering the higher constituent frequencies of @mymath{f(l)}.
The effects of under-sampling in an image with high rates of change (for example a brick wall imaged from a distance) can clearly be seen by eye and are known as @emph{aliasing}.
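
For a hands-on feeling of aliasing, the tiny C program below (purely illustrative, not part of Gnuastro) samples a sine wave with 7 cycles per unit length at only 8 points per unit length.
The samples are exactly equal to those of a 1-cycle sine of opposite sign (because @mymath{7=8-1}), so after sampling, the two frequencies are indistinguishable.

@example
#include <stdio.h>
#include <math.h>

int main(void)
@{
  int x, X=8;                /* 8 samples over the unit interval. */
  double l;
  for(x=0; x<X; ++x)
    @{
      l=(double)x/X;
      printf("l=%.3f  sin(2pi*7*l)=%+.4f  -sin(2pi*l)=%+.4f\n",
             l, sin(2*M_PI*7*l), -sin(2*M_PI*l));
    @}
  return 0;
@}
@end example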

When the input @mymath{f(l)} is composed of a finite range of frequencies, @mymath{f(l)} is known as a @emph{band-limited} function.
The example in @ref{samplingfreq}(left) was a nice demonstration of such a case: for all @mymath{\omega<-\omega_m} or @mymath{\omega>\omega_m}, we have @mymath{F(\omega)=0}.
Therefore, when the input function is band-limited and our detector's pixels are placed such that we have critically (or over-) sampled it, then we can exactly reproduce the continuous @mymath{f(l)} from the discrete or digitized samples.
To do that, we just have to isolate one copy of @mymath{F(\omega)} from the infinite copies and take its inverse Fourier transform.

This ability to exactly reproduce the continuous input from the sampled or digitized data leads us to the @emph{sampling theorem} which connects the inherent property of the continuous signal (its maximum frequency) to that of the detector (the spacing between its pixels).
The sampling theorem states that the full (continuous) signal can be recovered when the pixel size (@mymath{P}) and the maximum constituent frequency in the signal (@mymath{\omega_m}) satisfy the following relation@footnote{This equation is also shown in some places without the @mymath{2\pi}.
Whether @mymath{2\pi} is included or not depends on how you define the frequency.}:

@dispmath{{2\pi\over P}>2\omega_m}

@noindent
This relation was first formulated by Harry Nyquist (1889 -- 1976 A.D.) in 1928 and formally proved in 1949 by Claude E. Shannon (1916 -- 2001 A.D.) in what is now known as the Nyquist-Shannon sampling theorem.
In signal processing, the signal is produced (synthesized) by a transmitter and is received and de-coded (analyzed) by a receiver.
Therefore producing a band-limited signal is necessary.
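
Since the pixel size (@mymath{P}) is the property we can actually control when designing or choosing an instrument, it is sometimes more intuitive to rearrange the relation above as a condition on @mymath{P}:

@dispmath{P<{\pi\over\omega_m}}

@noindent
In other words, the pixels have to be finer than half of the shortest period (@mymath{2\pi/\omega_m}) present in the signal.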

In astronomy, we do not produce the shapes of our targets, we are only observers.
Galaxies can have any shape and size, therefore ideally, our signal is not band-limited.
However, since we are always confined to observing through an aperture, the aperture will cause a point source (for which @mymath{\omega_m=\infty}) to be spread over several pixels.
This spread is quantitatively known as the point spread function or PSF.
This spread does blur the image which is undesirable; however, for this analysis it produces the positive outcome that there will be a finite @mymath{\omega_m}.
We should caution, however, that any detector has noise, which adds very high frequency (in the mathematical ideal, infinitely high) variations between the pixels.
However, the coefficients of those noise frequencies are usually exceedingly small.

@node Discrete Fourier transform, Fourier operations in two dimensions, Sampling theorem, Frequency domain and Fourier operations
@subsubsection Discrete Fourier transform

As we have stated several times so far, the input image is a digitized, pixelated or discrete array of values (@mymath{f_s(l)}, see @ref{Sampling theorem}).
The input is not a continuous function.
Also, all our numerical calculations can only be done on a sampled, or discrete, Fourier transform.
Note that @mymath{F_s(\omega)} is not discrete; it is still a continuous function.
One way would be to find the analytic @mymath{F_s(\omega)}, then sample it at any desired ``freq-pixel''@footnote{We are using the made-up word ``freq-pixel'' so they are not confused with spatial domain ``pixels''.} spacing.
However, this process would involve two steps of operations and computers in particular are not too good at analytic operations for the first step.
So here, we will derive a method to directly find the `freq-pixel'ated @mymath{F_s(\omega)} from the pixelated @mymath{f_s(l)}.
Let's start with the definition of the Fourier transform (see @ref{Fourier transform}):

@dispmath{F_s(\omega)=\int_{-\infty}^{\infty}f_s(l)e^{-i{\omega}l}dl }

@noindent
From the definition of @mymath{f_s(l)} (using @mymath{x} instead of @mymath{n}) we get:

@dispmath{
\eqalign{
        F_s(\omega) &= \displaystyle\sum_{x=-\infty}^{\infty}
                       \int_{-\infty}^{\infty}f(l)\delta(l-xP)e^{-i{\omega}l}dl \cr
                    &= \displaystyle\sum_{x=-\infty}^{\infty}
                       f_xe^{-i{\omega}xP}
}
}

@noindent
where @mymath{f_x} is the value of @mymath{f(l)} at point @mymath{x}, or the value of the @mymath{x}th pixel.
As shown in @ref{Sampling theorem} this function is infinitely periodic with a period of @mymath{2\pi/P}.
So all we need is the values within one period: @mymath{0<\omega<2\pi/P}, see @ref{samplingfreq}.
We want @mymath{X} samples within this interval, so the frequency difference between each frequency sample or freq-pixel is @mymath{2\pi/XP}.
Hence we will evaluate the equation above on the points at:

@dispmath{\omega={2{\pi}u\over XP} \quad\quad u = 0, 1, 2, ..., X-1}

@noindent
Therefore the value of the freq-pixel @mymath{u} in the frequency
domain is:

@dispmath{F_u=\displaystyle\sum_{x=0}^{X-1} f_xe^{-i{2{\pi}ux\over X}} }

@noindent
Therefore, we see that for each freq-pixel in the frequency domain, we are going to need all the pixels in the spatial domain@footnote{So even if one pixel is a blank pixel (see @ref{Blank pixels}), all the pixels in the frequency domain will also be blank.}.
If the input (spatial) pixel row is also @mymath{X} pixels wide, then we can exactly recover the @mymath{x}th pixel with the following summation:

@dispmath{f_x={1\over X}\displaystyle\sum_{u=0}^{X-1} F_ue^{i{2{\pi}ux\over X}} }

When the input pixel row (we are still only working on 1D data) has @mymath{X} pixels, then it is @mymath{L=XP} spatial units wide.
@mymath{L}, or the length of the input data was defined in @ref{Fourier series} and @mymath{P} or the space between the pixels in the input was defined in @ref{Dirac delta and comb}.
As we saw in @ref{Sampling theorem}, the input (spatial) pixel spacing (@mymath{P}) specifies the range of frequencies that can be studied and in @ref{Fourier series} we saw that the length of the (spatial) input, (@mymath{L}) determines the resolution (or size of the freq-pixels) in our discrete Fourier transformed image.
Both result from the fact that the frequency domain is the inverse of the spatial domain.
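
To make the analysis and synthesis sums above fully concrete, here is a minimal C sketch (for illustration only; a practical implementation like the one in Convolve uses a fast Fourier transform, which computes the same sums far more efficiently).
It transforms a 4-pixel row and then recovers it to round-off error:

@example
#include <stdio.h>
#include <complex.h>
#include <math.h>

#define X 4

int main(void)
@{
  int u, x;
  double f[X]=@{1, 2, 0, 1@};
  double complex F[X], r;

  /* Analysis: F_u = sum_x f_x e^(-2 pi i u x / X). */
  for(u=0; u<X; ++u)
    @{
      F[u]=0;
      for(x=0; x<X; ++x)
        F[u] += f[x]*cexp(-2*M_PI*I*u*x/X);
    @}

  /* Synthesis: f_x = (1/X) sum_u F_u e^(+2 pi i u x / X). */
  for(x=0; x<X; ++x)
    @{
      r=0;
      for(u=0; u<X; ++u)
        r += F[u]*cexp(2*M_PI*I*u*x/X);
      printf("f_%d=%g  recovered=%g\n", x, f[x], creal(r)/X);
    @}
  return 0;
@}
@end example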

@node Fourier operations in two dimensions, Edges in the frequency domain, Discrete Fourier transform, Frequency domain and Fourier operations
@subsubsection Fourier operations in two dimensions

Once all the relations in the previous sections have been clearly understood in one dimension, it is very easy to generalize them to two or even more dimensions since each dimension is by definition independent.
Previously we defined @mymath{l} as the continuous variable in 1D and the inverse of the period in its direction as @mymath{\omega}.
Let's denote the second spatial direction with @mymath{m} and the inverse of the period in the second dimension with @mymath{\nu}.
The Fourier transform in 2D (see @ref{Fourier transform}) can be written as:

@dispmath{F(\omega, \nu)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
f(l, m)e^{-i({\omega}l+{\nu}m)}dl\,dm}

@dispmath{f(l, m)={1\over 4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
F(\omega, \nu)e^{i({\omega}l+{\nu}m)}d\omega\,d\nu}

The 2D Dirac @mymath{\delta(l,m)} is non-zero only when @mymath{l=m=0}.
The 2D Dirac comb (or Dirac brush! See @ref{Dirac delta and comb}) can be written in units of the 2D Dirac @mymath{\delta}.
For most image detectors, the sides of a pixel are equal in both dimensions, so @mymath{P} remains unchanged for both.
If a specific device with non-square pixels is used, a different value should be used for each dimension.

@dispmath{{\rm III}_P(l, m)\equiv\displaystyle\sum_{j=-\infty}^{\infty}
\displaystyle\sum_{k=-\infty}^{\infty}
\delta(l-jP, m-kP) }

The two-dimensional sampling theorem (see @ref{Sampling theorem}) is thus very easily derived as before, since the frequencies in each dimension are independent.
Let's take @mymath{\nu_m} as the maximum frequency along the second dimension.
Therefore the two-dimensional sampling theorem says that a 2D band-limited function can be recovered when the following conditions hold@footnote{If the pixels are not square, then each dimension has to use the respective pixel size, but since most detectors have square pixels, we assume so here too.}:

@dispmath{ {2\pi\over P} > 2\omega_m \quad\quad\quad {\rm and}
\quad\quad\quad {2\pi\over P} > 2\nu_m}

Finally, let's represent the pixel counter on the second dimension in the spatial and frequency domains with @mymath{y} and @mymath{v} respectively.
Also let's assume that the input image has @mymath{Y} pixels on the second dimension.
Then the two dimensional discrete Fourier transform and its inverse (see @ref{Discrete Fourier transform}) can be written as:

@dispmath{F_{u,v}=\displaystyle\sum_{x=0}^{X-1}\displaystyle\sum_{y=0}^{Y-1}
f_{x,y}e^{-i2\pi({ux\over X}+{vy\over Y})} }

@dispmath{f_{x,y}={1\over XY}\displaystyle\sum_{u=0}^{X-1}\displaystyle\sum_{v=0}^{Y-1}
F_{u,v}e^{i2\pi({ux\over X}+{vy\over Y})} }
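
A practical note on these sums: because the exponential factorizes over the two dimensions, the 2D discrete Fourier transform is @emph{separable}.
It can be computed as @mymath{X}-point 1D transforms along every row, followed by @mymath{Y}-point 1D transforms along every column of the result:

@dispmath{F_{u,v}=\displaystyle\sum_{y=0}^{Y-1}e^{-i2\pi{vy\over Y}}
\left[\displaystyle\sum_{x=0}^{X-1}f_{x,y}e^{-i2\pi{ux\over X}}\right]}

@noindent
This separability is what efficient implementations exploit to reduce a 2D transform to sequences of 1D transforms.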


@node Edges in the frequency domain,  , Fourier operations in two dimensions, Frequency domain and Fourier operations
@subsubsection Edges in the frequency domain

With a good grasp of the frequency domain, we can revisit the problem of convolution on the image edges, see @ref{Edges in the spatial domain}.
When we apply the convolution theorem (see @ref{Convolution theorem}) to convolve an image, we first take the discrete Fourier transforms (DFT, @ref{Discrete Fourier transform}) of both the input image and the kernel, then we multiply them with each other and then take the inverse DFT to construct the convolved image.
Of course, in order to multiply them with each other in the frequency domain, the two images have to be the same size, so let's assume that we pad the kernel (it is usually smaller than the input image) with zero valued pixels in both dimensions so it becomes the same size as the input image before the DFT.

Having multiplied the two DFTs, we now apply the inverse DFT which is where the problem is usually created.
If the DFT of the kernel only had values of 1 (unrealistic condition!) then there would be no problem and the inverse DFT of the multiplication would be identical with the input.
However in real situations, the kernel's DFT has a maximum of 1 (because the sum of the kernel has to be one, see @ref{Convolution process}) and decreases something like the hypothetical profile of @ref{samplingfreq}.
So when multiplied with the input image's DFT, the coefficients or magnitudes (see @ref{Circles and the complex plane}) of the smallest frequency (or the sum of the input image pixels) remains unchanged, while the magnitudes of the higher frequencies are significantly reduced.

As we saw in @ref{Sampling theorem}, the Fourier transform of a discrete input will be infinitely repeated.
In the final inverse DFT step, the input is in the frequency domain (the multiplied DFT of the input image and the kernel DFT).
So the result (our output convolved image) will be infinitely repeated in the spatial domain.
In order to accurately reconstruct the input image, we need all the frequencies with the correct magnitudes.
However, when the magnitudes of higher frequencies are decreased, longer periods (lower frequencies) will dominate in the reconstructed pixel values.
Therefore, when constructing a pixel on the edge of the image, the newly empowered longer periods will look beyond the input image edges and will find the repeated input image there.
So if you convolve an image in this fashion using the convolution theorem, when a bright object exists on one edge of the image, its blurred wings will be present on the other side of the convolved image.
This is often termed circular or cyclic convolution.

So, as long as we are dealing with convolution in the frequency domain, there is nothing we can do about the image edges.
The least we can do is to eliminate the ghosts from the other side of the image.
So, we add zero valued pixels to both the input image and the kernel in both dimensions so the image that will be convolved has a size equal to the sum of both images in each dimension.
Of course, the effect of this zero-padding is that the sides of the output convolved image will become dark.
To put it another way, the edges are going to drain the flux from nearby objects.
But at least it is consistent across all the edges of the image and is predictable.
In Convolve, you can see the padded images when inspecting the frequency domain convolution steps with the @option{--checkfreqsteps} option.


@node Spatial vs. Frequency domain, Convolution kernel, Frequency domain and Fourier operations, Convolve
@subsection Spatial vs. Frequency domain

With the discussions above it might not be clear when to choose the spatial domain and when to choose the frequency domain.
Here we will try to list the benefits of each.

@noindent
The spatial domain,
@itemize
@item
Can correct for the edge effects of convolution, see @ref{Edges in the spatial domain}.

@item
Can operate on blank pixels.

@item
Can be faster than frequency domain when the kernel is small (in terms of the number of pixels on the sides).
@end itemize

@noindent
The frequency domain,
@itemize
@item
Will be much faster when the image and kernel are both large.
@end itemize

@noindent
As a general rule of thumb, when working on an image of modeled profiles use the frequency domain and when working on an image of real (observed) objects use the spatial domain (corrected for the edges).
The reason is that if you apply a frequency domain convolution to a real image, you are going to lose information on the edges, and you generally do not want large kernels in that case.
But when you have made the profiles in the image yourself, you can just make a larger input image and crop the central parts to completely remove the edge effect, see @ref{If convolving afterwards}.
Also due to oversampling, both the kernels and the images can become very large and the speed boost of frequency domain convolution will significantly improve the processing time, see @ref{Oversampling}.

@node Convolution kernel, Invoking astconvolve, Spatial vs. Frequency domain, Convolve
@subsection Convolution kernel

All the programs that need convolution will need to be given a convolution kernel file and extension.
In most cases (other than Convolve, see @ref{Convolve}) the kernel file name is optional.
However, the extension is necessary and must be specified either on the command-line or at least one of the configuration files (see @ref{Configuration files}).
Within Gnuastro, there are two ways to create a kernel image:

@itemize

@item
MakeProfiles: You can use MakeProfiles to create a parametric (based on a radial function) kernel, see @ref{MakeProfiles}.
By default MakeProfiles will make the Gaussian and Moffat profiles in a separate file so you can feed it into any of the programs.

@item
ConvertType: You can write your own desired kernel into a text file table and convert it to a FITS file with ConvertType, see @ref{ConvertType}.
Just be careful that the kernel has to have an odd number of pixels along its two axes, see @ref{Convolution process}.
All the programs that do convolution will normalize the kernel internally, so if you choose this option, you don't have to worry about normalizing the kernel.
Only within Convolve is there an option to disable normalization, see @ref{Invoking astconvolve}.

@end itemize

@noindent
The two options to specify a kernel file name and its extension are shown below.
These are common between all the programs that will do convolution.
@table @option
@item -k STR
@itemx --kernel=STR
The convolution kernel file name.
The @code{BITPIX} (data type) value of this file can be any standard type and it does not necessarily have to be normalized.
Several operations will be done on the kernel image prior to the program's processing:

@itemize

@item
It will be converted to floating point type.

@item
All blank pixels (see @ref{Blank pixels}) will be set to zero.

@item
It will be normalized so the sum of its pixels equals unity.

@item
It will be flipped so that the convolved image has the same orientation as the input.
This is only relevant if the kernel is not circularly symmetric. See @ref{Convolution process}.
@end itemize

@item -U STR
@itemx --khdu=STR
The convolution kernel HDU.
Although the kernel file name is optional, before running any of the programs, they need to have a value for @option{--khdu} even if the default kernel is to be used.
So be sure to keep its value in at least one of the configuration files (see @ref{Configuration files}).
By default, the system configuration file has a value.

@end table


@node Invoking astconvolve,  , Convolution kernel, Convolve
@subsection Invoking Convolve

Convolve an input dataset (2D image or 1D spectrum for example) with a known kernel, or make the kernel necessary to match two PSFs.
The general template for Convolve is:

@example
$ astconvolve [OPTION...] ASTRdata
@end example

@noindent
One line examples:

@example
## Convolve mockimg.fits with psf.fits:
$ astconvolve --kernel=psf.fits mockimg.fits

## Convolve in the spatial domain:
$ astconvolve observedimg.fits --kernel=psf.fits --domain=spatial

## Find the kernel to match sharper and blurry PSF images:
$ astconvolve --kernel=sharperimage.fits --makekernel=10           \
              blurryimage.fits

## Convolve a Spectrum (column 14 in the FITS table below) with a
## custom kernel (the kernel will be normalized internally, so only
## the ratios are important). Sed is used to replace the spaces with
## new line characters so Convolve sees them as values in one column.
$ echo "1 3 10 3 1" | sed 's/ /\n/g' | astconvolve spectra.fits -c14
@end example

The only argument accepted by Convolve is the input file (an image, or a table for 1D datasets like spectra).
Some of the options are the same between Convolve and some other Gnuastro programs.
Therefore, to avoid repetition, they will not be repeated here.
For the full list of options shared by all Gnuastro programs, please see @ref{Common options}.
In particular, in the spatial domain on multi-dimensional datasets, Convolve uses Gnuastro's tessellation to speed up the run, see @ref{Tessellation}.
Common options related to tessellation are described in @ref{Processing options}.

1-dimensional datasets (for example spectra) are only read as columns within a table (see @ref{Tables} for more on how Gnuastro programs read tables).
Note that currently 1D convolution is only implemented in the spatial domain and thus kernel-matching is also not supported.

Here we will only explain the options particular to Convolve.
Run Convolve with @option{--help} in order to see the full list of options Convolve accepts, irrespective of where they are explained in this book.

@table @option

@item --kernelcolumn
Column containing the 1D kernel.
When the input dataset is a 1-dimensional column, and the host table has more than one column, use this option to specify which column should be used.

@item --nokernelflip
Do not flip the kernel after reading it for spatial domain convolution.
This can be useful if the flipping has already been applied to the kernel.

@item --nokernelnorm
Do not normalize the kernel after reading it (by default the kernel is normalized such that the sum of its pixels is unity).

@item -d STR
@itemx --domain=STR
@cindex Discrete Fourier transform
The domain to use for the convolution.
The acceptable values are `@code{spatial}' and `@code{frequency}', corresponding to the respective domain.

For large images, the frequency domain process will be more efficient than convolving in the spatial domain.
However, the edges of the image will lose some flux (see @ref{Edges in the spatial domain}) and the image must not contain any blank pixels, see @ref{Spatial vs. Frequency domain}.


@item --checkfreqsteps
With this option, a file will be created (with the initial name of the output file, suffixed with @file{_freqsteps.fits}) in which all the steps done to arrive at the final convolved image are saved as extensions.
The extensions in order are:

@enumerate
@item
The padded input image.
In frequency domain convolution, the two images (input and kernel) have to be the same size, and both should be padded by zeros.

@item
The padded kernel, similar to the above.

@item
@cindex Phase angle
@cindex Complex numbers
@cindex Numbers, complex
@cindex Fourier spectrum
@cindex Spectrum, Fourier
The Fourier spectrum of the forward Fourier transform of the input image.
Note that the Fourier transform is a complex operation (and not viewable in one image!), so we either have to show the `Fourier spectrum' or the `Phase angle'.
For the complex number @mymath{a+ib}, the Fourier spectrum is defined as @mymath{\sqrt{a^2+b^2}} while the phase angle is defined as @mymath{\arctan(b/a)}.

@item
The Fourier spectrum of the forward Fourier transform of the kernel image.

@item
The Fourier spectrum of the multiplied (through complex arithmetic) transformed images.

@item
@cindex Round-off error
@cindex Floating point round-off error
@cindex Error, floating point round-off
The inverse Fourier transform of the multiplied image.
If you open it, you will see that the convolved image is now in the center, not on one side of the image as it started (in the padded image of the first extension).
If you are working on a mock image which originally had pixels of precisely 0.0, you will notice that in those parts that your convolved profile(s) did not cover, the values are now @mymath{\sim10^{-18}}; this is due to floating-point round-off errors.
Therefore in the final step (when cropping the central parts of the image), we also remove any pixel with a value less than @mymath{10^{-17}}.
@end enumerate

@item --noedgecorrection
Do not correct the edge effect in spatial domain convolution.
For a full discussion, please see @ref{Edges in the spatial domain}.

@item -m INT
@itemx --makekernel=INT
(@option{=INT}) If this option is called, Convolve will do de-convolution (see @ref{Convolution theorem}).
The image specified by the @option{--kernel} option is assumed to be the sharper (less blurry) image and the input image is assumed to be the more blurry image.
The value given to this option will be used as the maximum radius of the kernel.
Any pixel in the final kernel that is larger than this distance from the center will be set to zero.
The two images must have the same size.

Noise contains strong high-frequency components, which can make the higher frequencies of the final result less reliable.
So all the frequencies in the sharper input image which have a spectrum smaller than the value given to the @option{--minsharpspec} option are set to zero and not divided.
This will cause the wings of the final kernel to be flatter than they would ideally be, which will make the final convolved image unreliable if the value given to this option is too high.
Some notes to take into account for a good result:

@itemize
@item
Choose a bright (unsaturated) star and use a region box (with Crop for example, see @ref{Crop}) that is sufficiently above the noise.

@item
Use Warp (see @ref{Warp}) to warp the pixel grid so the star's center is exactly on the center of the central pixel in the cropped image.
This will certainly slightly degrade the result, however, it is necessary.
If there are multiple good stars, you can shift all of them, then normalize them (so the sum of each star's pixels is one) and then take their average to decrease this effect.

@item
The shifting might move the center of the star by one pixel in any direction, so crop the central pixel of the warped image to have a clean image for the de-convolution.
@end itemize

Note that this feature is not yet supported in 1-dimensional datasets.

@item -c
@itemx --minsharpspec
(@option{=FLT}) The minimum frequency spectrum (or coefficient, or pixel value in the frequency domain image) to use in deconvolution, see the explanations under the @option{--makekernel} option for more information.
@end table











@node Warp,  , Convolve, Data manipulation
@section Warp
Image warping is the process of mapping the pixels of one image onto a new pixel grid.
This process is sometimes known as transformation, however following the discussion of Heckbert 1989@footnote{Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image Warping}, Master's thesis at University of California, Berkeley.} we will not be using that term because it can be confused with only pixel value or flux transformations.
Here we specifically mean the pixel grid transformation which is better conveyed with `warp'.

@cindex Gravitational lensing
Image warping is a very important step in astronomy, both in observational data analysis and in simulating modeled images.
In modeling, warping an image is necessary when we want to apply grid transformations to the initial models, for example in simulating gravitational lensing (Radial warpings are not yet included in Warp).
Observational reasons for warping an image are listed below:

@itemize

@cindex Signal to noise ratio
@item
@strong{Noise:} Most scientifically interesting targets are inherently faint (have a very low Signal to noise ratio).
Therefore one short exposure is not enough to detect such objects that are drowned deeply in the noise.
We need multiple exposures so we can add them together and increase the objects' signal to noise ratio.
Keeping the telescope fixed on one field of the sky is practically impossible.
Therefore, very deep observations have to be put onto the same grid before adding them.

@cindex Mosaicing
@cindex Image mosaic
@item
@strong{Resolution:} If we have multiple images of one patch of the sky (hopefully at multiple orientations) we can warp them to the same grid.
The multiple orientations will allow us to `guess' the values of pixels on an output pixel grid that has smaller pixel sizes and thus increase the resolution of the output.
This process of merging multiple observations is known as Mosaicing.

@cindex Cosmic rays
@item
@strong{Cosmic rays:} Cosmic rays can randomly fall on any part of an image.
If they collide vertically with the camera, they are going to create a very sharp and bright spot that in most cases can be separated easily@footnote{All astronomical targets are blurred with the PSF, see @ref{PSF}, however a cosmic ray is not and so it is very sharp (it suddenly stops at one pixel).}.
However, depending on the depth of the camera pixels, and the angle at which a cosmic ray collides with them, it can cover a line-like larger area on the CCD, which makes detection using their sharp edges very hard and error-prone.
One of the best methods to remove cosmic rays is to compare multiple images of the same field.
To do that, we need all the images to be on the same pixel grid.

@cindex Optical distortion
@cindex Distortion, optical
@item
@strong{Optical distortion:} (Not yet included in Warp) In wide field images, the optical distortion that occurs on the outer parts of the focal plane will make accurate comparison of the objects at various locations impossible.
It is therefore necessary to warp the image and correct for those distortions prior to the analysis.

@cindex ACS
@cindex CCD
@cindex WFC3
@cindex Wide Field Camera 3
@cindex Charge-coupled device
@cindex Advanced camera for surveys
@cindex Hubble Space Telescope (HST)
@item
@strong{Detector not on focal plane:} In some cases (like the Hubble Space Telescope ACS and WFC3 cameras), the CCD might be tilted compared to the focal plane, therefore the recorded CCD pixels have to be projected onto the focal plane before further analysis.

@end itemize

@menu
* Warping basics::              Basics of coordinate transformation.
* Merging multiple warpings::   How to merge multiple matrices.
* Resampling::                  Warping an image is re-sampling it.
* Invoking astwarp::            Arguments and options for Warp.
@end menu

@node Warping basics, Merging multiple warpings, Warp, Warp
@subsection Warping basics

@cindex Scaling
@cindex Coordinate transformation
Let's take @mymath{\left[\matrix{u&v}\right]} as the coordinates of a point in the input image and @mymath{\left[\matrix{x&y}\right]} as the coordinates of that same point in the output image@footnote{These can be any real number, we are not necessarily talking about integer pixels here.}.
The simplest form of coordinate transformation (or warping) is the scaling of the coordinates.
Let's assume we want to scale the first axis by @mymath{M} and the second by @mymath{N}; the output coordinates of that point can then be calculated by

@dispmath{\left[\matrix{x\cr y}\right]=
          \left[\matrix{Mu\cr Nv}\right]=
          \left[\matrix{M&0\cr0&N}\right]\left[\matrix{u\cr v}\right]}
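
@noindent
For example with @mymath{M=2} and @mymath{N=3}, the input point @mymath{[\matrix{1&1}]} lands on @mymath{[\matrix{2&3}]} of the output.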

@cindex Matrix
@cindex Multiplication, Matrix
@cindex Rotation of coordinates
@noindent
Note that these are matrix multiplications.
We thus see that we can represent any such grid warping as a matrix.
Another thing we can do with this @mymath{2\times2} matrix is to rotate the output coordinate around the common center of both coordinates.
If the output is rotated anticlockwise by @mymath{\theta} degrees from the positive (to the right) horizontal axis, then the warping matrix should become:

@dispmath{\left[\matrix{x\cr y}\right]=
   \left[\matrix{u\cos\theta-v\sin\theta\cr u\sin\theta+v\cos\theta}\right]=
   \left[\matrix{\cos\theta&-\sin\theta\cr \sin\theta&\cos\theta}\right]
   \left[\matrix{u\cr v}\right]}
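
@noindent
For example for @mymath{\theta=90^\circ} (an anticlockwise quarter turn), @mymath{\cos\theta=0} and @mymath{\sin\theta=1}, so the point @mymath{[\matrix{u&v}]} is mapped onto @mymath{[\matrix{-v&u}]}.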

@cindex Flip coordinates
@noindent
We can also flip the coordinates around the first axis, the second axis and the coordinate center with the following three matrices respectively:

@dispmath{\left[\matrix{1&0\cr0&-1}\right]\quad\quad
          \left[\matrix{-1&0\cr0&1}\right]\quad\quad
          \left[\matrix{-1&0\cr0&-1}\right]}

@cindex Shear
@noindent
The final thing we can do with this definition of a @mymath{2\times2} warping matrix is shear.
If we want the output to be sheared along the first axis with @mymath{A} and along the second with @mymath{B}, then we can use the matrix:

@dispmath{\left[\matrix{1&A\cr B&1}\right]}

@noindent
To have one matrix representing any combination of these steps, you use matrix multiplication, see @ref{Merging multiple warpings}.
So any combination of these transformations can be represented with one @mymath{2\times2} matrix:

@dispmath{\left[\matrix{a&b\cr c&d}\right]}

@cindex Wide Field Camera 3
@cindex Advanced Camera for Surveys
@cindex Hubble Space Telescope (HST)
The transformations above can cover many of the common needs in coordinate transformation.
However, they are limited to mapping the point @mymath{[\matrix{0&0}]} onto @mymath{[\matrix{0&0}]}.
Therefore they are useless if you want to shift (translate) the coordinates relative to the input.
They are also space invariant, meaning that all the coordinates in the image will receive the same transformation.
In other words, all the pixels in the output image will have the same area if placed over the input image.
So transformations which require varying output pixel sizes, like projections, cannot be applied through this @mymath{2\times2} matrix either (for example for the tilted ACS and WFC3 camera detectors on board the Hubble Space Telescope).

@cindex M@"obius, August. F.
@cindex Homogeneous coordinates
@cindex Coordinates, homogeneou
To add these further capabilities, namely translation and projection, we use the homogeneous coordinates.
They were defined about 200 years ago by August Ferdinand M@"obius (1790 -- 1868).
For simplicity, we will only discuss points on a 2D plane and avoid the complexities of higher dimensions.
We cannot provide a deep mathematical introduction here, interested readers can get a more detailed explanation from Wikipedia@footnote{@url{http://en.wikipedia.org/wiki/Homogeneous_coordinates}} and the references therein.

By adding an extra coordinate to a point we can add the flexibility we need.
The point @mymath{[\matrix{x&y}]} can be represented as @mymath{[\matrix{xZ&yZ&Z}]} in homogeneous coordinates.
Therefore multiplying all the coordinates of a point in the homogeneous coordinates with a constant will give the same point.
Put another way, the point @mymath{[\matrix{x&y&Z}]} corresponds to the point @mymath{[\matrix{x/Z&y/Z}]} on the constant @mymath{Z} plane.
Setting @mymath{Z=1}, we get the input image plane, so @mymath{[\matrix{u&v&1}]} corresponds to @mymath{[\matrix{u&v}]}.
With this definition, the transformations above can be generally written as:

@dispmath{\left[\matrix{x\cr y\cr 1}\right]=
          \left[\matrix{a&b&0\cr c&d&0\cr 0&0&1}\right]
          \left[\matrix{u\cr v\cr 1}\right]}

@noindent
@cindex Affine Transformation
@cindex Transformation, affine
We thus acquired 4 extra degrees of freedom.
By giving non-zero values to the zero valued elements of the last column we can have translation (try the matrix multiplication!).
In general, any coordinate transformation that is represented by the matrix below is known as an affine transformation@footnote{@url{http://en.wikipedia.org/wiki/Affine_transformation}}:

@dispmath{\left[\matrix{a&b&c\cr d&e&f\cr 0&0&1}\right]}
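
@noindent
For example, setting @mymath{a=e=1} and @mymath{b=d=0} in the matrix above and carrying out the multiplication shows that @mymath{c} and @mymath{f} are indeed a pure translation:

@dispmath{\left[\matrix{1&0&c\cr 0&1&f\cr 0&0&1}\right]
          \left[\matrix{u\cr v\cr 1}\right]=
          \left[\matrix{u+c\cr v+f\cr 1}\right]}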

@cindex Homography
@cindex Projective transformation
@cindex Transformation, projective
We can now consider translation, but the affine transform is still spatially invariant.
Giving non-zero values to the other two elements in the matrix above gives us the projective transformation or Homography@footnote{@url{http://en.wikipedia.org/wiki/Homography}} which is the most general type of transformation with the @mymath{3\times3} matrix:

@dispmath{\left[\matrix{x'\cr y'\cr w}\right]=
          \left[\matrix{a&b&c\cr d&e&f\cr g&h&1}\right]
          \left[\matrix{u\cr v\cr 1}\right]}

@noindent
So the output coordinates can be calculated from:

@dispmath{x={x' \over w}={au+bv+c \over gu+hv+1}\quad\quad\quad\quad
          y={y' \over w}={du+ev+f \over gu+hv+1}}

Thus with Homography we can change the sizes of the output pixels on the input plane, giving a `perspective'-like visual impression.
This can be quantitatively seen in the two equations above.
When @mymath{g=h=0}, the denominator is independent of @mymath{u} or @mymath{v} and thus we have spatial invariance.
Homography preserves lines at all orientations.
A very useful fact about Homography is that its inverse is also a Homography.
These two properties play a very important role in the implementation of this transformation.
A short but instructive and illustrated review of affine, projective and also bi-linear mappings is provided in Heckbert 1989@footnote{
Paul S. Heckbert. 1989. @emph{Fundamentals of Texture Mapping and Image Warping}, Master's thesis at University of California, Berkeley.
Note that since points are defined as row vectors there, the matrix is the transpose of the one discussed here.}.

@node Merging multiple warpings, Resampling, Warping basics, Warp
@subsection Merging multiple warpings

@cindex Commutative property
@cindex Matrix multiplication
@cindex Multiplication, matrix
@cindex Non-commutative operations
@cindex Operations, non-commutative
In @ref{Warping basics} we saw how a basic warp/transformation can be represented with a matrix.
To make more complex warpings (for example to define a translation, rotation and scale as one warp) the individual matrices have to be multiplied through matrix multiplication.
However matrix multiplication is not commutative, so the order of the set of matrices you use for the multiplication is going to be very important.

Since the coordinates are a column vector that is multiplied from the left, the matrix of the first warping should be placed right-most (so it operates on the input coordinates first).
The matrix of the second warping goes to its left, and so on: each transformation operates on coordinates that have already been warped by those to its right.
As an example of merging a few transforms into one matrix, the multiplication below represents the rotation of an image about a point @mymath{[\matrix{U&V}]} anticlockwise from the horizontal axis by an angle of @mymath{\theta}.
To do this, first we take the origin of the coordinates to @mymath{[\matrix{U&V}]} through translation (the right-most matrix).
Then we rotate the image, and finally we translate the origin back to where it was initially (the left-most matrix).
These three operations can be merged in one operation by calculating the matrix multiplication below:

@dispmath{\left[\matrix{1&0&U\cr 0&1&V\cr 0&0&1}\right]
          \left[\matrix{\cos\theta&-\sin\theta&0\cr
                        \sin\theta&\cos\theta&0\cr 0&0&1}\right]
          \left[\matrix{1&0&-U\cr 0&1&-V\cr 0&0&1}\right]}
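
@noindent
Carrying out this multiplication, the three operations merge into the single matrix below.
As a consistency check, note that this matrix maps the point @mymath{[\matrix{U&V}]} onto itself, as expected for the center of rotation:

@dispmath{\left[\matrix{\cos\theta&-\sin\theta&U-U\cos\theta+V\sin\theta\cr
          \sin\theta&\cos\theta&V-U\sin\theta-V\cos\theta\cr
          0&0&1}\right]}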





@node Resampling, Invoking astwarp, Merging multiple warpings, Warp
@subsection Resampling

@cindex Pixel
@cindex Camera
@cindex Detector
@cindex Sampling
@cindex Resampling
@cindex Pixel mixing
@cindex Photoelectrons
@cindex Picture element
@cindex Mixing pixel values
A digital image is composed of discrete `picture elements' or `pixels'.
When a real image is created from a camera or detector, each pixel's area is used to store the number of photo-electrons that were created when incident photons collided with that pixel's surface area.
This process is called the `sampling' of continuous (analog) data into digital data.
When we change the pixel grid of an image or warp it as we defined in @ref{Warping basics}, we have to `guess' the flux value of each pixel on the new grid based on the old grid, or re-sample it.
Because of the `guessing', any form of warping on the data is going to degrade the image and mix the original pixel values with each other.
So if an analysis can be done on the unwarped image, it is best to leave the image untouched and do the analysis there.
However, as discussed in @ref{Warp}, this is not possible most of the time, so we have to accept the problem and re-sample the image.

@cindex Point pixels
@cindex Interpolation
@cindex Bicubic interpolation
@cindex Signal to noise ratio
@cindex Bi-linear interpolation
@cindex Interpolation, bicubic
@cindex Interpolation, bi-linear
In most applications of image processing, it is sufficient to consider each pixel to be a point and not an area.
This assumption can significantly speed up the processing of an image and also simplify the code.
It is a fine assumption when the signal-to-noise ratio of the objects is very large.
The question then becomes one of interpolation, because you have multiple points distributed over the output image and you want to find the values at the pixel centers.
To increase the accuracy, you might also sample more than one point from within a pixel giving you more points for a more accurate interpolation in the output grid.

@cindex Image edges
@cindex Edges, image
However, interpolation has several problems.
The first one is that it will depend on the type of function you want to assume for the interpolation.
For example you can choose a bi-linear or bi-cubic (the `bi's are for the 2 dimensional nature of the data) interpolation method.
For the latter there are various ways to set the constants@footnote{see @url{http://entropymine.com/imageworsener/bicubic/} for a nice introduction.}.
Such functional interpolation methods can fail seriously on the edges of an image.
They will also need normalization so that the flux of the objects before and after the warping is comparable.
The most basic problem with such techniques is that they are based on a point while a detector pixel is an area.
They add a level of subjectivity to the data (make more assumptions through the functions than the data can handle).
For most applications this is fine, but in scientific applications where detection of the faintest possible galaxies or fainter parts of bright galaxies is our aim, we cannot afford this loss.
Because of these reasons Warp will not use such interpolation techniques.

@cindex Drizzle
@cindex Pixel mixing
@cindex Exact area resampling
Warp will do interpolation based on ``pixel mixing''@footnote{For a graphic demonstration see @url{http://entropymine.com/imageworsener/pixelmixing/}.} or ``area resampling''.
This is also what the Hubble Space Telescope pipeline calls ``Drizzling''@footnote{@url{http://en.wikipedia.org/wiki/Drizzle_(image_processing)}}.
This technique requires no functions and is thus non-parametric.
It is also the closest we can get (making the fewest assumptions) to what actually happens on the detector pixels.
The basic idea is that you reverse-transform each output pixel to find which pixels of the input image it covers, and what fraction of the area of each of those input pixels is covered.
To find the output pixel value, you simply sum the values of the input pixels, each weighted by its overlap fraction (between 0 and 1) with the output pixel.
Through this process, pixels are treated as an area not as a point (which is how detectors create the image), also the brightness (see @ref{Brightness flux magnitude}) of an object will be left completely unchanged.
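
To make the weighting by overlap area concrete, below is a minimal sketch in C for the special case of an axis-aligned scaling of a single row of pixels (this is @emph{not} Warp's implementation, which handles general warps by clipping polygons in two dimensions; the input values here are arbitrary and only for illustration):

@example
/* Minimal sketch of pixel mixing (area resampling) for an
   axis-aligned scaling of one row of pixels.  Compile and run
   with: 'gcc pixmix.c -o pixmix && ./pixmix'. */
#include <stdio.h>

int
main(void)
@{
  size_t i, o;
  double in[6]=@{1, 2, 3, 4, 5, 6@};    /* Input pixel values.    */
  size_t numin=6, numout=4;
  double out[4], scale=(double)numin/numout;

  for(o=0;o<numout;++o)
    @{
      /* Interval covered by this output pixel on the input grid. */
      double start=o*scale, end=(o+1)*scale, sum=0.0;

      for(i=0;i<numin;++i)
        @{
          /* Overlap of input pixel [i, i+1) with that interval. */
          double lo = start > i     ? start : (double)i;
          double hi = end   < i+1.0 ? end   : i+1.0;
          if(hi>lo) sum += in[i] * (hi-lo);
        @}

      out[o]=sum;   /* Weighted sum, not mean: brightness is kept. */
    @}

  for(o=0;o<numout;++o) printf("out[%zu] = %g\n", o, out[o]);
  return 0;
@}
@end example

@noindent
Running it, the four output pixels sum to 21, exactly like the six input pixels: the total brightness is preserved, as described above.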

If there are very high spatial-frequency signals in the image (for example fringes) which vary on a scale smaller than your output image pixel size, pixel mixing can cause aliasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}.
So if the input image has fringes, they have to be calculated and removed separately (which would naturally be done in any astronomical application).
Because of the PSF, no astronomical target has a sharp change in its signal, so this issue is less important for astronomical applications, see @ref{PSF}.


@node Invoking astwarp,  , Resampling, Warp
@subsection Invoking Warp

Warp an input dataset into a new grid.
Any homographic warp (for example scaling, rotation, translation, projection) is acceptable, see @ref{Warping basics} for the definitions.
The general template for invoking Warp is:

@example
$ astwarp [OPTIONS...] InputImage
@end example

@noindent
One line examples:

@example
## Rotate and then scale input image:
$ astwarp --rotate=37.92 --scale=0.8 image.fits

## Scale, then translate the input image:
$ astwarp --scale 8/3 --translate 2.1 image.fits

## Align raw image with celestial coordinates:
$ astwarp --align rawimage.fits --output=aligned.fits

## Directly input a custom warping matrix (using fraction):
$ astwarp --matrix=1/5,0,4/10,0,1/5,4/10,0,0,1 image.fits

## Directly input a custom warping matrix, with final numbers:
$ astwarp --matrix="0.7071,-0.7071,  0.7071,0.7071" image.fits
@end example

If any processing is to be done, Warp can accept one file as input.
As in all Gnuastro programs, when an output is not explicitly set with the @option{--output} option, the output filename will be set automatically based on the operation, see @ref{Automatic output}.
For the full list of general options to all Gnuastro programs (including Warp), please see @ref{Common options}.

To be as accurate as possible, the input image will be read as a 64-bit double precision floating point dataset and all internal processing is done in this format (including the raw output type).
You can use the common @option{--type} option to write the output in any type you want, see @ref{Numeric data types}.

Warps must be specified as command-line options, either as (possibly multiple) modular warpings (for example @option{--rotate}, or @option{--scale}), or directly as a single raw matrix (with @option{--matrix}).
If specified together, the latter (direct matrix) will take precedence and all the modular warpings will be ignored.
Any number of modular warpings can be specified on the command-line and configuration files.
If more than one modular warping is given, all will be merged to create one warping matrix.
As described in @ref{Merging multiple warpings}, matrix multiplication is not commutative, so the order of specifying the modular warpings on the command-line, and/or configuration files makes a difference (see @ref{Configuration file precedence}).
The full list of modular warpings and the other options particular to Warp are described below.

The values to the warping options (modular warpings as well as @option{--matrix}), are a sequence of at least one number.
Each number in this sequence is separated from the next by a comma (@key{,}).
Each number can also be written as a single fraction (with a forward-slash @key{/} between the numerator and denominator).
Space and Tab characters are permitted between any two numbers; just don't forget to quote the whole value.
Otherwise, the value will not be fully passed onto the option.
See the examples above as a demonstration.

@cindex FITS standard
Based on the FITS standard, integer values are assigned to the center of a pixel and the coordinate [1.0, 1.0] is the center of the first pixel (bottom left of the image when viewed in SAO ds9).
So the coordinate center [0.0, 0.0] is half a pixel away (in each axis) from the bottom left vertex of the first pixel.
The resampling that is done in Warp (see @ref{Resampling}) is done on the coordinate axes and thus directly depends on the coordinate center.
In some situations this is fine, for example when rotating/aligning a real image, where all the edge pixels will be similarly affected.
But in other situations (for example when scaling an over-sampled mock image to its intended resolution), this is not desired: you want the center of the coordinates to be on the corner of the pixel.
In such cases, you can use the @option{--centeroncorner} option which will shift the center by @mymath{0.5} before the main warp, then shift it back by @mymath{-0.5} after the main warp, see below.
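
For example, the command below (an illustrative command, assuming a 5-times over-sampled mock image in @file{mock.fits}) scales the image down to its intended resolution while keeping the coordinate center on the pixel corner:

@example
$ astwarp --scale=1/5 --centeroncorner mock.fits
@end example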


@table @option

@item -a
@itemx --align
Align the image and celestial (WCS) axes given in the input.
After it, the vertical image direction (when viewed in SAO ds9) corresponds to the declination and the horizontal axis is the inverse of the Right Ascension (RA).
The inverse of the RA is chosen so the image can correspond to what you would actually see on the sky and is common in most survey images.

Align is internally treated just like a rotation (@option{--rotate}), but uses the input image's WCS to find the rotation angle.
Thus, if you have rotated the image before calling @option{--align}, you might get unexpected results (because the rotation is defined on the original WCS).

@item -r FLT
@itemx --rotate=FLT
Rotate the input image by the given angle in degrees: @mymath{\theta} in @ref{Warping basics}.
Note that commonly, the WCS structure of the image is set such that the RA is the inverse of the image horizontal axis which increases towards the right in the FITS standard and as viewed by SAO ds9.
So the default center for rotation is on the right of the image.
If you want to rotate about other points, you have to translate the warping center first (with @option{--translate}), then apply your rotation, and then return the center back to the original position (with another call to @option{--translate}), see @ref{Merging multiple warpings}.

@item -s FLT[,FLT]
@itemx --scale=FLT[,FLT]
Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in @ref{Warping basics}.
If only one value is given, then both image axes will be scaled with the given value.
When two values are given (separated by a comma), the first will be used to scale the first axis and the second will be used for the second axis.
If you only need to scale one axis, use @option{1} for the axis you don't need to scale.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.

@item -f FLT[,FLT]
@itemx --flip=FLT[,FLT]
Flip the input image around the given axis(es).
If only one value is given, then both image axes are flipped.
When two values are given (separated by a comma), you can choose which axis to flip over.
@option{--flip} only takes values @code{0} (for no flip), or @code{1} (for a flip).
Hence, if you want to flip by the second axis only, use @option{--flip=0,1}.

@item -e FLT[,FLT]
@itemx --shear=FLT[,FLT]
Shear the input image by the given value(s): @mymath{A} and @mymath{B} in @ref{Warping basics}.
If only one value is given, then both image axes will be sheared with the given value.
When two values are given (separated by a comma), the first will be used to shear the first axis and the second will be used for the second axis.
If you only need to shear along one axis, use @option{0} for the axis that must be untouched.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.

@item -t FLT[,FLT]
@itemx --translate=FLT[,FLT]
Translate (move the center of coordinates) the input image by the given value(s): @mymath{c} and @mymath{f} in @ref{Warping basics}.
If only one value is given, then both image axes will be translated by the given value.
When two values are given (separated by a comma), the first will be used to translate the first axis and the second will be used for the second axis.
If you only need to translate along one axis, use @option{0} for the axis that must be untouched.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.

@item -p FLT[,FLT]
@itemx --project=FLT[,FLT]
Apply a projection to the input image by the given value(s): @mymath{g} and @mymath{h} in @ref{Warping basics}.
If only one value is given, then projection will apply to both axes with the given value.
When two values are given (separated by a comma), the first will be used to project the first axis and the second will be used for the second axis.
If you only need to project along one axis, use @option{0} for the axis that must be untouched.
The value(s) can also be written (on the command-line or in configuration files) as a fraction.

@item -m STR
@itemx --matrix=STR
The warp/transformation matrix.
All the elements in this matrix must be separated by comma (@key{,}) characters and, as described above, you can also use fractions (a forward-slash between two numbers).
The transformation matrix can be either a 2 by 2 (4 numbers), or a 3 by 3 (9 numbers) array.
In the former case (if a 2 by 2 matrix is given), then it is put into a 3 by 3 matrix (see @ref{Warping basics}).

@cindex NaN
The determinant of the matrix has to be non-zero and it must not contain any non-number values (for example infinities or NaNs).
The elements of the matrix have to be written row by row.
So for the general Homography matrix of @ref{Warping basics}, it should be called with @command{--matrix=a,b,c,d,e,f,g,h,1}.

The raw matrix takes precedence over all the modular warping options listed above, so if it is called with any number of modular warps, the latter are ignored.

@item -c
@itemx --centeroncorner
Put the center of coordinates on the corner of the first (bottom-left when viewed in SAO ds9) pixel.
This option is applied after the final warping matrix has been finalized: either through modular warpings or the raw matrix.
See the explanation above for coordinates in the FITS standard to better understand this option and when it should be used.

@item --hstartwcs=INT
Specify the first header keyword number (line) that should be used to read the WCS information, see the full explanation in @ref{Invoking astcrop}.

@item --hendwcs=INT
Specify the last header keyword number (line) that should be used to read the WCS information, see the full explanation in @ref{Invoking astcrop}.

@item -k
@itemx --keepwcs
@cindex WCSLIB
@cindex World Coordinate System
Do not correct the WCS information of the input image and save it untouched to the output image.
By default the WCS (World Coordinate System) information of the input image is going to be corrected in the output image so the objects in the image are at the same WCS coordinates.
But in some cases it might be useful to keep it unchanged (for example to correct alignments).

@item -C FLT
@itemx --coveredfrac=FLT
Depending on the warp, the output pixels that cover pixels on the edge of the input image, or blank pixels in the input image, are not going to be fully covered by input data.
With this option, you can specify the acceptable covered fraction of such pixels (any value between 0 and 1).
If you only want output pixels that are fully covered by the input image area (and are not blank), then you can set @option{--coveredfrac=1}.
Alternatively, a value of @code{0} will keep output pixels that are even infinitesimally covered by the input (so the sum of the pixels in the input and output images will be the same).

@end table
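
As an example of combining modular warpings on the command-line (an illustrative command following @ref{Merging multiple warpings}, assuming the modular warpings are applied in the order they are given; the point of rotation @mymath{[\matrix{123&456}]} and the file name are arbitrary):

@example
## Rotate the image by 20 degrees about the pixel at [123, 456]:
$ astwarp --translate=-123,-456 --rotate=20 \
          --translate=123,456 image.fits
@end example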




















@node Data analysis, Modeling and fittings, Data manipulation, Top
@chapter Data analysis

Astronomical datasets (images or tables) contain very valuable information; the tools in this section can help in analyzing, extracting, and quantifying that information.
For example getting general or specific statistics of the dataset (with @ref{Statistics}), detecting signal within a noisy dataset (with @ref{NoiseChisel}), or creating a catalog from an input dataset (with @ref{MakeCatalog}).

@menu
* Statistics::                  Calculate dataset statistics.
* NoiseChisel::                 Detect objects in an image.
* Segment::                     Segment detections based on signal structure.
* MakeCatalog::                 Catalog from input and labeled images.
* Match::                       Match two datasets.
@end menu

@node Statistics, NoiseChisel, Data analysis, Data analysis
@section Statistics

The distribution of values in a dataset can provide valuable information about it.
For example, in an image, if the pixel distribution is positively skewed, we can tell that there is significant signal in the image.
If the distribution is roughly symmetric, we can tell that there is no significant signal in it.
In a table, when we need to select a sample of objects, it is important to first get a general view of the whole sample.

On the other hand, you might need to know certain statistical parameters of the dataset.
For example, if we have run a detection algorithm on an image, and we want to see how accurate it was, one method is to calculate the average of the undetected pixels and see how reasonable it is (if detection is done correctly, the average of undetected pixels should be approximately equal to the background value, see @ref{Sky value}).
In a table, you might have calculated the magnitudes of a certain class of objects and want to get some general characteristics of the distribution immediately on the command-line (very fast!), to possibly change some parameters.
The Statistics program is designed for such situations.

@menu
* Histogram and Cumulative Frequency Plot::  Basic definitions.
* 2D Histograms::               Plotting the distribution of two variables.
* Sigma clipping::              Definition of @mymath{\sigma}-clipping.
* Sky value::                   Definition and derivation of the Sky value.
* Invoking aststatistics::      Arguments and options to Statistics.
@end menu


@node Histogram and Cumulative Frequency Plot, 2D Histograms, Statistics, Statistics
@subsection Histogram and Cumulative Frequency Plot

@cindex Histogram
Histograms and the cumulative frequency plots are both used to visually study the distribution of a dataset.
A histogram shows the number of data points which lie within pre-defined intervals (bins).
So on the horizontal axis we have the bin centers and on the vertical, the number of points that are in that bin.
You can use it to get a general view of the distribution: Which values have been repeated the most? How close/far are the most significant bins? Are there more values in the larger part of the range of the dataset, or in the lower part? Many other important properties of the dataset can similarly be deduced from a visual inspection of its histogram.
In the Statistics program, the histogram can be either output to a table to plot with your favorite plotting program@footnote{
We recommend @url{http://pgfplots.sourceforge.net/,PGFPlots} which generates your plots directly within @TeX{} (the same tool that generates your document).},
or it can be shown with ASCII characters on the command-line, which is very crude, but good enough for a fast and on-the-go analysis, see the example in @ref{Invoking aststatistics}.

@cindex Intervals, histogram
@cindex Bin width, histogram
@cindex Normalizing histogram
@cindex Probability density function
The width of the bins is the only necessary parameter for a histogram.
In the limiting case that the bin-widths tend to zero (while the number of points in the dataset tends to infinity), the histogram will tend to the @url{https://en.wikipedia.org/wiki/Probability_density_function, probability density function} of the distribution.
When the absolute number of points in each bin is not relevant to the study (only the shape of the histogram is important), you can @emph{normalize} the histogram so that, like the probability density function, the sum of all its bins is one.

@cindex Cumulative Frequency Plot
In the cumulative frequency plot of a distribution, the horizontal axis is the sorted data values and the vertical axis is the index of each value in the sorted distribution.
Unlike a histogram, a cumulative frequency plot does not involve intervals or bins.
This makes it less prone to any sort of bias or error that a given bin-width would produce in the analysis.
When a large number of the data points have roughly the same value, the cumulative frequency plot becomes steep in that vicinity: on the horizontal axis there is little change, while on the vertical axis the indexes constantly increase.
Normalizing a cumulative frequency plot means dividing each index (vertical axis) by the total number of data points (or the last value).

Unlike the histogram which has a limited number of bins, ideally the cumulative frequency plot should have one point for every data element.
Even in small datasets (for example a @mymath{200\times200} image) this will result in an unreasonably large number of points to plot (40000)! As a result, for practical reasons, it is common to only store its value on a certain number of points (intervals) in the input range rather than the whole dataset, so you should determine the number of bins you want when asking for a cumulative frequency plot.
In Gnuastro (and thus the Statistics program), the number reported for each bin is the total number of data points until the larger interval value for that bin.
You can see an example histogram and cumulative frequency plot of a single dataset under the @option{--asciihist} and @option{--asciicfp} options of @ref{Invoking aststatistics}.

So as a summary, both the histogram and cumulative frequency plot in Statistics will work with bins.
Within each bin/interval, the lower value is considered to be within the bin (it is inclusive), but its larger value is not (it is exclusive).
Formally, an interval/bin between a and b is represented by [a, b).
When the over-all range of the dataset is specified (with the @option{--greaterequal}, @option{--lessthan}, or @option{--qrange} options), the acceptable values of the dataset are also defined with a similar inclusive-exclusive manner.
But when the range is determined from the actual dataset (none of these options is called), the last element in the dataset is included in the last bin's count.
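
For example, with the commands below (assuming an input @file{image.fits}), you can quickly inspect the histogram and the cumulative frequency plot of its pixel distribution directly on the command-line:

@example
$ aststatistics image.fits --asciihist
$ aststatistics image.fits --asciicfp
@end example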

@node 2D Histograms, Sigma clipping, Histogram and Cumulative Frequency Plot, Statistics
@subsection 2D Histograms
@cindex 2D histogram
@cindex Histogram, 2D
In @ref{Histogram and Cumulative Frequency Plot} the concept of histograms was introduced for a single dataset.
But they are only useful for viewing the distribution of a single variable (column in a table).
In many contexts, the distribution of two variables in relation to each other may be of interest.
For example the color-magnitude diagrams in astronomy, where the horizontal axis is the luminosity or magnitude of an object, and the vertical axis is the color.
Scatter plots are useful to see these relations between the objects of interest when the number of the objects is small.

As the density of points in the scatter plot increases, the points fall over each other, making one large connected region that hides potentially interesting behaviors/correlations in the densest regions.
This is where 2D histograms can become very useful.
A 2D histogram is composed of 2D bins (boxes or pixels), just as a 1D histogram consists of 1D bins (lines).
The number of points falling within each box/pixel will then be the value of that box.
Added with a color-bar, you can now clearly see the distribution independent of the density of points (for example you can even visualize it in log-scale if you want).

Gnuastro's Statistics program has the @option{--histogram2d} option for this task.
It takes a single argument (either @code{table} or @code{image}) that specifies the format of the output 2D histogram.
The two formats will be reviewed separately in the sub-sections below.
But let's start with the generalities that are common to both (related to the input, not the output).

You can specify the two columns to be shown using the @option{--column} (or @option{-c}) option.
So if you want to plot the color-magnitude diagram from a table with the @code{MAG-R} column on the horizontal axis and @code{COLOR-G-R} on the vertical, you can use @option{--column=MAG-R,COLOR-G-R}.
The number of bins along each dimension can be set with @option{--numbins} (for first input column) and @option{--numbins2} (for second input column).

Without specifying any range, the full range of values will be used in each dimension.
If you only want to focus on a certain interval of the values in the columns in any dimension you can use the @option{--greaterequal} and @option{--lessthan} options to limit the values along the first/horizontal dimension and @option{--greaterequal2} and @option{--lessthan2} options for the second/vertical dimension.

@menu
* 2D histogram as a table::     Format and usage in table format.
* 2D histogram as an image::    Format and usage in image format
@end menu

@node 2D histogram as a table, 2D histogram as an image, 2D Histograms, 2D Histograms
@subsubsection 2D histogram as a table

When called with the @option{--histogram2d=table} option, Statistics will output a table file with three columns holding the information of the 2D histogram boxes.
If you asked for @option{--numbins=N} and @option{--numbins2=M}, all three columns will have @mymath{M\times N} rows (one row for every box/pixel of the 2D histogram).
The first and second columns are the position of the box along the first and second dimensions.
The third column has the number of input points that fall within that box/pixel.

For example, you can make high-quality plots within your paper (using the same @LaTeX{} engine, thus blending very nicely with your text) using @url{https://ctan.org/pkg/pgfplots, PGFPlots}.
Below you can see one such minimal example.
Using your favorite text editor, save it into a file, make the two small corrections in it, then run the commands shown at the top.
This assumes that you have @LaTeX{} installed; if not, see the respective section in @ref{Bootstrapping dependencies} for the steps to install a minimally sufficient @LaTeX{} package on your system.

The two parts that need to be corrected are marked with '@code{%% <--}': the first one (@code{XXXXXXXXX}) should be replaced by the value to the @option{--numbins} option which is the number of bins along the first dimension.
The second one (@code{FILE.txt}) should be replaced with the name of the file generated by Statistics.

@example
%% Replace 'XXXXXXXXX' with your selected number of bins in the first
%% dimension.
%%
%% Then run these commands to build the plot in a LaTeX command.
%%    mkdir tikz
%%    pdflatex -shell-escape -halt-on-error plot.tex
\documentclass@{article@}

%% Load PGFPlots and set it to build the figure separately in a 'tikz'
%% directory (which has to exist before LaTeX is run). This
%% "externalization" is very useful to include the commands of multiple
%% plots in the middle of your paper/report, but also have the plots
%% separately to use in slides or other scenarios.
\usepackage@{pgfplots@}
\usetikzlibrary@{external@}
\tikzexternalize
\tikzsetexternalprefix@{tikz/@}

%% Start the document
\begin@{document@}

  You can actually write a full paper here and include many figures!
  Feel free to change this text.

  %% Define the colormap.
  \pgfplotsset@{
    /pgfplots/colormap=@{coldredux@}@{
      [1cm]
      rgb255(0cm)=(255,255,255)
      rgb255(2cm)=(0,192,255)
      rgb255(4cm)=(0,0,255)
      rgb255(6cm)=(0,0,0)
    @}
  @}

  %% Draw the plot.
  \begin@{tikzpicture@}
    \small
    \begin@{axis@}[
      width=\linewidth,
      view=@{0@}@{90@},
      colorbar horizontal,
      xlabel=X axis,
      ylabel=Y axis,
      ylabel shift=-0.1cm,
      colorbar style=@{at=@{(0,1.01)@}, anchor=south west,
                      xticklabel pos=upper@},
    ]
      \addplot3[
        surf,
        shader=flat corner,
        mesh/ordering=rowwise,
        mesh/cols=XXXXXXXXX,     %% <-- Number of bins in 1st column.
      ] file @{FILE.txt@};         %% <-- Name of aststatistics output.

  \end@{axis@}
\end@{tikzpicture@}

\end@{document@}
@end example

@node 2D histogram as an image,  , 2D histogram as a table, 2D Histograms
@subsubsection 2D histogram as an image

When called with the @option{--histogram2d=image} option, Statistics will output a FITS file with an image/array extension.
If you asked for @option{--numbins=N} and @option{--numbins2=M} the image will have a size of @mymath{N\times M} pixels (one pixel per 2D bin).
Also, the FITS image will have a linear WCS that is scaled to the 2D bin size along each dimension.
So when you hover your mouse over any part of the image with a FITS viewer (for example SAO DS9), besides the number of points in each pixel, you can also directly see the ``coordinates'' of that pixel along the two axes.
You can also use the optimized and fast FITS viewer features for many aspects of visually inspecting the distributions (which we won't go into further).

@cindex Color-magnitude diagram
@cindex Diagram, Color-magnitude
For example let's assume you want to derive the color-magnitude diagram (CMD) of the @url{http://uvudf.ipac.caltech.edu, UVUDF survey}.
You can run the first command below to download the table with magnitudes of objects in many filters and run the second command to see general column metadata after it is downloaded.

@example
$ wget http://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz
$ asttable uvudf_rafelski_2015.fits.gz -i
@end example

Let's assume you want the color between the @code{F606W} and @code{F775W} filters (roughly corresponding to the g and r filters in ground-based imaging).
However, the original table doesn't have color columns (there would be too many combinations!).
Therefore you can use the @ref{Column arithmetic} feature of Gnuastro's Table program for deriving a new table with the @code{F775W} magnitude in one column and the difference between the @code{F606W} and @code{F775W} on the other column.
With the second command, you can see the actual values if you like.

@example
$ asttable uvudf_rafelski_2015.fits.gz -cMAG_F775W \
           -c'arith MAG_F606W MAG_F775W -' \
           --colmetadata=ARITH_1,F606W-F775W,"AB mag" -ocmd.fits
$ asttable cmd.fits
@end example

@noindent
You can now construct your 2D histogram as a @mymath{100\times100} pixel FITS image with this command (assuming you want @code{F775W} magnitudes between 22 and 30, colors between -1 and 3 and 100 bins in each dimension).
Note that without the @option{--manualbinrange} option, the range of each axis will be determined by the values within the columns (which may be larger or smaller than your desired range).

@example
$ aststatistics cmd.fits -cMAG_F775W,F606W-F775W --histogram2d=image \
              --numbins=100  --greaterequal=22  --lessthan=30 \
              --numbins2=100 --greaterequal2=-1 --lessthan2=3 \
              --manualbinrange --output=cmd-2d-hist.fits
@end example

@noindent
If you have SAO DS9, you can now open this FITS file as a normal FITS image, for example with the command below.
Try hovering/zooming over the pixels: you will see not only the number of objects in the UVUDF catalog that fall in each bin, but also the @code{F775W} magnitude and color that each pixel corresponds to.

@example
$ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit
@end example

@noindent
With the first command below, you can activate the grid feature of DS9 to actually see the coordinate grid, as well as values on each line.
With the second command, DS9 will even read the labels of the axes and use them to generate an almost publication-ready plot.

@example
$ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit -grid yes
$ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit -grid yes \
      -grid type publication
@end example

If you are happy with the grid, coloring and so on, you can also use DS9 to save this as a JPEG image to directly use in your documents/slides with these extra DS9 options (DS9 will write the image to @file{cmd-2d.jpeg} and quit immediately afterwards):

@example
$ ds9 cmd-2d-hist.fits -cmap sls -zoom 4 -grid yes \
      -grid type publication -saveimage cmd-2d.jpeg -quit
@end example

@cindex PGFPlots (@LaTeX{} package)
This is good for a fast progress update.
But for your paper or more official report, you want to show something with higher quality.
For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same font as your text, sharp grids and many other elegant/powerful features (like over-plotting interesting points and lines).
To load the 2D histogram into PGFPlots, you first need to convert the FITS image into a more standard format, for example PDF.
We'll use Gnuastro's @ref{ConvertType} for this, and use the @code{sls-inverse} color map (which will map the pixels with a value of zero to white):

@example
$ astconvertt cmd-2d-hist.fits --colormap=sls-inverse \
              --borderwidth=0 -ocmd-2d-hist.pdf
@end example

@noindent
Below you can see a minimally working example of how to add axis numbers, labels and a grid to the PDF generated above.
Copy and paste the @LaTeX{} code below into a plain-text file called @file{cmd-report.tex}.
Notice the @code{xmin}, @code{xmax}, @code{ymin}, @code{ymax} values and how they are the same as the range specified above.

@example
\documentclass@{article@}
\usepackage@{pgfplots@}
\dimendef\prevdepth=0
\begin@{document@}

You can write all you want here...

\begin@{tikzpicture@}
  \begin@{axis@}[
      enlargelimits=false,
      grid,
      axis on top,
      width=\linewidth,
      height=\linewidth,
      xlabel=@{Magnitude (F775W)@},
      ylabel=@{Color (F606W-F775W)@}]

    \addplot graphics[xmin=22, xmax=30, ymin=-1, ymax=3]
             @{cmd-2d-hist.pdf@};
  \end@{axis@}
\end@{tikzpicture@}
\end@{document@}
@end example

@noindent
Run this command to build your PDF (assuming you have @LaTeX{} and PGFPlots).

@example
$ pdflatex cmd-report.tex
@end example

The improved quality, blending in with the text, vector-graphics resolution and other features make this plot pleasing to the eye, and let your readers focus on the main point of your scientific argument.
PGFPlots can also build the PDF of the plot separately from the rest of the paper/report, see @ref{2D histogram as a table} for the necessary changes in the preamble.

@node  Sigma clipping, Sky value, 2D Histograms, Statistics
@subsection Sigma clipping

Let's assume that you have pure noise (centered on zero) with a clear @url{https://en.wikipedia.org/wiki/Normal_distribution,Gaussian distribution}, or see @ref{Photon counting noise}.
Now let's assume you add very bright objects (signal) on the image which have a very sharp boundary.
By a sharp boundary, we mean that there is a clear cutoff (from the noise) at the pixels where the objects finish.
In other words, at their boundaries, the objects do not fade away into the noise.
In such a case, when you plot the histogram (see @ref{Histogram and Cumulative Frequency Plot}) of the distribution, the pixels relating to those objects will be clearly separate from pixels that belong to parts of the image that did not have any signal (were just noise).
In the cumulative frequency plot, after a steady rise (due to the noise), you would observe a long flat region where, for a certain range of data (horizontal axis), there is no increase in the index (vertical axis).

@cindex Blurring
@cindex Cosmic rays
@cindex Aperture blurring
@cindex Atmosphere blurring
Outliers like the example above can significantly bias the measurement of noise statistics.
@mymath{\sigma}-clipping is defined as a way to avoid the effect of such outliers.
In astronomical applications, cosmic rays (when they collide at a near normal incidence angle) are a very good example of such outliers.
The tracks they leave behind in the image are perfectly immune to the blurring caused by the atmosphere and the aperture.
They are also very energetic and so their borders are usually clearly separated from the surrounding noise.
So @mymath{\sigma}-clipping is very useful in removing their effect on the data.
See Figure 15 in Akhlaghi and Ichikawa, @url{https://arxiv.org/abs/1505.01664,2015}.

@mymath{\sigma}-clipping is defined as the very simple iteration below.
In each iteration the range of accepted input data can shrink, so when the outliers satisfy the conditions above, they will be removed through this iteration.
The exit criteria are discussed below it, followed by a small sketch of the whole iteration in C.

@enumerate
@item
Calculate the standard deviation (@mymath{\sigma}) and median (@mymath{m})
of a distribution.
@item
Remove all points that are smaller or larger than
@mymath{m\pm\alpha\sigma}.
@item
Go back to step 1, unless the selected exit criteria is reached.
@end enumerate

@noindent
The reason the median is used as a reference and not the mean is that the mean is significantly affected by the presence of outliers, while the median is less so, see @ref{Quantifying signal in a tile}.
As you can tell from this algorithm, besides the condition above (that the signal has clear, high signal-to-noise boundaries), @mymath{\sigma}-clipping is only useful when the signal does not cover more than half of the full data set.
If it does, the median will lie over the outliers and @mymath{\sigma}-clipping might remove the pixels with no signal.

There are commonly two exit criteria to stop the @mymath{\sigma}-clipping
iteration:

@itemize
@item
When a certain number of iterations has taken place (second value to the @option{--sclipparams} option is larger than 1).
@item
When the new measured standard deviation is within a certain tolerance level of the old one (second value to the @option{--sclipparams} option is less than 1).
The tolerance level is defined by:

@dispmath{\sigma_{old}-\sigma_{new} \over \sigma_{new}}

The standard deviation is used because it is heavily influenced by the presence of outliers.
Therefore the fact that it stops changing between two iterations is a sign that we have successfully removed outliers.
Note that in each clipping, the dispersion in the distribution is either less than, or equal to, what it was before.
So @mymath{\sigma_{old}\geq\sigma_{new}}.
@end itemize
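
To make the iteration and the tolerance-based exit criterion concrete, here is a minimal sketch in C (this is not Gnuastro's implementation; the input values are arbitrary: noise around zero with one strong outlier):

@example
/* Minimal sketch of sigma-clipping with a tolerance-based exit
   criterion.  Compile and run with:
   'gcc sclip.c -o sclip -lm && ./sclip'. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int
cmp_d(const void *a, const void *b)
@{
  double d = *(const double *)a - *(const double *)b;
  return (d>0) - (d<0);
@}

/* Note: sorts the array in place (harmless here). */
static double
median(double *d, size_t n)
@{
  qsort(d, n, sizeof *d, cmp_d);
  return n%2 ? d[n/2] : 0.5*(d[n/2-1]+d[n/2]);
@}

static double
stddev(double *d, size_t n)
@{
  size_t i;
  double sum=0, sum2=0;
  for(i=0;i<n;++i) @{ sum+=d[i]; sum2+=d[i]*d[i]; @}
  return sqrt(sum2/n - (sum/n)*(sum/n));
@}

int
main(void)
@{
  double d[]=@{ -1.1, 0.3, -0.4, 0.9, -0.2,
               0.5, 0.1, -0.8, -0.6, 90.0 @};
  size_t i, j, n=sizeof d/sizeof *d;
  double m, s, olds=INFINITY, alpha=3.0, tol=0.1;

  while(1)
    @{
      m=median(d, n);
      s=stddev(d, n);
      printf("n=%zu, median=%g, std=%g\n", n, m, s);

      /* Stop when the relative decrease in sigma is below tol. */
      if( (olds-s)/s < tol ) break;
      olds=s;

      /* Keep only the points within m +/- alpha*sigma. */
      for(i=j=0;i<n;++i)
        if( fabs(d[i]-m) <= alpha*s )
          d[j++]=d[i];
      n=j;
    @}
  return 0;
@}
@end example

@noindent
In the first iteration the outlier inflates the standard deviation, so the clipped range is wide; once the outlier is removed, the standard deviation quickly stabilizes and the loop exits.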

@cartouche
@noindent
When working on astronomical images, objects like galaxies and stars are blurred by the atmosphere and the telescope aperture, therefore their signal sinks into the noise very gradually.
Galaxies in particular do not appear to have a clear high signal to noise cutoff at all.
Therefore @mymath{\sigma}-clipping will not be useful in removing their effect on the data.

To gauge if @mymath{\sigma}-clipping will be useful for your dataset, look at the histogram (see @ref{Histogram and Cumulative Frequency Plot}).
The ASCII histogram that is printed on the command-line with @option{--asciihist} is good enough in most cases.
@end cartouche




@node Sky value, Invoking aststatistics, Sigma clipping, Statistics
@subsection Sky value

@cindex Sky
One of the most important aspects of a dataset is its reference value: the value of the dataset where there is no signal.
Without knowing, and thus removing the effect of, this value it is impossible to compare the derived results of many high-level analyses over the dataset with other datasets (in the attempt to associate our results with the ``real'' world).

In astronomy, this reference value is known as the ``Sky'' value: the value that noise fluctuates around, where there is no signal from detectable objects or artifacts (for example galaxies, stars, planets, comets, stellar spikes or internal optical ghosts).
Depending on the dataset, the Sky value may be a fixed value over the whole dataset, or it may vary based on location.
For an example of the latter case, see Figure 11 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.

Because of the significance of the Sky value in astronomical data analysis, we have devoted this subsection to it for a thorough review.
We start with a thorough discussion on its definition (@ref{Sky value definition}).
In the astronomical literature, researchers use a variety of methods to estimate the Sky value, so in @ref{Sky value misconceptions} we review those methods and discuss their biases.
From the definition of the Sky value, the most accurate way to estimate the Sky value is to run a detection algorithm (for example @ref{NoiseChisel}) over the dataset and use the undetected pixels.
However, a more crude (but simpler) method may be useful when good direct detection is not initially possible (for example due to too many cosmic rays in a shallow image); it is discussed in @ref{Quantifying signal in a tile}.

@menu
* Sky value definition::        Definition of the Sky/reference value.
* Sky value misconceptions::    Wrong methods to estimate the Sky value.
* Quantifying signal in a tile::  Method to estimate the presence of signal.
@end menu

@node Sky value definition, Sky value misconceptions, Sky value, Sky value
@subsubsection Sky value definition

@cindex Sky value
This analysis is taken from @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
Let's assume that all instrument defects -- bias, dark and flat -- have been corrected and the brightness (see @ref{Brightness flux magnitude}) of a detected object, @mymath{O}, is desired.
The sources of flux on pixel@footnote{For this analysis the dimension of the data (image) is irrelevant.
So if the data is an image (2D) with width of @mymath{w} pixels, then a pixel located on column @mymath{x} and row @mymath{y} (where all counting starts from zero and (0, 0) is located on the bottom left corner of the image), would have an index: @mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as follows:

@itemize
@item
Contribution from the target object (@mymath{O_i}).
@item
Contribution from other detected objects (@mymath{D_i}).
@item
Undetected objects or the fainter undetected regions of bright objects (@mymath{U_i}).
@item
@cindex Cosmic rays
A cosmic ray (@mymath{C_i}).
@item
@cindex Background flux
The background flux, which is defined to be the count if none of the others exists on that pixel (@mymath{B_i}).
@end itemize
@noindent
The total flux in this pixel (@mymath{T_i}) can thus be written as:

@dispmath{T_i=B_i+D_i+U_i+C_i+O_i.}

@cindex Cosmic ray removal
@noindent
By definition, @mymath{D_i} is detected, and it can be assumed that it is correctly estimated (deblended) and subtracted; we can thus set @mymath{D_i=0}.
There are also methods to detect and remove cosmic rays, for example the method described in van Dokkum (2001)@footnote{van Dokkum, P. G. (2001).
Publications of the Astronomical Society of the Pacific. 113, 1420.}, or by comparing multiple exposures.
This allows us to set @mymath{C_i=0}.
Note that in practice, @mymath{D_i} and @mymath{U_i} are correlated, because they both directly depend on the detection algorithm and its input parameters.
Also note that no detection or cosmic ray removal algorithm is perfect.
With these limitations in mind, the observed Sky value for this pixel (@mymath{S_i}) can be defined as

@cindex Sky value
@dispmath{S_i\equiv{}B_i+U_i.}

@noindent
Therefore, as the detection process (algorithm and input parameters) becomes more accurate, or @mymath{U_i\to0}, the Sky value will tend to the background value or @mymath{S_i\to B_i}.
Hence, we see that while @mymath{B_i} is an inherent property of the data (pixel in an image), @mymath{S_i} depends on the detection process.
Over a group of pixels, for example in an image or part of an image, this equation translates to the average of the undetected pixels (Sky@mymath{={1\over N}\sum_{i=1}^{N}S_i}).
With this definition of Sky, the object flux in the data can be calculated, per pixel, with

@dispmath{ T_{i}=S_{i}+O_{i} \quad\rightarrow\quad
           O_{i}=T_{i}-S_{i}.}

@cindex photo-electrons
In the fainter outskirts of an object, a very small fraction of the photo-electrons in a pixel actually belongs to the object; the rest is caused by random factors (noise), see Figure 1b in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
Therefore even a small over-estimation of the Sky value will result in the loss of a very large portion of most galaxies.
Besides the lost area/brightness, this will also cause an over-estimation of the Sky value and thus an even greater under-estimation of the object's brightness.
It is thus very important to detect the diffuse flux of a target, even if it is not your primary target.

In summary, the more accurately the Sky is measured, the more accurately the brightness (sum of pixel values) of the target object can be measured (photometry).
Any under/over-estimation in the Sky will directly translate to an over/under-estimation of the measured object's brightness.

@cartouche
@noindent
The @strong{Sky value} is only correctly found when all the detected
objects (@mymath{D_i} and @mymath{C_i}) have been removed from the data.
@end cartouche




@node Sky value misconceptions, Quantifying signal in a tile, Sky value definition, Sky value
@subsubsection Sky value misconceptions

As defined in @ref{Sky value}, the Sky value is only accurately defined when the detection algorithm, in particular its detection threshold, is not significantly reliant on the Sky value.
However, most signal-based detection tools@footnote{According to Akhlaghi and Ichikawa (2015), signal-based detection is a detection process that relies heavily on assumptions about the to-be-detected objects.
This method was the most heavily used technique prior to the introduction of NoiseChisel in that paper.} use the sky value as a reference to define the detection threshold.
These older techniques therefore had to rely on approximations based on other assumptions about the data.
A review of those other techniques can be seen in Appendix A of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.

These methods were extensively used in astronomical data analysis for several decades, therefore they have given rise to a lot of misconceptions, ambiguities and disagreements about the sky value and how to measure it.
As a summary, the major methods used until now were an approximation of the mode of the image pixel distribution and @mymath{\sigma}-clipping.

@itemize
@cindex Histogram
@cindex Distribution mode
@cindex Mode of a distribution
@cindex Probability density function
@item
To find the mode of a distribution those methods would either have to assume (or find) a certain probability density function (PDF) or use the histogram.
But astronomical datasets can have any distribution, making it almost impossible to define a generic function.
Also, histogram-based results are very inaccurate (there is a large dispersion) and they depend on the histogram bin-widths.
Generally, the mode of a distribution also shifts as signal is added.
Therefore, even if it is accurately measured, the mode is a biased measure for the Sky value.

@cindex Sigma-clipping
@item
Another approach was to iteratively clip the brightest pixels in the image (which is known as @mymath{\sigma}-clipping).
See @ref{Sigma clipping} for a complete explanation.
@mymath{\sigma}-clipping is useful when there are clear outliers (an object with a sharp edge in an image for example).
However, real astronomical objects have diffuse and faint wings that penetrate deeply into the noise, see Figure 1 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
@end itemize

As discussed in @ref{Sky value}, the sky value can only be correctly defined as the average of undetected pixels.
Therefore all such approaches that try to approximate the sky value prior to detection are ultimately poor approximations.



@node Quantifying signal in a tile,  , Sky value misconceptions, Sky value
@subsubsection Quantifying signal in a tile

@cindex Data
@cindex Noise
@cindex Signal
@cindex Gaussian distribution
Put simply, noise can be characterized with a certain spread about the measured value.
In the Gaussian distribution (most commonly used to model noise) the spread is defined by the standard deviation about the characteristic mean.

Let's start by clarifying some definitions first: @emph{Data} is defined as the combination of signal and noise (so a noisy image is one @emph{data}set).
@emph{Signal} is defined as the mean of the noise on each element.
We'll also assume that the @emph{background} (see @ref{Sky value definition}) is subtracted and is zero.

When a dataset doesn't have any signal (only noise), the mean, median and mode of the distribution are equal within statistical errors and approximately equal to the background value.
Signal from emitting objects, like astronomical targets, always has a positive value and will never become negative, see Figure 1 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
Therefore, as more signal is added, the mean, median and mode of the dataset shift to the positive.
The mean's shift is the largest.
The median shifts less, since it is defined based on an ordered distribution and so is not affected by a small number of outliers.
The distribution's mode shifts the least to the positive.

@cindex Mean
@cindex Median
@cindex Quantile
Inverting the argument above gives us a robust method to quantify the significance of signal in a dataset.
Namely, when the mean and median of a distribution are approximately equal, or the mean's quantile is around 0.5, we can argue that there is no significant signal.

To allow for gradients (which are commonly present in raw exposures, or when there are large foreground galaxies in the field), we consider the image to be made of a grid of tiles@footnote{The options to customize the tessellation are discussed in @ref{Processing options}.} (see @ref{Tessellation}).
Hence, from the difference of the mean and median on each tile, we can estimate the significance of signal in it.
The median of a distribution is defined to be the value of the distribution's middle point after sorting (or 0.5 quantile).
Thus, to estimate the presence of signal, we just have to estimate the quantile of the mean (@mymath{q_{mean}}).
If a tile's @mymath{|q_{mean}-0.5|} is larger than the value given to the @option{--meanmedqdiff} option, that tile will be considered to have significant signal and ignored for the next steps.
You can read this option as ``mean-median-quantile-difference''.
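
As a rough illustration on a whole (hypothetical) @file{image.fits}, the commands below measure the dataset's mean and then the quantile of that mean within the same dataset, using the @option{--quantfunc} option of the Statistics program (see @ref{Invoking aststatistics}); a result near 0.5 suggests there is no significant signal:

@example
$ mean=$(aststatistics image.fits --mean)
$ aststatistics image.fits --quantfunc=$mean
@end example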

The raw dataset's pixel distribution (in each tile) is noisy; to decrease the noise/error in estimating @mymath{q_{mean}}, we convolve the image before tessellation (see @ref{Convolution process}).
Convolution decreases the range of the dataset and enhances its skewness, see Section 3.1.1 and Figure 4 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
This enhanced skewness can be interpreted as an increase in the Signal to noise ratio of the objects buried in the noise.
Therefore, to obtain an even better measure of the presence of signal in a tile, the mean and median discussed above are measured on the convolved image.

@cindex Cosmic rays
There is one final hurdle: raw astronomical datasets are commonly peppered with Cosmic rays.
Images of Cosmic rays aren't smoothed by the atmosphere or telescope aperture, so they have sharp boundaries.
Also, since they don't occupy too many pixels, they don't affect the mode and median calculation.
But their very high values can greatly bias the calculation of the mean (recall how the mean shifts the fastest in the presence of outliers), for example see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
The effect of outliers like cosmic rays on the mean and standard deviation can be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping} for a complete explanation.

Therefore, after asserting that the mean and median are approximately equal in a tile (see @ref{Tessellation}), the measurements on the tile are determined after @mymath{\sigma}-clipping with the @option{--sigmaclip} option.
In the end, some of the tiles will pass the mean and median quantile difference test and will be given a value.
Others (that had signal in them) will just be assigned a NaN (not-a-number) value.
So we need to use interpolation and assign a value to all the tiles.

However, prior to interpolating over the failed tiles, another point should be considered: large and extended galaxies, or bright stars, have wings which sink into the noise very gradually.
In some cases, the gradient over these wings can be on scales larger than the tiles, so the mean-median distance test will succeed on them.
This will leave a footprint of such objects after the interpolation, see @ref{Detecting large extended targets}.

These tiles (on the wings of large foreground galaxies for example) actually have signal in them and the mean-median difference isn't enough (due to the ``small'' size of the tiles).
Therefore, the final step of ``quantifying signal in a tile'' is to look at this distribution of successful tiles and remove the outliers.
@mymath{\sigma}-clipping is a good solution for removing a few outliers, but the problem with outliers of this kind is that there may be many such tiles (depending on the large/bright stars/galaxies in the image).
We therefore apply the following local outlier rejection strategy.

For each tile, we find the nearest @mymath{N_{ngb}} tiles that had a usable value (@mymath{N_{ngb}} is the value given to  @option{--interpnumngb}).
We then sort them and find the difference between the largest and second-to-smallest elements (the minimum isn't used because the scatter can be large).
Let's call this the tile's @emph{slope} (measured from its neighbors).
All the tiles that are on a region of flat noise will have similar slope values, but if a few tiles fall on the wings of a bright star or large galaxy, their slope will be significantly larger than the tiles with no signal.
We just have to find the smallest tile slope value that is an outlier compared to the rest and reject all tiles with a slope larger than that.

@cindex Outliers
@cindex Identifying outliers
To identify the smallest outlier, we'll use the distribution of distances between sorted elements.
Let's assume the total number of tiles with a good mean-median quantile difference is @mymath{N}.
We sort them in increasing order based on their @emph{slope} (defined above).
So the array of tile slopes can be written as @mymath{@{s[0], s[1], ..., s[N-1]@}}, where @mymath{s[0]<s[1]} and so on.
We then start from element @mymath{M} and calculate the distances between all adjacent values before it: @mymath{@{s[1]-s[0], s[2]-s[1], ..., s[M-1]-s[M-2]@}}.

The @mymath{\sigma}-clipped median and standard deviation of this distribution is then found (@mymath{\sigma}-clipping is configured with @option{--outliersclip} which takes two values, see @ref{Sigma clipping}).
If the distance between an element and the element before it is more than the @mymath{\sigma}-clipped median plus @option{--outliersigma} multiples of the @mymath{\sigma}-clipped standard deviation, that element is considered an outlier and all tiles with a larger slope are ignored.

Formally, assume there are @mymath{N} elements, which are first sorted.
Searching for the outlier starts on element @mymath{N/3} (integer division).
Let's take @mymath{v_i} to be the @mymath{i}-th element of the sorted input (with no blank values) and @mymath{m} and @mymath{\sigma} as the @mymath{\sigma}-clipped median and standard deviation from the distances of the previous @mymath{N/3-1} elements (not including @mymath{v_i}).
If the value given to @option{--outliersigma} is denoted by @mymath{s}, the @mymath{i}-th element is considered an outlier when the condition below is true.

@dispmath{{(v_i-v_{i-1})-m\over \sigma}>s}

Since @mymath{i} begins from the @mymath{N/3}-th element in the sorted array (a quantile of @mymath{1/3=0.33}), the outlier has to be larger than the @mymath{0.33} quantile value of the dataset.

Once the outlying tiles have been successfully removed, we use nearest neighbor interpolation to give a value to all tiles in the image.
During interpolation, the median of the @mymath{N_{ngb}} nearest neighbors of each tile is found and given to the tile (@mymath{N_{ngb}} is the value given to  @option{--interpnumngb}).
Once all the tiles are given a value, a smoothing step is implemented to remove the sharp value contrast that can happen between some tiles.
The size of the smoothing box is set with the @option{--smoothwidth} option.

You can use the check images to inspect the steps and see which tiles have been discarded as outliers prior to interpolation (@option{--checksky} in the Statistics program or @option{--checkqthresh} in NoiseChisel).
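
For example, assuming a hypothetical @file{image.fits}, the command below estimates the Sky while also producing the multi-extension check file showing each internal step:

@example
$ aststatistics image.fits --sky --checksky
@end example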










@node Invoking aststatistics,  , Sky value, Statistics
@subsection Invoking Statistics

Statistics will print statistical measures of an input dataset (table column or image).
The executable name is @file{aststatistics} with the following general template

@example
$ aststatistics [OPTION ...] InputImage.fits
@end example

@noindent
One line examples:

@example
## Print some general statistics of input image:
$ aststatistics image.fits

## Print some general statistics of column named MAG_F160W:
$ aststatistics catalog.fits -h1 --column=MAG_F160W

## Make the histogram of the column named MAG_F160W:
$ aststatistics table.fits -cMAG_F160W --histogram

## Find the Sky value on image with a given kernel:
$ aststatistics image.fits --sky --kernel=kernel.fits

## Print Sigma-clipped results of records with a MAG_F160W
## column value between 26 and 27:
$ aststatistics cat.fits -cMAG_F160W -g26 -l27 --sigmaclip=3,0.2

## Print the median value of all records in column MAG_F160W that
## have a value larger than 3 in column PHOTO_Z:
$ aststatistics tab.txt -rPHOTO_Z -g3 -cMAG_F160W --median

## Calculate the median of the third column in the input table, but only
## for rows where the mean of the first and second columns is >5.
$ awk '($1+$2)/2 > 5 @{print $3@}' table.txt | aststatistics --median
@end example

@noindent
@cindex Standard input
Statistics can take its input dataset either from a file (image or table) or the Standard input (see @ref{Standard input}).
If any output file is to be created, the value given to the @option{--output} option is used as the base name for the generated files.
Without @option{--output}, the input name will be used to generate an output name, see @ref{Automatic output}.
The options described below are particular to Statistics, but for general operations, it shares a large collection of options with the other Gnuastro programs, see @ref{Common options} for the full list.
For more on reading from standard input, please see the description of @code{--stdintimeout} option in @ref{Input output options}.
Options can also be given in configuration files, for more, please see @ref{Configuration files}.

The input dataset may have blank values (see @ref{Blank pixels}), in this case, all blank pixels are ignored during the calculation.
Initially, the full dataset will be read, but it is possible to select a specific range of data elements to use in the analysis of each run.
You can either directly specify a minimum and maximum value for the range of data elements to use (with @option{--greaterequal} or @option{--lessthan}), or specify the range using quantiles (with @option{--qrange}).
If a range is specified, all pixels outside of it are ignored before any processing.

The following set of options are for specifying the input/outputs of Statistics.
There are many other input/output options that are common to all Gnuastro programs including Statistics, see @ref{Input output options} for those.

@table @option

@item -c STR/INT
@itemx --column=STR/INT
The column to use when the input file is a table with more than one column.
See @ref{Selecting table columns} for a full description of how to use this option.
For more on how tables are read in Gnuastro, please see @ref{Tables}.

@item -r STR/INT
@itemx --refcol=STR/INT
The reference column selector when the input file is a table.
When a reference column is given, the range options below will be applied to this column and only elements in the input column that have a reference value in the correct range will be used.
In practice this option allows you to select a subset of the input column based on values in another (the reference) column.
All the statistical calculations will be done on the selected input column, not the reference column.

@item -g FLT
@itemx --greaterequal=FLT
Limit the range of inputs to those with values greater than or equal to the value given to this option.
None of the values below this value will be used in any of the processing steps below.

@item -l FLT
@itemx --lessthan=FLT
Limit the range of inputs to those with values less than the value given to this option.
None of the values greater or equal to this value will be used in any of the processing steps below.

@item -Q FLT[,FLT]
@itemx --qrange=FLT[,FLT]
Specify the range of usable inputs using the quantile.
This option can take one or two quantiles to specify the range.
When only one number is input (let's call it @mymath{Q}), the range will be those values in the quantile range @mymath{Q} to @mymath{1-Q}.
So when only one value is given, it must be less than 0.5.
When two values are given, the first is used as the lower quantile and the second as the upper quantile.

@cindex Quantile
The quantile of a given element in a dataset is defined by the fraction of its index to the total number of values in the sorted input array.
So the smallest and largest values in the dataset have a quantile of 0.0 and 1.0.
The quantile is a very useful non-parametric (making no assumptions about the input) relative measure to specify a range.
It can best be understood in terms of the cumulative frequency plot, see @ref{Histogram and Cumulative Frequency Plot}.
The quantile of each horizontal axis value in the cumulative frequency plot is the vertical axis value associated with it.

@end table

@cindex ASCII plot
When no operation is requested, Statistics will print some general basic properties of the input dataset on the command-line like the example below (run on one of the output images of @command{make check}@footnote{You can try it by running the command in the @file{tests} directory, open the image with a FITS viewer and have a look at it to get a sense of how these statistics relate to the input image/dataset.}).
This default behavior is designed to help give you a general feeling of how the data are distributed and help in narrowing down your analysis.

@example
$ aststatistics convolve_spatial_scaled_noised.fits     \
                --greaterequal=9500 --lessthan=11000
Statistics (GNU Astronomy Utilities) X.X
-------
Input: convolve_spatial_scaled_noised.fits (hdu: 0)
Range: from (inclusive) 9500, upto (exclusive) 11000.
Unit: Brightness
-------
  Number of elements:                      9074
  Minimum:                                 9622.35
  Maximum:                                 10999.7
  Mode:                                    10055.45996
  Mode quantile:                           0.4001983908
  Median:                                  10093.7
  Mean:                                    10143.98257
  Standard deviation:                      221.80834
-------
Histogram:
 |                   **
 |                 ******
 |                 *******
 |                *********
 |              *************
 |              **************
 |            ******************
 |            ********************
 |          *************************** *
 |        ***************************************** ***
 |*  **************************************************************
 |-----------------------------------------------------------------
@end example

Gnuastro's Statistics is a very general purpose program, so to be able to easily understand this diversity in its operations (and how to possibly run them together), we'll divide the operations into two types: those that don't respect the position of the elements and those that do (by tessellating the input on a tile grid, see @ref{Tessellation}).
The former treat the whole dataset as one and can re-arrange all the elements (for example sort them), while the latter do their processing on each tile independently.
First, we'll review the operations that work on the whole dataset.

@cindex AWK
@cindex GNU AWK
The group of options below can be used to get single value measurement(s) of the whole dataset.
They will print only the requested value as one field in a line/row, like the @option{--mean}, @option{--median} options.
These options can be called any number of times and in any order.
The outputs of all such options will be printed on one line following each other (with a space character between them).
This feature makes these options very useful in scripts, or to redirect into programs like GNU AWK for higher-level processing.
These are some of the most basic measures; Gnuastro is still under heavy development and this list will grow.
If you want another statistical parameter, please contact us and we will do our best to add it to this list, see @ref{Suggest new feature}.
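
For example, the first hypothetical command below prints three measures on one line; the second pipes them into GNU AWK to compute the standard error of the mean (@mymath{\sigma/\sqrt{N}}):

@example
$ aststatistics image.fits --number --mean --std

$ aststatistics image.fits --number --mean --std \
      | awk '@{print $3/sqrt($1)@}'
@end example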

@table @option

@item -n
@itemx --number
Print the number of all used (non-blank and in range) elements.

@item --minimum
Print the minimum value of all used elements.

@item --maximum
Print the maximum value of all used elements.

@item --sum
Print the sum of all used elements.

@item -m
@itemx --mean
Print the mean (average) of all used elements.

@item -t
@itemx --std
Print the standard deviation of all used elements.

@item -E
@itemx --median
Print the median of all used elements.

@item -u FLT[,FLT[,...]]
@itemx --quantile=FLT[,FLT[,...]]
Print the values at the given quantiles of the input dataset.
Any number of quantiles may be given and one number will be printed for each.
Values can either be written as a single number or as fractions, but must be between zero and one (inclusive).
Hence, in effect @option{--quantile=0.25 --quantile=0.75} is equivalent to @option{--quantile=0.25,3/4}, or @option{-u1/4,3/4}.

The returned value is one of the elements from the dataset.
Taking @mymath{q} to be your desired quantile, and @mymath{N} to be the total number of used (non-blank and within the given range) elements, the returned value is at the following position in the sorted array: @mymath{round(q\times{}N)}.

@item --quantfunc=FLT[,FLT[,...]]
Print the quantiles of the given values in the dataset.
This option is the inverse of the @option{--quantile} and operates similarly except that the acceptable values are within the range of the dataset, not between 0 and 1.
Formally it is known as the ``Quantile function''.

Since the dataset is not continuous this function will find the nearest element of the dataset and use its position to estimate the quantile function.

@item -O
@itemx --mode
Print the mode of all used elements.
The mode is found through the mirror distribution which is fully described in Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}.
See that section for a full description.

This mode calculation algorithm is non-parametric, so it can fail when the dataset is not large enough (usually, more than about 1000 elements are necessary), or when the dataset doesn't have a clear mode.
In such cases, this option will return a value of @code{nan} (for the floating point NaN value).

As described in that paper, the easiest way to assess the quality of this mode calculation method is to use its symmetricity (see @option{--modesym} below).
A better way would be to use the @option{--mirror} option to generate the histogram and cumulative frequency tables for any given mirror value (the mode in this case) as a table.
If you generate plots like those shown in Figure 21 of that paper, you can visually judge how accurate your mode is.
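
For example, the hypothetical command below prints the mode followed by its symmetricity on one line, so you can judge the mode's reliability in a single run (recall that symmetricity values larger than 0.2 are mostly good):

@example
$ aststatistics image.fits --mode --modesym
@end example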

@item --modequant
Print the quantile of the mode.
You can get the actual mode value from the @option{--mode} described above.
In many cases, the absolute value of the mode is irrelevant, but its position within the distribution is important.
In such cases, this option comes in handy.

@item --modesym
Print the symmetricity of the calculated mode.
See the description of @option{--mode} for more.
This mode algorithm finds the mode based on how symmetric it is, so if the symmetricity returned by this option is too low, the mode is not very accurate.
See Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} for a full description.
In practice, symmetricity values larger than 0.2 are mostly good.

@item --modesymvalue
Print the value in the distribution where the mirror and input
distributions are no longer symmetric, see @option{--mode} and Appendix C
of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} for
more.

@item --sigclip-number
Number of elements after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
@mymath{\sigma}-clipping configuration is done with the @option{--sclipparams} option.

@item --sigclip-median
Median after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
@mymath{\sigma}-clipping configuration is done with the @option{--sclipparams} option.

@cindex Outlier
Here is one scenario where this can be useful: assume you have a table and you would like to remove the rows that are outliers (not within the @mymath{\sigma}-clipping range).
Let's assume your table is called @file{table.fits} and you only want to keep the rows that have a value in @code{COLUMN} within the @mymath{\sigma}-clipped range (to @mymath{3\sigma}, with a tolerance of 0.1).
This command will return the @mymath{\sigma}-clipped median and standard deviation (used to define the range later).

@example
$ aststatistics table.fits -cCOLUMN --sclipparams=3,0.1 \
                --sigclip-median --sigclip-std
@end example

@cindex GNU AWK
You can then use the @option{--range} option of Table (see @ref{Table}) to select the proper rows.
But for that, you need the actual starting and ending values of the range (@mymath{m\pm s\sigma}; where @mymath{m} is the median and @mymath{s} is the multiple of sigma to define an outlier).
Therefore, the raw outputs of Statistics in the command above aren't enough.

To get the starting and ending values of the non-outlier range (and put a `@key{,}' between them, ready to be used in @option{--range}), pipe the result into AWK.
But in AWK, we'll also need the multiple of @mymath{\sigma}, so we'll define it as a shell variable (@code{s}) before calling Statistics (note how @code{$s} is used two times now):

@example
$ s=3
$ aststatistics table.fits -cCOLUMN --sclipparams=$s,0.1 \
                --sigclip-median --sigclip-std           \
     | awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}'
@end example

To pass it onto Table, we'll need to keep the printed output from the command above in another shell variable (@code{r}), not print it.
In Bash, we can do this by putting the whole statement within a @code{$()}:

@example
$ s=3
$ r=$(aststatistics table.fits -cCOLUMN --sclipparams=$s,0.1 \
                    --sigclip-median --sigclip-std           \
        | awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}')
$ echo $r      # Just to confirm.
@end example

Now you can use Table with the @option{--range} option to only print the rows that have a value in @code{COLUMN} within the desired range:

@example
$ asttable table.fits --range=COLUMN,$r
@end example

To save the resulting table (that is clean of outliers) in another file (for example named @file{cleaned.fits}, it can also have a @file{.txt} suffix), just add @option{--output=cleaned.fits} to the command above.


@item --sigclip-mean
Mean after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
@mymath{\sigma}-clipping configuration is done with the @option{--sclipparams} option.

@item --sigclip-std
Standard deviation after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
@mymath{\sigma}-clipping configuration is done with the @option{--sclipparams} option.

@end table

The list of options below are for those statistical operations that output more than one value.
So while they can be called together in one run, their outputs will be distinct (each one's output will usually be printed in more than one line).

@table @option

@item -A
@itemx --asciihist
Print an ASCII histogram of the usable values within the input dataset along with some basic information like the example below (from the UVUDF catalog@footnote{@url{https://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz}}).
The width and height of the histogram (in units of character widths and heights on your command-line terminal) can be set with the @option{--numasciibins} (for the width) and @option{--asciiheight} options.

For a full description of the histogram, please see @ref{Histogram and Cumulative Frequency Plot}.
An ASCII plot is certainly very crude and cannot be used in any publication, but it is very useful for getting a general feeling of the input dataset very fast and easily on the command-line without having to take your hands off the keyboard (which is a major distraction!).
If you want to try it out, you can write it all in one line and ignore the @key{\} and extra spaces.

@example
$ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1         \
                --column=MAG_F160W --lessthan=40            \
                --asciihist --numasciibins=55
ASCII Histogram:
Number: 8593
Y: (linear: 0 to 660)
X: (linear: 17.7735 -- 31.4679, in 55 bins)
 |                                         ****
 |                                        *****
 |                                       ******
 |                                      ********
 |                                      *********
 |                                    ***********
 |                                  **************
 |                                *****************
 |                           ***********************
 |                    ********************************
 |*** ***************************************************
 |-------------------------------------------------------
@end example

@item --asciicfp
Print the cumulative frequency plot of the usable elements in the input dataset.
Please see descriptions under @option{--asciihist} for more, the example below is from the same input table as that example.
To better understand the cumulative frequency plot, please see @ref{Histogram and Cumulative Frequency Plot}.

@example
$ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1         \
                --column=MAG_F160W --lessthan=40            \
                --asciicfp --numasciibins=55
ASCII Cumulative frequency plot:
Y: (linear: 0 to 8593)
X: (linear: 17.7735 -- 31.4679, in 55 bins)
 |                                                *******
 |                                             **********
 |                                            ***********
 |                                          *************
 |                                         **************
 |                                        ***************
 |                                      *****************
 |                                    *******************
 |                                ***********************
 |                         ******************************
 |*******************************************************
 |-------------------------------------------------------
@end example

@item -H
@itemx --histogram
Save the histogram of the usable values in the input dataset into a table.
The first column is the value at the center of the bin and the second is the number of points in that bin.
If the @option{--cumulative} option is also called with this option in a run, then the table will have three columns (the third is the cumulative frequency plot).
Through the @option{--numbins}, @option{--onebinstart}, or @option{--manualbinrange} options you can modify the first column's values, and with @option{--normalize} and @option{--maxbinone} you can modify the second column's.
See below for the description of each.

By default (when no @option{--output} is specified) a plain text table will be created, see @ref{Gnuastro text table format}.
If a FITS name is specified, you can use the common option @option{--tableformat} to have it as a FITS ASCII or FITS binary format, see @ref{Common options}.
This table can then be fed into your favorite plotting tool and get a much more clean and nice histogram than what the raw command-line can offer you (with the @option{--asciihist} option).
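
For example, the hypothetical command below saves a normalized, 100-bin histogram (with the cumulative frequency plot as a third column) into a FITS table:

@example
$ aststatistics image.fits --histogram --cumulative \
                --numbins=100 --normalize --output=hist.fits
@end example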

@item --histogram2d
Save the 2D histogram of two input columns into an output file, see @ref{2D Histograms}.
The output will have three columns: the first two are the coordinates of each box's center in the first and second dimensions/columns.
The third will be the number of input points that fall within that box.

@item -C
@itemx --cumulative
Save the cumulative frequency plot of the usable values in the input dataset into a table, similar to @option{--histogram}.

@item -s
@itemx --sigmaclip
Do @mymath{\sigma}-clipping on the usable pixels of the input dataset.
See @ref{Sigma clipping} for a full description on @mymath{\sigma}-clipping and also to better understand this option.
The @mymath{\sigma}-clipping parameters can be set through the @option{--sclipparams} option (see below).

@item --mirror=FLT
Make a histogram and cumulative frequency plot of the mirror distribution for the given dataset when the mirror is located at the value given to this option.
The mirror distribution is fully described in Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} and currently it is only used to calculate the mode (see @option{--mode}).

Just note that the mirror distribution is a discrete distribution like the input, so while you may give any number as the value to this option, the actual mirror value is the closest number in the input dataset to this value.
If the two numbers are different, Statistics will warn you of the actual mirror value used.

This option will make a table as output.
Depending on your selected name for the output, it will be either a FITS table or a plain text table (which is the default).
It contains three columns: the first is the center of the bins, the second is the histogram (with the largest value set to 1) and the third is the normalized cumulative frequency plot of the mirror distribution.
The bins will be positioned such that the mode is on the starting interval of one of the bins to make it symmetric around the mirror.
With this output file and the input histogram (that you can generate in another run of Statistics, using @option{--onebinstart}), it is possible to make plots like Figure 21 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}.

@end table

The list of options below allow customization of the histogram and cumulative frequency plots (for the @option{--histogram}, @option{--cumulative}, @option{--asciihist}, and @option{--asciicfp} options).

@table @option

@item --numbins
The number of bins (rows) to use in the histogram and the cumulative frequency plot tables (outputs of @option{--histogram} and @option{--cumulative}).

@item --numasciibins
The number of bins (characters) to use in the ASCII plots when printing the histogram and the cumulative frequency plot (outputs of @option{--asciihist} and @option{--asciicfp}).

@item --asciiheight
The number of lines to use when printing the ASCII histogram and cumulative frequency plot on the command-line (outputs of @option{--asciihist} and @option{--asciicfp}).

@item -n
@itemx --normalize
Normalize the histogram or cumulative frequency plot tables (outputs of @option{--histogram} and @option{--cumulative}).
For a histogram, the sum of all bins will become one and for a cumulative frequency plot the last bin value will be one.

@item --maxbinone
Divide all the histogram values by the maximum bin value so it becomes one and the rest are similarly scaled.
In some situations (for example if you want to plot the histogram and cumulative frequency plot in one plot) this can be very useful.

@item --onebinstart=FLT
Make sure that one bin starts with the value to this option.
In practice, this will shift the bins used to find the histogram and cumulative frequency plot such that one bin's lower interval becomes this value.

For example when a histogram range includes negative and positive values and zero has a special significance in your analysis, then zero might fall somewhere in one bin.
As a result that bin will have counts of positive and negative.
By setting @option{--onebinstart=0}, you can make sure that one bin will only count negative values in the vicinity of zero and the next bin will only count positive ones in that vicinity.

@cindex NaN
Note that by default, the first column of the histogram and cumulative frequency plot shows the central value of each bin.
So in the example above you will not see the 0.000 in the first column, you will see two symmetric values.

If the value is not within the usable input range, this option will be ignored.
When it is, this option is the last operation before the bins are finalized, therefore it has a higher priority than options like @option{--manualbinrange}.
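
For example, in the hypothetical command below, one bin of the requested histogram will start exactly at zero, so negative and positive values in the vicinity of zero are counted separately:

@example
$ aststatistics table.fits -cCOLUMN --histogram --onebinstart=0
@end example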

@item --manualbinrange
Use the values given to the @option{--greaterequal} and @option{--lessthan} to define the range of all bin-based calculations like the histogram.
This option itself doesn't take any value, but just tells the program to use the values of those two options instead of the minimum and maximum of the input dataset.
If only one of the two options is given, the dataset's minimum or maximum will be used for the other, respectively.
Therefore, if neither of them is called, calling this option is redundant.

The @option{--onebinstart} option has a higher priority than this option.
In other words, @option{--onebinstart} takes effect after the range has been finalized and the initial bins have been defined, therefore it has the power to (possibly) shift the bins.
If you want to manually set the range of the bins @emph{and} have one bin on a special value, it is thus better to avoid @option{--onebinstart}.

@item --numbins2=INT
Similar to @option{--numbins}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.

@item --greaterequal2=FLT
Similar to @option{--greaterequal}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.

@item --lessthan2=FLT
Similar to @option{--lessthan}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.

@item --onebinstart2=FLT
Similar to @option{--onebinstart}, but for the second column when a 2D histogram is requested, see @option{--histogram2d}.

@end table


All the options described until now were from the first class of operations discussed above: those that treat the whole dataset as one.
However, it often happens that the relative position of the dataset elements over the dataset is significant.
For example you don't want one median value for the whole input image, you want to know how the median changes over the image.
For such operations, the input has to be tessellated (see @ref{Tessellation}).
Thus this class of options can't currently be called along with the options above in one run of Statistics.

@table @option

@item -t
@itemx --ontile
Do the respective single-valued calculation on each tile of the input dataset, not on the whole dataset at once.
This option must be called with at least one of the single valued options discussed above (for example @option{--mean} or @option{--quantile}).
The output will be a file in the same format as the input.
If the @option{--oneelempertile} option is called, then one element/pixel will be used for each tile (see @ref{Processing options}).
Otherwise, the output will have the same size as the input, but each element will have the value corresponding to that tile's value.
If multiple single valued operations are called, then for each operation there will be one extension in the output FITS file.
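
For example, the hypothetical command below writes the median and standard deviation of each tile into two extensions of one FITS file, with one output pixel per tile:

@example
$ aststatistics image.fits --ontile --median --std \
                --oneelempertile
@end example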

@item -R FLT[,FLT[,FLT...]]
@itemx --contour=FLT[,FLT[,FLT...]]
@cindex Contour
@cindex Plot: contour
Write the contours for the requested levels in a file ending with @file{_contour.txt}.
It will have three columns: the first two are the coordinates of each point and the third is the level it belongs to (one of the input values).
Each disconnected contour region will be separated by a blank line.
This is the requested format for adding contours with PGFPlots in @LaTeX{}.
If any other format can be useful for your work please let us know so we can add it.
If the image has World Coordinate System information, the written coordinates will be in RA and Dec, otherwise, they will be in pixel coordinates.

Note that currently, this is a very crude/simple implementation, please let us know if you find problematic situations so we can fix it.

@item -y
@itemx --sky
Estimate the Sky value on each tile as fully described in @ref{Quantifying signal in a tile}.
As described in that section, several options are necessary to configure the Sky estimation which are listed below.
The output file will have two extensions: the first is the Sky value and the second is the Sky standard deviation on each tile.
Similar to @option{--ontile}, if the @option{--oneelempertile} option is called, then one element/pixel will be used for each tile (see @ref{Processing options}).
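
For example, assuming a hypothetical @file{image.fits}, the command below estimates the Sky and its standard deviation with one output pixel per tile (the tile size is only for demonstration, see @ref{Processing options}):

@example
$ aststatistics image.fits --sky --tilesize=50,50 \
                --oneelempertile
@end example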

@end table

The parameters for estimating the sky value can be set with the following options; except for the @option{--sclipparams} option (which is also used by @option{--sigmaclip}), the rest are only used for the Sky value estimation.

@table @option

@item -k STR
@itemx --kernel=STR
File name of kernel to help in estimating the significance of signal in a
tile, see @ref{Quantifying signal in a tile}.

@item --khdu=STR
Kernel HDU to help in estimating the significance of signal in a tile, see
@ref{Quantifying signal in a tile}.

@item --meanmedqdiff=FLT
The maximum acceptable distance between the quantiles of the mean and median, see @ref{Quantifying signal in a tile}.
The initial Sky and its standard deviation estimates are measured on tiles where the quantiles of their mean and median are less distant than the value given to this option.
For example @option{--meanmedqdiff=0.01} means that only tiles where the mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 0.5) will be used.

@item --sclipparams=FLT,FLT
The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}.
This option takes two values which are separated by a comma (@key{,}).
Each value can either be written as a single number or as a fraction of two numbers (for example @code{3,1/10}).
The first value to this option is the multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that section).
The second value is the exit criterion.
If it is less than 1, it is interpreted as a tolerance; if it is larger than one, it is the specific number of clips to apply.
Hence, in the latter case the value must be an integer.
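
For example, both hypothetical commands below clip at @mymath{3\sigma} and print the remaining number of elements and their median; the first uses a tolerance of 0.1 as the exit criterion, while the second does exactly five clips:

@example
$ aststatistics image.fits --sclipparams=3,0.1 \
                --sigclip-number --sigclip-median
$ aststatistics image.fits --sclipparams=3,5 \
                --sigclip-number --sigclip-median
@end example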

@item --outliersclip=FLT,FLT
@mymath{\sigma}-clipping parameters for the outlier rejection of the Sky
value (similar to @option{--sclipparams}).

Outlier rejection is useful when the dataset contains a large and diffuse (almost flat within each tile) signal.
The flatness of the profile will cause it to successfully pass the mean-median quantile difference test, so we'll need to use the distribution of successful tiles to remove these false positives.
For more, see the latter half of @ref{Quantifying signal in a tile}.

@item --outliersigma=FLT
Multiple of sigma to define an outlier in the Sky value estimation.
If this option is given a value of zero, no outlier rejection will take place.
For more see @option{--outliersclip} and the latter half of @ref{Quantifying signal in a tile}.

@item --smoothwidth=INT
Width of a flat kernel to convolve the interpolated tile values.
Tile interpolation is done using the median of the @option{--interpnumngb} neighbors of each tile (see @ref{Processing options}).
If this option is given a value of zero or one, no smoothing will be done.
Without smoothing, strong boundaries will probably be created between the values estimated for each tile.
It is thus good to smooth the interpolated image so strong discontinuities do not show up in the final Sky values.
The smoothing is done through convolution (see @ref{Convolution process}) with a flat kernel, so the value to this option must be an odd number.

@item --ignoreblankintiles
Don't set the input's blank pixels to blank in the tiled outputs (for example Sky and Sky standard deviation extensions of the output).
This is only applicable when the tiled output has the same size as the input, in other words, when @option{--oneelempertile} isn't called.

By default, blank values in the input (commonly on the edges which are outside the survey/field area) will be set to blank in the tiled outputs also.
But in other scenarios this default behavior is not desired: for example if you have masked something in the input, but want the tiled output under that also.

@item --checksky
Create a multi-extension FITS file showing the steps that were used to estimate the Sky value over the input, see @ref{Quantifying signal in a tile}.
The file will have two extensions for each step (one for the Sky and one for the Sky standard deviation).

@end table

@node NoiseChisel, Segment, Statistics, Data analysis
@section NoiseChisel

@cindex Labeling
@cindex Detection
@cindex Segmentation
Once instrumental signatures are removed from the raw data (image) in the initial reduction process (see @ref{Data manipulation}), you are naturally eager to start answering the scientific questions that motivated the data collection in the first place.
However, the raw dataset/image is just an array of values/pixels, that is all! These raw values cannot directly be used to answer your scientific questions: for example ``how many galaxies are there in the image?'', ``what is their brightness?'' and so on.

The first high-level step of your analysis will therefore be to classify, or label, the dataset elements (pixels) into two classes:
1) Noise, where random effects are the major contributor to the value, and
2) Signal, where non-random factors (for example light from a distant galaxy) are present.
This classification of the elements in a dataset is formally known as @emph{detection}.

In an observational/experimental dataset, signal is always buried in noise: only mock/simulated datasets are free of noise.
Therefore detection, or the process of separating signal from noise, determines the number of objects you study and the accuracy of any higher-level measurement you do on them.
Detection is thus the most important step of any analysis and is not trivial.
In particular, the most scientifically interesting astronomical targets are faint, can have a large variety of morphologies, along with a large distribution in brightness and size.
Therefore when noise is significant, proper detection of your targets is a uniquely decisive step in your final scientific analysis/result.

@cindex Erosion
NoiseChisel is Gnuastro's program for detection of targets that don't have a sharp border (almost all astronomical objects).
When the targets have sharp edges/borders (for example cells in biological imaging), a simple threshold is enough to separate them from noise and each other (if they are not touching).
To detect such sharp-edged targets, you can use Gnuastro's Arithmetic program in a command like below (assuming the threshold is @code{100}, see @ref{Arithmetic}):

@example
$ astarithmetic in.fits 100 gt 2 connected-components
@end example

Since almost no astronomical target has such sharp edges, we need a more advanced detection methodology.
NoiseChisel uses a new noise-based paradigm for detection of very extended and diffuse targets that are drowned deeply in the ocean of noise.
It was initially introduced in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and improvements after the first four years were published in @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
Please take the time to go through these papers to most effectively understand the need of NoiseChisel and how best to use it.

The name of NoiseChisel is derived from the first thing it does after thresholding the dataset: to erode it.
In mathematical morphology, erosion on pixels can be pictured as carving-off boundary pixels.
Hence, what NoiseChisel does is similar to what a wood chisel or stone chisel does.
It is just implemented in software, not hardware.
In fact, looking at it as a chisel and your dataset as a solid cube of rock will greatly help in effectively understanding and optimally using it: with NoiseChisel you literally carve your targets out of the noise.
Try running it with the @option{--checkdetection} option, and open the temporary output as a multi-extension cube, to see each step of the carving process on your input dataset (see @ref{Viewing multiextension FITS images}).

@cindex Segmentation
NoiseChisel's primary output is a binary detection map with the same size as the input but its pixels only have two values: 0 (background) and 1 (foreground).
Pixels that don't harbor any detected signal (noise) are given a label (or value) of zero and those with a value of 1 have been identified as hosting signal.

Segmentation is the process of classifying the signal into higher-level constructs.
For example if you have two separate galaxies in one image, NoiseChisel will give a value of 1 to the pixels of both (each forming an ``island'' of touching foreground pixels).
After segmentation, the connected foreground pixels will get separate labels, enabling you to study them individually.
NoiseChisel is only focused on detection (separating signal from noise); to @emph{segment} the signal (into separate galaxies for example), Gnuastro has the separate specialized program @ref{Segment}.
NoiseChisel's output can be directly/readily fed into Segment.

For more on NoiseChisel's output format and its benefits (especially in conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
Just note that when that paper was published, Segment was not yet spun-off into a separate program, and NoiseChisel did both detection and segmentation.

NoiseChisel's output is designed to be generic enough to be easily used in any higher-level analysis.
If your targets are not touching after running NoiseChisel and you aren't interested in their sub-structure, you don't need the Segment program at all.
You can ask NoiseChisel to find the connected pixels in the output with the @option{--label} option.
In this case, the output won't be a binary image any more, the signal will have counters/labels starting from 1 for each connected group of pixels.
You can then directly feed NoiseChisel's output into MakeCatalog for measurements over the detections and the production of a catalog (see @ref{MakeCatalog}).
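
For example, the hypothetical command below writes a labeled (not binary) detection map that is ready to be given to MakeCatalog:

@example
$ astnoisechisel input.fits --label --output=labeled.fits
@end example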

Thanks to the published papers mentioned above, there is no need to provide a more complete introduction to NoiseChisel in this book.
However, published papers cannot be updated any more, but the software has evolved/changed.
The changes since publication are documented in @ref{NoiseChisel changes after publication}.
In @ref{Invoking astnoisechisel}, the details of running NoiseChisel and its options are discussed.

As discussed above, detection is one of the most important steps for your scientific result.
It is therefore very important to obtain a good understanding of NoiseChisel (and afterwards @ref{Segment} and @ref{MakeCatalog}).
We strongly recommend reviewing two tutorials of @ref{General program usage tutorial} and @ref{Detecting large extended targets}.
They are designed to show how to most effectively use NoiseChisel for the detection of small faint objects and large extended objects.
In the meantime, they also show the modular principle behind Gnuastro's programs and how they are built to complement, and build upon, each other.

@ref{General program usage tutorial} culminates in using NoiseChisel to detect galaxies and use its outputs to find the galaxy colors.
Defining colors is a very common process in most science-cases.
Therefore it is also recommended to (patiently) complete that tutorial for optimal usage of NoiseChisel in conjunction with all the other Gnuastro programs.
@ref{Detecting large extended targets} shows how you can optimize NoiseChisel's settings for very extended objects, to successfully carve them out of the noise down to signal-to-noise ratio levels below 1/10.
After going through those tutorials, play a little with the settings (in the order presented in the paper and @ref{Invoking astnoisechisel}) on a dataset you are familiar with and inspect all the check images (options starting with @option{--check}) to see the effect of each parameter.

Below, in @ref{Invoking astnoisechisel}, we will review NoiseChisel's input, detection, and output options in @ref{NoiseChisel input}, @ref{Detection options}, and @ref{NoiseChisel output}.
If you have used NoiseChisel within your research, please run it with @option{--cite} to list the papers you should cite and how to acknowledge its funding sources.

@menu
* NoiseChisel changes after publication::  Updates since published papers.
* Invoking astnoisechisel::     Options and arguments for NoiseChisel.
@end menu

@node NoiseChisel changes after publication, Invoking astnoisechisel, NoiseChisel, NoiseChisel
@subsection NoiseChisel changes after publication

NoiseChisel was initially introduced in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and updates after the first four years were published in @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
To help in understanding how it works, those papers have many figures showing every step on multiple mock and real examples.
We recommend reading these papers for a good understanding of what it does and how each parameter influences the output.

However, the papers cannot be updated anymore, while NoiseChisel has evolved (and will continue to do so): better algorithms or steps have been found and implemented and some options have been added, removed, or changed behavior.
This book is thus the definitive and up-to-date guide to NoiseChisel.
The aim of this section is to make the transition from the papers above to the installed version on your system as smooth as possible with the list below.
For a more detailed list of changes in each Gnuastro version, please see the @file{NEWS} file@footnote{The @file{NEWS} file is present in the released Gnuastro tarball, see @ref{Release tarball}.}.

@itemize
@item
An improved outlier rejection for identifying tiles without any signal has been implemented in the quantile-threshold phase:
Prior to version 0.14, outliers were defined globally: the distribution of all tiles with an acceptable @option{--meanmedqdiff} was inspected and outliers were found and rejected.
However, this caused problems when there are strong gradients over the image (for example an image prior to flat-fielding, or in the presence of a large foreground galaxy).
In these cases, the faint wings of galaxies/stars could be mistakenly identified as Sky (leaving a footprint of the object on the Sky output) and wrongly subtracted.

It was possible to play with the parameters to correct this for that particular dataset, but that was frustrating.
Therefore from version 0.14, instead of finding outliers from the full tile distribution, we now measure the @emph{slope} of the tile's nearby tiles and find outliers locally.
For more on the outlier-by-distance algorithm and the definition of @emph{slope}, see @ref{Quantifying signal in a tile}.
In our tests, this gave a much improved estimate of the quantile thresholds and final Sky values with default values.
@end itemize




@node Invoking astnoisechisel,  , NoiseChisel changes after publication, NoiseChisel
@subsection Invoking NoiseChisel

NoiseChisel will detect signal in noise, producing a multi-extension output containing a binary detection map which is the same size as the input.
Its output can be readily used for input into @ref{Segment}, for higher-level segmentation, or @ref{MakeCatalog} to do measurements and generate a catalog.
The executable name is @file{astnoisechisel} with the following general template

@example
$ astnoisechisel [OPTION ...] InputImage.fits
@end example

@noindent
One line examples:

@example
## Detect signal in input.fits.
$ astnoisechisel input.fits

## Inspect all the detection steps after changing a parameter.
$ astnoisechisel input.fits --qthresh=0.4 --checkdetection

## Detect signal assuming input has 4 amplifier channels along first
## dimension and 1 along the second. Also set the regular tile size
## to 100 along both dimensions:
$ astnoisechisel --numchannels=4,1 --tilesize=100,100 input.fits
@end example

@cindex Gaussian
@noindent
If NoiseChisel is to do processing (for example you don't want to get help, or see the values to each input parameter), an input image should be provided with the recognized extensions (see @ref{Arguments}).
NoiseChisel shares a large set of common operations with other Gnuastro programs, mainly regarding input/output, general processing steps, and general operating modes.
To help in a unified experience between all of Gnuastro's programs, these operations have the same command-line options, see @ref{Common options} for a full list/description (they are not repeated here).

As in all Gnuastro programs, options can also be given to NoiseChisel in configuration files.
For a thorough description on Gnuastro's configuration file parsing, please see @ref{Configuration files}.
All of NoiseChisel's options with a short description are also always available on the command-line with the @option{--help} option, see @ref{Getting help}.
To inspect the option values without actually running NoiseChisel, append your command with @option{--printparams} (or @option{-P}).

NoiseChisel's input image may contain blank elements (see @ref{Blank pixels}).
Blank elements will be ignored in all steps of NoiseChisel.
Hence if your dataset has bad pixels which should be masked with a mask image, please use Gnuastro's @ref{Arithmetic} program (in particular its @command{where} operator) to convert those pixels to blank pixels before running NoiseChisel.
Gnuastro's Arithmetic program has bitwise operators helping you select specific kinds of bad-pixels when necessary.

A convolution kernel can also be optionally given.
If a value (file name) is given to @option{--kernel} on the command-line or in a configuration file (see @ref{Configuration files}), then that file will be used to convolve the image prior to thresholding.
Otherwise a default kernel will be used.
The default kernel is a 2D Gaussian with a FWHM of 2 pixels truncated at 5 times the FWHM.
This choice of the default kernel is discussed in Section 3.1.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
See @ref{Convolution kernel} for kernel related options.
Passing @code{none} to @option{--kernel} will disable convolution.
On the other hand, through the @option{--convolved} option, you may provide an already convolved image, see descriptions below for more.

NoiseChisel defines two tessellations over the input (see @ref{Tessellation}).
This enables it to deal with possible gradients in the input dataset and also significantly improve speed by processing each tile on different threads simultaneously.
Tessellation related options are discussed in @ref{Processing options}.
In particular, NoiseChisel uses two tessellations (with everything between them identical except the tile sizes): a fine-grained one with smaller tiles (used in thresholding and Sky value estimations) and another with larger tiles which is used for pseudo-detections over non-detected regions of the image.
The common Tessellation options described in @ref{Processing options} define all parameters of both tessellations.
The large tile size for the latter tessellation is set through the @option{--largetilesize} option.
To inspect the tessellations on your input dataset, run NoiseChisel with @option{--checktiles}.

@cartouche
@noindent
@strong{Usage TIP:} Frequently use the options starting with @option{--check}.
Since the noise properties differ between different datasets, you can often play with the parameters/options for a better result than the default parameters.
You can start with @option{--checkdetection} for the main steps.
For the full list of NoiseChisel's checking options please run:
@example
$ astnoisechisel --help | grep check
@end example
@end cartouche

Below, we'll discuss NoiseChisel's options, classified into two general classes, to help in easy navigation.
@ref{NoiseChisel input} mainly discusses the basic options relating to the inputs and general processing steps prior to detection.
Afterwards, @ref{Detection options} fully describes every configuration parameter (option) related to detection and how they affect the final result.
The order of options in this section follows the logical order within NoiseChisel.
On first reading (while you are still new to NoiseChisel), it is therefore strongly recommended to read the options in the given order below.
The output of @option{--printparams} (or @option{-P}) also has this order.
However, the output of @option{--help} is sorted alphabetically.
Finally, in @ref{NoiseChisel output} the format of NoiseChisel's output is discussed.

@menu
* NoiseChisel input::           NoiseChisel's input options.
* Detection options::           Configure detection in NoiseChisel.
* NoiseChisel output::          NoiseChisel's output options and format.
@end menu

@node NoiseChisel input, Detection options, Invoking astnoisechisel, Invoking astnoisechisel
@subsubsection NoiseChisel input

The options here can be used to configure the inputs and output of NoiseChisel, along with some general processing options.
Recall that you can always see the full list of Gnuastro's options with the @option{--help} (see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see their values (see @ref{Operating mode options}).

@table @option

@item -k STR
@itemx --kernel=STR
File name of kernel to smooth the image before applying the threshold, see @ref{Convolution kernel}.
If no convolution is needed, give this option a value of @option{none}.

The first step of NoiseChisel is to convolve/smooth the image and use the convolved image in multiple steps including the finding and applying of the quantile threshold (see @option{--qthresh}).

The @option{--kernel} option is not mandatory.
If not called, a 2D Gaussian profile with a FWHM of 2 pixels truncated at 5 times the FWHM is used.
This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi and Ichikawa [2015].
You can use MakeProfiles to build a kernel with any of its recognized profile types and parameters.
For more details, please see @ref{MakeProfiles output dataset}.
For example, the command below will make a Moffat kernel (with @mymath{\beta=2.8}) with FWHM of 2 pixels truncated at 10 times the FWHM.

@example
$ astmkprof --oversample=1 --kernel=moffat,2,2.8,10
@end example

Since convolution can be the slowest step of NoiseChisel, for large datasets, you can convolve the image once with Gnuastro's Convolve (see @ref{Convolve}), and use the @option{--convolved} option to feed it directly to NoiseChisel.
This can help getting faster results when you are playing/testing the higher-level options.

@item --khdu=STR
HDU containing the kernel in the file given to the @option{--kernel}
option.

@item --convolved=STR
Use this file as the convolved image and don't do convolution (ignore @option{--kernel}).
NoiseChisel will just check that the size of the given dataset is the same as the input's size.
If a wrong image (with the same size) is given to this option, the results (errors, bugs, etc.) are unpredictable.
So please use this option with care and in a highly controlled environment, for example in the scenarios discussed below.

In almost all situations, as the input gets larger, the single most CPU (and time) consuming step in NoiseChisel (and other programs that need a convolved image) is convolution.
Therefore minimizing the number of convolutions can save a significant amount of time in some scenarios.
One such scenario is when you want to segment NoiseChisel's detections using the same kernel (with @ref{Segment}, which also supports this @option{--convolved} option).
This scenario would require two convolutions of the same dataset: once by NoiseChisel and once by Segment.
Using this option in both programs, only one convolution (prior to running NoiseChisel) is enough.

Another common scenario where this option can be convenient is when you are testing NoiseChisel (or Segment) for the best parameters.
You have to run NoiseChisel multiple times and see the effect of each change.
However, once you are happy with the kernel, re-convolving the input on every change of higher-level parameters will greatly hinder, or discourage, further testing.
With this option, you can convolve the input image with your chosen kernel once before running NoiseChisel, then feed it to NoiseChisel on each test run and thus save valuable time for better/more tests.

To build your desired convolution kernel, you can use @ref{MakeProfiles}.
To convolve the image with a given kernel you can use @ref{Convolve}.
Spatial domain convolution is mandatory: in the frequency domain, blank pixels (if present) will cover the whole image and gradients will appear on the edges, see @ref{Spatial vs. Frequency domain}.

Below you can see an example of the second scenario: you want to see how variation of the growth level (through the @option{--detgrowquant} option) will affect the final result.
Recall that you can ignore all the extra spaces, new lines, and backslashes (`@code{\}') if you are typing in the terminal.
In a shell script, remove the @code{$} signs at the start of the lines.

@example
## Make the kernel to convolve with.
$ astmkprof --oversample=1 --kernel=gaussian,2,5

## Convolve the input with the given kernel.
$ astconvolve input.fits --kernel=kernel.fits                \
              --domain=spatial --output=convolved.fits

## Run NoiseChisel with seven growth quantile values.
$ for g in 60 65 70 75 80 85 90; do                          \
    astnoisechisel input.fits --convolved=convolved.fits     \
                   --detgrowquant=0.$g --output=$g.fits;     \
  done
@end example



@item --chdu=STR
The HDU/extension containing the convolved image in the file given to @option{--convolved}.

@item -w STR
@itemx --widekernel=STR
File name of a wider kernel to use in estimating the difference of the mode and median in a tile (this difference is used to identify the significance of signal in that tile, see @ref{Quantifying signal in a tile}).
As displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, a wider kernel will help in identifying the skewness caused by data in noise.
The image that is convolved with this kernel is @emph{only} used for this purpose.
Once the mode is found to be sufficiently close to the median, the quantile threshold is found on the image convolved with the sharper kernel (@option{--kernel}, see @option{--qthresh}).

Since convolution will significantly slow down the processing, this feature is optional.
When it isn't given, the image that is convolved with @option{--kernel} will be used to identify good tiles @emph{and} apply the quantile threshold.
This option is mainly useful in conditions where you have a very large, extended, diffuse signal that is still present in the usable tiles when using @option{--kernel}.
See @ref{Detecting large extended targets} for a practical demonstration on how to inspect the tiles used in identifying the quantile threshold.

@item --whdu=STR
HDU containing the kernel file given to the @option{--widekernel} option.

@item -L INT[,INT]
@itemx --largetilesize=INT[,INT]
The size of each tile for the tessellation with the larger tile sizes.
Except for the tile size, all the other parameters for this tessellation are taken from the common options described in @ref{Processing options}.
The format is identical to that of the @option{--tilesize} option that is discussed in that section.
@end table

@node Detection options, NoiseChisel output, NoiseChisel input, Invoking astnoisechisel
@subsubsection Detection options

Detection is the process of separating the pixels in the image into two groups: 1) Signal, and 2) Noise.
Through the parameters below, you can customize the detection process in NoiseChisel.
Recall that you can always see the full list of NoiseChisel's options with the @option{--help} (see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see their values (see @ref{Operating mode options}).

@table @option

@item -Q FLT
@itemx --meanmedqdiff=FLT
The maximum acceptable distance between the quantiles of the mean and median in each tile, see @ref{Quantifying signal in a tile}.
The quantile threshold estimates are measured on tiles where the quantiles of their mean and median are less distant than the value given to this option.
For example @option{--meanmedqdiff=0.01} means that only tiles where the mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 0.5) will be used.
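
For example, if the default value appears too strict for your dataset, you might loosen it slightly and inspect the tiles that pass the test (the file name here is hypothetical):

@example
$ astnoisechisel input.fits --meanmedqdiff=0.02 --checkqthresh
@end example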

@item --outliersclip=FLT,FLT
@mymath{\sigma}-clipping parameters for the outlier rejection of the quantile threshold.
The format of the given values is similar to @option{--sigmaclip} below.
In NoiseChisel, outlier rejection on tiles is used when identifying the quantile thresholds (@option{--qthresh}, @option{--noerodequant}, and @option{--detgrowquant}).

Outlier rejection is useful when the dataset contains a large and diffuse (almost flat within each tile) signal.
The flatness of the profile will cause it to successfully pass the mean-median quantile difference test, so we'll need to use the distribution of successful tiles for removing these false positives.
For more, see the latter half of @ref{Quantifying signal in a tile}.

@item --outliersigma=FLT
Multiple of sigma to define an outlier.
If this option is given a value of zero, no outlier rejection will take place.
For more see @option{--outliersclip} and the latter half of @ref{Quantifying signal in a tile}.

@item -t FLT
@itemx --qthresh=FLT
The quantile threshold to apply to the convolved image.
The detection process begins with applying a quantile threshold to each of the tiles in the small tessellation.
The quantile is only calculated for tiles that don't have any significant signal within them, see @ref{Quantifying signal in a tile}.
Interpolation is then used to give a value to the unsuccessful tiles, and the resulting threshold grid is finally smoothed.

@cindex Quantile
@cindex Binary image
@cindex Foreground pixels
@cindex Background pixels
The quantile value is a floating point value between 0 and 1.
Assume that we have sorted the @mymath{N} data elements of a distribution (the pixels in each tile of the convolved image).
The quantile (@mymath{q}) of this distribution is the value of the element with an index of (the nearest integer to) @mymath{q\times{N}} in the sorted data set.
For example, in a tile with @mymath{N=100} sorted pixels, @option{--qthresh=0.3} corresponds to the value of the 30th element.
After thresholding is complete, we will have a binary (two valued) image.
The pixels above the threshold are known as foreground pixels (with a value of 1) while those below the threshold are known as background pixels (with a value of 0).

@item --smoothwidth=INT
Width of flat kernel used to smooth the interpolated quantile thresholds, see @option{--qthresh} for more.

@cindex NaN
@item --checkqthresh
Check the quantile threshold values on the mesh grid.
A file suffixed with @file{_qthresh.fits} will be created showing each step.
With this option, NoiseChisel will abort as soon as quantile estimation has been completed, allowing you to inspect the steps leading to the final quantile threshold; this behavior can be disabled with @option{--continueaftercheck}.
By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).

@item --blankasforeground
In the erosion and opening steps below, treat blank elements as foreground (regions above the threshold).
By default, blank elements in the dataset are considered to be background, so if a foreground pixel is touching it, it will be eroded.
This option is irrelevant if the dataset contains no blank elements.

When there are many blank elements in the dataset, treating them as foreground will systematically erode their regions less, therefore systematically creating more false positives.
So use this option (when blank values are present) with care.

@item -e INT
@itemx --erode=INT
@cindex Erosion
The number of erosions to apply to the binary thresholded image.
Erosion is simply the process of flipping (from 1 to 0) any of the foreground pixels that neighbor a background pixel.
In a 2D image, there are two kinds of neighbors, 4-connected and 8-connected neighbors.
You can specify which type of neighbors should be used for erosion with the @option{--erodengb} option, see below.

Erosion has the effect of shrinking the foreground pixels.
To put it another way, it expands the holes.
This is a founding principle in NoiseChisel: it exploits the fact that with very low thresholds, the holes in the very low surface brightness regions of an image will be smaller than in regions that have no signal.
Therefore by expanding those holes, we are able to separate the regions harboring signal.

@item --erodengb=INT
The type of neighborhood (structuring element) used in erosion, see @option{--erode} for an explanation on erosion.
Only two integer values are acceptable: 4 or 8.
In 4-connectivity, the neighbors of a pixel are defined as the four pixels on the top, bottom, right and left of a pixel that share an edge with it.
The 8-connected neighbors on the other hand include the 4-connected neighbors along with the other 4 pixels that share a corner with this pixel.
See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015) for a demonstration.

@item --noerodequant
Pure erosion is going to carve off sharp and small objects completely out of the detected regions.
This option can be used to avoid missing such sharp and small objects (which have significant pixels, but not over a large area).
All pixels with a value larger than the significance level specified by this option will not be eroded during the erosion step above.
However, they will undergo the erosion and dilation of the opening step below.

Like the @option{--qthresh} option, the significance level is determined using the quantile (a value between 0 and 1).
Just as a reminder, in the normal distribution, @mymath{1\sigma}, @mymath{1.5\sigma}, and @mymath{2\sigma} are approximately on the 0.84, 0.93, and 0.98 quantiles.
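
For example, based on the approximate quantiles above, a command like the one below (with a hypothetical input) would keep all pixels above roughly @mymath{2\sigma} safe from erosion:

@example
$ astnoisechisel input.fits --noerodequant=0.98
@end example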

@item -p INT
@itemx --opening=INT
Depth of opening to be applied to the eroded binary image.
Opening is a composite operation.
When opening a binary image with a depth of @mymath{n}, @mymath{n} erosions (explained in @option{--erode}) are followed by @mymath{n} dilations.
Simply put, dilation is the inverse of erosion.
When dilating an image, any background pixel that neighbors a foreground pixel is flipped (from 0 to 1) to become a foreground pixel.
Dilation has the effect of fattening the foreground.
Note that in NoiseChisel, the erosion which is part of opening is independent of the initial erosion that is done on the thresholded image (explained in @option{--erode}).
The structuring element for the opening can be specified with the @option{--openingngb} option.
Opening has the effect of removing the thin foreground connections (mostly noise) between separate foreground `islands' (detections) thereby completely isolating them.
Once opening is complete, we have @emph{initial} detections.

@item --openingngb=INT
The structuring element used for opening, see @option{--erodengb} for more information about a structuring element.

@item --skyfracnoblank
Ignore blank pixels when estimating the fraction of undetected pixels for Sky estimation.
NoiseChisel only measures the Sky over the tiles that have a sufficiently large fraction of undetected pixels (value given to @option{--minskyfrac}).
By default this fraction is found by dividing the number of undetected pixels in a tile by the tile's area.
But this default behavior ignores the possibility of blank pixels.
In situations where blank/masked pixels are scattered across the image, and if they cover a large enough area, all the tiles can fail the @option{--minskyfrac} test, thus not allowing NoiseChisel to proceed.
With this option, such scenarios can be fixed: the denominator of the fraction will be the number of non-blank elements in the tile, not the total tile area.

@item -B FLT
@itemx --minskyfrac=FLT
Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a tile.
Only tiles with a fraction of undetected pixels (Sky) larger than this value will be used to estimate the Sky value.
NoiseChisel uses this option value twice to estimate the Sky value: after initial detections and in the end when false detections have been removed.

Because of the PSF and their intrinsic amorphous properties, astronomical objects (except cosmic rays) never have a clear cutoff and commonly sink into the noise very slowly, even below the very low thresholds used by NoiseChisel.
So when a large fraction of the area of one tile is covered by detections, it is very plausible that their faint wings are present in the undetected regions (hence causing a bias in any measurement).
To get an accurate Sky measurement over the tessellation, tiles that harbor too many detected regions should therefore be excluded.
The used tiles are visible in the respective @option{--check} option of the given step.

@item --checkdetsky
Check the initial approximation of the sky value and its standard deviation in a FITS file ending with @file{_detsky.fits}.
With this option, NoiseChisel will abort as soon as the sky value used for defining pseudo-detections is complete.
This allows you to inspect the steps leading to the initial Sky approximation; this behavior can be disabled with @option{--continueaftercheck}.
By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).

@item -s FLT,FLT
@itemx --sigmaclip=FLT,FLT
The @mymath{\sigma}-clipping parameters for measuring the initial and final Sky values from the undetected pixels, see @ref{Sigma clipping}.

This option takes two values which are separated by a comma (@key{,}).
Each value can either be written as a single number or as a fraction of two numbers (for example @code{3,1/10}).
The first value to this option is the multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that section).
The second value is the exit criteria.
If it is less than 1, it is interpreted as a tolerance, and if it is larger than 1, it is assumed to be the fixed number of iterations.
Hence, in the latter case the value must be an integer.
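
For example, both of the (hypothetical) commands below are valid: the first clips at @mymath{3\sigma} until the tolerance of 0.1 is reached, while the second does exactly five clipping iterations:

@example
$ astnoisechisel input.fits --sigmaclip=3,0.1
$ astnoisechisel input.fits --sigmaclip=3,5
@end example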

@item -R FLT
@itemx --dthresh=FLT
The detection threshold: a multiple of the initial Sky standard deviation added to the initial Sky approximation (which you can inspect with @option{--checkdetsky}).
This flux threshold is applied to the initially undetected regions on the unconvolved image.
The background pixels that are completely engulfed in a 4-connected foreground region are converted to foreground (the holes are filled) and one opening (depth of 1) is applied over both the initially detected and undetected regions.
The Signal to noise ratio of the resulting `pseudo-detections' is used to identify true vs. false detections.
See Section 3.1.5 and Figure 7 in Akhlaghi and Ichikawa (2015) for a very complete explanation.

@item --dopening=INT
The number of openings to do after applying @option{--dthresh}.

@item --dopeningngb=INT
The connectivity used in the opening of @option{--dopening}.
In a 2D image this must be either 4 or 8.
The stronger the connectivity, the more small regions will be discarded.

@item --holengb=INT
The connectivity (defined by the number of neighbors) to fill holes after applying @option{--dthresh} (above) to find pseudo-detections.
For example in a 2D image it must be 4 (the neighbors that are most strongly connected) or 8 (all neighbors).
The stronger the connectivity, the more strongly the hole must be enclosed for it to be filled.
So setting a value of 8 in a 2D image means that the walls of the hole are 4-connected.
If standard (near Sky level) values are given to @option{--dthresh}, setting @option{--holengb=4} might fill the complete dataset and thus not create enough pseudo-detections.

@item --pseudoconcomp=INT
The connectivity (defined by the number of neighbors) to find individual pseudo-detections.
If it is a weaker connectivity (4 in a 2D image), then pseudo-detections that are connected on the corners will be treated as separate.

@item -m INT
@itemx --snminarea=INT
The minimum area to calculate the Signal to noise ratio on the pseudo-detections of both the initially detected and undetected regions.
When the area in a pseudo-detection is too small, the Signal to noise ratio measurements will not be accurate and their distribution will be heavily skewed to the positive.
So it is best to ignore any pseudo-detection that is smaller than this area.
Use @option{--checksn} to check if this value is reasonable or not.

@item --checksn
Save the S/N values of the pseudo-detections (and possibly grown detections if @option{--cleangrowndet} is called) into separate tables.
If @option{--tableformat} is a FITS table, each table will be written into a separate extension of one file suffixed with @file{_detsn.fits}.
If it is plain text, a separate file will be made for each table (ending in @file{_detsn_sky.txt}, @file{_detsn_det.txt} and @file{_detsn_grown.txt}).
For more on @option{--tableformat} see @ref{Input output options}.

You can use these to inspect the S/N values and their distribution (in combination with the @option{--checkdetection} option to see where the pseudo-detections are).
You can use Gnuastro's @ref{Statistics} program to make a histogram of the distribution, or do any other analysis you would like, for a better understanding of the distribution.
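
For example, the commands below sketch one way to do this (the input file name is hypothetical and the S/N column's name is an assumption, so it is first checked with Table's @option{--info}):

@example
## Save the S/N tables as plain text (aborts afterwards).
$ astnoisechisel input.fits --checksn --tableformat=txt

## See the column metadata of the sky pseudo-detections.
$ asttable input_detsn_sky.txt --info

## Histogram of the S/N column (assuming it is called 'SN').
$ aststatistics input_detsn_sky.txt --column=SN --histogram
@end example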

@item --minnumfalse=INT
The minimum number of `pseudo-detections' over the undetected regions to identify a Signal-to-Noise ratio threshold.
The Signal to noise ratio (S/N) of false pseudo-detections in each tile is found using the quantile of the S/N distribution of the pseudo-detections over the undetected pixels in that tile.
If the number of S/N measurements is not large enough, the quantile will not be accurate (can have large scatter).
For example if you set @option{--snquant=0.99} (or the top 1 percent), then it is best to have at least 100 S/N measurements.

@item -c FLT
@itemx --snquant=FLT
The quantile of the Signal to noise ratio distribution of the pseudo-detections in each mesh to use for filling the large mesh grid.
Note that this is only calculated for the large tiles that satisfy the minimum fraction of undetected pixels (value of @option{--minskyfrac}) and minimum number of pseudo-detections (value of @option{--minnumfalse}).

@item --snthresh=FLT
Manually set the signal-to-noise ratio of true pseudo-detections.
With this option, NoiseChisel will not attempt to find pseudo-detections over the noisy regions of the dataset, but will directly apply the manually given value.

This option is useful in crowded images where there is no blank sky to find the sky pseudo-detections.
You can get this value from a similarly reduced dataset (from another region of the sky with more undetected regions).

@item -d FLT
@itemx --detgrowquant=FLT
Quantile limit to ``grow'' the final detections.
As discussed in the previous options, after applying the initial quantile threshold, layers of pixels are carved off the objects to identify true signal.
With this step you can return those low surface brightness layers that were carved off back to the detections.
To disable growth, set the value of this option to @code{1}.

The process is as follows: after the true detections are found, all the non-detected pixels above this quantile will be put in a list and used to ``grow'' the true detections (seeds of the growth).
Like all quantile thresholds, this threshold is defined and applied to the convolved dataset.
Afterwards, the dataset is dilated once (with minimum connectivity) to connect very thin regions on the boundary: imagine building a dam at the point rivers spill into an open sea/ocean.
Finally, all holes are filled.
In the geography metaphor, holes can be seen as the closed (by the dams) rivers and lakes, so this process is like turning the water in all such rivers and lakes into soil.
See @option{--detgrowmaxholesize} for configuring the hole filling.

@item --detgrowmaxholesize=INT
The maximum hole size to fill during the final expansion of the true detections as described in @option{--detgrowquant}.
This is necessary when the input contains many smaller objects and can be used to avoid marking blank sky regions as detections.

For example multiple galaxies can be positioned such that they surround an empty region of sky.
If all the holes are filled, the Sky region in between them will be taken as a detection which is not desired.
To avoid such cases, the integer given to this option must be smaller than the hole between such objects.
However, we should caution that unless the ``hole'' is very large, the combined faint wings of the galaxies might actually be present in between them, so be very careful in not filling such holes.

On the other hand, if you have a very large (and extended) galaxy, the diffuse wings of the galaxy may create very large holes over the detections.
In such cases, a large enough value to this option will cause all such holes to be detected as part of the large galaxy and thus help in detecting it to extremely low surface brightness limits.
Therefore, especially when large and extended objects are present in the image, it is recommended to give this option (very) large values.
For one real-world example, see @ref{Detecting large extended targets}.
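
As a sketch of such a configuration (the file name and exact values are hypothetical, to be tuned on your own dataset), the command below grows the detections down to the 0.75 quantile and fills any hole smaller than 10000 pixels:

@example
$ astnoisechisel input.fits --detgrowquant=0.75 \
                 --detgrowmaxholesize=10000
@end example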

@item --cleangrowndet
After dilation, if the signal-to-noise ratio of a detection is less than the derived pseudo-detection S/N limit, that detection will be discarded.
In ideal/clean noise, a true detection's S/N should be larger than its constituent pseudo-detections because its area is larger and it also covers more signal.
However, on false detections (especially at lower @option{--snquant} values), the increase in size can cause a decrease in S/N below that threshold.

This will improve purity and not change completeness (a true detection will not be discarded): because a true detection has flux in its vicinity, growth will catch more of that flux and increase the S/N.
So on a true detection, the final S/N cannot be less than that of its pseudo-detections.

However, in many real images bad processing creates artifacts that cannot be accurately removed by the Sky subtraction.
In such cases, this option will decrease the completeness (will artificially discard true detections).
So this feature is not activated by default and should only be explicitly called when you know the noise is clean.


@item --checkdetection
Every step of the detection process will be added as an extension to a file with the suffix @file{_det.fits}.
Going through each would just be a repeat of the explanations above and also of those in Akhlaghi and Ichikawa (2015).
The extension label should be sufficient to recognize which step you are observing.
Viewing all the steps can be the best guide in choosing the best set of parameters.
With this option, NoiseChisel will abort as soon as the snapshots of all the detection steps have been saved.
This behavior can be disabled with @option{--continueaftercheck}.

@item --checksky
Check the derivation of the final sky and its standard deviation values on the mesh grid.
With this option, NoiseChisel will abort as soon as the sky value is estimated over the image (on each tile).
This behavior can be disabled with @option{--continueaftercheck}.
By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).

@end table





@node NoiseChisel output,  , Detection options, Invoking astnoisechisel
@subsubsection NoiseChisel output

NoiseChisel's output is a multi-extension FITS file.
The main extension/dataset is a (binary) detection map.
It has the same size as the input but with only two possible values for all pixels: 0 (for pixels identified as noise) and 1 (for those identified as signal/detections).
The detection map is followed by a Sky and Sky standard deviation dataset (which are calculated from the binary image).
By default (when @option{--rawoutput} isn't called), NoiseChisel will also subtract the Sky value from the input and save the sky-subtracted input as the first extension in the output with data.
The zero-th extension (which contains no data) holds NoiseChisel's configuration as FITS keywords, see @ref{Output FITS files}.

The name of the output file can be set by giving a value to @option{--output} (this is a common option between all programs and is therefore discussed in @ref{Input output options}).
If @option{--output} isn't used, the input name will be suffixed with @file{_detected.fits} and used as output, see @ref{Automatic output}.
If any of the options starting with @option{--check*} are given, NoiseChisel won't complete and will abort as soon as the respective check images are created.
For more information on the different check images, see the description for the @option{--check*} options in @ref{Detection options} (this can be disabled with @option{--continueaftercheck}).

The last two extensions of the output are the Sky and its Standard deviation, see @ref{Sky value} for a complete explanation.
They are calculated on the tile grid that you defined for NoiseChisel.
By default these datasets will have the same size as the input, but with all the pixels in one tile given one value.
To be more space-efficient (keep only one pixel per tile), you can use the @option{--oneelempertile} option, see @ref{Tessellation}.

@cindex GNOME
To inspect any of NoiseChisel's output files, assuming you use SAO DS9, you can configure your Graphical User Interface (GUI) to open NoiseChisel's output as a multi-extension data cube.
This will allow you to flip through the different extensions and visually inspect the results.
This process has been described for the GNOME GUI (most common GUI in GNU/Linux operating systems) in @ref{Viewing multiextension FITS images}.
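
For example, assuming your output is called @file{nc.fits}, a command like the one below will open all its extensions as a data cube in SAO DS9 (if it is installed on your system):

@example
$ ds9 -mecube nc.fits -zscale -zoom to fit
@end example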

NoiseChisel's output configuration options are described in detail below.

@table @option
@item --continueaftercheck
Continue NoiseChisel after any of the options starting with @option{--check} (see @ref{Detection options}).
NoiseChisel involves many steps and as a result, there are many checks, allowing you to inspect the status of the processing.
The results of each step affect the next steps of processing.
Therefore, when you want to check the status of the processing at one step, the time spent to complete NoiseChisel is just wasted/distracting time.

To encourage easier experimentation with the option values, when you use any of the NoiseChisel options that start with @option{--check}, NoiseChisel will abort once its desired extensions have been written.
With the @option{--continueaftercheck} option, you can disable this behavior and ask NoiseChisel to continue with the rest of the processing, even after the requested check files are complete.

@item --ignoreblankintiles
Don't set the input's blank pixels to blank in the tiled outputs (for example Sky and Sky standard deviation extensions of the output).
This is only applicable when the tiled output has the same size as the input, in other words, when @option{--oneelempertile} isn't called.

By default, blank values in the input (commonly on the edges which are outside the survey/field area) will be set to blank in the tiled outputs also.
But in other scenarios this default behavior is not desired: for example, if you have masked something in the input, but want the tiled output to also cover the masked region.

@item -l
@itemx --label
Run a connected-components algorithm on the finally detected pixels to identify which pixels are connected to which.
By default the main output is a binary dataset with only two values: 0 (for noise) and 1 (for signal/detections).
See @ref{NoiseChisel output} for more.

The purpose of NoiseChisel is to detect targets that are extended and diffuse, with outer parts that sink into the noise very gradually (galaxies and stars for example).
Since NoiseChisel digs down to extremely low surface brightness values, many such targets will commonly be detected together as a single large body of connected pixels.

To properly separate connected objects, sophisticated segmentation methods are commonly necessary on NoiseChisel's output.
Gnuastro has the dedicated @ref{Segment} program for this job.
Since input images are commonly large and can take a significant volume, the extra volume necessary to store the labels of the connected components in the detection map (which will be created with this @option{--label} option, in 32-bit signed integer type) can be a major waste of space.
Since the default output is just a binary dataset, an 8-bit unsigned dataset is enough.

The binary output will also encourage users to segment the result separately prior to doing higher-level analysis.
As an alternative to @option{--label}, if you have the binary detection image, you can use the @code{connected-components} operator in Gnuastro's Arithmetic program to identify regions that are connected with each other.
For example with this command (assuming NoiseChisel's output is called @file{nc.fits}):

@example
$ astarithmetic nc.fits 2 connected-components -hDETECTIONS
@end example

@item --rawoutput
Don't include the Sky-subtracted input image as the first extension of the output.
By default, the Sky-subtracted input is put in the first extension of the output.
The next extensions are NoiseChisel's main outputs described above.

The extra Sky-subtracted input can be convenient in checking NoiseChisel's output and comparing the detection map with the input: visually see if everything you expected is detected (reasonable completeness) and that you don't have too many false detections (reasonable purity).
This visual inspection is simplified if you use SAO DS9 to view NoiseChisel's output as a multi-extension data-cube, see @ref{Viewing multiextension FITS images}.

When you are satisfied with your NoiseChisel configuration (therefore you don't need to check on every run), or you want to archive/transfer the outputs, or the datasets become large, or you are running NoiseChisel as part of a pipeline, this Sky-subtracted input image can be a significant burden (take up a large volume).
The fact that the input is also noisy makes it hard to compress efficiently.

In such cases, this @option{--rawoutput} option can be used to avoid the extra Sky-subtracted input in the output.
It is always possible to easily produce the Sky-subtracted dataset from the input (assuming it is in extension @code{1} of @file{in.fits}) and the @code{SKY} extension of NoiseChisel's output (let's call it @file{nc.fits}) with a command like below (assuming NoiseChisel wasn't run with @option{--oneelempertile}, see @ref{Tessellation}):

@example
$ astarithmetic in.fits nc.fits - -h1 -hSKY
@end example
@end table

@cartouche
@noindent
@cindex Compression
@strong{Save space:} with the @option{--rawoutput} and @option{--oneelempertile} options, NoiseChisel's output will only be one binary detection map and two much smaller arrays with one value per tile.
Since none of these have noise, they can be compressed very effectively (without any loss of data) with exceptionally high compression ratios.
This makes it easy to archive, or transfer, NoiseChisel's output even on huge datasets.
To compress it with the most efficient method (take up less volume), run the following command:

@cindex GNU Gzip
@example
$ gzip --best noisechisel_output.fits
@end example

@noindent
The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's programs directly, or viewed in viewers like SAO DS9, without having to decompress it separately (they will just take a little longer, because they have to internally decompress it before starting).
See @ref{NoiseChisel optimization for storage} for an example on a real dataset.
@end cartouche










@node Segment, MakeCatalog, NoiseChisel, Data analysis
@section Segment

Once signal is separated from noise (for example with @ref{NoiseChisel}), you have a binary dataset: each pixel is either signal (1) or noise (0).
Signal (for example every galaxy in your image) has been ``detected'', but all detections have a label of 1.
Therefore while we know which pixels contain signal, we still can't find out how many galaxies they contain or which detected pixels correspond to which galaxy.
At the lowest (most generic) level, detection is a kind of segmentation (segmenting the whole dataset into signal and noise, see @ref{NoiseChisel}).
Here, we'll define segmentation only on signal: to separate sub-structure within the detections.

@cindex Connected component labeling
If the targets are clearly separated, or their detected regions aren't touching, a simple connected components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}} algorithm (very basic segmentation) is enough to separate the regions that are touching/connected.
This is such a basic and simple form of segmentation that Gnuastro's Arithmetic program has an operator for it: see @code{connected-components} in @ref{Arithmetic operators}.
Assuming the binary dataset is called @file{binary.fits}, you can use it with a command like this:

@example
$ astarithmetic binary.fits 2 connected-components
@end example

@noindent
You can even do a very basic detection (a threshold, say at value
@code{100}) @emph{and} segmentation in Arithmetic with a single command
like below:

@example
$ astarithmetic in.fits 100 gt 2 connected-components
@end example

However, in most astronomical situations our targets are not nicely separated or have a sharp boundary/edge (for a threshold to suffice): they touch (for example merging galaxies), or are simply in the same line-of-sight (which is much more common).
This causes their images to overlap.

In particular, when you do your detection with NoiseChisel, you will detect signal to very low surface brightness limits: deep into the faint wings of galaxies or bright stars (which can extend very far and irregularly from their center).
Therefore, it often happens that several galaxies are detected as one large detection.
Since they are touching, a simple connected components algorithm will not suffice.
It is therefore necessary to do a more sophisticated segmentation and break up the detected pixels (even those that are touching) into multiple target objects as accurately as possible.

Segment will use a detection map and its corresponding dataset to find sub-structure over the detected areas and use them for its segmentation.
Until Gnuastro version 0.6 (released in 2018), Segment was part of @ref{NoiseChisel}.
Therefore, similar to NoiseChisel, the best place to start reading about Segment and understanding what it does (with many illustrative figures) is Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, and continue with @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.

@cindex river
@cindex Watershed algorithm
As a summary, Segment first finds true @emph{clump}s over the detections.
Clumps are associated with local maxima/minima@footnote{By default the maximum is used as the first clump pixel, to define clumps based on local minima, use the @option{--minima} option.} and extend over the neighboring pixels until they reach a local minimum/maximum (@emph{river}/@emph{watershed}).
By default, Segment will use the distribution of clump signal-to-noise ratios over the undetected regions as reference to find ``true'' clumps over the detections.
Using the undetected regions can be disabled by directly giving a signal-to-noise ratio to @option{--clumpsnthresh}.

The true clumps are then grown to a certain threshold over the detections.
Based on the strength of the connections (rivers/watersheds) between the grown clumps, they are considered parts of one @emph{object} or as separate @emph{object}s.
See Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} for more.
Segment's main outputs are thus two labeled datasets: 1) clumps, and 2) objects.
See @ref{Segment output} for more.

To start learning about Segment, especially in relation to detection (@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
If you have used Segment within your research, please run it with @option{--cite} to list the papers you should cite and how to acknowledge its funding sources.

Those papers cannot be updated any more but the software will evolve.
For example Segment became a separate program (from NoiseChisel) in 2018 (after those papers were published).
Therefore this book is the definitive reference.
@c To help in the transition from those papers to the software you are using, see @ref{Segment changes after publication}.
Finally, in @ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs and configuration options.


@menu
* Invoking astsegment::         Inputs, outputs and options to Segment
@end menu

@c @node Segment changes after publication, Invoking astsegment, Segment, Segment
@c @subsection Segment changes after publication

@c Segment's main algorithm and working strategy were initially defined and introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
@c It is strongly recommended to read those papers for a good understanding of what Segment does, how it relates to detection, and how each parameter influences the output.
@c They have many figures showing every step on multiple mock and real examples.

@c However, the papers cannot be updated anymore, but Segment has evolved (and will continue to do so): better algorithms or steps have been (and will be) found.
@c This book is thus the final and definitive guide to Segment.
@c The aim of this section is to make the transition from the paper to your installed version, as smooth as possible through the list below.
@c For a more detailed list of changes in previous Gnuastro releases/versions, please follow the @file{NEWS} file@footnote{The @file{NEWS} file is present in the released Gnuastro tarball, see @ref{Release tarball}.}.

@node Invoking astsegment,  , Segment, Segment
@subsection Invoking Segment

Segment will identify substructure within the detected regions of an input image.
Segment's output labels can be directly used for measurements (for example with @ref{MakeCatalog}).
The executable name is @file{astsegment} with the following general template:

@example
$ astsegment [OPTION ...] InputImage.fits
@end example

@noindent
One line examples:

@example
## Segment NoiseChisel's detected regions.
$ astsegment default-noisechisel-output.fits

## Use a hand-input S/N value for keeping true clumps
## (avoid finding the S/N using the undetected regions).
$ astsegment nc-out.fits --clumpsnthresh=10

## Inspect all the segmentation steps after changing a parameter.
$ astsegment input.fits --snquant=0.9 --checksegmentation

## Use the fixed value of 0.01 for the input's Sky standard deviation
## (in the units of the input), and assume all the pixels are a
## detection (for example a large structure extending over the whole
## image), and only keep clumps with S/N>10 as true clumps.
$ astsegment in.fits --std=0.01 --detection=all --clumpsnthresh=10
@end example

@cindex Gaussian
@noindent
If Segment is to do processing (for example you don't want to get help, or see the values of each option), at least one input dataset is necessary along with detection and error information, either as separate datasets (per-pixel) or fixed values, see @ref{Segment input}.
Segment shares a large set of common operations with other Gnuastro programs, mainly regarding input/output, general processing steps, and general operating modes.
To help in a unified experience between all of Gnuastro's programs, these common operations have the same names and are defined in @ref{Common options}.

As in all Gnuastro programs, options can also be given to Segment in configuration files.
For a thorough description of Gnuastro's configuration file parsing, please see @ref{Configuration files}.
All of Segment's options with a short description are also always available on the command-line with the @option{--help} option, see @ref{Getting help}.
To inspect the option values without actually running Segment, append your command with @option{--printparams} (or @option{-P}).

To help in easy navigation between Segment's options, they are separately discussed in the three sub-sections below: @ref{Segment input} discusses how you can customize the inputs to Segment.
@ref{Segmentation options} is devoted to options specific to the high-level segmentation process.
Finally, in @ref{Segment output}, we'll discuss options that affect Segment's output.

@menu
* Segment input::               Input files and options.
* Segmentation options::        Parameters of the segmentation process.
* Segment output::              Outputs of Segment.
@end menu

@node Segment input, Segmentation options, Invoking astsegment, Invoking astsegment
@subsubsection Segment input

Besides the input dataset (for example astronomical image), Segment also needs to know the Sky standard deviation and the regions of the dataset that it should segment.
The values dataset is assumed to be Sky subtracted by default.
If it isn't, you can ask Segment to subtract the Sky internally by calling @option{--sky}.
For the rest of this discussion, we'll assume it is already sky subtracted.

The Sky and its standard deviation can be a single value (to be used for the whole dataset) or a separate dataset (for a separate value per pixel).
If a dataset is used for the Sky and its standard deviation, they must either be the size of the input image, or have a single value per tile (generated with @option{--oneelempertile}, see @ref{Processing options} and @ref{Tessellation}).

The detected regions/pixels can be specified as a detection map (for example see @ref{NoiseChisel output}).
If @option{--detection=all}, Segment won't read any detection map and assume the whole input is a single detection.
For example when the dataset is fully covered by a large nearby galaxy/globular cluster.

When datasets are to be used for any of these inputs, Segment will assume they are multiple extensions of a single file by default (when @option{--std} or @option{--detection} aren't called), for example NoiseChisel's default output (see @ref{NoiseChisel output}).
When the Sky-subtracted values are in one file, and the detection and Sky standard deviation are in another, you just need to use @option{--detection}: in the absence of @option{--std}, Segment will look for both the detection labels and Sky standard deviation in the file given to @option{--detection}.
Ultimately, if all three are in separate files, you need to call both @option{--detection} and @option{--std}.

The extensions of the three mandatory inputs can be specified with @option{--hdu}, @option{--dhdu}, and @option{--stdhdu}.
For a full discussion on what to give to these options, see the description of @option{--hdu} in @ref{Input output options}.
To see their default values (along with all the other options), run Segment with the @option{--printparams} (or @option{-P}) option.
Just recall that in the absence of @option{--detection} and @option{--std}, all three are assumed to be in the same file.
If you only want to see Segment's default values for HDUs on your system, run this command:

@example
$ astsegment -P | grep hdu
@end example

By default Segment will convolve the input with a kernel to improve the signal-to-noise ratio of true peaks.
If you already have the convolved input dataset, you can pass it directly to Segment for faster processing (using the @option{--convolved} and @option{--chdu} options).
Just don't forget that the convolved image must also be Sky-subtracted before calling Segment.
If a value/file is given to @option{--sky}, the convolved values will also be Sky subtracted internally.
Alternatively, if you prefer to give a kernel (with @option{--kernel} and @option{--khdu}), Segment can do the convolution internally.
To disable convolution, use @option{--kernel=none}.

@table @option

@item --sky=STR/FLT
The Sky value(s) to subtract from the input.
This option can either be given a constant number or a file name containing a dataset (multiple values, per pixel or per tile).
By default, Segment will assume the input dataset is Sky subtracted, so this option is not mandatory.

If the value can't be read as a number, it is assumed to be a file name.
When the value is a file, the extension can be specified with @option{--skyhdu}.
When it is not a single number, the given dataset must either have the same size as the output or the same size as the tessellation (so there is one pixel per tile, see @ref{Tessellation}).

When this option is given, its value(s) will be subtracted from the input and the (optional) convolved dataset (given to @option{--convolved}) prior to starting the segmentation process.

@item --skyhdu=STR/INT
The HDU/extension containing the Sky values.
This is mandatory when the value given to @option{--sky} is not a number.
Please see the description of @option{--hdu} in @ref{Input output options} for the different ways you can identify a special extension.

@item --std=STR/FLT
The Sky standard deviation value(s) corresponding to the input.
The value can either be a constant number or a file name containing a dataset (multiple values, per pixel or per tile).
The Sky standard deviation is mandatory for Segment to operate.

If the value can't be read as a number, it is assumed to be a file name.
When the value is a file, the extension can be specified with @option{--stdhdu}.
When it is not a single number, the given dataset must either have the same size as the output or the same size as the tessellation (so there is one pixel per tile, see @ref{Tessellation}).

When this option is not called, Segment will assume the standard deviation is a dataset in an HDU/extension (@option{--stdhdu}) of one of the input file(s).
If a file is given to @option{--detection}, it will assume that file contains the standard deviation dataset; otherwise, it will look in the main input file (the file given as an argument, without any option).

@item --stdhdu=INT/STR
The HDU/extension containing the Sky standard deviation values, when the value given to @option{--std} is a file name.
Please see the description of @option{--hdu} in @ref{Input output options} for the different ways you can identify a special extension.

@item --variance
The input Sky standard deviation value/dataset is actually variance.
When this option is called, the square root of input Sky standard deviation (see @option{--std}) is used internally, not its raw value(s).

@item -d STR
@itemx --detection=STR
Detection map to use for segmentation.
If given a value of @option{all}, Segment will assume the whole dataset must be segmented, see below.
If a detection map is given, the extension can be specified with @option{--dhdu}.
If not given, Segment will assume the desired HDU/extension is in the main input argument (input file specified with no option).

The final segmentation (clumps or objects) will only be over the non-zero pixels of this detection map.
The dataset must have the same size as the input image.
Only datasets with an integer type are acceptable for the labeled image, see @ref{Numeric data types}.
If your detection map only has integer values, but it is stored in a floating point container, you can use Gnuastro's Arithmetic program (see @ref{Arithmetic}) to convert it to an integer container, like the example below:

@example
$ astarithmetic float.fits int32 --output=int.fits
@end example

It may happen that the whole input dataset is covered by signal, for example when working on parts of the Andromeda galaxy, or nearby globular clusters (that cover the whole field of view).
In such cases, segmentation is necessary over the complete dataset, not just specific regions (detections).
By default Segment will first use the undetected regions as a reference to find the proper signal-to-noise ratio of ``true'' clumps (at the purity level specified with @option{--snquant}).
Therefore, in such scenarios you also need to manually give a ``true'' clump signal-to-noise ratio with the @option{--clumpsnthresh} option to disable looking into the undetected regions, see @ref{Segmentation options}.
In such cases, it is possible to make a detection map that only has the value @code{1} for all pixels (for example using @ref{Arithmetic}), but for convenience, you can simply use @option{--detection=all}.

@item --dhdu
The HDU/extension containing the detection map given to @option{--detection}.
Please see the description of @option{--hdu} in @ref{Input output options} for the different ways you can identify a special extension.

@item -k STR
@itemx --kernel=STR
The kernel used to convolve the input image.
The usage of this option is identical to NoiseChisel's @option{--kernel} option (@ref{NoiseChisel input}).
Please see the descriptions there for more.
To disable convolution, you can give it a value of @option{none}.

@item --khdu
The HDU/extension containing the kernel used for convolution.
For acceptable values, please see the description of @option{--hdu} in @ref{Input output options}.

@item --convolved
The convolved image to avoid internal convolution by Segment.
The usage of this option is identical to NoiseChisel's @option{--convolved} option.
Please see @ref{NoiseChisel input} for a thorough discussion of the usefulness and best practices of using this option.

If you want to use the same convolution kernel for detection (with @ref{NoiseChisel}) and segmentation, with this option, you can use the same convolved image (that is also available in NoiseChisel) and avoid two convolutions.
However, just be careful to use the input to NoiseChisel as the input to Segment also, then use the @option{--sky} and @option{--std} to specify the Sky and its standard deviation (from NoiseChisel's output).
Recall that when NoiseChisel is not called with @option{--rawoutput}, the first extension of NoiseChisel's output is the @emph{Sky-subtracted} input (see @ref{NoiseChisel output}).
So if you use the same convolved image that you fed to NoiseChisel, but use NoiseChisel's output with Segment's @option{--convolved}, then the convolved image won't be Sky subtracted.
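
As a minimal sketch of this scenario (the file names are hypothetical and the HDU names are assumptions based on NoiseChisel's default output, so confirm them on your own files with @command{astfits}), you can give Segment the raw input, the already-convolved image, and NoiseChisel's detection map, Sky and Sky standard deviation:

@example
$ astsegment in.fits --convolved=conv.fits \
             --detection=nc.fits --dhdu=DETECTIONS \
             --sky=nc.fits --skyhdu=SKY \
             --std=nc.fits --stdhdu=SKY_STD
@end example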

@item --chdu
The HDU/extension containing the convolved image (given to @option{--convolved}).
For acceptable values, please see the description of @option{--hdu} in @ref{Input output options}.

@item -L INT[,INT]
@itemx --largetilesize=INT[,INT]
The size of the large tiles to use for identifying the clump S/N threshold over the undetected regions.
The usage of this option is identical to NoiseChisel's @option{--largetilesize} option (@ref{NoiseChisel input}).
Please see the descriptions there for more.

The undetected regions can be a significant fraction of the dataset and finding clumps requires sorting of the desired regions, which can be slow.
To speed up the processing, Segment finds clumps in the undetected regions over separate large tiles.
This allows it to sort a much smaller set of pixels in each tile and to treat the tiles independently and in parallel; both of these greatly speed it up.
Just be sure to not decrease the large tile sizes too much (less than 100 pixels in each dimension).
It is important for them to be much larger than the clumps.
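
For example, the (hypothetical) command below would use large tiles of @mymath{250\times250} pixels for finding the clump S/N threshold:

@example
$ astsegment nc.fits --largetilesize=250,250
@end example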

@end table


@node Segmentation options, Segment output, Segment input, Invoking astsegment
@subsubsection Segmentation options

The options below can be used to configure every step of the segmentation process in the Segment program.
For a more complete explanation (with figures to demonstrate each step), please see Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, and also @ref{Segment}.
By default, Segment will follow the procedure described in the paper to find the S/N threshold based on the noise properties.
This automatic procedure can be skipped by directly giving a trusted signal-to-noise ratio to the @option{--clumpsnthresh} option.

Recall that you can always see the full list of Gnuastro's options with the @option{--help} (see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see their values (see @ref{Operating mode options}).

@table @option

@item -B FLT
@itemx --minskyfrac=FLT
Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a large tile.
Only (large) tiles with a fraction of undetected (Sky) pixels greater than this value will be used for finding clumps.
The clumps found in the undetected areas will be used to estimate an S/N threshold for true clumps.
Therefore, in crowded fields (where the fraction of undetected pixels is low), it is important to decrease this value.
Operationally, this is almost identical to NoiseChisel's @option{--minskyfrac} option (@ref{Detection options}).
Please see the descriptions there for more.

@item --minima
Build the clumps based on the local minima, not maxima.
By default, clumps are built starting from local maxima (see Figure 8 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}).
Therefore, this option can be useful when you are searching for true local minima (for example absorption features).

@item -m INT
@itemx --snminarea=INT
The minimum area that a clump in the undetected regions should have in order to be considered in the clump signal-to-noise ratio measurement.
If this size is set to a small value, the signal-to-noise ratio of false clumps will not be accurately found.
It is recommended that this value be larger than the value given to NoiseChisel's @option{--snminarea}, because the clumps are found on the convolved (smoothed) image while the pseudo-detections are found on the input image.
You can use @option{--checksn} and @option{--checksegmentation} to see if your chosen value is reasonable or not.

@item --checksn
Save the S/N values of the clumps over the sky and detected regions into separate tables.
If @option{--tableformat} is a FITS format, each table will be written into a separate extension of one file suffixed with @file{_clumpsn.fits}.
If it is plain text, a separate file will be made for each table (ending in @file{_clumpsn_sky.txt} and @file{_clumpsn_det.txt}).
For more on @option{--tableformat} see @ref{Input output options}.

You can use these tables to inspect the S/N values and their distribution (in combination with the @option{--checksegmentation} option to see where the clumps are).
You can use Gnuastro's @ref{Statistics} to make a histogram of the distribution (ready for plotting in a text file, or a crude ASCII-art demonstration on the command-line).

With this option, Segment will abort as soon as the two tables are created, allowing you to inspect the steps leading to the final S/N quantile threshold.
This behavior can be disabled with @option{--continueaftercheck}.
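
For example, the sketch below inspects the distribution of the sky-clump S/N values (the output file, HDU and column names here are illustrative; list the actual ones first with @command{astfits} and @command{asttable}):

@example
$ astsegment image.fits --checksn --tableformat=fits
$ astfits image_clumpsn.fits            # List the table extensions.
$ aststatistics image_clumpsn.fits -h1 -cSN
@end example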

@item --minnumfalse=INT
The minimum number of clumps over undetected (Sky) regions to identify the requested Signal-to-Noise ratio threshold.
Operationally, this is almost identical to NoiseChisel's @option{--minnumfalse} option (@ref{Detection options}).
Please see the descriptions there for more.

@item -c FLT
@itemx --snquant=FLT
The quantile of the signal-to-noise ratio distribution of clumps in undetected regions, used to define true clumps.
After identifying all the usable clumps in the undetected regions of the dataset, the given quantile of their signal-to-noise ratios is used to define the signal-to-noise ratio of a ``true'' clump.
Effectively, this can be seen as an inverse p-value measure.
See Figure 9 and Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} for a complete explanation.
The full distribution of clump signal-to-noise ratios over the undetected areas can be saved into a table with the @option{--checksn} option and visually inspected with @option{--checksegmentation}.

@item -v
@itemx --keepmaxnearriver
Keep a clump whose maximum (minimum if @option{--minima} is called) flux is 8-connected to a river pixel.
By default such clumps over detections are considered to be noise and are removed irrespective of their brightness (see @ref{Brightness flux magnitude}).
Over large profiles that sink into the noise very slowly, noise can cause part of the profile (which was flat without noise) to become a very large clump with a very high signal-to-noise ratio.
In such cases, the pixel with the maximum flux in the clump will be immediately touching a river pixel.

@item -s FLT
@itemx --clumpsnthresh=FLT
The signal-to-noise threshold for true clumps.
If this option is given, the segmentation options above will be ignored and the given value will be directly used to identify true clumps over the detections.
This can be useful if you have a large dataset with similar noise properties: you can find a robust signal-to-noise ratio on a (sufficiently large) smaller portion of the dataset and then, with this option, speed up the processing of the whole dataset.
Another scenario where this option may be useful is when the image does not contain enough (or any) Sky regions.

@item -G FLT
@itemx --gthresh=FLT
Threshold (multiple of the sky standard deviation added with the sky) to stop growing true clumps.
Once true clumps are found, they are set as the basis to segment the detected region.
They are grown until the threshold specified by this option.

@item -y INT
@itemx --minriverlength=INT
The minimum length of a river between two grown clumps for it to be considered in signal-to-noise ratio estimations.
Similar to @option{--snminarea}, if the length of the river is too short, the signal-to-noise ratio can be noisy and unreliable.
Any existing rivers shorter than this length will be considered as non-existent, independent of their Signal to noise ratio.
Since the clumps are grown on the input image, while they were originally defined on the convolved image, this value can be smaller than the value given to @option{--snminarea}.

@item -O FLT
@itemx --objbordersn=FLT
The maximum Signal to noise ratio of the rivers between two grown clumps in order to consider them as separate `objects'.
If the Signal to noise ratio of the river between two grown clumps is larger than this value, they are defined to be part of one `object'.
Note that the physical reality of these `objects' can never be established with one image, or even multiple images from one broad-band filter.
Any method we devise to define `objects' over a detected region is ultimately subjective.

Two very distant galaxies or satellites in one halo might lie in the same line of sight and be detected as clumps on one detection.
On the other hand, the connection (through a spiral arm or tidal tail for example) between two parts of one galaxy might have such a low surface brightness that they are broken up into multiple detections or objects.
In fact, exactly for this reason, this is the only signal-to-noise ratio that the user has to specify by hand in Segment.
The `true' detections and clumps can be objectively identified from the noise characteristics of the image, so you don't have to give any signal-to-noise ratio for them manually.

@item --checksegmentation
A file with the suffix @file{_seg.fits} will be created.
This file keeps all the relevant steps in finding true clumps and segmenting the detections into multiple objects in various extensions.
Having read the paper (or the steps above), examining this file can be an excellent guide to choosing the best set of parameters.
Note that calling this option will significantly slow down Segment.
In verbose mode (without the @option{--quiet} option, see @ref{Operating mode options}) the important steps (along with their extension names) will also be reported.

With this option, Segment will abort as soon as the check image is created.
This behavior can be disabled with @option{--continueaftercheck}.

@end table

@node Segment output,  , Segmentation options, Invoking astsegment
@subsubsection Segment output

Segment's main output is two labeled datasets (with integer types, separating the dataset's elements into different classes).
They have HDU/extension names of @code{CLUMPS} and @code{OBJECTS}.

Similar to all of Gnuastro's FITS outputs, the zero-th extension/HDU of the main output file only contains header keywords, with no image or table data.
It contains the Segment input files and parameters (option names and values) as FITS keywords.
Note that if an option name is longer than 8 characters, the keyword name is the second word.
The first word is @code{HIERARCH}.
Also note that according to the FITS standard, the keyword names must be in capital letters, therefore, if you want to use Grep to inspect these keywords, use the @option{-i} option, like the example below.

@example
$ astfits image_segmented.fits -h0 | grep -i snquant
@end example

@cindex DS9
@cindex SAO DS9
By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's output will also contain the (technically redundant) input dataset and the sky standard deviation dataset (if it wasn't a constant number).
This can help in visually inspecting the result when viewing the images as a ``Multi-extension data cube'' in SAO DS9 for example (see @ref{Viewing multiextension FITS images}).
You can simply flip through the extensions and see the same region of the image and its corresponding clumps/object labels.
It also makes it easy to feed the output (as one file) into MakeCatalog when you intend to make a catalog afterwards (see @ref{MakeCatalog}).
To remove these redundant extensions from the output (for example when designing a pipeline), you can use @option{--rawoutput}.

The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into @ref{MakeCatalog} to generate a catalog for higher-level analysis.
If you want to treat each clump separately, you can give a very large value (or even a NaN, which will always fail) to the @option{--gthresh} option (for example @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation options}.

For a complete definition of clumps and objects, please see Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and @ref{Segmentation options}.
The clumps are ``true'' local maxima (minima if @option{--minima} is called) and their surrounding pixels until a local minimum/maximum (caused by noise fluctuations, or another ``true'' clump).
Therefore it may happen that some of the input detections aren't covered by clumps at all (very diffuse objects without any strong peak), while some objects may contain many clumps.
Even in those that have clumps, there will be regions that are too diffuse.
The diffuse regions (within the input detected regions) are given a negative label (-1) to help you separate them from the undetected regions (with a value of zero).

Each clump is labeled with respect to its host object.
Therefore, if an object has three clumps for example, the clumps within it have labels 1, 2 and 3.
As a result, if an initial detected region has multiple objects, each with a single clump, all the clumps will have a label of 1.
The total number of clumps in the dataset is stored in the @code{NCLUMPS} keyword of the @code{CLUMPS} extension and printed in the verbose output of Segment (when @option{--quiet} is not called).
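
For example, continuing with the output file name used above, one way to read this keyword on the command-line is:

@example
$ astfits image_segmented.fits -hCLUMPS | grep NCLUMPS
@end example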

The @code{OBJECTS} extension of the output will give a positive counter/label to every detected pixel in the input.
As described in Akhlaghi and Ichikawa [2015], the true clumps are grown until a certain threshold.
If the grown clumps touch other clumps and the connection is strong enough, they are considered part of the same @emph{object}.
Once objects (grown clumps) are identified, they are grown to cover the whole detected area.

The options to configure the output of Segment are listed below:

@table @option
@item --continueaftercheck
Don't abort Segment after producing the check image(s).
The usage of this option is identical to NoiseChisel's @option{--continueaftercheck} option (@ref{NoiseChisel input}).
Please see the descriptions there for more.

@item --onlyclumps
Abort Segment after finding true clumps and don't continue with finding objects.
Therefore, no @code{OBJECTS} extension will be present in the output.
Each true clump in @code{CLUMPS} will get a unique label, but diffuse regions will still have a negative value.

To make a catalog of the clumps, the @code{CLUMPS} extension of Segment's output can be fed into @ref{MakeCatalog} (with @option{--clumpscat}), along with the detection map that was given to Segment (which only has a value of @code{1} for all detected pixels) as the object labels.
In this way, MakeCatalog will assume all the clumps belong to a single ``object''.

@item --grownclumps
In the output @code{CLUMPS} extension, store the grown clumps.
If a detected region contains no clumps or only one clump, then it will be fully given a label of @code{1} (no negative valued pixels).

@item --rawoutput
Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output file.
Without this option (by default), the first and last extensions of the output will be the Sky-subtracted input dataset and the Sky standard deviation dataset (if it wasn't a constant number).
When the datasets are small, these redundant extensions can make it convenient to inspect the results visually or feed the output to @ref{MakeCatalog} for measurements.
Ultimately both the input and Sky standard deviation datasets are redundant (you had them before running Segment).
When the inputs are large/numerous, these extra datasets can be a burden.
@end table

@cartouche
@noindent
@cindex Compression
@strong{Save space:} with the @option{--rawoutput}, Segment's output will only be two labeled datasets (only containing integers).
Since they have no noise, such datasets can be compressed very effectively (without any loss of data) with exceptionally high compression ratios.
You can use the following command to compress it with the best ratio:

@cindex GNU Gzip
@example
$ gzip --best segment_output.fits
@end example

@noindent
The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's programs directly, without having to decompress it separately (it will just take them a little longer, because they have to decompress it internally before use).
@end cartouche





@node MakeCatalog, Match, Segment, Data analysis
@section MakeCatalog

At the lowest level, a dataset (for example an image) is just a collection of values, placed after each other in any number of dimensions (for example an image is a 2D dataset).
Each data-element (pixel) just has two properties: its position (relative to the rest) and its value.
In higher-level analysis, an entire dataset (an image for example) is rarely treated as a singular entity@footnote{You can derive the over-all properties of a complete dataset (1D table column, 2D image, or 3D data-cube) treated as a single entity with Gnuastro's Statistics program (see @ref{Statistics}).}.
You usually want to know/measure the properties of the (separate) scientifically interesting targets that are embedded in it.
For example the magnitudes, positions and elliptical properties of the galaxies that are in the image.

MakeCatalog is Gnuastro's program for localized measurements over a dataset.
In other words, MakeCatalog is Gnuastro's program to convert low-level datasets (like images), to high level catalogs.
The role of MakeCatalog in a scientific analysis and the benefits of its model (where detection/segmentation is separated from measurement) is discussed in @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}@footnote{A published paper cannot undergo any more change, so this manual is the definitive guide.} and summarized in @ref{Detection and catalog production}.
We strongly recommend reading this short paper for a better understanding of this methodology.
Understanding the effective usage of MakeCatalog will thus also help the effective use of other (lower-level) Gnuastro programs like @ref{NoiseChisel} or @ref{Segment}.

It is important to define your regions of interest for measurements @emph{before} running MakeCatalog.
MakeCatalog is specialized in doing measurements accurately and efficiently.
Therefore MakeCatalog will not do detection or segmentation, or define apertures on requested positions in your dataset.
Following Gnuastro's modularity principle, there are separate and highly specialized and customizable programs in Gnuastro for these other jobs as shown below (for a usage example in a real-world analysis, see @ref{General program usage tutorial} and @ref{Detecting large extended targets}).

@itemize
@item
@ref{Arithmetic}: Detection with a simple threshold.

@item
@ref{NoiseChisel}: Advanced detection.

@item
@ref{Segment}: Segmentation (substructure over detections).

@item
@ref{MakeProfiles}: Aperture creation for known positions.
@end itemize

These programs will/can return labeled dataset(s) to be fed into MakeCatalog.
A labeled dataset for measurement has the same size/dimensions as the input, but with integer valued pixels that have the label/counter for each sub-set of pixels that must be measured together.
For example all the pixels covering one galaxy in an image, get the same label.

The requested measurements are then done on similarly labeled pixels.
The final result is a catalog where each row corresponds to the measurements on pixels with a specific label.
For example the flux weighted average position of all the pixels with a label of 42 will be written into the 42nd row of the output catalog/table's central position column@footnote{See @ref{Measuring elliptical parameters} for a discussion on this and the derivation of positional parameters, which includes the center.}.
Similarly, the sum of all these pixels will be the 42nd row in the brightness column, etc.
Pixels with labels equal to, or smaller than, zero will be ignored by MakeCatalog.
In other words, the number of rows in MakeCatalog's output is already known before running it (the maximum value of the labeled dataset).

Before getting into the details of running MakeCatalog (in @ref{Invoking astmkcatalog}), we'll start with a discussion on the basics of its approach to separating detection from measurements in @ref{Detection and catalog production}.
A very important factor in any measurement is understanding its validity range, or limits.
Therefore in @ref{Quantifying measurement limits}, we'll discuss how to estimate the reliability of the detection and basic measurements.
This section will continue with a derivation of elliptical parameters from the labeled datasets in @ref{Measuring elliptical parameters}.
For those who feel MakeCatalog's existing measurements/columns aren't enough and would like to add further measurements, in @ref{Adding new columns to MakeCatalog}, a checklist of steps is provided for readily adding your own new measurements/columns.

@menu
* Detection and catalog production::  Discussing why/how to treat these separately
* Quantifying measurement limits::  For comparing different catalogs.
* Measuring elliptical parameters::  Estimating elliptical parameters.
* Adding new columns to MakeCatalog::  How to add new columns.
* Invoking astmkcatalog::       Options and arguments to MakeCatalog.
@end menu

@node Detection and catalog production, Quantifying measurement limits, MakeCatalog, MakeCatalog
@subsection Detection and catalog production

Most existing common tools in low-level astronomical data-analysis (for example SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) merge the two processes of detection and measurement (catalog production) in one program.
However, in light of Gnuastro's modularized approach (modeled on the Unix system) detection is separated from measurements and catalog production.
This modularity is therefore new to many experienced astronomers and deserves a short review here.
Further discussion on the benefits of this methodology can be seen in @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.

As discussed in the introduction of @ref{MakeCatalog}, detection (identifying which pixels to do measurements on) can be done with different programs.
Their outputs (a labeled dataset) can be directly fed into MakeCatalog to do the measurements and write the result as a catalog/table.
Beyond that, Gnuastro's modular approach has many benefits that will become clear as you get more experienced in astronomical data analysis and want to be more creative in using your valuable data for the exciting scientific project you are working on.
In short the reasons for this modularity can be classified as below:

@itemize

@item
Simplicity/robustness of independent, modular tools: making a catalog is a logically separate process from labeling (detection, segmentation, or aperture production).
A user might want to do certain operations on the labeled regions before creating a catalog for them.
Another user might want the properties of the same pixels/objects in another image (another filter for example) to measure the colors or SED fittings.

Here is an example of doing both: suppose you have images in various broad band filters at various resolutions and orientations.
The image of one color will thus not lie exactly on another or even be in the same scale.
However, it is imperative that the same pixels be used in measuring the colors of galaxies.

To solve the problem, NoiseChisel can be run on the reference image to generate the labeled detection image.
Afterwards, the labeled image can be warped into the grid of the other color (using @ref{Warp}).
MakeCatalog will then generate the same catalog for both colors (with the different labeled images).
It is currently customary to warp the images themselves to the same pixel grid; however, modifying the scientific dataset in this way is harmful for the data and creates correlated noise.
It is much more accurate to do the transformations on the labeled image.

@item
Complexity of a monolith: adding catalog functionality to the detector program would add several more steps (and many more options) to its processing that can equally well be done outside of it.
This makes it harder for users and developers to follow what the program does, and can also potentially add many bugs.

As an example, if the parameter you want to measure over one profile is not provided by the developers of MakeCatalog, you can simply open this small program and add your desired calculation easily.
This process is discussed in @ref{Adding new columns to MakeCatalog}.
However, if making a catalog was part of NoiseChisel for example, adding a new column/measurement would require a lot of energy to understand all the steps and internal structures of that huge program.
It might even be so intertwined with its processing, that adding new columns might cause problems/bugs in its primary job (detection).

@end itemize







@node Quantifying measurement limits, Measuring elliptical parameters, Detection and catalog production, MakeCatalog
@subsection Quantifying measurement limits

@cindex Depth
@cindex Clump magnitude limit
@cindex Object magnitude limit
@cindex Limit, object/clump magnitude
@cindex Magnitude, object/clump detection limit
No measurement on a real dataset can be perfect: you can only reach a certain level/limit of accuracy.
Therefore, a meaningful (scientific) analysis requires an understanding of these limits for the dataset and your analysis tools: different datasets have different noise properties and different detection methods (one method/algorithm/software that is run with a different set of parameters is considered as a different detection method) will have different abilities to detect or measure certain kinds of signal (astronomical objects) and their properties in the dataset.
Hence, quantifying the detection and measurement limitations with a particular dataset and analysis tool is the most crucial/critical aspect of any high-level analysis.

Here, we'll review some of the most general limits that are important in any astronomical data analysis and how MakeCatalog makes it easy to find them.
Depending on the higher-level analysis, there are more tests that must be done, but these are relatively low-level and usually necessary in most cases.
In astronomy, it is common to use the magnitude (a unit-less, logarithmic scale) alongside physical units, see @ref{Brightness flux magnitude}.
Therefore the measurements discussed here are commonly quoted in units of magnitudes.

@table @asis

@item Surface brightness limit (of whole dataset)
@cindex Surface brightness
As we make more observations on one region of the sky, and add the observations into one dataset, the signal and noise both increase.
However, the signal increases much faster than the noise: assuming you add @mymath{N} datasets with equal exposure times, the signal increases as a multiple of @mymath{N}, while the noise increases as @mymath{\sqrt{N}}.
Thus the signal-to-noise ratio increases.
Qualitatively, fainter (per pixel) parts of the objects/signal in the image will become more visible/detectable.
The noise-level is known as the dataset's surface brightness limit.

You can think of the noise as muddy water that is completely covering a flat ground@footnote{The ground is the sky value in this analogy, see @ref{Sky value}.
Note that this analogy only holds for a flat sky value across the surface of the image or ground.}.
The signal (or astronomical objects in this analogy) will be summits/hills that start from the flat sky level (under the muddy water) and can sometimes reach outside of the muddy water.
Let's assume that in your first observation the muddy water has just been stirred and you can't see anything through it.
As you wait and make more observations/exposures, the mud settles down and the @emph{depth} of the transparent water increases, making the summits visible.
As the depth of clear water increases, the parts of the hills with lower heights (parts with lower surface brightness) can be seen more clearly.
In this analogy, height (from the ground) is @emph{surface brightness}@footnote{Note that this muddy water analogy is not perfect, because while the water-level remains the same all over a peak, in data analysis, the Poisson noise increases with the level of data.} and the height of the muddy water is your surface brightness limit.

@cindex Data's depth
The outputs of NoiseChisel include the Sky standard deviation (@mymath{\sigma}) on every group of pixels (a mesh) that were calculated from the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel output}.
Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful meshes in the image (prior to interpolation or smoothing).

On different instruments, pixels have different physical sizes (for example in micro-meters, or spatial angle over the sky).
Nevertheless, a pixel is our unit of data collection.
In other words, while quantifying the noise, the physical or projected size of the pixels is irrelevant.
We thus define the Surface brightness limit or @emph{depth}, in units of magnitude/pixel, of a data-set, with zeropoint magnitude @mymath{z}, with the @mymath{n}th multiple of @mymath{\sigma_m} as (see @ref{Brightness flux magnitude}):

@dispmath{SB_{\rm Pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z}

@cindex XDF survey
@cindex CANDELS survey
@cindex eXtreme Deep Field (XDF) survey
As an example, the XDF survey covers part of the sky that the Hubble space telescope has observed the most (for 85 orbits) and is consequently very small (@mymath{\sim4} arcmin@mymath{^2}).
On the other hand, the CANDELS survey, is one of the widest multi-color surveys covering several fields (about 720 arcmin@mymath{^2}) but its deepest fields have only 9 orbits observation.
The depth of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W filter are respectively 34.40 and 32.45 magnitudes/pixel.
In a single orbit image, this same field has a depth of 31.32.
Recall that a larger magnitude corresponds to less brightness.
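
As a quick check of the @mymath{SB_{\rm Pixel}} relation above, you can reproduce such numbers on the command-line; the inputs below (@mymath{\sigma_m=0.02}, @mymath{n=3} and @mymath{z=25}) are only toy values (recall that Awk's @code{log} is the natural logarithm):

@example
$ echo "0.02 3 25" | awk '@{print -2.5*log($2*$1)/log(10)+$3@}'
@end example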

The low-level magnitude/pixel measurement above is only useful when all the datasets you want to use belong to one instrument (telescope and camera).
However, you will often find yourself using datasets from various instruments with different pixel scales (projected pixel sizes).
If we know the pixel scale, we can obtain a more easily comparable surface brightness limit in units of: magnitude/arcsec@mymath{^2}.
Let's assume that the dataset has a zeropoint value of @mymath{z}, and every pixel is @mymath{p} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels that cover an area of @mymath{A} arcsec@mymath{^2}).
If the surface brightness is desired at the @mymath{n}th multiple of @mymath{\sigma_m}, the following equation (in units of magnitudes per @mymath{A} arcsec@mymath{^2}) can be used:

@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over p}\right)}+z}

The @mymath{\sqrt{A/p}} term comes from the fact that noise is added in RMS: if you add three datasets with noise @mymath{\sigma_1}, @mymath{\sigma_2} and @mymath{\sigma_3}, the resulting noise level is @mymath{\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}, so when @mymath{\sigma_1=\sigma_2=\sigma_3=\sigma}, then @mymath{\sigma_t=\sqrt{3}\sigma}.
As mentioned above, there are @mymath{A/p} pixels in the area @mymath{A}.
Therefore, as @mymath{A/p} increases, the surface brightness limiting magnitude will become brighter.

It is just important to understand that the surface brightness limit is the raw noise level, @emph{not} the signal-to-noise.
To get a feeling for it, you can try these commands on any FITS image (let's assume it's called @file{image.fits}); the output of the first command (@file{zero.fits}) will be the same size as the input, but all its pixels will have a value of zero.
We then add ideal noise to this image and warp it to a new pixel size (such that the area of the new pixels is @code{area_per_pixel} times the input's), then we print the standard deviation of the raw noise and the warped noise.
Please open the output images and compare them (their sizes, or their pixel values) to get a good feeling of what is going on.
Just note that this demo only works when @code{area_per_pixel} is larger than one.

@example
area_per_pixel=25
scale=$(echo $area_per_pixel | awk '@{print sqrt($1)@}')
astarithmetic image.fits -h0 nan + isblank not -ozero.fits
astmknoise zero.fits -onoise.fits
astwarp --scale=1/$scale,1/$scale noise.fits -onoise-w.fits
std_raw=$(aststatistics noise.fits --std)
std_warped=$(aststatistics noise-w.fits --std)
echo;
echo "(warped pixel area) = $area_per_pixel x (pixel area)"
echo "Raw STD:    $std_raw"
echo "Warped STD: $std_warped"
@end example

As this example shows, the projected surface brightness limit is just an extrapolation of the per-pixel measurement @mymath{\sigma_m}.
So it should be used with extreme care: for example the dataset must have an approximately flat depth or noise properties overall.
A more accurate measure for each detection is known as the @emph{upper-limit magnitude} which actually uses random positioning of each detection's area/footprint, see the respective item below.
The upper-limit magnitude doesn't extrapolate and even accounts for correlated noise patterns in relation to that detection.
Therefore, the upper-limit magnitude is a much better measure of your dataset's surface brightness limit for each particular object.

MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and @mymath{SB_{\rm Projected}} and write them as comments/meta-data in the output catalog(s).
Just note that @mymath{SB_{\rm Projected}} is only calculated if the input has World Coordinate System (WCS).

@item Completeness limit (of each detection)
@cindex Completeness
As the surface brightness of the objects decreases, the ability to detect them will also decrease.
An important statistic is thus the fraction of objects of similar morphology and brightness that will be identified with our detection algorithm/parameters in the given image.
This fraction is known as completeness.
For brighter objects, completeness is 1: all bright objects that might exist over the image will be detected.
However, as we go to objects of lower overall surface brightness, we will fail to detect some, and gradually we are not able to detect anything any more.
For a given profile, the magnitude where the completeness drops below a certain level (usually above @mymath{90\%}) is known as the completeness limit.

@cindex Purity
@cindex False detections
@cindex Detections false
Another important parameter in measuring completeness is purity: the fraction of true detections to all detections.
In effect purity is the measure of contamination by false detections: the higher the purity, the lower the contamination.
Completeness and purity are anti-correlated: if we can allow a large number of false detections (that we might be able to remove by other means), we can significantly increase the completeness limit.

One traditional way to measure the completeness and purity of a given sample is by embedding mock profiles in regions of the image with no detection.
However in such a study we must be really careful to choose model profiles as similar to the target of interest as possible.

@item Magnitude measurement error (of each detection)
Any measurement has an error and this includes the derived magnitude for an object.
Note that this value is only meaningful when the object's magnitude is brighter than the upper-limit magnitude (see the next item in this list).
As discussed in @ref{Brightness flux magnitude}, the magnitude (@mymath{M}) of an object with brightness @mymath{B} and zero point magnitude @mymath{z} can be written as:

@dispmath{M=-2.5\log_{10}(B)+z}

@noindent
Calculating the derivative with respect to @mymath{B}, we get:

@dispmath{{dM\over dB} = {-2.5\over {B\times ln(10)}}}

@noindent
From the Taylor series (@mymath{\Delta{M}=dM/dB\times\Delta{B}}), we can write:

@dispmath{\Delta{M} = \left|{-2.5\over ln(10)}\right|\times{\Delta{B}\over{B}}}

@noindent
But, @mymath{\Delta{B}/B} is just the inverse of the Signal-to-noise ratio (@mymath{S/N}), so we can write the error in magnitude in terms of the signal-to-noise ratio:

@dispmath{ \Delta{M} = {2.5\over{S/N\times ln(10)}} }

MakeCatalog uses this relation to estimate the magnitude errors.
The signal-to-noise ratio is calculated in different ways for clumps and objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}), but this single equation can be used to estimate the measured magnitude error afterwards for any type of target.
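
For example, a toy command-line check of this relation for @mymath{S/N=10} (again using Awk's natural-logarithm @code{log}):

@example
$ echo 10 | awk '@{print 2.5/($1*log(10))@}'
@end example

@noindent
This gives an error of roughly @mymath{0.11} magnitudes.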

@item Upper limit magnitude (of each detection)
Due to the noisy nature of data, it is possible to get arbitrarily low values for a faint object's brightness (or arbitrarily high @emph{magnitudes}).
Given the scatter caused by the dataset's noise, values fainter than a certain level are meaningless: another similar depth observation will give a radically different value.

For example, while the depth of the image is 32 magnitudes/pixel, a measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel object is clearly unreliable.
In another similar depth image, we might measure a magnitude of 30 for it, and yet another might give 33.
Furthermore, due to the noise scatter so close to the depth of the data-set, the total brightness might actually get measured as a negative value, so no magnitude can be defined (recall that a magnitude is a base-10 logarithm).
This problem usually becomes relevant when the detection labels were not derived from the values being measured (for example when you are estimating colors, see @ref{MakeCatalog}).

@cindex Upper limit magnitude
@cindex Magnitude, upper limit
Using such unreliable measurements will directly affect our analysis, so we must not use the raw measurements.
But how can we know how reliable a measurement on a given dataset is?

When we confront such unreasonably faint magnitudes, there is one thing we can deduce: if something actually exists there (possibly buried deep under the noise), its inherent magnitude is fainter than an @emph{upper limit magnitude}.
To find this upper limit magnitude, we place the object's footprint (segmentation map) over random parts of the image where there are no detections, so we only have pure (possibly correlated) noise, along with undetected objects.
Doing this a large number of times will give us a distribution of brightness values.
The standard deviation (@mymath{\sigma}) of that distribution can be used to quantify the upper limit magnitude.

@cindex Correlated noise
Traditionally, faint/small object photometry was done using fixed circular apertures (for example with a diameter of @mymath{N} arc-seconds).
Hence, the upper limit was like the depth discussed above: one value for the whole image.
The problem with this simplified approach is that the number of pixels in the aperture directly affects the final distribution and thus magnitude.
Also, the image's correlated noise might actually create certain patterns, so the shape of the object can also affect the final result.
Fortunately, with the much more advanced hardware and software of today, we can make customized segmentation maps for each object.

When requested, MakeCatalog will randomly place each target's footprint over the dataset as described above and estimate the resulting distribution's properties (like the upper limit magnitude).
The procedure is fully configurable with the options in @ref{Upper-limit settings}.
If one value for the whole image is required, you can either use the surface brightness limit above or make a circular aperture and feed it into MakeCatalog to request an upper-limit magnitude for it@footnote{If you intend to make apertures manually and not use a detection map (for example from @ref{Segment}), don't forget to use the @option{--upmaskfile} option to give NoiseChisel's output (or any binary map marking detected pixels, see @ref{NoiseChisel output}) as a mask.
Otherwise, the footprints may randomly fall over detections, giving highly skewed and wrong upper-limit distributions.
See the description of @option{--upmaskfile} in @ref{Upper-limit settings} for more.}.
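
For example, the sketch below requests an upper-limit magnitude over manually-made apertures (all file names and numeric values here are hypothetical; the aperture image could be built with @ref{MakeProfiles}, and the exact option names are described in @ref{Upper-limit settings} and @ref{MakeCatalog measurements}):

@example
$ astmkcatalog apertures.fits -h1 --valuesfile=image.fits \
               --upmaskfile=nc.fits --upmaskhdu=DETECTIONS \
               --upnum=1000 --zeropoint=22.5 \
               --ids --upperlimitmag
@end example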

@end table




@node Measuring elliptical parameters, Adding new columns to MakeCatalog, Quantifying measurement limits, MakeCatalog
@subsection Measuring elliptical parameters

The shape or morphology of a target is one of its most commonly desired parameters.
Here, we will review the derivation of the most basic/simple morphological parameters: the elliptical parameters for a set of labeled pixels.
The elliptical parameters are: the (semi-)major axis, the (semi-)minor axis and the position angle along with the central position of the profile.
The derivations below follow the SExtractor manual derivations with some added explanations for easier reading.

@cindex Moments
Let's begin with one dimension for simplicity: Assume we have a set of @mymath{N} values @mymath{B_i} (for example showing the spatial distribution of a target's brightness), each at position @mymath{x_i}.
The simplest parameter we can define is the geometric center of the object (@mymath{x_g}) (ignoring the brightness values): @mymath{x_g=(\sum_ix_i)/N}.
@emph{Moments} are defined to incorporate both the value (brightness) and position of the data.
The first moment can be written as:

@dispmath{\overline{x}={\sum_iB_ix_i \over \sum_iB_i}}

@cindex Variance
@cindex Second moment
@noindent
This is essentially the weighted (by @mymath{B_i}) mean position.
The geometric center (@mymath{x_g}, defined above) is a special case of this with all @mymath{B_i=1}.
The second moment is essentially the variance of the distribution:

@dispmath{\overline{x^2}\equiv{\sum_iB_i(x_i-\overline{x})^2 \over
        \sum_iB_i} = {\sum_iB_ix_i^2 \over \sum_iB_i} -
        2\overline{x}{\sum_iB_ix_i\over\sum_iB_i} + \overline{x}^2
        ={\sum_iB_ix_i^2 \over \sum_iB_i} - \overline{x}^2}

@cindex Standard deviation
@noindent
The last step was done from the definition of @mymath{\overline{x}}.
Hence, the square root of @mymath{\overline{x^2}} is the spatial standard deviation (along the one-dimension) of this particular brightness distribution (@mymath{B_i}).
Crudely (or qualitatively), you can think of its square root as the distance (from @mymath{\overline{x}}) which contains a specific amount of the flux (depending on the @mymath{B_i} distribution).
Similar to the first moment, the geometric second moment can be found by setting all @mymath{B_i=1}.
So while the first moment quantified the position of the brightness distribution, the second moment quantifies how that brightness is dispersed about the first moment.
In other words, it quantifies how ``sharp'' the object's image is.

@cindex Floating point error
Before continuing to two dimensions and the derivation of the elliptical parameters, let's pause for an important implementation technicality.
You can ignore this paragraph and the next two if you don't want to implement these concepts.
The basic definition (first definition of @mymath{\overline{x^2}} above) can be used without any major problem.
However, using this fraction requires two runs over the data: one run to find @mymath{\overline{x}} and another run to find @mymath{\overline{x^2}} from @mymath{\overline{x}}, this can be slow.
The advantage of the last fraction above, is that we can estimate both the first and second moments in one run (since the @mymath{-\overline{x}^2} term can easily be added later).

The logarithmic nature of floating point number digitization creates a complication, however: suppose the object is located between pixels 10000 and 10020.
Hence the target's pixels are only distributed over 20 pixels (with a standard deviation @mymath{<20}), while the mean has a value of @mymath{\sim10000}.
The @mymath{\sum_iB_ix_i^2} term will go to very large values, while the individual pixel differences will be orders of magnitude smaller.
This will lower the accuracy of our calculation due to the limited accuracy of floating point operations.
The variance only depends on the distance of each point from the mean, so we can shift all positions by a constant/arbitrary @mymath{K} which is much closer to the mean: @mymath{\overline{x-K}=\overline{x}-K}.
Hence we can calculate the second order moment using:

@dispmath{ \overline{x^2}={\sum_iB_i(x_i-K)^2 \over \sum_iB_i} -
           (\overline{x}-K)^2 }

@noindent
The closer @mymath{K} is to @mymath{\overline{x}}, the better (the sums of squares will involve smaller numbers), as long as @mymath{K} is within the object limits (in the example above: @mymath{10000\leq{K}\leq10020}), the floating point error induced in our calculation will be negligible.
For the simplest implementation, MakeCatalog takes @mymath{K} to be the smallest position of the object in each dimension.
Since @mymath{K} is arbitrary and an implementation/technical detail, we will ignore it for the remainder of this discussion.
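
As a toy, single-pass illustration of the shifted calculation above (not Gnuastro's actual implementation), the following Awk sketch computes the first and second moments of three (position, value) pairs, taking @mymath{K} to be the first position:

@example
$ printf "10000 5\n10010 9\n10020 4\n" \
      | awk 'NR==1@{K=$1@}
             @{sB+=$2; sBx+=$2*($1-K); sBxx+=$2*($1-K)^2@}
             END@{m=sBx/sB; print "mean:", m+K, "var:", sBxx/sB-m^2@}'
@end example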

In two dimensions, the mean and variances can be written as:

@dispmath{\overline{x}={\sum_iB_ix_i\over \sum_iB_i}, \quad
          \overline{x^2}={\sum_iB_ix_i^2 \over \sum_iB_i} -
          \overline{x}^2}
@dispmath{\overline{y}={\sum_iB_iy_i\over \sum_iB_i}, \quad
          \overline{y^2}={\sum_iB_iy_i^2 \over \sum_iB_i} -
          \overline{y}^2}
@dispmath{\quad\quad\quad\quad\quad\quad\quad\quad\quad
          \overline{xy}={\sum_iB_ix_iy_i \over \sum_iB_i} -
          \overline{x}\times\overline{y}}

If an elliptical profile's major axis exactly lies along the @mymath{x} axis, then @mymath{\overline{x^2}} will be directly proportional with the profile's major axis, @mymath{\overline{y^2}} with its minor axis and @mymath{\overline{xy}=0}.
However, in reality we are not that lucky and (assuming galaxies can be parameterized as an ellipse) the major axis of galaxies can be in any direction on the image (in fact this is one of the core principles behind weak-lensing by shear estimation).
So the purpose of the remainder of this section is to define a strategy to measure the position angle and axis ratio of some randomly positioned ellipses in an image, using the raw second moments that we have calculated above in our image coordinates.

Let's assume we have rotated the galaxy by @mymath{\theta}; the new second order moments are:

@dispmath{\overline{x_\theta^2} = \overline{x^2}\cos^2\theta +
           \overline{y^2}\sin^2\theta -
           2\overline{xy}\cos\theta\sin\theta }
@dispmath{\overline{y_\theta^2} = \overline{x^2}\sin^2\theta +
           \overline{y^2}\cos^2\theta +
           2\overline{xy}\cos\theta\sin\theta}
@dispmath{\overline{xy_\theta} = \overline{x^2}\cos\theta\sin\theta -
           \overline{y^2}\cos\theta\sin\theta +
           \overline{xy}(\cos^2\theta-\sin^2\theta)}

@noindent
The best @mymath{\theta} (@mymath{\theta_0}, where major axis lies along the @mymath{x_\theta} axis) can be found by:

@dispmath{\left.{\partial \overline{x_\theta^2} \over \partial \theta}\right|_{\theta_0}=0}
@noindent
Taking the derivative, we get:

@dispmath{2\cos\theta_0\sin\theta_0(\overline{y^2}-\overline{x^2}) +
          2(\cos^2\theta_0-\sin^2\theta_0)\overline{xy}=0}

@noindent
When @mymath{\overline{x^2}\neq\overline{y^2}}, we can write:

@dispmath{\tan2\theta_0 = 2{\overline{xy} \over
          \overline{x^2}-\overline{y^2}}.}

@cindex Position angle
@noindent
MakeCatalog uses the standard C math library's @code{atan2} function to estimate @mymath{\theta_0}, which we define as the position angle of the ellipse.
To recall, this is the angle of the major axis of the ellipse with the @mymath{x} axis.
By definition, when the elliptical profile is rotated by @mymath{\theta_0}, then @mymath{\overline{xy_{\theta_0}}=0}, @mymath{\overline{x_{\theta_0}^2}} will be the extent of the maximum variance and @mymath{\overline{y_{\theta_0}^2}} the extent of the minimum variance (which are perpendicular for an ellipse).
Replacing @mymath{\theta_0} in the equations above for @mymath{\overline{x_\theta^2}} and @mymath{\overline{y_\theta^2}}, we can get the semi-major (@mymath{A}) and semi-minor (@mymath{B}) lengths:

@dispmath{A^2\equiv\overline{x_{\theta_0}^2}= {\overline{x^2} +
\overline{y^2} \over 2} + \sqrt{\left({\overline{x^2}-\overline{y^2} \over 2}\right)^2 + \overline{xy}^2}}

@dispmath{B^2\equiv\overline{y_{\theta_0}^2}= {\overline{x^2} +
\overline{y^2} \over 2} - \sqrt{\left({\overline{x^2}-\overline{y^2} \over 2}\right)^2 + \overline{xy}^2}}

As a summary, it is important to remember that the units of @mymath{A} and @mymath{B} are in pixels (the standard deviation of a positional distribution) and that they represent the spatial light distribution of the object in both image dimensions (rotated by @mymath{\theta_0}).
When the object cannot be represented as an ellipse, this interpretation breaks down: @mymath{\overline{xy_{\theta_0}}\neq0} and @mymath{\overline{y_{\theta_0}^2}} will not be the direction of minimum variance.
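
As a worked sketch of these relations (with made-up second moments @mymath{\overline{x^2}=4}, @mymath{\overline{y^2}=1} and @mymath{\overline{xy}=0.5}; Awk's @code{atan2} directly gives @mymath{2\theta_0} in radians):

@example
$ echo "4 1 0.5" | awk '@{xx=$1; yy=$2; xy=$3;
    t=0.5*atan2(2*xy, xx-yy);
    r=sqrt(((xx-yy)/2)^2+xy^2);
    print "theta0:", t;
    print "A:", sqrt((xx+yy)/2+r);
    print "B:", sqrt((xx+yy)/2-r)@}'
@end example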





@node Adding new columns to MakeCatalog, Invoking astmkcatalog, Measuring elliptical parameters, MakeCatalog
@subsection Adding new columns to MakeCatalog

MakeCatalog is designed to allow easy addition of different measurements over a labeled image (see @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}).
A check-list style description of necessary steps to do that is described in this section.
The common development characteristics of MakeCatalog and other Gnuastro programs are explained in @ref{Developing}.
We strongly encourage you to have a look at that chapter to greatly simplify your navigation in the code.
After adding and testing your column, you are most welcome (and encouraged) to share it with us so we can add to the next release of Gnuastro for everyone else to also benefit from your efforts.

MakeCatalog will first pass over each label's pixels two times and do necessary raw/internal calculations.
Once the passes are done, it will use the raw information for filling the final catalog's columns.
In the first pass it will gather mainly object information and in the second run, it will mainly focus on the clumps, or any other measurement that needs an output from the first pass.
These two passes are designed to be raw summations: no extra processing.
This will allow parallel processing and simplicity/clarity.
So if your new calculation needs new raw information from the pixels, you will also need to modify the respective @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass} functions (both in @file{bin/mkcatalog/mkcatalog.c}) and define new raw table columns in @file{main.h} (hopefully the comments in the code are clear enough).

In all these different places, the final columns are sorted in the same order (same order as @ref{Invoking astmkcatalog}).
This allows a particular column/option to be easily found in all steps.
Therefore in adding your new option, be sure to keep it in the same relative place in the list in all the separate places (it doesn't necessarily have to be in the end), and near conceptually similar options.

@table @file

@item main.h
The @code{objectcols} and @code{clumpcols} enumerated variables (@code{enum}) define the raw/internal calculation columns.
If your new column requires new raw calculations, add a row to the respective list.
If your calculation requires any other settings parameters, you should add a variable to the @code{mkcatalogparams} structure.

@item ui.c
If the new column needs raw calculations (an entry was added in @code{objectcols} and @code{clumpcols}), specify which inputs it needs in @code{ui_necessary_inputs}, similar to the other options.
Afterwards, if your column includes any particular settings (you needed to add a variable to the @code{mkcatalogparams} structure in @file{main.h}), you should do the sanity checks and preparations for it here.

@item ui.h
The @code{option_keys_enum} associates a unique value for each option to MakeCatalog.
For options that have a short version, the single-character short option is used for the value.
Those that don't have a short option version automatically get a large integer.
You should add a variable here to identify your desired column.


@cindex GNU C library
@item args.h
This file specifies all the parameters for the GNU C library's Argp structure, which is in charge of reading the user's options.
To define your new column, just copy an existing set of parameters and change the first, second and fifth values (the only ones that differ between the columns); you should use the macro you defined in @file{ui.h} here.


@item columns.c
This file contains the main definition and high-level calculation of your new column through the @code{columns_define_alloc} and @code{columns_fill} functions.
In the first, you specify the basic information about the column: its name, units, comments, type (see @ref{Numeric data types}) and how it should be printed if the output is a text file.
You should also specify the raw/internal columns that are necessary for this column here as the many existing examples show.
Through the respective types for objects and clumps, you can specify whether this column is only for clumps, only for objects, or both.

The second main function (@code{columns_fill}) writes the final value into the appropriate column for each object and clump.
As you can see in the many existing examples, you can define your processing on the raw/internal calculations here and save them in the output.

@item mkcatalog.c
This file contains the low-level parsing functions.
For optimal performance, the parsing is done in parallel through the @code{mkcatalog_single_object} function.
This function initializes the necessary arrays and calls the lower-level @code{parse_objects} and @code{parse_clumps} for actually going over the pixels.
They are all heavily commented, so you should be able to follow where to add your necessary low-level calculations.

@item doc/gnuastro.texi
Update this manual and add a description for the new column.

@end table





@node Invoking astmkcatalog,  , Adding new columns to MakeCatalog, MakeCatalog
@subsection Invoking MakeCatalog

MakeCatalog will do measurements and produce a catalog from a labeled dataset and optional values dataset(s).
The executable name is @file{astmkcatalog} with the following general template

@example
$ astmkcatalog [OPTION ...] InputImage.fits
@end example

@noindent
One line examples:

@example
## Create catalog with RA, Dec, Magnitude and Magnitude error,
## from Segment's output:
$ astmkcatalog --ra --dec --magnitude --magnitudeerr seg-out.fits

## Same catalog as above (using short options):
$ astmkcatalog -rdmG seg-out.fits

## Write the catalog to a text table:
$ astmkcatalog -mpQ seg-out.fits --output=cat.txt

## Output columns specified in `columns.conf':
$ astmkcatalog --config=columns.conf seg-out.fits

## Use object and clump labels from a K-band image, but pixel values
## from an i-band image.
$ astmkcatalog K_segmented.fits --hdu=DETECTIONS --clumpscat     \
               --clumpsfile=K_segmented.fits --clumpshdu=CLUMPS  \
               --valuesfile=i_band.fits
@end example

@cindex Gaussian
@noindent
If MakeCatalog is to do processing (not printing help or option values), an input labeled image should be provided.
The options described in this section are those particular to MakeCatalog.
For operations that MakeCatalog shares with other programs (mainly involving input/output or general processing steps), see @ref{Common options}.
Also see @ref{Common program behavior} for some general characteristics of all Gnuastro programs including MakeCatalog.

The various measurements/columns of MakeCatalog are requested as options, either on the command-line or in configuration files, see @ref{Configuration files}.
The full list of available columns is available in @ref{MakeCatalog measurements}.
Depending on the requested columns, MakeCatalog needs more than one input dataset, for more details, please see @ref{MakeCatalog inputs and basic settings}.
The upper-limit measurements in particular need several configuration options which are thoroughly discussed in @ref{Upper-limit settings}.
Finally, in @ref{MakeCatalog output} the output file(s) created by MakeCatalog are discussed.

@menu
* MakeCatalog inputs and basic settings::  Input files and basic settings.
* Upper-limit settings::        Settings for upper-limit measurements.
* MakeCatalog measurements::    Available measurements in MakeCatalog.
* MakeCatalog output::          File names of MakeCatalog's output table.
@end menu

@node MakeCatalog inputs and basic settings, Upper-limit settings, Invoking astmkcatalog, Invoking astmkcatalog
@subsubsection MakeCatalog inputs and basic settings

MakeCatalog works by using a localized/labeled dataset (see @ref{MakeCatalog}).
This dataset maps/labels pixels to a specific target (row number in the final catalog) and is thus the only necessary input dataset to produce a minimal catalog in any situation.
Because it only has labels/counters, it must have an integer type (see @ref{Numeric data types}), see below if your labels are in a floating point container.
When the requested measurements only need this dataset (for example @option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog won't read any more datasets.

Low-level measurements that only use the labeled image are rarely sufficient for any high-level science case.
Therefore necessary input datasets depend on the requested columns in each run.
For example, let's assume you want the brightness/magnitude and signal-to-noise ratio of your labeled regions.
For these columns, you will also need to provide an extra dataset containing values for every pixel of the labeled input (to measure brightness) and another for the Sky standard deviation (to measure error).
All such auxiliary input files have to have the same size (number of pixels in each dimension) as the input labeled image.
Their numeric data type is irrelevant (they will be converted to 32-bit floating point internally).
For the full list of available measurements, see @ref{MakeCatalog measurements}.

The ``values'' dataset is used for measurements like brightness/magnitude, or flux-weighted positions.
If it is a real image, by default it is assumed to be already Sky-subtracted prior to running MakeCatalog.
If it isn't, you can use the @option{--subtractsky} option so that MakeCatalog reads and subtracts the Sky dataset before any processing.
To obtain the Sky value, you can use the @option{--sky} option of @ref{Statistics}, but the best recommended method is @ref{NoiseChisel}, see @ref{Sky value}.
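
For example, the sketch below (hypothetical file names) reads the object labels from a Segment output, the values from a separate image, and the Sky from a NoiseChisel output (assuming its default @code{SKY} extension):

@example
$ astmkcatalog seg.fits --hdu=OBJECTS --valuesfile=image.fits \
               --skyfile=nc.fits --skyhdu=SKY --subtractsky \
               --ids --brightness
@end example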

MakeCatalog can also do measurements on sub-structures of detections.
In other words, it can produce two catalogs.
Following the nomenclature of Segment (see @ref{Segment}), the main labeled input dataset is known as ``object'' labels and the (optional) sub-structure input dataset is known as ``clumps''.
If MakeCatalog is run with the @option{--clumpscat} option, it will also need a labeled image containing clumps, similar to what Segment produces (see @ref{Segment output}).
Since clumps are defined within detected regions (they exist over signal, not noise), MakeCatalog uses their boundaries to subtract the level of signal under them.

There are separate options to explicitly request a file name and HDU/extension for each of the required input datasets as fully described below (with the @option{--*file} format).
When each dataset is in a separate file, these options are necessary.
However, one great advantage of the FITS file format (that is heavily used in astronomy) is that it allows the storage of multiple datasets in one file.
So in most situations (for example if you are using the outputs of @ref{NoiseChisel} or @ref{Segment}), all the necessary input datasets can be in one file.

When none of the @option{--*file} options are given, MakeCatalog will assume the necessary input datasets are in the file given as its argument (without any option).
When the Sky or Sky standard deviation datasets are necessary and the only @option{--*file} option called is @option{--valuesfile}, MakeCatalog will search for these datasets (with the default/given HDUs) in the file given to @option{--valuesfile} (before looking into the main argument file).

When the clumps image (necessary with the @option{--clumpscat} option) is used, MakeCatalog looks into the (possibly existing) @code{NUMLABS} keyword for the total number of clumps in the image (irrespective of how many objects there are).
If it is not present, MakeCatalog will count the clumps and possibly re-label them so that the clump labels always start at 1 and end with the total number of clumps in each object.
The re-labeled clumps image will be stored with the @file{-clumps-relab.fits} suffix.
This can slightly slow down the run.

Note that @code{NUMLABS} is automatically written by Segment in its outputs, so if you are feeding Segment's clump labels, you can benefit from the improved speed.
Otherwise, if you are creating the clumps label dataset manually, it may be good to include the @code{NUMLABS} keyword in its header and also be sure that there is no gap in the clump labels.
For example if an object has three clumps, they are labeled as 1, 2, 3.
If they are labeled as 1, 3, 4, or any other combination of three positive integers that are not consecutive, you might get unexpected behavior.
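
For example, with Gnuastro's Fits program (see @ref{Fits}), a command like the one below (with a hypothetical file name and clump count) should write the keyword:

@example
## Hypothetical: `clumps.fits' contains 123 clumps in total.
$ astfits clumps.fits --hdu=1 --write=NUMLABS,123,"Total clumps"
@end example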

It may happen that your labeled objects image was created with a program that only outputs floating point files.
However, you know it only has integer valued pixels that are stored in a floating point container.
In such cases, you can use Gnuastro's Arithmetic program (see @ref{Arithmetic}) to change the numerical data type of the image (@file{float.fits}) to an integer type image (@file{int.fits}) with a command like below:

@example
$ astarithmetic float.fits int32 --output=int.fits
@end example

To summarize: if the input file to MakeCatalog is the default/full output of Segment (see @ref{Segment output}) you don't have to worry about any of the @option{--*file} options below.
You can just give Segment's output file to MakeCatalog as described in @ref{Invoking astmkcatalog}.
To feed NoiseChisel's output into MakeCatalog, just select the labeled dataset with @option{--hdu=DETECTIONS}.
The full list of input dataset options and general setting options are described below.
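
For example, assuming hypothetical file names (and a hypothetical zero point), the two common cases above might look like this:

@example
## Labels from Segment's default output (`seg.fits').
$ astmkcatalog seg.fits --ids --ra --dec --magnitude \
               --zeropoint=22.5

## Labels from NoiseChisel's output (`nc.fits').
$ astmkcatalog nc.fits --hdu=DETECTIONS --ids --geox --geoy
@end example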

@table @option

@item -l STR
@itemx --clumpsfile=STR
The file containing the labeled clumps dataset when @option{--clumpscat} is called (see @ref{MakeCatalog output}).
When @option{--clumpscat} is called, but this option isn't, MakeCatalog will look into the main input file (given as an argument) for the required extension/HDU (value to @option{--clumpshdu}).

@item --clumpshdu=STR
The HDU/extension of the clump labels dataset.
It has to be an integer data type (see @ref{Numeric data types}) and only pixels with a value larger than zero will be used.
See @ref{Segment output} for a description of the expected format.

@item -v STR
@itemx --valuesfile=STR
The file name of the (sky-subtracted) values dataset.
When any of the columns need values to associate with the input labels (for example to measure the brightness/magnitude of a galaxy), MakeCatalog will look into a ``values'' dataset for the respective pixel values.
In most common processing, this is the actual astronomical image that the labels were defined, or detected, over.
The HDU/extension of this dataset in the given file can be specified with @option{--valueshdu}.
If this option is not called, MakeCatalog will look for the given extension in the main input file.

@item --valueshdu=STR/INT
The name or number (counting from zero) of the extension containing the ``values'' dataset, see the descriptions above and those in @option{--valuesfile} for more.

@item -s STR/FLT
@itemx --insky=STR/FLT
Sky value as a single number, or the file name containing a dataset (different values per pixel or tile).
The Sky dataset is only necessary when @option{--subtractsky} is called or when a column directly related to the Sky value is requested (currently @option{--sky}).
This dataset may be a tessellation, with one element per tile (see @option{--oneelempertile} of NoiseChisel's @ref{Processing options}).

When the Sky dataset is necessary but this option is not called, MakeCatalog will assume it is an HDU/extension (specified by @option{--skyhdu}) in one of the already given files.
First it will look for it in the @option{--valuesfile} (if it is given) and then the main input file (given as an argument).

By default the values dataset is assumed to be already Sky subtracted, so this dataset is not necessary for many of the columns.

@item --skyhdu=STR
HDU/extension of the Sky dataset, see @option{--insky}.

@item --subtractsky
Subtract the Sky value or dataset from the values file prior to any processing.

@item -t STR/FLT
@itemx --instd=STR/FLT
Sky standard deviation value as a single number, or the file name containing a dataset (different values per pixel or tile).
With the @option{--variance} option you can tell MakeCatalog to interpret this value/dataset as a variance image, not standard deviation.

@strong{Important note:} This must only be the Sky standard deviation or variance (not including the signal's contribution to the error).
In other words, the final standard deviation of a pixel depends on how much signal there is in it.
MakeCatalog will find the amount of signal within each pixel (while subtracting the Sky, if @option{--subtractsky} is called) and account for the extra error due to its value (signal).
Therefore if the input standard deviation (or variance) image also contains the contribution of signal to the error, then the final error measurements will be over-estimated.

@item --stdhdu=STR
The HDU of the Sky value standard deviation image.

@item --variance
The dataset given to @option{--instd} (and @option{--stdhdu}) has the Sky variance of every pixel, not the Sky standard deviation.

@item --forcereadstd
Read the input STD image even if it is not required by any of the requested columns.
This is because some of the output catalog's metadata may need it, for example to calculate the dataset's surface brightness limit (see @ref{Quantifying measurement limits}, configured with @option{--sfmagarea} and @option{--sfmagnsigma} in @ref{MakeCatalog output}).

@item -z FLT
@itemx --zeropoint=FLT
The zero point magnitude for the input image, see @ref{Brightness flux magnitude}.

@item --sigmaclip=FLT,FLT
The sigma-clipping parameters when any of the sigma-clipping related columns are requested (for example @option{--sigclip-median} or @option{--sigclip-number}).

This option takes two values: the first is the multiple of @mymath{\sigma}, and the second is the termination criteria.
If the latter is larger than 1, it is read as an integer number and will be the number of times to clip.
If it is smaller than 1, it is interpreted as the tolerance level to stop clipping. See @ref{Sigma clipping} for a complete explanation.
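
For example, the hypothetical call below clips at 3@mymath{\sigma} until the relative change in the standard deviation is below 0.2, then reports the sigma-clipped median and remaining number of pixels:

@example
$ astmkcatalog seg.fits --sigmaclip=3,0.2 \
               --sigclip-median --sigclip-number
@end example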

@item --fracmax=FLT[,FLT]
The fractions (one or two) of maximum value in objects or clumps to be used in the related columns, for example @option{--fracmaxarea1}, @option{--fracmaxsum1} or @option{--fracmaxradius1}, see @ref{MakeCatalog measurements}.
For the maximum value, see the description of @option{--maximum} column below.
The value(s) of this option must be larger than 0 and smaller than 1 (they are a fraction).
When only @option{--fracmaxarea1} or @option{--fracmaxsum1} is requested, one value must be given to this option, but if @option{--fracmaxarea2} or @option{--fracmaxsum2} are also requested, two values must be given to this option.
The values can be written as simple floating point numbers, or as fractions, for example @code{0.25,0.75} and @code{0.25,3/4} are the same.
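
For example, assuming a hypothetical labeled input, the call below requests the areas above a quarter and three-quarters of the maximum:

@example
$ astmkcatalog seg.fits --fracmax=0.25,0.75 \
               --fracmaxarea1 --fracmaxarea2
@end example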
@end table


@node Upper-limit settings, MakeCatalog measurements, MakeCatalog inputs and basic settings, Invoking astmkcatalog
@subsubsection Upper-limit settings

The upper-limit magnitude was discussed in @ref{Quantifying measurement limits}.
Unlike other measured values/columns in MakeCatalog, the upper limit magnitude needs several extra parameters which are discussed here.
All the options specific to the upper-limit measurements start with @option{up} for ``upper-limit''.
The only exception is @option{--envseed} that is also present in other programs and is general for any job requiring random number generation in Gnuastro (see @ref{Generating random numbers}).

@cindex Reproducibility
One very important consideration in Gnuastro is reproducibility.
Therefore, the values to all of these parameters along with others (like the random number generator type and seed) are also reported in the comments of the final catalog when the upper limit magnitude column is desired.
The random seed that is used to define the random positions for each object or clump is unique and set based on the (optionally) given seed, the total number of objects and clumps and also the labels of the clumps and objects.
So with identical inputs, an identical upper-limit magnitude will be found.
However, even if the seed is identical, when the ordering of the object/clump labels differs between different runs, the result of upper-limit measurements will not be identical.

MakeCatalog will randomly place the object/clump footprint over the dataset.
When the randomly placed footprint doesn't fall on any object or masked region (see @option{--upmaskfile}) it will be used in the final distribution.
Otherwise that particular random position will be ignored and another random position will be generated.
Finally, when the distribution has the desired number of successfully measured random samples (@option{--upnum}) the distribution's properties will be measured and placed in the catalog.

When the profile is very large or the image is significantly covered by detections, it might not be possible to find the desired number of samplings in a reasonable time.
MakeCatalog will continue searching until it is unable to find a successful position (since the last successful measurement@footnote{The counting of failed positions restarts on every successful measurement.}), for a large multiple of @option{--upnum} (currently@footnote{In Gnuastro's source, this constant number is defined as the @code{MKCATALOG_UPPERLIMIT_MAXFAILS_MULTIP} macro in @file{bin/mkcatalog/main.h}, see @ref{Downloading the source}.} this is 10).
If @option{--upnum} successful samples cannot be found until this limit is reached, MakeCatalog will set the upper-limit magnitude for that object to NaN (blank).

MakeCatalog will also print a warning if the range of positions available for the labeled region is smaller than double the size of the region.
In such cases, the limited range of random positions can artificially decrease the standard deviation of the final distribution.
If your dataset can allow it (it is large enough), it is recommended to use a larger range if you see such warnings.
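
As a minimal sketch of the above (with hypothetical values and file name), an upper-limit run might look like the commands below; with @option{--envseed}, the seed is taken from the environment for reproducibility:

@example
$ export GSL_RNG_SEED=1600000000
$ astmkcatalog seg.fits --ids --magnitude --upperlimitmag \
               --upnum=1000 --upnsigma=3 --envseed \
               --zeropoint=22.5
@end example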

@table @option

@item --upmaskfile=STR
File name of mask image to use for upper-limit calculation.
In some cases (especially when doing matched photometry), the object labels specified in the main input and mask image might not be adequate.
In other words they do not necessarily have to cover @emph{all} detected objects: the user might have selected only a few of the objects in their labeled image.
This option can be used to ignore regions in the image in these situations when estimating the upper-limit magnitude.
All the non-zero pixels of the image specified by this option (in the @option{--upmaskhdu} extension) will be ignored in the upper-limit magnitude measurements.

For example, when you are using labels from another image, you can give NoiseChisel's objects image output for this image as the value to this option.
In this way, you can be sure that regions with data do not harm your distribution.
See @ref{Quantifying measurement limits} for more on the upper limit magnitude.

@item --upmaskhdu=STR
The extension in the file specified by @option{--upmaskfile}.

@item --upnum=INT
The number of random samples to take for all the objects.
A larger value to this option will give a more accurate result (asymptotically), but it will also slow down the process.
When a randomly positioned sample overlaps with a detected/masked pixel it is not counted and another random position is found until the footprint completely lies over an undetected region.
So you can be sure that for each object, this many samples over undetected regions are made.
See the upper limit magnitude discussion in @ref{Quantifying measurement limits} for more.

@item --uprange=INT,INT
The range/width of the region (in pixels) to do random sampling along each dimension of the input image around each object's position.
This is not a mandatory option and if not given (or given a value of zero in a dimension), the full possible range of the dataset along that dimension will be used.
This is useful when the noise properties of the dataset vary gradually.
In such cases, using the full range of the input dataset is going to bias the result.
However, note that decreasing the range of available positions too much will also artificially decrease the standard deviation of the final distribution (and thus bias the upper-limit measurement).

@item --envseed
@cindex Seed, Random number generator
@cindex Random number generator, Seed
Read the random number generator type and seed value from the environment (see @ref{Generating random numbers}).
Random numbers are used in calculating the random positions of different samples of each object.

@item --upsigmaclip=FLT,FLT
The raw distribution of random values will not be used to find the upper-limit magnitude, it will first be @mymath{\sigma}-clipped (see @ref{Sigma clipping}) to avoid outliers in the distribution (mainly the faint undetected wings of bright/large objects in the image).
This option takes two values: the first is the multiple of @mymath{\sigma}, and the second is the termination criteria.
If the latter is larger than 1, it is read as an integer number and will be the number of times to clip.
If it is smaller than 1, it is interpreted as the tolerance level to stop clipping. See @ref{Sigma clipping} for a complete explanation.

@item --upnsigma=FLT
The multiple of the final (@mymath{\sigma}-clipped) standard deviation (or @mymath{\sigma}) used to measure the upper-limit brightness or magnitude.

@item --checkuplim=INT[,INT]
Print a table of positions and measured values for the full random distribution used for one particular object or clump.
If only one integer is given to this option, it is interpreted to be an object's label.
If two values are given, the first is the object label and the second is the ID of the requested clump within it.

The output is a table with three columns (its type is determined with the @option{--tableformat} option, see @ref{Input output options}).
The first two columns are the position of the first pixel in each random sampling of this particular object/clump.
The third column is the measured flux over that region.
If the region overlapped with a detection or masked pixel, then its measured value will be a NaN (not-a-number).
The total number of rows is thus unknown, but you can be sure that the number of rows with non-NaN measurements is the number given to the @option{--upnum} option.
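
For example, to inspect the random measurements of the (hypothetical) object with label 3, you might run something like the command below and view the resulting table with @ref{Table}:

@example
$ astmkcatalog seg.fits --checkuplim=3 --upnum=1000 \
               --upperlimitmag --zeropoint=22.5
@end example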

@end table


@node MakeCatalog measurements, MakeCatalog output, Upper-limit settings, Invoking astmkcatalog
@subsubsection MakeCatalog measurements

The final group of options particular to MakeCatalog are those that specify which measurements/columns should be written into the final output table.
The current measurements in MakeCatalog are those which only produce one final value for each label (for example its total brightness: a single number).
All the different labels' measurements can be written as one column in a final table/catalog that contains other columns for other similar single-number measurements.

The majority of this section is devoted to MakeCatalog's single-valued measurements.
However, MakeCatalog can also do measurements that produce more than one value for each label.
Currently the only such measurement is the generation of spectra from 3D cubes with the @option{--spectrum} option, which is discussed at the end of this section.

Command-line options are used to identify which measurements you want in the final catalog(s) and in what order.
If any of the options below is called on the command line or in any of the configuration files, it will be included as a column in the output catalog.
The order of the columns is in the same order as the options were seen by MakeCatalog (see @ref{Configuration file precedence}).
Some of the columns apply to both ``objects'' and ``clumps'' and some are particular to only one of them (for the definition of ``objects'' and ``clumps'', see @ref{Segment}).
Columns/options that are unique to one catalog (only objects, or only clumps), are explicitly marked with [Objects] or [Clumps] to specify the catalog they will be placed in.
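
For example, assuming a hypothetical labeled input (and zero point), the command below will produce a catalog whose columns are in the same order as the options:

@example
$ astmkcatalog seg.fits --ids --x --y --magnitude --sn \
               --zeropoint=22.5
@end example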

@table @option

@item -i
@itemx --ids
This is a unique option which can add multiple columns to the final catalog(s).
Calling this option will put the object IDs (@option{--objid}) in the objects catalog and host-object-ID (@option{--hostobjid}) and ID-in-host-object (@option{--idinhostobj}) into the clumps catalog.
Hence if only object catalogs are required, it has the same effect as @option{--objid}.

@item --objid
[Objects] ID of this object.

@item -j
@itemx --hostobjid
[Clumps] The ID of the object which hosts this clump.

@item --idinhostobj
[Clumps] The ID of this clump in its host object.

@item -x
@itemx --x
The flux weighted center of all objects and clumps along the first FITS axis (horizontal when viewed in SAO ds9), see @mymath{\overline{x}} in @ref{Measuring elliptical parameters}.
The weight has to have a positive value (pixel value larger than the Sky value) to be meaningful! Especially when doing matched photometry, this might not happen: no pixel value might be above the Sky value.
For such detections, the geometric center will be reported in this column (see @option{--geox}).
You can use @option{--weightarea} to see which was used.

@item -y
@itemx --y
The flux weighted center of all objects and clumps along the second FITS axis (vertical when viewed in SAO ds9).
See @option{--x}.

@item -z
@itemx --z
The flux weighted center of all objects and clumps along the third FITS
axis. See @option{--x}.

@item --geox
The geometric center of all objects and clumps along the first FITS axis.
The geometric center is the average of the pixel positions, irrespective of their pixel values.

@item --geoy
The geometric center of all objects and clumps along the second FITS axis, see @option{--geox}.

@item --geoz
The geometric center of all objects and clumps along the third FITS axis, see @option{--geox}.

@item --minvx
Position of pixel with minimum value in objects and clumps, along the first FITS axis.

@item --maxvx
Position of pixel with maximum value in objects and clumps, along the first FITS axis.

@item --minvy
Position of pixel with minimum value in objects and clumps, along the second FITS axis.

@item --maxvy
Position of pixel with maximum value in objects and clumps, along the second FITS axis.

@item --minvz
Position of pixel with minimum value in objects and clumps, along the third FITS axis.

@item --maxvz
Position of pixel with maximum value in objects and clumps, along the third FITS axis.

@item --minx
The minimum position of all objects and clumps along the first FITS axis.

@item --maxx
The maximum position of all objects and clumps along the first FITS axis.

@item --miny
The minimum position of all objects and clumps along the second FITS axis.

@item --maxy
The maximum position of all objects and clumps along the second FITS axis.

@item --minz
The minimum position of all objects and clumps along the third FITS axis.

@item --maxz
The maximum position of all objects and clumps along the third FITS axis.

@item --clumpsx
[Objects] The flux weighted center of all the clumps in this object along the first FITS axis.
See @option{--x}.

@item --clumpsy
[Objects] The flux weighted center of all the clumps in this object along the second FITS axis.
See @option{--x}.

@item --clumpsz
[Objects] The flux weighted center of all the clumps in this object along the third FITS axis.
See @option{--x}.

@item --clumpsgeox
[Objects] The geometric center of all the clumps in this object along the first FITS axis.
See @option{--geox}.

@item --clumpsgeoy
[Objects] The geometric center of all the clumps in this object along the second FITS axis.
See @option{--geox}.

@item --clumpsgeoz
[Objects] The geometric center of all the clumps in this object along the third FITS axis.
See @option{--geox}.

@item -r
@itemx --ra
Flux weighted right ascension of all objects or clumps, see @option{--x}.
This is just an alias for one of the lower-level @option{--w1} or @option{--w2} options.
Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which axis corresponds to the right ascension.
If no @code{CTYPE} keywords start with @code{RA}, an error will be printed when requesting this column and MakeCatalog will abort.

@item -d
@itemx --dec
Flux weighted declination of all objects or clumps, see @option{--x}.
This is just an alias for one of the lower-level @option{--w1} or @option{--w2} options.
Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which axis corresponds to the declination.
If no @code{CTYPE} keywords start with @code{DEC}, an error will be printed when requesting this column and MakeCatalog will abort.

@item --w1
Flux weighted first WCS axis of all objects or clumps, see @option{--x}.
The first WCS axis is commonly used as right ascension in images.

@item --w2
Flux weighted second WCS axis of all objects or clumps, see @option{--x}.
The second WCS axis is commonly used as declination in images.

@item --w3
Flux weighted third WCS axis of all objects or clumps, see
@option{--x}. The third WCS axis is commonly used as wavelength in integral
field unit data cubes.

@item --geow1
Geometric center in first WCS axis of all objects or clumps, see @option{--geox}.
The first WCS axis is commonly used as right ascension in images.

@item --geow2
Geometric center in second WCS axis of all objects or clumps, see @option{--geox}.
The second WCS axis is commonly used as declination in images.

@item --geow3
Geometric center in third WCS axis of all objects or clumps, see
@option{--geox}. The third WCS axis is commonly used as wavelength in
integral field unit data cubes.

@item --clumpsw1
[Objects] Flux weighted center in first WCS axis of all clumps in this object, see @option{--x}.
The first WCS axis is commonly used as right ascension in images.

@item --clumpsw2
[Objects] Flux weighted center in second WCS axis of all clumps in this object, see @option{--x}.
The second WCS axis is commonly used as declination in images.

@item --clumpsw3
[Objects] Flux weighted center in third WCS axis of all clumps in this object, see @option{--x}.
The third WCS axis is commonly used as wavelength in integral field unit data cubes.

@item --clumpsgeow1
[Objects] Geometric center in first WCS axis of all clumps in this object, see @option{--geox}.
The first WCS axis is commonly used as right ascension in images.

@item --clumpsgeow2
[Objects] Geometric center in second WCS axis of all clumps in this object, see @option{--geox}.
The second WCS axis is commonly used as declination in images.

@item --clumpsgeow3
[Objects] Geometric center in third WCS axis of all clumps in this object,
see @option{--geox}. The third WCS axis is commonly used as wavelength in
integral field unit data cubes.

@item -b
@itemx --brightness
The brightness (sum of all pixel values), see @ref{Brightness flux magnitude}.
For clumps, the ambient brightness (flux of river pixels around the clump multiplied by the area of the clump) is removed, see @option{--riverave}.
So the sum of all the clump brightnesses in the clumps catalog will be smaller than the total clump brightness in the @option{--clumpbrightness} column of the objects catalog.

If no usable pixels are present over the clump or object (for example they are all blank), the returned value will be NaN (note that zero is meaningful).

@item --brightnesserr
The (@mymath{1\sigma}) error in measuring the brightness of objects or clumps.

@item --clumpbrightness
[Objects] The total brightness of the clumps within an object.
This is simply the sum of the pixels associated with clumps in the object.
If no usable pixels are present over the clump or object (for example they are all blank), the stored value will be NaN (note that zero is meaningful).

@item --brightnessnoriver
[Clumps] The Sky (not river) subtracted clump brightness.
By definition, for the clumps, the average brightness of the rivers surrounding a clump is subtracted from it as a first-order accounting for contamination by neighbors.
In cases where you will be calculating the brightness difference later, the contamination will be (mostly) removed at that stage, which is why this column was added.

If no usable pixels are present over the clump or object (for example they are all blank), the stored value will be NaN (note that zero is meaningful).

@item --mean
The mean sky subtracted value of pixels within the object or clump.
For clumps, the average river flux is subtracted from the sky subtracted mean.

@item --median
The median sky subtracted value of pixels within the object or clump.
For clumps, the average river flux is subtracted from the sky subtracted median.

@item --maximum
The maximum value of pixels within the object or clump.
When the label (object or clump) is larger than three pixels, the maximum is actually derived from the mean of the brightest three pixels, not the single largest pixel value of the same label.
This is because noise fluctuations can be very strong in the extreme values of the objects/clumps due to Poisson noise (which gets stronger as the mean gets higher).
Simply using the maximum pixel value will create a strong scatter in results that depend on the maximum (for example the @option{--fwhm} option also uses this value internally).

@item --sigclip-number
The number of elements/pixels in the dataset after sigma-clipping the object or clump.
The sigma-clipping parameters can be set with the @option{--sigmaclip} option described in @ref{MakeCatalog inputs and basic settings}.
For more on Sigma-clipping, see @ref{Sigma clipping}.

@item --sigclip-median
The sigma-clipped median value of the object or clump's pixel distribution.
For more on sigma-clipping and how to define it, see @option{--sigclip-number}.

@item --sigclip-mean
The sigma-clipped mean value of the object or clump's pixel distribution.
For more on sigma-clipping and how to define it, see @option{--sigclip-number}.

@item --sigclip-std
The sigma-clipped standard deviation of the object or clump's pixel distribution.
For more on sigma-clipping and how to define it, see @option{--sigclip-number}.

@item -m
@itemx --magnitude
The magnitude of clumps or objects, see @option{--brightness}.

@item -e
@itemx --magnitudeerr
The magnitude error of clumps or objects.
The magnitude error is calculated from the signal-to-noise ratio (see @option{--sn} and @ref{Quantifying measurement limits}).
Note that for now, this error assumes uncorrelated pixel values and also does not include the error in estimating the aperture (or the error in generating the labeled image).

For now these factors have to be found by other means.
@url{https://savannah.gnu.org/task/index.php?14124, Task 14124} has been defined for work on adding these sources of error too.

@item --clumpsmagnitude
[Objects] The magnitude of all clumps in this object, see @option{--clumpbrightness}.

@item --upperlimit
The upper limit value (in units of the input image) for this object or clump.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a complete explanation.
This is very important for the fainter and smaller objects in the image where the measured magnitudes are not reliable.

@item --upperlimitmag
The upper limit magnitude for this object or clump.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a complete explanation.
This is very important for the fainter and smaller objects in the image where the measured magnitudes are not reliable.

@item --upperlimitonesigma
The @mymath{1\sigma} upper limit value (in units of the input image) for this object or clump.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a complete explanation.
When @option{--upnsigma=1}, this column's values will be the same as @option{--upperlimit}.

@item --upperlimitsigma
The position of the total brightness measured within the distribution of randomly placed upperlimit measurements in units of the distribution's @mymath{\sigma} or standard deviation.
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a complete explanation.

@item --upperlimitquantile
The position of the total brightness measured within the distribution of randomly placed upperlimit measurements as a quantile (a value between 0 and 1).
See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a complete explanation.
If the object is brighter than the brightest randomly placed profile, a value of @code{inf} is returned.
If it is less than the minimum, a value of @code{-inf} is reported.

@item --upperlimitskew
@cindex Skewness
This column contains the non-parametric skew of the @mymath{\sigma}-clipped random distribution that was used to estimate the upper-limit magnitude.
Taking @mymath{\mu} as the mean, @mymath{\nu} as the median and @mymath{\sigma} as the standard deviation, the non-parametric skewness is defined as @mymath{(\mu-\nu)/\sigma}.

This can be a good measure of how much you can trust the random measurements, or in other words, how accurately the regions with signal have been masked/detected.
If the skewness is strong (and positive), you can tell that there is a lot of undetected signal in the dataset, and therefore that the upper-limit measurement (and other measurements) are not reliable.

@item --riverave
[Clumps] The average brightness of the river pixels around this clump.
River pixels were defined in Akhlaghi and Ichikawa 2015.
In short they are the pixels immediately outside of the clumps.
This value is used internally to find the brightness (or magnitude) and signal to noise ratio of the clumps.
It can generally also be used as a scale to gauge the base (ambient) flux surrounding the clump.
In case there were no river pixels, this column will have the value of the Sky under the clump.
So note that this value is @emph{not} sky subtracted.

@item --rivernum
[Clumps] The number of river pixels around this clump, see
@option{--riverave}.

@item -n
@itemx --sn
The signal-to-noise ratio (S/N) of all clumps or objects.
See Akhlaghi and Ichikawa (2015) for the exact equations used.

@item --sky
The sky flux (per pixel) value under this object or clump.
This is actually the mean value of all the pixels in the sky image that lie on the same position as the object or clump.

@item --std
The sky value standard deviation (per pixel) for this clump or object.
This is the square root of the mean variance under the object, or the root mean square.

@item -C
@itemx --numclumps
[Objects] The number of clumps in this object.

@item -a
@itemx --area
The raw area (number of pixels/voxels) of the clump or object that was used in the measurements; NaN/blank or otherwise unused pixels are not counted (compare with @option{--geoarea}).

@item --areaarcsec2
The used (non-blank in values image) area of the labeled region in units of arc-seconds squared.
This column is just the value of the @option{--area} column, multiplied by the area of each pixel in the input image (in units of arcsec^2).
Similar to the @option{--ra} or @option{--dec} columns, for this option to work, the objects extension used has to have a WCS structure.

@item --areaminv
The number of pixels that are equal to the minimum value of the labeled region (clump or object).

@item --areamaxv
The number of pixels that are equal to the maximum value of the labeled region (clump or object).

@item --surfacebrightness
The surface brightness (in units of mag/arcsec@mymath{^2}) of the labeled region (objects or clumps).
For more on the definition of the surface brightness, see @ref{Brightness flux magnitude}.

@item --areaxy
@cindex IFU: Integral Field Unit
@cindex Integral Field Unit
Similar to @option{--area}, when the clump or object is projected onto the
first two dimensions. This is only available for 3-dimensional
datasets. When working with Integral Field Unit (IFU) datasets, this
projection onto the first two dimensions would be a narrow-band image.

@item --fwhm
@cindex FWHM
The full width at half maximum (in units of pixels, along the semi-major axis) of the labeled region (object or clump).
The maximum value is estimated from the mean of the top-three pixels with the highest values, see the description under @option{--maximum}.
The number of pixels that have half the value of that maximum are then found (value in the @option{--halfmaxarea} column) and a radius is estimated from the area.
See the description under @option{--halfsumradius} for more on converting area to radius along major axis.

Because of its non-parametric nature, this column is most reliable on clumps and should only be used in objects with great caution.
This is because objects can have more than one clump (peak with true signal) and multiple peaks are not treated separately in objects, so the result of this column will be biased.

Also, because of its non-parametric nature, this FWHM does not account for the PSF, and it will be strongly affected by noise if the object is faint.
So when half the maximum value (which can be requested using the @option{--maximum} column) is too close to the local noise level (which can be requested using the @option{--std} column), the value returned in this column is meaningless (it is just noise peaks which are randomly distributed over the area).
You can therefore use the @option{--maximum} and @option{--std} columns to remove unreliable FWHMs.
For example if a labeled region's maximum is less than 2 times the sky standard deviation, the value will certainly be unreliable (half of that is @mymath{1\sigma}!).
For a more reliable value, this fraction should be around 4 (so half the maximum is 2@mymath{\sigma}).
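
For example, with a hypothetical output catalog (and assuming the column names @code{MAXIMUM}, @code{STD} and @code{FWHM}; check your own catalog's metadata), you could filter the unreliable rows with AWK:

@example
## Keep only rows where the maximum is above 4 times the
## Sky standard deviation.
$ asttable cat.fits -cMAXIMUM,STD,FWHM | awk '$1 > 4*$2'
@end example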

@item --halfmaxarea
The number of pixels with values larger than half the maximum flux within the labeled region.
This option is used to estimate @option{--fwhm}, so please read the notes there for the caveats and necessary precautions.

@item --halfmaxradius
The radius of the region containing half the maximum flux within the labeled region.
This is just half the value reported by @option{--fwhm}.

@item --halfmaxsum
The sum of the pixel values containing half the maximum flux within the labeled region (or those that are counted in @option{--halfmaxarea}).
This option uses the pixels within @option{--fwhm}, so please read the notes there for the caveats and necessary precautions.

@item --halfmaxsb
The surface brightness (in units of mag/arcsec@mymath{^2}) within the region that contains half the maximum value of the labeled region.
For more on the definition of the surface brightness, see @ref{Brightness flux magnitude}.

@item --halfsumarea
The number of pixels that contain half the object or clump's total sum of pixels (half the value in the @option{--brightness} column).
To count this area, all the non-blank values associated with the given label (object or clump) will be sorted and summed in order (starting from the maximum), until the sum becomes larger than half the total sum of the label's pixels.

This option is thus good for clumps (which are defined to have a single peak in their morphology), but for objects you should be careful: they may have multiple peaks and the sorting by value does not account for morphology.
So if the object includes multiple peaks/clumps at roughly the same level, then the area reported by this option will be distributed over all the peaks.

@item --halfsumsb
Surface brightness (in units of mag/arcsec@mymath{^2}) within the area that contains half the total sum of the label's pixels (object or clump).
For more on the definition of the surface brightness, see @ref{Brightness flux magnitude}.

This column just plugs in the values of half the value of the @option{--brightness} column and the @option{--halfsumarea} column, into the surface brightness equation.
Therefore please see the description in @option{--halfsumarea} to understand the systematics of this column and potential biases.

@item --halfsumradius
Radius (in units of pixels) derived from the area that contains half the total sum of the label's pixels (value reported by @option{--halfsumarea}).
If the area is @mymath{A_h} and the axis ratio is @mymath{q}, then the value returned in this column is @mymath{\sqrt{A_h/({\pi}q)}}.
This option is a good measure of the concentration of the @emph{observed} (after PSF convolution and with noise) object or clump.
But as described below, it underestimates the effective radius.
Also, it should be used with caution on objects; it is more reliable with clumps, see the note under @option{--halfsumarea}.

@cindex Ellipse area
@cindex Area, ellipse
Recall that in general, for an ellipse with semi-major axis @mymath{a}, semi-minor axis @mymath{b}, and axis ratio @mymath{q=b/a} the area (@mymath{A}) is @mymath{A={\pi}ab={\pi}qa^2}.
For a circle (where @mymath{q=1}), this simplifies to the familiar @mymath{A={\pi}a^2}.
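
For example, for a hypothetical label with a half-sum area of @mymath{A_h=100} pixels and axis ratio @mymath{q=0.5}, this column would report @mymath{\sqrt{100/(0.5\pi)}\approx8} pixels.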

@cindex S@'ersic profile
@cindex Effective radius
The suffix @code{obs} is added to the option name to avoid confusing this with a profile's effective radius for S@'ersic profiles, commonly written as @mymath{r_e}.
For more on @mymath{r_e}, please see @ref{Galaxies}.
Therefore, when @mymath{r_e} is meaningful for the target (the target is elliptically symmetric and can be parameterized as a S@'ersic profile), @mymath{r_e} should be derived from fitting the profile with a S@'ersic function which has been convolved with the PSF.
But from the equation above, you see that this radius is derived from the raw image's labeled values (after convolution, with no parametric profile), so this column's value will generally be (much) smaller than @mymath{r_e}, depending on the PSF, depth of the dataset, the morphology, or if a fraction of the profile falls on the edge of the image.

@item --fracmaxarea1
@itemx --fracmaxarea2
Number of pixels brighter than the given fraction(s) of the maximum pixel value.
For the maximum value, see the description of @option{--maximum} column.
The fraction(s) are given through the @option{--fracmax} option (that can take two values) and is described in @ref{MakeCatalog inputs and basic settings}.
Recall that in @option{--halfmaxarea}, the fraction is fixed to 0.5.
Hence, together with these two columns, you can sample three parts of the profile's area.

@item --fracmaxsum1
@itemx --fracmaxsum2
Sum of pixels brighter than the given fraction(s) of the maximum pixel value.
For the maximum value, see the description of the @option{--maximum} column above.
The fraction(s) are given through the @option{--fracmax} option (that can take two values) and is described in @ref{MakeCatalog inputs and basic settings}.
Recall that in @option{--halfmaxsum}, the fraction is fixed to 0.5.
Hence, together with these two columns, you can sample three parts of the profile's sum of pixels.

@item --fracmaxradius1
@itemx --fracmaxradius2
Radius (in units of pixels) derived from the area that contains the given fractions of the maximum valued pixel(s) of the label's pixels (value reported by @option{--fracmaxarea1} or @option{--fracmaxarea2}).
For the maximum value, see the description of the @option{--maximum} column above.
The fractions are given through the @option{--fracmax} option (that can take two values) and is described in @ref{MakeCatalog inputs and basic settings}.
Recall that in @option{--fwhm}, the fraction is fixed to 0.5.
Hence, together with these two columns, you can sample three parts of the profile's radius.

@item --clumpsarea
[Objects] The total area of all the clumps in this object.

@item --weightarea
The area (number of pixels) used in the flux weighted position calculations.

@item --geoarea
The area of all the pixels labeled with an object or clump.
Note that unlike @option{--area}, pixel values are completely ignored in this column.
For example, if a pixel value is blank, it won't be counted in @option{--area}, but will be counted here.

@item --geoareaxy
Similar to @option{--geoarea}, when the clump or object is projected onto
the first two dimensions. This is only available for 3-dimensional
datasets. When working with Integral Field Unit (IFU) datasets, this
projection onto the first two dimensions would be a narrow-band image.

@item -A
@itemx --semimajor
The pixel-value weighted root mean square (RMS) along the semi-major axis of the profile (assuming it is an ellipse) in units of pixels.
See @ref{Measuring elliptical parameters}.

@item -B
@itemx --semiminor
The pixel-value weighted root mean square (RMS) along the semi-minor axis of the profile (assuming it is an ellipse) in units of pixels.
See @ref{Measuring elliptical parameters}.

@item --axisratio
The pixel-value weighted axis ratio (semi-minor/semi-major) of the object or clump.

@item -p
@itemx --positionangle
The pixel-value weighted angle of the semi-major axis with the first FITS
axis in degrees.
See @ref{Measuring elliptical parameters}.

@item --geosemimajor
The geometric (ignoring pixel values) root mean square (RMS) along the semi-major axis of the profile, assuming it is an ellipse, in units of pixels.

@item --geosemiminor
The geometric (ignoring pixel values) root mean square (RMS) along the semi-minor axis of the profile, assuming it is an ellipse, in units of pixels.

@item --geoaxisratio
The geometric (ignoring pixel values) axis ratio of the profile, assuming
it is an ellipse.

@item --geopositionangle
The geometric (ignoring pixel values) angle of the semi-major axis with the first FITS axis in degrees.

@end table

@cindex 3D data-cubes
@cindex Cubes (3D data)
@cindex IFU: Integral Field Unit
@cindex Integral field unit (IFU)
@cindex Spectrum (of astronomical source)
Above, all of MakeCatalog's single-valued measurements were listed. As
mentioned in the start of this section, MakeCatalog can also do
multi-valued measurements per label. Currently the only such measurement is
the creation of spectra from 3D data cubes as discussed below:

@table @option
@item --spectrum
Generate a spectrum (measurement along the first two FITS dimensions) for
each label when the input dataset is a 3D data cube. With this option, a
separate table/spectrum will be generated for every label. If the output is
a FITS file, each label's spectrum will be written into an extension of
that file with a standard name of @code{SPECTRUM_NN} (the label will be
replaced with @code{NN}). If the output is a plain text file, each label's
spectrum will be written into a separate file with the suffix
@file{spec-NN.txt}. See @ref{MakeCatalog output} for more on specifying
MakeCatalog's output file.

The spectra will contain one row for every slice (third FITS dimension) of
the cube. Since the physical nature of the third dimension is different,
two types of spectra (along with their errors) are measured: 1) Sum of
values in each slice that only have the requested label. 2) Sum of values
on the 2D projection of the whole label (the area of this projection can be
requested with the @option{--areaxy} column above).

Labels can overlap when they are projected onto the first two FITS
dimensions (the spatial domain). To help separate them, MakeCatalog does a
third measurement on each slice: the area, sum of values and error of all
pixels that belong to other labels but overlap with the 2D projection. This
can be used to see how reliable the emission line measurement is (on the
projected spectra) and also if multiple lines (labeled regions) belong to
the same physical object.

@item --inbetweenints
Output will contain one row for all integers between 1 and the largest label in the input (irrespective of their existence in the input image).
By default, MakeCatalog's output will only contain rows with integers that actually corresponded to at least one pixel in the input dataset.

For example if the input's only labeled pixel values are 11 and 13, MakeCatalog's default output will only have two rows.
If you use this option, it will have 13 rows and all the columns corresponding to integer identifiers that didn't correspond to any pixel will be 0 or NaN (depending on context).
@end table
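
For example, assuming a hypothetical labeled 3D cube (for example, from Segment), a run producing one spectrum per label might look like this:

@example
$ astmkcatalog seg-cube.fits --spectrum --output=spectra.fits
@end example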



@node MakeCatalog output,  , MakeCatalog measurements, Invoking astmkcatalog
@subsubsection MakeCatalog output
After it has completed all the requested measurements (see @ref{MakeCatalog measurements}), MakeCatalog will store its measurements in table(s).
If an output filename is given (see @option{--output} in @ref{Input output options}), the format of the table will be deduced from the name.
When it isn't given, the input name will be appended with a @file{_cat} suffix (see @ref{Automatic output}) and its format will be determined from the @option{--tableformat} option, which is also discussed in @ref{Input output options}.
@option{--tableformat} is also necessary when the requested output name is a FITS table (recall that FITS can accept ASCII and binary tables, see @ref{Table}).

By default (when @option{--spectrum} isn't called) only a single catalog/table will be created for ``objects'', however, if @option{--clumpscat} is called, a secondary catalog/table will also be created.
For more on ``objects'' and ``clumps'', see @ref{Segment}.
In short, if you only have one set of labeled images, you don't have to worry about clumps (they are deactivated by default).

When @option{--spectrum} is called, it is not mandatory to specify any single-valued measurement columns. In this case, the output will only be the spectra of each labeled region. See the description of @option{--spectrum} in @ref{MakeCatalog measurements}.

The full list of MakeCatalog's output options are elaborated below.

@table @option
@item -C
@itemx --clumpscat
Do measurements on clumps and produce a second catalog (only devoted to clumps).
When this option is given, MakeCatalog will also look for a secondary labeled dataset (identifying substructure) and produce a catalog from that.
For more on the definition on ``clumps'', see @ref{Segment}.

When the output is a FITS file, the objects and clumps catalogs/tables will be stored as multiple extensions of one FITS file.
You can use @ref{Table} to inspect the column meta-data and contents in this case.
However, in plain text format (see @ref{Gnuastro text table format}), it is only possible to keep one table per file.
Therefore, if the output is a text file, two output files will be created, ending in @file{_o.txt} (for objects) and @file{_c.txt} (for clumps).

@item --noclumpsort
Don't sort the clumps catalog based on object ID (only relevant with @option{--clumpscat}).
This option will benefit the performance@footnote{The performance boost due to @option{--noclumpsort} can only be felt when there are a huge number of objects.
Therefore, by default the output is sorted to avoid misunderstandings or bugs in the user's scripts when the user forgets to sort the outputs.} of MakeCatalog when it is run on multiple threads @emph{and} the position of the rows in the clumps catalog is irrelevant (for example you just want the number-counts).

MakeCatalog does all its measurements on each @emph{object} independently and in parallel.
As a result, while it is writing the measurements on each object's clumps, it doesn't know how many clumps there were in previous objects.
Each thread will just fetch the first available row and write the information of clumps (in order) starting from that row.
After all the measurements are done, by default (when this option isn't called), MakeCatalog will reorder/permute the clumps catalog to have both the object and clump ID in an ascending order.

If you would like to order the catalog later (when it is a plain text file), you can run the following command to sort the rows by object ID (and clump ID within each object), assuming they are respectively the first and second columns:

@example
$ awk '!/^#/' out_c.txt | sort -g -k1,1 -k2,2
@end example

@item --sfmagnsigma=FLT
Value to multiply with the median standard deviation (from a @code{MEDSTD} keyword in the Sky standard deviation image) for estimating the surface brightness limit.
Note that the surface brightness limit is only reported when a standard deviation image is read, in other words a column using it is requested (for example @option{--sn}) or @option{--forcereadstd} is called.

This value is a per-pixel value, not per object/clump, and is not measured over an area or aperture, like the @mymath{5\sigma} values that are commonly reported as a measure of depth or the upper-limit measurements (see @ref{Quantifying measurement limits}).

@item --sfmagarea=FLT
Area (in arc-seconds squared) to convert the per-pixel estimation of @option{--sfmagnsigma} in the comments section of the output tables.
Note that the surface brightness limit is only reported when a standard deviation image is read, in other words a column using it is requested (for example @option{--sn}) or @option{--forcereadstd} is called.

Note that this is just a unit conversion using the World Coordinate System (WCS) information in the input's header.
It does not actually do any measurements on this area.
For random measurements on any area, please use the upper-limit columns of MakeCatalog (see the discussion on upper-limit measurements in @ref{Quantifying measurement limits}).
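
For example, the hypothetical call below would report the @mymath{3\sigma} surface brightness limit over an area of 25 arcsec@mymath{^2} in the output's comments (@option{--sn} here ensures the standard deviation image is read):

@example
$ astmkcatalog seg.fits --ids --magnitude --sn \
               --zeropoint=22.5 --sfmagnsigma=3 --sfmagarea=25
@end example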
@end table




@node Match,  , MakeCatalog, Data analysis
@section Match

Data can come from different telescopes, filters, software and even different configurations for a single software.
As a result, one of the primary things to do after generating catalogs from each of these sources (for example with @ref{MakeCatalog}), is to find which sources in one catalog correspond to which in the other(s).
In other words, to `match' the two catalogs with each other.

Gnuastro's Match program is in charge of such operations.
The nearest objects in the two catalogs, within the given aperture, will be found and given as output.
The aperture can be a circle or an ellipse with any orientation.

@menu
* Invoking astmatch::           Inputs, outputs and options of Match
@end menu

@node Invoking astmatch,  , Match, Match
@subsection Invoking Match

When given two catalogs, Match finds the rows that are nearest to each other within an input aperture.
The executable name is @file{astmatch} with the following general template

@example
$ astmatch [OPTION ...] input-1 input-2
@end example

@noindent
One line examples:

@example
## 1D wavelength match (within 5 angstroms) of the two inputs.
## The wavelengths are in the 5th and 10th columns respectively.
$ astmatch --aperture=5e-10 --ccol1=5 --ccol2=10 in1.fits in2.txt

## Match the two catalogs with a circular aperture of width 2.
## (Units same as given positional columns).
## (By default two columns are given for `--ccol1' and `--ccol2',
##  The number of values to these determines the dimensions).
$ astmatch --aperture=2 input1.txt input2.fits

## Similar to before, but the output is created by merging various
## columns from the two inputs: columns 1, RA, DEC from the first
## input, followed by all columns starting with `MAG' and the `BRG'
## column from second input and finally the 10th from first input.
$ astmatch --aperture=2 input1.txt input2.fits                   \
           --outcols=a1,aRA,aDEC,b/^MAG/,bBRG,a10

## Match the two catalogs within an elliptical aperture of 1 and 2
## arc-seconds along RA and Dec respectively.
$ astmatch --aperture=1/3600,2/3600 in1.fits in2.txt

## Match the RA and DEC columns of the first input with the RA_D
## and DEC_D columns of the second within a 0.5 arcseconds aperture.
$ astmatch --ccol1=RA,DEC --ccol2=RA_D,DEC_D --aperture=0.5/3600  \
           in1.fits in2.fits

## Match in 3D (RA, Dec and Wavelength).
$ astmatch --ccol1=2,3,4 --ccol2=2,3,4 -a0.5/3600,0.5/3600,5e-10 \
           in1.fits in2.txt
@end example

Match will find the rows that are nearest to each other in two catalogs (given some coordinate columns).
Therefore two catalogs are necessary for input.
However, they don't necessarily have to be files:
1) the first catalog can also come from the standard input (for example a pipe, see @ref{Standard input});
2) when only one point is needed, you can use the @option{--coord} option to avoid creating a file for the second catalog.
When the inputs are files, they can be plain text tables or FITS tables, for more see @ref{Tables}.

Match follows the same basic behavior of all Gnuastro programs as fully described in @ref{Common program behavior}.
If the first input is a FITS file, the common @option{--hdu} option (see @ref{Input output options}) should be used to identify the extension.
When the second input is FITS, the extension must be specified with @option{--hdu2}.

When @option{--quiet} is not called, Match will print the number of matches found in standard output (on the command-line).
When matches are found, by default, the output file(s) will be the re-arranged input tables such that the rows match each other: both output tables will have the same number of rows which are matched with each other.
If @option{--outcols} is called, the output is a single table with rows chosen from either of the two inputs in any order.
If the @option{--logasoutput} option is called, the output will be a single table with the contents of the log file, see below.
If no matches are found, the columns of the output table(s) will have zero rows (with proper meta-data).

If no output file name is given with the @option{--output} option, then automatic output @ref{Automatic output} will be used to determine the output name(s).
Depending on @option{--tableformat} (see @ref{Input output options}), the output will then be a (possibly multi-extension) FITS file or (possibly two) plain text file(s).
When the output is a FITS file, the default re-arranged inputs will be two extensions of the output FITS file.
With @option{--outcols} and @option{--logasoutput}, the FITS output will be a single table (in one extension).

When the @option{--log} option is called (see @ref{Operating mode options}), and there was a match, Match will also create a file named @file{astmatch.fits} (or @file{astmatch.txt}, depending on @option{--tableformat}, see @ref{Input output options}) in the directory it is run in.
This log table will have three columns.
The first and second columns show the matching row/record number (counting from 1) of the first and second input catalogs respectively.
The third column is the distance between the two matched positions.
The units of the distance are the same as the given coordinates (given the possible ellipticity, see description of @option{--aperture} below).
When @option{--logasoutput} is called, no log file (with a fixed name) will be created.
In this case, the output file (possibly given by the @option{--output} option) will have the contents of this log file.

@cartouche
@noindent
@strong{@option{--log} isn't thread-safe}: As described above, when @option{--logasoutput} is not called, the log file has a fixed name for all calls to Match.
Therefore if a separate log is requested in two simultaneous calls to Match in the same directory, Match will try to write to the same file.
This will cause problems, for example a corrupted log file, undefined behavior, or a crash.
@end cartouche

@table @option
@item -H STR
@itemx --hdu2=STR
The extension/HDU of the second input if it is a FITS file.
When it isn't a FITS file, this option's value is ignored.
For the first input, the common option @option{--hdu} must be used.

@item --outcols=STR
Columns (from both inputs) to write into a single matched table output.
The value given to @option{--outcols} must be a comma-separated list of strings.
The first character of each string specifies the input catalog: @option{a} for the first and @option{b} for the second.
The rest of the characters of the string will be directly used to identify the proper column(s) in the respective table.
See @ref{Selecting table columns} for how columns can be specified in Gnuastro.

For example the output of @option{--outcols=a1,bRA,bDEC} will have three columns: the first column of the first input, along with the @option{RA} and @option{DEC} columns of the second input.

If the string after @option{a} or @option{b} is @option{_all}, then all the columns of the respective input file will be written in the output.
For example the command below will print all the input columns from the first catalog along with the 5th column from the second:

@example
$ astmatch a.fits b.fits --outcols=a_all,b5
@end example

@code{_all} can be used multiple times, possibly on both inputs.
Tip: if an input's column is called @code{_all} (an unlikely name!) and you don't want all of that table's columns in the output, use its column number to avoid confusion.

Another example is given in the one-line examples above.
Compared to the default case (where two tables with all their columns are saved separately), using this option is much faster: it will only read and re-arrange the necessary columns and it will write a single output table.
Combined with regular expressions in large tables, this can be a very powerful and convenient way to merge various tables into one.

When @option{--coord} is given, no second catalog will be read.
The second catalog will be created internally based on the values given to @option{--coord}.
So column names aren't defined and you can only request integer column numbers that are less than the number of coordinates given to @option{--coord}.
For example if you want to find the row matching RA of 1.2345 and Dec of 6.7890, then you should use @option{--coord=1.2345,6.7890}.
But when using @option{--outcols}, you can't give @code{bRA} or @code{b25}.
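
For example, the hypothetical command below prints the first three columns of the row in @file{cat.fits} that is nearest to the given point (only @option{a} columns are requested, since the second ``catalog'' is just the given coordinate):

@example
$ astmatch cat.fits --ccol1=RA,DEC --coord=1.2345,6.7890 \
           --aperture=1/3600 --outcols=a1,a2,a3
@end example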

@item -l
@itemx --logasoutput
The output file will have the contents of the log file: indexes in the two catalogs that match with each other along with their distance.
See description above.
When this option is called, a separate log file (with the fixed name described above) will not be created.
With this option, the default output behavior (two tables containing the re-arranged inputs) is also disabled: the only output is the single log table.
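
For example, with the hypothetical command below, the output will only contain the matching row numbers of the two inputs and their distance:

@example
$ astmatch in1.fits in2.fits --aperture=1/3600 --logasoutput \
           --output=matched-log.fits
@end example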

@item --notmatched
Write the non-matching rows into the outputs, not the matched ones.
Note that with this option, the two output tables will not necessarily have the same number of rows.
Therefore, this option cannot be called with @option{--outcols}.
@option{--outcols} prints mixed columns from both inputs, so they must all have the same number of elements and must correspond to each other.

@item -c INT/STR[,INT/STR]
@itemx --ccol1=INT/STR[,INT/STR]
The coordinate columns of the first input.
The number of dimensions for the match is determined by the number of comma-separated values given to this option.
The values can be the column number (counting from 1), exact column name or a regular expression.
For more, see @ref{Selecting table columns}.
See the one-line examples above for some usages of this option.

@item -C INT/STR[,INT/STR]
@itemx --ccol2=INT/STR[,INT/STR]
The coordinate columns of the second input.
See the example in @option{--ccol1} for more.

@item -d FLT[,FLT]
@itemx --coord=FLT[,FLT]
Manually specify the coordinates to match against the given catalog.
With this option, Match will not look for a second input file/table and will directly use the coordinates given to this option.

When this option is called, the output changes in the following ways:
1) When @option{--outcols} is specified, for the second input it can only accept integer column numbers that are less than the number of values given to this option (see the description of that option above).
2) By default (when @option{--outcols} isn't used), only the matching row of the first table will be output (a single file), not two separate files (one for each table).

This option is good when you have a (large) catalog and only want to match a single coordinate to it (for example to find the nearest catalog entry to your desired point).
With this option, you can write the coordinates on the command-line and thus avoid the need to make a single-row file.
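
For example, the hypothetical command below finds the row of @file{cat.fits} that is nearest to the given RA and Dec, within half an arc-second:

@example
$ astmatch cat.fits --ccol1=RA,DEC --coord=53.1625,-27.7914 \
           --aperture=0.5/3600
@end example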

@item -a FLT[,FLT[,FLT]]
@itemx --aperture=FLT[,FLT[,FLT]]
Parameters of the aperture for matching.
The values given to this option can be fractions, for example when the position columns are in units of degrees, @option{1/3600} can be used to ask for one arc-second.
The interpretation of the values depends on the requested dimensions (determined from @option{--ccol1} and @option{--ccol2}) and how many values are given to this option.

When multiple objects are found within the aperture, the match is defined as the nearest one.
In a multi-dimensional dataset, when the aperture is a general ellipse or ellipsoid (and not a circle or sphere), the distance is calculated in the elliptical space along the major axis.
For the definition of this distance, see @mymath{r_{el}} in @ref{Defining an ellipse and ellipsoid}.

@table @asis
@item 1D match
The aperture/interval can only take one value: half of the interval around each point (maximum distance from each point).
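
For example, the hypothetical command below matches the @code{WAVELENGTH} columns of two spectral tables to within 5 Angstroms (assuming both wavelength columns are in Angstroms):

@example
$ astmatch spec1.fits spec2.fits --ccol1=WAVELENGTH \
           --ccol2=WAVELENGTH --aperture=5
@end example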

@item 2D match
In a 2D match, the aperture can be a circle, an ellipse aligned with the axes, or an ellipse with a rotated major axis.
To simplify the usage, the shape is determined by the number of values given to this option.

@table @asis
@item 1 number
For example @option{--aperture=2}.
The aperture will be a circle of the given radius.
The value will be in the same units as the columns in @option{--ccol1} and @option{--ccol2}.

@item 2 numbers
For example @option{--aperture=3,4e-10}.
The aperture will be an ellipse (if the two numbers are different) with the respective value along each dimension.
The numbers are in units of the first and second axis.
In the example above, the semi-axis value along the first axis will be 3 (in units of the first coordinate) and along the second axis will be @mymath{4\times10^{-10}} (in units of the second coordinate).
Such values can happen if you are comparing catalogs of spectra, for example.
If more than one object exists in the aperture, the nearest will be found along the major axis as described in @ref{Defining an ellipse and ellipsoid}.

@item 3 numbers
For example @option{--aperture=2,0.6,30}.
The aperture will be an ellipse (if the second value is not 1).
The first number is the semi-major axis, the second is the axis ratio and the third is the position angle (in degrees).
If multiple matches are found within the ellipse, the distance (to find the nearest) is calculated along the major axis in the elliptical space, see @ref{Defining an ellipse and ellipsoid}.
@end table

@item 3D match
The aperture (matching volume) can be a sphere, an ellipsoid aligned with the three axes, or a general ellipsoid rotated in any direction.
To simplify the usage, the shape can be determined based on the number of values given to this option.

@table @asis
@item 1 number
For example @option{--aperture=3}.
The matching volume will be a sphere of the given radius.
The value is in the same units as the input coordinates.

@item 3 numbers
For example @option{--aperture=4,5,6e-10}.
The aperture will be an ellipsoid aligned with the three axes, with the respective extent along each dimension.
The numbers must be in the same units as each axis.
This is very similar to the two number case of 2D inputs.
See there for more.

@item 6 numbers
For example @option{--aperture=4,0.5,0.6,10,20,30}.
The numbers represent the full general ellipsoid definition (in any orientation).
For the definition of a general ellipsoid, see @ref{Defining an ellipse and ellipsoid}.
The first number is the semi-major axis.
The second and third are the two axis ratios.
The last three are the three Euler angles in units of degrees in the ZXZ order as fully described in @ref{Defining an ellipse and ellipsoid}.
@end table

@end table
@end table













@node Modeling and fittings, High-level calculations, Data analysis, Top
@chapter Modeling and fitting

@cindex Fitting
@cindex Modeling
In order to fully understand observations after the initial analysis of the image, it is very important to compare them with existing models: this comparison furthers our understanding of both the models and the data.
The tools in this chapter create model galaxies and provide 2D fits to help interpret the detections.

@menu
* MakeProfiles::                Making mock galaxies and stars.
* MakeNoise::                   Make (add) noise to an image.
@end menu




@node MakeProfiles, MakeNoise, Modeling and fittings, Modeling and fittings
@section MakeProfiles

@cindex Checking detection algorithms
@pindex @r{MakeProfiles (}astmkprof@r{)}
MakeProfiles will create mock astronomical profiles from a catalog, either individually or together in one output image.
In data analysis, making a mock image can act like a calibration tool, through which you can test how successfully your detection technique is able to detect a known set of objects.
There are commonly two aspects to detection: detecting the fainter parts of bright objects (which in the case of galaxies fade into the noise very slowly) and completely detecting an overall faint object.
Making mock galaxies is the most accurate (and idealistic) way these two aspects of a detection algorithm can be tested.
Mock profiles are also necessary when fitting known functional profiles to observations.

MakeProfiles was initially built for extragalactic studies, so currently the only astronomical objects it can produce are stars and galaxies.
We welcome the simulation of any other astronomical object.
The general outline of the steps that MakeProfiles takes are the following:

@enumerate

@item
Build the full profile out to its truncation radius in a possibly over-sampled array.

@item
Multiply all the elements by a fixed constant so its total magnitude equals the desired total magnitude.

@item
If @option{--individual} is called, save the array for each profile to a FITS file.

@item
If @option{--nomerged} is not called, add the overlapping pixels of all the created profiles to the output image and finish.

@end enumerate

Using input values, MakeProfiles adds the World Coordinate System (WCS) headers of the FITS standard to all its outputs (except PSF images!).
For a simple test on a set of mock galaxies in one image, there is no need for the third step or the WCS information.

@cindex Transform image
@cindex Lensing simulations
@cindex Image transformations
However, in complicated simulations like those of weak lensing, where each galaxy undergoes various types of individual transformations based on its position, those transformations can be applied to the different individual images with other programs.
After all the transformations are applied, using the WCS information in each individual profile image, they can be merged into one output image for convolution and adding noise.

@menu
* Modeling basics::             Astronomical modeling basics.
* If convolving afterwards::    Considerations for convolving later.
* Brightness flux magnitude::   About these measures of energy.
* Profile magnitude::           Definition of total profile magnitude.
* Invoking astmkprof::          Inputs and Options for MakeProfiles.
@end menu



@node Modeling basics, If convolving afterwards, MakeProfiles, MakeProfiles
@subsection Modeling basics

In the subsections below, first a review of some very basic information and concepts behind modeling a real astronomical image is given.
You can skip this subsection if you are already sufficiently familiar with these concepts.

@menu
* Defining an ellipse and ellipsoid::  Definition of these important shapes.
* PSF::                         Radial profiles for the PSF.
* Stars::                       Making mock star profiles.
* Galaxies::                    Radial profiles for galaxies.
* Sampling from a function::    Sample a function on a pixelated canvas.
* Oversampling::                Oversampling the model.
@end menu

@node Defining an ellipse and ellipsoid, PSF, Modeling basics, Modeling basics
@subsubsection Defining an ellipse and ellipsoid

@cindex Ellipse
@cindex Axis ratio
@cindex Position angle
The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on an ellipse.
Therefore, in this section we'll start defining an ellipse on a pixelated 2D surface.
Labeling the major axis of an ellipse @mymath{a}, and its minor axis with @mymath{b}, the @emph{axis ratio} is defined as: @mymath{q\equiv b/a}.
The major axis of an ellipse can be aligned in any direction, so the angle of the major axis with respect to the horizontal axis of the image is defined to be the @emph{position angle} of the ellipse; in this book, we show it with @mymath{\theta}.

@cindex Radial profile on ellipse
Our aim is to put a radial profile of any functional form @mymath{f(r)} over an ellipse.
Hence we need to associate a radius/distance to every point in space.
Let's define the radial distance @mymath{r_{el}} as the distance on the major axis to the center of an ellipse which is located at @mymath{i_c} and @mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}).
We want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the image coordinate system) from the center of the ellipse with axis ratio @mymath{q} and position angle @mymath{\theta}.
First the coordinate system is rotated@footnote{Do not confuse the signs of @mymath{\sin} with the rotation matrix defined in @ref{Warping basics}.
In that equation, the point is rotated, here the coordinates are rotated and the point is fixed.} by @mymath{\theta} to get the new rotated coordinates of that point @mymath{(i_r,j_r)}:

@dispmath{i_r(i,j)=+(i_c-i)\cos\theta+(j_c-j)\sin\theta}
@dispmath{j_r(i,j)=-(i_c-i)\sin\theta+(j_c-j)\cos\theta}

@cindex Elliptical distance
@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1} and that we defined @mymath{r_{el}\equiv{a}}.
Hence, multiplying all elements of the ellipse definition with @mymath{r_{el}^2} we get the elliptical distance of this point: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}.
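For example, with @mymath{q=0.5} and @mymath{\theta=0}, a point offset from the center by 3 pixels along the major (first) axis and 4 pixels along the minor (second) axis has @mymath{r_{el}=\sqrt{3^2+(4/0.5)^2}=\sqrt{73}\approx8.54} pixels.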
To place the radial profiles explained below over an ellipse, @mymath{f(r_{el})} is calculated based on the functional radial profile desired.

@cindex Ellipsoid
@cindex Euler angles
An ellipse in 3D, or an @url{https://en.wikipedia.org/wiki/Ellipsoid, ellipsoid}, can be defined following similar principles as before.
Labeling the major (largest) axis length as @mymath{a}, the second and third (in a right-handed coordinate system) axis lengths can be labeled as @mymath{b} and @mymath{c}.
Hence we have two axis ratios: @mymath{q_1\equiv{b/a}} and @mymath{q_2\equiv{c/a}}.
The orientation of the ellipsoid can be defined from the orientation of its major axis.
There are many ways to define a 3D orientation, and the order of rotations matters.
So to be clear, here we use the ZXZ (or @mymath{Z_1X_2Z_3}) proper @url{https://en.wikipedia.org/wiki/Euler_angles, Euler angles} to define the 3D orientation.
In short, when a point is rotated in this order, we first rotate it around the Z axis (third axis) by @mymath{\alpha}, then about the (rotated) X axis by @mymath{\beta} and finally about the (rotated) Z axis by @mymath{\gamma}.

Following the discussion in @ref{Merging multiple warpings}, we can define the full rotation with the following matrix multiplication.
However, here we are rotating the coordinates, not the point.
Therefore, both the rotation angles and rotation order are reversed.
We are also not using homogeneous coordinates (see @ref{Warping basics}) since we aren't concerned with translation in this context:

@dispmath{\left[\matrix{i_r\cr j_r\cr k_r}\right] =
          \left[\matrix{\cos\gamma&\sin\gamma&0\cr
                        -\sin\gamma&\cos\gamma&0\cr
                        0&0&1}\right]
          \left[\matrix{1&0&0\cr
                        0&\cos\beta&\sin\beta\cr
                        0&-\sin\beta&\cos\beta}\right]
          \left[\matrix{\cos\alpha&\sin\alpha&0\cr
                        -\sin\alpha&\cos\alpha&0\cr
                        0&0&1}\right]
          \left[\matrix{i_c-i\cr j_c-j\cr k_c-k}\right]}

@noindent
Recall that an ellipsoid can be characterized with @mymath{(i_r/a)^2+(j_r/b)^2+(k_r/c)^2=1}, so similar to before (@mymath{r_{el}\equiv{a}}), we can find the ellipsoidal radius at pixel @mymath{(i,j,k)} as: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q_1)^2+(k_r/q_2)^2}}.

@cindex Breadth first search
@cindex Inside-out construction
@cindex Making profiles pixel by pixel
@cindex Pixel by pixel making of profiles
MakeProfiles builds the profile starting from the nearest element (pixel in an image) in the dataset to the profile center.
The profile value is calculated for that central pixel using Monte Carlo integration, see @ref{Sampling from a function}.
The next pixel is the next nearest neighbor to the central pixel as defined by @mymath{r_{el}}.
This process goes on until the profile is fully built up to the truncation radius.
This is done fairly efficiently using a breadth first parsing strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}} which is implemented through an ordered linked list.

Using this approach, we build the profile by expanding the circumference.
No extra pixels have to be checked (the calculation of @mymath{r_{el}} above is not cheap in CPU terms).
Another consequence of this strategy is that extending MakeProfiles to three dimensions becomes very simple: only the neighbors of each pixel have to be changed.
Everything else after that (when the pixel index and its radial profile have entered the linked list) is the same, no matter the number of dimensions we are dealing with.





@node PSF, Stars, Defining an ellipse and ellipsoid, Modeling basics
@subsubsection Point spread function

@cindex PSF
@cindex Point source
@cindex Diffraction limited
@cindex Point spread function
@cindex Spread of a point source
Assume we have a `point' source, or a source that is far smaller than the maximum resolution (a pixel).
When we take an image of it, it will `spread' over an area.
To quantify that spread, we can define a `function'.
This is how the point spread function or the PSF of an image is defined.
This `spread' can have various causes, for example the atmosphere in ground-based astronomy.
In practice we can never surpass the `spread' due to the diffraction of the lens aperture.
Various other effects can also be quantified through a PSF.
For example, the simple fact that we are sampling in a discrete space, namely the pixels, also produces a very small `spread' in the image.

@cindex Blur image
@cindex Convolution
@cindex Image blurring
@cindex PSF image size
Convolution is the mathematical process by which we can apply a `spread' to an image, or in other words blur the image, see @ref{Convolution process}.
The brightness of an object should remain unchanged after convolution, see @ref{Brightness flux magnitude}.
Therefore, it is important that the sum of all the pixels of the PSF be unity.
The PSF image also has to have an odd number of pixels on its sides so one pixel can be defined as the center.
In MakeProfiles, the PSF can be set by the two methods explained below.

@table @asis

@item Parametric functions
@cindex FWHM
@cindex PSF width
@cindex Parametric PSFs
@cindex Full Width at Half Maximum
A known mathematical function is used to make the PSF.
In this case, only the parameters to define the functions are necessary and MakeProfiles will make a PSF based on the given parameters for each function.
In both cases, the center of the profile has to be exactly in the middle of the central pixel of the PSF (which is automatically done by MakeProfiles).
When talking about the PSF, usually, the full width at half maximum or FWHM is used as a scale of the width of the PSF.

@table @cite
@item Gaussian
@cindex Gaussian distribution
In older papers, and to a lesser extent even today, some researchers use the 2D Gaussian function to approximate the PSF of ground-based images.
In its most general form, a Gaussian function can be written as:

@dispmath{f(r)=a \exp \left( -(r-\mu)^2 \over 2\sigma^2 \right)+d}

Since the center of the profile is pre-defined, @mymath{\mu} and @mymath{d} are constrained.
@mymath{a} can also be found because the function has to be normalized.
So the only important parameter for MakeProfiles is the @mymath{\sigma}.
In the Gaussian function we have this relation between the FWHM and @mymath{\sigma}:

@cindex Gaussian FWHM
@dispmath{\rm{FWHM}_g=2\sqrt{2\ln{2}}\sigma \approx 2.35482\sigma}
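
For example, to produce a Gaussian PSF with a FWHM of 3 pixels, the necessary @mymath{\sigma} is @mymath{3/2.35482\approx1.27} pixels.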

@item Moffat
@cindex Moffat function
The Gaussian profile is much sharper than the images taken from stars on photographic plates or CCDs.
Therefore in 1969, Moffat proposed this functional form for the image of stars:

@dispmath{f(r)=a \left[ 1+\left( r\over \alpha \right)^2 \right]^{-\beta}}

@cindex Moffat beta
Again, @mymath{a} is constrained by the normalization, therefore two parameters define the shape of the Moffat function: @mymath{\alpha} and @mymath{\beta}.
The radial parameter is @mymath{\alpha} which is related to the FWHM by

@cindex Moffat FWHM
@dispmath{\rm{FWHM}_m=2\alpha\sqrt{2^{1/\beta}-1}}
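
For example, a Moffat PSF with a FWHM of 3 pixels and @mymath{\beta=2.8} has @mymath{\alpha=3/(2\sqrt{2^{1/2.8}-1})\approx2.83} pixels.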

@cindex Compare Moffat and Gaussian
@cindex PSF, Moffat compared Gaussian
@noindent
Comparing the PSF predicted from atmospheric turbulence theory with the Moffat function, Trujillo et al.@footnote{
Trujillo, I., J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The effects of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328, pp. 977--985.}
claim that @mymath{\beta} should be 4.765.
They also show how the Moffat PSF contains the Gaussian PSF as a limiting case when @mymath{\beta\to\infty}.

@end table

@item An input FITS image
An input image file can also be specified to be used as a PSF.
If the sum of its pixels is not equal to 1, the pixels will be multiplied by a constant so the sum becomes 1.
@end table


While the Gaussian is only dependent on the FWHM, the Moffat function is also dependent on @mymath{\beta}.
Comparing these two functions with a fixed FWHM gives the following results:

@itemize
@item
Within the FWHM, the functions don't have significant differences.
@item
For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes sharper.
@item
The Gaussian function is much sharper than the Moffat functions, even when @mymath{\beta} is large.
@end itemize




@node Stars, Galaxies, PSF, Modeling basics
@subsubsection Stars

@cindex Modeling stars
@cindex Stars, modeling
In MakeProfiles, stars are generally considered to be point sources.
This is usually the case for extragalactic studies, where nearby stars are also in the field.
Since a star is only a point source, we assume that it only fills one pixel prior to convolution.
In fact, exactly for this reason, in astronomical images the light profiles of stars are one of the best methods to understand the shape of the PSF, and a very large fraction of scientific research is performed by assuming the shapes of stars to be the PSF of the image.





@node Galaxies, Sampling from a function, Stars, Modeling basics
@subsubsection Galaxies

@cindex Galaxy profiles
@cindex S@'ersic profile
@cindex Profiles, galaxies
@cindex Generalized de Vaucouleur profile
Today, most practitioners agree that the flux of galaxies can be modeled with one or a few generalized de Vaucouleurs (or S@'ersic) profiles.

@dispmath{I(r) = I_e \exp \left ( -b_n \left[ \left( r \over r_e \right)^{1/n} -1 \right] \right )}

@cindex Brightness
@cindex S@'ersic, J. L.
@cindex S@'ersic index
@cindex Effective radius
@cindex Radius, effective
@cindex de Vaucouleur profile
@cindex G@'erard de Vaucouleurs
G@'erard de Vaucouleurs (1918-1995) was the first to show, in 1948, that this function resembles galaxy light profiles, with the only difference that he held @mymath{n} fixed to a value of 4.
Twenty years later in 1968, J. L. S@'ersic showed that @mymath{n} can have a variety of values and does not necessarily need to be 4.
This profile depends on the effective radius (@mymath{r_e}) which is defined as the radius which contains half of the profile brightness (see @ref{Profile magnitude}).
@mymath{I_e} is the flux at the effective radius.
The S@'ersic index @mymath{n} is used to define the concentration of the profile within @mymath{r_e} and @mymath{b_n} is a constant dependent on @mymath{n}.
MacArthur et al.@footnote{MacArthur, L. A., S. Courteau, and J. A. Holtzman (2003). ``Structure of Disk-dominated Galaxies. I. Bulge/Disk Parameters, Simulations, and Secular Evolution''. In: ApJ 582, pp. 689--722.} show that for @mymath{n>0.35}, @mymath{b_n} can be accurately approximated using this equation:

@dispmath{b_n=2n - {1\over 3} + {4\over 405n} + {46\over 25515n^2} + {131\over 1148175n^3}-{2194697\over 30690717750n^4}}
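
For example, for the de Vaucouleurs profile (@mymath{n=4}), this approximation gives @mymath{b_4\approx7.669}.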





@node Sampling from a function, Oversampling, Galaxies, Modeling basics
@subsubsection Sampling from a function

@cindex Sampling
A pixel is the ultimate level of accuracy in gathering data: we can't get any more accurate in one image.
This is known as sampling in signal processing.
However, the mathematical profiles which describe our models have infinite accuracy.
Over a large fraction of the area of astrophysically interesting profiles (for example galaxies or PSFs), the variation of the profile over the area of one pixel is not too significant.
In such cases, the elliptical radius (@mymath{r_{el}}, see @ref{Defining an ellipse and ellipsoid}) of the center of the pixel can be assigned as the final value of the pixel.

@cindex Integration over pixel
@cindex Gradient over pixel area
@cindex Function gradient over pixel area
As you approach their center, some galaxies become very sharp (their value significantly changes over one pixel's area).
This sharpness increases with smaller effective radius and larger S@'ersic values, rendering the central value extremely inaccurate.
The first method that comes to mind for solving this problem is integration.
The functional form of the profile can be integrated over the pixel area in a 2D integration process.
However, numerical integration techniques also have their limitations; when such sharp profiles are needed, they can become extremely inaccurate.

@cindex Monte carlo integration
The most accurate method of sampling a continuous profile on a discrete space is by choosing a large number of random points within the boundaries of the pixel and taking their average value (or Monte Carlo integration).
This is also, generally speaking, what happens in practice with the photons on the pixel.
The number of random points can be set with @option{--numrandom}.

Unfortunately, repeating this Monte Carlo process would be extremely time- and CPU-consuming if it were applied to every pixel.
In order not to lose too much accuracy, in MakeProfiles the profile is built using both methods explained above.
The building of the profile begins from its central pixel and continues (radially) outwards.
Monte Carlo integration is first applied (which yields @mymath{F_r}), then the central pixel value (@mymath{F_c}) is calculated on the same pixel.
If the fractional difference (@mymath{|F_r-F_c|/F_r}) is lower than a given tolerance level (specified with @option{--tolerance}) MakeProfiles will stop using Monte Carlo integration and only use the central pixel value.
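
For example, with the hypothetical command below, 10000 random points are used per Monte Carlo pixel and the switch to the central pixel value only happens when the fractional difference falls below 0.001 (both values are only illustrative):

@example
$ astmkprof --numrandom=10000 --tolerance=0.001 catalog.txt
@end example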

@cindex Inside-out construction
The ordering of the pixels in this inside-out construction is based on @mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see @ref{Defining an ellipse and ellipsoid}.
When the axis ratios are large (near one) this is fine.
But when they are small and the object is highly elliptical, it might seem more reasonable to follow @mymath{r_{el}} not @mymath{r}.
The problem is that the gradient is stronger in pixels with smaller @mymath{r} (and larger @mymath{r_{el}}) than in those with smaller @mymath{r_{el}}.
In other words, the gradient is strongest along the minor axis.
So if the next pixel is chosen based on @mymath{r_{el}}, the tolerance level will be reached sooner and lots of pixels with large fractional differences will be missed.

Monte Carlo integration uses a set of random points.
Thus, every time you run it, by default, you will get a different distribution of points to sample within the pixel.
In the case of large profiles, this will result in a slight difference of the pixels which use Monte Carlo integration each time MakeProfiles is run.
To have a deterministic result, you have to fix the properties of the random number generator that is used to build the random distribution.
This can be done by setting the @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} environment variables and calling MakeProfiles with the @option{--envseed} option.
To learn more about the process of generating random numbers, see @ref{Generating random numbers}.
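
For example, the hypothetical commands below (the generator name and seed are only for illustration) make the sampling fully reproducible:

@example
$ export GSL_RNG_TYPE=ranlxs2
$ export GSL_RNG_SEED=1
$ astmkprof --envseed catalog.txt
@end example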

@cindex Seed, Random number generator
@cindex Random number generator, Seed
The seed values are fixed for every profile: with @option{--envseed}, all the profiles have the same seed and without it, each will get a different seed using the system clock (which is accurate to within one microsecond).
The same seed will be used to generate a random number for all the sub-pixel positions of all the profiles.
So in the former, the sub-pixel points checked for all the pixels undergoing Monte Carlo integration in all profiles will be identical.
In other words, the sub-pixel points in the first (closest to the center) pixel of all the profiles will be identical with each other.
All the second pixels studied for all the profiles will also receive an identical (different from the first pixel) set of sub-pixel points and so on.
As long as the number of random points used is large enough or the profiles are not identical, this should not cause any systematic bias.


@node Oversampling,  , Sampling from a function, Modeling basics
@subsubsection Oversampling

@cindex Oversampling
The steps explained in @ref{Sampling from a function} do give an accurate representation of a profile prior to convolution.
However, in an actual observation, the image is first convolved with or blurred by the atmospheric and instrument PSF in a continuous space and then it is sampled on the discrete pixels of the camera.

@cindex PSF over-sample
In order to more accurately simulate this process, the unconvolved image and the PSF are created on a finer pixel grid.
In other words, the output image is a certain odd-integer multiple of the desired size; we can call this `oversampling'.
The user can specify this multiple as a command-line option.
The reason this has to be an odd number is that the PSF has to be centered on the center of its image.
An image with an even number of pixels on each side does not have a central pixel.

The image can then be convolved with the PSF (which should also be oversampled on the same scale).
Finally, the image can be sub-sampled to get to the initially desired pixel size of the output image.
After this, mock noise can be added as explained in the next section.
This is because unlike the PSF, the noise occurs in each output pixel, not on a continuous space like all the prior steps.
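
As a minimal sketch of this process (the file names and the oversampling factor here are hypothetical):

@example
## Build the merged image on a 3x finer grid (the output will be
## 600x600 pixels for a desired 200x200 image):
$ astmkprof --oversample=3 --mergedsize=200,200 catalog.txt

## After convolving with an oversampled PSF, sub-sample back to the
## desired pixel size with Warp:
$ astwarp --scale=1/3 convolved.fits
@end example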




@node If convolving afterwards, Brightness flux magnitude, Modeling basics, MakeProfiles
@subsection If convolving afterwards

In case you want to convolve the image later with a given point spread function, make sure to use a larger image size.
After convolution, the profiles become larger and a profile that is normally completely outside of the image might fall within it.

On one axis, if you want your final (convolved) image to be @mymath{m} pixels and your PSF is @mymath{2n+1} pixels wide, then when calling MakeProfiles, set the axis size to @mymath{m+2n}, not @mymath{m}.
You also have to shift all the pixel positions of the profile centers on that axis by @mymath{n} pixels in the positive direction.

After convolution, you can crop the outer @mymath{n} pixels with the section crop box specification of Crop: @option{--section=n:*-n,n:*-n} assuming your PSF is a square, see @ref{Crop section syntax}.
This will also remove all discrete Fourier transform artifacts (blurred sides) from the final image.
To facilitate this shift, MakeProfiles has the options @option{--xshift}, @option{--yshift} and @option{--prepforconv}, see @ref{Invoking astmkprof}.
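
As a hypothetical example, assume the final (convolved) image should be 1000x1000 pixels and the PSF is 21 pixels wide (so @mymath{n=10}):

@example
## Build the profiles on a larger canvas, with shifted centers:
$ astmkprof --mergedsize=1020,1020 --xshift=10 --yshift=10 cat.txt

## After convolution, crop off the extra 10-pixel border:
$ astcrop --mode=img --section=10:*-10,10:*-10 convolved.fits
@end example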




@node Brightness flux magnitude, Profile magnitude, If convolving afterwards, MakeProfiles
@subsection Brightness, Flux, Magnitude and Surface brightness

@cindex ADU
@cindex Gain
@cindex Counts
Astronomical data pixels are usually in units of counts@footnote{Counts are also known as analog to digital units (ADU).} or electrons, or either one divided by seconds.
To convert from the counts to electrons, you will need to know the instrument gain.
In any case, they can be directly converted to energy or energy/time using the basic hardware (telescope, camera and filter) information.
We will continue the discussion assuming the pixels are in units of energy/time.

@cindex Flux
@cindex Luminosity
@cindex Brightness
The @emph{brightness} of an object is defined as its total detected energy per time.
In the case of an imaged source, this is simply the sum of the pixels that are associated with that detection by our detection tool (for example @ref{NoiseChisel}@footnote{If further processing is done, for example the Kron or Petrosian radii are calculated, then the detected area is not sufficient and the total area that was within the respective radius must be used.}).
The @emph{flux} of an object is defined in units of energy/time/collecting-area.
For an astronomical target, the flux is therefore defined as its brightness divided by the area used to collect the light from the source, in other words the telescope aperture (for example in units of @mymath{cm^2}).
Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.

Therefore, while flux and luminosity are intrinsic properties of the object, brightness depends on our detecting tools (hardware and software).
In low-level observational astronomy data analysis, we are usually more concerned with measuring the brightness, because it is the thing we directly measure from the image pixels and create in catalogs.
On the other hand, luminosity is used in higher-level analysis (after image contents are measured as catalogs to deduce physical interpretations).
It is just important to avoid possible confusion between luminosity and brightness, because both have the same units of energy per time.

@cindex Magnitudes from flux
@cindex Flux to magnitude conversion
@cindex Astronomical Magnitude system
Images of astronomical objects span a very large range of brightness: the Sun (as the brightest object) is roughly @mymath{2.5^{60}=10^{24}} times brighter than the faintest galaxies we can currently detect in the deepest images.
Therefore discussing brightness directly will involve a large range of values, which is inconvenient.
So astronomers have chosen to use a logarithmic scale to talk about the brightness of astronomical objects.

@cindex Hipparchus of Nicaea
But the logarithm is only usable with a dimensionless value that is always positive.
Fortunately brightness is always positive (at least in theory@footnote{In practice, for very faint objects, if the background brightness is over-subtracted, we may end up with a negative brightness in a real object.}).
To remove the dimensions, we divide the brightness of the object (@mymath{B}) by a reference brightness (@mymath{B_r}).
We then define a logarithmic scale as @mymath{magnitude} through the relation below.
The @mymath{-2.5} factor in the definition of magnitudes is a legacy of our ancient colleagues, in particular Hipparchus of Nicaea (190-120 BC).

@dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}

@cindex Zero point magnitude
@cindex Magnitude zero point
@noindent
@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the pre-defined magnitude of the reference brightness.
One particularly easy condition is when the reference brightness is unity (@mymath{B_r=1}).
The magnitude of this reference brightness will thus summarize all the hardware-specific parameters discussed above (like the conversion of pixel values to physical units) into one number.
That reference magnitude is commonly known as the @emph{zero point} magnitude (because when @mymath{B=B_r=1}, the right side of the magnitude definition above will be zero).
Using the zero point magnitude (@mymath{Z}), we can write the magnitude relation above in a simpler format:

@dispmath{m = -2.5\log_{10}(B) + Z}
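
For example, with a zero point of @mymath{Z=22.5} magnitudes (the value here is only illustrative), an object with a measured brightness of 100 (in the units that define the zero point) will have a magnitude of @mymath{-2.5\log_{10}(100)+22.5=17.5}.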

@cindex Steradian
@cindex Angular coverage
@cindex Celestial sphere
@cindex Surface brightness
@cindex SI (International System of Units)
Another important concept is the distribution of an object's brightness over its area.
For this, we define the @emph{surface brightness} to be the magnitude of an object's brightness divided by its solid angle over the celestial sphere (or coverage in the sky, commonly in units of arcsec@mymath{^2}).
The solid angle is expressed in units of arcsec@mymath{^2} because astronomical targets are usually much smaller than one steradian.
Recall that the steradian is the dimension-less SI unit of a solid angle and 1 steradian covers @mymath{1/(4\pi)} (almost @mymath{8\%}) of the full celestial sphere.

Surface brightness is therefore most commonly expressed in units of mag/arcsec@mymath{^2}.
For example when the brightness is measured over an area of @mymath{A} arcsec@mymath{^2}, then the surface brightness becomes:

@dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}

@noindent
In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units of arcsec@mymath{^2}) through this equation:

@dispmath{S = m + 2.5\log_{10}(A)}
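
For example, an object with a total magnitude of @mymath{m=20} that is spread over an area of @mymath{A=4} arcsec@mymath{^2} has a surface brightness of @mymath{S=20+2.5\log_{10}(4)\approx21.5} mag/arcsec@mymath{^2}.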

A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and divide the object's magnitude by its area.
But this is wrong because magnitude is a logarithmic scale while area is linear.
It is the brightness that should be divided by the solid angle because both have linear scales.
The magnitude of that ratio is then defined to be the surface brightness.

@node Profile magnitude, Invoking astmkprof, Brightness flux magnitude, MakeProfiles
@subsection Profile magnitude

@cindex Brightness
@cindex Truncation radius
@cindex Sum for total flux
To find the profile brightness or its magnitude (see @ref{Brightness flux magnitude}), it is customary to use the 2D integration of the flux to infinity.
However, in MakeProfiles we do not follow this idealistic approach and apply a more realistic method to find the total brightness or magnitude: the sum of all the pixels belonging to a profile within its predefined truncation radius.
Note that if the truncation radius is not large enough, this can be significantly different from the total integrated light to infinity.

@cindex Integration to infinity
An integration to infinity is not a realistic condition, because no galaxy extends indefinitely (this is important for high S@'ersic index profiles).
Pixelation can also cause a significant difference between the actual total pixel sum of the profile and that of an integration to infinity, especially in small and high S@'ersic index profiles.
To be safe, you can specify a large enough truncation radius for such compact high S@'ersic index profiles.

If oversampling is used, the brightness is calculated using the over-sampled image (see @ref{Oversampling}), which is much more accurate.
The profile is first built in an array completely bounding it with a normalization constant of unity (see @ref{Galaxies}).
Taking @mymath{B} to be the desired brightness and @mymath{S} to be the sum of the pixels in the created profile, every pixel is then multiplied by @mymath{B/S} so the sum is exactly @mymath{B}.

If the @option{--individual} option is called, this same array is written to a FITS file.
If not, only the overlapping pixels of this array and the output image are kept and added to the output array.






@node Invoking astmkprof,  , Profile magnitude, MakeProfiles
@subsection Invoking MakeProfiles

MakeProfiles will make any number of profiles specified in a catalog either individually or in one image.
The executable name is @file{astmkprof} with the following general template

@example
$ astmkprof [OPTION ...] [Catalog]
@end example

@noindent
One line examples:

@example
## Make an image with profiles in catalog.txt (with default size):
$ astmkprof catalog.txt

## Make the profiles in catalog.txt over image.fits:
$ astmkprof --background=image.fits catalog.txt

## Make a Moffat PSF with FWHM 3pix, beta=2.8, truncation=5
$ astmkprof --kernel=moffat,3,2.8,5 --oversample=1

## Make profiles in catalog, using RA and Dec in the given column:
$ astmkprof --ccol=RA_CENTER --ccol=DEC_CENTER --mode=wcs catalog.txt

## Make a 1500x1500 merged image (a 500x500 image, oversampled 3
## times), along with an individual image for each profile in the
## catalog:
$ astmkprof --individual --oversample 3 --mergedsize=500,500 cat.txt
@end example

@noindent
The parameters of the mock profiles can either be given through a catalog (which stores the parameters of many mock profiles, see @ref{MakeProfiles catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output dataset}).
The catalog can be in the FITS ASCII, FITS binary format, or plain text formats (see @ref{Tables}).
A plain text catalog can also be provided using the Standard input (see @ref{Standard input}).
The columns related to each parameter can be determined both by number, or by match/search criteria using the column names, units, or comments, with the options ending in @option{col}, see below.

Without any file given to the @option{--background} option, MakeProfiles will make a zero-valued image and build the profiles on that (its size and main WCS parameters can also be defined through the options described in @ref{MakeProfiles output dataset}).
Besides the main/merged image containing all the profiles in the catalog, it is also possible to build individual images for each profile (only enclosing one full profile to its truncation radius) with the @option{--individual} option.

If an image is given to the @option{--background} option, the pixels of that image are used as the background values: the flux value of each profile pixel will be added to the corresponding pixel of that background image.
You can disable this with the @option{--clearcanvas} option (which will initialize the background to zero-valued pixels and build the profiles over that).
With the @option{--background} option, the values to all options relating to the ``canvas'' (output size and WCS) will be ignored if specified, for example @option{--oversample}, @option{--mergedsize}, and @option{--prepforconv}.

The sections below discuss the options specific to MakeProfiles based on context: the input catalog settings (which can have many rows for different profiles) are discussed in @ref{MakeProfiles catalog}.
In @ref{MakeProfiles profile settings}, we discuss how you can set general profile settings (that are the same for all the profiles in the catalog).
Finally @ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} discuss the outputs of MakeProfiles and how you can configure them.
Besides these, MakeProfiles also supports all the common Gnuastro program options that are discussed in @ref{Common options}, so please flip through them as well for a more comfortable usage.

When building 3D profiles, there are more degrees of freedom.
Hence, more columns are necessary and all the values related to dimensions (for example size of dataset in each dimension and the WCS properties) must also have 3 values.
To allow having an independent set of default values for creating 3D profiles, MakeProfiles also installs a @file{astmkprof-3d.conf} configuration file (see @ref{Configuration files}).
You can use this for default 3D profile values.
For example, if you installed Gnuastro with the prefix @file{/usr/local} (the default location, see @ref{Installation directory}), you can benefit from this configuration file by running MakeProfiles like the example below.
As with all configuration files, if you want to customize a given option, call it before the configuration file.

@example
$ astmkprof --config=/usr/local/etc/astmkprof-3d.conf catalog.txt
@end example

@cindex Shell alias
@cindex Alias, shell
@cindex Shell startup
@cindex Startup, shell
To further simplify the process, you can define a shell alias in any startup file (for example @file{~/.bashrc}, see @ref{Installation directory}).
Assuming that you installed Gnuastro in @file{/usr/local}, you can add the line below to the startup file.

@example
alias astmkprof-3d="astmkprof --config=/usr/local/etc/astmkprof-3d.conf"
@end example

@noindent
Using this alias, you can call MakeProfiles with the name @command{astmkprof-3d} (instead of @command{astmkprof}).
It will automatically load the 3D specific configuration file first, and then parse any other arguments, options or configuration files.
You can change the default values in this 3D configuration file by calling them on the command-line as you do with @command{astmkprof}@footnote{Recall that for single-invocation options, the last command-line invocation takes precedence over all previous invocations (including those in the 3D configuration file).
See the description of @option{--config} in @ref{Operating mode options}.}.

Please see @ref{Sufi simulates a detection} for a very complete tutorial explaining how one could use MakeProfiles in conjunction with Gnuastro's other programs to make a complete simulated image of a mock galaxy.

@menu
* MakeProfiles catalog::        Required catalog properties.
* MakeProfiles profile settings::  Configuration parameters for all profiles.
* MakeProfiles output dataset::  The canvas/dataset to build profiles over.
* MakeProfiles log file::       A description of the optional log file.
@end menu

@node MakeProfiles catalog, MakeProfiles profile settings, Invoking astmkprof, Invoking astmkprof
@subsubsection MakeProfiles catalog
The catalog containing information about each profile can be in the FITS ASCII, FITS binary, or plain text formats (see @ref{Tables}).
The latter can also be provided using standard input (see @ref{Standard input}).
Its columns can be ordered in any desired manner.
You can specify which columns belong to which parameters using the set of options discussed below.
For example through the @option{--rcol} and @option{--tcol} options, you can specify the column that contains the radial parameter for each profile and its truncation respectively.
See @ref{Selecting table columns} for a thorough discussion on the values to these options.

The value for the profile center in the catalog (the @option{--ccol} option) can be a floating point number so the profile center can be on any sub-pixel position.
Note that pixel positions in the FITS standard start from 1 and an integer is the pixel center.
So a 2D image actually starts from the position (0.5, 0.5), which is the bottom-left corner of the first pixel.
When a @option{--background} image with WCS information is provided or you specify the WCS parameters with the respective options, you may also use RA and Dec to identify the center of each profile (see the @option{--mode} option below).

In MakeProfiles, profile centers do not have to be in (overlap with) the final image.
Even if only one pixel of the profile within the truncation radius overlaps with the final image size, the profile is built and included in the final image.
Profiles that are completely out of the image will not be created (unless you explicitly ask for it with the @option{--individual} option).
You can use the output log file (created with @option{--log}) to see which profiles were within the image, see @ref{Common options}.

If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and the profiles are to be built in one image (when @option{--individual} is not used), it is assumed they are the PSF(s) you want to convolve your created image with.
So by default, they will not be built in the output image but as separate files.
The sum of pixels of these separate files will also be set to unity (1) so you are ready to convolve, see @ref{Convolution process}.
As a summary, the position and magnitude of a PSF profile will be ignored.
This behavior can be disabled with the @option{--psfinimg} option.
If you want to create all the profiles separately (with @option{--individual}) and you want the sum of the PSF profile pixels to be unity, you have to set their magnitudes in the catalog to the zero point magnitude and be sure that the central positions of the profiles don't have any fractional part (the PSF center has to be in the center of the pixel).

The list of options directly related to the input catalog columns is shown below.

@table @option

@item --ccol=STR/INT
Center coordinate column for each dimension.
This option must be called two times to define the center coordinates in an image.
For example @option{--ccol=RA} and @option{--ccol=DEC} (along with @option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns named @option{RA} and @option{DEC} for the Right Ascension and Declination of the profile centers.

@item --fcol=INT/STR
The functional form of the profile, with one of the values below depending on the desired profile.
The column can contain either the numeric codes (for example `@code{1}') or string characters (for example `@code{sersic}').
The numeric codes are easier to use in scripts which generate catalogs with hundreds or thousands of profiles.

The string format can be easier when the catalog is to be written/checked by hand/eye before running MakeProfiles.
It is much more readable and provides a level of documentation.
All Gnuastro's recognized table formats (see @ref{Recognized table formats}) accept string type columns.
To have string columns in a plain text table/catalog, see @ref{Gnuastro text table format}.

@itemize
@item
S@'ersic profile with `@code{sersic}' or `@code{1}'.

@item
Moffat profile with `@code{moffat}' or `@code{2}'.

@item
Gaussian profile with `@code{gaussian}' or `@code{3}'.

@item
Point source with `@code{point}' or `@code{4}'.

@item
Flat profile with `@code{flat}' or `@code{5}'.

@item
Circumference profile with `@code{circum}' or `@code{6}'.
A fixed value will be used for all pixels less than or equal to the truncation radius (@mymath{r_t}) and greater than @mymath{r_t-w} (where @mymath{w} is the value given to @option{--circumwidth}).

@item
Radial distance profile with `@code{distance}' or `@code{7}'.
At the lowest level, each pixel only has an elliptical radial distance given the profile's shape and orientation (see @ref{Defining an ellipse and ellipsoid}).
When this profile is chosen, the pixel's elliptical radial distance from the profile center is written as its value.
For this profile, the value in the magnitude column (@option{--mcol}) will be ignored.

You can use this for checks or as a first approximation to define your own higher-level radial function.
In the latter case, just note that the central values are going to be incorrect (see @ref{Sampling from a function}).

@item
Custom profile with `@code{custom}' or `@code{8}'.
The values to use for each radial interval should be in the table given to @option{--customtable}. For more, see @ref{MakeProfiles profile settings}.
@end itemize

@item --rcol=STR/INT
The radius parameter of the profiles.
Effective radius (@mymath{r_e}) if S@'ersic, FWHM if Moffat or Gaussian.

@item --ncol=STR/INT
The S@'ersic index (@mymath{n}) or Moffat @mymath{\beta}.

@item --pcol=STR/INT
The position angle (in degrees) of the profiles relative to the first FITS axis (horizontal when viewed in SAO ds9).
When building a 3D profile, this is the first Euler angle: first rotation of the ellipsoid major axis from the first FITS axis (rotating about the third axis).
See @ref{Defining an ellipse and ellipsoid}.

@item --p2col=STR/INT
Second Euler angle (in degrees) when building a 3D ellipsoid.
This is the second rotation of the ellipsoid major axis (following @option{--pcol}) about the (rotated) X axis.
See @ref{Defining an ellipse and ellipsoid}.
This column is ignored when building a 2D profile.

@item --p3col=STR/INT
Third Euler angle (in degrees) when building a 3D ellipsoid.
This is the third rotation of the ellipsoid major axis (following @option{--pcol} and @option{--p2col}) about the (rotated) Z axis.
See @ref{Defining an ellipse and ellipsoid}.
This column is ignored when building a 2D profile.

@item --qcol=STR/INT
The axis ratio of the profiles (minor axis divided by the major axis in a 2D ellipse).
When building a 3D ellipsoid, this is the ratio of the second dimension's semi-axis length to the major axis (in a right-handed coordinate system).
See @mymath{q_1} in @ref{Defining an ellipse and ellipsoid}.

@item --q2col=STR/INT
The ratio of the third semi-axis length of a 3D ellipsoid (in a right-handed coordinate system) to its major axis.
See @mymath{q_2} in @ref{Defining an ellipse and ellipsoid}.
This column is ignored when building a 2D profile.

@item --mcol=STR/INT
The total pixelated magnitude of the profile within the truncation radius, see @ref{Profile magnitude}.

@item --tcol=STR/INT
The truncation radius of this profile.
By default it is in units of the radial parameter of the profile (the value in the @option{--rcol} of the catalog).
If @option{--tunitinp} is given, this value is interpreted in units of pixels (prior to oversampling) irrespective of the profile.

@end table
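
As a minimal illustration, a plain-text catalog for a single S@'ersic profile could look like the row below (the column order shown here is only an assumption for this example; in practice, use the options above to tell MakeProfiles which of your columns holds which parameter):

@example
## ID  X      Y      Func    Radius  n    PA    q    Mag   Trunc
1      100.0  100.0  sersic  10.0    2.5  45.0  0.8  22.0  5.0
@end example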

@node MakeProfiles profile settings, MakeProfiles output dataset, MakeProfiles catalog, Invoking astmkprof
@subsubsection MakeProfiles profile settings

The profile parameters that differ between each created profile are specified through the columns in the input catalog and described in @ref{MakeProfiles catalog}.
Besides those, there are general settings for some profiles that don't differ between one profile and another: they are a property of the general process.
For example, the number of random points to use in the Monte Carlo integration is fixed for all the profiles.
The options described in this section are for configuring such properties.

@table @option

@item --mode=STR
Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles catalog}) in image or WCS coordinates.
This option thus accepts only two values: @option{img} and @option{wcs}.
It is mandatory when a catalog is being used as input.

@item -r INT
@itemx --numrandom=INT
The number of random points used in the central regions of the profile, see @ref{Sampling from a function}.

@item -e
@itemx --envseed
@cindex Seed, Random number generator
@cindex Random number generator, Seed
Use the value of the @code{GSL_RNG_SEED} environment variable to generate the random Monte Carlo sampling distribution, see @ref{Sampling from a function} and @ref{Generating random numbers}.

@item -t FLT
@itemx --tolerance=FLT
The tolerance to switch from Monte Carlo integration to the central pixel value, see @ref{Sampling from a function}.

@item -p
@itemx --tunitinp
The truncation column of the catalog is in units of pixels.
By default, the truncation column is considered to be in units of the radial parameters of the profile (@option{--rcol}).
Read it as `t-unit-in-p' for `truncation unit in pixels'.

@item -f
@itemx --mforflatpix
When making fixed value profiles (flat and circumference, see `@option{--fcol}'), don't use the value in the column specified by `@option{--mcol}' as the magnitude.
Instead use it as the exact value that all the pixels of these profiles should have.
This option is irrelevant for other types of profiles.
This option is very useful for creating masks or labeled regions in an image.
Any integer or floating point value can be used in this column with this option, including @code{NaN} (or `@code{nan}', or `@code{NAN}', case is irrelevant) and infinities (@code{inf}, @code{-inf}, or @code{+inf}).

For example, with this option, if you set the value in the magnitude column (@option{--mcol}) to @code{NaN}, you can create an elliptical or circular mask over an image (which can be given as the argument), see @ref{Blank pixels}.
Another useful application of this option is to create labeled elliptical or circular apertures in an image.
To do this, set the value in the magnitude column to the label you want for this profile.
This labeled image can then be used in combination with NoiseChisel's output (see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see @ref{MakeCatalog}).

Alternatively, if you want to mark regions of the image (for example with an elliptical circumference) and you don't want to use NaN values (as explained above) for some technical reason, you can get the minimum or maximum value in the image@footnote{
The minimum will give a better result, because the maximum can be too high compared to most pixels in the image, making it harder to display.}
using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude column along with this option for all the profiles.

Please note that when using MakeProfiles on an already existing image, you have to set `@option{--oversample=1}'.
Otherwise all the profiles will be scaled up based on the oversampling scale in your configuration files (see @ref{Configuration files}) unless you have accounted for oversampling in your catalog.
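
As a minimal sketch of the labeled-aperture usage (the file names are hypothetical, and the catalog assumes the default column order of Gnuastro's sample configuration file: ID, positions, function, radial parameter, index, position angle, axis ratio, magnitude, truncation), consider this plain-text catalog:

@example
## cat.txt: two `flat' profiles (code 5 in --fcol); with
## --mforflatpix, the magnitude column (1 and 2) is used
## directly as the pixel value, i.e., the label.
1  100  100  5  10  0  45  0.8  1  1
2  200  250  5  14  0  30  0.6  2  1
@end example

@noindent
It could be built into a labeled image on the same grid as a hypothetical @file{image.fits} with:

@example
$ astmkprof cat.txt --mode=img --oversample=1 --mforflatpix \
            --background=image.fits --clearcanvas
@end example

@noindent
(@option{--clearcanvas} is described in @ref{MakeProfiles output dataset}: it keeps the input's size and WCS, but sets all its pixels to zero.)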

@item --mcolisbrightness
The value given in the ``magnitude column'' (specified by @option{--mcol}, see @ref{MakeProfiles catalog}) must be interpreted as brightness, not magnitude.
The zero point magnitude (value to the @option{--zeropoint} option) is ignored and the given value must have the same units as the input dataset's pixels.

Recall that the total profile magnitude or brightness that is specified in the @option{--mcol} column of the input catalog is not an integration to infinity, but the actual sum of pixels in the profile (until the desired truncation radius).
See @ref{Profile magnitude} for more on this point.

@item --magatpeak
The magnitude column in the catalog (see @ref{MakeProfiles catalog}) will be used to find the brightness only for the peak profile pixel, not the full profile.
Note that this is the flux of the profile's peak pixel in the final output of MakeProfiles.
So beware of the oversampling, see @ref{Oversampling}.

This option can be useful if you want to check a mock profile's total magnitude at various truncation radii.
Without this option, no matter what the truncation radius is, the total magnitude will be the same as that given in the catalog.
But with this option, the total magnitude will become brighter as you increase the truncation radius.

In sharper profiles, sometimes the accuracy of measuring the peak flux is more important than the overall object brightness.
In such cases, with this option, the final profile will be built such that its peak has the given magnitude, not the total profile.

@cartouche
@strong{CAUTION:} If you want to use this option for comparing with observations, please note that MakeProfiles does not do convolution.
Unless you have de-convolved your data, your images are convolved with the instrument and atmospheric PSF, see @ref{PSF}.
Particularly in sharper profiles, the flux in the peak pixel is strongly decreased after convolution.
Also note that in such cases, besides de-convolution, you will have to set @option{--oversample=1} otherwise after resampling your profile with Warp (see @ref{Warp}), the peak flux will be different.
@end cartouche

@item --customtable STR
The filename of the table to use in the custom profiles (see description of @option{--fcol} in @ref{MakeProfiles catalog}).
This can be a plain-text or FITS table (see @ref{Tables}); if it is a FITS table, you can use @option{--customtablehdu} to specify which HDU should be used (described below).

A custom profile can have any user-specified value within each radial interval.
Each interval is defined by its minimum (inclusive) and maximum (exclusive) radius; when a pixel falls within an interval, the value specified for that interval will be used.
If a pixel is not in any of the given intervals, a value of 0 will be used for it.

The table should have 3 columns as shown below.
If the intervals are contiguous (the maximum value of the previous interval is equal to the minimum value of the next) and the intervals all have the same size (difference between minimum and maximum values), the creation of these profiles will be fast.
However, if the intervals are not sorted and contiguous, MakeProfiles will parse the intervals from the top of the table and use the first interval that contains the pixel center.

@table @asis
@item Column 1:
The interval's minimum radius.
@item Column 2:
The interval's maximum radius.
@item Column 3:
The value to be used for pixels within the given interval.
@end table

For example, let's assume you have the radial profile below in a file called @file{radial.txt}.
The first column is the interval's maximum radius and the second column is the value in that interval:

@example
1    100
2    90
3    50
4    10
5    2
6    0.1
7    0.05
@end example

@noindent
You can construct the table to give to @option{--customtable} with either of the commands below: the first uses Gnuastro's @ref{Column arithmetic} (which can also work on FITS tables), and the second uses an AWK command that only works on plain-text tables.

@example
asttable radial.txt -c'arith $1 1 -' -c1,2 -ocustom.fits
awk '@{print $1-1, $1, $2@}' radial.txt > custom.txt
@end example

@noindent
In case the interval widths are different from 1 (for example 0.5), change the commands accordingly: for Gnuastro's Table change @code{$1 1 -} to @code{$1 0.5 -} and for AWK change @code{$1-1} to @code{$1-0.5}.


@item --customtablehdu INT/STR
The HDU/extension in the FITS file given to @option{--customtable}.

@item -X INT,INT
@itemx --shift=INT,INT
Shift all the profiles and enlarge the image along each dimension.
To better understand this option, please see @mymath{n} in @ref{If convolving afterwards}.
This is useful when you want to convolve the image afterwards.
If you are using an external PSF, be sure to oversample it to the same scale used for creating the mock images.
If a background image is specified, any possible value to this option is ignored.

@item -c
@itemx --prepforconv
Shift all the profiles and enlarge the image based on half the width of the first Moffat or Gaussian profile in the catalog, considering any possible oversampling, see @ref{If convolving afterwards}.
@option{--prepforconv} is only checked and possibly activated if @option{--xshift} and @option{--yshift} are both zero (after reading the command-line and configuration files).
If a background image is specified, any possible value to this option is ignored.

@item -z FLT
@itemx --zeropoint=FLT
The zero point magnitude of the input.
For more on the zero point magnitude, see @ref{Brightness flux magnitude}.

@item -w FLT
@itemx --circumwidth=FLT
The width of the circumference if the profile is to be an elliptical circumference or annulus.
See the explanations for this type of profile in @option{--fcol}.

@item -R
@itemx --replace
Do not add the pixels of each profile over the background or other profiles; replace the values instead.

By default, when two profiles overlap, the final pixel value is the sum of all the profiles that overlap on that pixel.
This is the expected situation when dealing with physical object profiles like galaxies or stars/PSF.
However, when MakeProfiles is used to build integer labeled images (for example in @ref{Aperture photometry}), this is not the expected situation: the sum of two labels will be a new label.
With this option, the pixels are not added but the largest (maximum) value over that pixel is used.
Because the maximum operator is independent of the order of values, the output is also thread-safe.

@end table

@node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile settings, Invoking astmkprof
@subsubsection MakeProfiles output dataset
MakeProfiles takes an input catalog and uses the basic properties that are defined there to build a dataset, for example a 2D image containing the profiles in the catalog.
In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the catalog and profile settings were discussed.
The options of this section allow you to configure the output dataset (or the canvas that will host the built profiles).

@table @option

@item -k STR
@itemx --background=STR
A background image FITS file to build the profiles on.
The extension that contains the image should be specified with the @option{--backhdu} option, see below.
When a background image is specified, it will be used to derive all the information about the output image.
Hence, the following options will be ignored: @option{--mergedsize}, @option{--oversample}, @option{--crpix}, @option{--crval} (generally, all other WCS related parameters) and the output's data type (see @option{--type} in @ref{Input output options}).

The image will act like a canvas to build the profiles on: profile pixel values will be summed with the background image pixel values.
With the @option{--replace} option you can disable this behavior and replace the profile pixels with the background pixels.
If you want to use all the image information above, except for the pixel values (you want to have a blank canvas to build the profiles on, based on an input image), you can call @option{--clearcanvas} to set all the input image's pixels to zero before starting to build the profiles over it (this is done in memory after reading the input, so nothing will happen to your input file).

@item -B STR/INT
@itemx --backhdu=STR/INT
The header data unit (HDU) of the file given to @option{--background}.

@item -C
@itemx --clearcanvas
When an input image is specified (with the @option{--background} option), set all its pixels to 0.0 immediately after reading it into memory.
Effectively, this will allow you to use all its properties (described under the @option{--background} option), without having to worry about the pixel values.

@option{--clearcanvas} can come in handy in many situations, for example if you want to create a labeled image (segmentation map) for creating a catalog (see @ref{MakeCatalog}).
In other cases, you might have modeled the objects in an image and want to create them on the same frame, but without the original pixel values.

@item -E STR/INT,FLT[,FLT,[...]]
@itemx --kernel=STR/INT,FLT[,FLT,[...]]
Only build one kernel profile with the parameters given as the values to this option.
The different values must be separated by a comma (@key{,}).
The first value identifies the radial function of the profile, either through a string or through a number (see description of @option{--fcol} in @ref{MakeProfiles catalog}).
Each radial profile needs a different total number of parameters: S@'ersic and Moffat functions need 3 parameters: radial, S@'ersic index or Moffat @mymath{\beta}, and truncation radius.
The Gaussian function needs two parameters: radial and truncation radius.
The point function doesn't need any parameters and flat and circumference profiles just need one parameter (truncation radius).

The PSF or kernel is a unique (and highly constrained) type of profile: the sum of its pixels must be one, its center must be the center of the central pixel (in an image with an odd number of pixels on each side), and commonly it is circular, so its axis ratio and position angle are one and zero respectively.
Kernels are commonly necessary for various data analysis and data manipulation steps (for example, see @ref{Convolve} and @ref{NoiseChisel}).
Because of this it is inconvenient to define a catalog with one row and many zero valued columns (for all the non-necessary parameters).
Hence, with this option, it is possible to create a kernel with MakeProfiles without the need to create a catalog.
Here are some examples:

@table @option
@item --kernel=moffat,3,2.8,5
A Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated at 5 times the FWHM.

@item --kernel=gaussian,2,3
A circular Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
the FWHM.
@end table

This option may also be used to create a 3D kernel.
To do that, two small modifications are necessary: add a @code{-3d} (or @code{-3D}) to the profile name (for example @code{moffat-3d}) and add a number (axis-ratio along the third dimension) to the end of the parameters for all profiles except @code{point}.
The main reason behind providing an axis ratio in the third dimension is that in 3D astronomical datasets, commonly the third dimension doesn't have the same nature (units/sampling) as the first and second.

For example in IFU datacubes, the first and second dimensions are angular positions (like RA and Dec) but the third is in units of Angstroms for wavelength.
Because of this different nature (which also affects the processing), it may be necessary for the kernel to have a different extent in that direction.

If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will be a spheroid.
If it is smaller than @mymath{1.0}, the kernel will be button-shaped: extended less in the third dimension.
However, when it is larger than @mymath{1.0}, the kernel will be bullet-shaped: extended more in the third dimension.
In the latter case, the radial parameter will correspond to the length along the 3rd dimension.
For example, let's have a look at the two examples above but in 3D:

@table @option
@item --kernel=moffat-3d,3,2.8,5,0.5
An ellipsoid Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated at 5 times the FWHM.
The ellipsoid is circular in the first two dimensions, but in the third dimension its extent is half the first two.

@item --kernel=gaussian-3d,2,3,1
A spherical Gaussian kernel with FWHM of 2 pixels and truncated at 3 times
the FWHM.
@end table

Of course, if a specific kernel is needed that doesn't fit the constraints imposed by this option, you can always use a catalog to define any arbitrary kernel.
Just call the @option{--individual} and @option{--nomerged} options to make sure that it is built as a separate file (individually) and no ``merged'' image of the input profiles is created.
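
For example, the first (Moffat) kernel above could be built with a command as simple as the sketch below; with @option{--kernel} no catalog is necessary, and the output will be written to @file{kernel.fits} by default:

@example
$ astmkprof --oversample=1 --kernel=moffat,3,2.8,5
@end example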

@item -x INT,INT
@itemx --mergedsize=INT,INT
The number of pixels along each axis of the output, in FITS order.
This is before over-sampling.
For example if you call MakeProfiles with @option{--mergedsize=100,150 --oversample=5} (assuming no shift for later convolution), then the final image size will be 500 by 750 pixels.
Fractions are acceptable as values for each dimension, however, they must reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but @option{--mergedsize=150/4,300/4} is not.

When viewing a FITS image in DS9, the first FITS dimension is in the horizontal direction and the second is vertical.
As an example, the image created with the example above will have 500 pixels horizontally and 750 pixels vertically.

If a background image is specified, this option is ignored.

@item -s INT
@itemx --oversample=INT
The scale to over-sample the profiles and final image.
If this is not an odd number, it will be incremented by one, see @ref{Oversampling}.
Note that this @option{--oversample} will remain active even if an input image is specified.
If your input catalog is based on the background image, be sure to set @option{--oversample=1}.

@item --psfinimg
Build the possibly existing PSF profiles (Moffat or Gaussian) in the catalog into the final image.
By default they are built separately so you can convolve your images with them, thus their magnitude and positions are ignored.
With this option, they will be built in the final image like every other galaxy profile.
To have a final PSF in your image, make a point profile where you want the PSF and after convolution it will be the PSF.

@item -i
@itemx --individual
@cindex Individual profiles
@cindex Build individual profiles
If this option is called, each profile is created in a separate FITS file within the same directory as the output, with the row number of the profile (starting from zero) in its name.
The file for each row's profile will be in the same directory as the final combined image of all the profiles and will have the final image's name as a suffix.
So for example if the final combined image is named @file{./out/fromcatalog.fits}, then the first profile that will be created with this option will be named @file{./out/0_fromcatalog.fits}.

Since each image only has one full profile out to the truncation radius, the profile is centered, and so only the sub-pixel position of the profile center is important for the outputs of this option.
The output will have an odd number of pixels.
If there is no oversampling, the central pixel will contain the profile center.
If the value to @option{--oversample} is larger than unity, then the profile center is on any of the central @option{--oversample}'d pixels depending on the fractional value of the profile center.

If the fractional value is larger than half, it is on the bottom half of the central region.
This is due to the FITS definition of a real number position: The center of a pixel has fractional value @mymath{0.00} so each pixel contains these fractions: .5 -- .75 -- .00 (pixel center) -- .25 -- .5.

@item -m
@itemx --nomerged
Don't make a merged image.
By default after making the profiles, they are added to a final image with side lengths specified by @option{--mergedsize} if they overlap with it.

@end table


@noindent
The options below can be used to define the world coordinate system (WCS) properties of the MakeProfiles outputs.
The option names are deliberately chosen to be the same as the FITS standard WCS keywords.
See Section 8 of @url{https://doi.org/10.1051/0004-6361/201015362, Pence et al [2010]} for a short introduction to WCS in the FITS standard@footnote{The world coordinate standard in FITS is a very beautiful and powerful concept to link/associate datasets with the outside world (other datasets).
The description in the FITS standard (link above) only touches the tip of the iceberg.
To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326, Greisen and Calabretta [2002]}, @url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen [2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al. [2006]}, and @url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta et al.}}.

If you look into the headers of a FITS image with WCS, for example, you will see all these names, but in uppercase and with numbers to represent the dimensions, for example @code{CRPIX1} and @code{PC2_1}.
You can see the FITS headers with Gnuastro's @ref{Fits} program using a command like this: @command{$ astfits -p image.fits}.

If the values given to any of these options do not correspond to the number of dimensions in the output dataset, then no WCS information will be added.

@table @option

@item --crpix=FLT,FLT
The pixel coordinates of the WCS reference point.
Fractions are acceptable for the values of this option.

@item --crval=FLT,FLT
The WCS coordinates of the reference point.
Fractions are acceptable for the values of this option.

@item --cdelt=FLT,FLT
The resolution (size of one data-unit or pixel in WCS units) of the non-oversampled dataset.
Fractions are acceptable for the values of this option.

@item --pc=FLT,FLT,FLT,FLT
The PC matrix of the WCS rotation, see the FITS standard (link above) to better understand the PC matrix.

@item --cunit=STR,STR
The units of each WCS axis, for example @code{deg}.
Note that these values are part of the FITS standard (link above).
MakeProfiles won't complain if you use non-standard values, but later usage of them might cause trouble.

@item --ctype=STR,STR
The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
Note that these values are part of the FITS standard (link above).
MakeProfiles won't complain if you use non-standard values, but later usage of them might cause trouble.

@end table
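
As a sketch of how these options combine (all values below are hypothetical), a 2D output with a tangent-plane WCS could be requested like this:

@example
$ astmkprof cat.txt --mergedsize=1000,1000 --oversample=1 \
            --crpix=500.5,500.5 --crval=53.16,-27.78      \
            --cdelt=0.0001,0.0001 --cunit=deg,deg         \
            --ctype=RA---TAN,DEC--TAN
@end example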

@node MakeProfiles log file,  , MakeProfiles output dataset, Invoking astmkprof
@subsubsection MakeProfiles log file

Besides the final merged dataset of all the profiles, or the individual datasets (see @ref{MakeProfiles output dataset}), if the @option{--log} option is called, MakeProfiles will also create a log file in the current directory (where you run MakeProfiles).
See @ref{Common options} for a full description of @option{--log} and other options that are shared between all Gnuastro programs.
The values for each column are explained in the first few commented lines of the log file (starting with the @code{#} character).
Here is a more complete description.

@itemize
@item
An ID (row number of profile in input catalog).

@item
The total magnitude of the profile in the output dataset.
When the profile does not completely overlap with the output dataset, this will be different from your input magnitude.

@item
The number of pixels (in the oversampled image) which used Monte Carlo integration and not the central pixel value, see @ref{Sampling from a function}.

@item
The fraction of flux in the Monte Carlo integrated pixels.

@item
If an individual image was created, this column will have a value of @code{1}, otherwise it will have a value of @code{0}.
@end itemize












@node MakeNoise,  , MakeProfiles, Modeling and fittings
@section MakeNoise

@cindex Noise
Real data are always buried in noise, therefore to finalize a simulation of real data (for example to test our observational algorithms) it is essential to add noise to the mock profiles created with MakeProfiles, see @ref{MakeProfiles}.
Below, the general principles and concepts that help in understanding how noise is quantified are discussed.
MakeNoise options and arguments are then discussed in @ref{Invoking astmknoise}.

@menu
* Noise basics::                Noise concepts and definitions.
* Invoking astmknoise::         Options and arguments to MakeNoise.
@end menu



@node Noise basics, Invoking astmknoise, MakeNoise, MakeNoise
@subsection Noise basics

@cindex Noise
@cindex Image noise
Deep astronomical images, like those used in extragalactic studies, seriously suffer from noise in the data.
Generally speaking, the sources of noise in an astronomical image are photon counting noise and instrumental noise, which are discussed in @ref{Photon counting noise} and @ref{Instrumental noise}.
This review finishes with @ref{Generating random numbers} which is a short introduction on how random numbers are generated.
We will see that while software random number generators are not perfect, they allow us to obtain a reproducible series of random numbers through setting the random number generator function and seed value.
Therefore in this section, we'll also discuss how you can set these two parameters in Gnuastro's programs (including MakeNoise).

@menu
* Photon counting noise::       Poisson noise
* Instrumental noise::          Readout, dark current and other sources.
* Final noised pixel value::    How the final noised value is calculated.
* Generating random numbers::   How random numbers are generated.
@end menu

@node Photon counting noise, Instrumental noise, Noise basics, Noise basics
@subsubsection Photon counting noise

@cindex Counting error
@cindex de Moivre, Abraham
@cindex Poisson distribution
@cindex Photon counting noise
@cindex Poisson, Sim@'eon Denis
With the very accurate electronics used in today's detectors, photon counting noise@footnote{In practice, we are actually counting the electrons that are produced by each photon, not the actual photons.} is the most significant source of uncertainty in most datasets.
To understand this noise (error in counting), we need to take a closer look at how a distribution produced by counting can be modeled as a parametric function.

Counting is an inherently discrete operation, which can only produce positive (including zero) integer outputs.
For example we can't count @mymath{3.2} or @mymath{-2} of anything.
We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
The distribution of values that results from counting efforts is formally known as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson distribution}.
It is associated with Sim@'eon Denis Poisson, because he discussed it while working on the number of wrongful convictions in court cases in his 1837 book@footnote{[From Wikipedia] Poisson's result was also derived in a previous study by Abraham de Moivre in 1711.
Therefore some people suggest it should rightly be called the de Moivre distribution.}.

@cindex Probability density function
Let's take @mymath{\lambda} to represent the expected mean count of something.
Furthermore, let's take @mymath{k} to represent the result of one particular counting attempt.
The probability density function of getting @mymath{k} counts (in each attempt, given the expected/mean count of @mymath{\lambda}) can be written as:

@cindex Poisson distribution
@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2, 3, \dots @}}
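
For example, when the expected/mean count is @mymath{\lambda=2}, the probability of counting @mymath{k=3} in a single attempt is @mymath{f(3)=(2^3/3!)e^{-2}\approx0.18}.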

@cindex Skewed Poisson distribution
Because the Poisson distribution is only applicable to non-negative integers (note the factorial operator), it is naturally very skewed when @mymath{\lambda} is near zero.
One qualitative way to understand this behavior is that there simply aren't as many integers smaller than @mymath{\lambda} as there are larger than it.
Therefore, to accommodate all possibilities/counts, the distribution has to be strongly skewed when @mymath{\lambda} is small.

@cindex Compare Poisson and Gaussian
As @mymath{\lambda} becomes larger, the distribution becomes more and more symmetric.
A very useful property of the Poisson distribution is that the mean value is also its variance.
When @mymath{\lambda} is very large, say @mymath{\lambda>1000}, then the @url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian) distribution}, is an excellent approximation of the Poisson distribution with mean @mymath{\mu=\lambda} and standard deviation @mymath{\sigma=\sqrt{\lambda}}.
In other words, a Poisson distribution (with a sufficiently large @mymath{\lambda}) is simply a Gaussian that only has one free parameter (@mymath{\mu=\lambda} and @mymath{\sigma=\sqrt{\lambda}}), instead of the two parameters (independent @mymath{\mu} and @mymath{\sigma}) that it originally has.

@cindex Sky value
@cindex Background flux
@cindex Undetected objects
In real situations, the photons/flux from our targets are added to a certain background flux (observationally, the @emph{Sky} value).
The Sky value is defined to be the average flux of a region in the dataset with no targets.
Its physical origin can be the brightness of the atmosphere (for ground-based instruments), possible stray light within the imaging instrument, the average flux of undetected targets, etc.
The Sky value is thus an ideal definition, because in real datasets, what lies deep in the noise (far lower than the detection limit) is never known@footnote{In a real image, a relatively large number of very faint objects can be fully buried in the noise and never detected.
These undetected objects will bias the background measurement to slightly larger values.
Our best approximation is thus to simply assume they are uniform, and consider their average effect.
See Figure 1 (a.1 and a.2) and Section 2.2 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
To account for all of these, the sky value is defined to be the average count/value of the undetected regions in the image.
In a mock image/dataset, we have the luxury of setting the background (Sky) value.

@cindex Simulating noise
@cindex Noise simulation
In each element of the dataset (pixel in an image), the flux is the sum of contributions from various sources (after convolution by the PSF, see @ref{PSF}).
Let's name the convolved sum of possibly overlapping objects @mymath{I_{nn}}, with @mymath{nn} representing `no noise'.
For now, let's assume the background (@mymath{B}) is constant and sufficiently high for the Poisson distribution to be approximated by a Gaussian.
Then the flux after adding noise is a random value taken from a Gaussian distribution with the following mean (@mymath{\mu}) and standard deviation (@mymath{\sigma}):

@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{B+I_{nn}}}
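
For example, if a pixel has a background of @mymath{B=100} counts and receives @mymath{I_{nn}=25} counts from the objects, its noised value is drawn from a Gaussian with @mymath{\mu=125} and @mymath{\sigma=\sqrt{125}\approx11.2}.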

Since this type of noise is inherent in the objects we study, it is usually measured on the same scale as the astronomical objects, namely the magnitude system, see @ref{Brightness flux magnitude}.
It is then internally converted to the flux scale for further processing.

@node Instrumental noise, Final noised pixel value, Photon counting noise, Noise basics
@subsubsection Instrumental noise

@cindex Readout noise
@cindex Instrumental noise
@cindex Noise, instrumental
While taking images with a camera, a dark current is fed to the pixels; the variation of this dark current over the pixels also adds to the final image noise.
Another source of noise is the readout noise that is produced by the electronics in the detector, specifically the parts that digitize the voltage produced by the photo-electrons in the analog-to-digital converter.
With the current generation of instruments, this source of noise is not as significant as the noise due to the background Sky discussed in @ref{Photon counting noise}.

Let @mymath{C} represent the combined standard deviation of all these instrumental sources of noise.
When only this source of noise is present, the noised pixel value would be a random value chosen from a Gaussian distribution with

@dispmath{\mu=I_{nn}, \quad \sigma=\sqrt{C^2+I_{nn}}}

@cindex ADU
@cindex Gain
@cindex Counts
This type of noise is independent of the signal in the dataset; it is only determined by the instrument.
So the flux scale (and not magnitude scale) is most commonly used for this type of noise.
In practice, this value is usually reported in analog-to-digital units or ADUs, not flux or electron counts.
The gain value of the device can be used to convert between these two, see @ref{Brightness flux magnitude}.

@node Final noised pixel value, Generating random numbers, Instrumental noise, Noise basics
@subsubsection Final noised pixel value
Based on the discussions in @ref{Photon counting noise} and @ref{Instrumental noise}, depending on the values you specify for @mymath{B} and @mymath{C} from the above, the final noised value for each pixel is a random value chosen from a Gaussian distribution with

@dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{C^2+B+I_{nn}}}
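
For example, taking the same pixel as before (@mymath{B=100} and @mymath{I_{nn}=25}) with an instrumental noise of @mymath{C=10}, the final value is drawn from a Gaussian with @mymath{\mu=125} and @mymath{\sigma=\sqrt{100+100+25}=15}.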



@node Generating random numbers,  , Final noised pixel value, Noise basics
@subsubsection Generating random numbers

@cindex Random numbers
@cindex Numbers, random
As discussed above, to generate noise we need to make random samples of a particular distribution.
So it is important to understand some general concepts regarding the generation of random numbers.
For a very complete and nice introduction we strongly advise reading Donald Knuth's ``The art of computer programming'', volume 2, chapter 3@footnote{Knuth, Donald. 1998.
The art of computer programming. Addison--Wesley. ISBN 0-201-89684-2 }.
Quoting from the GNU Scientific Library manual, ``If you don't own it, you should stop reading right now, run to the nearest bookstore, and buy it''@footnote{For students, running to the library might be more affordable!}!

@cindex Pseudo-random numbers
@cindex Numbers, pseudo-random
Using only software, we can only produce what is called a pseudo-random sequence of numbers.
A true random number generator is a hardware device (let's assume we have made sure it has no systematic biases), for example throwing dice or flipping coins (methods that have remained from ancient times).
More modern hardware methods use atmospheric noise, thermal noise or other types of external electromagnetic or quantum phenomena.
All pseudo-random number generators (software) require a seed to be the basis of the generation.
The advantage of having a seed is that if you specify the same seed for multiple runs, you will get an identical sequence of random numbers which allows you to reproduce the same final noised image.

@cindex Environment variables
@cindex GNU Scientific Library
The programs in GNU Astronomy Utilities (for example MakeNoise or MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
GSL allows the user to set the random number generator through environment variables, see @ref{Installation directory} for an introduction to environment variables.
In the chapter titled ``Random Number Generation'' they have fully explained the various random number generators that are available (there are a lot of them!).
Through the two environment variables @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} you can specify the generator and its seed respectively.

@cindex Seed, Random number generator
@cindex Random number generator, Seed
If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its default random number generator type.
The default type is sufficient for most general applications.
If no value is given for the @code{GSL_RNG_SEED} environment variable and you have asked Gnuastro to read the seed from the environment (through the @option{--envseed} option), then GSL will use the default value of each generator to give identical outputs.
If you don't explicitly tell Gnuastro programs to read the seed value from the environment variable, then they will use the system time (accurate to within a microsecond) to generate (apparently random) seeds.
In this manner, every time you run the program, you will get a different random number distribution.

There are two ways you can specify values for these environment variables.
You can call them on the same command-line for example:

@example
$ GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 astmknoise input.fits
@end example

@noindent
In this manner the values will only be used for this particular execution of MakeNoise.
Alternatively, you can define them for the full period of your terminal session or script length, using the shell's @command{export} command with the two separate commands below (for a script remove the @code{$} signs):

@example
$ export GSL_RNG_TYPE="taus"
$ export GSL_RNG_SEED=345
@end example

@cindex Startup scripts
@cindex @file{.bashrc}
@noindent
The subsequent programs which use GSL's random number generators will henceforth use these values in this session of the terminal you are running or while executing this script.
In case you want to set fixed values for these parameters every time you use the GSL random number generator, you can add these two lines to your @file{.bashrc} startup script@footnote{Don't forget that if you are going to give your scripts (that use the GSL random number generator) to others you have to make sure you also tell them to set these environment variable separately.
So for scripts, it is best to keep all such variable definitions within the script, even if they are within your @file{.bashrc}.}, see @ref{Installation directory}.

@cartouche
@noindent
@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} are defined, GSL will report them by default, even if you don't use the @option{--envseed} option.
For example you can see the top few lines of the output of MakeProfiles:

@example
$ export GSL_RNG_TYPE="taus"
$ export GSL_RNG_SEED=345
$ astmkprof -s1 --kernel=gaussian,2,5 --envseed
GSL_RNG_TYPE=taus
GSL_RNG_SEED=345
MakeProfiles A.B started on DDD MMM DD HH:MM:SS YYYY
  - Building one gaussian kernel
  - Random number generator (RNG) type: ranlxs1
  - RNG seed for all profiles: 345
  ---- ./kernel.fits created.
MakeProfiles finished in 0.111271 seconds
@end example

@noindent
@cindex Seed, Random number generator
@cindex Random number generator, Seed
The first two output lines (showing the names of the environment variables) are printed by GSL before MakeProfiles actually starts generating random numbers.
The Gnuastro programs will report the values they use independently, you should check them for the final values used.
For example if @option{--envseed} is not given, @code{GSL_RNG_SEED} will not be used and the last line shown above will not be printed.
In the case of MakeProfiles, each profile will get its own seed value.
@end cartouche


@node Invoking astmknoise,  , Noise basics, MakeNoise
@subsection Invoking MakeNoise

MakeNoise will add noise to an existing image.
The executable name is @file{astmknoise} with the following general template

@example
$ astmknoise [OPTION ...] InputImage.fits
@end example

@noindent
One line examples:

@example
## Add noise with a standard deviation of 100 to image.
## (this is independent of the pixel value: not Poisson noise)
$ astmknoise --sigma=100 image.fits

## Add noise to input image assuming a background magnitude (with
## zero point magnitude of 0) and a certain instrumental noise:
$ astmknoise --background=-10 -z0 --instrumental=20 mockimage.fits
@end example

@noindent
If actual processing is to be done, the input image is a mandatory argument.
The full list of options common to all the programs in Gnuastro can be seen in @ref{Common options}.
The type (see @ref{Numeric data types}) of the output can be specified with the @option{--type} option, see @ref{Input output options}.
The header of the output FITS file keeps all the parameters that were influential in making it.
This is done for future reproducibility.

@table @option

@item -b FLT
@itemx --background=FLT
The background value (per pixel) that will be added to each pixel value (internally) to estimate Poisson noise, see @ref{Photon counting noise}.
By default the units of this value are assumed to be in magnitudes, hence a @option{--zeropoint} is also necessary.
But if the background is in units of brightness, you need to add @option{--bgisbrightness}, see @ref{Brightness flux magnitude}.

Internally, the value given to this option will be converted to brightness (@mymath{b}; when @option{--bgisbrightness} is called, the value will be used directly).
Assuming the pixel value is @mymath{p}, the random value for that pixel will be taken from a Gaussian distribution with mean of @mymath{p+b} and standard deviation of @mymath{\sqrt{p+b}}.
With this option, the noise will therefore be dependent on the pixel values: according to the Poisson noise model, as the pixel value becomes larger, its noise will also become larger.
This is thus a realistic way to model noise, see @ref{Photon counting noise}.
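
For example, if the background is already known in the input's pixel units (brightness), a sketch of the call (with hypothetical file names) would be:

@example
$ astmknoise mock.fits --background=100 --bgisbrightness \
             --output=noised.fits
@end example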

@item -B
@itemx --bgisbrightness
The value given to @option{--background} should be interpreted as brightness, not as a magnitude.

@item -z FLT
@itemx --zeropoint=FLT
The zero point magnitude used to convert the value of @option{--background} (in units of magnitude) to flux, see @ref{Brightness flux magnitude}.

@item -i FLT
@itemx --instrumental=FLT
The instrumental noise which is in units of flux, see @ref{Instrumental noise}.

@item -s FLT
@itemx --sigma=FLT
The total noise sigma in the same units as the pixel values.
With this option, the @option{--background}, @option{--zeropoint} and @option{--instrumental} will be ignored.
With this option, the noise will be independent of the pixel values (which is not realistic, see @ref{Photon counting noise}).
Hence it is only useful if you are working on low surface brightness regions where the change in pixel value (and thus real noise) is insignificant.

Generally, @strong{usage of this option is discouraged} unless you understand the risks of not simulating real noise.
This is because with this option, you will not get Poisson noise (the common noise model for astronomical imaging), where the noise varies based on pixel value.
Use @option{--background} for adding Poisson noise.

@item -e
@itemx --envseed
@cindex Seed, Random number generator
@cindex Random number generator, Seed
Use the @code{GSL_RNG_SEED} environment variable for the seed used in the random number generator, see @ref{Generating random numbers}.
With this option, the output image noise is always going to be identical (or reproducible).

@item -d
@itemx --doubletype
Save the output in the double precision floating point format that was used internally.
This option will be most useful if the input images were of integer types.

@end table


















@node High-level calculations, Library, Modeling and fittings, Top
@chapter High-level calculations

After the reduction of raw data (for example with the programs in @ref{Data manipulation}) you will have reduced images/data ready for processing/analyzing (for example with the programs in @ref{Data analysis}).
But the processed/analyzed data (or catalogs) are still not enough to derive any scientific result.
Even higher-level analysis is still needed to convert the observed magnitudes, sizes, or volumes into physical quantities that we associate with each catalog entry or detected object; deriving such quantities is the purpose of the tools in this section.





@menu
* CosmicCalculator::            Calculate cosmological variables
@end menu

@node CosmicCalculator,  , High-level calculations, High-level calculations
@section CosmicCalculator

To derive higher-level information regarding our sources in extra-galactic astronomy, cosmological calculations are necessary.
In Gnuastro, CosmicCalculator is in charge of such calculations.
Before discussing how CosmicCalculator is called and operates (in @ref{Invoking astcosmiccal}), it is important to provide a rough but mostly self-sufficient review of the basics and the equations used in the analysis.
In @ref{Distance on a 2D curved space} the basic idea of understanding distances in a curved and expanding 2D universe (which we can visualize) are reviewed.
Having solidified the concepts there, in @ref{Extending distance concepts to 3D}, the formalism is extended to the 3D universe we are trying to study in our research.

The focus here is obtaining a physical insight into these equations (mainly for the use in real observational studies).
There are many books thoroughly deriving and proving all the equations with all possible initial conditions and assumptions for any abstract universe, interested readers can study those books.

@menu
* Distance on a 2D curved space::  Distances in 2D for simplicity
* Extending distance concepts to 3D::  Going to 3D (our real universe).
* Invoking astcosmiccal::       How to run CosmicCalculator
@end menu

@node Distance on a 2D curved space, Extending distance concepts to 3D, CosmicCalculator, CosmicCalculator
@subsection Distance on a 2D curved space

The observations to date (for example the Planck 2015 results), have not measured@footnote{The observations are interpreted under the assumption of uniform curvature.
For a relativistic alternative to dark energy (and maybe also some part of dark matter), non-uniform curvature may even be more critical, but that is beyond the scope of this brief explanation.} the presence of significant curvature in the universe.
However, to be generic (and to allow measuring the curvature if it does in fact exist), it is very important to create a framework that allows non-zero uniform curvature.
That said, this section is not intended to be a fully thorough and mathematically complete derivation of these concepts.
There are many references available for such reviews that go deep into the abstract mathematical proofs.
The emphasis here is on visualization of the concepts for a beginner.

As 3D beings, it is difficult for us to mentally create (visualize) a picture of the curvature of a 3D volume.
Hence, here we will assume a 2D surface/space and discuss distances on that 2D surface when it is flat and when it is curved.
Once the concepts have been created/visualized here, we will extend them, in @ref{Extending distance concepts to 3D}, to a real 3D spatial @emph{slice} of the Universe we live in and hope to study.

To make the discussion more tangible (and actively discuss from an observer's point of view), let's assume there's an imaginary 2D creature living on the 2D space (which @emph{might} be curved in 3D).
Here, we will be working with this creature in its efforts to analyze distances in its 2D universe.
The start of the analysis might seem too mundane, but since it is difficult to imagine a 3D curved space, it is important to review all the very basic concepts thoroughly for an easy transition to a universe that is more difficult to visualize (a curved 3D space embedded in 4D).

To start, let's assume a static (not expanding or shrinking), flat 2D surface similar to @ref{flatplane} and that the 2D creature is observing its universe from point @mymath{A}.
One of the most basic ways to parameterize this space is through the Cartesian coordinates (@mymath{x}, @mymath{y}).
In @ref{flatplane}, the basic axes of these two coordinates are plotted.
An infinitesimal change in the direction of each axis is written as @mymath{dx} and @mymath{dy}.
For each point, the infinitesimal changes are parallel with the respective axes and are not shown for clarity.
Another very useful way of parameterizing this space is through polar coordinates.
For each point, we define a radius (@mymath{r}) and angle (@mymath{\phi}) from a fixed (but arbitrary) reference axis.
In @ref{flatplane} the infinitesimal changes for each polar coordinate are plotted for a random point and a dashed circle is shown for all points with the same radius.

@float Figure,flatplane
@center@image{gnuastro-figures/flatplane, 10cm, , }

@caption{Two dimensional Cartesian and polar coordinates on a flat
plane.}
@end float

Assuming an object is placed at a certain position, which can be parameterized as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general infinitesimal change in its position will place it in the coordinates @mymath{(x+dx,y+dy)} and @mymath{(r+dr,\phi+d\phi)}.
The distance (on the flat 2D surface) that is covered by this infinitesimal change in the static universe (@mymath{ds_s}, the subscript signifies the static nature of this universe) can be written as:

@dispmath{ds_s^2=dx^2+dy^2=dr^2+r^2d\phi^2}

The main question is this: how can the 2D creature incorporate the (possible) curvature in its universe when it is calculating distances? The universe that it lives in might equally be a curved surface like @ref{sphereandplane}.
The answer to this question, but for a 3D being (us), is the whole purpose of this discussion.
Here, we want to give the 2D creature (and later, ourselves) the tools to measure distances if the space (that hosts the objects) is curved.

@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as the curved 2D plane for simplicity.
The 2D plane is tangent to the spherical shell and only touches it at @mymath{A}.
This idea will be generalized later.
The first step in measuring the distance in a curved space is to imagine a third dimension along the @mymath{z} axis as shown in @ref{sphereandplane}.
For simplicity, the @mymath{z} axis is assumed to pass through the center of the spherical shell.
Our imaginary 2D creature cannot visualize the third dimension or a curved 2D surface within it, so the remainder of this discussion is purely abstract for it (similar to us having difficulty in visualizing a 3D curved space in 4D).
But since we are 3D creatures, we have the advantage of visualizing the following steps.
Fortunately the 2D creature is already familiar with our mathematical constructs, so it can follow our reasoning.

With the third axis added, a generic infinitesimal change over @emph{the full} 3D space corresponds to the distance:

@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.}

@float Figure,sphereandplane
@center@image{gnuastro-figures/sphereandplane, 10cm, , }

@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light gray) tangent to it at point @mymath{A}.}
@end float

It is very important to recognize that this change of distance is for @emph{any} point in the 3D space, not just those changes that occur on the 2D spherical shell of @ref{sphereandplane}.
Recall that our 2D friend can only do measurements on the 2D surfaces, not the full 3D space.
So we have to constrain this general change to any change on the 2D spherical shell.
To do that, let's look at the arbitrary point @mymath{P} on the 2D spherical shell.
Its image (@mymath{P'}) on the flat plane is also displayed.
From the dark gray triangle, we see that

@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}These relations allow the 2D creature to find the value of @mymath{z} (an abstract dimension for it) as a function of @mymath{r} (the distance on the flat 2D plane, which it can visualize) and thus eliminate @mymath{z}.
From @mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and solving for @mymath{z}, we find:

@dispmath{z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).}

The @mymath{\pm} can be understood from @ref{sphereandplane}: For each @mymath{r}, there are two points on the sphere, one in the upper hemisphere and one in the lower hemisphere.
An infinitesimal change in @mymath{r}, will create the following infinitesimal change in @mymath{z}:

@dispmath{dz={\mp r\over R}\left(1\over
\sqrt{1-{r^2/R^2}}\right)dr.}Using the positive signed equation instead of @mymath{dz} in the @mymath{ds_s^2} equation above, we get:

@dispmath{ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.}

The derivation above was done for a spherical shell of radius @mymath{R} as a curved 2D surface.
To generalize it to any surface, we can define @mymath{K=1/R^2} as the curvature parameter.
Then the general infinitesimal change in a static universe can be written as:

@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.}

Therefore, when @mymath{K>0} (and curvature is the same everywhere), we have a finite universe, where @mymath{r} cannot become larger than @mymath{R} as in @ref{sphereandplane}.
When @mymath{K=0}, we have a flat plane (@ref{flatplane}) and a negative @mymath{K} will correspond to an imaginary @mymath{R}.
The latter two cases may be infinite in area (which is not a simple concept, but mathematically can be modeled with @mymath{r} extending infinitely), or finite-area (like a cylinder is flat everywhere with @mymath{ds_s^2={dx^2 + dy^2}}, but finite in one direction in size).

@cindex Proper distance
A very important issue that can be discussed now (while we are still in 2D and can actually visualize things) is that @mymath{\overrightarrow{r}} is tangent to the curved space at the observer's position.
In other words, it is on the gray flat surface of @ref{sphereandplane}, even when the universe is curved: @mymath{\overrightarrow{r}=P'-A}.
Therefore for the point @mymath{P} on a curved space, the raw coordinate @mymath{r} is the distance to @mymath{P'}, not @mymath{P}.
The distance to the point @mymath{P} (at a specific coordinate @mymath{r} on the flat plane) over the curved surface (thick line in @ref{sphereandplane}) is called the @emph{proper distance} and is displayed with @mymath{l}.
For the specific example of @ref{sphereandplane}, the proper distance can be calculated with: @mymath{l=R\theta} (@mymath{\theta} is in radians).
Using the @mymath{\sin\theta} relation found above, we can find @mymath{l} as a function of @mymath{r}:

@dispmath{\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
l(r)=R\sin^{-1}\left({r\over R}\right)}
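
For example, for a point with the maximum possible flat-plane coordinate @mymath{r=R}, the proper distance is @mymath{l=R\sin^{-1}(1)=\pi R/2}: a quarter of the sphere's circumference, taking the 2D creature from the tangent point @mymath{A} to the sphere's equator.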


@mymath{R} is just an arbitrary constant and can be directly found from @mymath{K}, so for cleaner equations, it is common practice to set @mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}.
Also note that when @mymath{R=1}, then @mymath{l=\theta}.
Generally, depending on the curvature, in a @emph{static} universe the proper distance can be written as a function of the coordinate @mymath{r} as (from now on we are assuming @mymath{R=1}):

@dispmath{l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).}With
@mymath{l}, the infinitesimal change of distance can be written in a
simpler and more abstract form:

@dispmath{ds_s^2=dl^2+r^2d\phi^2.}

@cindex Comoving distance
Until now, we had assumed a static universe (not changing with time).
But our observations so far appear to indicate that the universe is expanding (it isn't static).
Since there is no reason to expect the observed expansion is unique to our particular position of the universe, we expect the universe to be expanding at all points with the same rate at the same time.
Therefore, to add a time dependence to our distance measurements, we can include a multiplicative scaling factor, which is a function of time: @mymath{a(t)}.
The functional form of @mymath{a(t)} comes from the cosmology, the physics we assume for it: general relativity, and the choice of whether the universe is uniform (`homogeneous') in density and curvature or inhomogeneous.
In this section, the functional form of @mymath{a(t)} is irrelevant, so we can avoid these issues.

With this scaling factor, the proper distance will also depend on time.
As the universe expands, the distance between two given points will shift to larger values.
We thus define a distance measure, or coordinate, that is independent of time and thus doesn't `move'.
We call it the @emph{comoving distance} and display with @mymath{\chi} such that: @mymath{l(r,t)=\chi(r)a(t)}.
We have therefore, shifted the @mymath{r} dependence of the proper distance we derived above for a static universe to the comoving distance:

@dispmath{\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
\chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).}

Therefore, @mymath{\chi(r)} is the proper distance to an object at a specific reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies ``reference'') when @mymath{a(t_r)=1}.
At any arbitrary moment (@mymath{t\neq{t_r}}) before or after @mymath{t_r}, the proper distance to the object can be scaled with @mymath{a(t)}.

Measuring the change of distance in a time-dependent (expanding) universe only makes sense if we can add up space and time@footnote{In other words, making our space-time consistent with Minkowski space-time geometry.
In this geometry, different observers at a given point (event) in space-time split up space-time into `space' and `time' in different ways, just like people at the same spatial position can make different choices of splitting up a map into `left--right' and `up--down'.
This model is well supported by twentieth and twenty-first century observations.}.
But we can only add bits of space and time together if we measure them in the same units: with a conversion constant (similar to how 1000 is used to convert a kilometer into meters).
Experimentally, we find strong support for the hypothesis that this conversion constant is the speed of light (or gravitational waves@footnote{The speed of gravitational waves was recently found to be very similar to that of light in vacuum, see @url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a vacuum.
This speed is postulated to be constant@footnote{In @emph{natural units}, speed is measured in units of the speed of light in vacuum.} and is almost always written as @mymath{c}.
We can thus parameterize the change in distance on an expanding 2D surface as

@dispmath{ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).}


@node Extending distance concepts to 3D, Invoking astcosmiccal, Distance on a 2D curved space, CosmicCalculator
@subsection Extending distance concepts to 3D

The concepts of @ref{Distance on a 2D curved space} are here extended to a 3D space that @emph{might} be curved.
We can start with the generic infinitesimal distance in a static 3D universe, but this time in spherical coordinates instead of polar coordinates.
@mymath{\theta} is shown in @ref{sphereandplane}, but here we are 3D beings, positioned on @mymath{O} (the center of the sphere) and the point @mymath{O} is tangent to a 4D-sphere.
In our 3D space, a generic infinitesimal displacement will correspond to the following distance in spherical coordinates:

@dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}

Like the 2D creature before, we now have to assume an abstract dimension which we cannot visualize easily.
Let's call the fourth dimension @mymath{w}; then the general change in coordinates in the @emph{full} four dimensional space will be:

@dispmath{ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.}

@noindent
But we can only work on a 3D curved space, so following exactly the same steps and conventions as our 2D friend, we arrive at:

@dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}

@noindent
In a non-static universe (with a scale factor @mymath{a(t)}), the distance can be written as:

@dispmath{ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].}



@c@dispmath{H(z){\equiv}\left(\dot{a}\over a\right)(z)=H_0E(z) }

@c@dispmath{E(z)=[ \Omega_{\Lambda,0} + \Omega_{C,0}(1+z)^2 +
@c\Omega_{m,0}(1+z)^3 + \Omega_{r,0}(1+z)^4 ]^{1/2}}

@c Let's take @mymath{r} to be the radial coordinate of the emitting
@c source, which emitted its light at redshift $z$. Then the comoving
@c distance of this object would be:

@c@dispmath{ \chi(r)={c\over H_0a_0}\int_0^z{dz'\over E(z')} }

@c@noindent
@c So the proper distance at the current time to that object is:
@c @mymath{a_0\chi(r)}, therefore the angular diameter distance
@c (@mymath{d_A}) and luminosity distance (@mymath{d_L}) can be written
@c as:

@c@dispmath{ d_A={a_0\chi(r)\over 1+z}, \quad d_L=a_0\chi(r)(1+z) }




@node Invoking astcosmiccal,  , Extending distance concepts to 3D, CosmicCalculator
@subsection Invoking CosmicCalculator

CosmicCalculator will calculate cosmological variables based on the input parameters.
The executable name is @file{astcosmiccal} with the following general template

@example
$ astcosmiccal [OPTION...] ...
@end example


@noindent
One line examples:

@example
## Print basic cosmological properties at redshift 2.5:
$ astcosmiccal -z2.5

## Only print the comoving volume over 4pi steradian to z (Mpc^3):
$ astcosmiccal --redshift=0.8 --volume

## Print redshift and age of universe when Lyman-alpha line is
## at 6000 angstrom (another way to specify redshift).
$ astcosmiccal --obsline=lyalpha,6000 --age

## Print luminosity distance, angular diameter distance and age
## of universe in one row at redshift 0.4
$ astcosmiccal -z0.4 -LAg

## Assume Lambda and matter density of 0.7 and 0.3 and print
## basic cosmological parameters for redshift 2.1:
$ astcosmiccal -l0.7 -m0.3 -z2.1

## Print wavelength of all pre-defined spectral lines when
## Lyman-alpha is observed at 4000 Angstroms.
$ astcosmiccal --obsline=lyalpha,4000 --listlinesatz
@end example

The input parameters (for example the current matter density) can be given as command-line options or in the configuration files, see @ref{Configuration files}.
For a definition of the different parameters, please see the sections prior to this.
If no redshift is given, CosmicCalculator will just print its input parameters and abort.
For a full list of the input options, please see @ref{CosmicCalculator input options}.

Without any particular output requested (and only a given redshift), CosmicCalculator will print all basic cosmological calculations (one per line) with some explanations before each.
This can be good when you want a general feeling of the conditions at a specific redshift.
Alternatively, if any specific calculation(s) are requested (it is possible to call more than one), only the requested value(s) will be calculated and printed with one character space between them.
In this case, no description or units will be printed.
See @ref{CosmicCalculator basic cosmology calculations} for the full list of these options along with some explanations of when/how they can be useful.

Another common operation in observational cosmology is dealing with spectral lines at different redshifts.
CosmicCalculator also has features to help in such situations, please see @ref{CosmicCalculator spectral line calculations}.

@menu
* CosmicCalculator input options::  Options to specify input conditions.
* CosmicCalculator basic cosmology calculations::  Like distance modulus, distances, etc.
* CosmicCalculator spectral line calculations::  How they get affected by redshift.
@end menu

@node CosmicCalculator input options, CosmicCalculator basic cosmology calculations, Invoking astcosmiccal, Invoking astcosmiccal
@subsubsection CosmicCalculator input options

The inputs to CosmicCalculator can be specified with the following options:
@table @option

@item -z FLT
@itemx --redshift=FLT
The redshift of interest.
There are two other ways that you can specify the target redshift:
1) Spectral lines and their observed wavelengths, see @option{--obsline}.
2) Velocity, see @option{--velocity}.
Hence this option cannot be called with @option{--obsline} or @option{--velocity}.

@item -y FLT
@itemx --velocity=FLT
Input velocity in km/s.
The given value will be converted to redshift internally, and used in any subsequent calculation.
This option is thus an alternative to @option{--redshift} or @option{--obsline}; it cannot be used with them.
The conversion will be done with the more general and accurate relativistic equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified @mymath{z\approx v/c}.
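
For example, with the hypothetical command below (the velocity value is only an illustration), you can check the derived redshift through the @option{--usedredshift} output (see @ref{CosmicCalculator basic cosmology calculations}):

@example
$ astcosmiccal --velocity=30000 --usedredshift
@end example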

@item -H FLT
@itemx --H0=FLT
Current expansion rate (in km sec@mymath{^{-1}} Mpc@mymath{^{-1}}).

@item -l FLT
@itemx --olambda=FLT
Cosmological constant density divided by the critical density in the current Universe (@mymath{\Omega_{\Lambda,0}}).

@item -m FLT
@itemx --omatter=FLT
Matter (including massive neutrinos) density divided by the critical density in the current Universe (@mymath{\Omega_{m,0}}).

@item -r FLT
@itemx --oradiation=FLT
Radiation density divided by the critical density in the current Universe (@mymath{\Omega_{r,0}}).

@item -O STR/FLT,FLT
@itemx --obsline=STR/FLT,FLT
@cindex Rest-frame wavelength
@cindex Wavelength, rest-frame
Find the redshift to use in next steps based on the rest-frame and observed wavelengths of a line.
This option is thus an alternative to @option{--redshift} or @option{--velocity}; it cannot be used with them.
Wavelengths are assumed to be in Angstroms.
The first argument identifies the line.
It can be one of the standard names below, or any rest-frame wavelength in Angstroms.
The second argument is the observed wavelength of that line.
For example @option{--obsline=lyalpha,6000} is the same as @option{--obsline=1215.64,6000}.

The pre-defined names are listed below, sorted from red (longer wavelength) to blue (shorter wavelength).
You can get this list on the command-line with the @option{--listlines} option.

@table @code
@item siired
[6731@AA{}] SII doublet's redder line.

@item sii
@cindex Doublet: SII
@cindex SII doublet
[6724@AA{}] SII doublet's mean center.

@item siiblue
[6717@AA{}] SII doublet's bluer line.

@item niired
[6584@AA{}] NII doublet's redder line.

@item nii
@cindex Doublet: NII
@cindex NII doublet
[6566@AA{}] NII doublet's mean center.

@item halpha
@cindex H-alpha
[6562.8@AA{}] H-@mymath{\alpha} line.

@item niiblue
[6548@AA{}] NII doublet's bluer line.

@item oiiired-vis
[5007@AA{}] OIII doublet's redder line in the visible.

@item oiii-vis
@cindex Doublet: OIII (visible)
@cindex OIII doublet in visible
[4983@AA{}] OIII doublet's mean center in the visible.

@item oiiiblue-vis
[4959@AA{}] OIII doublet's bluer line in the visible.

@item hbeta
@cindex H-beta
[4861.36@AA{}] H-@mymath{\beta} line.

@item heii-vis
[4686@AA{}] HeII doublet's redder line in the visible.

@item hgamma
@cindex H-gamma
[4340.46@AA{}] H-@mymath{\gamma} line.

@item hdelta
@cindex H-delta
[4101.74@AA{}] H-@mymath{\delta} line.

@item hepsilon
@cindex H-epsilon
[3970.07@AA{}] H-@mymath{\epsilon} line.

@item neiii
[3869@AA{}] NeIII line.

@item oiired
[3729@AA{}] OII doublet's redder line.

@item oii
@cindex Doublet: OII
@cindex OII doublet
[3727.5@AA{}] OII doublet's mean center.

@item oiiblue
[3726@AA{}] OII doublet's bluer line.

@item blimit
@cindex Balmer limit
[3646@AA{}] Balmer limit.

@item mgiired
[2803@AA{}] MgII doublet's redder line.

@item mgii
@cindex Doublet: MgII
@cindex MgII doublet
[2799.5@AA{}] MgII doublet's mean center.

@item mgiiblue
[2796@AA{}] MgII doublet's bluer line.

@item ciiired
[1909@AA{}] CIII doublet's redder line.

@item ciii
@cindex Doublet: CIII
@cindex CIII doublet
[1908@AA{}] CIII doublet's mean center.

@item ciiiblue
[1907@AA{}] CIII doublet's bluer line.

@item si_iiired
[1892@AA{}] SiIII doublet's redder line.

@item si_iii
@cindex Doublet: SiIII
@cindex SiIII doublet
[1887.5@AA{}] SiIII doublet's mean center.

@item si_iiiblue
[1883@AA{}] SiIII doublet's bluer line.

@item oiiired-uv
[1666@AA{}] OIII doublet's redder line in the ultra-violet.

@item oiii-uv
@cindex Doublet: OIII (in UV)
@cindex OIII doublet in UV
[1663.5@AA{}] OIII doublet's mean center in the ultra-violet.

@item oiiiblue-uv
[1661@AA{}] OIII doublet's bluer line in the ultra-violet.

@item heii-uv
[1640@AA{}] HeII doublet's bluer line in the ultra-violet.

@item civred
[1551@AA{}] CIV doublet's redder line.

@item civ
@cindex Doublet: CIV
@cindex CIV doublet
[1549@AA{}] CIV doublet's mean center.

@item civblue
[1548@AA{}] CIV doublet's bluer line.

@item nv
[1240@AA{}] NV (four times ionized Nitrogen).

@item lyalpha
@cindex Lyman-alpha
[1215.67@AA{}] Lyman-@mymath{\alpha} line.

@item lybeta
@cindex Lyman-beta
[1025.7@AA{}] Lyman-@mymath{\beta} line.

@item lygamma
@cindex Lyman-gamma
[972.54@AA{}] Lyman-@mymath{\gamma} line.

@item lydelta
@cindex Lyman-delta
[949.74@AA{}] Lyman-@mymath{\delta} line.

@item lyepsilon
@cindex Lyman-epsilon
[937.80@AA{}] Lyman-@mymath{\epsilon} line.

@item lylimit
@cindex Lyman limit
[912@AA{}] Lyman limit.

@end table

@end table



@node CosmicCalculator basic cosmology calculations, CosmicCalculator spectral line calculations, CosmicCalculator input options, Invoking astcosmiccal
@subsubsection CosmicCalculator basic cosmology calculations
By default, when no specific calculations are requested, CosmicCalculator will print a complete set of all its calculations (one line for each calculation, see @ref{Invoking astcosmiccal}).
The full list of calculations can be useful when you don't want any specific value, but just a general view.
In other contexts (for example in a batch script or during a discussion), you know exactly what you want and don't want to be distracted by all the extra information.

You can use any number of the options described below in any order.
When any of these options are requested, CosmicCalculator's output will just be a single line with a single space between the (possibly) multiple values.
In the example below, only the tangential distance covered by one arc-second (in kpc), the absolute magnitude conversion, and the age of the universe at redshift 2 are printed (recall that you can merge short options together, see @ref{Options}).

@example
$ astcosmiccal -z2 -sag
8.585046 44.819248 3.289979
@end example

Here is one example of using this feature in a script: the two lines below keep the comoving volume at a given redshift in the @code{vol} shell variable for later use:

@example
z=3.12
vol=$(astcosmiccal --redshift=$z --volume)
@end example

@cindex GNU Grep
@noindent
In a script, this operation might be necessary for a large number of objects (for example all the galaxies in a catalog).
So the fact that all the other default calculations are ignored will also help you get to your result faster.

If you are indeed dealing with many (for example thousands of) redshifts, using CosmicCalculator is not the best/fastest solution, because it has to go through all the configuration files and preparations for each invocation.
To get the best efficiency (least overhead), we recommend using Gnuastro's cosmology library (see @ref{Cosmology library}).
CosmicCalculator also calls the library functions defined there for its calculations, so you get the same result with no overhead.
Gnuastro also has libraries for easily reading tables into a C program, see @ref{Table input output}.
Afterwards, you can easily build and run your C program for the particular processing with @ref{BuildProgram}.
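
For example, below is a minimal sketch of such a C program (the cosmological parameter values are arbitrary illustrations, and the function's exact name and signature should be checked against @ref{Cosmology library}):

@example
#include <stdio.h>
#include <gnuastro/cosmology.h>   /* Assumed header name. */

int
main(void)
@{
  double z;
  double H0=70.0, olambda=0.7, omatter=0.3, oradiation=0.0;

  /* Hypothetical redshifts; in practice they may come from a table. */
  for(z=0.5; z<2.1; z+=0.5)
    printf("%-6.2f %f\n", z,
           gal_cosmology_age(z, H0, olambda, omatter, oradiation));
  return 0;
@}
@end example

You can compile, link and run this sketch with a single call to @command{astbuildprog}, as described in @ref{BuildProgram}.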

If you just want to inspect the value of a variable visually, the description (which comes with units) might be more useful.
In such cases, the following command might be better.
The other calculations will also be done, but they are so fast that you will not notice on modern computers (the time it takes your eye to focus on the result is usually longer than the processing: a fraction of a second).

@example
$ astcosmiccal --redshift=0.832 | grep volume
@end example

The full list of CosmicCalculator's specific calculations is present below in two groups: basic cosmology calculations and those related to spectral lines.
In case you have forgotten the units, you can use the @option{--help} option, which lists the units along with a short description.

@table @option

@item -e
@itemx --usedredshift
The redshift that was used in this run.
In many cases this is the main input parameter to CosmicCalculator, but it is useful in others.
For example in combination with @option{--obsline} (where you give an observed and rest-frame wavelength and would like to know the redshift) or with @option{--velocity} (where you specify the velocity instead of redshift).
Another example is when you run CosmicCalculator in a loop, while changing the redshift and you want to keep the redshift value with the resulting calculation.
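
For example, below is a minimal sketch of such a shell loop (the redshift values are hypothetical), printing the used redshift and the age of the universe at that redshift on each line:

@example
$ for z in 0.5 1.0 1.5 2.0; do
      astcosmiccal --redshift=$z --usedredshift --age
  done
@end example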

@item -Y
@itemx --usedvelocity
The velocity (in km/s) that was used in this run.
The conversion from redshift will be done with the more general and accurate relativistic equation of @mymath{1+z=\sqrt{(c+v)/(c-v)}}, not the simplified @mymath{z\approx v/c}.

@item -G
@itemx --agenow
The current age of the universe (given the input parameters) in Ga (Giga annum, or billion years).

@item -C
@itemx --criticaldensitynow
The current critical density (given the input parameters) in grams per cubic centimeter (@mymath{g/cm^3}).

@item -d
@itemx --properdistance
The proper distance (at the current time) to the object at the given redshift, in Megaparsecs (Mpc).
See @ref{Distance on a 2D curved space} for a description of the proper distance.

@item -A
@itemx --angulardimdist
The angular diameter distance to the object at the given redshift, in Megaparsecs (Mpc).

@item -s
@itemx --arcsectandist
The tangential distance covered by one arc-second at the given redshift, in kiloparsecs (kpc).
This can be useful when trying to estimate the resolution or pixel scale of an instrument (usually in units of arc-seconds) at a given redshift.

@item -L
@itemx --luminositydist
The luminosity distance to the object at the given redshift, in Megaparsecs (Mpc).

@item -u
@itemx --distancemodulus
The distance modulus at the given redshift.

@item -a
@itemx --absmagconv
The conversion factor (addition) to absolute magnitude.
Note that this is practically the distance modulus added to @mymath{-2.5\log{(1+z)}} for the desired redshift based on the input parameters.
Once the apparent magnitude and redshift of an object are known, this value may be added to the apparent magnitude to give the object's absolute magnitude.

@item -g
@itemx --age
Age of the universe at the given redshift in Ga (Giga annum, or billion years).

@item -b
@itemx --lookbacktime
The look-back time to the given redshift in Ga (Giga annum, or billion years).
The look-back time at a given redshift is defined as the current age of the universe (@option{--agenow}) minus the age of the universe at the given redshift.

@item -c
@itemx --criticaldensity
The critical density at the given redshift in grams per cubic centimeter (@mymath{g/cm^3}).

@item -v
@itemx --onlyvolume
The comoving volume, in cubic Megaparsecs (Mpc@mymath{^3}), out to the desired redshift based on the input parameters.

@end table




@node CosmicCalculator spectral line calculations,  , CosmicCalculator basic cosmology calculations, Invoking astcosmiccal
@subsubsection CosmicCalculator spectral line calculations

@cindex Rest frame wavelength
At different redshifts, observed spectral lines are shifted compared to their rest frame wavelengths with this simple relation: @mymath{\lambda_{obs}=\lambda_{rest}(1+z)}.
Although this relation is very simple and can be done for one line in your head (or with a simple calculator!), it slowly becomes tiring when dealing with a lot of lines or redshifts, or when some precision is necessary.
The options in this section are thus provided to greatly simplify the usage of this simple equation, and also help by providing a list of pre-defined spectral line wavelengths.

For example if you want to know the wavelength of the @mymath{H\alpha} line (at 6562.8 Angstroms in rest frame), when @mymath{Ly\alpha} is at 8000 Angstroms, you can call CosmicCalculator like the first example below.
And if you want the wavelength of all pre-defined spectral lines at this redshift, you can use the second command.

@example
$ astcosmiccal --obsline=lyalpha,8000 --lineatz=halpha
$ astcosmiccal --obsline=lyalpha,8000 --listlinesatz
@end example

Below you can see the printed/output calculations of CosmicCalculator that are related to spectral lines.
Note that @option{--obsline} is an input parameter, so it is discussed (with the full list of known lines) in @ref{CosmicCalculator input options}.

@table @option

@item --listlines
List the pre-defined rest frame spectral line wavelengths and their names on standard output, then abort CosmicCalculator.
When this option is given, other operations on the command-line will be ignored.
This is convenient when you forget the specific name of the spectral line used within Gnuastro, or when you forget the exact wavelength of a certain line.

These names can be used with the options that deal with spectral lines, for example @option{--obsline} (see @ref{CosmicCalculator input options}) and @option{--lineatz} (described below).

The format of the output list is a two-column table, with Gnuastro's text table format (see @ref{Gnuastro text table format}).
Therefore, if you are only looking for lines in a specific range, you can pipe the output into Gnuastro's table program and use its @option{--range} option on the @code{wavelength} (first) column.
For example, if you only want to see the lines between 4000 and 6000 Angstroms, you can run this command:

@example
$ astcosmiccal --listlines \
               | asttable --range=wavelength,4000,6000
@end example

@noindent
And if you want to use the list later and have it as a table in a file, you can easily add the @option{--output} (or @option{-o}) option to the @command{asttable} command, and specify the filename, for example @option{--output=lines.fits} or @option{--output=lines.txt}.

@item --listlinesatz
Similar to @option{--listlines} (above), but the printed wavelength is not in the rest frame, but redshifted to the given redshift.
Recall that the redshift can be specified by @option{--redshift} directly or by @option{--obsline}, see @ref{CosmicCalculator input options}.

@item -i STR/FLT
@itemx --lineatz=STR/FLT
The wavelength of the specified line at the redshift given to CosmicCalculator.
The line can be specified either by its name or directly as a number (its wavelength).
To get the list of pre-defined names for the lines and their wavelength, you can use the @option{--listlines} option, see @ref{CosmicCalculator input options}.
In the former case (when a name is given), the returned number is in units of Angstroms.
In the latter (when a number is given), the returned value will be in the same units as the input number (assuming it is a wavelength).

@end table




















@node Library, Developing, High-level calculations, Top
@chapter Library

Each program in Gnuastro that was discussed in the prior chapters (or any program in general) is a collection of functions that is compiled into one executable file which can communicate directly with the outside world.
The outside world in this context is the operating system.
By communication, we mean that control is directly passed to a program from the operating system with a (possible) set of inputs and after it is finished, the program will pass control back to the operating system.
For programs written in C and C++, the unique @code{main} function is in charge of this communication.

Similar to a program, a library is also a collection of functions that is compiled into one executable file.
However, unlike programs, libraries don't have a @code{main} function.
Therefore they can't communicate directly with the outside world.
This gives you the chance to write your own @code{main} function and call library functions from within it.
After compiling your program into a binary executable, you just have to @emph{link} it to the library and you are ready to run (execute) your program.
In this way, you can use Gnuastro at a much lower-level, and in combination with other libraries on your system, you can significantly boost your creativity.

This chapter starts with a basic introduction to libraries and how you can use them in @ref{Review of library fundamentals}.
The separate functions in the Gnuastro library are then introduced (classified by context) in @ref{Gnuastro library}.
If you end up routinely using a fixed set of library functions, with a well-defined input and output, it will be much more beneficial if you define a program for the job.
Therefore, in its @ref{Version controlled source}, Gnuastro comes with @ref{The TEMPLATE program} to easily define your own program(s).


@menu
* Review of library fundamentals::  Guide on libraries and linking.
* BuildProgram::                Link and run source files with this library.
* Gnuastro library::            Description of all library functions.
* Library demo programs::       Demonstration for using libraries.
@end menu

@node Review of library fundamentals, BuildProgram, Library, Library
@section Review of library fundamentals

Gnuastro's libraries are written in the C programming language.
In @ref{Why C}, we have thoroughly discussed the reasons behind this choice.
C was actually created to write Unix, thus understanding the way C works can greatly help in effectively using programs and libraries in all Unix-like operating systems.
Therefore, in the following subsections some important aspects of C, as it relates to libraries (and thus programs that depend on them) on Unix are reviewed.
First we will discuss header files in @ref{Headers} and then go onto @ref{Linking}.
This section finishes with @ref{Summary and example on libraries}.
If you are already familiar with these concepts, please skip this section and go directly to @ref{Gnuastro library}.

@cindex Modularity
In theory, a full operating system (or any software) can be written as one function.
Such a software would not need any headers or linking (that are discussed in the subsections below).
However, writing that single function and maintaining it (adding new features, fixing bugs, documentation, etc) would be a programmer or scientist's worst nightmare! Furthermore, all the hard work that went into creating it cannot be reused in other software: every other programmer or scientist would have to re-invent the wheel.
The ultimate purpose behind libraries (which come with headers and have to be linked) is to address this problem and increase modularity: ``the degree to which a system's components may be separated and recombined'' (from Wikipedia).
The more modular the source code of a program or library, the easier maintaining it will be, and all the hard work that went into creating it can be reused for a wider range of problems.

@menu
* Headers::                     Header files included in source.
* Linking::                     Linking the compiled source files into one.
* Summary and example on libraries::  A summary and example on using libraries.
@end menu

@node Headers, Linking, Review of library fundamentals, Review of library fundamentals
@subsection Headers

@cindex Pre-Processor
C source code is read from top to bottom in the source file, therefore program components (for example variables, data structures and functions) should all be @emph{defined} or @emph{declared} closer to the top of the source file: before they are used.
@emph{Defining} something in C or C++ is jargon for providing its full details.
@emph{Declaring} it, on the other hand, is jargon for only providing the minimum information needed for the compiler to pass over it temporarily and fill in the detailed definition later.

For a function, the @emph{declaration} only contains the inputs and their data-types along with the output's type@footnote{Recall that in C, functions only have one output.}.
The @emph{definition} adds to the declaration by including the exact details of what operations are done to the inputs to generate the output.
As an example, take this simple summation function:

@example
double
sum(double a, double b)
@{
  return a + b;
@}
@end example
@noindent
What you see above is the @emph{definition} of this function: it shows you (and the compiler) exactly what it does to the two @code{double} type inputs and that the output also has a @code{double} type.
Note that a function's internal operations are rarely so simple and short; they can be arbitrarily long and complicated.
This unreasonably short and simple function was chosen here for ease of reading.
The declaration for this function is:

@example
double
sum(double a, double b);
@end example

@noindent
You can think of a function's declaration as a building's address in the city, and the definition as the building's complete blueprints.
When the compiler confronts a call to a function during its processing, it doesn't need to know anything about how the inputs are processed to generate the output.
Just as the postman doesn't need to know the inner structure of a building when delivering the mail.
The declaration (address) is enough.
Therefore by @emph{declaring} the functions once at the start of the source files, we don't have to worry about @emph{defining} them after they are used.

Even for a simple real-world operation (not a simple summation like above!), you will soon need many functions (for example, some for reading/preparing the inputs, some for the processing, and some for preparing the output).
Although it is technically possible, managing all the necessary functions in one file is not easy and is contrary to the modularity principle (see @ref{Review of library fundamentals}), for example the functions for preparing the input can be usable in your other projects with a different processing.
Therefore, as we will see later (in @ref{Linking}), the functions don't necessarily need to be defined in the source file where they are used.
As long as their definitions are ultimately linked to the final executable, everything will be fine.
For now, it is just important to remember that the functions that are called within one source file must be declared within the source file (declarations are mandatory), but not necessarily defined there.

In the spirit of modularity, it is common to define contextually similar functions in one source file.
For example, in Gnuastro, functions that calculate the median, mean and other statistical functions are defined in @file{lib/statistics.c}, while functions that deal directly with FITS files are defined in @file{lib/fits.c}.

Keeping the definition of similar functions in a separate file greatly helps their management and modularity, but this fact alone doesn't make things much easier for the caller's source code: recall that while definitions are optional, declarations are mandatory.
So if this was all, the caller would have to manually copy and paste (@emph{include}) all the declarations from the various source files into the file they are working on now.
To address this problem, programmers have adopted the header file convention: the header file of a source code contains all the declarations that a caller would need to be able to use any of its functions.
For example, in Gnuastro, @file{lib/statistics.c} (file containing function definitions) comes with @file{lib/gnuastro/statistics.h} (only containing function declarations).

The discussion above was mainly focused on functions, however, there are many more programming constructs such as pre-processor macros and data structures.
Like functions, they also need to be known to the compiler when it confronts a call to them.
So the header file also contains their definitions or declarations when they are necessary for the functions.

@cindex Macro
@cindex Structures
@cindex Data structures
@cindex Pre-processor macros
Pre-processor macros (or macros for short) are replaced with their defined value by the pre-processor before compilation.
Conventionally they are written only in capital letters to be easily recognized.
It is just important to understand that the compiler doesn't see the macros; it sees their fixed values.
So when a header specifies macros you can do your programming without worrying about the actual values.
The standard C types (for example @code{int} or @code{float}) are very low-level and basic.
We can collect multiple C types into a @emph{structure} for a higher-level way to keep and pass-along data.
See @ref{Generic data container} for some examples of macros and data structures.
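
For example, a header might contain entities like those in the sketch below (all names here are hypothetical and only for illustration; see @ref{Generic data container} for Gnuastro's real macros and structures):

@example
/* Hypothetical header contents, for illustration only. */
#define MYLIB_MAX_ITER 100     /* Macro: the pre-processor will
                                  replace it with `100'.        */

typedef struct                 /* Structure: collects multiple
                                  basic C types into one
                                  higher-level container.       */
@{
  size_t  size;
  double *array;
@} mylib_data_t;

double mylib_sum(mylib_data_t *data);    /* Function declaration. */
@end example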

The contents in the header need to be @emph{include}d into the caller's source code with a special pre-processor command: @code{#include <path/to/header.h>}.
As the name suggests, the @emph{pre-processor} goes through the source code prior to the processor (or compiler).
One of its jobs is to include, or merge, the contents of files that are mentioned with this directive in the source code.
Therefore the compiler sees a single entity containing the contents of the main file and all the included files.
This allows you to include many (sometimes thousands of) declarations into your code with only one line.
Since the headers are also installed with the library into your system, you don't even need to keep a copy of them for each separate program, making things even more convenient.

Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory with a text editor to check out the include directives at the start of the file (after the copyright notice).
Let's take @file{lib/fits.c} as an example.
You will notice that Gnuastro's header files (like @file{gnuastro/fits.h}) are indeed within this directory (the @file{fits.h} file is in the @file{gnuastro/} directory).
However, files like @file{stdio.h} or @file{string.h} are not in this directory (or anywhere within Gnuastro).

On most systems the basic C header files (like @file{stdio.h} and @file{string.h} mentioned above) are located in @file{/usr/include/}@footnote{The @file{include/} directory name is taken from the pre-processor's @code{#include} directive, which is also the motivation behind the `I' in the @option{-I} option to the pre-processor.}.
Your compiler is configured to automatically search that directory (and possibly others), so you don't have to explicitly mention these directories.
Go ahead, look into the @file{/usr/include} directory and find @file{stdio.h} for example.
When the necessary header files are not in those standard directories, the pre-processor can also search other locations.
You can specify those directories with this pre-processor option@footnote{Try running Gnuastro's @command{make} and find the directories given to the compiler with the @option{-I} option.}:

@table @option
@item -I DIR
``Add the directory @file{DIR} to the list of directories to be searched for header files.
Directories named by '-I' are searched before the standard system include directories.
If the directory @file{DIR} is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system headers are not defeated...'' (quoted from the GNU Compiler Collection manual).
Note that the space between @key{I} and the directory is optional and commonly not used.
@end table

If the pre-processor can't find the included files, it will abort with an error.
In fact, a common error when building programs that depend on a library is that the compiler does not know where a library's header is (see @ref{Known issues}).
So you have to manually tell the compiler where to look for the library's headers with the @option{-I} option.
For a small software with one or two source files, this can be done manually (see @ref{Summary and example on libraries}).
However, to enhance modularity, Gnuastro (and most other programs/libraries) contain many source files, so the compiler is invoked many times@footnote{Nearly every command you see being executed after running @command{make} is one call to the compiler.}.
This makes manual addition or modification of this option practically impossible.

@cindex GNU build system
@cindex @command{CPPFLAGS}
To solve this problem, in the GNU build system, there are conventional environment variables for the various kinds of compiler options (or flags).
These environment variables are used in every call to the compiler (they can be empty).
The environment variable used for the C Pre-Processor (or CPP) is @command{CPPFLAGS}.
By giving @command{CPPFLAGS} a value once, you can be sure that each call to the compiler will be affected.
See @ref{Known issues} for an example of how to set this variable at configure time.
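
For example, a hypothetical directory like the one below can be given to @command{CPPFLAGS} when configuring a software, so every subsequent call to the compiler will search it for headers:

@example
$ ./configure CPPFLAGS="-I/path/to/custom/include"
@end example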

@cindex GNU build system
As described in @ref{Installation directory}, you can select the top installation directory of a software using the GNU build system, when you @command{./configure} it.
All the separate components will be put in their separate sub-directory under that, for example the programs, compiled libraries and library headers will go into @file{$prefix/bin} (replace @file{$prefix} with a directory), @file{$prefix/lib}, and @file{$prefix/include} respectively.
For enhanced modularity, libraries that contain diverse collections of functions (like GSL, WCSLIB, and Gnuastro), put their header files in a sub-directory unique to themselves.
For example all Gnuastro's header files are installed in @file{$prefix/include/gnuastro}.
In your source code, you need to keep the library's sub-directory when including the headers from such libraries, for example @code{#include <gnuastro/fits.h>}@footnote{The top @file{$prefix/include} directory is usually already known to the compiler.}.
Not all libraries need to follow this convention, for example CFITSIO only has one header (@file{fitsio.h}) which is directly installed in @file{$prefix/include}.
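
For example, following the conventions above, a source file that uses both CFITSIO and Gnuastro's FITS functions would include their headers like this:

@example
#include <fitsio.h>          /* CFITSIO: directly in $prefix/include. */
#include <gnuastro/fits.h>   /* Gnuastro: in its own sub-directory.   */
@end example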




@node Linking, Summary and example on libraries, Headers, Review of library fundamentals
@subsection Linking

@cindex GNU Libtool
To enhance modularity, similar functions are defined in one source file (with a @file{.c} suffix, see @ref{Headers} for more).
After running @command{make}, each human-readable, @file{.c} file is translated (or compiled) into a computer-readable ``object'' file (ending with @file{.o}).
Note that object files are also created when building programs, they aren't particular to libraries.
Try opening Gnuastro's @file{lib/} and @file{bin/progname/} directories after running @command{make} to see these object files@footnote{Gnuastro uses GNU Libtool for portable library creation.
Libtool will also make a @file{.lo} file for each @file{.c} file when building libraries (@file{.lo} files are human-readable).}.
Afterwards, the object files are @emph{linked} together to create an executable program or a library.

@cindex GNU Binutils
The object files contain the full definition of the functions in the respective @file{.c} file along with a list of any other function (or generally ``symbol'') that is referenced there.
To get a list of those functions you can use the @command{nm} program which is part of GNU Binutils.
For example from the top Gnuastro directory, run:

@example
$ nm bin/arithmetic/arithmetic.o
@end example

@noindent
This will print a list of all the functions (more generally, `symbols') that were called within @file{bin/arithmetic/arithmetic.c} along with some further information (for example a @code{T} in the second column shows that this function is actually defined here, @code{U} says that it is undefined here).
Try opening the @file{.c} file to check some of these functions for yourself.
Run @command{info nm} for more information.

@cindex Linking
To recap, the @emph{compiler} created the separate object files mentioned above for each @file{.c} file.
The @emph{linker} will then combine all the symbols of the various object files (and libraries) into one program or library.
In the case of Arithmetic (a program) the contents of the object files in @file{bin/arithmetic/} are copied (and re-ordered) into one final executable file which we can run from the operating system.

@cindex Static linking
@cindex Linking: Static
@cindex Dynamic linking
@cindex Linking: Dynamic
There are two ways to @emph{link} all the necessary symbols: static and dynamic/shared.
When the symbols (computer-readable function definitions in most cases) are copied into the output, it is called @emph{static} linking.
When the symbols are kept in their original file and only a reference to them is kept in the executable, it is called @emph{dynamic}, or @emph{shared} linking.

Let's have a closer look at the executable to understand this better: we'll assume you have built Gnuastro without any customization and installed Gnuastro into the default @file{/usr/local/} directory (see @ref{Installation directory}).
If you tried the @command{nm} command on one of Arithmetic's object files above, then with the command below you can confirm that all the functions that were defined in the object file above (had a @code{T} in the second column) are also defined in the @file{astarithmetic} executable:

@example
$ nm /usr/local/bin/astarithmetic
@end example

@noindent
These symbols/functions have been statically linked (copied) into the final executable.
But you will notice that there are still many undefined symbols in the executable (those with a @code{U} in the second column).
One class of such functions are Gnuastro's own library functions that start with `@code{gal_}':

@example
$ nm /usr/local/bin/astarithmetic | grep gal_
@end example

@cindex Plugin
@cindex GNU Libtool
@cindex Shared library
@cindex Library: shared
@cindex Dynamic linking
@cindex Linking: dynamic
These undefined symbols (functions) are present in another file and will be linked to the Arithmetic program every time you run it.
Therefore they are known as dynamically @emph{linked} libraries@footnote{Do not confuse dynamically @emph{linked} libraries with dynamically @emph{loaded} libraries.
The former (that is discussed here) are only loaded once at the program startup.
However, the latter can be loaded anytime during the program's execution, they are also known as plugins.}.
As we saw above, static linking is done when the executable is being built.
However, when a program is dynamically linked to a library, at build-time, the library's symbols are only checked with the available libraries: they are not actually copied into the program's executable.
Every time you run the program, the (dynamic) linker will be activated and will try to link the program to the installed library before the program starts.

If you want all the libraries to be statically linked to the executables, you have to tell Libtool (which Gnuastro uses for the linking) to disable shared libraries at configure time@footnote{Libtool is very commonly used.
Therefore, if you want static linking, you can use this option when configuring most programs that use the GNU build system.}:

@example
$ ./configure --disable-shared
@end example

@noindent
Try configuring Gnuastro with the command above, then build and install it (as described in @ref{Quick start}).
Afterwards, check the @code{gal_} symbols in the installed Arithmetic executable like before.
You will see that they are actually copied this time (have a @code{T} in the second column).
If the second column doesn't convince you, look at the executable file size with the following command:

@example
$ ls -lh /usr/local/bin/astarithmetic
@end example

@noindent
It should be around 4.2 Megabytes with this static linking.
If you configure and build Gnuastro again with shared libraries enabled (which is the default), you will notice that it is roughly 100 Kilobytes!

This huge difference would have been very significant in the old days, but with the roughly Terabyte storage drives commonly in use today, it is negligible.
Fortunately, output file size is not the only benefit of dynamic linking: since it links to the libraries at run-time (rather than build-time), you don't have to re-build a higher-level program or library when an update comes for one of the lower-level libraries it depends on.
You just install the new low-level library and it will automatically be used/linked next time in the programs that use it.
To be fair, this also creates a few complications@footnote{Both of these can be avoided by joining the mailing lists of the lower-level libraries and checking the changes in newer versions before installing them.
Updates that result in such behaviors are generally heavily emphasized in the release notes.}:

@itemize
@item
Reproducibility: Even though your high-level tool has the same version as before, with the updated library, you might not get the same results.
@item
Broken links: if some functions have been changed or removed in the updated library, then the linker will abort with an error at run-time.
Therefore you need to re-build your higher-level program or library.
@end itemize

@cindex GNU C library
To see a list of all the shared libraries that are needed for a program or
a shared library to run, you can use GNU C library's
@command{ldd}@footnote{If your operating system is not using the GNU C
library, you might need another tool.} program, for example:

@example
$ ldd /usr/local/bin/astarithmetic
@end example

Library file names (in their installation directory) start with a @file{lib} and their ending (suffix) shows if they are static (@file{.a}) or dynamic (@file{.so}), as described below.
The name of the library is in the middle of these two, for example @file{libgsl.a} or @file{libgnuastro.a} (GSL and Gnuastro's static libraries), and @file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's shared library, the numbers may be different).

@itemize
@item
A static library is known as an archive file and has the @file{.a} suffix.
A static library is not an executable file.

@item
@cindex Shared library versioning
@cindex Versioning: Shared library
A shared library ends with the @file{.so.X.Y.Z} suffix and is executable.
The three numbers in the suffix describe the version of the shared library.
Shared library versions are defined to allow multiple versions of a shared library to exist simultaneously on a system, and to help the linker detect possible updates in the library and in the programs that depend on it.

It is very important to mention that this version number is different from the software version number (see @ref{Version numbering}), so do not confuse the two.
See the ``Library interface versions'' chapter of GNU Libtool for more.

For each shared library, we also have two symbolic links ending with @file{.so.X} and @file{.so}.
They are automatically set by the installer, but you can change them (point them to another version of the library) when you have multiple versions of a library on your system.

@end itemize

@cindex GNU Libtool
Libraries that are built with GNU Libtool (including Gnuastro and its dependencies) build both static and dynamic libraries by default and install them in the @file{$prefix/lib/} directory (for more on @file{$prefix}, see @ref{Installation directory}).
In this way, programs depending on the libraries can link with them however they prefer.
See the contents of @file{/usr/local/lib} with the command below to see both the static and shared libraries available there, along with their executable nature and the symbolic links:

@example
$ ls -l /usr/local/lib/
@end example

To link with a library, the linker needs to know where to find the library.
@emph{At compilation time}, these locations can be passed to the linker with two separate options (see @ref{Summary and example on libraries} for an example) as described below.
You can see these options and their usage in practice while building Gnuastro (after running @command{make}):

@table @option
@item -L DIR
Will tell the linker to look into @file{DIR} for the libraries.
For example @file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}.
You can make multiple calls to this option, so the linker looks into several directories at compilation time.
Note that the space between @key{L} and the directory is optional and commonly not used (written as @option{-LDIR}).

@item -lLIBRARY
Specify the unique library identifier/name (not containing directory or shared/dynamic nature) to be linked with the executable.
As discussed above, library file names have fixed parts which must not be given to this option.
So @option{-lgsl} will guide the linker to either look for @file{libgsl.a} or @file{libgsl.so} (depending on the type of linking it is supposed to do).
You can link many libraries by repeated calls to this option.

@strong{Very important: } The place of this option on the compiler's command matters.
This is often a source of confusion for beginners, so let's assume you have asked the linker to link with library A using this option.
As soon as the linker confronts this option, it looks into the list of the undefined symbols it has found until that point and does a search in library A for any of those symbols.
If any pending undefined symbol is found in library A, it is used.
After the search in undefined symbols is complete, the contents of library A are completely discarded from the linker's memory.
Therefore, if a later object file or library uses an unlinked symbol in library A, the linker will abort after it has finished its search in all the input libraries or object files.

As an example, Gnuastro's @code{gal_fits_img_read} function depends on the @code{fits_read_pix} function of CFITSIO (specified with @option{-lcfitsio}, which in turn depends on the cURL library, called with @option{-lcurl}).
So the proper way to link something that uses this function is @option{-lgnuastro -lcfitsio -lcurl}.
If you instead give @option{-lcfitsio -lgnuastro}, the linker will complain and abort (a minimal sketch of the correct order is shown after this list).
To avoid such linking complexities when using Gnuastro's library, we recommend using @ref{BuildProgram}.

@end table
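
@noindent
As a minimal sketch of this ordering (assuming a hypothetical @file{myprogram.c} that calls @code{gal_fits_img_read}; depending on your system, other libraries such as @option{-lwcs} or @option{-lgsl} may also be necessary):

@example
$ gcc myprogram.c -lgnuastro -lcfitsio -lcurl -lm -o myprogram
@end example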

If you have compiled and linked your program with a dynamic library, then the dynamic linker also needs to know the location of the libraries after building the program: @emph{every time} the program is run afterwards.
Therefore, it may happen that you don't get any errors when compiling/linking a program, but are unable to run your program because of a failure to find a library.
This happens because the dynamic linker hasn't found the dynamic library @emph{at run time}.

To find the dynamic libraries at run-time, the linker looks into the paths, or directories, in the @code{LD_LIBRARY_PATH} environment variable.
For a discussion on environment variables, especially search paths like @code{LD_LIBRARY_PATH}, and how you can add new directories to them, see @ref{Installation directory}.
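
For example, if your libraries are installed under a hypothetical @file{$HOME/.local/lib}, you can prepend it to the dynamic linker's search path before running your program:

@example
$ export LD_LIBRARY_PATH="$HOME/.local/lib:$LD_LIBRARY_PATH"
@end example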



@node Summary and example on libraries,  , Linking, Review of library fundamentals
@subsection Summary and example on libraries

After the mostly abstract discussions of @ref{Headers} and @ref{Linking}, we'll give a small tutorial here.
But before that, let's recall the general steps of how your source code is prepared, compiled and linked to the libraries it depends on so you can run it:

@enumerate
@item
The @strong{pre-processor} includes the header (@file{.h}) files into the function definition (@file{.c}) files and expands pre-processor macros.
Generally the pre-processor prepares the human-readable source for compilation (reviewed in @ref{Headers}).

@item
The @strong{compiler} will translate (compile) the human-readable contents of each source (merged @file{.c} and the @file{.h} files, or generally the output of the pre-processor) into the computer-readable code of @file{.o} files.

@item
The @strong{linker} will link the called function definitions from various compiled files to create one unified object.
When the unified product has a @code{main} function, this function is the product's only entry point, enabling the operating system or user to directly interact with it, so the product is a program.
When the product doesn't have a @code{main} function, the linker's product is a library and its exported functions can be linked to other executables (it has many entry points).
@end enumerate
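
To see these steps separately in practice, the hypothetical commands below run them one at a time on an imaginary @file{myprog.c} (@option{-E} and @option{-c} are standard GCC options for stopping after the pre-processing and compilation steps respectively):

@example
$ gcc -E myprog.c > myprog.i              ## 1: pre-process only.
$ gcc -c myprog.i -o myprog.o             ## 2: compile to object file.
$ gcc myprog.o -lgnuastro -lm -o myprog   ## 3: link into a program.
@end example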

@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
The GNU Compiler Collection (or GCC for short) will do all three steps.
So as a first example, from Gnuastro's source, go to @file{tests/lib/}.
This directory contains the library tests, you can use these as some simple tutorials.
For this demonstration, we will compile and run @file{arraymanip.c}.
This small program calls Gnuastro's library for some simple operations on an array (open it and have a look).
To compile this program, run this command inside the directory containing it:

@example
$ gcc arraymanip.c -lgnuastro -lm -o arraymanip
@end example

@noindent
The two @option{-lgnuastro} and @option{-lm} options (in this order) tell GCC to first link with the Gnuastro library and then with C's math library.
The @option{-o} option is used to specify the name of the output executable, without it the output file name will be @file{a.out} (on most OSs), independent of your input file name(s).

If your top Gnuastro installation directory (let's call it @file{$prefix}, see @ref{Installation directory}) is not recognized by GCC, you will get pre-processor errors for unknown header files.
Once you fix that, you will get linker errors for undefined functions.
To fix both, you should run GCC as follows, additionally telling it in which directories it can find Gnuastro's headers and compiled library (see @ref{Headers} and @ref{Linking}):

@example
$ gcc -I$prefix/include -L$prefix/lib arraymanip.c -lgnuastro -lm     \
      -o arraymanip
@end example

@noindent
This single command has done all the pre-processor, compilation and linker operations.
Therefore no intermediate files (object files in particular) were created, only a single output executable was created.
You are now ready to run the program with:

@example
$ ./arraymanip
@end example

The Gnuastro functions called by this program only needed to be linked with the C math library.
But if your program needs WCS coordinate transformations, needs to read a FITS file, needs special math operations (which include its linear algebra operations), or you want it to run on multiple CPU threads, you also need to add these libraries in the call to GCC: @option{-lgnuastro -lwcs -lcfitsio -lgsl -lgslcblas -pthread -lm}.
In @ref{Gnuastro library}, where each function is documented, it is mentioned which libraries (if any) must also be linked when you call a function.
If you feel all these linkings can be confusing, please consider Gnuastro's @ref{BuildProgram} program.


@node BuildProgram, Gnuastro library, Review of library fundamentals, Library
@section BuildProgram
The number and order of libraries that are necessary for linking a program with the Gnuastro library might be too confusing when you need to compile a small program for one particular job (with one source file).
BuildProgram will use the information gathered during configuring Gnuastro and link with all the appropriate libraries on your system.
This will allow you to easily compile, link and run programs that use Gnuastro's library with one simple command and not worry about which libraries to link to, or the linking order.

@cindex GNU Libtool
BuildProgram uses GNU Libtool to find the necessary libraries to link against (GNU Libtool is the same program that builds all of Gnuastro's libraries and programs when you run @code{make}).
So in the future, if Gnuastro's prerequisite libraries change or other libraries are added, you don't have to worry: you can just run BuildProgram and the internal linking will be done correctly.

@cartouche
@noindent
@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU Libtool, other implementations don't have some necessary features.
If GNU Libtool isn't available at Gnuastro's configure time, you will get a notice at the end of the configuration step and BuildProgram will not be built or installed.
Please see @ref{Optional dependencies} for more information.
@end cartouche

@menu
* Invoking astbuildprog::       Options and examples for using this program.
@end menu

@node Invoking astbuildprog,  , BuildProgram, BuildProgram
@subsection Invoking BuildProgram

BuildProgram will compile and link a C source program with Gnuastro's library and all its dependencies, greatly facilitating the compilation and running of small programs that use Gnuastro's library.
The executable name is @file{astbuildprog} with the following general template:

@example
$ astbuildprog [OPTION...] C_SOURCE_FILE
@end example


@noindent
One line examples:

@example
## Compile, link and run `myprogram.c':
$ astbuildprog myprogram.c

## Similar to previous, but with optimization and compiler warnings:
$ astbuildprog -Wall -O2 myprogram.c

## Compile and link `myprogram.c', then run it with `image.fits'
## as its argument:
$ astbuildprog myprogram.c image.fits

## Also look in other directories for headers and linking:
$ astbuildprog -Lother -Iother/dir myprogram.c

## Just build (compile and link) `myprogram.c', don't run it:
$ astbuildprog --onlybuild myprogram.c
@end example

If BuildProgram is to run, it needs a C programming language source file as input.
By default it will compile and link the given source into a final executable program and run it.
The built executable name can be set with the optional @option{--output} option.
When no output name is set, BuildProgram will use Gnuastro's @ref{Automatic output} system to remove the suffix of the input source file (usually @file{.c}) and use the resulting name as the built program name.

For the full list of options that BuildProgram shares with other Gnuastro programs, see @ref{Common options}.
You may also use Gnuastro's @ref{Configuration files} to specify other libraries/headers to use for special directories and not have to type them in every time.

The C compiler can be chosen with the @option{--cc} option, or environment variables, please see the description of @option{--cc} for more.
The two common @code{LDFLAGS} and @code{CPPFLAGS} environment variables are also checked and used in the build by default.
Note that they are placed after the values to the corresponding options @option{--includedir} and @option{--linkdir}.
Therefore BuildProgram's own options take precedence.
Using environment variables can be disabled with the @option{--noenv} option.
Just note that BuildProgram also keeps the important flags in these environment variables in its configuration file.
Therefore, in many cases, even though you may have needed them to build Gnuastro, you won't need them in BuildProgram.

The first argument is considered to be the C source file that must be compiled and linked.
Any other arguments (non-option tokens on the command-line) will be passed onto the program when BuildProgram wants to run it.
Recall that by default BuildProgram will run the program after building it.
This behavior can be disabled with the @code{--onlybuild} option.

@cindex GNU Make
When the @option{--quiet} option (see @ref{Operating mode options}) is not called, BuildProgram will print the compilation and running commands.
Once your program grows and you break it up into multiple files (which are much more easily managed with Make), you can use the linking flags of the non-quiet output in your @code{Makefile}.

@table @option
@item -c STR
@itemx --cc=STR
@cindex C compiler
@cindex Compiler, C
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
C compiler to use for the compilation; if not given, environment variables will be used as described in the next paragraph.
If the compiler is in your system's search path, you can simply give its name, for example @option{--cc=gcc}.
If it is not in your system's search path, you can give its full path, for example @option{--cc=/path/to/your/custom/cc}.

If this option has no value after parsing the command-line and all configuration files (see @ref{Configuration file precedence}), then BuildProgram will look into the following environment variables in the given order: first @code{CC}, then @code{GCC}.
If they are also not defined, BuildProgram will ultimately default to the @command{gcc} command which is present in many systems (sometimes as a link to other compilers).
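
For example, assuming @command{clang} is present in your search path, you can use it for a single build by defining @code{CC} only for that command:

@example
$ CC=clang astbuildprog myprogram.c
@end example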

@item -I STR
@itemx --includedir=STR
@cindex GNU CPP
@cindex C Pre-Processor
Directory to search for files that you @code{#include} in your C program.
Note that headers relating to Gnuastro and its dependencies don't need this option.
This is only necessary if you want to use other headers.
It may be called multiple times and order matters.
This directory will be searched before those of Gnuastro's build and also the system search directories.
See @ref{Headers} for a thorough introduction.

From the GNU C Pre-Processor manual: ``Add the directory @code{STR} to the list of directories to be searched for header files.
Directories named by @option{-I} are searched before the standard system include directories.
If the directory @code{STR} is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system headers are not defeated''.

@item -L STR
@itemx --linkdir=STR
@cindex GNU Libtool
Directory to search for compiled libraries to link the program with.
Note that all the directories that Gnuastro was built with will already be used by BuildProgram (GNU Libtool).
This option is only necessary if your libraries are in other directories.
Multiple calls to this option are possible and order matters.
This directory will be searched before those of Gnuastro's build and also the system search directories.
See @ref{Linking} for a thorough introduction.

@item -l STR
@itemx --linklib=STR
Library to link with your program.
Note that all the libraries that Gnuastro was built with will already be linked by BuildProgram (GNU Libtool).
This option is only necessary if you want to link with other libraries.
Multiple calls to this option are possible and order matters.
This library will be linked before Gnuastro's library or its dependencies.
See @ref{Linking} for a thorough introduction.

@item -O INT/STR
@itemx --optimize=INT/STR
@cindex Optimization
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
Compiler optimization level: 0 (for no optimization, good debugging), 1, 2, 3 (for the highest level of optimizations).
From the GNU Compiler Collection (GCC) manual: ``Without any optimization option, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results.
Statements are independent: if you stop the program with a break point between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you expect from the source code.
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.'' Please see your compiler's manual for the full list of acceptable values to this option.

@item -g
@itemx --debug
@cindex Debug
Emit extra information in the compiled binary for use by a debugger.
When calling this option, it is best to explicitly disable optimization with @option{-O0}.
To combine both options you can run @option{-gO0} (see @ref{Options} for how short options can be merged into one).

@item -W STR
@itemx --warning=STR
Print compiler warnings on command-line during compilation.
``Warnings are diagnostic messages that report constructions that are not inherently erroneous but that are risky or suggest there may have been an error.'' (from the GCC manual).
It is always recommended to compile your programs with warnings enabled.

All compiler warning options that start with @option{W} can also be used with this option in BuildProgram; see your compiler's manual for the full list.
Some of the most common values to this option are @option{pedantic} (warnings related to standard C) and @option{all} (a wide range of common issues the compiler can check).

@item -t STR
@itemx --tag=STR
The language configuration information.
Libtool can build objects and libraries in many languages.
In many cases, it can identify the language automatically, but when it doesn't you can use this option to explicitly notify Libtool of the language.
The acceptable values are: @code{CC} for C, @code{CXX} for C++, @code{GCJ} for Java, @code{F77} for Fortran 77, @code{FC} for Fortran, @code{GO} for Go and @code{RC} for Windows Resource.
Note that the Gnuastro library is not yet fully compatible with all these languages.

@item -b
@itemx --onlybuild
Only build the program, don't run it.
By default, the built program is immediately run afterwards.

@item -d
@itemx --deletecompiled
Delete the compiled binary file after running it.
This option is only relevant when the compiled program is run after being built.
In other words, it is only relevant when @option{--onlybuild} is not called.
It can be useful when you are busy testing a program or just want a fast result and the actual binary/compiled file is not of later use.

@item -a STR
@itemx --la=STR
Use the given @file{.la} file (Libtool control file) instead of the one that was produced from Gnuastro's configuration results.
The Libtool control file keeps all the necessary information for building and linking a program with a library built by Libtool.
The default @file{prefix/lib/libgnuastro.la} keeps all the information necessary to build a program using the Gnuastro library gathered during configure time (see @ref{Installation directory} for prefix).
This option is useful when you prefer to use another Libtool control file.

@item -e
@itemx --noenv
@cindex @code{CC}
@cindex @code{GCC}
@cindex @code{LDFLAGS}
@cindex @code{CPPFLAGS}
Don't use environment variables in the build, just use the values given to the options.
As described above, environment variables like @code{CC}, @code{GCC}, @code{LDFLAGS}, and @code{CPPFLAGS} will be read by default and used in the build if they have been defined.
@end table





@node Gnuastro library, Library demo programs, BuildProgram, Library
@section Gnuastro library

Gnuastro library's programming constructs (function declarations, macros, data structures, or global variables) are classified by context into multiple header files (see @ref{Headers})@footnote{Within Gnuastro's source, all installed @file{.h} files in @file{lib/gnuastro/} are accompanied by a @file{.c} file in @file{/lib/}.}.
In this section, the functions in each header will be discussed under a separate sub-section, which includes the name of the header.
Assuming a function declaration is in @file{headername.h}, you can include its declaration in your source code with:

@example
# include <gnuastro/headername.h>
@end example

@noindent
The names of all constructs in @file{headername.h} are prefixed with @code{gal_headername_} (or @code{GAL_HEADERNAME_} for macros).
The @code{gal_} prefix stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.

Gnuastro library functions are compiled into a single file which can be linked on the command-line with the @option{-lgnuastro} option.
See @ref{Linking} and @ref{Summary and example on libraries} for an introduction on linking and some fully working examples of the libraries.

Gnuastro's library is a high-level library which depends on lower level libraries for some operations (see @ref{Dependencies}).
Therefore if at least one of Gnuastro's functions in your program uses functions from the dependencies, you will also need to link those dependencies after linking with Gnuastro.
See @ref{BuildProgram} for a convenient way to deal with the dependencies.
BuildProgram will take care of the libraries to link with your program (which uses the Gnuastro library), and can even run the built program afterwards.
Therefore it allows you to conveniently focus on your exciting science/research when using Gnuastro's libraries.
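
For comparison, a manual build (without BuildProgram) may look like the sketch below; the exact set of dependency libraries depends on how Gnuastro was configured on your system, so take it only as an illustration:

@example
$ gcc myprogram.c -o myprogram -lgnuastro -lwcs -lcfitsio \
      -lgsl -lgslcblas -lm
@end example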



@cartouche
@noindent
@strong{Libraries are still under heavy development: } Gnuastro was initially created to be a collection of command-line programs.
However, as the programs and their shared functions grew, internal (not installed) libraries were added.
Since the 0.2 release, the libraries are installable.
Hence the libraries are currently under heavy development and will significantly evolve between releases, becoming more mature and stable in due time.
The interface will stabilize with the removal of this notice.
Check the @file{NEWS} file for interface changes.
If you use the Info version of this manual (see @ref{Info}), you don't have to worry: the documentation will correspond to your installed version.
@end cartouche

@menu
* Configuration information::   General information about library config.
* Multithreaded programming::   Tools for easy multi-threaded operations.
* Library data types::          Definitions and functions for types.
* Pointers::                    Wrappers for easily working with pointers.
* Library blank values::        Blank values and functions to deal with them.
* Library data container::      General data container in Gnuastro.
* Dimensions::                  Dealing with coordinates and dimensions.
* Linked lists::                Various types of linked lists.
* Array input output::          Reading and writing images or cubes.
* Table input output::          Reading and writing table columns.
* FITS files::                  Working with FITS data.
* File input output::           Reading and writing to various file formats.
* World Coordinate System::     Dealing with the world coordinate system.
* Arithmetic on datasets::      Arithmetic operations on a dataset.
* Tessellation library::        Functions for working on tiles.
* Bounding box::                Finding the bounding box.
* Polygons::                    Working with the vertices of a polygon.
* Qsort functions::             Helper functions for Qsort.
* K-d tree::                    Space partitioning in K dimensions.
* Permutations::                Re-order (or permute) the values in a dataset.
* Matching::                    Matching catalogs based on position.
* Statistical operations::      Functions for basic statistics.
* Binary datasets::             Datasets that can only have values of 0 or 1.
* Labeled datasets::            Working with Segmented/labeled datasets.
* Convolution functions::       Library functions to do convolution.
* Interpolation::               Interpolate (over blank values possibly).
* Git wrappers::                Wrappers for functions in libgit2.
* Unit conversion library (@file{units.h})::  Convert between units.
* Spectral lines library::      Functions for operating on Spectral lines.
* Cosmology library::           Cosmological calculations.
@end menu

@node Configuration information, Multithreaded programming, Gnuastro library, Gnuastro library
@subsection Configuration information (@file{config.h})

The @file{gnuastro/config.h} header contains information about the full Gnuastro installation on your system.
Gnuastro developers should note that this is the only header that is not available within Gnuastro; it is only available to a Gnuastro library user @emph{after} installation.
Within Gnuastro, @file{config.h} (which is included in every Gnuastro @file{.c} file, see @ref{Coding conventions}) has more than enough information about the overall Gnuastro installation.

@deffn Macro GAL_CONFIG_VERSION
This macro can be used as a string literal@footnote{@url{https://en.wikipedia.org/wiki/String_literal}} containing the version of Gnuastro that is being used.
See @ref{Version numbering} for the version formats. For example:

@example
printf("Gnuastro version: %s\n", GAL_CONFIG_VERSION);
@end example

@noindent
or

@example
char *gnuastro_version=GAL_CONFIG_VERSION;
@end example
@end deffn


@deffn Macro GAL_CONFIG_HAVE_LIBGIT2
Libgit2 is an optional dependency of Gnuastro (see @ref{Optional dependencies}).
When it is installed and detected at configure time, this macro will have a value of @code{1} (one).
Otherwise, it will have a value of @code{0} (zero).
Gnuastro also comes with some wrappers to make it easier to use libgit2 (see @ref{Git wrappers}).
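
For example, since this macro is always defined (with a value of 0 or 1), you can keep your program buildable on systems without libgit2 with a pre-processor conditional like the sketch below:

@example
#if GAL_CONFIG_HAVE_LIBGIT2
  /* It is safe to use Gnuastro's Git wrappers here. */
#else
  printf("Gnuastro was built without libgit2.\n");
#endif
@end example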
@end deffn

@deffn Macro GAL_CONFIG_HAVE_FITS_IS_REENTRANT
@cindex CFITSIO
This macro will have a value of 1 when the host system's CFITSIO has the @code{fits_is_reentrant} function (available from CFITSIO version 3.30).
This function is used to see if CFITSIO was configured to read a FITS file simultaneously on different threads.
@end deffn


@deffn Macro GAL_CONFIG_HAVE_WCSLIB_VERSION
WCSLIB is the reference library for world coordinate system transformation (see @ref{WCSLIB} and @ref{World Coordinate System}).
However, only more recent versions of WCSLIB provide their version number.
If the WCSLIB that is installed on the system provides its version (through the @code{wcslib_version} function, which may not exist in older versions), this macro will have a value of one; otherwise, it will have a value of zero.
@end deffn

@deffn Macro GAL_CONFIG_HAVE_WCSLIB_DIS_H
This macro has a value of 1 if the host's WCSLIB has the @file{wcslib/dis.h} header for distortion-related operations.
@end deffn

@deffn Macro GAL_CONFIG_HAVE_WCSLIB_MJDREF
This macro has a value of 1 if the host's WCSLIB reads and stores the @file{MJDREF} FITS header keyword as part of its core @code{wcsprm} structure.
@end deffn

@deffn Macro GAL_CONFIG_HAVE_WCSLIB_OBSFIX
This macro has a value of 1 if the host's WCSLIB supports the @code{OBSFIX} feature (used by @code{wcsfix} function to parse the input WCS for known errors).
@end deffn

@deffn Macro GAL_CONFIG_HAVE_PTHREAD_BARRIER
The POSIX threads standard defines barriers as an optional requirement.
Therefore, some operating systems choose to not include them.
As one of the checks in the @command{./configure} step, Gnuastro checks if your system has POSIX thread barriers.
If so, this macro will have a value of @code{1}, otherwise it will have a value of @code{0}.
See @ref{Implementation of pthread_barrier} for more.
@end deffn

@cindex 32-bit
@cindex 64-bit
@cindex bit-32
@cindex bit-64
@deffn Macro GAL_CONFIG_SIZEOF_LONG
@deffnx Macro GAL_CONFIG_SIZEOF_SIZE_T
The size of (number of bytes in) the system's @code{long} and @code{size_t} types.
Their values are commonly either 4 or 8 for 32-bit and 64-bit systems.
You can also get these values with expressions like @code{sizeof(size_t)}, without having to include this header.
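For example:

@example
printf("long is %d bytes on this system.\n",
       GAL_CONFIG_SIZEOF_LONG);
@end example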
@end deffn

@node Multithreaded programming, Library data types, Configuration information, Gnuastro library
@subsection Multithreaded programming (@file{threads.h})

@cindex Multithreaded programming
In recent years, CPU frequencies have stopped increasing significantly.
Instead, CPUs are being manufactured with more cores, enabling more than
one operation (thread) at each instant. This can be very useful to speed
up many aspects of processing, in particular image processing.

Most of the programs in Gnuastro utilize multi-threaded programming for
the CPU-intensive processing steps. This can potentially lead to a significant
decrease in the running time of a program, see @ref{A note on threads}. In
terms of reading the code, you don't need to know anything about
multi-threaded programming. You can simply follow the case where only one
thread is to be used. In these cases, threads are not used and can be
completely ignored.

@cindex POSIX threads library
@cindex Lawrence Livermore National Laboratory
When the C language was defined (when K&R's book was written), using threads
was not common, so C's threading capabilities aren't introduced
there. Gnuastro uses POSIX threads for multi-threaded programming, defined
in the @file{pthread.h} system-wide header. There are various resources for
learning to use POSIX threads. An excellent
@url{https://computing.llnl.gov/tutorials/pthreads/, tutorial} is provided
by the Lawrence Livermore National Laboratory, with abundant figures to
better understand the concepts; it is a very good place to start.  The book
`Advanced programming in the Unix environment'@footnote{Don't let the title
scare you! The two chapters on Multi-threaded programming are very
self-sufficient and don't need any more knowledge than K&R.}, by Richard
Stevens and Stephen Rago, Addison-Wesley, 2013 (Third edition) also has two
chapters explaining the POSIX thread constructs which can be very helpful.

@cindex OpenMP
An alternative to POSIX threads is OpenMP, but POSIX threads are lower
level, allowing much more control, while being easier to understand (see
@ref{Why C}). All the situations where threads are used in Gnuastro
currently are completely independent with no need of coordination between
the threads.  Such problems are known as ``embarrassingly parallel''
problems. They are some of the simplest problems to solve with threads and
are also the ones that benefit most from them, see the LLNL
introduction@footnote{@url{https://computing.llnl.gov/tutorials/parallel_comp/}}.

One very useful POSIX thread concept is
@code{pthread_barrier}. Unfortunately, it is only an optional feature in
the POSIX standard, so some operating systems don't include it. Therefore
in @ref{Implementation of pthread_barrier}, we introduce our own
implementation. That is a rather technical section which is only necessary
for more technical readers; you can safely ignore it. Following that, we describe
the helper functions in this header that can greatly simplify writing a
multi-threaded program, see @ref{Gnuastro's thread related functions} for
more.

@menu
* Implementation of pthread_barrier::  Some systems don't have pthread_barrier
* Gnuastro's thread related functions::  Functions for managing threads.
@end menu

@node Implementation of pthread_barrier, Gnuastro's thread related functions, Multithreaded programming, Multithreaded programming
@subsubsection Implementation of @code{pthread_barrier}
@cindex POSIX threads
@cindex pthread_barrier
One optional feature of the POSIX Threads standard is the
@code{pthread_barrier} concept. It is a very useful high-level construct
that allows independent threads to ``wait'' behind a ``barrier'' until
the rest have finished. Barriers can thus greatly simplify the code in
a multi-threaded program, so they are heavily used in Gnuastro. However,
since it's an optional feature in the POSIX standard, some operating systems
don't include it. So to make Gnuastro portable, we have written our own
implementation of those @code{pthread_barrier} functions.

At @command{./configure} time, Gnuastro will check if
@code{pthread_barrier} constructs are available on your system or not. If
@code{pthread_barrier} is not available, our internal implementation will
be compiled into the Gnuastro library and the definitions and declarations
below will be usable in your code with @code{#include
<gnuastro/threads.h>}.

@deffn Type pthread_barrierattr_t
Type to specify the attributes of a POSIX threads barrier.
@end deffn

@deffn Type pthread_barrier_t
Structure defining the POSIX threads barrier.
@end deffn

@deftypefun int pthread_barrier_init (pthread_barrier_t @code{*b}, pthread_barrierattr_t @code{*attr}, unsigned int @code{limit})
Initialize the barrier @code{b} with the attributes @code{attr}, where
@code{limit} is the total number of threads that must wait behind it. This function
must be called before spinning off threads.
@end deftypefun

@deftypefun int pthread_barrier_wait (pthread_barrier_t @code{*b})
This function is called within each thread, just before it is ready to
return. Once a thread's function hits this, it will ``wait'' until all the
other threads are also finished.
@end deftypefun

@deftypefun int pthread_barrier_destroy (pthread_barrier_t @code{*b})
Destroy all the information in the barrier structure. This should be called
by the function that spun-off the threads after all the threads have
finished.

@cartouche
@noindent
@strong{Destroy a barrier before re-using it:} It is very important to
destroy the barrier before (possibly) reusing it. This destroy function not
only destroys the internal structures, it also waits (in 1 microsecond
intervals, so you will not notice!) until none of the threads need the
barrier structure any more. If you immediately start spinning off new
threads with a barrier that hasn't been destroyed, then the internal
structure of the remaining threads will get mixed with the new ones and you will get very
strange and apparently random errors that are extremely hard to debug.
@end cartouche
@end deftypefun

@node Gnuastro's thread related functions,  , Implementation of pthread_barrier, Multithreaded programming
@subsubsection Gnuastro's thread related functions

@cindex POSIX Threads
The POSIX Threads functions offered in the C library are very low-level and offer a great range of control over the properties of the threads.
So if you are interested in customizing your tools for complicated thread applications, you are strongly encouraged to become familiar with them.
Some resources were introduced in @ref{Multithreaded programming}.

However, in many cases in astronomical data analysis, you don't need communication between threads and each target operation can be done independently.
Since such operations are very common, Gnuastro provides the tools below to facilitate the creation and management of jobs, without requiring any particular knowledge of POSIX Threads.
The most interesting high-level functions of this section are @code{gal_threads_number} and @code{gal_threads_spin_off}, which respectively identify the number of available threads and spin off threads.
You can see a demonstration of using these functions in @ref{Library demo - multi-threaded operation}.

@deftp {C @code{struct}} gal_threads_params
Structure keeping the parameters of each thread.
When each thread is created, a pointer to this structure is passed to it.
The @code{params} element can be the pointer to a structure defined by the user which contains all the necessary parameters to pass onto the worker function.
The rest of the elements within this structure are set internally by @code{gal_threads_spin_off} and are relevant to the worker function.
@example
struct gal_threads_params
@{
  size_t            id; /* Id of this thread.                  */
  void         *params; /* User-identified pointer.            */
  size_t       *indexs; /* Target indexs given to this thread. */
  pthread_barrier_t *b; /* Barrier for all threads.            */
@};
@end example
@end deftp
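
As a sketch of how this structure is commonly used, a worker function (to be given to @code{gal_threads_spin_off}, described below) may look like this; @code{my_params_t} and the doubling operation are only hypothetical place holders for your own parameters and processing:

@example
#include <pthread.h>
#include <gnuastro/blank.h>         /* For GAL_BLANK_SIZE_T.   */
#include <gnuastro/threads.h>

typedef struct @{ double *arr; @} my_params_t;  /* Hypothetical. */

static void *
worker_on_thread(void *in_prm)
@{
  struct gal_threads_params *tprm=(struct gal_threads_params *)in_prm;
  my_params_t *p=(my_params_t *)tprm->params;
  size_t i;

  /* Do all the jobs ('indexs') given to this thread. */
  for(i=0; tprm->indexs[i]!=GAL_BLANK_SIZE_T; ++i)
    p->arr[ tprm->indexs[i] ] *= 2.0;      /* Example operation. */

  /* Wait for all the other threads to finish, then return. */
  if(tprm->b) pthread_barrier_wait(tprm->b);
  return NULL;
@}
@end example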

@deftypefun size_t gal_threads_number ()
Return the number of threads that the operating system has available for your program.
This number is usually fixed for a single machine and doesn't change.
So this function is useful when you want to run your program on different machines (with different CPUs).
@end deftypefun

@deftypefun void gal_threads_spin_off (void @code{*(*worker)(void *)}, void @code{*caller_params}, size_t @code{numactions}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap})
Distribute @code{numactions} jobs between @code{numthreads} threads and spin-off each thread by calling the @code{worker} function.
The @code{caller_params} pointer will also be passed to @code{worker} as part of the @code{gal_threads_params} structure.
For a fully working example of this function, please see @ref{Library demo - multi-threaded operation}.

If there are many jobs (millions or billions) to organize, memory issues may become important.
With @code{minmapsize} you can specify the minimum byte-size to allocate the necessary space in a memory-mapped file or alternatively in RAM.
If @code{quietmmap} is zero, then a warning will be printed when a memory-mapped file is created.
For more on Gnuastro's memory management, see @ref{Memory management}.
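
Continuing the hypothetical worker function sketched under @code{gal_threads_params} above, a call to this function may look like the sketch below (using @code{gal_pointer_allocate} of @ref{Pointers} to allocate the array):

@example
my_params_t p;
size_t numactions=1000;

p.arr=gal_pointer_allocate(GAL_TYPE_FLOAT64, numactions, 1,
                           __func__, "p.arr");
gal_threads_spin_off(worker_on_thread, &p, numactions,
                     gal_threads_number(), -1, 1);
free(p.arr);
@end example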
@end deftypefun

@deftypefun void gal_threads_attr_barrier_init (pthread_attr_t @code{*attr}, pthread_barrier_t @code{*b}, size_t @code{limit})
@cindex Detached threads
This is a low-level function in case you don't want to use @code{gal_threads_spin_off}.
It will initialize the general thread attribute @code{attr} and the barrier @code{b} with @code{limit} threads to wait behind the barrier.
For maximum efficiency, the threads initialized with this function will be detached.
Therefore no communication is possible between these threads and in particular @code{pthread_join} won't work on these threads.
You have to use the barrier constructs to wait for all threads to finish.
@end deftypefun

@deftypefun {char *} gal_threads_dist_in_threads (size_t @code{numactions}, size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{**indexs}, size_t @code{*icols})
This is a low-level function in case you don't want to use @code{gal_threads_spin_off}.
The job of this function is to distribute @code{numactions} jobs/actions in @code{numthreads} threads.
To do this, it will assign each job an ID, ranging from 0 to @code{numactions}-1.
The output is the allocated @code{*indexs} array and the @code{*icols} number.
In memory, it's just a simple 1D array that has @code{numthreads} @mymath{\times} @code{*icols} elements.
But you can visualize it as a 2D array with @code{numthreads} rows and @code{*icols} columns.
For more on the logic of the distribution, see below.

@cindex RAM
@cindex Memory management
When you have millions/billions of jobs to distribute, @code{indexs} will become very large.
For memory management (when to use a memory-mapped file, and when to use RAM), you need to specify the @code{minmapsize} and @code{quietmmap} arguments.
For more on memory management, see @ref{Memory management}.
In general, if your distributed jobs will not be on the scale of billions (and you want everything to always be written in RAM), just set @code{minmapsize=-1} and @code{quietmmap=1}.

When @code{indexs} is actually in a memory-mapped file, this function will return a string containing the name of the file (that you can later give to @code{gal_pointer_mmap_free} to free/delete).
When @code{indexs} is in RAM, this function will return a @code{NULL} pointer.
So after you are finished with @code{indexs}, you can free it like this:

@example
char *mmapname;
int quietmmap=1;
size_t *indexs, thrdcols;
size_t numactions=5000, minmapsize=-1;
size_t numthreads=gal_threads_number();

/* Distribute the jobs. */
mmapname=gal_threads_dist_in_threads(numactions, numthreads,
                                     minmapsize, quietmmap,
                                     &indexs, &thrdcols);

/* Do any processing you want... */

/* Free the 'indexs' array. */
if(mmapname) gal_pointer_mmap_free(&mmapname, quietmmap);
else         free(indexs);
@end example

Here is a brief description of the reasoning behind the @code{indexs} array and how the jobs are distributed.
Let's assume you have @mymath{A} actions (where there is only one function and the input values differ for each action) and @mymath{T} threads available to the system with @mymath{A>T} (common values for these two would be @mymath{A>1000} and @mymath{T<10}).
Spinning off a thread is not a cheap job and requires a significant number of CPU cycles.
Therefore, creating @mymath{A} threads is not the best way to address such a problem.
The most efficient way to manage the actions is such that only @mymath{T} threads are created, and each thread works on a list of actions identified for it in series (one after the other).
This way your CPU will get all the actions done with minimal overhead.

The purpose of this function is to do what we explained above: each row in the @code{indexs} array contains the indexs of actions which must be done by one thread (so it has @code{numthreads} rows with @code{*icols} columns).
However, when using @code{indexs}, you don't have to know the number of columns.
It is guaranteed that all the rows finish with @code{GAL_BLANK_SIZE_T} (see @ref{Library blank values}).
The @code{GAL_BLANK_SIZE_T} macro plays a role very similar to a string's @code{\0}: every row finishes with this macro, so you can easily stop parsing the indexes in the row as soon as you reach @code{GAL_BLANK_SIZE_T}.
For a real example, please see the demonstration program in @file{tests/lib/multithread.c}.

@end deftypefun

@node Library data types, Pointers, Multithreaded programming, Gnuastro library
@subsection Library data types (@file{type.h})

Data in astronomy can have many types: numeric (numbers) and strings
(names, identifiers). The former can also be divided into integers and
floats, see @ref{Numeric data types} for a thorough discussion of the
different numeric data types and which one is useful for different
contexts.

To deal with the very large diversity of types that are available (and used
in different contexts), in Gnuastro each type is identified with a global
integer variable with a fixed name. This variable is then passed onto
functions that can work on any type, or is stored in Gnuastro's @ref{Generic
data container} as one piece of meta-data.

The actual values within these integer constants are irrelevant and you
should never rely on them. When you need to check, explicitly use the named
variable in the table below. If you want to check with more than one type,
you can use C's @code{switch} statement.
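
For example, the sketch below checks a hypothetical @code{type} variable (containing one of these constants) against a few types:

@example
switch(type)
  @{
  case GAL_TYPE_INT32:
    printf("32-bit signed integer\n");
    break;
  case GAL_TYPE_FLOAT32:
    printf("32-bit floating point\n");
    break;
  default:
    printf("another type (code %d)\n", (int)type);
  @}
@end example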

Since Gnuastro heavily deals with file input-output, the types it defines
are fixed-width types; these types are portable to all systems and are
defined in the standard C header @file{stdint.h}. You don't need to include
this header; it is included by any Gnuastro header that deals with the
different types. However, the most commonly used types in a C (or C++)
program (for example @code{int} or @code{long}) are not defined by their
exact width (storage size), but by their minimum storage. So for example on
some systems, @code{int} may be 2 bytes (16-bits, the minimum required by
the standard) and on others it may be 4 bytes (32-bits, common in modern
systems).

With every type, a unique ``blank'' value (or place holder showing the
absence of data) can be defined. Please see @ref{Library blank values} for
constants that Gnuastro recognizes as a blank value for each type. See
@ref{Numeric data types} for more explanation on the limits and particular
aspects of each type.

@deffn {Global integer} GAL_TYPE_INVALID
This is just a place holder to specifically mark that no type has been set.
@end deffn

@deffn {Global integer} GAL_TYPE_BIT
Identifier for a bit-stream. Currently no program in Gnuastro works
directly on bits, but features will be added in the future.
@end deffn

@deffn {Global integer} GAL_TYPE_UINT8
Identifier for an unsigned, 8-bit integer type: @code{uint8_t} (from
@file{stdint.h}), or an @code{unsigned char} in most modern systems.
@end deffn

@deffn {Global integer} GAL_TYPE_INT8
Identifier for a signed, 8-bit integer type: @code{int8_t} (from
@file{stdint.h}), or a @code{signed char} in most modern systems.
@end deffn

@deffn {Global integer} GAL_TYPE_UINT16
Identifier for an unsigned, 16-bit integer type: @code{uint16_t} (from
@file{stdint.h}), or an @code{unsigned short} in most modern systems.
@end deffn

@deffn {Global integer} GAL_TYPE_INT16
Identifier for a signed, 16-bit integer type: @code{int16_t} (from
@file{stdint.h}), or a @code{short} in most modern systems.
@end deffn

@deffn {Global integer}  GAL_TYPE_UINT32
Identifier for an unsigned, 32-bit integer type: @code{uint32_t} (from
@file{stdint.h}), or an @code{unsigned int} in most modern systems.
@end deffn

@deffn {Global integer}  GAL_TYPE_INT32
Identifier for a signed, 32-bit integer type: @code{int32_t} (from
@file{stdint.h}), or an @code{int} in most modern systems.
@end deffn

@deffn {Global integer}  GAL_TYPE_UINT64
Identifier for an unsigned, 64-bit integer type: @code{uint64_t} (from
@file{stdint.h}), or an @code{unsigned long} in most modern 64-bit systems.
@end deffn

@deffn {Global integer}  GAL_TYPE_INT64
Identifier for a signed, 64-bit integer type: @code{int64_t} (from
@file{stdint.h}), or a @code{long} in most modern 64-bit systems.
@end deffn

@deffn {Global integer}  GAL_TYPE_INT
Identifier for an @code{int} type. This is just an alias to the @code{int16}
or @code{int32} types, depending on the system.
@end deffn

@deffn {Global integer}  GAL_TYPE_UINT
Identifier for an @code{unsigned int} type. This is just an alias to
the @code{uint16} or @code{uint32} types, depending on the system.
@end deffn

@deffn {Global integer}  GAL_TYPE_ULONG
Identifier for an @code{unsigned long} type. This is just an alias to
@code{uint32}, or @code{uint64} types for 32-bit, or 64-bit systems
respectively.
@end deffn

@deffn {Global integer}  GAL_TYPE_LONG
Identifier for a @code{long} type. This is just an alias to @code{int32},
or @code{int64} types for 32-bit, or 64-bit systems respectively.
@end deffn

@deffn {Global integer}  GAL_TYPE_SIZE_T
Identifier for a @code{size_t} type. This is just an alias to
@code{uint32}, or @code{uint64} types for 32-bit, or 64-bit systems
respectively.
@end deffn

@deffn {Global integer}  GAL_TYPE_FLOAT32
Identifier for a 32-bit single precision floating point type or
@code{float} in C.
@end deffn

@deffn {Global integer}  GAL_TYPE_FLOAT64
Identifier for a 64-bit double precision floating point type or
@code{double} in C.
@end deffn

@deffn {Global integer}  GAL_TYPE_COMPLEX32
Identifier for a complex number composed of two @code{float} types. Note
that the complex type is not yet fully implemented in all Gnuastro's
programs.
@end deffn

@deffn {Global integer}  GAL_TYPE_COMPLEX64
Identifier for a complex number composed of two @code{double} types. Note
that the complex type is not yet fully implemented in all Gnuastro's
programs.
@end deffn

@deffn {Global integer}  GAL_TYPE_STRING
Identifier for a string of characters (@code{char *}).
@end deffn

@deffn {Global integer}  GAL_TYPE_STRLL
Identifier for a linked list of string of characters
(@code{gal_list_str_t}, see @ref{List of strings}).
@end deffn

@noindent
The functions below are defined to make working with the integer constants
above easier. In the functions below, the constants above can be used for
the @code{type} input argument.

@deftypefun size_t gal_type_sizeof (uint8_t @code{type})
Return the number of bytes occupied by @code{type}. Internally, this
function uses C's @code{sizeof} operator to measure the size of each type.
@end deftypefun

@deftypefun {char *} gal_type_name (uint8_t @code{type}, int @code{long_name})
Return a string literal that contains the name of @code{type}. It can
return both short and long formats of the type names (for example
@code{f32} and @code{float32}). If @code{long_name} is non-zero, the long
format will be returned, otherwise the short name will be returned. The
output string is statically allocated, so it should not be freed. This
function is the inverse of the @code{gal_type_from_name} function. For the
full list of names/strings that this function will return, see @ref{Numeric
data types}.
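
For example, the sketch below should print @code{float32}:

@example
printf("%s\n", gal_type_name(GAL_TYPE_FLOAT32, 1));
@end example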
@end deftypefun

@deftypefun uint8_t gal_type_from_name (char @code{*str})
Return the Gnuastro integer constant that corresponds to the string
@code{str}. This function is the inverse of the @code{gal_type_name}
function and accepts both the short and long formats of each type. For the
full list of names/strings that this function will return, see @ref{Numeric
data types}.
@end deftypefun

@deftypefun void gal_type_min (uint8_t @code{type}, void @code{*in})
Put the minimum possible value of @code{type} in the space pointed to by
@code{in}. Since the value can have any type, this function doesn't return
anything; it assumes the space for the given type is available to @code{in}
and writes the value there. Here is one example:
@example
int32_t min;
gal_type_min(GAL_TYPE_INT32, &min);
@end example

@noindent
Note: Do not use the minimum value for a blank value of a general
(initially unknown) type, please use the constants/functions provided in
@ref{Library blank values} for the definition and usage of blank values.
@end deftypefun

@deftypefun void gal_type_max (uint8_t @code{type}, void @code{*in})
Put the maximum possible value of @code{type} in the space pointed to by
@code{in}. Since the value can have any type, this function doesn't return
anything; it assumes the space for the given type is available to @code{in}
and writes the value there. Here is one example:
@example
uint16_t max;
gal_type_max(GAL_TYPE_UINT16, &max);
@end example

@noindent
Note: Do not use the maximum value for a blank value of a general
(initially unknown) type, please use the constants/functions provided in
@ref{Library blank values} for the definition and usage of blank values.
@end deftypefun

@deftypefun int gal_type_is_int (uint8_t @code{type})
Return 1 if the type is an integer (any width and any sign).
@end deftypefun

@deftypefun int gal_type_is_list (uint8_t @code{type})
Return 1 if the type is a linked list and zero otherwise.
@end deftypefun

@deftypefun int gal_type_out (int @code{first_type}, int @code{second_type})
Return the larger of the two given types which can be used for the type of
the output of an operation involving the two input types.
@end deftypefun

@deftypefun {char *} gal_type_bit_string (void @code{*in}, size_t @code{size})
Return the bit-string in the @code{size} bytes that @code{in} points
to. The string is dynamically allocated and must be freed afterwards. You
can use it to inspect the bits within one region of memory. Here is one
short example:

@example
int32_t a=2017;
char *bitstr=gal_type_bit_string(&a, 4);
printf("%d: %s (%X)\n", a, bitstr, a);
free(bitstr);
@end example

@noindent
which will produce:
@example
2017: 11100001000001110000000000000000  (7E1)
@end example

As the example above shows, the bit-string is not the most efficient way to
inspect bits. If you are familiar with hexadecimal notation, it is much
more compact, see @url{https://en.wikipedia.org/wiki/Hexadecimal}. You can
use @code{printf}'s @code{%x} or @code{%X} to print integers in hexadecimal
format.
@end deftypefun

@deftypefun {char *} gal_type_to_string (void @code{*ptr}, uint8_t @code{type}, int @code{quote_if_str_has_space});
Read the contents of the memory that @code{ptr} points to (assuming it has
type @code{type}) and print it into an allocated string which is returned.

If the memory is a string of characters and @code{quote_if_str_has_space}
is non-zero, the output string will have double-quotes around it if it
contains space characters. Also, note that in this case, @code{ptr} must be
a pointer to an array of characters (or @code{char **}), as in the example
below (which will put @code{"sample string"} into @code{out}):

@example
char *out, *string="sample string";
out = gal_type_to_string(&string, GAL_TYPE_STRING, 1);
@end example
@end deftypefun

@deftypefun int gal_type_from_string (void @code{**out}, char @code{*string}, uint8_t @code{type})
Read a string as a given data type and put a pointer to it in
@code{*out}. When @code{*out!=NULL}, then it is assumed to be already
allocated and the value will be simply put there. If @code{*out==NULL},
then space will be allocated for the given type and the string will be read
into that type.

Note that when we are dealing with a string type, @code{*out} should be
interpreted as @code{char **} (one element in an array of pointers to
different strings). In other words, @code{out} should be @code{char ***}.

This function can be used to fill in arrays of numbers from strings (in an
already allocated data structure), or add nodes to a linked list (if the
type is a list type). For an array, you have to pass the pointer to the
@code{i}th element where you want the value to be stored, for example
@code{&(array[i])}.

If the string was successfully parsed to the requested type, this function
will return a @code{0} (zero), otherwise it will return @code{1}
(one). This output format will help you check the status of the conversion
in a code like the example below:

@example
if( gal_type_from_string(&out, string, GAL_TYPE_FLOAT32) )
  @{
    fprintf(stderr, "%s couldn't be read as float32.\n", string);
    exit(EXIT_FAILURE);
  @}
@end example
@end deftypefun

@deftypefun {void *} gal_type_string_to_number (char @code{*string}, uint8_t @code{*type})
Read @code{string} into the smallest type that can host the number; the allocated space for the number will be returned and the type of the number will be put into the memory that @code{type} points to.
If @code{string} couldn't be read as a number, this function will return @code{NULL}.

This function calls the C library's @code{strtod} function to read @code{string} as a double-precision floating point number.
When successful, it will check the value to put it in the smallest numerical data type that can handle it.
However, if @code{string} is successfully parsed as a number @emph{and} there is @code{.} in @code{string}, it will force the number into floating point types.
For example @code{"5"} is read as an integer, while @code{"5."} or @code{"5.0"}, or @code{"5.00"} will be read as a floating point (single-precision).

For the ranges acceptable by each type see @ref{Numeric data types}.
For integers, the range is clear, but for floating point types, this function will count the number of significant digits and determine if the given string is single or double precision as described in that section.
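
For example, in the sketch below, the first call should set @code{type} to @code{GAL_TYPE_UINT8} (the smallest integer type that can host 5) and the second to @code{GAL_TYPE_FLOAT32} (because of the @code{.}):

@example
uint8_t type;
void *num;

num=gal_type_string_to_number("5", &type);
free(num);

num=gal_type_string_to_number("5.0", &type);
free(num);
@end example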
@end deftypefun

@node Pointers, Library blank values, Library data types, Gnuastro library
@subsection Pointers (@file{pointer.h})

@cindex Pointers
Pointers play an important role in the C programming language. As the name
suggests, they @emph{point} to a byte in memory (like an address in a
city). The C programming language gives you complete freedom in how to use
the byte (and the bytes that follow it). Pointers are thus a very powerful
feature of C. However, as the saying goes: ``With great power comes great
responsibility'', so they must be approached with care. The functions in
this header are not very complex, they are just wrappers over some basic
pointer functionality regarding pointer arithmetic and allocation (in
memory or HDD/SSD).

@deftypefun {void *} gal_pointer_increment (void @code{*pointer}, size_t @code{increment}, uint8_t @code{type})
Return a pointer to an element that is @code{increment} elements ahead of
@code{pointer}, assuming each element has type of @code{type}. For the type
codes, see @ref{Library data types}.

When working with the @code{array} elements of @code{gal_data_t}, we are
actually dealing with @code{void *} pointers. However, pointer arithmetic
doesn't apply to @code{void *}, because the system doesn't know how many
bytes there are in each element to increment the pointer accordingly. This
function will use the given @code{type} to calculate where the incremented
element is located in memory.
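
For example, assuming @code{data} is a @code{gal_data_t *}, the sketch below gets a pointer to its 11th element (index 10), independent of the dataset's type:

@example
void *eleventh=gal_pointer_increment(data->array, 10, data->type);
@end example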
@end deftypefun

@deftypefun size_t gal_pointer_num_between (void @code{*earlier}, void @code{*later}, uint8_t @code{type})
Return the number of elements (in the given @code{type}) between
@code{earlier} and @code{later}. For the type codes, see @ref{Library data
types}.
@end deftypefun

@deftypefun {void *} gal_pointer_allocate (uint8_t @code{type}, size_t @code{size}, int @code{clear}, const char @code{*funcname}, const char @code{*varname})
Allocate an array of type @code{type} with @code{size} elements in RAM (for the type codes, see @ref{Library data types}).
If @code{clear!=0}, then the allocated space is set to zero (cleared).
This is effectively just a wrapper around C's @code{malloc} or @code{calloc} functions, but it takes Gnuastro's integer type codes and will also abort with a clear error if the allocation was not successful.

@cindex C99
When space cannot be allocated, this function will abort the program with a message containing the reason for the failure.
@code{funcname} (name of the function calling this function) and @code{varname} (name of variable that needs this space) will be used in this error message if they are not @code{NULL}.
In most modern compilers, you can use the generic @code{__func__} variable for @code{funcname}.
In this way, you don't have to manually copy and paste the function name or worry about it changing later (@code{__func__} was standardized in C99).
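
For example, the sketch below allocates (and clears) space for 100 double-precision elements:

@example
double *arr=gal_pointer_allocate(GAL_TYPE_FLOAT64, 100, 1,
                                 __func__, "arr");
/* ... use 'arr' ... */
free(arr);
@end example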
@end deftypefun

@deftypefun {void *} gal_pointer_allocate_ram_or_mmap (uint8_t @code{type}, size_t @code{size}, int @code{clear}, size_t @code{minmapsize}, char @code{**mmapname}, int @code{quietmmap}, const char @code{*funcname}, const char @code{*varname})
Allocate the given space either in RAM or in a memory-mapped file.
This function is just a high-level wrapper to @code{gal_pointer_allocate} (to allocate in RAM) or @code{gal_pointer_mmap_allocate} (to use a memory-mapped file).
For more on memory management in Gnuastro, please see @ref{Memory management}.
The various arguments are more fully explained in the two functions above.
@end deftypefun

@deftypefun {void *} gal_pointer_mmap_allocate (size_t @code{size}, uint8_t @code{type}, int @code{clear}, char @code{**mmapname})
Allocate the necessary space to keep @code{size} elements of type @code{type} in HDD/SSD (a file, not in RAM).
For the type codes, see @ref{Library data types}.
If @code{clear!=0}, then the allocated space will also be cleared.
The allocation is done using C's @code{mmap} function.
The name of the file containing the allocated space is an allocated string that will be put in @code{*mmapname}.

Note that the kernel doesn't allow an infinite number of memory mappings to files.
So it is not recommended to use this function with every allocation.
This function is best used for arrays that are very large and can fill up the RAM.
Keep the smaller arrays in RAM, which is faster and can have a (theoretically) unlimited number of allocations.

When you are done with the dataset and don't need it anymore, don't use @code{free} (the dataset isn't in RAM).
Just delete the file (and the allocated space for the filename) with the commands below, or simply use @code{gal_pointer_mmap_free}.

@example
remove(mmapname);
free(mmapname);
@end example
@end deftypefun

@deftypefun void gal_pointer_mmap_free (char @code{**mmapname}, int @code{quietmmap})
``Free'' (actually delete) the memory-mapped file that is named @code{*mmapname}, then free the string.
If @code{quietmmap} is zero, then a warning will be printed to let the user know that the given file has been deleted.
@end deftypefun


@node Library blank values, Library data container, Pointers, Gnuastro library
@subsection Library blank values (@file{blank.h})
When the position of an element in a dataset is important (for example a
pixel in an image), a place-holder is necessary for the element if we don't
have a value to fill it with (for example the CCD cannot read those
pixels). We cannot simply shift all the other pixels to fill in the one we
have no value for. In other cases, it often occurs that the field of sky
that you are studying is not a clean rectangle to nicely fit into the
boundaries of an image. You need a way to separate the pixels outside your
scientific field from those inside it. Blank values act as these place
holders in a dataset. They have no usable value but they have a position.

@cindex NaN
Every type needs a corresponding blank value (see @ref{Numeric data types}
and @ref{Library data types}). Floating point types have a unique value,
defined by IEEE and known as Not-a-Number (or NaN), that is recognized by
the compiler. However, integer and string types don't
have any standard value. For integers, in Gnuastro we take an extremum of
the given type: for signed types (that allow negatives), the minimum
possible value is used as blank and for unsigned types (that only accept
positives), the maximum possible value is used. To be generic and easy to
read/write, we define a macro for each of these blank values and strongly
encourage you to only use these macros, never making any assumption about
the value of a type's blank value.

@cindex NaN
The IEEE NaN value is defined to fail on any comparison, so if
you are dealing with floating point types, you cannot use equality (a NaN
will @emph{not} be equal to a NaN). If you know your dataset is floating
point, you can use the @code{isnan} function in C's @file{math.h}
header. For a description of numeric data types see @ref{Numeric data
types}. For the constants identifying integers, please see @ref{Library
data types}.
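
For example, the sketch below demonstrates both points on a single-precision blank value:

@example
#include <math.h>
#include <gnuastro/blank.h>

float f=GAL_BLANK_FLOAT32;
if( f==f )     printf("Never printed: NaN is not equal to NaN.\n");
if( isnan(f) ) printf("'f' is blank.\n");
@end example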

@deffn {Global integer}  GAL_BLANK_UINT8
Blank value for an unsigned, 8-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_INT8
Blank value for a signed, 8-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_UINT16
Blank value for an unsigned, 16-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_INT16
Blank value for a signed, 16-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_UINT32
Blank value for an unsigned, 32-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_INT32
Blank value for a signed, 32-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_UINT64
Blank value for an unsigned, 64-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_INT64
Blank value for a signed, 64-bit integer.
@end deffn

@deffn {Global integer}  GAL_BLANK_INT
Blank value for the @code{int} type (@code{int16_t} or @code{int32_t},
depending on the system).
@end deffn

@deffn {Global integer}  GAL_BLANK_UINT
Blank value for the @code{unsigned int} type (@code{uint16_t} or
@code{uint32_t}, depending on the system).
@end deffn

@deffn {Global integer}  GAL_BLANK_LONG
Blank value for @code{long} type (@code{int32_t} or @code{int64_t} in
32-bit or 64-bit systems).
@end deffn

@deffn {Global integer}  GAL_BLANK_ULONG
Blank value for @code{unsigned long} type (@code{uint32_t} or
@code{uint64_t} in 32-bit or 64-bit systems).
@end deffn

@deffn {Global integer}  GAL_BLANK_SIZE_T
Blank value for @code{size_t} type (@code{uint32_t} or @code{uint64_t} in
32-bit or 64-bit systems).
@end deffn

@cindex NaN
@deffn {Global integer}  GAL_BLANK_FLOAT32
Blank value for a single precision, 32-bit floating point type (IEEE NaN
value).
@end deffn

@cindex NaN
@deffn {Global integer}  GAL_BLANK_FLOAT64
Blank value for a double precision, 64-bit floating point type (IEEE NaN
value).
@end deffn

@deffn {Global integer}  GAL_BLANK_STRING
Blank value for string types (this is itself a string, it isn't the
@code{NULL} pointer).
@end deffn

@noindent
The functions below can be used to work with blank pixels.

@deftypefun void gal_blank_write (void @code{*pointer}, uint8_t @code{type})
Write the blank value for the given @code{type} into the space that
@code{pointer} points to. This can be used when the space is already
allocated (for example one element in an array or a statically allocated
variable).
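
For example:

@example
float f;
gal_blank_write(&f, GAL_TYPE_FLOAT32);    /* 'f' is now NaN. */
@end example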
@end deftypefun

@deftypefun {void *} gal_blank_alloc_write (uint8_t @code{type})
Allocate the space required to keep the blank for the given data type
@code{type}, write the blank value into it and return the pointer to it.
@end deftypefun

@deftypefun void gal_blank_initialize (gal_data_t @code{*input})
Initialize all the elements in the @code{input} dataset to the blank value
that corresponds to its type. If @code{input} is a tile over a larger
dataset, only the region that the tile covers will be set to blank.
@end deftypefun

@deftypefun void gal_blank_initialize_array (void @code{*array}, size_t @code{size}, uint8_t @code{type})
Initialize all the elements in the @code{array} to the blank value that
corresponds to its type (identified with @code{type}), assuming the array
has @code{size} elements.
@end deftypefun

@deftypefun {char *} gal_blank_as_string (uint8_t @code{type}, int @code{width})
Write the blank value for the given data type @code{type} into a string and
return it. The space for the string is dynamically allocated so it must be
freed after you are done with it. If @code{width!=0}, then the final string
will be padded with white space characters to have the requested width if
it is smaller.
@end deftypefun

@deftypefun int gal_blank_is (void @code{*pointer}, uint8_t @code{type})
Return 1 if the contents of @code{pointer} (assuming a type of @code{type})
is blank. Otherwise, return 0. Note that this function only works on one
element of the given type. So if @code{pointer} is an array, only its first
element will be checked. Therefore for strings, the type of @code{pointer}
is assumed to be @code{char *}. To check if an array/dataset has blank
elements or to find which elements in an array are blank, you can use
@code{gal_blank_present} or @code{gal_blank_flag} respectively (described
below).
@end deftypefun


@deftypefun int gal_blank_present (gal_data_t @code{*input}, int @code{updateflag})
Return 1 if the dataset has a blank value and zero if it doesn't. Before
checking the dataset, this function will look at @code{input}'s flags. If
the @code{GAL_DATA_FLAG_BLANK_CH} bit of @code{input->flag} is on, this
function will not do any check and will just use the information in the
flags. This can greatly speed up processing when a dataset needs to be
checked multiple times.

When the dataset's flags were not used and @code{updateflag} is non-zero,
this function will set the flags appropriately to avoid having to re-check
the dataset in future calls. When @code{updateflag==0}, this function has
no side-effects on the dataset: it will not toggle the flags.

If you want to re-check a dataset with the blank-value-check flag already
set (for example if you have made changes to it), then explicitly set the
@code{GAL_DATA_FLAG_BLANK_CH} bit to zero before calling this
function. When there are no other flags, you can just set the flags to zero
(@code{input->flag=0}), otherwise you can use this expression:

@example
input->flag &= ~GAL_DATA_FLAG_BLANK_CH;
@end example
@end deftypefun

@deftypefun size_t gal_blank_number (gal_data_t @code{*input}, int @code{updateflag})
Return the number of blank elements in @code{input}. If
@code{updateflag!=0}, then the dataset's blank-related flags will be
updated. See the description of @code{gal_blank_present} (above) for more
on these flags. If @code{input==NULL}, then this function will return
@code{GAL_BLANK_SIZE_T}.
@end deftypefun


@deftypefun {gal_data_t *} gal_blank_flag (gal_data_t @code{*input})
Create a dataset of the same size as the input, but with an
@code{uint8_t} type that has a value of 1 for data that are blank and 0 for
those that aren't.
@end deftypefun

@deftypefun void gal_blank_flag_apply (gal_data_t @code{*input}, gal_data_t @code{*flag})
Set all non-zero and non-blank elements of @code{flag} to blank in @code{input}.
@code{flag} has to have an unsigned 8-bit type and be the same size as @code{input}.
@end deftypefun

@deftypefun void gal_blank_flag_remove (gal_data_t @code{*input}, gal_data_t @code{*flag})
Remove all elements within @code{input} that are flagged, convert it to a 1D dataset and adjust the size properly (the number of non-flagged elements).
In practice this function doesn't @code{realloc} the input array (see @code{gal_blank_remove_realloc} for shrinking/re-allocating also), it just shifts the flagged elements to the end and adjusts the size elements of the @code{gal_data_t}, see @ref{Generic data container}.

Note that elements that are blank, but not flagged will not be removed.
This function will only remove flagged elements.

If all the elements were flagged, then @code{input->size} will be zero.
This is thus a good parameter to check after calling this function to see if there actually were any non-flagged elements in the input or not and take the appropriate measure.
This check is highly recommended because it will avoid strange bugs in later steps.
@end deftypefun

@deftypefun void gal_blank_remove (gal_data_t @code{*input})
Remove blank elements from a dataset, convert it to a 1D dataset, adjust the size properly (the number of non-blank elements), and toggle the blank-value-related bit-flags.
In practice this function doesn't @code{realloc} the input array (see @code{gal_blank_remove_realloc} for shrinking/re-allocating also), it just shifts the blank elements to the end and adjusts the size elements of the @code{gal_data_t}, see @ref{Generic data container}.

If all the elements were blank, then @code{input->size} will be zero.
This is thus a good parameter to check after calling this function to see if there actually were any non-blank elements in the input or not and take the appropriate measure.
This check is highly recommended because it will avoid strange bugs in later steps.
@end deftypefun

@deftypefun void gal_blank_remove_realloc (gal_data_t @code{*input})
Similar to @code{gal_blank_remove}, but also shrinks/re-allocates the dataset's allocated memory.
@end deftypefun

@deftypefun {gal_data_t *} gal_blank_remove_rows (gal_data_t @code{*columns}, gal_list_sizet_t @code{*column_indexs})
Remove any row that has at least one blank value in any of the input columns.
The input @code{columns} is a list of @code{gal_data_t}s (see @ref{List of gal_data_t}).
After this function, all the elements in @code{columns} will still have the same size as each other, but if any of the searched columns has blank elements, all their sizes will decrease together.

If @code{column_indexs==NULL}, then all the columns (nodes in the list) will be checked for blank elements, and any row that has at least one blank element will be removed.
When @code{column_indexs!=NULL}, only the columns whose index (counting from zero) is in @code{column_indexs} will be used to check for blank values (see @ref{List of size_t}).
In any case (no matter which columns are checked for blanks), the selected rows from all columns will be removed.
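
For example, here is a minimal sketch that only checks the first and third
columns of a hypothetical @code{columns} list (the returned dataset is not
used in this sketch):

@example
gal_list_sizet_t *check=NULL;
gal_list_sizet_add(&check, 0);     /* First column (index 0). */
gal_list_sizet_add(&check, 2);     /* Third column (index 2). */
gal_blank_remove_rows(columns, check);
gal_list_sizet_free(check);
@end example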
@end deftypefun



@node Library data container, Dimensions, Library blank values, Gnuastro library
@subsection Data container (@file{data.h})

Astronomical datasets have various dimensions, for example 1D spectra or
table columns, 2D images, or 3D Integral field data cubes. Datasets can
also have various numeric data types, depending on the operation/purpose,
for example processed images are commonly stored in floating point format,
but their mask images are integers (allowing bit-wise flags to identify
certain classes of pixels to keep or mask, see @ref{Numeric data
types}). Certain other information about a dataset is also commonly
necessary, for example the units of the dataset, the name of the dataset
and some comments. To deal with any generic dataset, Gnuastro defines
@code{gal_data_t} as its generic input/output container.

@menu
* Generic data container::      Definition of Gnuastro's generic container.
* Dataset allocation::          Allocate, initialize and free a dataset.
* Arrays of datasets::          Functions to help with array of datasets.
* Copying datasets::            Functions to copy a dataset to a new one.
@end menu

@node Generic data container, Dataset allocation, Library data container, Library data container
@subsubsection Generic data container (@code{gal_data_t})

To be able to deal with any dataset (various dimensions, numeric data
types, units and higher-level structures), Gnuastro defines the
@code{gal_data_t} type which is the input/output container of choice for
many of Gnuastro library's functions. It is defined in
@file{gnuastro/data.h}. If you will be using (`@code{#include}'-ing) those
libraries, you don't need to include this header explicitly; it is already
included by any library header that uses @code{gal_data_t}.

@deftp {Type (C @code{struct})} gal_data_t
The main container for datasets in Gnuastro. It can host data of any
dimensions, with any numeric data type. It is actually a structure, but
@code{typedef}'d as a new type to avoid having to write the @code{struct}
before any declaration. The actual structure is shown below, followed by a
description of each element.

@example
typedef struct gal_data_t
@{
  void     *restrict array;  /* Basic array information.   */
  uint8_t             type;
  size_t              ndim;
  size_t            *dsize;
  size_t              size;
  int            quietmmap;
  char           *mmapname;
  size_t        minmapsize;

  int                 nwcs;  /* WCS information.           */
  struct wcsprm       *wcs;

  uint8_t             flag;  /* Content description.       */
  int               status;
  char               *name;
  char               *unit;
  char            *comment;

  int             disp_fmt;  /* For text printing.         */
  int           disp_width;
  int       disp_precision;

  struct gal_data_t  *next;  /* For higher-level datasets. */
  struct gal_data_t *block;
@} gal_data_t;
@end example
@end deftp

@noindent
The list below contains a description for each @code{gal_data_t} element.

@cindex @code{void *}
@table @code
@item void *restrict array
This is the pointer to the main array of the dataset containing the raw
data (values). All the other elements in this data-structure are actually
meta-data enabling us to use/understand the series of values in this
array. It must allow data of any type (see @ref{Numeric data types}), so it
is defined as a @code{void *} pointer. A @code{void *} array is not
directly usable in C, so you have to cast it to proper type before using
it, please see @ref{Library demo - reading a image} for a demonstration.

@cindex @code{restrict}
@cindex C: @code{restrict}
The @code{restrict} keyword was formally introduced in C99 and is used to
tell the compiler that at any moment only this pointer will modify what it
points to (a pixel in an image for example)@footnote{Also see
@url{https://en.wikipedia.org/wiki/Restrict}.}. This extra piece of
information can greatly help in compiler optimizations and thus the running
time of the program. But older compilers might not have this capability, so
at @command{./configure} time, Gnuastro checks this feature and if the
user's compiler doesn't support @code{restrict}, it will be removed from
this definition.

@cindex Data type
@item uint8_t type
A fixed code (integer) used to identify the type of data in @code{array}
(see @ref{Numeric data types}). For the list of acceptable values to this
variable, please see @ref{Library data types}.

@item size_t ndim
The dataset's number of dimensions.


@cindex FORTRAN
@cindex FITS standard
@cindex Standard, FITS
@item size_t *dsize
The size of the dataset along each dimension. This is an array (with
@code{ndim} elements), of positive integers in row-major
order@footnote{Also see
@url{https://en.wikipedia.org/wiki/Row-_and_column-major_order}.}  (based
on C). When a data file is read into memory with Gnuastro's libraries, this
array is dynamically allocated based on the number of dimensions that the
dataset has.

It is important to remember that C's row-major ordering is the opposite of
the FITS standard which is in column-major order: in the FITS standard the
fastest dimension's size is specified by @code{NAXIS1}, and slower
dimensions follow. The FITS standard was defined mainly based on the
FORTRAN language which is the opposite of C's approach to multi-dimensional
arrays (and also starts counting from 1 not 0). Hence if a FITS image has
@code{NAXIS1==20} and @code{NAXIS2==50}, the @code{dsize} array must be
filled with @code{dsize[0]==50} and @code{dsize[1]==20}.

The fastest dimension is the one that is contiguous in memory: to increment
by one along that dimension, just go to the next element in the array. As
we go to slower dimensions, the number of memory cells we have to skip for
an increment along that dimension becomes larger.
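
For example, here is a minimal sketch of the index arithmetic for the FITS
image above (@code{NAXIS1==20} and @code{NAXIS2==50}), where @code{image}
is a hypothetical dataset of type @code{GAL_TYPE_FLOAT32}:

@example
size_t dsize[2]=@{50, 20@};   /* dsize[0]: slow, dsize[1]: fast. */
float *arr=image->array;
/* The FITS pixel (x=5, y=8), counting from 1, has the C
   coordinates i=7 (slow) and j=4 (fast), counting from 0: */
float value = arr[ 7*dsize[1] + 4 ];
@end example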

@item size_t size
The total number of elements in the dataset. This is actually the
product of all the values in the @code{dsize} array, so it is not an
independent parameter. However, low-level operations with the dataset
(irrespective of its dimensions) commonly need this number, so this element
is designed to avoid calculating it every time.

@item int quietmmap
When this value is zero and the dataset should not be allocated in RAM (see @code{mmapname} and @code{minmapsize} below), a warning will be printed to inform the user when the memory-mapped file is created and when it is deleted.
The warning includes the filename, the size in bytes, and a reminder that this behavior can be toggled through the @option{--minmapsize} option in Gnuastro's programs.

@item char *mmapname
Name of file hosting the @code{mmap}'d contents of @code{array}.
If the value of this variable is @code{NULL}, then the contents of @code{array} are actually stored in RAM, not in a file on the HDD/SSD.
See the description of @code{minmapsize} below for more.

If a file is used, it will be kept in the @file{gnuastro_mmap} directory of the running directory.
Its name is randomly selected to allow multiple arrays at the same time, see description of @option{--minmapsize} in @ref{Processing options}.
When @code{gal_data_free} is called the randomly named file will be deleted.

@item size_t minmapsize
The minimum size of an array (in bytes) to store the contents of @code{array} as a file (on the non-volatile HDD/SSD), not in RAM.
This can be very useful for large datasets which can be very memory intensive and the user's RAM might not be sufficient to keep/process it.
A random filename is assigned to the array which is available in the @code{mmapname} element of @code{gal_data_t} (above), see there for more.
@code{minmapsize} is stored in each @code{gal_data_t}, so it can be passed on to subsequent/derived datasets.

See the description of the @option{--minmapsize} option in @ref{Processing options} for more on using this value.

@item int nwcs
The number of WCS coordinate representations (for WCSLIB).

@item struct wcsprm *wcs
The main WCSLIB structure keeping all the relevant information necessary for WCSLIB to do its processing and convert data-set positions into real-world positions.
When it is given a @code{NULL} value, all possible WCS calculations/measurements will be ignored.

@item uint8_t flag
Bit-wise flags to describe general properties of the dataset.
The number of bytes available in this flag is stored in the @code{GAL_DATA_FLAG_SIZE} macro.
Note that you should use bit-wise operators@footnote{See @url{https://en.wikipedia.org/wiki/Bitwise_operations_in_C}.} to check these flags.
The currently recognized bits are stored in these macros:

@table @code

@cindex Blank data
@item GAL_DATA_FLAG_BLANK_CH
Marks whether the dataset has been checked for blank values or not.
When a dataset doesn't have any blank values, the @code{GAL_DATA_FLAG_HASBLANK} bit will be zero.
But upon initialization, all bits also get a value of zero.
Therefore, a checker needs this flag to see if the value in @code{GAL_DATA_FLAG_HASBLANK} is reliable (dataset has actually been parsed for a blank value) or not.

Also, if it is necessary to re-check the presence of flags, you just have
to set this flag to zero and call @code{gal_blank_present} for example to
parse the dataset and check for blank values. Note that for improved
efficiency, when this flag is set, @code{gal_blank_present} will not
actually parse the dataset, it will just use @code{GAL_DATA_FLAG_HASBLANK}.

@item GAL_DATA_FLAG_HASBLANK
This bit has a value of @code{1} when the given dataset has blank
values. If this bit is @code{0} and @code{GAL_DATA_FLAG_BLANK_CH} is
@code{1}, then the dataset has been checked and it didn't have any blank
values, so there is no more need for further checks.

@item GAL_DATA_FLAG_SORT_CH
Marking that the dataset is already checked for being sorted or not and
thus that the possible @code{0} values in @code{GAL_DATA_FLAG_SORTED_I} and
@code{GAL_DATA_FLAG_SORTED_D} are meaningful. The logic behind this is
similar to that in @code{GAL_DATA_FLAG_BLANK_CH}.

@item GAL_DATA_FLAG_SORTED_I
This bit has a value of @code{1} when the given dataset is sorted in an
increasing manner. If this bit is @code{0} and @code{GAL_DATA_FLAG_SORT_CH}
is @code{1}, then the dataset has been checked and wasn't sorted
(increasing), so there is no more need for further checks.

@item GAL_DATA_FLAG_SORTED_D
This bit has a value of @code{1} when the given dataset is sorted in a
decreasing manner. If this bit is @code{0} and @code{GAL_DATA_FLAG_SORT_CH}
is @code{1}, then the dataset has been checked and wasn't sorted
(decreasing), so there is no more need for further checks.

@end table

The macro @code{GAL_DATA_FLAG_MAXFLAG} contains the largest internally used
bit-position. Using this macro with the bit-wise shift operators, libraries
and programs that depend on Gnuastro can define their own higher-level flags
without any possible conflict with the internal flags discussed above, and
without having to check the values manually on every release.
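
For example, here is a minimal sketch of defining a custom flag in a
hypothetical program, assuming (as described above) that
@code{GAL_DATA_FLAG_MAXFLAG} holds a bit-position:

@example
/* One bit above Gnuastro's highest internally-used bit. */
#define MYPROG_FLAG_CHECKED ( 1 << (GAL_DATA_FLAG_MAXFLAG+1) )

input->flag |= MYPROG_FLAG_CHECKED;       /* Set the bit.   */
if( input->flag & MYPROG_FLAG_CHECKED )   /* Check the bit. */
  @{ /* ... */ @}
@end example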


@item int status
A context-specific status value for this data-structure. This integer will
not be set by Gnuastro's libraries. You can use it to keep some additional
information about the dataset (with integer constants) depending on your
applications.

@item char *name
The name of the dataset. If the dataset is a multi-dimensional array and
read/written as a FITS image, this will be the value in the @code{EXTNAME}
FITS keyword. If the dataset is a one-dimensional table column, this will
be the column name. If it is set to @code{NULL} (by default), it will be
ignored.

@item char *unit
The units of the dataset (for example @code{BUNIT} in the standard FITS
keywords) that will be read from or written to files/tables along with the
dataset. If it is set to @code{NULL} (by default), it will be ignored.

@item char *comment
Any further explanation about the dataset which will be written to any
output file if present.

@item int disp_fmt
Format to use for printing each element of the dataset to a plain text
file, the acceptable values to this element are defined in @ref{Table input
output}. Based on C's @code{printf} standards.

@item int disp_width
Width of printing each element of the dataset to a plain text file, the
acceptable values to this element are defined in @ref{Table input
output}. Based on C's @code{printf} standards.

@item int disp_precision
Precision of printing each element of the dataset to a plain text file, the
acceptable values to this element are defined in @ref{Table input
output}. Based on C's @code{printf} standards.

@item gal_data_t *next
Through this pointer, you can link a @code{gal_data_t} with other related
datasets; for example, the different columns in a table each have one
@code{gal_data_t} associated with them, and they are linked to each other
using this element. There are several functions described below to
facilitate using @code{gal_data_t} as a linked list. See @ref{Linked lists}
for more on these wonderful high-level constructs.

@item gal_data_t *block
Pointer to the start of the complete allocated block of memory. When this
pointer is not @code{NULL}, the dataset is not treated as a contiguous
patch of memory. Rather, it is seen as covering only a portion of the
larger patch of memory that @code{block} points to. See @ref{Tessellation
library} for a more thorough explanation and functions to help work with
tiles that are created from this pointer.
@end table


@node Dataset allocation, Arrays of datasets, Generic data container, Library data container
@subsubsection Dataset allocation

Gnuastro's main data container was defined in @ref{Generic data container}.
The functions listed in this section describe the most basic operations on
@code{gal_data_t}: those related to allocation and freeing. These functions
are declared in @file{gnuastro/data.h} which is also visible from the
function names (see @ref{Gnuastro library}).

@deftypefun {gal_data_t *} gal_data_alloc (void @code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*name}, char @code{*unit}, char @code{*comment})

Dynamically allocate a @code{gal_data_t} and initialize it with all the
given values. See the description of @code{gal_data_initialize} and
@ref{Generic data container} for more information. This function will often
be the most frequently used because it allocates the @code{gal_data_t}
hosting all the values @emph{and} initializes it. Once you are done with
the dataset, be sure to clean up all the allocated spaces with
@code{gal_data_free}.
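
For example, here is a minimal sketch of allocating a cleared 2D
single-precision image (the sizes, name and unit are arbitrary assumptions;
the @code{-1} for @code{minmapsize} becomes the largest possible
@code{size_t}, so memory-mapping is effectively disabled in this sketch):

@example
size_t dsize[2]=@{50, 20@};   /* 50 (slow) by 20 (fast) pixels. */
gal_data_t *image=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 2, dsize,
                                 NULL, 1, -1, 1, "MYIMG",
                                 "counts", NULL);
/* ... use 'image->array' ... */
gal_data_free(image);
@end example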
@end deftypefun

@deftypefun void gal_data_initialize (gal_data_t @code{*data}, void @code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*name}, char @code{*unit}, char @code{*comment})

Initialize the given data structure (@code{data}) with all the given
values. Note that the raw input @code{gal_data_t} must already have been
allocated before calling this function. For a description of each variable
see @ref{Generic data container}. It will set the values and do the
necessary allocations. If they aren't @code{NULL}, all input arrays
(@code{dsize}, @code{wcs}, @code{name}, @code{unit}, @code{comment}) are
separately copied (allocated) by this function for usage in @code{data}, so
you can safely use one value to initialize many datasets or use statically
allocated variables in this function call. Once you are done with the
dataset, you can free all the allocated spaces with
@code{gal_data_free_contents}.

If @code{array} is not @code{NULL}, it will be directly copied into
@code{data->array} (based on the total number of elements calculated from
@code{dsize}) and no new space will be allocated for the array of this
dataset, this has many low-level advantages and can be used to work on
regions of a dataset instead of the whole allocated array (see the
description under @code{block} in @ref{Generic data container} for one
example). If the given pointer is not the start of an allocated block of
memory or it is used in multiple datasets, be sure to set it to @code{NULL}
(with @code{data->array=NULL}) before cleaning up with
@code{gal_data_free_contents}.

@code{ndim} may be zero. In this case no allocation will occur,
@code{data->array} and @code{data->dsize} will be set to @code{NULL} and
@code{data->size} will be zero. However (when necessary) @code{dsize} must
not have any zero values (a dimension of length zero is not defined).
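
For example, here is a minimal sketch of wrapping a statically allocated C
array in a (also statically allocated) @code{gal_data_t}:

@example
float ramp[6]=@{0, 1, 2, 3, 4, 5@};
size_t dsize[2]=@{2, 3@};
gal_data_t data;
gal_data_initialize(&data, ramp, GAL_TYPE_FLOAT32, 2, dsize,
                    NULL, 0, -1, 1, NULL, NULL, NULL);
/* ... use the dataset ... */
data.array=NULL;   /* 'ramp' was not dynamically allocated. */
gal_data_free_contents(&data);
@end example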
@end deftypefun

@deftypefun void gal_data_free_contents (gal_data_t @code{*data})
Free all the non-@code{NULL} pointers in @code{gal_data_t} except for
@code{next} and @code{block}. If @code{data} is actually a tile
(@code{data->block!=NULL}, see @ref{Tessellation library}), then
@code{data->array} is not freed. For a complete description of
@code{gal_data_t} and its contents, see @ref{Generic data container}.
@end deftypefun

@deftypefun void gal_data_free (gal_data_t @code{*data})
Free all the non-@code{NULL} pointers in @code{gal_data_t}, then free the
actual data structure.
@end deftypefun

@node Arrays of datasets, Copying datasets, Dataset allocation, Library data container
@subsubsection Arrays of datasets

Gnuastro's generic data container (@code{gal_data_t}) is a very versatile
structure that can be used in many higher-level contexts. One such
higher-level construct is an array of @code{gal_data_t} structures to
simplify the allocation (and later cleaning) of several @code{gal_data_t}s
that are related.

For example, each column in a table is usually represented by one
@code{gal_data_t} (so it has its own name, data type, units, etc). A
table (with many columns) can be seen as an array of @code{gal_data_t}s
(when the number of columns is known a-priori). The functions below are
defined to create a cleared array of data structures and to free them when
they are no longer necessary. These functions are declared in
@file{gnuastro/data.h} which is also visible from the function names (see
@ref{Gnuastro library}).

@deftypefun {gal_data_t *} gal_data_array_calloc (size_t @code{size})
Allocate an array of @code{gal_data_t} with @code{size} elements. This
function will also initialize all the values (@code{NULL} for pointers and
0 for other types). You can use @code{gal_data_initialize} to fill each
element of the array afterwards. The following code snippet is one example
of doing this.

@example
size_t i;
gal_data_t *dataarr;
dataarr=gal_data_array_calloc(10);
for(i=0;i<10;++i) gal_data_initialize(&dataarr[i], ...);
...
gal_data_array_free(dataarr, 10, 1);
@end example
@end deftypefun

@deftypefun void gal_data_array_free (gal_data_t @code{*dataarr}, size_t @code{num}, int @code{free_array})
Free all the @code{num} elements within @code{dataarr} and the actual
allocated array. If @code{free_array} is not zero, then the @code{array}
element of all the datasets will also be freed, see @ref{Generic data
container}.
@end deftypefun

@deftypefun {gal_data_t **} gal_data_array_ptr_calloc (size_t @code{size})
Allocate an array of pointers to Gnuastro's generic data structure and initialize all pointers to @code{NULL}.
This is useful when you want to allocate individual datasets later (for example with @code{gal_data_alloc}).
@end deftypefun

@deftypefun void gal_data_array_ptr_free (gal_data_t @code{**dataptr}, size_t @code{size}, int @code{free_array})
Free all the individual datasets within the elements of @code{dataptr}, then free @code{dataptr} itself (the array of pointers that was probably allocated with @code{gal_data_array_ptr_calloc}).
@end deftypefun

@node Copying datasets,  , Arrays of datasets, Library data container
@subsubsection Copying datasets

The functions in this section describe Gnuastro's facilities to copy a
given dataset into another. The new dataset can have a different type
(including a string), or it can already be allocated (in which case only the
values will be written into it). In all these cases, if the input dataset
is a tile or a list, only the data within the given tile, or the given node
in a list, are copied. If the input is a list, the @code{next} pointer will
also be copied to the output, see @ref{List of gal_data_t}.

In many of the functions here, it is possible to copy the dataset to a new
numeric data type (see @ref{Numeric data types}). In such cases, Gnuastro's
library is going to use the native conversion by C. So if you are
converting to a smaller type, it is up to you to make sure that the values
fit into the output type.

@deftypefun {gal_data_t *} gal_data_copy (gal_data_t @code{*in})
Return a new dataset that is a copy of @code{in}; all of @code{in}'s
meta-data will also be copied into the output, except for @code{block}. If the
dataset is a tile/list, only the given tile/node will be copied; the
@code{next} pointer will also be copied however.
@end deftypefun

@deftypefun {gal_data_t *} gal_data_copy_to_new_type (gal_data_t @code{*in}, uint8_t @code{newtype})
Return a copy of the dataset @code{in}, converted to @code{newtype}, see
@ref{Library data types} for Gnuastro library's type identifiers. The
returned dataset will have all meta-data, except its type and @code{block},
equal to the input's metadata. If the dataset is a tile/list, only the
given tile/node will be copied; the @code{next} pointer will also be copied
however.
@end deftypefun

@deftypefun {gal_data_t *} gal_data_copy_to_new_type_free (gal_data_t @code{*in}, uint8_t @code{newtype})
Return a copy of the dataset @code{in} that is converted to @code{newtype}
and free the input dataset. See @ref{Library data types} for Gnuastro
library's type identifiers. The returned dataset will have all meta-data,
except its type, equal to the input's metadata (including
@code{next}). Note that if the input is a tile within a larger block, it
will not be freed. This function is similar to
@code{gal_data_copy_to_new_type}, except that it will free the input
dataset.
@end deftypefun

@deftypefun {void} gal_data_copy_to_allocated (gal_data_t @code{*in}, gal_data_t @code{*out})
Copy the contents of the array in @code{in} into the already allocated
array in @code{out}. The types of the input and output may be different,
type conversion will be done internally. When @code{in->size != out->size}
this function will behave as follows:

@table @code
@item out->size < in->size
This function won't re-allocate the necessary space; it will abort with an
error, so please check before calling this function.

@item out->size > in->size
This function will set @code{out->size} and @code{out->dsize} to the
values of @code{in}. So if you want to use
a pre-allocated space/dataset multiple times with varying input sizes, be
sure to reset @code{out->size} before every call to this function (see the
sketch after this table).
@end table
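
For example, here is a minimal sketch of re-using one pre-allocated output
for several inputs of varying (smaller) sizes; @code{capacity} is a
hypothetical variable keeping the originally allocated number of elements:

@example
out->size=capacity;                   /* Reset before every call.   */
gal_data_copy_to_allocated(in, out);  /* out->size is now in->size. */
@end example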
@end deftypefun

@deftypefun {gal_data_t *} gal_data_copy_string_to_number (char @code{*string})
Read @code{string} into the smallest type that can store the value (see
@ref{Numeric data types}). This function is just a wrapper around
@code{gal_type_string_to_number}, but will put the value into a
single-element dataset.
@end deftypefun


@node Dimensions, Linked lists, Library data container, Gnuastro library
@subsection Dimensions (@file{dimension.h})

An array is a contiguous region of memory.  Hence, at the lowest level,
every element of an array just has one single-valued position: the number
of elements that lie between it and the first element in the array. This is
also known as the @emph{index} of the element within the array. A dataset's
number of dimensions is a high-level abstraction (meta-data) that we project
onto that contiguous patch of memory. When the array is interpreted as a
one-dimensional dataset, this index is also the @emph{coordinate} of the
element. But once we associate the patch of memory with a higher dimension,
there must also be one coordinate for each dimension.

The functions and macros in this section provide you with the tools to
convert an index into a coordinate and vice-versa, along with several
related tasks, for example finding the neighbors of an element in a
multi-dimensional context.

@deftypefun size_t gal_dimension_total_size (size_t @code{ndim}, size_t @code{*dsize})
Return the total number of elements for a dataset with @code{ndim}
dimensions that has @code{dsize} elements along each dimension.
@end deftypefun

@deftypefun int gal_dimension_is_different (gal_data_t @code{*first}, gal_data_t @code{*second})
Return @code{1} (one) if the two datasets don't have the same size along
all dimensions. This function will also return @code{1} when the number of
dimensions of the two datasets are different.
@end deftypefun

@deftypefun {size_t *} gal_dimension_increment (size_t @code{ndim}, size_t @code{*dsize})
Return an allocated array that has the number of elements necessary to
increment an index along every dimension. For example along the fastest
dimension (last element in the @code{dsize} and returned arrays), the value
is @code{1} (one).
@end deftypefun

@deftypefun size_t gal_dimension_num_neighbors (size_t @code{ndim})
The maximum number of neighbors (any connectivity) that a data element can
have in @code{ndim} dimensions. Effectively, this function just returns
@mymath{3^n-1} (where @mymath{n} is the number of dimensions).
@end deftypefun

@deffn {Function-like macro} GAL_DIMENSION_FLT_TO_INT (@code{FLT})
Calculate the integer pixel position that the floating point @code{FLT}
number belongs to. In the FITS format (and thus in Gnuastro), the center of
each pixel is allocated on an integer (not it edge), so the pixel which
hosts a floating point number cannot simply be found with internal type
conversion.
@end deffn

@deftypefun void gal_dimension_add_coords (size_t @code{*c1}, size_t @code{*c2}, size_t @code{*out}, size_t @code{ndim})
For every dimension, add the coordinates in @code{c1} to those in @code{c2} and
put the result into @code{out}. In other words, for dimension @code{i} run
@code{out[i]=c1[i]+c2[i];}. Hence @code{out} may be equal to either
@code{c1} or @code{c2}.
@end deftypefun

@deftypefun size_t gal_dimension_coord_to_index (size_t @code{ndim}, size_t @code{*dsize}, size_t @code{*coord})
Return the index (counting from zero) from the coordinates in @code{coord}
(counting from zero), assuming the dataset has @code{ndim} dimensions and the
size of the dataset along each dimension is in the @code{dsize} array.
@end deftypefun

@deftypefun void gal_dimension_index_to_coord (size_t @code{index}, size_t @code{ndim}, size_t @code{*dsize}, size_t @code{*coord})
Fill in the @code{coord} array with the coordinates that correspond to
@code{index}, assuming the dataset has @code{ndim} dimensions and the size of
the dataset along each dimension is in the @code{dsize} array. Note that
both @code{index} and each value in @code{coord} are assumed to start from
@code{0} (zero). Also, the space which @code{coord} points to must
already be allocated before calling this function.
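
For example, here is a minimal sketch of converting in both directions for
a 2D dataset of size 50 (slow dimension) by 20 (fast dimension):

@example
size_t index, coord[2];
size_t dsize[2]=@{50, 20@};
gal_dimension_index_to_coord(73, 2, dsize, coord);
/* 'coord' is now @{3, 13@}, since 73 = 3*20 + 13.  */
index=gal_dimension_coord_to_index(2, dsize, coord);   /* 73 */
@end example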
@end deftypefun

@deftypefun size_t gal_dimension_dist_manhattan (size_t @code{*a}, size_t @code{*b}, size_t @code{ndim})
@cindex Manhattan distance
@cindex Distance, Manhattan
Return the Manhattan distance (see
@url{https://en.wikipedia.org/wiki/Taxicab_geometry, Wikipedia}) between
the two coordinates @code{a} and @code{b} (each an array of @code{ndim}
elements).
@end deftypefun

@deftypefun float gal_dimension_dist_radial (size_t @code{*a}, size_t @code{*b}, size_t @code{ndim})
Return the radial distance between the two coordinates @code{a} and
@code{b} (each an array of @code{ndim} elements).
@end deftypefun

@deftypefun float gal_dimension_dist_elliptical (double @code{*center}, double @code{*pa_deg}, double @code{*q}, size_t @code{ndim}, double @code{*point})
@cindex Ellipse
@cindex Ellipsoid
@cindex Axis ratio
@cindex Position angle
@cindex Elliptical distance
@cindex Ellipsoidal distance
@cindex Distance, elliptical/ellipsoidal
Return the elliptical/ellipsoidal distance of the single point @code{point} (containing @code{ndim} values: coordinates of the point in each dimension) from an ellipse that is defined by @code{center}, @code{pa_deg} and @code{q}.
@code{center} is the coordinates of the ellipse center (also with @code{ndim} elements). @code{pa_deg} is the position angle in degrees (the angle of the semi-major axis from the first dimension in a 2D ellipse) and @code{q} is the axis ratio.

In a 2D ellipse, @code{pa_deg} and @code{q} are single-element arrays.
However, in a 3D ellipsoid, @code{pa_deg} must have three elements, and @code{q} must have two elements.
For more see @ref{Defining an ellipse and ellipsoid}.
@end deftypefun

@deftypefun {gal_data_t *} gal_dimension_collapse_sum (gal_data_t @code{*in}, size_t @code{c_dim}, gal_data_t @code{*weight})
Collapse the input dataset (@code{in}) along the given dimension
(@code{c_dim}, in C definition: starting from zero, from the slowest
dimension), by summing all elements in that direction. If
@code{weight!=NULL}, it must be a single-dimensional array, with the same
size as the dimension to be collapsed. The respective weight will be
multiplied to each element during the collapse.

For generality, the returned dataset will have a @code{GAL_TYPE_FLOAT64}
type. See @ref{Copying datasets} for converting the returned dataset to a
desired type. Also, for more on the application of this function, see the
Arithmetic program's @option{collapse-sum} operator (which uses this
function) in @ref{Arithmetic operators}.
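
For example, here is a minimal sketch of collapsing a hypothetical 3D
@code{cube} along its slowest dimension and converting the result back to
32-bit floating point:

@example
gal_data_t *sum=gal_dimension_collapse_sum(cube, 0, NULL);
sum=gal_data_copy_to_new_type_free(sum, GAL_TYPE_FLOAT32);
@end example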
@end deftypefun

@deftypefun {gal_data_t *} gal_dimension_collapse_mean (gal_data_t @code{*in}, size_t @code{c_dim}, gal_data_t @code{*weight})
Similar to @code{gal_dimension_collapse_sum} (above), but the collapse will
be done by calculating the mean along the requested dimension, not summing
over it.
@end deftypefun

@deftypefun {gal_data_t *} gal_dimension_collapse_number (gal_data_t @code{*in}, size_t @code{c_dim})
Collapse the input dataset (@code{in}) along the given dimension
(@code{c_dim}, in C definition: starting from zero, from the slowest
dimension), by counting how many non-blank elements there are along that
dimension.

For generality, the returned dataset will have a @code{GAL_TYPE_INT32}
type. See @ref{Copying datasets} for converting the returned dataset to a
desired type. Also, for more on the application of this function, see the
Arithmetic program's @option{collapse-number} operator (which uses this
function) in @ref{Arithmetic operators}.
@end deftypefun

@deftypefun {gal_data_t *} gal_dimension_collapse_minmax (gal_data_t @code{*in}, size_t @code{c_dim}, int @code{max1_min0})
Collapse the input dataset (@code{in}) along the given dimension
(@code{c_dim}, in C definition: starting from zero, from the slowest
dimension), by using the largest/smallest non-blank value along that
dimension. If @code{max1_min0} is non-zero, then the collapsed dataset will
have the maximum value along the given dimension and if it is zero, the
minimum.
@end deftypefun

@deftypefun size_t gal_dimension_remove_extra (size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs})
Remove extra dimensions (those that only have a length of 1) from the basic
size information of a dataset. @code{ndim} is the number of dimensions and
@code{dsize} is an array with @code{ndim} elements containing the size
along each dimension in the C dimension order. When @code{wcs!=NULL}, the
respective dimension will also be removed from the WCS.

This function will return the new number of dimensions and the @code{dsize}
elements will contain the length along each new dimension.
@end deftypefun

@deffn {Function-like macro} GAL_DIMENSION_NEIGHBOR_OP (@code{index}, @code{ndim}, @code{dsize}, @code{connectivity}, @code{dinc}, @code{operation})
Parse the neighbors of the element located at @code{index} and do the
requested operation on them. This is defined as a macro to allow easy
definition of any operation on the neighbors of a given element without
having to use loops within your source code (the loops are implemented by
this macro). For an example of using this function, please see @ref{Library
demo - inspecting neighbors}. The input arguments to this function-like
macro are described below:

@table @code
@item index
Distance of this element from the first element in the array on a
contiguous patch of memory (starting from 0), see the discussion above.
@item ndim
The number of dimensions associated with the contiguous patch of memory.
@item dsize
The full array size along each dimension. This must be an array and is
assumed to have @code{ndim} elements. See the discussion
under the same element in @ref{Generic data container}.
@item connectivity
Most distant neighbors to consider. Depending on the number of dimensions,
different neighbors may be defined for each element. This function-like
macro distinguishes between these different neighbors with this argument. It
has a value between @code{1} (one) and @code{ndim}. For example in a 2D
dataset, 4-connected neighbors have a connectivity of @code{1} and
8-connected neighbors have a connectivity of @code{2}. Note that this is
inclusive, so in this example, a connectivity of @code{2} will also include
connectivity @code{1} neighbors.
@item dinc
An array keeping the length necessary to increment along each
dimension. You can make this array with the following function. Just don't
forget to free the array after you are done with it:

@example
size_t *dinc=gal_dimension_increment(ndim, dsize);
@end example

@code{dinc} depends on @code{ndim} and @code{dsize}, but it must be defined
outside this function-like macro, since it involves memory allocation
(doing that on every call would harm performance).

@item operation
Any C operation that you would like to do on the neighbor. This macro will
provide you a @code{nind} variable that can be used as the index of the
neighbor that is currently being studied. It is defined as `@code{size_t
nind;}'. Note that @code{operation} will be repeated the number of times
there is a neighbor for this element.
@end table

This macro works fully within its own @code{@{@}} block and except for the
@code{nind} variable that shows the neighbor's index, all the variables
within this macro's block start with @code{gdn_}.
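
For example, here is a minimal sketch that sums the 8-connected neighbors
of the element at @code{index}, assuming @code{input} is a hypothetical 2D
dataset of type @code{GAL_TYPE_FLOAT32}:

@example
float sum=0.0f, *arr=input->array;
size_t *dinc=gal_dimension_increment(input->ndim, input->dsize);
GAL_DIMENSION_NEIGHBOR_OP(index, input->ndim, input->dsize, 2, dinc,
                          @{ sum += arr[nind]; @});
free(dinc);
@end example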
@end deffn

@node Linked lists, Array input output, Dimensions, Gnuastro library
@subsection Linked lists (@file{list.h})

@cindex Array
@cindex Linked list
An array is a contiguous region of memory that is very efficient and easy
to use for recording and later accessing any random element as fast as any
other. This makes arrays the primary data container when you have many
elements (for example an image which has millions of pixels). One major
problem with an array is that the number of elements that go into it must
be known in advance and adding or removing an element will require a re-set
of all the other elements. For example if you want to remove the 3rd
element in a 1000 element array, all 997 subsequent elements have to be
pulled back by one position; the reverse will happen if you need to add an
element.

In many contexts such situations never come up, for example you don't want
to shift all the pixels in an image by one or two pixels from some random
position in the image: their positions have scientific value. But in other
contexts you will find yourself frequently adding/removing an a-priori
unknown number of elements. Linked lists (or @emph{lists} for short) are
the data-container of choice in such situations. As in a chain, each
@emph{node} in a list is an independent C structure, keeping its own data
along with pointer(s) to its immediate neighbor(s). Below, you can see one
simple linked list node structure along with an ASCII art schematic of how
we can use the @code{next} pointer to add any number of elements to the
list that we want. By convention, a list is terminated when @code{next} is
the @code{NULL} pointer.

@c The second and last lines are pushed one line space forward, because
@c the `@{' at the start of the line is only seen as `{' in the book.
@example
struct list_float          /*     ---------    ---------           */
@{                          /*     | Value |    | Value |           */
  float             value; /*     |  ---  |    |  ---  |           */
  struct list_float *next; /*     |  next-|--> |  next-|--> NULL   */
@}                          /*     ---------    ---------           */
@end example

The schematic shows another great advantage of linked lists: it is very
easy to add or remove/pop a node anywhere in the list. If you want to
modify the first node, you just have to change one pointer. If it is in the
middle, you just have to change two. You initially define a variable of
this type with a @code{NULL} pointer as shown below:

@example
struct list_float *mylist=NULL;
@end example

@noindent
To add or remove/pop a node from the list you can use functions provided
for the respective type in the sections below.

@cindex last-in-first-out
@cindex first-in-first-out
@noindent
When you add an element to the list, it is conventionally added to the
``top'' of the list: the general list pointer will point to the newly
created node, which will point to the previously created node and so on. So
when you ``pop'' from the top of the list, you are actually retrieving the
last value you put in and changing the list pointer to the next youngest
node. This is thus known as a ``last-in-first-out'' list. This is the most
efficient type of linked list (easier to implement and faster to
process). Alternatively, you can add each newly created node at the end of
the list. If you do that, you will get a ``first-in-first-out'' list. But
that will force you to go through the whole list for each new element that
is created (this will slow down the processing)@footnote{A better way to
get a first-in-first-out is to first keep the data as last-in-first-out
until they are all read. Afterwards, reverse the list by popping each node
and immediately add it to the new list. This practically reverses the
last-in-first-out list to a first-in-first-out one. All the list types
discussed in this chapter have a function with a @code{_reverse} suffix for
this job.}.
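
For example, here is a minimal sketch of the reversal strategy in the
footnote above, using the string-list functions that are fully described in
the sections below:

@example
gal_list_str_t *list=NULL;
gal_list_str_add(&list, "read first", 1);
gal_list_str_add(&list, "read second", 1);  /* Top of list.  */
gal_list_str_reverse(&list);  /* "read first" is now on top. */
@end example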

The node example above creates the simplest kind of a list. We can define
each node with two pointers to both the next and previous neighbors, this
is called a ``Doubly linked list''. In general, lists are very powerful and
simple constructs that can be very useful. But going into more detail would
be out of the scope of this short introduction in this
book. @url{https://en.wikipedia.org/wiki/Linked_list, Wikipedia} has a nice
and more thorough discussion of the various types of lists. To
appreciate/use the beauty and elegance of these powerful constructs even
further, see Chapter 2 (Information Structures, in volume 1) of Donald
Knuth's ``The art of computer programming''.

In this section we will review the functions and structures that are
available in Gnuastro for working on lists. They differ by the type of data
that each node can keep. For each linked-list node structure, we will first
introduce the structure, then the functions for working on the
structure. All these structures and functions are defined and declared in
@file{gnuastro/list.h}.


@menu
* List of strings::             Simply linked list of strings.
* List of int32_t::             Simply linked list of int32_ts.
* List of size_t::              Simply linked list of size_ts.
* List of float::               Simply linked list of floats.
* List of double::              Simply linked list of doubles.
* List of void::                Simply linked list of void * pointers.
* Ordered list of size_t::      Simply linked, ordered list of size_t.
* Doubly linked ordered list of size_t::  Definition and functions.
* List of gal_data_t::          Simply linked list Gnuastro's generic datatype.
@end menu

@node List of strings, List of int32_t, Linked lists, Linked lists
@subsubsection List of strings

Probably one of the most common lists you will be using are lists of
strings. They are the best tools when you are reading the user's inputs, or
when adding comments to the output files. Below you can see Gnuastro's
string list type and several functions to help in adding, removing/popping,
reversing and freeing the list.

@deftp {Type (C @code{struct})} gal_list_str_t
A single node in a list containing a string of characters.
@example
typedef struct gal_list_str_t
@{
  char *v;
  struct gal_list_str_t *next;
@} gal_list_str_t;
@end example
@end deftp

@deftypefun void gal_list_str_add (gal_list_str_t @code{**list}, char @code{*value}, int @code{allocate})
Add a new node to the list of strings (@code{list}) and update it. The new
node will contain the string @code{value}. If @code{allocate} is not zero,
space will be allocated specifically for the string of the new node and the
contents of @code{value} will be copied into it. This can be useful when
your string may be changed later in the program, but you want your list to
remain. Here is one short/simple example of initializing and adding
elements to a string list:

@example
gal_list_str_t *strlist=NULL;
gal_list_str_add(&strlist, "bottom of list.", 1);
gal_list_str_add(&strlist, "second last element of list.", 1);
@end example

@end deftypefun

@deftypefun {char *} gal_list_str_pop (gal_list_str_t @code{**list})
Pop the top element of @code{list}, change @code{list} to point to the next
node in the list, and return the string that was in the popped node. If
@code{*list==NULL}, then this function will also return a @code{NULL}
pointer.
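
For example, here is a minimal sketch of consuming a whole list (assuming
its strings were added with a non-zero @code{allocate}, so each popped
string must also be freed):

@example
char *str;
while( (str=gal_list_str_pop(&list))!=NULL )
  @{
    printf("%s\n", str);
    free(str);
  @}
@end example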
@end deftypefun

@deftypefun size_t gal_list_str_number (gal_list_str_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_list_str_t *} gal_list_str_last (gal_list_str_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_str_print (gal_list_str_t @code{*list})
Print the strings within each node of @code{*list} on the standard output
in the same order that they are stored. Each string is printed on one
line. This function is mainly good for checking/debugging your program. For
program outputs, it's best to make your own implementation with a better,
more user-friendly format, for example the following code snippet.

@example
size_t i=0;
gal_list_str_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
  printf("String %zu: %s\n", i++, tmp->v);
@end example
@end deftypefun

@deftypefun void gal_list_str_reverse (gal_list_str_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun void gal_list_str_free (gal_list_str_t @code{*list}, int @code{freevalue})
Free every node in @code{list}. If @code{freevalue} is not zero, also free
the string within the nodes.
@end deftypefun




@node List of int32_t, List of size_t, List of strings, Linked lists
@subsubsection List of @code{int32_t}

Signed integers are the best types when you are dealing with positive or
negative integers. They are generally useful in many contexts, for example
when you want to keep the order of a series of states (each state stored as
a given number in an @code{enum} for example). On many modern systems,
@code{int32_t} is just an alias for @code{int}, so you can use them
interchangeably. To make sure, check the size of @code{int} on your system.

@deftp {Type (C @code{struct})} gal_list_i32_t
A single node in a list containing a 32-bit signed integer (see
@ref{Numeric data types}).
@example
typedef struct gal_list_i32_t
@{
  int32_t v;
  struct gal_list_i32_t *next;
@} gal_list_i32_t;
@end example
@end deftp


@deftypefun void gal_list_i32_add (gal_list_i32_t @code{**list}, int32_t @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of
@code{int32_t}s (@code{int32_t} is equal to @code{int} on many modern
systems), and update @code{list}. Here is one short example of initializing
and adding elements to such a list:

@example
gal_list_i32_t *i32list=NULL;
gal_list_i32_add(&i32list, 52);
gal_list_i32_add(&i32list, -4);
@end example

@end deftypefun

@deftypefun {int32_t} gal_list_i32_pop (gal_list_i32_t @code{**list})
Pop the top element of @code{list} and return the value. This function will
also change @code{list} to point to the next node in the list. If
@code{*list==NULL}, then this function will also return
@code{GAL_BLANK_INT32} (see @ref{Library blank values}).
@end deftypefun

@deftypefun size_t gal_list_i32_number (gal_list_i32_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_list_i32_t *} gal_list_i32_last (gal_list_i32_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_i32_print (gal_list_i32_t @code{*list})
Print the integers within each node of @code{*list} on the standard output
in the same order that they are stored. Each integer is printed on one
line. This function is mainly good for checking/debugging your program. For
program outputs, it's best to make your own implementation with a better,
more user-friendly format, for example the following code snippet. You can
also modify it to print all values in one line, etc, depending on the
context of your program.

@example
size_t i=0;
gal_list_i32_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
  printf("Node %zu: %d\n", i++, tmp->v);
@end example
@end deftypefun

@deftypefun void gal_list_i32_reverse (gal_list_i32_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun {int32_t *} gal_list_i32_to_array (gal_list_i32_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in
@code{list}. The function will return a pointer to the allocated array and
put the number of elements in the array into the @code{num} pointer. If
@code{reverse} has a non-zero value, the array will be filled in the
opposite order of elements in @code{list}. This function can be
useful after you have finished reading an initially unknown number of
values and want to put them in an array for easy random access.
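
For example, here is a minimal sketch (assuming @code{list} was filled in a
last-in-first-out manner and the array is needed in the order of
insertion):

@example
size_t i, num;
int32_t *arr=gal_list_i32_to_array(list, 1, &num);
for(i=0;i<num;++i) printf("%d\n", arr[i]);
free(arr);
@end example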
@end deftypefun

@deftypefun void gal_list_i32_free (gal_list_i32_t @code{*list})
Free every node in @code{list}.
@end deftypefun






@node List of size_t, List of float, List of int32_t, Linked lists
@subsubsection List of @code{size_t}

The @code{size_t} type is a unique type in C: as the name suggests it is
defined to store sizes, or more accurately, the distances between memory
locations. Hence it is always positive (an @code{unsigned} type) and it is
directly related to the address-able spaces on the host system: on 32-bit
and 64-bit systems it is an alias for @code{uint32_t} and @code{uint64_t},
respectively (see @ref{Numeric data types}).

@code{size_t} is the default compiler type to index an array (recall that
an array index in C is just a pointer increment of a given
@emph{size}). Since it is unsigned, it's a great type for counting (where
negative is not defined), you are always sure it will never exceed the
system's (virtual) memory and since its name has the word ``size'' inside
it, it provides a good level of documentation@footnote{So you know that a
variable of this type is not used to store some generic state for
example.}. In Gnuastro, we do all counting and array indexing with this
type, so this list is very handy. As discussed above, @code{size_t} maps to
different types on different machines, so a portable way to print them with
@code{printf} is to use C99's @code{%zu} format.

@deftp {Type (C @code{struct})} gal_list_sizet_t
A single node in a list containing a @code{size_t} value (which maps to
@code{uint32_t} or @code{uint64_t} on 32-bit and 64-bit systems), see
@ref{Numeric data types}.
@example
typedef struct gal_list_sizet_t
@{
  size_t v;
  struct gal_list_sizet_t *next;
@} gal_list_sizet_t;
@end example
@end deftp


@deftypefun void gal_list_sizet_add (gal_list_sizet_t @code{**list}, size_t @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of
@code{size_t}s and update @code{list}. Here is one short example of
initializing and adding elements to such a list:

@example
gal_list_sizet_t *slist=NULL;
gal_list_sizet_add(&slist, 45493);
gal_list_sizet_add(&slist, 930484);
@end example

@end deftypefun

@deftypefun {size_t} gal_list_sizet_pop (gal_list_sizet_t @code{**list})
Pop the top element of @code{list} and return the value. This function will
also change @code{list} to point to the next node in the list. If
@code{*list==NULL}, then this function will also return
@code{GAL_BLANK_SIZE_T} (see @ref{Library blank values}).
@end deftypefun

@deftypefun size_t gal_list_sizet_number (gal_list_sizet_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_list_sizet_t *} gal_list_sizet_last (gal_list_sizet_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_sizet_print (gal_list_sizet_t @code{*list})
Print the values within each node of @code{*list} on the standard output in
the same order that they are stored. Each value is printed on one
line. This function is mainly good for checking/debugging your program. For
program outputs, it's best to make your own implementation with a better,
more user-friendly format, for example the following code snippet. You can
also modify it to print all values in one line, etc, depending on the
context of your program.

@example
size_t i=0;
gal_list_sizet_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
  printf("Node %zu: %zu\n", i++, tmp->v);
@end example
@end deftypefun

@deftypefun void gal_list_sizet_reverse (gal_list_sizet_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun {size_t *} gal_list_sizet_to_array (gal_list_sizet_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in
@code{list}. The function will return a pointer to the allocated array and
put the number of elements in the array into the @code{num} pointer. If
@code{reverse} has a non-zero value, the array will be filled in the
inverse of the order of elements in @code{list}. This function can be
useful after you have finished reading an initially unknown number of
values and want to put them in an array for easy random access.
@end deftypefun

@deftypefun void gal_list_sizet_free (gal_list_sizet_t @code{*list})
Free every node in @code{list}.
@end deftypefun



@node List of float, List of double, List of size_t, Linked lists
@subsubsection List of @code{float}

Single precision floating point numbers can accurately store real numbers
up to 7.2 decimal digits and only consume 4 bytes (32-bits) of memory, see
@ref{Numeric data types}. Since astronomical data rarely reach that level
of precision, single precision floating points are the type of choice to
keep and read data. However, when processing the data, it is best to use
double precision floating points (since errors propagate).

@deftp {Type (C @code{struct})} gal_list_f32_t
A single node in a list containing a 32-bit single precision @code{float}
value: see @ref{Numeric data types}.
@example
typedef struct gal_list_f32_t
@{
  float v;
  struct gal_list_f32_t *next;
@} gal_list_f32_t;
@end example
@end deftp


@deftypefun void gal_list_f32_add (gal_list_f32_t @code{**list}, float @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of
@code{float}s and update @code{list}. Here is one short example of
initializing and adding elements to such a list:

@example
gal_list_f32_t *flist=NULL;
gal_list_f32_add(&flist, 3.89);
gal_list_f32_add(&flist, 1.23e-20);
@end example

@end deftypefun

@deftypefun {float} gal_list_f32_pop (gal_list_f32_t @code{**list})
Pop the top element of @code{list} and return the value. This function will
also change @code{list} to point to the next node in the list. If
@code{*list==NULL}, then this function will return @code{GAL_BLANK_FLOAT32}
(NaN, see @ref{Library blank values}).
@end deftypefun

@deftypefun size_t gal_list_f32_number (gal_list_f32_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_list_f32_t *} gal_list_f32_last (gal_list_f32_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_f32_print (gal_list_f32_t @code{*list})
Print the values within each node of @code{*list} on the standard output in
the same order that they are stored. Each floating point number is printed
on one line. This function is mainly good for checking/debugging your
program. For program outputs, it's best to make your own implementation with
a better, more user-friendly format, for example the following code
snippet. You can also modify it to print all values in one line, etc,
depending on the context of your program.

@example
size_t i=0;
gal_list_f32_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
  printf("Node %zu: %f\n", i++, tmp->v);
@end example
@end deftypefun

@deftypefun void gal_list_f32_reverse (gal_list_f32_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun {float *} gal_list_f32_to_array (gal_list_f32_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in
@code{list}. The function will return a pointer to the allocated array and
put the number of elements in the array into the @code{num} pointer. If
@code{reverse} has a non-zero value, the array will be filled in the
inverse of the order of elements in @code{list}. This function can be
useful after you have finished reading an initially unknown number of
values and want to put them in an array for easy random access.
@end deftypefun

@deftypefun void gal_list_f32_free (gal_list_f32_t @code{*list})
Free every node in @code{list}.
@end deftypefun




@node List of double, List of void, List of float, Linked lists
@subsubsection List of @code{double}

Double precision floating point numbers can accurately store real numbers
up to 15.9 decimal digits and consume 8 bytes (64-bits) of memory, see
@ref{Numeric data types}. This level of precision makes them very good for
serious processing in the middle of a program's execution: in many cases,
the propagation of errors will still be insignificant compared to actual
observational errors in a data set. But since they consume 8 bytes and more
CPU processing power, they are often not the best choice for storing and
transferring of data.

@deftp {Type (C @code{struct})} gal_list_f64_t
A single node in a list containing a 64-bit double precision @code{double}
value: see @ref{Numeric data types}.
@example
typedef struct gal_list_f64_t
@{
  double v;
  struct gal_list_f64_t *next;
@} gal_list_f64_t;
@end example
@end deftp


@deftypefun void gal_list_f64_add (gal_list_f64_t @code{**list}, double @code{value})
Add a new node (containing @code{value}) to the top of the @code{list} of
@code{double}s and update @code{list}. Here is one short example of
initializing and adding elements to such a list:

@example
gal_list_f64_t *dlist=NULL;
gal_list_f64_add(&dlist, 3.8129395763193);
gal_list_f64_add(&dlist, 1.239378923931e-20);
@end example

@end deftypefun

@deftypefun {double} gal_list_f64_pop (gal_list_f64_t @code{**list})
Pop the top element of @code{list} and return the value. This function will
also change @code{list} to point to the next node in the list. If
@code{*list==NULL}, then this function will return @code{GAL_BLANK_FLOAT64}
(NaN, see @ref{Library blank values}).
@end deftypefun

@deftypefun size_t gal_list_f64_number (gal_list_f64_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_list_f64_t *} gal_list_f64_last (gal_list_f64_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_f64_print (gal_list_f64_t @code{*list})
Print the values within each node of @code{*list} on the standard output in
the same order that they are stored. Each floating point number is printed
on one line. This function is mainly useful for checking/debugging your
program. For program outputs, it is best to make your own implementation
with a better, more user-friendly format, for example with the code
snippet below. You can also modify it to print all values on one line,
and so on, depending on the context of your program.

@example
size_t i=0;
gal_list_f64_t *tmp;
for(tmp=list; tmp!=NULL; tmp=tmp->next)
  printf("Node %zu: %f\n", i++, tmp->v);
@end example
@end deftypefun

@deftypefun void gal_list_f64_reverse (gal_list_f64_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun {double *} gal_list_f64_to_array (gal_list_f64_t @code{*list}, int @code{reverse}, size_t @code{*num})
Dynamically allocate an array and fill it with the values in
@code{list}. The function will return a pointer to the allocated array and
put the number of elements in the array into the @code{num} pointer. If
@code{reverse} has a non-zero value, the array will be filled in the
inverse of the order of elements in @code{list}. This function can be
useful after you have finished reading an initially unknown number of
values and want to put them in an array for easy random access.
@end deftypefun

@deftypefun void gal_list_f64_free (gal_list_f64_t @code{*list})
Free every node in @code{list}.
@end deftypefun




@node List of void, Ordered list of size_t, List of double, Linked lists
@subsubsection List of @code{void *}

In C, @code{void *} is the most generic pointer. Usually pointers are
associated with the type of content they point to. For example @code{int *}
means a pointer to an integer. This ancillary information about the
contents of the memory location is very useful for the compiler (to catch
bad errors) and also for documentation (it helps the reader see what the
address in memory actually contains). However, @code{void *} is just a raw
address (pointer); it contains no information on the contents it points to.

These properties make the @code{void *} very useful when you want to treat
the contents of an address in different ways. You can use the @code{void *}
list defined in this section and its functions on any kind of data: for
example you can use it to keep a list of custom data structures that you
have built for your own separate program. Each node in the list can keep
anything and this gives you great versatility. But in using @code{void *},
please beware that ``with great power comes great responsibility''.


@deftp {Type (C @code{struct})} gal_list_void_t
A single node in a list containing a @code{void *} pointer.
@example
typedef struct gal_list_void_t
@{
  void *v;
  struct gal_list_void_t *next;
@} gal_list_void_t;
@end example
@end deftp


@deftypefun void gal_list_void_add (gal_list_void_t @code{**list}, void @code{*value})
Add a new node (containing @code{value}) to the top of the @code{list} of
@code{void *}s and update @code{list}.  Here is one short example of
initializing and adding elements to a list of @code{void *}s (where
@code{some_pointer} and @code{another_pointer} are assumed to already
point to allocated data):

@example
gal_list_void_t *vlist=NULL;
gal_list_void_add(&vlist, some_pointer);
gal_list_void_add(&vlist, another_pointer);
@end example

@end deftypefun

@deftypefun {void *} gal_list_void_pop (gal_list_void_t @code{**list})
Pop the top element of @code{list} and return the value. This function will
also change @code{list} to point to the next node in the list. If
@code{*list==NULL}, then this function will return @code{NULL}.
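
For example, here is a minimal sketch with a custom structure
(@code{struct params} here is hypothetical, standing in for any type your
program defines): since the popped value is a raw @code{void *}, it has to
be cast back to the real type before use.

@example
struct params @{ int id; double weight; @};

struct params *p=malloc(sizeof *p);
gal_list_void_t *vlist=NULL;

/* Fill the structure and push its pointer onto the list. */
p->id=1;  p->weight=2.5;
gal_list_void_add(&vlist, p);

/* ... later: pop it back, casting from 'void *'. */
p=(struct params *)gal_list_void_pop(&vlist);
printf("%d: %g\n", p->id, p->weight);
free(p);
@end example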
@end deftypefun

@deftypefun size_t gal_list_void_number (gal_list_void_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_list_void_t *} gal_list_void_last (gal_list_void_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_void_reverse (gal_list_void_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun void gal_list_void_free (gal_list_void_t @code{*list})
Free every node in @code{list}.
@end deftypefun


@node Ordered list of size_t, Doubly linked ordered list of size_t, List of void, Linked lists
@subsubsection Ordered list of @code{size_t}

Positions/sizes in a dataset are conventionally in the @code{size_t} type
(see @ref{List of size_t}) and it sometimes occurs that you want to parse
and read the values in a specific order. For example you want to start from
one pixel and add pixels to the list based on their distance to that
pixel. So that ever time you pop an element from the list, you know it is
the nearest that has not yet been studied. The @code{gal_list_osizet_t}
type and its functions in this section are designed to facilitate such
operations.

@deftp {Type (C @code{struct})} gal_list_osizet_t
@cindex @code{size_t}
Each node in this singly-linked list contains a @code{size_t} value and a
floating point value. The floating point value is used as a reference to
add new nodes in a sorted manner. At any moment, the first popped node in
this list will have the smallest @code{tosort} value, and subsequent nodes
will have larger @code{tosort} values.

@example
typedef struct gal_list_osizet_t
@{
  size_t v;                       /* The actual value. */
  float  s;                       /* The parameter to sort by. */
  struct gal_list_osizet_t *next;
@} gal_list_osizet_t;
@end example
@end deftp

@deftypefun void gal_list_osizet_add (gal_list_osizet_t @code{**list}, size_t @code{value}, float @code{tosort})
Allocate space for a new node in @code{list}, and store @code{value} and
@code{tosort} into it. The new node will not necessarily be at the ``top''
of the list. If @code{*list!=NULL}, then the @code{tosort} values of
existing nodes are inspected and the given node is placed in the list such
that the top element (which is popped with @code{gal_list_osizet_pop}) has
the smallest @code{tosort} value.
@end deftypefun

@deftypefun size_t gal_list_osizet_pop (gal_list_osizet_t @code{**list}, float @code{*sortvalue})
Pop a node from the top of @code{list}, return the node's @code{value} and
put its sort value in the space that @code{sortvalue} points to. This
function will also free the allocated space for the popped node and after
this function, @code{list} will point to the next node (which has a larger
@code{tosort} element).
@end deftypefun
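
For example, the following sketch (with arbitrary pixel indexes and
distances) uses this list as a simple priority queue: no matter the
insertion order, popping always returns the node with the smallest sorting
value first.

@example
float dist;
size_t index;
gal_list_osizet_t *queue=NULL;

/* Insertion order is irrelevant, the nodes are kept sorted. */
gal_list_osizet_add(&queue, 100, 3.2f);
gal_list_osizet_add(&queue, 57,  0.8f);
gal_list_osizet_add(&queue, 23,  1.9f);

/* Popped in ascending order: 57 (0.8), 23 (1.9), 100 (3.2). */
while(queue)
  @{
    index=gal_list_osizet_pop(&queue, &dist);
    printf("pixel %zu at distance %g\n", index, dist);
  @}
@end example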

@deftypefun void gal_list_osizet_to_sizet_free (gal_list_osizet_t @code{*in}, gal_list_sizet_t @code{**out})
Convert the ordered list of @code{size_t}s into an ordinary @code{size_t}
linked list. This can be useful when all the elements have been added and
you just need to pop-out elements and don't care about the sorting values
any more. After the conversion is done, this function will free the input
list. Note that the @code{out} list doesn't have to be empty. If it already
contains some nodes, the new nodes will be added on top of them.
@end deftypefun


@node Doubly linked ordered list of size_t, List of gal_data_t, Ordered list of size_t, Linked lists
@subsubsection Doubly linked ordered list of @code{size_t}

An ordered list of indexes is required in many contexts; one example was
discussed at the beginning of @ref{Ordered list of size_t}. But the list
that was introduced there only has one point of entry: you can always only
parse the list from smallest to largest. In this section, the doubly-linked
@code{gal_list_dosizet_t} node is defined which will allow us to parse the
values in ascending or descending order.

@deftp {Type (C @code{struct})} gal_list_dosizet_t
@cindex @code{size_t}

Doubly-linked, ordered @code{size_t} list node structure. Each node in this
doubly-linked list contains a @code{size_t} value and a floating point
value. The floating point value is used as a reference to add new nodes in
a sorted manner. In the functions here, this linked list can be pointed to
by two pointers (largest and smallest) with the following format:
@example
            largest pointer
            |
   NULL <-- (v0,s0) <--> (v1,s1) <--> ... (vn,sn) --> NULL
                                          |
                           smallest pointer
@end example
At any moment, the two pointers will point to the nodes containing the
``largest'' and ``smallest'' values and the rest of the nodes will be
sorted. This is useful when an unknown number of nodes are being added
continuously and during the operations it is important to have the nodes in
a sorted format.

@example
typedef struct gal_list_dosizet_t
@{
  size_t v;                       /* The actual value. */
  float s;                        /* The parameter to sort by. */
  struct gal_list_dosizet_t *prev;
  struct gal_list_dosizet_t *next;
@} gal_list_dosizet_t;
@end example
@end deftp

@deftypefun void gal_list_dosizet_add (gal_list_dosizet_t @code{**largest}, gal_list_dosizet_t @code{**smallest}, size_t @code{value}, float @code{tosort})
Allocate space for a new node in the doubly linked list, and store @code{value} and
@code{tosort} into it. If the list is empty, both @code{largest} and
@code{smallest} must be @code{NULL}.
@end deftypefun

@deftypefun size_t gal_list_dosizet_pop_smallest (gal_list_dosizet_t @code{**largest}, gal_list_dosizet_t @code{**smallest}, float @code{*tosort})
Pop the value with the smallest reference from the doubly linked list and
store the reference into the space pointed to by @code{tosort}. Note that
even though only the smallest pointer will be popped, when there is only
one node in the list, the @code{largest} pointer also has to change, so we
need both.
@end deftypefun
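
As a minimal sketch (with arbitrary values), the two pointers are
initialized to @code{NULL} and passed to every call together:

@example
float s;
size_t v;
gal_list_dosizet_t *largest=NULL, *smallest=NULL;

/* Add elements in any order, the list stays sorted. */
gal_list_dosizet_add(&largest, &smallest, 10, 5.0f);
gal_list_dosizet_add(&largest, &smallest, 20, 2.0f);
gal_list_dosizet_add(&largest, &smallest, 30, 8.0f);

/* Pop in ascending order: 20 (2.0), 10 (5.0), 30 (8.0). */
while(smallest)
  @{
    v=gal_list_dosizet_pop_smallest(&largest, &smallest, &s);
    printf("%zu (sort value %g)\n", v, s);
  @}
@end example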

@deftypefun void gal_list_dosizet_print (gal_list_dosizet_t @code{*largest}, gal_list_dosizet_t @code{*smallest})
Print the largest and smallest values sequentially until the list is
parsed.
@end deftypefun


@deftypefun void gal_list_dosizet_to_sizet (gal_list_dosizet_t @code{*in}, gal_list_sizet_t @code{**out})
Convert the doubly linked, ordered @code{size_t} list into a singly-linked
list of @code{size_t}.
@end deftypefun

@deftypefun void gal_list_dosizet_free (gal_list_dosizet_t @code{*largest})
Free the doubly linked, ordered @code{size_t} list.
@end deftypefun


@node List of gal_data_t,  , Doubly linked ordered list of size_t, Linked lists
@subsubsection List of @code{gal_data_t}

Gnuastro's generic data container has a @code{next} element which enables
it to be used as a singly-linked list (see @ref{Generic data
container}). The ability to connect the different data containers offers
great advantages. For example each column in a table in an independent
dataset: with its own name, units, numeric data type (see @ref{Numeric data
types}). Another application is in Tessellating an input dataset into
separate tiles or only studying particular regions, or tiles, of a larger
dataset (see @ref{Tessellation} and @ref{Tessellation library}). Each
independent tile over the dataset can be connected to the others as a
linked list and thus any number of tiles can be represented with one
variable.

@deftypefun void gal_list_data_add (gal_data_t @code{**list}, gal_data_t @code{*newnode})
Add an already allocated dataset (@code{newnode}) to top of
@code{list}. Note that if @code{newnode->next!=NULL} (@code{newnode} is
itself a list), then @code{list} will be added to its end.

In this example multiple images are linked together as a list:
@example
int quietmmap=1;
size_t minmapsize=-1;
gal_data_t *tmp, *list=NULL;
tmp = gal_fits_img_read("file1.fits", "1", minmapsize, quietmmap);
gal_list_data_add( &list, tmp );
tmp = gal_fits_img_read("file2.fits", "1", minmapsize, quietmmap);
gal_list_data_add( &list, tmp );
@end example
@end deftypefun

@deftypefun void gal_list_data_add_alloc (gal_data_t @code{**list}, void @code{*array}, uint8_t @code{type}, size_t @code{ndim}, size_t @code{*dsize}, struct wcsprm @code{*wcs}, int @code{clear}, size_t @code{minmapsize}, int @code{quietmmap}, char @code{*name}, char @code{*unit}, char @code{*comment})
Allocate a new dataset (with @code{gal_data_alloc} in @ref{Dataset
allocation}) and put it as the first element of @code{list}. Note that if
this is the first node to be added to the list, @code{list} must be
@code{NULL}.
@end deftypefun

@deftypefun {gal_data_t *} gal_list_data_pop (gal_data_t @code{**list})
Pop the top node from @code{list} and return it.
@end deftypefun

@deftypefun void gal_list_data_reverse (gal_data_t @code{**list})
Reverse the order of the list such that the top node in the list before
calling this function becomes the bottom node after it.
@end deftypefun

@deftypefun {gal_data_t **} gal_list_data_to_array_ptr (gal_data_t @code{*list}, size_t @code{*num})
Allocate and return an array of @code{gal_data_t *} pointers with the same
number of elements as the nodes in @code{list}. The pointers will be put in
the same order that the list is parsed. Hence the N-th element in the array
will point to the same dataset that the N-th node in the list points to.
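
For example (assuming @code{list} is an already-filled list of datasets),
random access becomes possible without repeatedly parsing the list:

@example
size_t i, num;
gal_data_t **dataptrs=gal_list_data_to_array_ptr(list, &num);

/* Access any node directly, for example the last one. */
printf("Last dataset has %zu dimension(s).\n", dataptrs[num-1]->ndim);

/* Only the array of pointers was allocated here, not the datasets
   themselves, so only the array should be freed. */
free(dataptrs);
@end example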
@end deftypefun

@deftypefun size_t gal_list_data_number (gal_data_t @code{*list})
Return the number of nodes in @code{list}.
@end deftypefun

@deftypefun {gal_data_t *} gal_list_data_last (gal_data_t @code{*list})
Return a pointer to the last node in @code{list}.
@end deftypefun

@deftypefun void gal_list_data_free (gal_data_t @code{*list})
Free all the datasets in @code{list} along with all the allocated spaces in
each.
@end deftypefun





@node Array input output, Table input output, Linked lists, Gnuastro library
@subsection Array input output

Getting arrays (commonly images or cubes) from a file into your program or
writing them after the processing into an output file are some of the most
common operations. The functions in this section are designed for such
operations with the known file types. The functions here are thus just
wrappers around the lower-level, file-type specific functions of this
library, for example @ref{FITS files} or @ref{TIFF files}. If the file type
of the input/output file is already known, you can use the functions in
those sections respectively.

@deftypefun int gal_array_name_recognized (char @code{*filename})
Return 1 if the given file name corresponds to one of the recognized file
types for reading arrays.
@end deftypefun

@deftypefun int gal_array_name_recognized_multiext (char @code{*filename})
Return 1 if the given file name corresponds to one of the recognized file
types for reading arrays which may contain multiple extensions (for example
the FITS or TIFF formats).
@end deftypefun

@deftypefun {gal_data_t *} gal_array_read (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap})
Read the array within the given extension (@code{extension}) of
@code{filename}, or the @code{lines} list (see below). If the array is
larger than @code{minmapsize} bytes, then it won't be read into RAM, but
into a file on the HDD/SSD (with no difference for the programmer).
Messages about the
memory-mapped file can be disabled with @code{quietmmap}.

@code{extension} will be ignored for files that don't support extensions
(for example JPEG or text). For FITS files, @code{extension} can be a number
or a string (name of the extension), but for TIFF files, it has to be a
number. In both cases, counting starts from zero.

For multi-channel formats (like RGB images in JPEG or TIFF), this function
will return a @ref{List of gal_data_t}: one data structure per
channel. Thus if you just want a single array (and want to check if the
user hasn't given a multi-channel input), you can check the @code{next}
pointer of the returned @code{gal_data_t}.

@code{lines} is a list of strings with each node representing one line
(including the new-line character), see @ref{List of strings}. It will
mostly be the output of @code{gal_txt_stdin_read}, which is used to read
the program's input as separate lines from the standard input (see
@ref{Text files}). Note that @code{filename} and @code{lines} are mutually
exclusive and one of them must be @code{NULL}.
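
For example, the following sketch (the file name @file{input.jpg} is
arbitrary) reads a possibly multi-channel image and parses the returned
list:

@example
size_t nch;
gal_data_t *channel;
gal_data_t *image=gal_array_read("input.jpg", "1", NULL, -1, 1);

/* Count the returned channels. */
nch=gal_list_data_number(image);
printf("input.jpg contains %zu channel(s).\n", nch);

/* Parse the channels (only one node for single-channel data). */
for(channel=image; channel!=NULL; channel=channel->next)
  @{
    /* ... process 'channel->array' ... */
  @}

/* Clean up all the channels. */
gal_list_data_free(image);
@end example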
@end deftypefun

@deftypefun {gal_data_t *} gal_array_read_to_type (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_array_read}, but the output data structure(s) will
have a numeric data type of @code{type}, see @ref{Numeric data types}.
@end deftypefun

@deftypefun {gal_data_t *} gal_array_read_one_ch (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap})
@cindex Channel
@cindex Color channel
Read the dataset within @code{filename} (extension/hdu/dir
@code{extension}) and make sure it is only a single channel. This is just a
simple wrapper around @code{gal_array_read} that checks the number of
returned channels and aborts with an informative error if the dataset
contains more than one channel.

Formats like JPEG or TIFF support multiple channels per input, but it may
happen that your program only works on a single dataset. This function can
be a convenient way to make sure that the data that comes into your program
is only one channel.
@end deftypefun

@deftypefun {gal_data_t *} gal_array_read_one_ch_to_type (char @code{*filename}, char @code{*extension}, gal_list_str_t @code{*lines}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap})
Similar to @code{gal_array_read_one_ch}, but the output data structure will
have a numeric data type of @code{type}, see @ref{Numeric data types}.
@end deftypefun

@node Table input output, FITS files, Array input output, Gnuastro library
@subsection Table input output (@file{table.h})

Tables are a collection of one dimensional datasets that are packed
together into one file. They are the single most common format to store
high-level (processed) information, hence they play a very important role
in Gnuastro. For a more thorough introduction, please see
@ref{Table}. Gnuastro's Table program, and all the other programs that can
read from and write into tables, use the functions of this section for
reading and writing their input/output tables. For a simple demonstration
of using the constructs introduced here, see @ref{Library demo - reading
and writing table columns}.

Currently only plain text (see @ref{Gnuastro text table format}) and FITS
(ASCII and binary) tables are supported by Gnuastro. However, the low-level
table infrastructure is written such that accommodating other formats is
also possible and in future releases more formats will hopefully be
supported. Please don't hesitate to suggest your favorite format so it can
be implemented when possible.

@deffn  Macro GAL_TABLE_DEF_WIDTH_STR
@deffnx Macro GAL_TABLE_DEF_WIDTH_INT
@deffnx Macro GAL_TABLE_DEF_WIDTH_LINT
@deffnx Macro GAL_TABLE_DEF_WIDTH_FLT
@deffnx Macro GAL_TABLE_DEF_WIDTH_DBL
@deffnx Macro GAL_TABLE_DEF_PRECISION_INT
@deffnx Macro GAL_TABLE_DEF_PRECISION_FLT
@deffnx Macro GAL_TABLE_DEF_PRECISION_DBL
@cindex @code{printf}
The default width and precision for generic types to use in writing numeric
types into a text file (plain text and FITS ASCII tables). When the dataset
doesn't have any pre-set width and precision (see @code{disp_width} and
@code{disp_precision} in @ref{Generic data container}) these will be
directly used in C's @code{printf} command to write the number as a string.
@end deffn

@deffn  Macro GAL_TABLE_DISPLAY_FMT_STRING
@deffnx Macro GAL_TABLE_DISPLAY_FMT_DECIMAL
@deffnx Macro GAL_TABLE_DISPLAY_FMT_UDECIMAL
@deffnx Macro GAL_TABLE_DISPLAY_FMT_OCTAL
@deffnx Macro GAL_TABLE_DISPLAY_FMT_HEX
@deffnx Macro GAL_TABLE_DISPLAY_FMT_FLOAT
@deffnx Macro GAL_TABLE_DISPLAY_FMT_EXP
@deffnx Macro GAL_TABLE_DISPLAY_FMT_GENERAL
The display format used in C's @code{printf} to display data of different
types. The @code{_STRING} and @code{_DECIMAL} formats are unique for
printing strings and signed integers; they are mainly here for
completeness. However, unsigned integers and floating points can be
displayed in multiple formats:

@table @asis
@item Unsigned integer
For unsigned integers, it is possible to choose from @code{_UDECIMAL}
(unsigned decimal), @code{_OCTAL} (octal notation, for example @code{125}
in decimal will be displayed as @code{175}), and @code{_HEX} (hexadecimal
notation, for example @code{125} in decimal will be displayed as
@code{7D}).

@item Floating point
For floating point, it is possible to display the number in @code{_FLOAT}
(floating point, for example @code{1500.345}), @code{_EXP} (exponential,
for example @code{1.500345e+03}), or @code{_GENERAL} which chooses the more
compact of the two for the given number.
@end table
@end deffn

@deffn  Macro GAL_TABLE_FORMAT_INVALID
@deffnx Macro GAL_TABLE_FORMAT_TXT
@deffnx Macro GAL_TABLE_FORMAT_AFITS
@deffnx Macro GAL_TABLE_FORMAT_BFITS
All the current acceptable table formats to Gnuastro. The @code{AFITS} and
@code{BFITS} represent FITS ASCII tables and FITS Binary tables. You can
use these anywhere you see the @code{tableformat} variable.
@end deffn

@deffn  Macro GAL_TABLE_SEARCH_INVALID
@deffnx Macro GAL_TABLE_SEARCH_NAME
@deffnx Macro GAL_TABLE_SEARCH_UNIT
@deffnx Macro GAL_TABLE_SEARCH_COMMENT
When the desired column is not identified by number, these values determine
whether the string to match (or regular expression to search for) should be
in the @emph{name}, @emph{units} or @emph{comments} of the column metadata.
These values should be used for the @code{searchin} variables of the
functions.
@end deffn

@deftypefun {gal_data_t *} gal_table_info (char @code{*filename}, char @code{*hdu}, gal_list_str_t @code{*lines}, size_t @code{*numcols}, size_t @code{*numrows}, int @code{*tableformat})
Store the information of each column of a table into an array of data
structures with @code{numcols} datasets (one data structure for each
column). The number of rows is stored in @code{numrows}. The format of the
table (e.g., ASCII text file, or FITS binary or ASCII table) will be put in
@code{tableformat} (macros defined above). If the @code{filename} is not a
FITS file, then @code{hdu} will not be used (can be @code{NULL}).

The input must be either a file (specified by @code{filename}) or a list of
strings (@code{lines}). @code{lines} is a list of strings with each node
representing one line (including the new-line character), see @ref{List of
strings}. It will mostly be the output of @code{gal_txt_stdin_read}, which
is used to read the program's input as separate lines from the standard
input (see @ref{Text files}). Note that @code{filename} and @code{lines}
are mutually exclusive and one of them must be @code{NULL}.

In the output datasets, only the meta-data strings (column name, units and
comments) will be allocated and set. This function is just for column
information (meta-data), not column contents.
@end deftypefun

@deftypefun void gal_table_print_info (gal_data_t @code{*allcols}, size_t @code{numcols}, size_t @code{numrows})
This function will print the column information for all the columns (the
output of @code{gal_table_info}). The output is in the same format as the
following command of Gnuastro's Table program (see @ref{Table}):
@example
$ asttable --info table.fits
@end example
@end deftypefun

@deftypefun {gal_data_t *} gal_table_read (char @code{*filename}, char @code{*hdu}, gal_list_str_t @code{*lines}, gal_list_str_t @code{*cols}, int @code{searchin}, int @code{ignorecase}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{*colmatch})

Read the specified columns in a file (named @code{filename}), or list of
strings (@code{lines}) into a linked list of data structures. If the file
is FITS, then @code{hdu} will also be used, otherwise, @code{hdu} is
ignored.

@code{lines} is a list of strings with each node representing one line
(including the new-line character), see @ref{List of strings}. It will
mostly be the output of @code{gal_txt_stdin_read}, which is used to read
the program's input as separate lines from the standard input (see
@ref{Text files}). Note that @code{filename} and @code{lines} are mutually
exclusive and one of them must be @code{NULL}.

@cindex AWK
@cindex GNU AWK
The information to search for columns should be specified by the
@code{cols} list of strings (see @ref{List of strings}). The string in each
node of the list may be a number, an exact match to a column name, or a
regular expression (in GNU AWK format) enclosed in @code{/ /}. The
@code{searchin} value must be one of the macros defined above. If
@code{cols} is NULL, then this function will read the full table.

The output is an individually allocated list of datasets (see @ref{List of
gal_data_t}) in the same order as the @code{cols} list. Note that one
column node in the @code{cols} list might match multiple columns (for
example through regular expressions); in this case, the output columns
that correspond to that one input are in the order of the table (the
column that was read first comes first). So the first requested column is
the first popped data structure and so on.

If @code{colmatch!=NULL}, it is assumed to be an array that has at least the
same number of elements as nodes in the @code{cols} list. The number of
columns that matched each input column will be stored in each element.
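
For example, the following sketch (assuming @file{catalog.fits} has
columns named @code{RA} and @code{DEC} in its first extension) reads two
columns by name:

@example
size_t colmatch[2];
gal_data_t *ra, *dec, *columns;
gal_list_str_t *cols=NULL;

/* Build the column list: lists are last-in-first-out, so add
   the names in reverse of the desired order. */
gal_list_str_add(&cols, "DEC", 1);
gal_list_str_add(&cols, "RA",  1);

/* Read the columns (not case-sensitive, no memory-mapping). */
columns=gal_table_read("catalog.fits", "1", NULL, cols,
                       GAL_TABLE_SEARCH_NAME, 1, -1, 1, colmatch);

/* The output has the same order as 'cols'. */
ra  = columns;
dec = columns->next;

/* ... use 'ra' and 'dec' ..., then clean up. */
gal_list_data_free(columns);
gal_list_str_free(cols, 1);
@end example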
@end deftypefun

@deftypefun {gal_list_sizet_t *} gal_table_list_of_indexs (gal_list_str_t @code{*cols}, gal_data_t @code{*allcols}, size_t @code{numcols}, int @code{searchin}, int @code{ignorecase}, char @code{*filename}, char @code{*hdu}, size_t @code{*colmatch})
Return a list of indexes (starting from 0) of the input columns that match
the names/numbers given to @code{cols}. This is a low-level operation which
is called by @code{gal_table_read} (described above), see there for more on
each argument's description. @code{allcols} is the returned array of
@code{gal_table_info}.
@end deftypefun

@cindex Git
@deftypefun void gal_table_comments_add_intro (gal_list_str_t @code{**comments}, char @code{*program_string}, time_t @code{*rawtime})
Add some basic information to the list of @code{comments}. This basic
information includes the following:
@itemize
@item
If the program is run in a Git version controlled directory, Git's
description is printed (see description under @code{COMMIT} in @ref{Output
FITS files}).
@item
The calendar time that is stored in @code{rawtime} (@code{time_t} is C's
calendar time format defined in @file{time.h}). You can calculate the time
in this format with the following expressions:
@example
time_t rawtime;
time(&rawtime);
@end example
@item
The name of your program in @code{program_string}. If it is @code{NULL},
this line is ignored.
@end itemize
@end deftypefun

@deftypefun void gal_table_write (gal_data_t @code{*cols}, struct gal_fits_list_key_t @code{**keywords}, gal_list_str_t @code{*comments}, int @code{tableformat}, char @code{*filename}, char @code{*extname}, uint8_t @code{colinfoinstdout})

Write @code{cols} (a list of datasets, see @ref{List of gal_data_t}) into a
table stored in @code{filename}. The format of the table can be determined
with @code{tableformat} that accepts the macros defined above. When
@code{filename==NULL}, the column information will be printed on the
standard output (command-line).

If @code{comments!=NULL}, the list of comments (see @ref{List of strings})
will also be printed into the output table. When the output table is a
plain text file, every node of @code{comments} will be printed after a
@code{#} (so it can be considered as a comment) and in FITS table they will
follow a @code{COMMENT} keyword.

If a file named @code{filename} already exists, the operation depends on
the type of output. When @code{filename} is a FITS file, the table will be
added as a new extension after all existing extensions. If @code{filename}
is a plain text file, this function will abort with an error.

If @code{filename} is a FITS file, the table extension will have the name
@code{extname}.

When @code{colinfoinstdout!=0} and @code{filename==NULL} (columns are
printed in the standard output), the dataset metadata will also be printed
in the standard output. When printing to the standard output, the column
information can be piped into another program for further processing and
thus the meta-data (lines starting with a @code{#}) must be ignored. In
such cases, you can print only the column values by passing @code{0} to
@code{colinfoinstdout}.
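
For example, the following sketch (the output name, column name and values
are all arbitrary) writes one double-precision column into a FITS binary
table:

@example
double values[3]=@{1.0, 2.0, 3.0@};
size_t dsize=3;
gal_data_t *col;

/* Allocate the column and copy the values into it. */
col=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &dsize, NULL, 0,
                   -1, 1, "MYCOL", "counts", "A demo column");
memcpy(col->array, values, 3*sizeof(double));

/* Write it as a FITS binary table into a new extension. */
gal_table_write(col, NULL, NULL, GAL_TABLE_FORMAT_BFITS,
                "out.fits", "DEMO-TABLE", 0);

gal_data_free(col);
@end example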
@end deftypefun

@deftypefun void gal_table_write_log (gal_data_t @code{*logll}, char @code{*program_string}, time_t @code{*rawtime}, gal_list_str_t @code{*comments}, char @code{*filename}, int @code{quiet})
Write the @code{logll} list of datasets into a table in @code{filename}
(see @ref{List of gal_data_t}). This function is just a wrapper around
@code{gal_table_comments_add_intro} and @code{gal_table_write} (see
above). If @code{quiet} is non-zero, this function will not print a message
saying that @code{filename} has been created.
@end deftypefun





@node FITS files, File input output, Table input output, Gnuastro library
@subsection FITS files (@file{fits.h})

@cindex FITS
@cindex CFITSIO
The FITS format is the most common format to store data (images and tables)
in astronomy. The CFITSIO library already provides a very good low-level
collection of functions for manipulating FITS data. The low-level nature of
CFITSIO is defined for versatility and portability. As a result, even a
simple and basic operation, like reading an image or table column into
memory, will require a special sequence of CFITSIO function calls which can
be inconvenient and error-prone to manage in separate locations. To ease
this process, Gnuastro's library provides wrappers for CFITSIO functions.
With these, it is much easier to read, write, or modify FITS file data,
header keywords and extensions. Hence, if you feel these functions don't
exactly do what you want, we strongly recommend reading the CFITSIO manual
to use its great features directly (afterwards, send us your wrappers so we
can include them here for others to benefit also).

All the functions and macros introduced in this section are declared in
@file{gnuastro/fits.h}.  When you include this header, you are also
including CFITSIO's @file{fitsio.h} header. So you don't need to explicitly
include @file{fitsio.h} anymore and can freely use any of its macros or
functions in your code along with those discussed here.


@menu
* FITS macros errors filenames::  General macros, errors and checking names.
* CFITSIO and Gnuastro types::  Conversion between FITS and Gnuastro types.
* FITS HDUs::                   Opening and getting information about HDUs.
* FITS header keywords::        Reading and writing FITS header keywords.
* FITS arrays::                 Reading and writing FITS images/arrays.
* FITS tables::                 Reading and writing FITS tables.
@end menu

@node FITS macros errors filenames, CFITSIO and Gnuastro types, FITS files, FITS files
@subsubsection FITS Macros, errors and filenames

Some general constructs provided by Gnuastro's FITS handling functions are
discussed here. In particular there are several useful functions about FITS
file names.

@deffn Macro GAL_FITS_MAX_NDIM
The maximum number of dimensions a dataset can have in FITS format,
according to the FITS standard this is 999.
@end deffn

@deftypefun void gal_fits_io_error (int @code{status}, char @code{*message})
If @code{status} is non-zero, this function will print the CFITSIO error
message corresponding to @code{status}, print @code{message} (optional) in the
next line and abort the program. If @code{message==NULL}, it will print a
default string after the CFITSIO error.
@end deftypefun

@deftypefun int gal_fits_name_is_fits (char @code{*name})
If the @code{name} is an acceptable CFITSIO FITS filename return @code{1}
(one), otherwise return @code{0} (zero). The currently acceptable FITS
suffixes are @file{.fits}, @file{.fit}, @file{.fits.gz}, @file{.fits.Z},
@file{.imh}, @file{.fits.fz}. IMH is the IRAF format which is acceptable to
CFITSIO.
@end deftypefun

@deftypefun int gal_fits_suffix_is_fits (char @code{*suffix})
Similar to @code{gal_fits_name_is_fits}, but only for the suffix. The
suffix doesn't have to start with `@key{.}': this function will return
@code{1} (one) for both @code{fits} and @code{.fits}.
@end deftypefun

@deftypefun {char *} gal_fits_name_save_as_string (char @code{*filename}, char @code{*hdu})
If the name is a FITS name, then put a @code{(hdu: ...)} after it and
return the string. If it isn't a FITS file, just return a copy of the name;
if @code{filename==NULL}, then return the string @code{stdin}. Note that the
output string's space is allocated in all cases.

This function is useful when you want to report an arbitrary file to the user
which may be FITS or not (for a FITS file, simply the filename is not
enough, the HDU is also necessary).
@end deftypefun


@node CFITSIO and Gnuastro types, FITS HDUs, FITS macros errors filenames, FITS files
@subsubsection CFITSIO and Gnuastro types

Both Gnuastro and CFITSIO have special identifiers for each type that they
accept. Gnuastro's type identifiers are fully described in @ref{Library
data types} and are usable for all kinds of datasets (images, table columns,
etc) as part of Gnuastro's @ref{Generic data container}. However,
following the FITS standard, CFITSIO has different identifiers for images
and tables. Following CFITSIO's own convention, we will use @code{bitpix}
for image type identifiers and @code{datatype} for its internal identifiers
(and mainly used in tables). The functions introduced in this section can
be used to convert between CFITSIO and Gnuastro's type identifiers.

One important issue to consider is that CFITSIO's types are not fixed width
(for example @code{long} may be 32-bits or 64-bits on different
systems). However, Gnuastro's types are defined by their width. These
functions will use information on the host system to do the proper
conversion. To have portable (usable on different systems) code, it is thus
recommended to use these functions and not to assume a fixed correspondence
between CFITSIO and Gnuastro's types.

@deftypefun uint8_t gal_fits_bitpix_to_type (int @code{bitpix})
Return the Gnuastro type identifier that corresponds to CFITSIO's
@code{bitpix} on this system.
@end deftypefun

@deftypefun int gal_fits_type_to_bitpix (uint8_t @code{type})
Return the CFITSIO @code{bitpix} value that corresponds to Gnuastro's
@code{type}.
@end deftypefun

@deftypefun char gal_fits_type_to_bin_tform (uint8_t @code{type})
Return the FITS standard binary table @code{TFORM} character that
corresponds to Gnuastro's @code{type}.
@end deftypefun

@deftypefun int gal_fits_type_to_datatype (uint8_t @code{type})
Return the CFITSIO @code{datatype} that corresponds to Gnuastro's
@code{type} on this machine.
@end deftypefun

@deftypefun uint8_t gal_fits_datatype_to_type (int @code{datatype}, int @code{is_table_column})
Return Gnuastro's type identifier that corresponds to the CFITSIO
@code{datatype}. Note that when dealing with CFITSIO's @code{TLONG}, the
fixed width type differs between tables and images. So if the corresponding
dataset is a table column, put a non-zero value into
@code{is_table_column}.
@end deftypefun
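
As a short sketch of these conversions (@code{TLONG} and the other CFITSIO
macros come from @file{fitsio.h}, which is included by
@file{gnuastro/fits.h}):

@example
/* Gnuastro to CFITSIO (for an image) and back again. */
int     bitpix = gal_fits_type_to_bitpix(GAL_TYPE_FLOAT32);
uint8_t type   = gal_fits_bitpix_to_type(bitpix);

/* For a table column, the width of CFITSIO's 'TLONG' depends on
   the host system, hence the last ('is_table_column') argument. */
uint8_t coltype = gal_fits_datatype_to_type(TLONG, 1);
@end example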

@node FITS HDUs, FITS header keywords, CFITSIO and Gnuastro types, FITS files
@subsubsection FITS HDUs

A FITS file can contain multiple HDUs/extensions. The functions in this
section can be used to get basic information about the extensions or open
them. Note that @code{fitsfile} is defined in CFITSIO's @code{fitsio.h}
which is automatically included by Gnuastro's @file{gnuastro/fits.h}.

@deftypefun {fitsfile *} gal_fits_open_to_write (char @code{*filename})
If @file{filename} exists, open it and return the @code{fitsfile} pointer
that corresponds to it. If @file{filename} doesn't exist, a new file
containing a blank first extension will be created and the pointer to its
next extension will be returned.
@end deftypefun

@deftypefun size_t gal_fits_hdu_num (char @code{*filename})
Return the number of HDUs/extensions in @file{filename}.
@end deftypefun

@deftypefun {unsigned long} gal_fits_hdu_datasum (char @code{*filename}, char @code{*hdu})
@cindex @code{DATASUM}: FITS keyword
Return the @code{DATASUM} of the given HDU in the given FITS file.
For more on @code{DATASUM} in the FITS standard, see @ref{Keyword manipulation} (under the @code{checksum} component of @option{--write}).
@end deftypefun

@deftypefun {unsigned long} gal_fits_hdu_datasum_ptr (fitsfile @code{*fptr})
@cindex @code{DATASUM}: FITS keyword
Return the @code{DATASUM} of the already opened HDU in @code{fptr}.
For more on @code{DATASUM} in the FITS standard, see @ref{Keyword manipulation} (under the @code{checksum} component of @option{--write}).
@end deftypefun

@deftypefun int gal_fits_hdu_format (char @code{*filename}, char @code{*hdu})
Return the format of the HDU as one of CFITSIO's recognized macros:
@code{IMAGE_HDU}, @code{ASCII_TBL}, or @code{BINARY_TBL}.
@end deftypefun

@deftypefun int gal_fits_hdu_is_healpix (fitsfile @code{*fptr})
@cindex HEALPix
Return @code{1} if the dataset may be a HEALPix grid and @code{0} otherwise.
Technically, it is considered to be HEALPix if the HDU isn't an ASCII table, and has the @code{NSIDE}, @code{FIRSTPIX} and @code{LASTPIX} keywords.
@end deftypefun

@deftypefun {fitsfile *} gal_fits_hdu_open (char @code{*filename}, char @code{*hdu}, int @code{iomode})
Open the HDU/extension @code{hdu} from @file{filename} and return a pointer
to CFITSIO's @code{fitsfile}. @code{iomode} determines how the FITS file
will be opened using CFITSIO's macros: @code{READONLY} or @code{READWRITE}.

The string in @code{hdu} will be appended to @file{filename} in square
brackets so CFITSIO only opens this extension. You can use any formatting
for the @code{hdu} that is acceptable to CFITSIO. See the description under
@option{--hdu} in @ref{Input output options} for more.
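
For example, the following sketch (file and extension names are arbitrary)
opens one extension in read-only mode, and cleanly closes it with CFITSIO:

@example
int status=0;
fitsfile *fptr;

/* Open the extension named 'SCI' of 'image.fits'. */
fptr=gal_fits_hdu_open("image.fits", "SCI", READONLY);

/* ... use any CFITSIO function on 'fptr' ... */

/* Close the file, aborting with a report on any error. */
fits_close_file(fptr, &status);
gal_fits_io_error(status, NULL);
@end example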
@end deftypefun

@deftypefun {fitsfile *} gal_fits_hdu_open_format (char @code{*filename}, char @code{*hdu}, int @code{img0_tab1})
Open (in read-only format) the @code{hdu} HDU/extension of @file{filename}
as an image or table. When @code{img0_tab1} is @code{0} (zero) but the HDU
is a table, this function will abort with an error. It will also abort with
an error when @code{img0_tab1} is @code{1} (one), but the HDU is an
image.

A FITS HDU may contain either a table or an image. When your program needs
one of these formats, you can call this function so if the user provided
the wrong HDU/file, it will abort and inform the user that the file/HDU
has the wrong format.
@end deftypefun




@node FITS header keywords, FITS arrays, FITS HDUs, FITS files
@subsubsection FITS header keywords

Each FITS extension/HDU contains a raw dataset which can either be a table
or an image along with some header keywords. The keywords can be used to
store meta-data about the actual dataset. The functions in this section
describe Gnuastro's high-level functions for reading and writing FITS
keywords. Similar to all Gnuastro's FITS-related functions, these functions
are all wrappers for CFITSIO's low-level functions.

The necessary meta-data (header keywords) for a particular dataset are
commonly numerous, so it is much more efficient to list them in one variable
and call the reading/writing functions once. Hence the functions in this
section use linked lists; a thorough introduction to them is given in
@ref{Linked lists}. To read FITS keywords, these functions use a list of
Gnuastro's generic dataset format that is discussed in @ref{List of
gal_data_t}. To write FITS keywords we use the
@code{gal_fits_list_key_t} node that is defined below.

@deftp {Type (C @code{struct})} gal_fits_list_key_t
@cindex Linked list
@cindex last-in-first-out
@cindex first-in-first-out
Structure for writing FITS keywords. This structure is used for one keyword
and you don't need to set all elements. With the @code{next} element, you
can link it to another keyword thus creating a linked list to add any
number of keywords easily and at any step during your program (see
@ref{Linked lists} for an introduction on lists). See the functions below
for adding elements to the list.

@example
typedef struct gal_fits_list_key_t
@{
  int                        tfree;   /* ==1, free title string.   */
  int                        kfree;   /* ==1, free keyword name.   */
  int                        vfree;   /* ==1, free keyword value.  */
  int                        cfree;   /* ==1, free comment.        */
  int                        ufree;   /* ==1, free unit.           */
  uint8_t                     type;   /* Keyword value type.       */
  char                      *title;   /* !=NULL, only print title. */
  char                    *keyname;   /* Keyword Name.             */
  void                      *value;   /* Keyword value.            */
  char                    *comment;   /* Keyword comment.          */
  char                       *unit;   /* Keyword unit.             */
  struct gal_fits_list_key_t *next;   /* Pointer next keyword.     */
@} gal_fits_list_key_t;
@end example
@end deftp

@deftypefun {void *} gal_fits_key_img_blank (uint8_t @code{type})
Returns a pointer to an allocated space containing the value to the FITS
@code{BLANK} header keyword, when the input array has a type of
@code{type}. This is useful when you want to write the @code{BLANK} keyword
using CFITSIO's @code{fits_write_key} function.

According to the FITS standard: ``If the @code{BSCALE} and @code{BZERO}
keywords do not have the default values of 1.0 and 0.0, respectively, then
the value of the @code{BLANK} keyword must equal the actual value in the
FITS data array that is used to represent an undefined pixel and not the
corresponding physical value''. Therefore a special @code{BLANK} value is
needed for datasets containing signed 8-bit, unsigned 16-bit, unsigned
32-bit, and unsigned 64-bit integers (types that are defined with
@code{BSCALE} and @code{BZERO} in the FITS standard).

@cartouche
@noindent
@strong{Not usable when reading a dataset:} As quoted from the FITS
standard above, the value returned by this function can only be generically
used for the writing of the @code{BLANK} keyword header. It @emph{must not}
be used as the blank pointer when reading a FITS array using
CFITSIO. When reading an array with CFITSIO, you can use
@code{gal_blank_alloc_write} to generate the necessary pointer.
@end cartouche
@end deftypefun

@deftypefun void gal_fits_key_clean_str_value (char @code{*string})
Remove the single quotes and possible extra spaces around the keyword values
that CFITSIO returns when reading a string keyword. CFITSIO doesn't remove
the two single quotes around the string value of a keyword. Hence the
strings it reads are like: @code{'value '}, or
@code{'some_very_long_value'}. To use the value during your processing, it
is commonly necessary to remove the single quotes (and possible extra
spaces). This function will do this within the allocated space of the
string.
@end deftypefun

@deftypefun {char *} gal_fits_key_date_to_struct_tm (char @code{*fitsdate}, struct tm @code{*tp})
@cindex Date: FITS format
Parse @code{fitsdate} as a FITS date format string (most generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) into the C library's broken-down time structure, or @code{struct tm} (declared in @file{time.h}) and return a pointer to a newly allocated array for the sub-second part of the format (@code{.ddd...}).
Therefore it needs to be freed afterwards (if it isn't @code{NULL}).
When there is no sub-second portion, this pointer will be @code{NULL}.

This is a relatively low-level function, an easier function to use is @code{gal_fits_key_date_to_seconds} which will return the sub-seconds as double precision floating point.

Note that the FITS date format mentioned above is the most complete representation.
The following two formats are also acceptable: @code{YYYY-MM-DDThh:mm:ss} and @code{YYYY-MM-DD}.
This function can also interpret the older FITS date format where only two characters are given to the year and the date format is reversed (@code{DD/MM/YYThh:mm:ss.ddd...}).
In this case (following the GNU C Library), this function will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 to the years 2000 to 2068.
@end deftypefun

@deftypefun size_t gal_fits_key_date_to_seconds (char @code{*fitsdate}, char @code{**subsecstr}, double @code{*subsec})
@cindex Unix epoch time
@cindex Epoch time, Unix
Return the Unix epoch time (number of seconds that have passed since 00:00:00 Thursday, January 1st, 1970) corresponding to the FITS date format string @code{fitsdate} (see description of @code{gal_fits_key_date_to_struct_tm} above).
This function will return @code{GAL_BLANK_SIZE_T} if the broken-down time couldn't be converted to seconds.

The Unix epoch time is in units of seconds, but the FITS date format allows sub-second accuracy.
The last two arguments are for the (optional) sub-second portion.
If @code{fitsdate} contains sub-second accuracy, then the starting of the sub-second part's string is stored in @code{subsecstr} (allocated separately), and @code{subsec} will be the corresponding numerical value (between 0 and 1, in double precision floating point).
So to avoid leaking memory, when @code{subsecstr!=NULL}, it must be freed.

This is a very useful function for operations on the FITS date values, for example sorting FITS files by their dates, or finding the time difference between two FITS files.
The advantage of working with the Unix epoch time is that you don't have to worry about calendar details (for example the number of days in different months, or leap years, etc).
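
For example, the following sketch (with an arbitrary date string) extracts
the epoch time and sub-second portion:

@example
double subsec;
char *subsecstr;
size_t seconds;

seconds=gal_fits_key_date_to_seconds("2020-03-01T10:15:30.125",
                                     &subsecstr, &subsec);
if(seconds==GAL_BLANK_SIZE_T)
  printf("Couldn't interpret the date string.\n");
else
  printf("Epoch time: %zu (+%f) seconds.\n", seconds, subsec);

/* 'subsecstr' was allocated, so free it when not NULL. */
if(subsecstr) free(subsecstr);
@end example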
@end deftypefun

@deftypefun void gal_fits_key_read_from_ptr (fitsfile @code{*fptr}, gal_data_t @code{*keysll}, int @code{readcomment}, int @code{readunit})

Read the list of keyword values from a FITS pointer. The input should be a
linked list of Gnuastro's generic data container
(@code{gal_data_t}). Before calling this function, you just have to set the
@code{name} and desired @code{type} values of each node in the list to the
keyword you want it to keep the value of. The given @code{name} value will
be directly passed to CFITSIO to read the desired keyword name. This
function will allocate space to keep the value. If @code{readcomment} and
@code{readunit} are non-zero, this function will also try to read the
possible comments and units of the keyword.

Here is one example of using this function:

@example
/* Allocate an array of datasets. */
gal_data_t *keysll=gal_data_array_calloc(N);

/* Make the array usable as a list too (by setting `next'). */
for(i=0;i<N-1;++i) keysll[i].next=&keysll[i+1];

/* Fill the datasets with a `name' and a `type'. */
keysll[0].name="NAME1";     keysll[0].type=GAL_TYPE_INT32;
keysll[1].name="NAME2";     keysll[1].type=GAL_TYPE_STRING;
...
...

/* Call this function. */
gal_fits_key_read_from_ptr(fptr, keysll, 0, 0);

/* Use the values as you like... */

/* Free all the allocated spaces. Note that `name' wasn't allocated
   in this example, so we should explicitly set it to NULL before
   calling `gal_data_array_free'. */
for(i=0;i<N;++i) keysll[i].name=NULL;
gal_data_array_free(keysll, N, 1);
@end example

If the @code{array} pointer of each keyword's dataset is not @code{NULL},
then it is assumed that the space to keep the value has already been
allocated. If it is @code{NULL}, space will be allocated for the value by
this function.

Strings need special consideration: the reason is that generally,
@code{gal_data_t} needs to also allow for arrays of strings (as it supports
arrays of integers for example). Hence when reading a string value, two
allocations may be done by this function (one if
@code{array!=NULL}).

Therefore, when using the values of strings after this function,
@code{keysll[i].array} must be interpreted as @code{char **}: one
allocation for the pointer, one for the actual characters. If you use
something like the example above, you don't have to worry about the
freeing: @code{gal_data_array_free} will free both allocations. So to read
a string, one easy way would be the following:

@example
char *str, **strarray;
strarray = keysll[i].array;
str      = strarray[0];
@end example

If CFITSIO is unable to read a keyword for any reason the @code{status} element of the respective @code{gal_data_t} will be non-zero.
If it is zero, then the keyword was found and successfully read.
Otherwise, it is a CFITSIO status value.
You can use CFITSIO's error reporting tools or @code{gal_fits_io_error} (see @ref{FITS macros errors filenames}) for reporting the reason of the failure.
A tip: when the keyword doesn't exist, CFITSIO's status value will be @code{KEY_NO_EXIST}.

CFITSIO will start searching for the keywords from the last place in the header that it searched for a keyword.
So it is much more efficient if the order that you ask for keywords is based on the order they are stored in the header.

@end deftypefun


@deftypefun void gal_fits_key_read (char @code{*filename}, char @code{*hdu}, gal_data_t @code{*keysll}, int @code{readcomment}, int @code{readunit})
Same as @code{gal_fits_key_read_from_ptr} (see above), but accepts the
filename and HDU as input instead of an already opened CFITSIO
@code{fitsfile} pointer.
@end deftypefun


@deftypefun void gal_fits_key_list_add (gal_fits_list_key_t @code{**list}, uint8_t @code{type}, char @code{*keyname}, int @code{kfree}, void @code{*value}, int @code{vfree}, char @code{*comment}, int @code{cfree}, char @code{*unit}, int @code{ufree})
Add a keyword to the top of list of header keywords that need to be written into a FITS file.
In the end, the keywords will have to be freed, so it is important to know before hand if they were allocated or not (hence the presence of the arguments ending in @code{free}).
If the space for the respective element is not allocated, set these arguments to @code{0} (zero).

You can call this function multiple times on a single list to add several keys that will be written in one call to @code{gal_fits_key_write} or @code{gal_fits_key_write_in_ptr}.
However, the resulting list will be a last-in-first-out list (for more on lists, see @ref{Linked lists}).
Hence, the written keys will have the inverse order of your calls to this function.
To avoid this problem, you can use @code{gal_fits_key_list_add_end} instead (which will add each key to the end of the list, not to the top like this function).
Alternatively, you can use @code{gal_fits_key_list_reverse} after adding all the keys with this function.

@strong{Important note for strings}: the value should be the pointer to the string its-self (@code{char *}), not a pointer to a pointer (@code{char **}).
@end deftypefun

@deftypefun void gal_fits_key_list_add_end (gal_fits_list_key_t @code{**list}, uint8_t @code{type}, char @code{*keyname}, int @code{kfree}, void @code{*value}, int @code{vfree}, char @code{*comment}, int @code{cfree}, char @code{*unit}, int @code{ufree})
Similar to @code{gal_fits_key_list_add}, but add the given keyword to the end of the list, see the description of @code{gal_fits_key_list_add} for more.
Use this function if you want the keywords to be written in the same order that you add nodes to the list of keywords.
@end deftypefun

@deftypefun void gal_fits_key_list_title_add (gal_fits_list_key_t @code{**list}, char @code{*title}, int @code{tfree})
Add a special ``title'' keyword (with the @code{title} string) to the top of the keywords list.
If @code{tfree} is non-zero, the space allocated for @code{title} will be freed immediately after writing the keyword (in another function).
@end deftypefun

@deftypefun void gal_fits_key_list_title_add_end (gal_fits_list_key_t @code{**list}, char @code{*title}, int @code{tfree})
Similar to @code{gal_fits_key_list_title_add}, but put the comments at the end of the list.
@end deftypefun

@deftypefun void gal_fits_key_list_comment_add (gal_fits_list_key_t @code{**list}, char @code{*comment}, int @code{fcfree})
Add a @code{COMMENT} keyword to the top of the keywords list.
If the comment is longer than 70 characters, CFITSIO will automatically break it into multiple @code{COMMENT} keywords.
If @code{fcfree} is non-zero, the space allocated for @code{comment} will be freed immediately after writing the keyword (in another function).
@end deftypefun

@deftypefun void gal_fits_key_list_comment_add_end (gal_fits_list_key_t @code{**list}, char @code{*comment}, int @code{fcfree})
Similar to @code{gal_fits_key_list_comment_add}, but put the comments at the end of the list.
@end deftypefun

@deftypefun void gal_fits_key_list_reverse (gal_fits_list_key_t @code{**list})
Reverse the input list of keywords.
@end deftypefun

@deftypefun void gal_fits_key_write_title_in_ptr (char @code{*title}, fitsfile @code{*fptr})
Add two lines of ``title'' keywords to the given CFITSIO @code{fptr}
pointer. The first line will be blank and the second will have the string
in @code{title} roughly in the middle of the line (a fixed distance from
the start of the keyword line). A title in the list of keywords helps in
classifying the keywords into groups and inspecting them by eye. If
@code{title==NULL}, this function won't do anything.
@end deftypefun

@deftypefun void gal_fits_key_write_filename (char @code{*keynamebase}, char @code{*filename}, gal_fits_list_key_t @code{**list}, int @code{top1end0})
Put @file{filename} into the @code{gal_fits_list_key_t} list (possibly
broken up into multiple keywords) to later write into a HDU header. The
@code{keynamebase} string will be appended with a @code{_N} (N>0) and used
as the keyword name. If @code{top1end0!=0}, then the keywords containing
the filename will be added to the top of the list.

The FITS standard sets a maximum length for the value of a keyword. This
creates problems with file names (which include directories), because file
names/addresses can become very long. Therefore, when @code{filename} is
longer than the maximum length of a FITS keyword value, this function will
break it into several keywords (breaking up the string on directory
separators).
@end deftypefun

@deftypefun void gal_fits_key_write_wcsstr (fitsfile @code{*fptr}, char @code{*wcsstr}, int @code{nkeyrec})
Write the WCS header string (produced with WCSLIB's @code{wcshdo} function)
into the CFITSIO @code{fitsfile} pointer. @code{nkeyrec} is the number of
FITS header keywords in @code{wcsstr}. This function will put a few blank
keyword lines along with a comment @code{WCS information} before writing
each keyword record.
@end deftypefun

@deftypefun void gal_fits_key_write (gal_fits_list_key_t @code{**keylist}, char @code{*title}, char @code{*filename}, char @code{*hdu})
Write the list of keywords in @code{keylist} into the @code{hdu} extension of the file called @code{filename} (the file must already exist) and free the list.

The list nodes are meant to be dynamically allocated (because they will be freed after being written).
We thus recommend using the @code{gal_fits_key_list_add} or @code{gal_fits_key_list_add_end} to create and fill the list.
Below is one fully working example of using this function to write a keyword into an existing FITS file.

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/fits.h>

int main()
@{
  char *filename="test.fits";
  gal_fits_list_key_t *keylist=NULL;

  char *unit="unit";
  float value=123.456;
  char *keyname="MYKEY";
  char *comment="A good description of the key";
  gal_fits_key_list_add_end(&keylist, GAL_TYPE_FLOAT32, keyname, 0,
                            &value, 0, comment, 0, unit, 0);
  gal_fits_key_write(&keylist, "Matching metadata", filename, "1");
  return EXIT_SUCCESS;
@}
@end example
@end deftypefun

@deftypefun void gal_fits_key_write_in_ptr (gal_fits_list_key_t @code{**keylist}, fitsfile @code{*fptr})
Write the list of keywords in @code{keylist} into the given CFITSIO @code{fitsfile} pointer and free @code{keylist}.
For more on the input @code{keylist}, see the description and example for @code{gal_fits_key_write}, above.
@end deftypefun

@deftypefun void gal_fits_key_write_version (gal_fits_list_key_t @code{**keylist}, char @code{*title}, char @code{*filename}, char @code{*hdu})
Write the (optional, when @code{keylist!=NULL}) given list of keywords
under the optional FITS keyword @code{title}, then print all the important
version and date information. This is basically just a wrapper over
@code{gal_fits_key_write_version_in_ptr}.
@end deftypefun

@deftypefun void gal_fits_key_write_version_in_ptr (gal_fits_list_key_t @code{**keylist}, char @code{*title}, fitsfile @code{*fptr})
Write or update (all) the keyword(s) in @code{keylist} into the FITS
pointer, along with the date, the name of your program
(@code{program_name}), the versions of CFITSIO, WCSLIB (when available),
GSL and Gnuastro, and (the possible) commit information, as described in
@ref{Output FITS files}.

Since the data processing depends on the versions of the libraries you have
used, it is strongly recommended to include this information in every FITS
output. @code{gal_fits_img_write} and @code{gal_fits_tab_write} will
automatically use this function.
@end deftypefun

@deftypefun void gal_fits_key_write_config (gal_fits_list_key_t @code{**keylist}, char @code{*title}, char @code{*extname}, char @code{*filename}, char @code{*hdu})
Write the given keyword list (@code{keylist}) into the @code{hdu} extension
of @code{filename}, ending it with version information. This function will
write @code{extname} as the name of the extension (value to the standard
@code{EXTNAME} FITS keyword). The list of keywords will then be printed
under a title called @code{title}.

This function is used by many Gnuastro programs and is primarily intended
for writing the configuration settings of a program into the zero-th
extension of its FITS output (which is left empty when the FITS file is
created by Gnuastro's programs and this library).
@end deftypefun





@node FITS arrays, FITS tables, FITS header keywords, FITS files
@subsubsection FITS arrays (images)

Images (or multi-dimensional arrays in general) are one of the most common
data formats stored in FITS files. Only one image may be stored in each
FITS HDU/extension. The functions described here can be used to get
information about, read, or write images in FITS files.

@deftypefun void gal_fits_img_info (fitsfile @code{*fptr}, int @code{*type}, size_t @code{*ndim}, size_t @code{**dsize}, char @code{**name}, char @code{**unit})
Read the type (see @ref{Library data types}), number of dimensions, and
size along each dimension of the CFITSIO @code{fitsfile} into the
@code{type}, @code{ndim}, and @code{dsize} pointers respectively. If
@code{name} and @code{unit} are not @code{NULL} (point to a @code{char *}),
then if the image has a name and units, the respective string will be put
in these pointers.
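
For example, the minimal sketch below (assuming @file{image.fits} has an
image in its first extension; @code{gal_fits_hdu_open} was described above
and @code{READONLY} is from CFITSIO) prints the image's dimensionality:

@example
int type;
size_t ndim, *dsize;
char *name=NULL, *unit=NULL;

fitsfile *fptr=gal_fits_hdu_open("image.fits", "1", READONLY);
gal_fits_img_info(fptr, &type, &ndim, &dsize, &name, &unit);
printf("Number of dimensions: %zu\n", ndim);
free(dsize);
@end example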
@end deftypefun

@deftypefun {size_t *} gal_fits_img_info_dim (char @code{*filename}, char @code{*hdu}, size_t @code{*ndim})
Put the number of dimensions in the @code{hdu} extension of @file{filename}
in the space that @code{ndim} points to and return the size of the dataset
along each dimension as an allocated array with @code{*ndim} elements.
@end deftypefun

@deftypefun {gal_data_t *} gal_fits_img_read (char @code{*filename}, char @code{*hdu}, size_t @code{minmapsize}, int @code{quietmmap})
Read the contents of the @code{hdu} extension/HDU of @code{filename} into a
Gnuastro generic data container (see @ref{Generic data container}) and
return it. If the necessary space is larger than @code{minmapsize}, then
don't keep the data in RAM, but in a file on the HDD/SSD. For more on
@code{minmapsize} and @code{quietmmap} see the description under the same
name in @ref{Generic data container}.

Note that this function only reads the main data within the requested FITS
extension, the WCS will not be read into the returned dataset. To read the
WCS, you can use @code{gal_wcs_read} function as shown below. Afterwards,
the @code{gal_data_free} function will free both the dataset and any WCS
structure (if there are any).
@example
data=gal_fits_img_read(filename, hdu, -1, 1);
data->wcs=gal_wcs_read(filename, hdu, 0, 0, &data->wcs->nwcs);
@end example
@end deftypefun

@deftypefun {gal_data_t *} gal_fits_img_read_to_type (char @code{*inputname}, char @code{*inhdu}, uint8_t @code{type}, size_t @code{minmapsize}, int @code{quietmmap})
Read the contents of the @code{inhdu} extension/HDU of @code{inputname}
into a Gnuastro generic data container (see @ref{Generic data container})
of type @code{type} and return it.

This is just a wrapper around @code{gal_fits_img_read} (to read the
image/array of any type) and @code{gal_data_copy_to_new_type_free} (to
convert it to @code{type} and free the initially read dataset). See the
description there for more.
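
For example, to read the image of a hypothetical @file{image.fits} as
32-bit floating point (independent of how it is stored on disk):

@example
gal_data_t *data;
data=gal_fits_img_read_to_type("image.fits", "1",
                               GAL_TYPE_FLOAT32, -1, 1);
@end example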
@end deftypefun

@cindex NaN
@cindex Convolution kernel
@cindex Kernel, convolution
@deftypefun {gal_data_t *} gal_fits_img_read_kernel (char @code{*filename}, char @code{*hdu}, size_t @code{minmapsize}, int @code{quietmmap})
Read the @code{hdu} of @code{filename} as a convolution kernel. A
convolution kernel must have an odd size along all dimensions, it must not
have blank (NaN in floating point types) values, and must be flipped around
the center to make the proper convolution (see @ref{Convolution
process}). If there are blank values, this function will change them to
@code{0.0}. If the input image doesn't satisfy the other requirements, this
function will abort with an error describing the condition to the user. The
final returned dataset will have a @code{float32} type.
@end deftypefun

@deftypefun {fitsfile *} gal_fits_img_write_to_ptr (gal_data_t @code{*input}, char @code{*filename})
Write the @code{input} dataset into a FITS file named @file{filename} and
return the corresponding CFITSIO @code{fitsfile} pointer. This function
won't close the FITS file, so you can still add other extensions to it or
make other modifications before closing it.
@end deftypefun

@deftypefun void gal_fits_img_write (gal_data_t @code{*data}, char @code{*filename}, gal_fits_list_key_t @code{*headers}, char @code{*program_string})
Write the @code{data} dataset into the FITS file named @file{filename}.
Also add the @code{headers} keywords to the newly created HDU/extension
along with your program's name (@code{program_string}).
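
For example, the minimal sketch below writes a dataset @code{data} into a
hypothetical @file{out.fits} with one extra keyword (the keyword and
program names are also hypothetical):

@example
float std=0.05;
gal_fits_list_key_t *keylist=NULL;

/* Add one keyword to the list, then write image and keywords. */
gal_fits_key_list_add_end(&keylist, GAL_TYPE_FLOAT32, "SKYSTD", 0,
                          &std, 0, "Sky standard deviation", 0,
                          NULL, 0);
gal_fits_img_write(data, "out.fits", keylist, "MyProgram");
@end example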
@end deftypefun

@deftypefun void gal_fits_img_write_to_type (gal_data_t @code{*data}, char @code{*filename}, gal_fits_list_key_t @code{*headers}, char @code{*program_string}, int @code{type})
Convert the @code{data} dataset into @code{type}, then write it into the
FITS file named @file{filename}. Also add the @code{headers} keywords to
the newly created HDU/extension along with your program's name
(@code{program_string}). After the FITS file is written, this function will
free the copied dataset (with type @code{type}) from memory.

This is just a wrapper for the @code{gal_data_copy_to_new_type} and
@code{gal_fits_img_write} functions.
@end deftypefun

@deftypefun void gal_fits_img_write_corr_wcs_str (gal_data_t @code{*data}, char @code{*filename}, char @code{*wcsstr}, int @code{nkeyrec}, double @code{*crpix}, gal_fits_list_key_t @code{*headers}, char @code{*program_string})
Write the @code{data} dataset into @file{filename} using the @code{wcsstr}
WCS keyword string while correcting the @code{CRPIX} values.

This function is mainly useful when you want to make FITS files in parallel
(from one main WCS structure, with just differing CRPIX). This can happen
in the following cases for example:

@itemize
@item
When a large number of FITS images (with WCS) need to be created in
parallel, it can be much more efficient to generate the header's WCS
keywords once at the start, write them into each FITS file, then just
correct the CRPIX values.

@item
WCSLIB's header-writing function is not thread-safe. So when writing FITS
images in parallel, we can't write the header keywords in each thread.
@end itemize
@end deftypefun


@node FITS tables,  , FITS arrays, FITS files
@subsubsection FITS tables

Tables are one of the common data formats stored in FITS files. Only one
table may be stored in each FITS HDU/extension, but each table column must
be viewed as a different dataset (with its own name, units and numeric data
type, for example). The only constraint on the column datasets in a table
is that they must be one-dimensional and have the same number of elements
as the other columns. The functions described here can be used to get
information about, read columns from, or write columns into FITS tables.

@deftypefun void gal_fits_tab_size (fitsfile @code{*fitsptr}, size_t @code{*nrows}, size_t @code{*ncols})
Read the number of rows and columns in the table within CFITSIO's
@code{fitsptr}.
@end deftypefun

@deftypefun int gal_fits_tab_format (fitsfile @code{*fitsptr})
Return the format of the FITS table contained in CFITSIO's
@code{fitsptr}. Recall that FITS tables can be in binary or ASCII
formats. This function will return @code{GAL_TABLE_FORMAT_AFITS} or
@code{GAL_TABLE_FORMAT_BFITS} (defined in @ref{Table input output}). If the
@code{fitsptr} is not a table, this function will abort the program with an
error message informing the user of the problem.
@end deftypefun

@deftypefun {gal_data_t *} gal_fits_tab_info (char @code{*filename}, char @code{*hdu}, size_t @code{*numcols}, size_t @code{*numrows}, int @code{*tableformat})
Store the information of each column in @code{hdu} of @code{filename} into
an array of data structures with @code{numcols} elements (one data
structure for each column; see @ref{Arrays of datasets}). The total number
of rows in the table is also put into the memory that @code{numrows} points
to. The format of the table (e.g., FITS binary or ASCII table) will be put
in @code{tableformat} (macros defined in @ref{Table input output}).

This function is just for column information. Therefore it only stores
meta-data like the column name, units and comments. No actual data
(contents of the columns, for example the @code{array} or @code{dsize}
elements) will be allocated by this function. This is a low-level function
particular to reading tables in FITS format. To be generic, it is
recommended to use @code{gal_table_info}, which will allow getting
information from a variety of table formats based on the filename (see
@ref{Table input output}).
@end deftypefun

@deftypefun {gal_data_t *} gal_fits_tab_read (char @code{*filename}, char @code{*hdu}, size_t @code{numrows}, gal_data_t @code{*colinfo}, gal_list_sizet_t @code{*indexll}, size_t @code{minmapsize}, int @code{quietmmap})
Read the columns given in the list @code{indexll} from a FITS table (in
@file{filename} and HDU/extension @code{hdu}) into the returned linked list
of data structures, see @ref{List of size_t} and @ref{List of
gal_data_t}. If the necessary space for each column is larger than
@code{minmapsize}, don't keep it in the RAM, but in a file in the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap}, see the description
under the same name in @ref{Generic data container}.

Each column will have @code{numrows} rows and @code{colinfo} contains any
further information about the columns (returned by
@code{gal_fits_tab_info}, described above). Note that this is a low-level
function, so the output data linked list is the inverse of the input indexes
linked list. It is recommended to use @code{gal_table_read} for generic
reading of tables, see @ref{Table input output}.
@end deftypefun

@deftypefun void gal_fits_tab_write (gal_data_t @code{*cols}, gal_list_str_t @code{*comments}, int @code{tableformat}, char @code{*filename}, char @code{*extname})
Write the list of datasets in @code{cols} (see @ref{List of gal_data_t}) as
separate columns in a FITS table in @code{filename}. If @code{filename}
already exists then this function will write the table as a new extension
called @code{extname}, after all existing ones. The format of the table
(ASCII or binary) may be specified with the @code{tableformat} (see
@ref{Table input output}). If @code{comments!=NULL}, each node of the list
of strings will be written as a @code{COMMENT} keyword in the output FITS
file (see @ref{List of strings}).

This is a low-level function for tables. It is recommended to use
@code{gal_table_write} for generic writing of tables in a variety of
formats, see @ref{Table input output}.
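
For example, the minimal sketch below allocates a single floating point
column with @code{gal_data_alloc} (see @ref{Dataset allocation}) and
writes it as a binary FITS table (the file, extension and column names
are hypothetical):

@example
size_t i, nrows=5;
gal_data_t *col=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &nrows,
                               NULL, 0, -1, 1, "MAG", "mag",
                               "Measured magnitude");

/* Fill the column with some values, then write it. */
for(i=0; i<nrows; ++i)
  ((float *)(col->array))[i] = 20.0f + i/10.0f;
gal_fits_tab_write(col, NULL, GAL_TABLE_FORMAT_BFITS,
                   "table.fits", "MYTABLE");
@end example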
@end deftypefun





@node File input output, World Coordinate System, FITS files, Gnuastro library
@subsection File input output

The most commonly used file format in astronomical data analysis is the
FITS format (see @ref{Fits} for an introduction), therefore Gnuastro's
library provides a large and separate collection of functions to read/write
data from/to them (see @ref{FITS files}). However, FITS is not well
recognized outside the astronomical community and cannot be imported into
documents or slides. Therefore, in this section, we discuss the other
file formats that Gnuastro's library recognizes.

@menu
* Text files::                  Reading and writing from/to plain text files.
* TIFF files::                  Reading and writing from/to TIFF files.
* JPEG files::                  Reading and writing from/to JPEG files.
* EPS files::                   Writing to EPS files.
* PDF files::                   Writing to PDF files.
@end menu

@node Text files, TIFF files, File input output, File input output
@subsubsection Text files (@file{txt.h})

The most universal and portable format for data storage is the plain text file.
Plain text can be viewed and edited in any text editor, or even on the command-line.
This section describes some functions that help in reading from and writing to plain text files.

@cindex CRLF line terminator
@cindex Line terminator, CRLF
Lines are one of the most basic building blocks (delimiters) of a text file.
Some operating systems, like Microsoft Windows, terminate their ASCII text lines with a carriage return character followed by a new-line character (two characters, also known as CRLF line terminators), while Unix-like operating systems just use a single new-line character.
The functions below that read an ASCII text file are able to identify lines with both kinds of line terminators.

Gnuastro defines a simple format for metadata of table columns in a plain text file that is discussed in @ref{Gnuastro text table format}.
The functions to get information from, read from and write to plain text files also follow those conventions.


@deffn Macro GAL_TXT_LINESTAT_INVALID
@deffnx Macro GAL_TXT_LINESTAT_BLANK
@deffnx Macro GAL_TXT_LINESTAT_COMMENT
@deffnx Macro GAL_TXT_LINESTAT_DATAROW
Status codes for lines in a plain text file that are returned by @code{gal_txt_line_stat}.
Lines whose first non-white character is @code{#} are considered to be comments.
Lines with nothing but white space characters are considered blank.
The remaining lines are considered as containing data.
@end deffn

@deftypefun int gal_txt_line_stat (char @code{*line})
Check the contents of @code{line} and see if it is a blank, comment, or data line.
The returned values are the macros that start with @code{GAL_TXT_LINESTAT}.
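
For example:

@example
char line[]="# Column 1: MAG [mag,f32] Magnitude.\n";
if( gal_txt_line_stat(line)==GAL_TXT_LINESTAT_COMMENT )
  printf("This is a comment line.\n");
@end example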
@end deftypefun

@deftypefun {char *} gal_txt_trim_space (char @code{*str})
Trim the white space characters before and after the given string.
The operation is done within the allocated space of the string, so if you need the string untouched, please pass an allocated copy of the string to this function.
The returned pointer is within the input string.
If the input pointer is @code{NULL}, or the string only has white-space characters, the returned pointer will be @code{NULL}.
@end deftypefun

@deftypefun {gal_data_t *} gal_txt_table_info (char @code{*filename}, gal_list_str_t @code{*lines}, size_t @code{*numcols}, size_t @code{*numrows})
Store the information of each column in a plain text file (@code{filename}) or list of strings (@code{lines}) into an array of data structures with @code{numcols} elements (one data structure for each column; see @ref{Arrays of datasets}).
The total number of rows in the table is also put into the memory that @code{numrows} points to.

@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see below).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.

This function is just for column information.
Therefore it only stores meta-data like column name, units and comments.
No actual data (contents of the columns, for example the @code{array} or @code{dsize} elements) will be allocated by this function.
This is a low-level function particular to reading tables in plain text format.
To be generic, it is recommended to use @code{gal_table_info} which will allow getting information from a variety of table formats based on the filename (see @ref{Table input output}).
@end deftypefun

@deftypefun {gal_data_t *} gal_txt_table_read (char @code{*filename}, gal_list_str_t @code{*lines}, size_t @code{numrows}, gal_data_t @code{*colinfo}, gal_list_sizet_t @code{*indexll}, size_t @code{minmapsize}, int @code{quietmmap})
Read the columns given in the list @code{indexll} from a plain text file (@code{filename}) or list of strings (@code{lines}), into a linked list of data structures (see @ref{List of size_t} and @ref{List of gal_data_t}).
If the necessary space for each column is larger than @code{minmapsize}, don't keep it in the RAM, but in a file on the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap}, see the description under the same name in @ref{Generic data container}.

@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see below).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.

Note that this is a low-level function, so the output data list is the inverse of the input indexes linked list.
It is recommended to use @code{gal_table_read} for generic reading of tables in any format, see @ref{Table input output}.
@end deftypefun

@deftypefun {gal_data_t *} gal_txt_image_read (char @code{*filename}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap})
Read the 2D plain text dataset in a file (@code{filename}) or list of strings (@code{lines}) into a dataset and return it.
If the necessary space for the image is larger than @code{minmapsize}, don't keep it in the RAM, but in a file on the HDD/SSD.
For more on @code{minmapsize} and @code{quietmmap}, see the description under the same name in @ref{Generic data container}.

@code{lines} is a list of strings with each node representing one line (including the new-line character), see @ref{List of strings}.
It will mostly be the output of @code{gal_txt_stdin_read}, which is used to read the program's input as separate lines from the standard input (see below).
Note that @code{filename} and @code{lines} are mutually exclusive and one of them must be @code{NULL}.
@end deftypefun

@deftypefun {gal_list_str_t *} gal_txt_stdin_read (long @code{timeout_microsec})
@cindex Standard input
Read the complete standard input and return a list of strings with each line (including the new-line character) as one node of that list.
If the standard input is already filled (for example connected to another program's output with a pipe), then this function will parse the whole stream.

If the standard input is not pre-configured and the @emph{first line} is typed/written in the terminal before @code{timeout_microsec} micro-seconds, it will continue parsing until it reaches an end-of-file character (@key{CTRL-D} after a new-line on the keyboard) with no time limit.
If nothing is entered before @code{timeout_microsec} micro-seconds, it will return @code{NULL}.

All the functions that can read plain text tables will accept a filename as well as a list of strings (intended to be the output of this function when reading from the standard input).
The reason for keeping the standard input in a list of strings is that once something is read from the standard input, it is hard to put it back.
We often need to parse a text table several times: once to count how many columns it has and which ones are requested, and another time to read the desired columns.
So it is easier to keep it all in allocated memory and pass it on from the start for each round.
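
For example, the minimal sketch below waits up to half a second for the first line on the standard input, then prints the table geometry with @code{gal_txt_table_info} (described above):

@example
size_t numcols, numrows;
gal_data_t *info;
gal_list_str_t *lines=gal_txt_stdin_read(500000); /* 0.5 seconds */
if(lines)
  @{
    info=gal_txt_table_info(NULL, lines, &numcols, &numrows);
    printf("%zu column(s) and %zu row(s)\n", numcols, numrows);
  @}
@end example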
@end deftypefun

@deftypefun void gal_txt_write (gal_data_t @code{*cols}, struct gal_fits_list_key_t @code{**keylist}, gal_list_str_t @code{*comment}, char @code{*filename}, uint8_t @code{colinfoinstdout})
Write @code{cols} in a plain text file @code{filename}.
@code{cols} may have one or two dimensions, which determines the output:

@table @asis
@item 1D
@code{cols} is treated as a column and a list of datasets (see @ref{List of gal_data_t}): every node in the list is written as one column in a table.

@item 2D
When @code{cols} is a two-dimensional array, it cannot be treated as a list (only one 2D array can currently be written to a text file).
So if @code{cols->next!=NULL}, the next nodes in the list are ignored and will not be written.
@end table

This is a low-level function for tables.
It is recommended to use @code{gal_table_write} for generic writing of tables in a variety of formats, see @ref{Table input output}.

It is possible to add two types of metadata to the printed table: comments and keywords.
Each string in the list given to @code{comments} will be printed into the file as a separate line, starting with @code{#}.
Keywords have a more specific and computer-parsable format and are passed through @code{keylist}.
Each keyword is also printed on one line, with the format shown below.
Because of the various components in a keyword, the @code{gal_fits_list_key_t} data structure is necessary.
For more, see @ref{FITS header keywords}.

@example
# [key] NAME: VALUE / [UNIT] KEYWORD COMMENT.
@end example

If @code{filename} already exists, this function will abort with an error and will not write over the existing file.
Before calling this function, check whether the file already exists.
If @code{comments!=NULL}, a @code{#} will be put at the start of each node of the list of strings and will be written in the file before the column meta-data in @code{filename} (see @ref{List of strings}).

When @code{filename==NULL}, the columns will be printed on the standard output (command-line).
When @code{colinfoinstdout!=0} and @code{filename==NULL} (columns are printed on the standard output), the dataset metadata will also be printed on the standard output.
When printing to the standard output, the column values can be piped into another program for further processing, in which case the meta-data (lines starting with a @code{#}) may need to be ignored.
In such cases, you can print only the column values by passing @code{0} to @code{colinfoinstdout}.
@end deftypefun


@node TIFF files, JPEG files, Text files, File input output
@subsubsection TIFF files (@file{tiff.h})

@cindex TIFF format
Outside of astronomy, the TIFF standard is arguably the most commonly used format to store high-precision data/images.
Unlike FITS however, the TIFF standard only supports images (not tables), but like FITS, it has support for all standard data types (see @ref{Numeric data types}) which is the primary reason other fields use it.

Another similarity of the TIFF and FITS standards is that TIFF supports multiple images in one file.
The TIFF standard calls each one of these images (and their accompanying meta-data) a `directory' (roughly equivalent to the FITS extensions).
Unlike FITS however, the directories can only be identified by their number (counting from zero); recall that in FITS you can also use the extension name to identify it.

The functions described here allow easy reading (and later writing) of TIFF files within Gnuastro or for users of Gnuastro's libraries.
Currently only reading is supported, but if you are interested, please get in touch with us.

@deftypefun {int} gal_tiff_name_is_tiff (char @code{*name})
Return @code{1} if @code{name} has a TIFF suffix.
This can be used to make sure that a given input file is TIFF.
See @code{gal_tiff_suffix_is_tiff} for a list of recognized suffixes.
@end deftypefun

@deftypefun {int} gal_tiff_suffix_is_tiff (char @code{*name})
Return @code{1} if @code{suffix} is a recognized TIFF suffix.
The recognized suffixes are @file{tif}, @file{tiff}, @file{TIF} and @file{TIFF}.
@end deftypefun

@deftypefun {size_t} gal_tiff_dir_string_read (char @code{*string})
Return the number within @code{string} as a @code{size_t} number to identify a TIFF directory.
Note that the directories start counting from zero.
@end deftypefun

@deftypefun {gal_data_t *} gal_tiff_read (char @code{*filename}, size_t @code{dir}, size_t @code{minmapsize}, int @code{quietmmap})
Read the @code{dir} directory within the TIFF file @code{filename} and return the contents of that TIFF directory as @code{gal_data_t}.
If the directory's image contains multiple channels, the output will be a list (see @ref{List of gal_data_t}).
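
For example, to read the second directory (counting from zero) of a hypothetical @file{image.tif}:

@example
size_t dir=gal_tiff_dir_string_read("1");
gal_data_t *channels=gal_tiff_read("image.tif", dir, -1, 1);
@end example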
@end deftypefun





@node JPEG files, EPS files, TIFF files, File input output
@subsubsection JPEG files (@file{jpeg.h})

@cindex JPEG format
The JPEG file format is one of the most common formats for storing and transferring images, recognized by almost all image rendering and processing programs.
In particular, because of its lossy compression algorithm, JPEG files can be very small, which is why the format is used heavily on the internet.
For more on this file format, and a comparison with others, please see @ref{Recognized file formats}.

For scientific purposes, the lossy compression and very limited dynamic range (8-bit integers) make JPEG very unattractive for storing valuable data.
However, because of its commonality, it will inevitably be needed in some situations.
The functions here can be used to read and write JPEG images into Gnuastro's @ref{Generic data container}.
If the JPEG file has more than one color channel, each channel is treated as a separate node in a list of datasets (see @ref{List of gal_data_t}).

@deftypefun {int} gal_jpeg_name_is_jpeg (char @code{*name})
Return @code{1} if @code{name} has a JPEG suffix.
This can be used to make sure that a given input file is JPEG.
See @code{gal_jpeg_suffix_is_jpeg} for a list of recognized suffixes.
@end deftypefun

@deftypefun {int} gal_jpeg_suffix_is_jpeg (char @code{*name})
Return @code{1} if @code{suffix} is a recognized JPEG suffix.
The recognized suffixes are @code{.jpg}, @code{.JPG}, @code{.jpeg}, @code{.JPEG}, @code{.jpe}, @code{.jif}, @code{.jfif} and @code{.jfi}.
@end deftypefun

@deftypefun {gal_data_t *} gal_jpeg_read (char @code{*filename}, size_t @code{minmapsize}, int @code{quietmmap})
Read the JPEG file @code{filename} and return the contents as @code{gal_data_t}.
If the image contains multiple colors/channels, the output will be a list with one node per color/channel (see @ref{List of gal_data_t}).
@end deftypefun

@cindex JPEG compression quality
@deftypefun {void} gal_jpeg_write (gal_data_t @code{*in}, char @code{*filename}, uint8_t @code{quality}, float @code{widthincm})
Write the given dataset (@code{in}) into @file{filename} (a JPEG file).
If @code{in} is a list, then each node in the list will be a color channel, therefore there can only be 1, 3 or 4 nodes in the list.
If the number of nodes is different, then this function will abort the program with a message describing the cause.
The lossy JPEG compression level can be set through @code{quality} which is a value between 0 and 100 (inclusive, 100 being the best quality).
The display width of the JPEG file in units of centimeters (to suggest to viewers/users, only a meta-data) can be set through @code{widthincm}.
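
For example, to write a single-channel (grayscale) dataset @code{in} into a hypothetical @file{out.jpg} with the best quality and a suggested display width of 5 centimeters:

@example
gal_jpeg_write(in, "out.jpg", 100, 5.0f);
@end example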
@end deftypefun





@node EPS files, PDF files, JPEG files, File input output
@subsubsection EPS files (@file{eps.h})

The Encapsulated PostScript (EPS) format is commonly used to store images
(or individual/single-page parts of a document) in PostScript
documents. For a more complete introduction, please see @ref{Recognized
file formats}. To provide high quality graphics, the PostScript language is
a vectorized format, therefore pixels (the elements of a ``rasterized''
format) aren't natively defined within it.

To display rasterized images, PostScript does allow arrays of
pixels. However, since the overall EPS file may contain many vectorized
elements (for example borders, text, or other lines over the text) and
interpreting them is not trivial or necessary within Gnuastro's scope,
Gnuastro only provides some functions to write a dataset (in the
@code{gal_data_t} format, see @ref{Generic data container}) into EPS.

@deftypefun {int} gal_eps_name_is_eps (char @code{*name})
Return @code{1} if @code{name} has an EPS suffix. This can be used to make
sure that a given input file is EPS. See @code{gal_eps_suffix_is_eps} for a
list of recognized suffixes.
@end deftypefun

@deftypefun {int} gal_eps_suffix_is_eps (char @code{*name})
Return @code{1} if @code{suffix} is a recognized EPS suffix. The recognized
suffixes are @code{.eps}, @code{.EPS}, @code{.epsf}, @code{.epsi}.
@end deftypefun

@deftypefun {void} gal_eps_to_pt (float @code{widthincm}, size_t @code{*dsize}, size_t @code{*w_h_in_pt})
Given a specific width in centimeters (@code{widthincm}) and the number of
the dataset's pixels in each dimension (@code{dsize}), calculate the size
of the output in PostScript points. The output values are written into the
@code{w_h_in_pt} array (which has to be allocated before calling this
function). The first element in @code{w_h_in_pt} is the width and the
second is the height of the image.
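
For example, assuming a 2D dataset @code{in} that should be 5 centimeters
wide:

@example
size_t w_h_in_pt[2];
gal_eps_to_pt(5.0f, in->dsize, w_h_in_pt);
printf("Width: %zupt, height: %zupt\n",
       w_h_in_pt[0], w_h_in_pt[1]);
@end example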
@end deftypefun

@deftypefun {void} gal_eps_write (gal_data_t @code{*in}, char @code{*filename}, float @code{widthincm}, uint32_t @code{borderwidth}, int @code{hex}, int @code{dontoptimize}, int @code{forpdf})
Write the @code{in} dataset into an EPS file called
@code{filename}. @code{in} has to be an unsigned 8-bit character type
(@code{GAL_TYPE_UINT8}, see @ref{Numeric data types}). The desired width of
the image in human/non-pixel units (to help the displayer) can be set with
the @code{widthincm} argument. If @code{borderwidth} is non-zero, it is
interpreted as the width (in points) of a solid black border around the
image. A border can be helpful when importing the EPS file into a document.

@cindex ASCII85 encoding
@cindex Hexadecimal encoding
EPS files are plain-text (can be opened/edited in a text editor), therefore
there are different encodings to store the data (pixel values) within
them. Gnuastro supports the Hexadecimal and ASCII85 encoding. ASCII85 is
more efficient (producing small file sizes), so it is the default
encoding. To use Hexadecimal encoding, set @code{hex} to a non-zero
value. If you don't want to import the EPS file directly into a PostScript
document, but want to later compile it into a PDF file, set the
@code{forpdf} argument to @code{1}.

@cindex PDF
@cindex EPS
@cindex PostScript
By default, when the dataset only has two values, this function will use
the PostScript optimization that allows setting the pixel values per bit,
not byte (@ref{Recognized file formats}). This can greatly reduce the file
size. However, when @code{dontoptimize!=0}, this optimization is disabled.
This is necessary when the dataset is binary (only has two values), but the
difference between those values should not be rendered as the full contrast
between black and white.
@end deftypefun





@node PDF files,  , EPS files, File input output
@subsubsection PDF files (@file{pdf.h})

The portable document format (PDF) has arguably become the most common
format used for distribution of documents. In practice, a PDF file is just
a compiled PostScript file. For a more complete introduction, please see
@ref{Recognized file formats}. To provide high quality graphics, the PDF is
a vectorized format, therefore pixels (elements of a ``rasterized'' format)
aren't defined in their context. As a result, similar to @ref{EPS files},
Gnuastro only writes datasets to a PDF file, not vice-versa.

@deftypefun {int} gal_pdf_name_is_pdf (char @code{*name})
Return @code{1} if @code{name} has a PDF suffix. This can be used to make
sure that a given input file is PDF. See @code{gal_pdf_suffix_is_pdf} for a
list of recognized suffixes.
@end deftypefun

@deftypefun {int} gal_pdf_suffix_is_pdf (char @code{*name})
Return @code{1} if @code{suffix} is a recognized PDF suffix. The recognized
suffixes are @code{.pdf} and @code{.PDF}.
@end deftypefun

@deftypefun {void} gal_pdf_write (gal_data_t @code{*in}, char @code{*filename}, float @code{widthincm}, uint32_t @code{borderwidth}, int @code{dontoptimize})
Write the @code{in} dataset into a PDF file called
@code{filename}. @code{in} has to be an unsigned 8-bit character type
(@code{GAL_TYPE_UINT8}, see @ref{Numeric data types}). The desired width of
the image in human/non-pixel units (to help the displayer) can be set with
the @code{widthincm} argument. If @code{borderwidth} is non-zero, it is
interpreted as the width (in points) of a solid black border around the
image. A border can be helpful when importing the PDF file into a document.

This function is just a wrapper for the @code{gal_eps_write} function of
@ref{EPS files}. After making the EPS file, Ghostscript (with a version of
9.10 or above, see @ref{Optional dependencies}) will be used to compile the
EPS file into a PDF file. Therefore if Ghostscript doesn't exist, doesn't
have the proper version, or fails for any other reason, the EPS file will
remain. You can use it to find the cause, or compile it to PDF with another
converter or PostScript compiler.

@cindex PDF
@cindex EPS
@cindex PostScript
By default, when the dataset only has two values, this function will use
the PostScript optimization that allows setting the pixel values per bit,
not byte (@ref{Recognized file formats}). This can greatly reduce the file
size. However, when @code{dontoptimize!=0}, this optimization is disabled.
This is necessary when the dataset is binary (only has two values), but the
difference between those values should not be rendered as the full contrast
between black and white.
@end deftypefun






@node World Coordinate System, Arithmetic on datasets, File input output, Gnuastro library
@subsection World Coordinate System (@file{wcs.h})

The FITS standard defines the world coordinate system (WCS) as a mechanism to associate physical values to positions within a dataset.
For example, it can be used to convert pixel coordinates in an image to celestial coordinates like the right ascension and declination.
The functions in this section are mainly just wrappers over CFITSIO, WCSLIB and GSL library functions to help in common applications.

@deffn  Macro GAL_WCS_DISTORTION_TPD
@deffnx Macro GAL_WCS_DISTORTION_SIP
@deffnx Macro GAL_WCS_DISTORTION_TPV
@deffnx Macro GAL_WCS_DISTORTION_DSS
@deffnx Macro GAL_WCS_DISTORTION_WAT
@deffnx Macro GAL_WCS_DISTORTION_INVALID
@cindex WCS distortion
@cindex Distortion, WCS
@cindex TPD WCS distortion
@cindex SIP WCS distortion
@cindex TPV WCS distortion
@cindex DSS WCS distortion
@cindex WAT WCS distortion
@cindex Prior WCS distortion
@cindex sequent WCS distortion
Gnuastro identifiers of the various WCS distortion conventions, for more, see Calabretta et al. (2004, preprint)@footnote{@url{https://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf}}.
Among these, SIP is a prior distortion and the rest are sequent distortions.
TPD is a superset of all of them, hence it has both prior and sequent distortion coefficients.
More information is given in the documentation of @code{dis.h}, from the WCSLIB manual@footnote{@url{https://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/dis_8h.html}}.
@end deffn

@deffn Macro GAL_WCS_FLTERROR
Limit of rounding for floating point errors.
@end deffn

@deftypefun {struct wcsprm *} gal_wcs_read_fitsptr (fitsfile @code{*fptr}, size_t @code{hstartwcs}, size_t @code{hendwcs}, int @code{*nwcs})
[@strong{Not thread-safe}] Return the WCSLIB @code{wcsprm} structure that
is read from the CFITSIO @code{fptr} pointer to an opened FITS file. Also
put the number of coordinate representations found into the space that
@code{nwcs} points to. To read the WCS structure directly from a filename,
see @code{gal_wcs_read} below. After processing has finished, you can free
the returned structure with WCSLIB's @code{wcsvfree} function:

@example
status = wcsvfree(&nwcs,&wcs);
@end example

If you don't want to search the full FITS header for WCS-related FITS keywords (for example, due to conflicting keywords), but only a specific range of the header keywords, you can use the @code{hstartwcs} and @code{hendwcs} arguments to specify the keyword number range (counting from zero).
If @code{hendwcs} is larger than @code{hstartwcs}, then only keywords in the given range will be checked.
Hence, to ignore this feature (and search the full FITS header), give both these arguments the same value.

If the WCS information couldn't be read from the FITS file, this function will return a @code{NULL} pointer and put a zero in @code{nwcs}.
A WCSLIB error message will also be printed in @code{stderr} if there was an error.

This function is just a wrapper over WCSLIB's @code{wcspih} function which is not thread-safe.
Therefore, be sure to not call this function simultaneously (over multiple threads).
@end deftypefun

@deftypefun {struct wcsprm *} gal_wcs_read (char @code{*filename}, char @code{*hdu}, size_t @code{hstartwcs}, size_t @code{hendwcs}, int @code{*nwcs})
[@strong{Not thread-safe}] Return the WCSLIB structure that is read from the HDU/extension @code{hdu} of the file @code{filename}.
Also put the number of coordinate representations found into the space that @code{nwcs} points to.
Please see @code{gal_wcs_read_fitsptr} for more.
@end deftypefun

@deftypefun {struct wcsprm *} gal_wcs_create (double @code{*crpix}, double @code{*crval}, double @code{*cdelt}, double @code{*pc}, char @code{**cunit}, char @code{**ctype}, size_t @code{ndim})
Given all the most common standard components of the WCS standard, construct a @code{struct wcsprm}, initialize and set it for future processing.
See the FITS WCS standard for more on these keywords.
All the arrays must have @code{ndim} elements, except for @code{pc} which should have @code{ndim*ndim} elements (a square matrix).
Also, @code{cunit} and @code{ctype} are arrays of strings.
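
For example, the minimal sketch below creates a simple 2-dimensional (RA--Dec) WCS with a pixel scale of 1 arcsecond (all numerical values are hypothetical):

@example
size_t ndim=2;
double crpix[]=@{50.0, 50.0@};
double crval[]=@{180.0, 40.0@};
double cdelt[]=@{-1.0/3600, 1.0/3600@};
double pc[]=@{1, 0, 0, 1@};
char *cunit[]=@{"deg", "deg"@};
char *ctype[]=@{"RA---TAN", "DEC--TAN"@};
struct wcsprm *wcs=gal_wcs_create(crpix, crval, cdelt, pc,
                                  cunit, ctype, ndim);
@end example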
@end deftypefun

@deftypefun {char *} gal_wcs_dimension_name (struct wcsprm @code{*wcs}, size_t @code{dimension})
Return an allocated string (that should be freed later) containing the first part of the @code{CTYPEi} FITS keyword (which contains the dimension name in the FITS standard).
For example if @code{CTYPE1} is @code{RA---TAN}, the string this function returns will be @code{RA}.
Recall that the second component of @code{CTYPEi} contains the type of projection.
@end deftypefun

@deftypefun void gal_wcs_write (struct wcsprm @code{*wcs}, char @code{*filename}, char @code{*extname}, gal_fits_list_key_t @code{*headers}, char @code{*program_string})
Write the given WCS structure into the second extension of a newly created FITS file called @code{filename}.
The first/primary extension will be empty like the default format of all Gnuastro outputs.
When @code{extname!=NULL} it will be used as the FITS extension name.
Any set of extra headers can also be written through the @code{headers} list and if @code{program_string!=NULL} it will be used in a commented keyword title just above the written version information.
@end deftypefun

@deftypefun void gal_wcs_write_in_fitsptr(fitsfile @code{*fptr}, struct wcsprm @code{*wcs})
Convert the input @code{wcs} structure (keeping the WCS programmatically) into FITS keywords and write them into the given FITS file pointer.
This is a relatively low-level function which assumes the FITS file has already been opened with CFITSIO.
If you just want to write the WCS into an empty file, you can use @code{gal_wcs_write} (which internally calls this function after creating the FITS file and later closes it safely).
@end deftypefun

@deftypefun {struct wcsprm *} gal_wcs_copy (struct wcsprm @code{*wcs})
Return a fully allocated (independent) copy of @code{wcs}.
@end deftypefun

@deftypefun void gal_wcs_remove_dimension (struct wcsprm @code{*wcs}, size_t @code{fitsdim})
Remove the given FITS dimension from the given @code{wcs} structure.
@end deftypefun

@deftypefun void gal_wcs_on_tile (gal_data_t @code{*tile})
Create a WCSLIB @code{wcsprm} structure for @code{tile} using WCS
parameters of the tile's allocated block dataset, see @ref{Tessellation
library} for the definition of tiles. If @code{tile} already has a WCS
structure, this function won't do anything.

In many cases, tiles are created for internal/low-level processing. Hence
for performance reasons, when creating the tiles they don't have any WCS
structure. When needed, this function can be used to add a WCS structure to
each tile by copying the WCS structure of its block and correcting the
reference point's coordinates within the tile.
@end deftypefun

@deftypefun {double *} gal_wcs_warp_matrix (struct wcsprm @code{*wcs})
Return the warping matrix of the given WCS structure as an array of double
precision floating points. This will be the final matrix, irrespective of
the type of storage in the WCS structure. Recall that the FITS standard has
several methods to store the matrix. The output is an allocated square
matrix with each side equal to the number of dimensions.
@end deftypefun

@deftypefun void gal_wcs_clean_small_errors (struct wcsprm @code{*wcs})
Errors can create small differences between the pixel-scale elements (@code{CDELT}) and can also lead to extremely small values in the @code{PC} matrix.
With this function, such errors will be ``cleaned'' as follows: 1) If the maximum difference between the @code{CDELT} elements is smaller than the reference error, they will all be set to their mean value.
When the (optional) FITS keyword @code{CRDER} is defined, it will be used as the reference error; if not, the default is @code{GAL_WCS_FLTERROR}.
2) If any of the @code{PC} elements differ from 0, 1 or -1 by less than @code{GAL_WCS_FLTERROR}, they will be rounded to the respective value.
@end deftypefun

@deftypefun void gal_wcs_decompose_pc_cdelt (struct wcsprm @code{*wcs})
Decompose the @code{PCi_j} and @code{CDELTi} elements of
@code{wcs}. According to the FITS standard, in the @code{PCi_j} WCS
formalism, the rotation matrix elements @mymath{m_{ij}} are encoded in the
@code{PCi_j} keywords and the scale factors are encoded in the
@code{CDELTi} keywords. There is also another formalism (the @code{CDi_j}
formalism) which merges the two into one matrix.

However, WCSLIB's internal operations are apparently done in the
@code{PCi_j} formalism. So its outputs are also all in that format by
default. When the input is a @code{CDi_j}, WCSLIB will still read the
matrix directly into the @code{PCi_j} matrix and the @code{CDELTi} values
are set to @code{1} (one). This function is designed to correct such
issues: after it is finished, the @code{CDELTi} values in @code{wcs} will
correspond to the pixel scale, and the @code{PCi_j} will correctly show
the rotation.
@end deftypefun

@deftypefun int gal_wcs_distortion_from_string (char @code{*distortion})
Convert the given string (assumed to be a FITS-standard, string-based distortion identifier) to Gnuastro's integer-based distortion identifier (one of the @code{GAL_WCS_DISTORTION_*} macros defined above).
The string-based distortion identifiers have three characters and are all in capital letters.
@end deftypefun

@deftypefun int gal_wcs_distortion_to_string (int @code{distortion})
Convert the given Gnuastro integer-based distortion identifier (one of the @code{GAL_WCS_DISTORTION_*} macros defined above) to the string-based distortion identifier of the FITS standard.
The string-based distortion identifiers have three characters and are all in capital letters.
@end deftypefun

@cindex WCS distortion
@cindex Distortion, WCS
@deftypefun {int} gal_wcs_distortion_identify (struct wcsprm @code{*wcs})
Return the Gnuastro identifier for the distortion of the input WCS structure.
The returned value is one of the @code{GAL_WCS_DISTORTION_*} macros defined above.
When the input pointer to a structure is @code{NULL}, or it doesn't contain a distortion, the returned value will be @code{GAL_WCS_DISTORTION_INVALID}.
@end deftypefun

@cindex SIP WCS distortion
@cindex TPV WCS distortion
@deftypefun {struct wcsprm *} gal_wcs_distortion_convert(struct wcsprm @code{*inwcs}, int @code{outdisptype}, size_t @code{*fitsize})
Return a newly allocated WCS structure, where the distortion is implemented in a different standard, identified by the identifier @code{outdisptype}.
The Gnuastro WCS distortion identifiers are defined in the @code{GAL_WCS_DISTORTION_*} macros mentioned above.

The available conversions in this function will grow.
Currently it only supports converting TPV to SIP and vice versa, following the recipe of Shupe et al. (2012)@footnote{Proc. of SPIE Vol. 8451  84511M-1. @url{https://doi.org/10.1117/12.925460}, also available at @url{http://web.ipac.caltech.edu/staff/shupe/reprints/SIP_to_PV_SPIE2012.pdf}.}.
Please get in touch with us if you need other types of conversions.

For some conversions, direct analytical conversions don't exist.
It is thus necessary to model and fit the two types.
In such cases, it is also necessary to specify the @code{fitsize} array that is the size of the array along each C-ordered dimension, so you can simply pass the @code{dsize} element of your @code{gal_data_t} dataset, see @ref{Generic data container}.
Currently this is only necessary when converting TPV to SIP.
For other conversions you may simply pass a @code{NULL} pointer.

For example, if you want to convert the SIP coefficients of your input @file{image.fits} to TPV coefficients, you can use the following functions (this conversion is also available as a command-line operation in @ref{Fits}).

@example
int nwcs;
struct wcsprm *inwcs;
gal_data_t *data=gal_fits_img_read("image.fits", "1", -1, 1);
inwcs=gal_wcs_read("image.fits", "1", 0, 0, &nwcs);
data->wcs=gal_wcs_distortion_convert(inwcs, GAL_WCS_DISTORTION_TPV,
                                     NULL);
wcsfree(inwcs);
gal_fits_img_write(data, "tpv.fits", NULL, NULL);
@end example

@end deftypefun

@deftypefun double gal_wcs_angular_distance_deg (double @code{r1}, double @code{d1}, double @code{r2}, double @code{d2})
Return the angular distance (in degrees) between a point located at
(@code{r1}, @code{d1}) and a point at (@code{r2}, @code{d2}). All input coordinates are
in degrees. The distance (along a great circle) on a sphere between two
points is calculated with the equation below.

@dispmath {\cos(d)=\sin(d_1)\sin(d_2)+\cos(d_1)\cos(d_2)\cos(r_1-r_2)}

However, since the pixel scales are usually very small numbers, this
function won't use that direct formula. It will instead use the
@url{https://en.wikipedia.org/wiki/Haversine_formula, Haversine formula},
which is more robust against floating point errors:

@dispmath{{\sin^2(d)\over 2}=\sin^2\left( {d_1-d_2\over 2} \right)+\cos(d_1)\cos(d_2)\sin^2\left( {r_1-r_2\over 2} \right)}
@end deftypefun

@deftypefun {double *} gal_wcs_pixel_scale (struct wcsprm @code{*wcs})
Return the pixel scale for each dimension of @code{wcs} in degrees. The
output is an allocated array of double precision floating point type with
one element for each dimension. If it is not successful, this function will
return @code{NULL}.
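
For example, assuming an already-read @code{struct wcsprm *wcs}:

@example
double *pscale=gal_wcs_pixel_scale(wcs);
if(pscale)
  @{
    printf("%g x %g deg/pixel\n", pscale[0], pscale[1]);
    free(pscale);
  @}
@end example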
@end deftypefun

@deftypefun double gal_wcs_pixel_area_arcsec2 (struct wcsprm @code{*wcs})
Return the pixel area of @code{wcs} in arc-second squared. If the input WCS
structure is not two dimensional and the units (@code{CUNIT} keywords) are
not @code{deg} (for degrees), then this function will return a NaN.
@end deftypefun

@deftypefun int gal_wcs_coverage (char @code{*filename}, char @code{*hdu}, size_t @code{*ondim}, double @code{**ocenter}, double @code{**owidth}, double @code{**omin}, double @code{**omax})
Find the sky coverage of the image HDU (@code{hdu}) within @file{filename}.
The number of dimensions is written into @code{ondim}, and space for the various output arrays is internally allocated and filled with the respective values.
@end deftypefun

@deftypefun {gal_data_t *} gal_wcs_world_to_img (gal_data_t @code{*coords}, struct wcsprm @code{*wcs}, int @code{inplace})
Convert the linked list of world coordinates in @code{coords} to a linked
list of image coordinates given the input WCS structure. @code{coords} must
be a linked list of data structures of float64 (`double') type, see
@ref{Linked lists} and @ref{List of gal_data_t}. The top (first
popped/read) node of the linked list must be the first WCS coordinate
(usually RA in an image) and so on. Similarly, the top node of the output
will be the first image coordinate (in the FITS standard).

If @code{inplace} is zero, then the output will be a newly allocated list
and the input list will be untouched. However, if @code{inplace} is
non-zero, the output values will be written into the input's already
allocated array and the returned pointer will be the same pointer to
@code{coords} (in other words, you can ignore the returned value). Note
that in the latter case, only the values will be changed; things like units
or name (if present) will be untouched.
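
For example, the minimal sketch below converts a single (RA, Dec) point to
image coordinates in-place (the allocation arguments follow @ref{Dataset
allocation}; @code{wcs} is assumed to be already read):

@example
size_t one=1;
gal_data_t *ra, *dec;

/* One-element float64 datasets for each coordinate. */
ra=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0,
                  -1, 1, NULL, NULL, NULL);
dec=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0,
                   -1, 1, NULL, NULL, NULL);
((double *)(ra->array))[0]=180.0;
((double *)(dec->array))[0]=40.0;

ra->next=dec;                      /* Dec is the second coordinate. */
gal_wcs_world_to_img(ra, wcs, 1);  /* Values are now in pixels.     */
@end example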
@end deftypefun

@deftypefun {gal_data_t *} gal_wcs_img_to_world (gal_data_t @code{*coords}, struct wcsprm @code{*wcs}, int @code{inplace})
Convert the linked list of image coordinates in @code{coords} to a linked
list of world coordinates given the input WCS structure. See the
description of @code{gal_wcs_world_to_img} for more details.
@end deftypefun





@node Arithmetic on datasets, Tessellation library, World Coordinate System, Gnuastro library
@subsection Arithmetic on datasets (@file{arithmetic.h})

When the dataset's type and other information are already known, any
programming language (including C) provides some very good tools for
various operations (including arithmetic operations like addition) on the
dataset with a simple loop. However, as an author of a program, making
assumptions about the type of data, its dimensions and other basic
characteristics will come with a large processing burden.

For example if you always read your data as double precision floating
points for a simple operation like addition with an integer constant, you
will be wasting a lot of CPU and memory when the input dataset is, for
example, of @code{int32} type (see @ref{Numeric data types}). This overhead
may be small for small images, but as you scale your process up and work
with hundreds/thousands of files that can be very large, this overhead will
take a significant portion of the processing power. The functions and
macros in this section are designed precisely for this purpose: to allow
you to do any of the defined operations on any dataset with no overhead (in
the native type of the dataset).

Gnuastro's Arithmetic program uses the functions and macros of this
section, so please also have a look at the @ref{Arithmetic} program and in
particular @ref{Arithmetic operators} for a better description of the
operators discussed here.

The main function of this library is @code{gal_arithmetic} that is
described below. It can take an arbitrary number of arguments as operands
(depending on the operator, similar to @code{printf}). Its first arguments
specify the operator, the number of threads and the flags. So first we will
review the constants for the recognized flags and operators and discuss
them, then introduce the actual function.

@deffn  Macro GAL_ARITHMETIC_INPLACE
@deffnx Macro GAL_ARITHMETIC_FREE
@deffnx Macro GAL_ARITHMETIC_NUMOK
@deffnx Macro GAL_ARITHMETIC_FLAGS_ALL
@cindex Bitwise Or
Bit-wise flags to pass onto @code{gal_arithmetic} (see below). To pass
multiple flags, use the bitwise-or operator, for example
@code{GAL_ARITHMETIC_INPLACE |
GAL_ARITHMETIC_FREE}. @code{GAL_ARITHMETIC_FLAGS_ALL} is a combination of
all flags to shorten your code if you want all flags activated. Each flag
is described below:
@table @code
@item GAL_ARITHMETIC_INPLACE
Do the operation in-place (in the input dataset, thus modifying it) to
improve CPU and memory usage. If this flag is used, after
@code{gal_arithmetic} finishes, the input dataset will be modified. It is
thus useful if you have no more need for the input after the operation.

@item GAL_ARITHMETIC_FREE
Free (all the) input dataset(s) after the operation is done. Hence the
inputs are no longer usable after @code{gal_arithmetic}.

@item GAL_ARITHMETIC_NUMOK
It is acceptable to use a number and an array together. For example if you
want to add all the pixels in an image with a single number you can pass
this flag to avoid having to allocate a constant array the size of the
image (with all the pixels having the same number).
@end table
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_PLUS
@deffnx Macro GAL_ARITHMETIC_OP_MINUS
@deffnx Macro GAL_ARITHMETIC_OP_MULTIPLY
@deffnx Macro GAL_ARITHMETIC_OP_DIVIDE
@deffnx Macro GAL_ARITHMETIC_OP_LT
@deffnx Macro GAL_ARITHMETIC_OP_LE
@deffnx Macro GAL_ARITHMETIC_OP_GT
@deffnx Macro GAL_ARITHMETIC_OP_GE
@deffnx Macro GAL_ARITHMETIC_OP_EQ
@deffnx Macro GAL_ARITHMETIC_OP_NE
@deffnx Macro GAL_ARITHMETIC_OP_AND
@deffnx Macro GAL_ARITHMETIC_OP_OR
Binary operators (requiring two operands) that accept datasets of any
recognized type (see @ref{Numeric data types}). When @code{gal_arithmetic}
is called with any of these operators, it expects two datasets as
arguments. For a full description of these operators with the same name,
see @ref{Arithmetic operators}. The first dataset/operand will be put on
the left of the operator and the second will be put on the right. The
output type of the first four is determined from the input types (largest
type of the inputs). The rest (which are all conditional operators) will
output a binary @code{uint8_t} (or @code{unsigned char}) dataset with
values of either @code{0} (zero) or @code{1} (one).
@end deffn
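
For example, the minimal sketch below adds a single number to every pixel
of an image, assuming the variadic @code{gal_arithmetic(operator,
numthreads, flags, ...)} calling convention that is introduced at the end
of this section (@code{img} and @code{num} are assumed to be already
read/allocated datasets):

@example
gal_data_t *sum=gal_arithmetic(GAL_ARITHMETIC_OP_PLUS, 1,
                               GAL_ARITHMETIC_NUMOK, img, num);
@end example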

@deffn Macro GAL_ARITHMETIC_OP_NOT
The logical NOT operator. When @code{gal_arithmetic} is called with this
operator, it only expects one operand (dataset), since this is a unary
operator. The output is @code{uint8_t} (or @code{unsigned char}) dataset of
the same size as the input. Any non-zero element in the input will be
@code{0} (zero) in the output and any @code{0} (zero) will have a value of
@code{1} (one).
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_ISBLANK
A unary operator with output that is @code{1} for any element in the input
that is blank, and @code{0} for any non-blank element. When
@code{gal_arithmetic} is called with this operator, it will only expect one
input dataset. The output dataset will have @code{uint8_t} (or
@code{unsigned char}) type.

@code{gal_arithmetic} with this operator is just a wrapper for the
@code{gal_blank_flag} function of @ref{Library blank values} and this
operator is just included for completeness in arithmetic operations. So in
your program, it might be easier to just call @code{gal_blank_flag}.
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_WHERE
The three-operand @emph{where} operator thoroughly discussed in
@ref{Arithmetic operators}. When @code{gal_arithmetic} is called with this
operator, it will only expect three input datasets: the first (which is the
same as the returned dataset) is the array that will be modified. The
second is the condition dataset (that must have a @code{uint8_t} or
@code{unsigned char} type), and the third is the value to be used if
condition is non-zero.

As a result, note that the order of operands when calling
@code{gal_arithmetic} with @code{GAL_ARITHMETIC_OP_WHERE} is the opposite
of running Gnuastro's Arithmetic program with the @code{where} operator
(see @ref{Arithmetic}). This is because the latter uses the reverse-Polish
notation which isn't necessary when calling a function (see @ref{Reverse
polish notation}).
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_SQRT
@deffnx Macro GAL_ARITHMETIC_OP_LOG
@deffnx Macro GAL_ARITHMETIC_OP_LOG10
Unary operator functions for calculating the square root
(@mymath{\sqrt{i}}), @mymath{ln(i)} and @mymath{log(i)} mathematic
operators on each element of the input dataset. The returned dataset will
have a floating point type, but its precision is determined from the input:
if the input is a 64-bit floating point, the output will also be
64-bit. Otherwise, the returned dataset will be 32-bit floating point. See
@ref{Numeric data types} for the respective precision.

If you want your output to be 64-bit floating point but your input is a
different type, you can convert the input to a floating point type with
@code{gal_data_copy_to_new_type} or
@code{gal_data_copy_to_new_type_free} (see @ref{Copying datasets}).
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_RA_TO_DEGREE
@deffnx Macro GAL_ARITHMETIC_OP_DEC_TO_DEGREE
@deffnx Macro GAL_ARITHMETIC_OP_DEGREE_TO_RA
@deffnx Macro GAL_ARITHMETIC_OP_DEGREE_TO_DEC
@cindex Sexagesimal
@cindex Declination
@cindex Right Ascension
Unary operators to convert between the sexagesimal Right Ascension and Declination formats (as strings, respectively in the format of @code{_h_m_s} and @code{_d_m_s}) and degrees (as a single floating point number).
The first two operators expect a string operand (in the sexagesimal formats mentioned above, but also in the @code{_:_:_} format) and will return a double-precision floating point operand.
The latter two are the opposite.
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_MINVAL
@deffnx Macro GAL_ARITHMETIC_OP_MAXVAL
@deffnx Macro GAL_ARITHMETIC_OP_NUMBERVAL
@deffnx Macro GAL_ARITHMETIC_OP_SUMVAL
@deffnx Macro GAL_ARITHMETIC_OP_MEANVAL
@deffnx Macro GAL_ARITHMETIC_OP_STDVAL
@deffnx Macro GAL_ARITHMETIC_OP_MEDIANVAL
Unary operand statistical operators that will return a single value for
datasets of any size. These are just wrappers around similar functions in
@ref{Statistical operations} and are included in @code{gal_arithmetic} only
for completeness (to use easily in @ref{Arithmetic}). In your programs, it
will probably be easier if you use those @code{gal_statistics_} functions
directly.
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_ABS
Unary operand absolute-value operator.
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_MIN
@deffnx Macro GAL_ARITHMETIC_OP_MAX
@deffnx Macro GAL_ARITHMETIC_OP_NUMBER
@deffnx Macro GAL_ARITHMETIC_OP_SUM
@deffnx Macro GAL_ARITHMETIC_OP_MEAN
@deffnx Macro GAL_ARITHMETIC_OP_STD
@deffnx Macro GAL_ARITHMETIC_OP_MEDIAN
Multi-operand statistical operations. When @code{gal_arithmetic} is called
with any of these operators, it will expect only a single operand that will
be interpreted as a list of datasets (see @ref{List of gal_data_t}). These
operators can work on multiple threads using the @code{numthreads}
argument. See the discussion under the @code{min} operator in
@ref{Arithmetic operators}.

The output will be a single dataset with each of its elements replaced by
the respective statistical operation on the whole list. The type of the
output is determined from the operator (irrespective of the input type):
for @code{GAL_ARITHMETIC_OP_MIN} and @code{GAL_ARITHMETIC_OP_MAX}, it will
be the same type as the input, for @code{GAL_ARITHMETIC_OP_NUMBER}, the
output will be @code{GAL_TYPE_UINT32} and for the rest, it will be
@code{GAL_TYPE_FLOAT32}.
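
For example, here is a minimal sketch (assuming @code{in_1} and
@code{in_2} are two already-loaded datasets of the same size) that
computes the element-by-element mean of the two on one thread:

@example
gal_data_t *mean;

/* Build the input list manually through the `next' pointer. */
in_1->next=in_2;
in_2->next=NULL;

/* Each output element is the mean of the respective inputs. */
mean=gal_arithmetic(GAL_ARITHMETIC_OP_MEAN, 1, 0, in_1);
@end example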
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_QUANTILE
Similar to the operators above (including @code{GAL_ARITHMETIC_OP_MIN}), except that when @code{gal_arithmetic} is called with this operator, it requires two arguments.
The first is the list of datasets like before, and the second is the 1-element dataset with the quantile value.
The output type is the same as the inputs.
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_SIGCLIP_STD
@deffnx Macro GAL_ARITHMETIC_OP_SIGCLIP_MEAN
@deffnx Macro GAL_ARITHMETIC_OP_SIGCLIP_MEDIAN
@deffnx Macro GAL_ARITHMETIC_OP_SIGCLIP_NUMBER
Similar to the operators above (including @code{GAL_ARITHMETIC_OP_MIN}),
except that when @code{gal_arithmetic} is called with these operators, it
requires two arguments. The first is the list of datasets like before, and the
second is the 2-element list of @mymath{\sigma}-clipping parameters. The first
element in the parameters list is the multiple of sigma and the second is
the termination criterion (see @ref{Sigma clipping}). The output type of
@code{GAL_ARITHMETIC_OP_SIGCLIP_NUMBER} will be @code{GAL_TYPE_UINT32} and
for the rest it will be @code{GAL_TYPE_FLOAT32}.
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_SIZE
Size operator that will return a single value for datasets of any kind. When @code{gal_arithmetic} is called with this operator, it requires two arguments.
The first is the dataset, and the second is a single integer value specifying the dimension whose length should be returned (counting from 1).
The output is a single integer.
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_POW
Binary to-power operator. When @code{gal_arithmetic} is called with this operator, it will expect two operands: raising the first to the power of the second (returning a floating point, though the inputs can be integers).
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_BITAND
@deffnx Macro GAL_ARITHMETIC_OP_BITOR
@deffnx Macro GAL_ARITHMETIC_OP_BITXOR
@deffnx Macro GAL_ARITHMETIC_OP_BITLSH
@deffnx Macro GAL_ARITHMETIC_OP_BITRSH
@deffnx Macro GAL_ARITHMETIC_OP_MODULO
Binary integer-only operand operators. These operators are only defined on
integer data types. When @code{gal_arithmetic} is called with any of these
operators, it will expect two operands: the first is put on the left of the
operator and the second on the right. The ones starting with @code{BIT} are
the respective bit-wise operators in C and @code{MODULO} is the
modulo/remainder operator. For a discussion on these operators, please see
@ref{Arithmetic operators}.

The output type is determined from the input types and C's internal
conversions: it is strongly recommended that both inputs have the same type
(any integer type), otherwise the bit-wise behavior will be determined by
your compiler.
@end deffn

@deffn Macro GAL_ARITHMETIC_OP_BITNOT
The unary bit-wise NOT operator. When @code{gal_arithmetic} is called with
this operator, it will expect one operand of an integer type and
perform the bitwise-NOT operation on it. The output will have the same type
as the input.
@end deffn


@deffn  Macro GAL_ARITHMETIC_OP_TO_UINT8
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT8
@deffnx Macro GAL_ARITHMETIC_OP_TO_UINT16
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT16
@deffnx Macro GAL_ARITHMETIC_OP_TO_UINT32
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT32
@deffnx Macro GAL_ARITHMETIC_OP_TO_UINT64
@deffnx Macro GAL_ARITHMETIC_OP_TO_INT64
@deffnx Macro GAL_ARITHMETIC_OP_TO_FLOAT32
@deffnx Macro GAL_ARITHMETIC_OP_TO_FLOAT64
Unary type-conversion operators. When @code{gal_arithmetic} is called with
any of these operators, it will expect one operand and convert it to the
requested type. Note that with these operators, @code{gal_arithmetic} is
just a wrapper over the @code{gal_data_copy_to_new_type} or
@code{gal_data_copy_to_new_type_free} that are discussed in @ref{Copying
datasets}. It accepts these operators only for completeness and easy usage
in @ref{Arithmetic}. So in your programs, it might be preferable to
directly use those functions.
@end deffn

@deffn  Macro GAL_ARITHMETIC_OP_MAKENEW
Create a new, zero-valued dataset with an unsigned 8-bit data type.
The length along each dimension of the dataset should be given as a single list of @code{gal_data_t}s.
The number of dimensions is derived from the number of nodes in the list, and the length along each dimension is the single value within each node.
Just note that the list should be in the reverse order of the desired dimensions.
@end deffn

@deftypefun {gal_data_t *} gal_arithmetic (int @code{operator}, size_t @code{numthreads}, int @code{flags}, ...)
Do the arithmetic operation of @code{operator} on the given operands (the
third argument and any further argument). If the operator can work on
multiple threads, the number of threads can be specified with
@code{numthreads}. When the operator is single-threaded, @code{numthreads}
will be ignored. Special conditions can also be specified with the
@code{flags} argument (a bit-flag with bits described above, for example
@code{GAL_ARITHMETIC_INPLACE} or @code{GAL_ARITHMETIC_FREE}). The
acceptable values for @code{operator} are also defined in the macros above.

@code{gal_arithmetic} is a multi-argument function (like C's
@code{printf}). In other words, the number of necessary arguments is not
fixed and depends on the value of @code{operator}. Here are a few examples
showing this variability:

@example
out_1=gal_arithmetic(GAL_ARITHMETIC_OP_LOG,   1, 0, in_1);
out_2=gal_arithmetic(GAL_ARITHMETIC_OP_PLUS,  1, 0, in_1, in_2);
out_3=gal_arithmetic(GAL_ARITHMETIC_OP_WHERE, 1, 0, in_1, in_2, in_3);
@end example

The number of necessary operands for each operator (and thus the number of
necessary arguments to @code{gal_arithmetic}) are described above under
each operator.
@end deftypefun

@deftypefun int gal_arithmetic_set_operator (char @code{*string}, size_t @code{*num_operands})
Return the operator macro/code that corresponds to @code{string}. The
number of operands that it needs is written into the space that
@code{*num_operands} points to. If the string couldn't be interpreted as
an operator, this function will return @code{GAL_ARITHMETIC_OP_INVALID}.

This function compares @code{string} with the fixed human-readable names of
the operators (using @code{strcmp}) to find the two numbers above. Note
that @code{string} must only contain the single operator name and nothing
else (not even any extra white space).
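
For example, here is a minimal sketch of finding the operator code and
number of operands from a string:

@example
int op;
size_t nop;

op=gal_arithmetic_set_operator("mean", &nop);
if(op==GAL_ARITHMETIC_OP_INVALID)
  @{ /* The string wasn't recognized, abort with an error. */ @}
@end example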
@end deftypefun

@deftypefun {char *} gal_arithmetic_operator_string (int @code{operator})
Return the human-readable standard string that corresponds to the given
operator. For example when the input is @code{GAL_ARITHMETIC_OP_PLUS} or
@code{GAL_ARITHMETIC_OP_MEAN}, the strings @code{+} or @code{mean} will be
returned.
@end deftypefun

@node Tessellation library, Bounding box, Arithmetic on datasets, Gnuastro library
@subsection Tessellation library (@file{tile.h})

In many contexts, it is desirable to slice the dataset into subsets or
tiles (overlapping or not), such that you can work on each tile
independently. One method would be to copy that region to a separate
allocated space, but in many contexts this isn't necessary and in fact can
be a big burden on CPU/memory usage. The @code{block} pointer in Gnuastro's
@ref{Generic data container} is defined for such situations: where
allocation is not necessary. You just want to read the data or write to it
independently of (or in coordination with) other regions of the dataset.
Combined with parallel processing, this can greatly reduce the time/memory
consumption.

See the figure below for an example: assume the @code{larger} dataset is a
contiguous block of memory that you are interpreting as a 2D array, but
you only want to work on the smaller @code{tile} region.

@example
                            larger
              ---------------------------------
              |                               |
              |              tile             |
              |           ----------          |
              |           |        |          |
              |           |_       |          |
              |           |*|      |          |
              |           ----------          |
              |       tile->block = larger    |
              |_                              |
              |*|                             |
              ---------------------------------
@end example

To use @code{gal_data_t}'s @code{block} concept, you allocate a
@code{gal_data_t *tile} which is initialized with the pointer to the first
element in the sub-array (as its @code{array} argument). Note that this is
not necessarily the first element in the larger array. You can set the size
of the tile along with the initialization as you please. Recall that, when
given a non-@code{NULL} pointer as @code{array}, @code{gal_data_initialize}
(and thus @code{gal_data_alloc}) will not allocate any space and will just
use the given pointer for the new @code{array} element of the
@code{gal_data_t}. So your @code{tile} data structure will not be pointing
to a separately allocated space.

After the allocation is done, you just point @code{tile->block} to the
@code{larger} dataset which hosts the full block of memory. Where relevant,
Gnuastro's library functions will check the @code{block} pointer of their
input dataset to see how to deal with dimensions and increments so they can
always remain within the tile. The tools introduced in this section are
designed to help in defining and working with tiles that are created in
this manner.
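
For example, here is a minimal sketch of the steps above, assuming
@code{larger} is an already-allocated 100x100 dataset of type
@code{GAL_TYPE_FLOAT32} (the sizes and offsets here are hypothetical):

@example
gal_data_t *tile;
size_t dsize[2]=@{10, 10@};

/* Pointer to the tile's first element within `larger' (the
 * element on row 20, column 30, in C order). */
float *start = (float *)(larger->array) + 20*100 + 30;

/* No space is allocated here, `start' is used directly. */
tile=gal_data_alloc(start, larger->type, 2, dsize, NULL, 0,
                    -1, 1, NULL, NULL, NULL);

/* Inform library functions of the full block. */
tile->block=larger;
@end example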

Since the block structure is defined as a pointer, arbitrary levels of
tessellation/grid-ing are possible (@code{tile->block} may itself be a tile
in an even larger allocated space). Therefore, just like a linked-list (see
@ref{Linked lists}), it is important to have the @code{block} pointer of
the largest (allocated) dataset set to @code{NULL}. Normally, you won't
have to worry about this, because @code{gal_data_initialize} (and thus
@code{gal_data_alloc}) will set the @code{block} element to @code{NULL} by
default; just remember not to change it. You can then only change the
@code{block} element for the tiles you define over the allocated space.

Below, we will first review constructs for @ref{Independent tiles} and then
define the current approach to fully tessellating a dataset (or covering
every pixel/data-element with a non-overlapping tile grid) in @ref{Tile
grid}. This approach to dealing with parts of a larger block was inspired
by a similarly named concept in the GNU Scientific Library (GSL), see its
``Vectors and Matrices'' chapter for their implementation.



@menu
* Independent tiles::           Work on or check independent tiles.
* Tile grid::                   Cover a full dataset with non-overlapping tiles.
@end menu

@node Independent tiles, Tile grid, Tessellation library, Tessellation library
@subsubsection Independent tiles

The most general application of tiles is to treat each one independently:
for example, they may overlap, or they may not cover the full image. This
section provides functions to help in checking/inspecting such tiles. In
@ref{Tile grid} we will discuss functions that define/work-with a tile grid
(where the tiles don't overlap and fully cover the input
dataset). The functions in this section are therefore general and can also
be used for the tiles produced by that section.

@deftypefun void gal_tile_start_coord (gal_data_t @code{*tile}, size_t @code{*start_coord})
Calculate the starting coordinates of a tile in its allocated block of
memory and write them in the memory that @code{start_coord} points to
(which must have @code{tile->ndim} elements).
@end deftypefun

@deftypefun void gal_tile_start_end_coord (gal_data_t @code{*tile}, size_t @code{*start_end}, int @code{rel_block})
Put the starting and ending (end point is not inclusive) coordinates of
@code{tile} into the @code{start_end} array. It is assumed that a space of
@code{2*tile->ndim} has been already allocated (static or dynamic) for
@code{start_end} before this function is called.

@code{rel_block} (or relative-to-block) is only relevant when @code{tile}
has an intermediate tile between it and the allocated space (like a
channel, see @code{gal_tile_full_two_layers}). If it doesn't
(@code{tile->block} points to the allocated dataset), then the value of
@code{rel_block} is irrelevant.

When @code{tile->block} is itself a larger block and @code{rel_block} is
set to 0, then the starting and ending positions will be based on the
position within @code{tile->block}, not the allocated space.
@end deftypefun

@deftypefun {void *} gal_tile_start_end_ind_inclusive (gal_data_t @code{*tile}, gal_data_t @code{*work}, size_t @code{*start_end_inc})
Put the indexs of the first/start and last/end pixels (inclusive) in a tile
into the @code{start_end} array (that must have two elements). NOTE: this
function stores the index of each point, not its coordinates. It will then
return the pointer to the start of the tile in the @code{work} data
structure (which doesn't have to be equal to @code{tile->block}).

The outputs of this function are defined to make it easy to parse over an
n-dimensional tile. For example, this function is one of the most important
parts of the internal processing in the @code{GAL_TILE_PARSE_OPERATE}
function-like macro that is described below.
@end deftypefun

@deftypefun {gal_data_t *} gal_tile_series_from_minmax (gal_data_t @code{*block}, size_t @code{*minmax}, size_t @code{number})
Construct a list of tile(s) given coordinates of the minimum and maximum of
each tile. The minimums and maximums are assumed to be inclusive and in C
order (slowest dimension first). The returned pointer is an allocated
@code{gal_data_t} array that can later be freed with
@code{gal_data_array_free} (see @ref{Arrays of datasets}). Internally, each
element of the output array points to the next element, so the output may
also be treated as a list of datasets (see @ref{List of gal_data_t}) and
passed onto the other functions described in this section.

The array keeping the minimum and maximum coordinates for each tile must
have the following format (so in total, @code{minmax} must have
@code{2*ndim*number} elements):

@example
| min0_d0 | min0_d1 | max0_d0 | max0_d1 | ...
                ... | minN_d0 | minN_d1 | maxN_d0 | maxN_d1 |
@end example
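
For example, here is a minimal sketch defining two 5x5 tiles over an
already-allocated 2D dataset @code{block} (so @code{ndim=2} and
@code{number=2}, hence 8 elements in @code{minmax}):

@example
gal_data_t *tiles;
size_t minmax[8]=@{ 0, 0, 4, 4,     /* Tile 0: (0,0) to (4,4). */
                   5, 5, 9, 9 @};   /* Tile 1: (5,5) to (9,9). */

tiles=gal_tile_series_from_minmax(block, minmax, 2);
@end example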
@end deftypefun

@deftypefun {gal_data_t *} gal_tile_block (gal_data_t @code{*tile})
Return the dataset that contains @code{tile}'s allocated block of
memory. If tile is immediately defined as part of the allocated block, then
this is equivalent to @code{tile->block}. However, it is possible to have
multiple layers of tiles (where @code{tile->block} is itself a tile). So
this function is the most generic way to get to the actual allocated
dataset.
@end deftypefun

@deftypefun size_t gal_tile_block_increment (gal_data_t @code{*block}, size_t @code{*tsize}, size_t @code{num_increment}, size_t @code{*coord})
Return the increment necessary to start at the next contiguous patch of memory
associated with a tile. @code{block} is the allocated block of memory and
@code{tsize} is the size of the tile along every dimension. If @code{coord}
is @code{NULL}, it is ignored. Otherwise, it will contain the coordinate of
the start of the next contiguous patch of memory.

This function is intended to be used in a loop and @code{num_increment} is
the main variable to this function. The first time you call this
function, it should be @code{1}. In subsequent calls (while you are parsing
a tile), it should be increased by one.
@end deftypefun

@deftypefun {gal_data_t *} gal_tile_block_write_const_value (gal_data_t @code{*tilevalues}, gal_data_t @code{*tilesll}, int @code{withblank}, int @code{initialize})
Write a constant value for each tile over the area it covers in an
allocated dataset that is the size of @code{tile}'s allocated block of
memory (found through @code{gal_tile_block} described above). The arguments
to this function are:

@table @code
@item tilevalues
This must be an array that has the same number of elements as the nodes
in @code{tilesll}, and in the same order that @code{tilesll} elements are parsed
(from top to bottom, see @ref{Linked lists}). As a result the array's
number of dimensions is irrelevant, it will be parsed contiguously.

@item tilesll
The list of input tiles (see @ref{List of gal_data_t}). Internally, it
might be stored as an array (for example the output of
@code{gal_tile_series_from_minmax} described above), but this function
doesn't care, it will parse the @code{next} elements to go to the next
tile. This function will not pop-from or free the @code{tilesll}, it will
only parse it from start to end.

@item withblank
If the block containing the tiles has blank elements, those blank elements
will also be blank in the output of this function; hence the output will be
initialized with blank values when this argument is non-zero (see
@code{initialize} below).

@item initialize
Initialize the allocated space with blank values before writing in the
constant values. This can be useful when the tiles don't cover the full
allocated block.
@end table
@end deftypefun

@deftypefun {gal_data_t *} gal_tile_block_check_tiles (gal_data_t @code{*tilesll})
Make a copy of the memory block and fill it with the index of each tile in
@code{tilesll} (counting from 0). The non-filled areas will have blank
values. The output dataset will have a type of @code{GAL_TYPE_INT32} (see
@ref{Library data types}).

This function can be used when you want to check the coverage of each tile
over the allocated block of memory. It is just a wrapper over the
@code{gal_tile_block_write_const_value} (with @code{withblank} set to zero).
@end deftypefun

@deftypefun {void *} gal_tile_block_relative_to_other (gal_data_t @code{*tile}, gal_data_t @code{*other})
Return the pointer corresponding to the start of the region covered by
@code{tile} over the @code{other} dataset. See the examples in
@code{GAL_TILE_PARSE_OPERATE} for some example applications of this
function.
@end deftypefun


@deftypefun void gal_tile_block_blank_flag (gal_data_t @code{*tilell}, size_t @code{numthreads})
Check if each tile in the list has blank values and update its @code{flag}
to mark this check and its result (see @ref{Generic data container}). The
operation will be done on @code{numthreads} threads.
@end deftypefun


@deffn {Function-like macro} GAL_TILE_PARSE_OPERATE (@code{IN}, @code{OTHER}, @code{PARSE_OTHER}, @code{CHECK_BLANK}, @code{OP})
Parse @code{IN} (which can be a tile or a fully allocated block of memory)
and do the @code{OP} operation on it. @code{OP} can be any combination of C
expressions. If @code{OTHER!=NULL}, @code{OTHER} will be interpreted as a
dataset and this macro will allow access to its element(s) and it can
optionally be parsed while parsing over @code{IN}.

If @code{OTHER} is a fully allocated block of memory (not a tile), then the
same region that is covered by @code{IN} within its own block will be
parsed (the same starting pixel with the same number of pixels in each
dimension). Hence, in this case, the blocks of @code{OTHER} and @code{IN}
must have the same size. When @code{OTHER} is a tile it must have the same
size as @code{IN} and parsing will start from its starting element/pixel.
Also, the respective allocated blocks of @code{OTHER} and @code{IN} (if
different) may have different sizes. Using @code{OTHER} (along with
@code{PARSE_OTHER}), this function-like macro will thus enable you to parse
and define your own operation on two fixed size regions in one or two
blocks of memory. In the latter case, they may have different numeric
data types (see @ref{Numeric data types}).

The input arguments to this macro are explained below; the expected type of
each argument is also written following the argument name:

@table @code
@item IN (gal_data_t)
Input dataset, this can be a tile or an allocated block of memory.

@item OTHER (gal_data_t)
Dataset (@code{gal_data_t}) to parse along with @code{IN}. It can be
@code{NULL}. In that case, @code{o} (see description of @code{OP} below)
will be @code{NULL} and should not be used. If @code{PARSE_OTHER} is zero,
only its first element will be used and the size of this dataset is
irrelevant.

When @code{OTHER} is a block of memory, it has to have the same size as the
allocated block of @code{IN}. When it's a tile, it has to have the same size
as @code{IN}.

@item PARSE_OTHER (int)
Parse the other dataset along with the input. When this is non-zero and
@code{OTHER!=NULL}, then the @code{o} pointer will be incremented to cover
the @code{OTHER} tile at the same rate as @code{i}, see description of
@code{OP} for @code{i} and @code{o}.

@item CHECK_BLANK (int)
If it is non-zero, then the input will be checked for blank values and
@code{OP} will only be called when we are not on a blank element.

@item OP
Operator: this can be any number of C expressions. This macro is going to
define an @code{itype *i} variable which will increment over each element of
the input array/tile. @code{itype} will be replaced with the C type that
corresponds to the type of @code{IN}. As an example, if @code{IN}'s
type is @code{GAL_TYPE_UINT16} or @code{GAL_TYPE_FLOAT32}, @code{i} will be
defined as @code{uint16_t *} or @code{float *} respectively.

This function-like macro will also define an @code{otype *o} which you can
use to access an element of the @code{OTHER} dataset (if
@code{OTHER!=NULL}). @code{o} will correspond to the type of @code{OTHER}
(similar to @code{itype} and @code{IN} discussed above). If
@code{PARSE_OTHER} is non-zero, then @code{o} will also be incremented to
the same index element but in the other array. You can use these along with
any other variable you define before this macro to process the input and/or
the other.

All variables within this function-like macro begin with @code{tpo_} except
for the three variables listed below. Therefore, as long as you don't start
the names of your variables with this prefix everything will be fine. Note
that @code{i} (and possibly @code{o}) will be incremented once by this
function-like macro, so don't increment them within @code{OP}.

@table @code
@item i
Pointer to the element of @code{IN} that is being parsed with the proper
type.

@item o
Pointer to the element of @code{OTHER} that is being parsed with the proper
type. @code{o} can only be used if @code{OTHER!=NULL} and it will be
parsed/incremented if @code{PARSE_OTHER} is non-zero.

@item b
Blank value in the type of @code{IN}.
@end table
@end table

You can use a given tile (@code{tile}) on a dataset that it was not
initialized with (but that has the same size, let's call it @code{new}) with
the following steps:

@example
void *tarray;
gal_data_t *tblock;

/* `tile->block' must be corrected AFTER `tile->array'. */
tarray      = tile->array;
tblock      = tile->block;
tile->array = gal_tile_block_relative_to_other(tile, new);
tile->block = new;

/* Parse and operate over this region of the `new' dataset. */
GAL_TILE_PARSE_OPERATE(tile, NULL, 0, 0, @{
    YOUR_PROCESSING;
  @});

/* Reset `tile->block' and `tile->array'. */
tile->array=tarray;
tile->block=tblock;
@end example

You can work on the same region of another block in one run of this
function-like macro. To do that, you can make a fake tile and pass that as
the @code{OTHER} argument. Below is a demonstration, @code{tile} is the
actual tile that you start with and @code{new} is the other block of
allocated memory.

@example
size_t zero=0;
gal_data_t *faketile;

/* Allocate the fake tile, these can be done outside a loop
 * (over many tiles). */
faketile=gal_data_alloc(NULL, new->type, 1, &zero,
                        NULL, 0, -1, 1, NULL, NULL, NULL);
free(faketile->array);               /* To keep things clean. */
free(faketile->dsize);               /* To keep things clean. */
faketile->block = new;
faketile->ndim  = new->ndim;

/* These can be done in a loop (over many tiles). */
faketile->size  = tile->size;
faketile->dsize = tile->dsize;
faketile->array = gal_tile_block_relative_to_other(tile, new);

/* Do your processing.... in a loop (over many tiles). */
GAL_TILE_PARSE_OPERATE(tile, faketile, 1, 1, @{
    YOUR_PROCESSING_EXPRESSIONS;
  @});

/* Clean up (outside the loop). */
faketile->array=NULL;
faketile->dsize=NULL;
gal_data_free(faketile);
@end example
@end deffn



@node Tile grid,  , Independent tiles, Tessellation library
@subsubsection Tile grid

One very useful application of tiles is to completely cover an input
dataset with tiles, such that every pixel/data-element of the
input image is covered by exactly one tile. The constructs in this section
allow easy definition of such a tile structure. They will create lists of
tiles that are also usable by the general tools discussed in
@ref{Independent tiles}.

As discussed in @ref{Tessellation}, (mainly raw) astronomical images will
mostly require two layers of tessellation, one for amplifier channels which
all have the same size and another (smaller tile-size) tessellation over
each channel. Hence, in this section we define a general structure to keep
the main parameters of this two-layer tessellation and help in benefiting
from it.

@deftp {Type (C @code{struct})} gal_tile_two_layer_params
The general structure to keep all the necessary parameters for a two-layer
tessellation.

@example
struct gal_tile_two_layer_params
@{
  /* Inputs */
  size_t             *tilesize;  /*******************************/
  size_t          *numchannels;  /* These parameters have to be */
  float          remainderfrac;  /* filled manually before      */
  uint8_t           workoverch;  /* calling the functions in    */
  uint8_t           checktiles;  /* this section.               */
  uint8_t       oneelempertile;  /*******************************/

  /* Internal parameters. */
  size_t                  ndim;
  size_t              tottiles;
  size_t          tottilesinch;
  size_t           totchannels;
  size_t          *channelsize;
  size_t             *numtiles;
  size_t         *numtilesinch;
  char          *tilecheckname;
  size_t          *permutation;
  size_t           *firsttsize;

  /* Tile and channel arrays (which are also lists). */
  gal_data_t            *tiles;
  gal_data_t         *channels;
@};
@end example
@end deftp

@deftypefun {size_t *} gal_tile_full (gal_data_t @code{*input}, size_t @code{*regular}, float @code{remainderfrac}, gal_data_t @code{**out}, size_t @code{multiple}, size_t @code{**firsttsize})
Cover the full dataset with (mostly) identical tiles and return the number
of tiles created along each dimension. The regular tile size (along each
dimension) is determined from the @code{regular} array. If @code{input}'s
size is not an exact multiple of @code{regular} for each dimension, then
the tiles touching the edges in that dimension will have a different size
to fully cover every element of the input (depending on
@code{remainderfrac}).

The output is an array with @code{input->ndim} elements that contains the
number of tiles along each dimension. See @ref{Tessellation}
for a description of its application in Gnuastro's programs and of
@code{remainderfrac}; just note that this function defines only one layer
of tiles.

This is a low-level function (independent of the
@code{gal_tile_two_layer_params} structure defined above). If you want a
two-layer tessellation, directly call @code{gal_tile_full_two_layers} that
is described below. The input arguments to this function are:

@table @code
@item input
The main dataset (allocated block) which you want to create a tessellation
over (only used for its sizes). So @code{input} may be a tile also.

@item regular
The size of the regular tiles along each of the input's dimensions. It
must therefore have the same number of elements as the dimensions of
@code{input} (or @code{input->ndim}).

@item remainderfrac
The significant fraction of the remainder space to decide if it should be
split into two and put on both sides of a dimension or not. This is thus
only relevant when @code{input}'s length along a dimension isn't an exact
multiple of the regular tile size along that dimension. See
@ref{Tessellation} for a more thorough discussion.

@item out
Pointer to the array of data structures that will keep all the tiles (see
@ref{Arrays of datasets}). If @code{*out==NULL}, then the necessary space
to keep all the tiles will be allocated. If not, then all the tile
information will be filled from the dataset that @code{*out} points to, see
@code{multiple} for more.

@item multiple
When @code{*out==NULL} (and thus will be allocated by this function),
allocate space for @code{multiple} times the number of tiles needed. This
can be very useful when you have several more identically sized
@code{inputs}, and you want all their tiles to be allocated (and thus
indexed) together, even though they have different @code{block} datasets
(that then link to one allocated space). See the definition of channels in
@ref{Tessellation} and @code{gal_tile_full_two_layers} below.

@item firsttsize
The size of the first tile along every dimension. This is only different
from the regular tile size when @code{input}'s length is not an exact
multiple of @code{regular} along that dimension. This array is allocated
internally by this function.
@end table
@end deftypefun


@deftypefun void gal_tile_full_sanity_check (char @code{*filename}, char @code{*hdu}, gal_data_t @code{*input}, struct gal_tile_two_layer_params @code{*tl})
Make sure that the input parameters (in @code{tl}, short for two-layer)
correspond to the input dataset. @code{filename} and @code{hdu} are only
required for error messages. Also, allocate and fill the
@code{tl->channelsize} array.
@end deftypefun

@deftypefun void gal_tile_full_two_layers (gal_data_t @code{*input}, struct gal_tile_two_layer_params @code{*tl})
Create the two-layered tessellation in @code{tl}.  The general set of steps
you need to take to define the two-layered tessellation over an image can
be seen in the example code below.

@example
gal_data_t *input;
struct gal_tile_two_layer_params tl;
char *filename="input.fits", *hdu="1";

/* Set all the inputs shown in the structure definition. */
...

/* Read the input dataset. */
input=gal_fits_img_read(filename, hdu, -1, 1);

/* Do a sanity check and preparations. */
gal_tile_full_sanity_check(filename, hdu, input, &tl);

/* Build the two-layer tessellation. */
gal_tile_full_two_layers(input, &tl);

/* `tl.tiles' and `tl.channels' are now lists of tiles. */
@end example
@end deftypefun


@deftypefun void gal_tile_full_permutation (struct gal_tile_two_layer_params @code{*tl})
Make a permutation to allow the conversion of tile location in memory to
its location in the full input dataset and put it in
@code{tl->permutation}. If a permutation has already been defined for the
tessellation, this function will not do anything. The same applies when a
permutation is not necessary (there is only one channel, or only one
dimension); in that case note that @code{tl->permutation} must have been
initialized to @code{NULL}.

When there is only one channel OR one dimension, the tiles are allocated in
memory in the same order that they represent the input data. However, to
make channel-independent processing possible in a generic way, the tiles of
each channel are allocated contiguously. So, when there is more than one
channel AND more than one dimension, the index of the tile does not
correspond to its position in the grid covering the input dataset.

The example below may help clarify: assume you have a 6x6 tessellation with
two channels in the horizontal and one in the vertical. On the left you can
see how the tile IDs correspond to the input dataset. NOTE how `03' is on
the second row, not on the first after `02'. On the right, you can see how
the tiles are stored in memory (and shown if you simply write the array
into a FITS file for example).

@example
   Corresponding to input               In memory
   ----------------------             --------------
     15 16 17 33 34 35               30 31 32 33 34 35
     12 13 14 30 31 32               24 25 26 27 28 29
     09 10 11 27 28 29               18 19 20 21 22 23
     06 07 08 24 25 26      <--      12 13 14 15 16 17
     03 04 05 21 22 23               06 07 08 09 10 11
     00 01 02 18 19 20               00 01 02 03 04 05
@end example

As a result, if your values are stored in the same order as the tiles, and
you want them in over-all memory (for example to save as a FITS file), you
need to permute the values:

@example
gal_permutation_apply(values, tl->permutation);
@end example

If you have values over-all and you want them in tile-order, you can apply
the inverse permutation:

@example
gal_permutation_apply_inverse(values, tl->permutation);
@end example

Recall that this is the definition of permutation in this context:

@example
permute:    IN_ALL[ i       ]   =   IN_MEMORY[ perm[i] ]
inverse:    IN_ALL[ perm[i] ]   =   IN_MEMORY[ i       ]
@end example
@end deftypefun

@deftypefun void gal_tile_full_values_write (gal_data_t @code{*tilevalues}, struct gal_tile_two_layer_params @code{*tl}, int @code{withblank}, char @code{*filename}, gal_fits_list_key_t @code{*keys}, char @code{*program_string})
Write one value for each tile into a file. It is important to note that the
values in @code{tilevalues} must be ordered in the same manner as the
tiles, so @code{tilevalues->array[i]} is the value that should be given to
@code{tl->tiles[i]}. The @code{tl->permutation} array must have been
initialized before calling this function with
@code{gal_tile_full_permutation}.

If @code{withblank} is non-zero, then the block structure of the tiles will be
checked and all blank pixels in the block will be blank in the final output
file also.
@end deftypefun

@deftypefun {gal_data_t *} gal_tile_full_values_smooth (gal_data_t @code{*tilevalues}, struct gal_tile_two_layer_params @code{*tl}, size_t @code{width}, size_t @code{numthreads})
Smooth the given values with a flat kernel of the given @code{width}. This
cannot be done manually because if @code{tl->workoverch==0}, tiles in
different channels must not be mixed/smoothed. Also, the tiles are
contiguous within the channel, not within the image (see the description
under @code{gal_tile_full_permutation}).
@end deftypefun

@deftypefun size_t gal_tile_full_id_from_coord (struct gal_tile_two_layer_params @code{*tl}, size_t @code{*coord})
Return the ID of the tile that corresponds to the coordinates
@code{coord}. Having this ID, you can use the @code{tl->tiles} array to get
to the proper tile or read/write a value into an array that has one value
per tile.
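
For example, here is a minimal sketch (assuming the tessellation in
@code{tl} has already been built over a 2D dataset, as in the example of
@code{gal_tile_full_two_layers}):

@example
size_t id;
size_t coord[2]=@{21, 45@};   /* C order: slower dimension first. */

id=gal_tile_full_id_from_coord(&tl, coord);
@end example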
@end deftypefun

@deftypefun void gal_tile_full_free_contents (struct gal_tile_two_layer_params @code{*tl})
Free all the allocated arrays within @code{tl}.
@end deftypefun



@node Bounding box, Polygons, Tessellation library, Gnuastro library
@subsection Bounding box (@file{box.h})

Functions related to reporting the bounding box of certain inputs are
declared in @file{gnuastro/box.h}. All coordinates in this header are in
the FITS format (first axis is the horizontal and the second axis is
vertical).

@deftypefun void gal_box_bound_ellipse_extent (double @code{a}, double @code{b}, double @code{theta_deg}, double @code{*extent})
Return the maximum extent along each dimension of the given ellipse from
the center of the ellipse. Therefore this is half the extent of the box in
each dimension. @code{a} is the ellipse semi-major axis, @code{b} is the
semi-minor axis, @code{theta_deg} is the position angle in degrees. The
extent in each dimension is in floating point format and stored in
@code{extent} which must already be allocated before calling this function.
@end deftypefun

@deftypefun void gal_box_bound_ellipse (double @code{a}, double @code{b}, double @code{theta_deg}, long @code{*width})
Any ellipse can be enclosed into a rectangular box. This function will
write the height and width of that box where @code{width} points to. It
assumes the center of the ellipse is located within the central pixel of
the box. @code{a} is the ellipse semi-major axis length, @code{b} is the
semi-minor axis, @code{theta_deg} is the position angle in degrees. The
@code{width} array will contain the output size in long integer
type. @code{width[0]}, and @code{width[1]} are the number of pixels along
the first and second FITS axis. Since the ellipse center is assumed to be
in the center of the box, all the values in @code{width} will be an odd
integer.
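
For example, here is a minimal sketch for a (hypothetical) ellipse with a
semi-major axis of 10 pixels, a semi-minor axis of 4 pixels and a position
angle of 45 degrees:

@example
long width[2];

gal_box_bound_ellipse(10.0, 4.0, 45.0, width);
/* `width[0]' and `width[1]' now have the box's (odd) size
 * along the first and second FITS axes. */
@end example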
@end deftypefun

@deftypefun void gal_box_bound_ellipsoid_extent (double @code{*semiaxes}, double @code{*euler_deg}, double @code{*extent})
Return the maximum extent along each dimension of the given ellipsoid from
its center. Therefore this is half the extent of the box in each
dimension. The semi-axis lengths of the ellipsoid must be present in the 3
element @code{semiaxes} array. The @code{euler_deg} array contains the
three ellipsoid Euler angles in degrees. For a description of the Euler
angles, see description of @code{gal_box_bound_ellipsoid} below. The extent
in each dimension is in floating point format and stored in @code{extent}
which must already be allocated before calling this function.
@end deftypefun

@deftypefun void gal_box_bound_ellipsoid (double @code{*semiaxes}, double @code{*euler_deg}, long @code{*width})
Any ellipsoid can be enclosed into a rectangular volume/box. The purpose of
this function is to give the integer size/width of that box. The semi-axes
lengths of the ellipsoid must be in the @code{semiaxes} array (with three
elements). The major axis length must be the first element of
@code{semiaxes}. The only other condition is that the next two semi-axes
must both be smaller than the first. The orientation of the major axis is
defined through three proper Euler angles (ZXZ order in degrees) that are
given in the @code{euler_deg} array. The @code{width} array will contain
the output size in long integer type (in FITS axis order). Since the
ellipsoid center is assumed to be in the center of the box, all the values
in @code{width} will be an odd integer.

@cindex Euler angles
The proper Euler angles can be defined in many ways (which axes to rotate
about). For a full description of the Euler angles, please see
@url{https://en.wikipedia.org/wiki/Euler_angles, Wikipedia}. Here we adopt
the ZXZ (or @mymath{Z_1X_2Z_3}) proper Euler angles where the first rotation
is done around the Z axis, the second one about the (rotated) X axis and
the third about the (rotated) Z axis.
@end deftypefun

@deftypefun void gal_box_border_from_center (double @code{*center}, size_t @code{ndim}, long @code{*width}, long @code{*fpixel}, long @code{*lpixel})
Given the center coordinates in @code{center} and the @code{width} (along
each dimension) of a box, return the coordinates of the first
(@code{fpixel}) and last (@code{lpixel}) pixels. All arrays must have
@code{ndim} elements (one for each dimension).
@end deftypefun

@deftypefun int gal_box_overlap (long @code{*naxes}, long @code{*fpixel_i}, long @code{*lpixel_i}, long @code{*fpixel_o}, long @code{*lpixel_o}, size_t @code{ndim})
An @code{ndim}-dimensional dataset of size @code{naxes} (along each
dimension, in FITS order) and a box with first and last (inclusive)
coordinate of @code{fpixel_i} and @code{lpixel_i} is given. This box
doesn't necessarily have to lie within the dataset; it can be outside of
it, or only partially overlapping. This function will change the values of
@code{fpixel_i} and @code{lpixel_i} to exactly cover the overlap in the
input dataset's coordinates.

This function will return 1 if there is an overlap and 0 if there
isn't. When there is an overlap, the coordinates of the first and last
pixels of the overlap will be put in @code{fpixel_o} and @code{lpixel_o}.
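
For example, here is a minimal sketch checking a (hypothetical) box
against a 100x100 dataset (in FITS coordinates, counting from 1):

@example
long naxes[2]=@{100, 100@};
long fpixel_i[2]=@{95, 90@}, lpixel_i[2]=@{120, 130@};
long fpixel_o[2], lpixel_o[2];

if( gal_box_overlap(naxes, fpixel_i, lpixel_i,
                    fpixel_o, lpixel_o, 2) )
  @{
    /* `fpixel_i' and `lpixel_i' now cover only the overlap in
     * the dataset, and `fpixel_o'/`lpixel_o' have the overlap's
     * first and last pixels. */
  @}
@end example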
@end deftypefun







@node Polygons, Qsort functions, Bounding box, Gnuastro library
@subsection Polygons (@file{polygon.h})

Polygons are commonly necessary in image processing.
For example in Crop they are used for cutting out non-rectangular regions of an image (see @ref{Crop}), and in Warp, for mapping different pixel grids over each other (see @ref{Warp}).

@cindex Convex polygons
@cindex Concave polygons
@cindex Polygons, Convex
@cindex Polygons, Concave
Polygons come in two classes: convex and concave (or generally, non-convex!), see below for a demonstration.
Convex polygons are those where all inner angles are less than 180 degrees.
By contrast, a concave polygon is one where an inner angle may be more than 180 degrees.

@example
            Concave Polygon        Convex Polygon

             D --------C          D------------- C
              \        |        E /              |
               \E      |          \              |
               /       |           \             |
              A--------B             A ----------B
@end example

In all the functions here the vertices (and points) are defined as an array.
So a polygon with 4 vertices will be identified with an array of 8 elements, with the first two elements keeping the 2D coordinates of the first vertex and so on.

@deffn Macro GAL_POLYGON_MAX_CORNERS
The largest number of vertices a polygon can have in this library.
@end deffn

@deffn Macro GAL_POLYGON_ROUND_ERR
@cindex Round-off error
We have to consider floating point round-off errors when dealing with polygons.
For example we will take @code{A} as the maximum of @code{A} and @code{B} when @code{A>B-GAL_POLYGON_ROUND_ERR}.
@end deffn

@deftypefun void gal_polygon_vertices_sort_convex (double @code{*in}, size_t @code{n}, size_t @code{*ordinds})
We have a simple polygon (that can result from projection, so its edges don't intersect and it doesn't have holes) and we want to order its corners in an anticlockwise fashion.
This is necessary for clipping it and finding its area later.
The input vertices can have practically any order.

The input (@code{in}) is an array containing the coordinates (two values) of each vertex.
@code{n} is the number of corners.
So @code{in} should have @code{2*n} elements.
The output (@code{ordinds}) is an array with @code{n} elements specifying the indexs in order.
This array must have been allocated before calling this function.
The indexes are output for more generic usage, for example in a homographic transform (necessary in warping an image, see @ref{Warping basics}), the necessary order of vertices is the same for all the pixels.
In other words, only the positions of the vertices change, not the way they need to be ordered.
Therefore, this function would only be necessary once.

As a summary, the input is unchanged, only @code{n} values will be put in the @code{ordinds} array.
Such that calling the input coordinates in the following fashion will give an anti-clockwise order when there are 4 vertices:

@example
1st vertex: in[ordinds[0]*2], in[ordinds[0]*2+1]
2nd vertex: in[ordinds[1]*2], in[ordinds[1]*2+1]
3rd vertex: in[ordinds[2]*2], in[ordinds[2]*2+1]
4th vertex: in[ordinds[3]*2], in[ordinds[3]*2+1]
@end example

@cindex Convex Hull
@noindent
The implementation of this is very similar to the Graham scan in finding the Convex Hull.
However, in projection we will never have a concave polygon (the left case in the demonstration at the start of this section, where this algorithm would get to E before D); we will always have a convex polygon (the right case), or E won't exist!
This is because we are always going to be calculating the area of the overlap between a quadrilateral and the pixel grid, or the quadrilateral itself.

The @code{GAL_POLYGON_MAX_CORNERS} macro is defined so there will be no need to allocate these temporary arrays separately.
Since we are dealing with pixels, the polygon can't really have too many vertices.
@end deftypefun

@deftypefun int gal_polygon_is_convex (double @code{*v}, size_t @code{n})
Returns @code{1} if the polygon is convex with vertices defined by @code{v} and @code{0} if it is a concave polygon.
Note that the vertices of the polygon should be sorted in an anti-clockwise manner.
@end deftypefun

@deftypefun double gal_polygon_area (double @code{*v}, size_t @code{n})
Find the area of a polygon with vertices defined in @code{v}.
@code{v} points to an array of doubles which keep the positions of the vertices such that @code{v[0]} and @code{v[1]} are the positions of the first vertex to be considered.
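
For example, here is a minimal sketch for the unit square (with vertices already in an anti-clockwise order):

@example
double area;
double v[8]=@{ 0,0,  1,0,  1,1,  0,1 @};

area=gal_polygon_area(v, 4);     /* Will be 1.0. */
@end example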
@end deftypefun

@deftypefun int gal_polygon_is_inside (double @code{*v}, double @code{*p}, size_t @code{n})
Returns @code{1} if the point @code{p} is inside a polygon (either convex or concave) and @code{0} otherwise.
The vertices of the polygon are defined by @code{v} and they have to be ordered in an anti-clockwise manner.
This function uses the @url{https://en.wikipedia.org/wiki/Point_in_polygon#Winding_number_algorithm, winding number algorithm} to check the points.
Note that this is a generic function (working on both concave and convex polygons), so if you know before-hand that your polygon is convex, it is much more efficient to use @code{gal_polygon_is_inside_convex}.
@end deftypefun

@deftypefun int gal_polygon_is_inside_convex (double @code{*v}, double @code{*p}, size_t @code{n})
Return @code{1} if the point @code{p} is within the polygon whose vertices are defined by @code{v}.
The polygon is assumed to be convex, for a more generic function that deals with concave and convex polygons, see @code{gal_polygon_is_inside}.
Note that the vertices of the polygon have to be sorted in an anti-clockwise manner.
@end deftypefun

@deftypefun int gal_polygon_ppropin (double @code{*v}, double @code{*p}, size_t @code{n})
Similar to @code{gal_polygon_is_inside_convex}, except that if the point @code{p} is on
one of the edges of a polygon, this will return @code{0}.
@end deftypefun

@deftypefun int gal_polygon_is_counterclockwise (double @code{*v}, size_t @code{n})
Returns  @code{1} if the sorted polygon has a counter-clockwise orientation and @code{0} otherwise.
This function uses the concept of ``winding'', which defines the relative order in which the vertices of a polygon are listed to determine the orientation of vertices.
For complex polygons (where edges, or sides, intersect), the most significant orientation is returned.
In a complex polygon, when the alternative windings are equal (for example an @code{8}-shape) it will return @code{1} (as if it was counter-clockwise).
Note that the polygon vertices have to be sorted before calling this function.
@end deftypefun

@deftypefun int gal_polygon_to_counterclockwise (double @code{*v}, size_t @code{n})
Arrange the vertices of the sorted polygon in place, to be in a counter-clockwise direction.
If the input polygon already has a counter-clockwise direction it won't touch the input.
The return value is @code{1} on successful execution.
This function is just a wrapper over @code{gal_polygon_is_counterclockwise}, and will reverse the order of the vertices when necessary.
@end deftypefun

@deftypefun void gal_polygon_clip (double @code{*s}, size_t @code{n}, double @code{*c}, size_t @code{m}, double @code{*o}, size_t @code{*numcrn})
Clip (find the overlap of) two polygons: the ``subject'' polygon @code{s} (with @code{n} vertices) and the ``clip'' polygon @code{c} (with @code{m} vertices).
The vertices of the overlapping region are written into @code{o} and their number into @code{numcrn}.
This function uses the @url{https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm, Sutherland-Hodgman} polygon clipping algorithm.
Note that the vertices of both polygons have to be sorted in an anti-clockwise manner.
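
For example, here is a minimal sketch clipping two (hypothetical) overlapping squares, allocating generously for the output:

@example
size_t numcrn;
double o[2*GAL_POLYGON_MAX_CORNERS];
double s[8]=@{ 0,0,  2,0,  2,2,  0,2 @};   /* Subject polygon. */
double c[8]=@{ 1,1,  3,1,  3,3,  1,3 @};   /* Clip polygon.    */

gal_polygon_clip(s, 4, c, 4, o, &numcrn);
/* The overlap is the square from (1,1) to (2,2): `numcrn' will
 * be 4 and its vertices will be in `o'. */
@end example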
@end deftypefun

@deftypefun void gal_polygon_vertices_sort (double @code{*vertices}, size_t @code{n}, size_t @code{*ordinds})
Sort the indexs of the un-ordered @code{vertices} array to a counter-clockwise polygon in the already allocated space of @code{ordinds}.
It is assumed that there are @code{n} vertices, and thus that @code{vertices} contains @code{2*n} elements, where the two coordinates of the first vertex occupy the first two elements of the array and so on.

The polygon can be both concave and convex (see the start of this section).
However, note that for concave polygons there is no unique sort from an un-ordered set of vertices.
So after this function you may want to use @code{gal_polygon_is_convex} and print a warning to check the output if the polygon was concave.

Note that the contents of the @code{vertices} array are left untouched by this function.
If you want to write the ordered vertex coordinates in another array with the same size, you can use a loop like this:

@example
for(i=0;i<n;++i)
@{
  ordered[i*2  ] = vertices[ ordinds[i]*2    ];
  ordered[i*2+1] = vertices[ ordinds[i]*2 + 1];
@}
@end example

In this algorithm, we find the rightmost and leftmost points (based on their x-coordinate) and use the diagonal vector between those points to group the points in arrays based on their position with respect to this vector.
For anticlockwise sorting, all the points below the vector are sorted by their ascending x-coordinates and points above the vector are sorted in decreasing order using @code{qsort}.
Finally, both these arrays are merged together to get the final sorted array of points, from which the points are indexed into the @code{ordinds} using linear search.
@end deftypefun




@node Qsort functions, K-d tree, Polygons, Gnuastro library
@subsection Qsort functions (@file{qsort.h})

@cindex @code{qsort}
When sorting a dataset is necessary, the C programming language provides
the @code{qsort} (Quick sort) function. @code{qsort} is a generic function
which allows you to sort any kind of data structure (not just a single
array of numbers). To define ``greater'' and ``smaller'' (for sorting),
@code{qsort} needs another function, even for simple numerical types. The
functions introduced in this section are to be passed onto @code{qsort}.

@cindex NaN
Note that larger and smaller operators are not defined on NaN
elements. Therefore, if the input array is a floating point type, and
contains NaN values, the relevant functions of this section are going to
put the NaN elements at the end of the list (after the sorted non-NaN
elements), irrespective of the requested sorting order (increasing or
decreasing).

The first class of functions below (with @code{TYPE} in their names) can be
used for sorting a simple numeric array. Just replace @code{TYPE} with the
dataset's numeric datatype. The second set of functions can be used to sort
indexs (leaving the actual numbers untouched). To use the second set of
functions, a global variable or structure is also necessary, as described
below.

@deffn {Global variable} {gal_qsort_index_single}
@cindex Thread-safety
@cindex Multi-threaded operation
Pointer to an array (for example @code{float *} or @code{int *}) to use as
a reference in @code{gal_qsort_index_single_TYPE_d} or
@code{gal_qsort_index_single_TYPE_i}, see the explanation of these
functions for more. Note that if @emph{more than one} array is to be sorted
in a multi-threaded operation, these functions will not work as
expected. However, when all the threads just sort the indexs based on a
@emph{single array}, this global variable can safely be used in a
multi-threaded scenario.
@end deffn

@deftp {Type (C @code{struct})} gal_qsort_index_multi
Structure to get the sorted indexs of multiple datasets on multiple threads
with @code{gal_qsort_index_multi_d} or @code{gal_qsort_index_multi_i}. Note
that the @code{values} array will not be changed by these functions, it is
only read. Therefore all the @code{values} elements in the (to be sorted)
array of @code{gal_qsort_index_multi} must point to the same place.

@example
struct gal_qsort_index_multi
@{
  float *values;         /* Array of values (same in all).      */
  size_t  index;         /* Index of each element to be sorted. */
@};
@end example
@end deftp

@deftypefun int gal_qsort_TYPE_d (const void @code{*a}, const void @code{*b})
When passed to @code{qsort}, this function will sort a @code{TYPE} array in
decreasing order (first element will be the largest). Please replace
@code{TYPE} (in the function name) with one of the @ref{Numeric data
types}, for example @code{gal_qsort_int32_d}, or
@code{gal_qsort_float64_d}.
@end deftypefun

@deftypefun int gal_qsort_TYPE_i (const void @code{*a}, const void @code{*b})
When passed to @code{qsort}, this function will sort a @code{TYPE} array in
increasing order (first element will be the smallest). Please replace
@code{TYPE} (in the function name) with one of the @ref{Numeric data
types}, for example @code{gal_qsort_int32_i}, or
@code{gal_qsort_float64_i}.
@end deftypefun

@deftypefun int gal_qsort_index_single_TYPE_d (const void @code{*a}, const void @code{*b})
When passed to @code{qsort}, this function will sort a @code{size_t} array
based on decreasing values in the @code{gal_qsort_index_single}. The global
@code{gal_qsort_index_single} pointer has a @code{void *} pointer which
will be cast to the proper type based on this function: for example
@code{gal_qsort_index_single_uint16_d} will cast the array to an unsigned
16-bit integer type. The array that @code{gal_qsort_index_single} points to
will not be changed, it is only read. For example, see this demo program:

@example
#include <stdio.h>
#include <stdlib.h>           /* qsort is defined in stdlib.h. */
#include <gnuastro/qsort.h>

int
main (void)
@{
  size_t s[4]=@{0, 1, 2, 3@};
  float f[4]=@{1.3,0.2,1.8,0.1@};
  gal_qsort_index_single=f;
  qsort(s, 4, sizeof(size_t), gal_qsort_index_single_float32_d);
  printf("%zu, %zu, %zu, %zu\n", s[0], s[1], s[2], s[3]);
  return EXIT_SUCCESS;
@}
@end example

@noindent
The output will be: @code{2, 0, 1, 3}.
@end deftypefun

@deftypefun int gal_qsort_index_single_TYPE_i (const void @code{*a}, const void @code{*b})
Similar to @code{gal_qsort_index_single_TYPE_d}, but will sort the indexes
such that the values of @code{gal_qsort_index_single} can be parsed in
increasing order.
@end deftypefun

@deftypefun int gal_qsort_index_multi_d (const void @code{*a}, const void @code{*b})
When passed to @code{qsort} with an array of @code{gal_qsort_index_multi},
this function will sort the array based on the values of the given
indexs. The sorting will be ordered according to the @code{values} pointer
of @code{gal_qsort_index_multi}. Note that @code{values} must point to the
same place in all the structures of the @code{gal_qsort_index_multi} array.

This function is only useful when the indexs of multiple arrays on
multiple threads are to be sorted. If your program is single threaded, or
all the indexs belong to a single array (sorting different sub-sets of
indexs in a single array on multiple threads), it is recommended to use
@code{gal_qsort_index_single_TYPE_d}.
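
For example, here is a minimal sketch sorting the indexs of one array in
decreasing order (in a multi-threaded scenario, each thread would have its
own @code{perm} and @code{values}):

@example
size_t i;
float values[4]=@{1.3, 0.2, 1.8, 0.1@};
struct gal_qsort_index_multi perm[4];

/* The `values' pointer must be identical in all elements. */
for(i=0; i<4; ++i) @{ perm[i].values=values; perm[i].index=i; @}

qsort(perm, 4, sizeof *perm, gal_qsort_index_multi_d);
/* `perm[0].index' is now 2 (the index of 1.8). */
@end example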
@end deftypefun

@deftypefun int gal_qsort_index_multi_i (const void @code{*a}, const void *@code{b})
Similar to @code{gal_qsort_index_multi_d}, but the result will be sorted in
increasing order (first element will have the smallest value).
@end deftypefun





@node K-d tree, Permutations, Qsort functions, Gnuastro library
@subsection K-d tree (@file{kdtree.h})
@cindex K-d tree
A k-d tree is a space-partitioning binary search tree for organizing points in a k-dimensional space.
It is a very useful data structure for multidimensional searches like range searches and nearest neighbor searches.
For a more formal and complete introduction see @url{https://en.wikipedia.org/wiki/K-d_tree, the Wikipedia page}.

Each non-leaf node in a k-d tree divides the space into two parts, known as half-spaces.
To select the top/root node for partitioning, we find the median of the points along the first dimension and split the space with a hyperplane normal to that dimension.
The points to the left of this space are represented by the left subtree of that node and points to the right of the space are represented by the right subtree.
This is then repeated for all the points in the input, thus associating a ``left'' and ``right'' branch for each input point.

Gnuastro uses the standard algorithms of the k-d tree with one small difference that makes it much more memory- and CPU-efficient.
The set of input points that define the tree nodes are given as a list of Gnuastro's data container type, see @ref{List of gal_data_t}.
Each @code{gal_data_t} in the list represents the point's coordinate in one dimension, and the first element in the list is the first dimension.
Hence the number of data values in each @code{gal_data_t} (which must be equal in all of them) represents the number of points.
This is the same format in which Gnuastro's table reading/writing functions read/write table columns, see @ref{Table input output}.

The output k-d tree is a list of two @code{gal_data_t}s, representing the input's row-number (or index, counting from 0) of the left and right subtrees of each row.
Each @code{gal_data_t} thus has the same number of rows (or points) as the input, but only containing integers with a type of @code{uint32_t} (unsigned 32-bit integer).
If a node has no left, or right subtree, then @code{GAL_BLANK_UINT32} will be used.
Below you can see the simple tree for 2D points from Wikipedia.
The input point coordinates are represented as two input @code{gal_data_t}s (@code{X} and @code{Y}, where @code{X->next=Y} and @code{Y->next=NULL}).
If you had three dimensional points, you could define an extra @code{gal_data_t} such that @code{Y->next=Z} and @code{Z->next=NULL}.
The output is always a list of two @code{gal_data_t}s, where the first one contains the index of the left sub-tree in the input, and the second one, the index of the right subtree.
The index of the root node (@code{0} in the case below@footnote{This example input table is the same as the example in Wikipedia (as of December 2020).
However, on the Wikipedia output, the root node is (7,2), not (5,4).
The difference is primarily because there are 6 rows and the median of an even number of elements can differ based on the integer calculation strategy.
Here we use 0-based indexes for finding the median and round to the smaller integer.}) is also returned as a single number.

@example
INDEX         INPUT              OUTPUT              K-D Tree
(as guide)    X --> Y        LEFT --> RIGHT        (visualized)
----------    -------        --------------     ------------------
0             5     4        1        2               (5,4)
1             2     3        BLANK    4               /   \
2             7     2        5        3           (2,3)    \
3             9     6        BLANK    BLANK           \    (7,2)
4             4     7        BLANK    BLANK         (4,7)  /   \
5             8     1        BLANK    BLANK               (8,1) (9,6)
@end example

This format is therefore scalable to any number of dimensions: the number of dimensions is determined from the number of nodes in the input list of @code{gal_data_t}s (for example, using @code{gal_list_data_number}).
In Gnuastro's k-d tree implementation, there are thus no special structures to keep every tree node (which would take extra memory and would need to be moved around as the tree is being created).
Everything is done internally on the index of each point in the input dataset: the only thing that is flipped/sorted during tree creation is the index to the input row for any number of dimensions.
As a result, Gnuastro's k-d tree implementation is very memory and CPU efficient and its two output columns can directly be written into a standard table (without having to define any special binary format).

@deftypefun {gal_data_t *} gal_kdtree_create (gal_data_t @code{*coords_raw}, size_t @code{*root})
Create a k-d tree in a bottom-up manner (from leaves to the root).
This function returns two @code{gal_data_t}s connected as a list, see description above.
The first dataset contains the indexes of left and right nodes of the subtrees for each input node.
The index of the root node is written into the memory that @code{root} points to.
@code{coords_raw} is the list of the input points (one @code{gal_data_t} per dimension, see above).

For example, assume you have the simple set of points below (from the visualized example at the start of this section) in a plain-text file called @file{coordinates.txt}:

@example
$ cat coordinates.txt
5     4
2     3
7     2
9     6
4     7
8     1
@end example

With the program below, you can calculate the k-d tree and write it into a FITS file (while keeping the root index as a FITS keyword inside of it).

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/fits.h>
#include <gnuastro/table.h>
#include <gnuastro/kdtree.h>

int
main (void)
@{
  gal_data_t *input, *kdtree;
  char kdtreefile[]="kd-tree.fits";
  char inputfile[]="coordinates.txt";

  /* To write the root within the saved file. */
  size_t root;
  char *unit="index";
  char *keyname="KDTROOT";
  gal_fits_list_key_t *keylist=NULL;
  char *comment="k-d tree root index (counting from 0).";

  /* Read the input table. Note: this assumes the table only
   * contains your input point coordinates (one column for each
   * dimension). If it contains more columns with other properties
   * for each point, you can specify which columns to read by
   * name or number, see the documentation of 'gal_table_read'. */
  input=gal_table_read(inputfile, "1", NULL, NULL,
                       GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Construct a k-d tree. The index of root is stored in 'root'. */
  kdtree=gal_kdtree_create(input, &root);

  /* Write the k-d tree to a file and write root index and input
   * name as FITS keywords ('gal_table_write' frees 'keylist').*/
  gal_fits_key_list_title_add(&keylist, "k-d tree parameters", 0);
  gal_fits_key_write_filename("KDTIN", inputfile, &keylist, 0);
  gal_fits_key_list_add_end(&keylist, GAL_TYPE_SIZE_T, keyname, 0,
                            &root, 0, comment, 0, unit, 0);
  gal_table_write(kdtree, &keylist, NULL, GAL_TABLE_FORMAT_BFITS,
                  kdtreefile, "kdtree", 0);

  /* Clean up and return. */
  gal_list_data_free(input);
  gal_list_data_free(kdtree);
  return EXIT_SUCCESS;
@}
@end example

You can inspect the saved k-d tree FITS table with Gnuastro's @ref{Table} (first command below), and you can see the keywords containing the root index with @ref{Fits} (second command below):

@example
$ asttable kd-tree.fits
$ astfits kd-tree.fits -h1
@end example

@end deftypefun

@deftypefun size_t gal_kdtree_nearest_neighbour (gal_data_t @code{*coords_raw}, gal_data_t @code{*kdtree}, size_t @code{root}, double @code{*point}, double @code{*least_dist})
Returns the index of the nearest input point to the query point (@code{point}, assumed to be an array with the same number of elements as there are @code{gal_data_t}s in @code{coords_raw}).
The distance between the query point and its nearest neighbor is stored in the space that @code{least_dist} points to.
This search is efficient due to the constant checking for the presence of possible best points in other branches.
If it isn't possible for the other branch to have a better nearest neighbor, that branch is not searched.

As an example, let's use the k-d tree that was created in the example of @code{gal_kdtree_create} (above) and find the nearest row to a given coordinate (@code{point}).
This will be a very common scenario, especially in large and multi-dimensional datasets where the k-d tree creation can take a long time and you don't want to re-create the k-d tree every time.
In the @code{gal_kdtree_create} example output, we also wrote the k-d tree root index as a FITS keyword (@code{KDTROOT}), so after loading the two table data (input coordinates and k-d tree), we'll read the root from the FITS keyword.
This is a very simple example, but the scalability is clear: for example it is trivial to parallelize (see @ref{Library demo - multi-threaded operation}).

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/data.h>
#include <gnuastro/fits.h>
#include <gnuastro/table.h>
#include <gnuastro/kdtree.h>

int
main (void)
@{
  /* INPUT: desired point. */
  double point[2]=@{8.9,5.9@};

  /* Same as example in description of 'gal_kdtree_create'. */
  gal_data_t *input, *kdtree;
  char kdtreefile[]="kd-tree.fits";
  char inputfile[]="coordinates.txt";

  /* Processing variables of this function. */
  char kdtreehdu[]="1";
  double *in_x, *in_y, least_dist;
  size_t root, nkeys=1, nearest_index;
  gal_data_t *rkey, *keysll=gal_data_array_calloc(nkeys);

  /* Read the input coordinates, see comments in example of
   * 'gal_kdtree_create' for more. */
  input=gal_table_read(inputfile, "1", NULL, NULL,
                       GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Read the k-d tree contents (created before). */
  kdtree=gal_table_read(kdtreefile, "1", NULL, NULL,
                        GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Read the k-d tree root index from the header keyword.
   * See example in description of 'gal_fits_key_read_from_ptr'.*/
  keysll[0].name="KDTROOT";
  keysll[0].type=GAL_TYPE_SIZE_T;
  gal_fits_key_read(kdtreefile, kdtreehdu, keysll, 0, 0);
  keysll[0].name=NULL; /* Since we didn't allocate it. */
  rkey=gal_data_copy_to_new_type(&keysll[0], GAL_TYPE_SIZE_T);
  root=((size_t *)(rkey->array))[0];

  /* Find the nearest neighbour of the point. */
  nearest_index=gal_kdtree_nearest_neighbour(input, kdtree, root,
                                             point, &least_dist);

  /* Print the results. */
  in_x=input->array;
  in_y=input->next->array;
  printf("(%g, %g): nearest is (%g, %g), with a distance of %g\n",
         point[0], point[1], in_x[nearest_index],
         in_y[nearest_index], least_dist);

  /* Clean up and return. */
  gal_data_free(rkey);
  gal_list_data_free(input);
  gal_list_data_free(kdtree);
  gal_data_array_free(keysll, nkeys, 1);
  return EXIT_SUCCESS;
@}
@end example
@end deftypefun





@node Permutations, Matching, K-d tree, Gnuastro library
@subsection Permutations (@file{permutation.h})
@cindex permutation
Permutation is the technical name for re-ordering of values. The need for
permutations occurs a lot during (mainly low-level) processing. To do
permutation, you must provide two inputs: an array of values (that you want
to re-order in place) and a permutation array which contains the new index
of each element (let's call it @code{perm}). The diagram below shows the
input array before and after the re-ordering.

@example
permute:    AFTER[ i       ] = BEFORE[ perm[i] ]     i = 0 .. N-1
inverse:    AFTER[ perm[i] ] = BEFORE[ i       ]     i = 0 .. N-1
@end example

@cindex GNU Scientific Library
The functions here are a re-implementation of the GNU Scientific Library's
@code{gsl_permute} function. The reason we didn't use that function is
that it uses system-specific types (like @code{long} and @code{int}) which
can have different widths on different systems, and are thus not easily
convertible to Gnuastro's fixed-width types (see @ref{Numeric data
types}). GSL also has a separate function for each type, and heavily uses
macros to allow a @code{base} function to work on all the types, making
the code hard to read/understand. Hence, Gnuastro contains a re-write of
those steps in a new type-agnostic method: a single function that can work
on any type.

As described in GSL's source code and manual, this implementation comes
from Donald Knuth's @emph{Art of computer programming} book, in the
"Sorting and Searching" chapter of Volume 3 (3rd ed). Exercise 10 of
Section 5.2 defines the problem and in the answers, Knuth describes the
solution. So if you are interested, please have a look there for more.

We are in contact with the GSL developers and in the
future@footnote{Gnuastro's @url{http://savannah.gnu.org/task/?14497, Task
14497}. If this task is still ``postponed'' when you are reading this and
you are interested to help, your help would be very welcome. Both Gnuastro
and GSL developers are very busy, hence both would appreciate your help.}
we will submit these implementations to GSL. If they are finally
incorporated there, we will delete this section in future versions.

@deftypefun void gal_permutation_check (size_t @code{*permutation}, size_t @code{size})
Print how @code{permutation} will re-order an array that has @code{size}
elements, with one line printed for each element.
@end deftypefun

@deftypefun void gal_permutation_apply (gal_data_t @code{*input}, size_t @code{*permutation})
Apply @code{permutation} on the @code{input} dataset (can have any type),
see above for the definition of permutation.
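
For example, the minimal sketch below (with arbitrary input values)
applies the permutation @code{@{2, 0, 3, 1@}} to a small @code{float64}
dataset, following the @code{AFTER[i]=BEFORE[perm[i]]} definition above:

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/data.h>
#include <gnuastro/permutation.h>

int
main (void)
@{
  size_t i;
  double *d;
  size_t dsize=4;
  size_t perm[4]=@{2, 0, 3, 1@};
  gal_data_t *values=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1,
                                    &dsize, NULL, 0, -1, 1,
                                    NULL, NULL, NULL);

  /* Fill the dataset with recognizable values. */
  d=values->array;
  for(i=0;i<dsize;++i) d[i]=i+10;        /* 10, 11, 12, 13 */

  /* Apply the permutation: AFTER[i]=BEFORE[perm[i]]. */
  gal_permutation_apply(values, perm);

  /* Will print: 12 10 13 11 */
  for(i=0;i<dsize;++i) printf("%g ", d[i]);
  printf("\n");

  /* Clean up. */
  gal_data_free(values);
  return EXIT_SUCCESS;
@}
@end example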
@end deftypefun

@deftypefun void gal_permutation_apply_inverse (gal_data_t @code{*input}, size_t @code{*permutation})
Apply the inverse of @code{permutation} on the @code{input} dataset (can
have any type), see above for the definition of permutation.
@end deftypefun





@node Matching, Statistical operations, Permutations, Gnuastro library
@subsection Matching (@file{match.h})

@cindex Matching
@cindex Coordinate matching
Matching is often necessary when two measurements of the same points have been done using different instruments (or hardware), different software or different configurations of the same software.
In other words, you have two catalogs or tables and each has N columns containing the N-dimensional ``positional'' values of each point.
Each can have other columns too, for example one can have brightness measurements in one filter, while the other can have brightness measurements in another filter, as well as morphology measurements and more.

The matching functions here will use the positional columns to find the permutation necessary to apply to both tables.
This will enable you to match by the positions, then apply the permutation to the brightness or morphology columns in the example above.
The input and output data formats of the functions below are the same, and are described here before the actual functions.
Each function also has extra arguments due to the particular algorithm it uses for the matching.

The two inputs of the functions (@code{coord1} and @code{coord2}) must be @ref{List of gal_data_t}.
Each @code{gal_data_t} node in @code{coord1} or @code{coord2} should be a single dimensional dataset (column in a table) and all the nodes must have the same number of elements (rows).
In other words, each column can be visualized as having the coordinates of each point in its respective dimension.
The dimensionality of the coordinates is determined by the number of @code{gal_data_t} nodes in the two input lists (which must be equal).
The number of rows (the number of elements in each @code{gal_data_t}) in the columns of @code{coord1} and @code{coord2} can be different.
All these requirements are automatically satisfied if you use @code{gal_table_read} to read the two coordinate columns, see @ref{Table input output}.

@cindex Permutation
The functions below return a simply-linked list of three 1D datasets (see @ref{List of gal_data_t}), let's call the returned dataset @code{ret}.
The first two (@code{ret} and @code{ret->next}) are permutations.
In other words, the @code{array} elements of both have a type of @code{size_t}, see @ref{Permutations}.
The third node (@code{ret->next->next}) is the calculated distance for that match and its array has a type of @code{double}.
The number of matches will be put in the space pointed by the @code{nummatched} argument.
If there wasn't any match, this function will return @code{NULL}.

The two permutations can be applied to the rows of the two inputs: the first one (@code{ret}) should be applied to the rows of the table containing @code{coord1} and the second one (@code{ret->next}) to the table containing @code{coord2}.
After applying the returned permutations to the inputs, the top @code{nummatched} elements of both will match with each other.
The ordering of the rest of the elements is undefined (it depends on the matching function used).
The third node is the distances between the respective match (which may be elliptical distance, see discussion of ``aperture'' below).

The functions will not simply return the nearest neighbor as a match: the nearest neighbor may be too far away to be meaningful.
They will therefore check the distance to the nearest neighbor of each point and will only return a match when it is within an acceptable N-dimensional distance (or ``aperture'').
The matching aperture is defined by the @code{aperture} array that is an input argument to the functions.
If several points of one catalog lie within this aperture of a point in the other, the nearest is defined as the match.
In a 2D situation (where the input lists have two nodes), for the most generic case, it must have three elements: the major axis length, axis ratio and position angle (see @ref{Defining an ellipse and ellipsoid}).
If @code{aperture[1]==1}, the aperture will be a circle of radius @code{aperture[0]} and the third value won't be used.
When the aperture is an ellipse, distances between the points are also calculated in the respective elliptical distances (@mymath{r_{el}} in @ref{Defining an ellipse and ellipsoid}).




@deftypefun {gal_data_t *} gal_match_coordinates (gal_data_t @code{*coord1}, gal_data_t @code{*coord2}, double @code{*aperture}, int @code{sorted_by_first}, int @code{inplace}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{*nummatched})

Use a basic sort-based match to find the matching points of two input coordinates.
See the descriptions above on the format of the inputs and outputs.
To speed up the search, this function will sort the input coordinates by their first column (first axis).
If @emph{both} are already sorted by their first column, you can avoid the sorting step by giving a non-zero value to @code{sorted_by_first}.

When sorting is necessary and @code{inplace} is non-zero, the actual input columns will be sorted.
Otherwise, an internal copy of the inputs will be made, used (sorted) and later freed before returning.
Therefore, when @code{inplace==0}, inputs will remain untouched, but this function will take more time and memory.
If internal allocation is necessary and the space is larger than @code{minmapsize}, the space will not be allocated in RAM, but in a file; see the description of @option{--minmapsize} and @option{--quietmmap} in @ref{Processing options}.

@cartouche
@noindent
@strong{Output permutations ignore internal sorting}: the output permutations will correspond to the initial inputs.
Therefore, even when @code{inplace!=0} (and this function re-arranges the inputs in place), the output permutation will correspond to original (possibly non-sorted) inputs.

The reason for this is that you rarely want to permute the actual positional columns after the match.
Usually, you also have other columns (for example brightness or morphology measurements) and you want to find how they differ between the objects that match.
Once you have the permutations, they can be applied to those other columns (see @ref{Permutations}) and the higher-level processing can continue.
So if you don't need the coordinate columns for the rest of your analysis, it is better to set @code{inplace=1}.
@end cartouche
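
As a minimal sketch of the process above, the program below matches the 2D
coordinates of two hypothetical plain-text catalogs (@file{cat1.txt} and
@file{cat2.txt} are assumed file names; each is assumed to contain two
coordinate columns like the @file{coordinates.txt} example of
@code{gal_kdtree_create}) within a circular aperture of radius 0.5:

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/match.h>
#include <gnuastro/table.h>

int
main (void)
@{
  size_t nummatched;
  gal_data_t *coord1, *coord2, *match;
  double aperture[3]=@{0.5, 1, 0@};   /* Circle of radius 0.5. */

  /* Read the two coordinate tables. */
  coord1=gal_table_read("cat1.txt", "1", NULL, NULL,
                        GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);
  coord2=gal_table_read("cat2.txt", "1", NULL, NULL,
                        GAL_TABLE_SEARCH_NAME, 0, -1, 0, NULL);

  /* Find the matching rows (inputs not sorted, in-place
   * sorting is fine for this demo). */
  match=gal_match_coordinates(coord1, coord2, aperture, 0, 1,
                              -1, 0, &nummatched);

  /* The returned permutations (and distances) can now be used
   * as described above. */
  if(match) printf("%zu rows matched.\n", nummatched);
  else      printf("No match!\n");

  /* Clean up ('gal_list_data_free' is safe on NULL). */
  gal_list_data_free(match);
  gal_list_data_free(coord1);
  gal_list_data_free(coord2);
  return EXIT_SUCCESS;
@}
@end example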
@end deftypefun

@node Statistical operations, Binary datasets, Matching, Gnuastro library
@subsection Statistical operations (@file{statistics.h})

After reading a dataset into memory from a file, or fully simulating it
with another process, the most common operations are statistical: they let
you quantify different aspects of the data. The functions in this section
describe Gnuastro's current set of tools for this job. All these functions
can work natively on any numeric data type (see @ref{Numeric data types})
and can also work on tiles over a dataset. Hence the inputs and outputs
are in Gnuastro's @ref{Generic data container}.

@deffn  Macro GAL_STATISTICS_SIG_CLIP_MAX_CONVERGE
The maximum number of clips, when @mymath{\sigma}-clipping should be done
by convergence. If the clipping does not converge before making this many
clips, all @mymath{\sigma}-clipping outputs will be NaN.
@end deffn

@deffn  Macro GAL_STATISTICS_MODE_GOOD_SYM
The minimum acceptable symmetricity of the mode calculation. If the
symmetricity of the derived mode is less than this value, all the returned
values by @code{gal_statistics_mode} will have a value of NaN.
@end deffn

@deffn  Macro GAL_STATISTICS_BINS_INVALID
@deffnx Macro GAL_STATISTICS_BINS_REGULAR
@deffnx Macro GAL_STATISTICS_BINS_IRREGULAR
Macros used to identify the regularity of the bins when defining them.
@end deffn

@cindex Number
@deftypefun {gal_data_t *} gal_statistics_number (gal_data_t @code{*input})
Return a single-element dataset with type @code{size_t} which contains the
number of non-blank elements in @code{input}.
@end deftypefun

@cindex Minimum
@deftypefun {gal_data_t *} gal_statistics_minimum (gal_data_t @code{*input})
Return a single-element dataset containing the minimum non-blank value in
@code{input}. The numerical datatype of the output is the same as
@code{input}.
@end deftypefun

@cindex Maximum
@deftypefun {gal_data_t *} gal_statistics_maximum (gal_data_t @code{*input})
Return a single-element dataset containing the maximum non-blank value in
@code{input}. The numerical datatype of the output is the same as
@code{input}.
@end deftypefun

@cindex Sum
@deftypefun {gal_data_t *} gal_statistics_sum (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset
containing the sum of the non-blank values in @code{input}.
@end deftypefun

@cindex Mean
@cindex Average
@deftypefun {gal_data_t *} gal_statistics_mean (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset
containing the mean of the non-blank values in @code{input}.
@end deftypefun

@cindex Standard deviation
@deftypefun {gal_data_t *} gal_statistics_std (gal_data_t @code{*input})
Return a single-element (@code{double} or @code{float64}) dataset
containing the standard deviation of the non-blank values in @code{input}.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_mean_std (gal_data_t @code{*input})
Return a two-element (@code{double} or @code{float64}) dataset containing
the mean and standard deviation of the non-blank values in
@code{input}. The first element of the returned dataset is the mean and the
second is the standard deviation.

This function will calculate both values in one pass over the
dataset. Hence when both the mean and standard deviation of a dataset are
necessary, this function is much more efficient than calling
@code{gal_statistics_mean} and @code{gal_statistics_std} separately.
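
For example, the minimal sketch below allocates a dataset with
@code{gal_data_alloc}, fills it with arbitrary demo values, and prints its
mean and standard deviation:

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/data.h>
#include <gnuastro/statistics.h>

int
main (void)
@{
  size_t i;
  double *arr, *ms;
  size_t dsize=100;
  gal_data_t *input, *meanstd;

  /* Allocate the dataset and fill it with demo values. */
  input=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &dsize, NULL,
                       0, -1, 1, NULL, NULL, NULL);
  arr=input->array;
  for(i=0;i<dsize;++i) arr[i]=i%10;

  /* Calculate the mean and standard deviation in one pass. */
  meanstd=gal_statistics_mean_std(input);
  ms=meanstd->array;
  printf("mean: %g, std: %g\n", ms[0], ms[1]);

  /* Clean up. */
  gal_data_free(input);
  gal_data_free(meanstd);
  return EXIT_SUCCESS;
@}
@end example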
@end deftypefun

@cindex Median
@deftypefun {gal_data_t *} gal_statistics_median (gal_data_t @code{*input}, int @code{inplace})
Return a single-element dataset containing the median of the non-blank
values in @code{input}. The numerical datatype of the output is the same as
@code{input}.

Calculating the median involves sorting the dataset and removing blank
values; for better performance (and less memory usage), you can give a
non-zero value to the @code{inplace} argument. In this case, the sorting
and removal of blank elements will be done directly on the input
dataset. However, after this function the original dataset may have changed
(if it wasn't already sorted or had blank values).
@end deftypefun

@cindex Quantile
@deftypefun size_t gal_statistics_quantile_index (size_t @code{size}, double @code{quantile})
Return the index of the element that has a quantile of @code{quantile}
assuming the dataset has @code{size} elements.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_quantile (gal_data_t @code{*input}, double @code{quantile}, int @code{inplace})
Return a single-element dataset containing the value at quantile
@code{quantile} of the non-blank values in @code{input}. The numerical
datatype of the output is the same as @code{input}. See
@code{gal_statistics_median} for a description of @code{inplace}.
@end deftypefun

@deftypefun size_t gal_statistics_quantile_function_index (gal_data_t @code{*input}, gal_data_t @code{*value}, int @code{inplace})
Return the index of the quantile function (inverse quantile) of
@code{input} at @code{value}. In other words, this function will return the
index of the nearest element (in the sorted, non-blank @code{input}) to
@code{value}. If the value is outside the range of the input, then this
function will return @code{GAL_BLANK_SIZE_T}.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_quantile_function (gal_data_t @code{*input}, gal_data_t @code{*value}, int @code{inplace})

Return a single-element dataset containing the quantile function of the non-blank values in @code{input} at @code{value} (a single-element dataset).
The numerical data type of the returned dataset is @code{float64} (or @code{double}).
In other words, this function will return the quantile of @code{value} in @code{input}.
@code{value} has to have the same type as @code{input}.
See @code{gal_statistics_median} for a description of @code{inplace}.

When all elements are blank, the returned value will be NaN.
If the value is smaller than the input's smallest element, the returned value will be negative infinity.
If the value is larger than the input's largest element, then the returned value will be positive infinity.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_unique (gal_data_t @code{*input}, int @code{inplace})
Return a 1D dataset with the same numeric data type as the input, but only containing its unique elements and without any (possible) blank/NaN elements.
Note that the input's number of dimensions is irrelevant for this function.
If @code{inplace} is not zero, then the unique values will over-write the allocated space of the input, otherwise a new space will be allocated and the input will not be touched.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_mode (gal_data_t @code{*input}, float @code{mirrordist}, int @code{inplace})
Return a four-element (@code{double} or @code{float64}) dataset that
contains the mode of the @code{input} distribution. This function
implements the non-parametric algorithm to find the mode that is described
in Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and
Ichikawa [2015]}.

In short, it compares the actual distribution and its ``mirror
distribution'' to find the mode. In order to be efficient, you can
determine how far the comparison goes away from the mirror through the
@code{mirrordist} parameter (think of it as a multiple of sigma/error). See
@code{gal_statistics_median} for a description of @code{inplace}.

The output array has the following elements (in the given order, note that
counting in C starts from 0).
@example
array[0]: mode.
array[1]: mode quantile.
array[2]: symmetricity.
array[3]: value at the end of symmetricity.
@end example
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_mode_mirror_plots (gal_data_t @code{*input}, gal_data_t @code{*value}, size_t @code{numbins}, int @code{inplace}, double @code{*mirror_val})
Make a mirrored histogram and cumulative frequency plot (with
@code{numbins}) with the mirror distribution of the @code{input} having a
value in @code{value}. If all the input elements are blank, or the mirror
value is outside the range of the input, this function will return a
@code{NULL} pointer.

The output is a list of data structures (see @ref{List of gal_data_t}): the
first is the bins with one bin at the mirror point, the second is the
histogram with a maximum of one and the third is the cumulative frequency
plot (with a maximum of one).
@end deftypefun


@deftypefun int gal_statistics_is_sorted (gal_data_t @code{*input}, int @code{updateflags})
Return @code{0} if the input is not sorted. If it is sorted, this function
will return @code{1} if it is increasing and @code{2} if it is decreasing.
This function will abort with an error if @code{input} has
zero elements, and will return @code{1} (sorted, increasing) when there is
only one element. This function will only look into the dataset if the
@code{GAL_DATA_FLAG_SORT_CH} bit of @code{input->flag} is @code{0}, see
@ref{Generic data container}.

When the flags don't indicate a previous check @emph{and}
@code{updateflags} is non-zero, this function will set the flags
appropriately to avoid having to re-check the dataset in future calls (this
can be very useful when repeated checks are necessary). When
@code{updateflags==0}, this function has no side-effects on the dataset: it
will not toggle the flags.

If you want to re-check a dataset with the sort-check flag already
set (for example if you have made changes to it), then explicitly set the
@code{GAL_DATA_FLAG_SORT_CH} bit to zero before calling this function. When
there are no other flags, you can simply set the flags to zero (with
@code{input->flag=0}), otherwise you can use this expression:

@example
input->flag &= ~GAL_DATA_FLAG_SORT_CH;
@end example
@end deftypefun

@deftypefun void gal_statistics_sort_increasing (gal_data_t @code{*input})
Sort the input dataset (in place) in an increasing order and toggle the
sort-related bit flags accordingly.
@end deftypefun

@deftypefun void gal_statistics_sort_decreasing (gal_data_t @code{*input})
Sort the input dataset (in place) in a decreasing order and toggle the
sort-related bit flags accordingly.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_no_blank_sorted (gal_data_t @code{*input}, int @code{inplace})
Remove all the blanks and sort the input dataset. If @code{inplace} is
non-zero this will happen on the input dataset (in the allocated space of
the input dataset). However, if @code{inplace} is zero, this function will
allocate a new copy of the dataset and work on that. Therefore when
@code{inplace==0}, the input dataset will not be modified.

This function uses the bit flags of the input, so if you have modified the
dataset, set @code{input->flag=0} before calling this function. Also note
that @code{inplace} is only for the dataset elements. Therefore even when
@code{inplace==0}, if the input is already sorted @emph{and} has no blank
values, then the flags will be updated to show this.

If all the elements were blank, then the returned dataset's @code{size}
will be zero. This is thus a good parameter to check after calling this
function, to see if there actually were any non-blank elements in the
input, and to take the appropriate measure. This can help avoid strange
bugs in later steps. The flags of a zero-sized returned dataset will
indicate that it has no blanks and is sorted in an increasing order.
Although having blank values or being sorted is not defined for a
zero-element dataset, the flags have to be set anyway, and it is up to the
caller to choose what to do with a zero-element dataset.
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_regular_bins (gal_data_t @code{*input}, gal_data_t @code{*inrange}, size_t @code{numbins}, double @code{onebinstart})
Generate an array of regularly spaced elements as a 1D array (column) of
type @code{double} (i.e., @code{float64}; it has to be double to account
for small differences on the bin edges). The input arguments are described
below.

@table @code
@item input
The dataset you want to apply the bins to. This is only necessary if the
range argument is not complete, see below. If @code{inrange} has all the
necessary information, you can pass a @code{NULL} pointer for this.

@item inrange
This dataset keeps the desired range along each dimension of the input data
structure, it has to be in @code{float} (i.e., @code{float32}) type.

@itemize
@item
If you want the full range of the dataset (in all dimensions), then just
set @code{inrange} to @code{NULL} and the range will be specified from the
minimum and maximum value of the dataset (@code{input} cannot be
@code{NULL} in this case).

@item
If there is one element for each dimension in range, then it is viewed as a
quantile (Q), and the range will be: `Q to 1-Q'.

@item
If there are two elements for each dimension in range, then they are
assumed to be your desired minimum and maximum values. When either of the
two are NaN, the minimum and maximum will be calculated for it.
@end itemize

@item numbins
The number of bins: must be larger than 0.

@item onebinstart
A desired value for the edge of one bin. Note that with this option, the
bins won't start and end exactly on the given range values; they will be
slightly shifted to accommodate this request.
@end table
@end deftypefun


@deftypefun {gal_data_t *} gal_statistics_histogram (gal_data_t @code{*input}, gal_data_t @code{*bins}, int @code{normalize}, int @code{maxone})
@cindex Histogram
Make a histogram of all the elements in the given dataset with bin values that are defined in the @code{bins} structure (see @code{gal_statistics_regular_bins}; the bins currently have to be equally spaced).
The bins therefore have to be defined before calling this function.

Let's write the center of the @mymath{i}th element of the bin array as @mymath{b_i}, and the fixed half-bin width as @mymath{h}.
Then element @mymath{j} of the input array (@mymath{in_j}) will be counted in @mymath{b_i} if @mymath{(b_i-h) \le in_j < (b_i+h)}.
However, if @mymath{in_j} is somewhere in the last bin, the condition changes to @mymath{(b_i-h) \le in_j \le (b_i+h)}.

If @code{normalize!=0}, the histogram will be ``normalized'' such that the sum of the counts column will be one.
In other words, all the counts in every bin will be divided by the total number of counts.
If @code{maxone!=0}, the histogram's maximum count will be 1.
In other words, the counts in every bin will be divided by the value of the maximum.
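
As a rough sketch of the steps above, the program below builds 10 regular
bins over the full range of an arbitrary demo dataset and prints the
histogram. It assumes that a NaN value for @code{onebinstart} of
@code{gal_statistics_regular_bins} means that no special bin edge is
requested; for easy printing, the counts are also converted to
@code{float64} (only for this demonstration):

@example
#include <stdio.h>
#include <stdlib.h>
#include <math.h>             /* For NAN. */
#include <gnuastro/data.h>
#include <gnuastro/statistics.h>

int
main (void)
@{
  size_t i;
  double *arr, *b, *h;
  size_t dsize=1000, numbins=10;
  gal_data_t *input, *bins, *hist;

  /* Allocate and fill the input with demo values. */
  input=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &dsize, NULL,
                       0, -1, 1, NULL, NULL, NULL);
  arr=input->array;
  for(i=0;i<dsize;++i) arr[i]=i%10;

  /* Build the bins, then make the (un-normalized) histogram. */
  bins=gal_statistics_regular_bins(input, NULL, numbins, NAN);
  hist=gal_statistics_histogram(input, bins, 0, 0);

  /* Convert the counts to float64 (only for easy printing; the
   * internal counting type may differ). */
  hist=gal_data_copy_to_new_type_free(hist, GAL_TYPE_FLOAT64);

  /* Print the bin centers and counts. */
  b=bins->array;  h=hist->array;
  for(i=0;i<numbins;++i) printf("%-10g%g\n", b[i], h[i]);

  /* Clean up. */
  gal_data_free(input);
  gal_data_free(bins);
  gal_data_free(hist);
  return EXIT_SUCCESS;
@}
@end example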
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_histogram2d (gal_data_t @code{*input}, gal_data_t @code{*bins})
@cindex Histogram, 2D
@cindex 2D histogram
This function is very similar to @code{gal_statistics_histogram}, but will build a 2D histogram (count how many of the elements of @code{input} have a value within a given 2D box).
The bins comprising the first dimension of the 2D box are defined by @code{bins}.
The bins of the second dimension are defined by @code{bins->next} (@code{bins} is a @ref{List of gal_data_t}).
Both @code{bins} and @code{bins->next} can be created with @code{gal_statistics_regular_bins}.

This function returns a list of @code{gal_data_t} with three nodes/columns, so you can directly write them into a table (see @ref{Table input output}).
Assuming @code{bins} has @mymath{N1} bins and @code{bins->next} has @mymath{N2} bins, each node/column of the returned output is a 1D array with @mymath{N1\times N2} elements.
The first and second columns are the center of the 2D bin along the first and second dimensions and have a @code{double} data type.
The third column is the 2D histogram (the number of input elements that have a value within that 2D bin) and has a @code{uint32} data type (see @ref{Numeric data types}).
@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_cfp (gal_data_t @code{*input}, gal_data_t @code{*bins}, int @code{normalize})
Make a cumulative frequency plot (CFP) of all the elements in @code{input}
with bin values that are defined in the @code{bins} structure (see
@code{gal_statistics_regular_bins}).

The CFP is built from the histogram: in each bin, the value is the sum of all previous bins in the histogram.
Thus, if you have already calculated the histogram before calling this function, you can pass it onto this function as the data structure in @code{bins->next} (see @code{List of gal_data_t}).
If @code{bins->next!=NULL}, then it is assumed to be the histogram.
If it is @code{NULL}, then the histogram will be calculated internally and freed after the job is finished.

When a histogram is given and it is normalized, the CFP will also be normalized (even if the normalized flag is not set here): note that a normalized CFP's maximum value is 1.
@end deftypefun


@deftypefun {gal_data_t *} gal_statistics_sigma_clip (gal_data_t @code{*input}, float @code{multip}, float @code{param}, int @code{inplace}, int @code{quiet})
Apply @mymath{\sigma}-clipping on a given dataset and return a dataset that
contains the results. For a description of @mymath{\sigma}-clipping see
@ref{Sigma clipping}. @code{multip} is the multiple of the standard
deviation (or @mymath{\sigma}) that is used to define outliers in each
round of clipping.

The role of @code{param} is determined based on its value. If @code{param}
is larger than @code{1} (one), it must be an integer and will be
interpreted as the number of clips to do. If it is less than @code{1} (one),
it is interpreted as the tolerance level to stop the iteration.

The returned dataset (let's call it @code{out}) contains a four-element
array with type @code{GAL_TYPE_FLOAT32}. The final number of clips is
stored in @code{out->status}.
@example
float *array=out->array;
array[0]: Number of points used.
array[1]: Median.
array[2]: Mean.
array[3]: Standard deviation.
@end example

If the @mymath{\sigma}-clipping doesn't converge or all input elements are
blank, then this function will return NaN values for all the elements
above.
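
For example, the minimal sketch below builds a dataset of mostly uniform
demo values with one strong outlier, and prints its
3@mymath{\sigma}-clipped measurements (stopping at a tolerance of 0.1):

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/data.h>
#include <gnuastro/statistics.h>

int
main (void)
@{
  size_t i;
  float *arr, *s;
  size_t dsize=100;
  gal_data_t *input, *sclip;

  /* Mostly uniform values with one strong outlier. */
  input=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &dsize, NULL,
                       0, -1, 1, NULL, NULL, NULL);
  arr=input->array;
  for(i=0;i<dsize;++i) arr[i]=i%5;
  arr[50]=1000;

  /* 3-sigma clipping, stopping at a tolerance of 0.1. */
  sclip=gal_statistics_sigma_clip(input, 3, 0.1, 0, 1);
  s=sclip->array;
  printf("number: %g\nmedian: %g\nmean: %g\nstd: %g\n",
         s[0], s[1], s[2], s[3]);

  /* Clean up. */
  gal_data_free(input);
  gal_data_free(sclip);
  return EXIT_SUCCESS;
@}
@end example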
@end deftypefun


@deftypefun {gal_data_t *} gal_statistics_outlier_bydistance (int @code{pos1_neg0}, gal_data_t @code{*input}, size_t @code{window_size}, float @code{sigma}, float @code{sigclip_multip}, float @code{sigclip_param}, int @code{inplace}, int @code{quiet})

Find the first positive outlier (if @code{pos1_neg0!=0}) in the @code{input} distribution.
When @code{pos1_neg0==0}, the same algorithm is applied in the reverse direction (toward the start of the sorted dataset).
The returned dataset contains a single element: the detected outlier.
It is one of the dataset's elements, in the same type as the input.
If the process fails for any reason (for example no outlier was found), a @code{NULL} pointer will be returned.

All (possibly existing) blank elements are first removed from the input dataset, then it is sorted.
A sliding window of @code{window_size} elements is then parsed over the dataset, starting from the @code{window_size}-th element and moving in the direction of increasing values.
This window is used as a reference.
The first element whose distance from the previous (sorted) element is @code{sigma} units away from the distribution of distances in its window is considered an outlier and returned by this function.

Formally, assume there are @mymath{N} non-blank elements.
They are first sorted.
Searching for the outlier starts on element @mymath{W}.
Let's take @mymath{v_i} to be the @mymath{i}-th element of the sorted input (with no blank values), and @mymath{m} and @mymath{\sigma} to be the @mymath{\sigma}-clipped median and standard deviation of the distances of the previous @mymath{W} elements (not including @mymath{v_i}).
Denoting the value given to @code{sigma} by @mymath{s}, the @mymath{i}-th element is considered an outlier when the condition below is true.

@dispmath{{(v_i-v_{i-1})-m\over \sigma}>s}

The @code{sigclip_multip} and @code{sigclip_param} arguments specify the properties of the @mymath{\sigma}-clipping (see @ref{Sigma clipping} for more).
You see that by this definition, the outlier cannot be any of the lower-half elements.
The advantage of this algorithm compared to @mymath{\sigma}-clipping is that it only looks backwards (in the sorted array) and parses it in one direction.

If @code{inplace!=0}, the removing of blank elements and sorting will be done within the input dataset's allocated space.
Otherwise, this function will internally allocate (and later free) the necessary space to keep the intermediate space that this process requires.

If @code{quiet==0}, this function will report the parameters every time it moves the window, as a separate line with several columns.
The first column is the value, the second (in square brackets) is the sorted index, and the third is the distance of this element from the previous one.
The fourth and fifth (in parentheses) are the median and standard deviation of the @mymath{\sigma}-clipped distribution within the window, and the last column is the difference between the third and fourth, divided by the fifth.

@end deftypefun

@deftypefun {gal_data_t *} gal_statistics_outlier_flat_cfp (gal_data_t @code{*input}, size_t @code{numprev}, float @code{sigclip_multip}, float @code{sigclip_param}, float @code{thresh}, size_t @code{numcontig}, int @code{inplace}, int @code{quiet}, size_t @code{*index})

Return the first element in the given dataset where the cumulative
frequency plot first becomes significantly flat for a sufficient number of
elements. The returned dataset only has one element (with the same type as
the input). If @code{index!=NULL}, the index (counting from zero, after
sorting the dataset and removing any blanks) is written in the space that
@code{index} points to. If no sufficiently flat portion is found, the
returned pointer will be @code{NULL}.

@cindex Sigma-clipping
The flatness on the cumulative frequency plot is defined like this (see
@ref{Histogram and Cumulative Frequency Plot}): on the sorted dataset, for
every point (@mymath{a_i}), we calculate @mymath{d_i=a_{i+2}-a_{i-2}}. This
is done on the first @mymath{N} elements (value of @code{numprev}). After
element @mymath{a_{N+2}}, we start estimating the flatness as follows: for
every element we use the @mymath{N} @mymath{d_i} measurements before it as
the reference. Let's call this set @mymath{D_i} for element @mymath{i}. The
@mymath{\sigma}-clipped median (@mymath{m}) and standard deviation
(@mymath{s}) of @mymath{D_i} are then calculated. The
@mymath{\sigma}-clipping can be configured with the two
@code{sigclip_param} and @code{sigclip_multip} arguments.

Taking @mymath{t} as the significance threshold (value to @code{thresh}), a
point is considered flat when @mymath{d_i>m+t\times s}. But a single point
satisfying this condition will probably just be due to noise. To make a
more robust estimate, this significance/condition has to hold for
@code{numcontig} contiguous elements after @mymath{a_i}. When this is
satisfied, @mymath{a_i} is returned as the point where the distribution's
cumulative frequency plot becomes flat.

To get a good estimate of @mymath{m} and @mymath{s}, it is thus recommended
to set @code{numprev} as large as possible. However, be careful not to set
it too high: the checks in the paragraph above are not done on the first
@code{numprev} elements and this function assumes the flatness occurs
after them. Also, be sure that the value to @code{numcontig} is much less
than @code{numprev}, otherwise @mymath{\sigma}-clipping may not be able to
remove the immediate outliers in @mymath{D_i} near the boundary of the flat
region.

When @code{quiet==0}, the basic measurements done on each element are
printed on the command-line (good for finding the best parameters). When
@code{inplace!=0}, the sorting and removal of blank elements is done on the
input dataset, so the input may be altered after this function.
@end deftypefun


@node Binary datasets, Labeled datasets, Statistical operations, Gnuastro library
@subsection Binary datasets (@file{binary.h})

@cindex Thresholding
@cindex Binary datasets
@cindex Dataset: binary
Binary datasets only have two (usable) values: 0 (also known as background)
or 1 (also known as foreground). They are created after some binary
classification is applied to the dataset. The most common is thresholding:
for example in an image, pixels with a value above the threshold are given
a value of 1 and those with a value less than the threshold are assigned a
value of 0.

@cindex Connectivity
@cindex Immediate neighbors
@cindex Neighbors, immediate
Since there are only two values, in the processing of binary images you are
usually concerned with the positioning of an element and its vicinity
(neighbors). When a dataset has more than one dimension, multiple classes
of immediate neighbors (that are touching the element) can be defined for
each data-element. To separate these different classes of immediate
neighbors, we define @emph{connectivity}.

The classification is done by the distance from the element's center to the
neighbor's center. The nearest immediate neighbors have a connectivity of
1, the second-nearest class of neighbors has a connectivity of 2, and so
on. In total, the largest possible connectivity for data with @code{ndim}
dimensions is @code{ndim}. For example, in a 2D dataset, 4-connected
neighbors (that share an edge and have a distance of 1 pixel) have a
connectivity of 1. The other 4 neighbors that only share a vertex (with a
distance of @mymath{\sqrt{2}} pixels) have a connectivity of
2. Conventionally, the class of connectivity-2 neighbors also includes the
connectivity-1 neighbors, so for example we call them 8-connected neighbors
in 2D datasets.

Ideally, one bit is sufficient for each element of a binary
dataset. However, CPUs are not designed to work on individual bits; the
smallest unit of memory addresses is a byte (containing 8 bits on modern
CPUs). Therefore, in Gnuastro, the type used for binary datasets is
@code{uint8_t} (see @ref{Numeric data types}). Although it does take
8 times more memory, this choice offers much better performance and
some extra (useful) features.

The advantage of using a full byte for each element of a binary dataset is
that you can also have other values (that will be ignored in the
processing). One such common ``other'' value in real datasets is a blank
value (to mark regions that should not be processed because there is no
data). The constant @code{GAL_BLANK_UINT8} value must be used in these
cases (see @ref{Library blank values}). Another is some temporary value(s)
that can be given to a processed pixel to avoid having another copy of the
dataset as in @code{GAL_BINARY_TMP_VALUE} that is described below.

@deffn Macro GAL_BINARY_TMP_VALUE
The functions described below work on a @code{uint8_t} type dataset with
values of 1 or 0 (no other pixel will be touched). However, in some cases,
it is necessary to put temporary values in each element during the
processing of the functions. This temporary value has a special meaning for
the operation and will be operated on. So if your input datasets have
values other than 0 and 1 that you don't want these functions to work on,
be sure they are not equal to this macro's value. Note that this value is
also different from @code{GAL_BLANK_UINT8}, so your input datasets may also
contain blank elements.
@end deffn


@deftypefun {gal_data_t *} gal_binary_erode (gal_data_t @code{*input}, size_t @code{num}, int @code{connectivity}, int @code{inplace})
Do @code{num} erosions on the @code{connectivity}-connected neighbors of
@code{input} (see above for the definition of connectivity).

If @code{inplace} is non-zero @emph{and} the input's type is
@code{GAL_TYPE_UINT8}, then the erosion will be done within the input
dataset and the returned pointer will be @code{input}. Otherwise,
@code{input} is copied (and converted if necessary) to
@code{GAL_TYPE_UINT8} and erosion will be done on this new dataset which
will also be returned. This function will only work on the elements with a
value of 1 or 0. It will leave all the rest unchanged.

@cindex Erosion
@cindex Mathematical morphology
Erosion (inverse of dilation) is an operation in mathematical morphology
where each foreground pixel that is touching a background pixel is flipped
(changed to background). The @code{connectivity} value determines the
definition of ``touching''. Erosion will thus decrease the area of the
foreground regions by one layer of pixels.
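
For example, the minimal sketch below erodes a 3@mymath{\times}3
foreground square in the center of a 5@mymath{\times}5 binary image once,
with connectivity 1; only the central pixel survives:

@example
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <gnuastro/data.h>
#include <gnuastro/binary.h>

int
main (void)
@{
  uint8_t *b;
  size_t i, j;
  size_t dsize[2]=@{5,5@};
  gal_data_t *binary=gal_data_alloc(NULL, GAL_TYPE_UINT8, 2,
                                    dsize, NULL, 1, -1, 1,
                                    NULL, NULL, NULL);

  /* Set a 3x3 foreground square in the center. */
  b=binary->array;
  for(i=1;i<4;++i) for(j=1;j<4;++j) b[i*5+j]=1;

  /* One erosion with connectivity 1 (done in place). */
  gal_binary_erode(binary, 1, 1, 1);

  /* Print the eroded image. */
  for(i=0;i<5;++i)
    @{
      for(j=0;j<5;++j) printf("%u ", b[i*5+j]);
      printf("\n");
    @}

  /* Clean up. */
  gal_data_free(binary);
  return EXIT_SUCCESS;
@}
@end example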
@end deftypefun

@deftypefun {gal_data_t *} gal_binary_dilate (gal_data_t @code{*input}, size_t @code{num}, int @code{connectivity}, int @code{inplace})
Do @code{num} dilations on the @code{connectivity}-connected neighbors of
@code{input} (see above for the definition of connectivity). For more on
@code{inplace} and the output, see @code{gal_binary_erode}.

@cindex Dilation
Dilation (inverse of erosion) is an operation in mathematical morphology
where each background pixel that is touching a foreground pixel is flipped
(changed to foreground). The @code{connectivity} value determines the
definition of ``touching''. Dilation will thus increase the area of the
foreground regions by one layer of pixels.
@end deftypefun

@deftypefun {gal_data_t *} gal_binary_open (gal_data_t @code{*input}, size_t @code{num}, int @code{connectivity}, int @code{inplace})
Do @code{num} openings on the @code{connectivity}-connected neighbors of
@code{input} (see above for the definition of connectivity). For more on
@code{inplace} and the output, see @code{gal_binary_erode}.

@cindex Opening (Mathematical morphology)
Opening is an operation in mathematical morphology which is defined as
erosion followed by dilation (see above for the definitions of erosion and
dilation). Opening will thus remove the outer structure of the
foreground. In this implementation, @code{num} erosions are going to be
applied on the dataset, then @code{num} dilations.
@end deftypefun


@deftypefun size_t gal_binary_connected_components (gal_data_t @code{*binary}, gal_data_t @code{**out}, int @code{connectivity})
@cindex Breadth first search
@cindex Connected component labeling
Return the number of connected components in @code{binary} through the
breadth first search algorithm (finding all pixels belonging to one
component before going on to the next). Connection between two pixels is
defined based on the value to @code{connectivity}. @code{out} is a dataset
with the same size as @code{binary} with @code{GAL_TYPE_INT32} type. Every
pixel in @code{out} will have the label of the connected component it
belongs to. The labeling of connected components starts from 1, so a label
of zero is given to the input's background pixels.

When @code{*out!=NULL} (its space is already allocated), it will be cleared
(to zero) at the start of this function. Otherwise, when @code{*out==NULL},
the necessary dataset to keep the output will be allocated by this
function.

@code{binary} must have a type of @code{GAL_TYPE_UINT8}, otherwise this
function will abort with an error. Other than blank pixels (with a value of
@code{GAL_BLANK_UINT8} defined in @ref{Library blank values}), all other
non-zero pixels in @code{binary} will be considered as foreground (and will
be labeled). Blank pixels in the input will also be blank in the output.
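
As a small sketch, the program below labels the 4-connected foreground
regions of a 3@mymath{\times}3 binary image (the demo values are
arbitrary):

@example
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <gnuastro/data.h>
#include <gnuastro/binary.h>

int
main (void)
@{
  int32_t *l;
  uint8_t *b;
  size_t i, numlabs;
  size_t dsize[2]=@{3,3@};
  gal_data_t *out=NULL;
  uint8_t image[9]=@{1, 0, 1,
                    1, 0, 0,
                    0, 0, 1@};
  gal_data_t *binary=gal_data_alloc(NULL, GAL_TYPE_UINT8, 2,
                                    dsize, NULL, 0, -1, 1,
                                    NULL, NULL, NULL);

  /* Copy the raw values into the allocated dataset. */
  b=binary->array;
  for(i=0;i<9;++i) b[i]=image[i];

  /* Label the connected components (connectivity 1). */
  numlabs=gal_binary_connected_components(binary, &out, 1);

  /* Print the number of components and the labeled image. */
  l=out->array;
  printf("%zu components:\n", numlabs);
  for(i=0;i<9;++i) printf("%d%s", l[i], i%3==2 ? "\n" : " ");

  /* Clean up. */
  gal_data_free(binary);
  gal_data_free(out);
  return EXIT_SUCCESS;
@}
@end example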
@end deftypefun

@deftypefun {gal_data_t *} gal_binary_connected_indexs(gal_data_t @code{*binary}, int @code{connectivity})
Build a @code{gal_data_t} linked list, where each node of the list contains an array with the indexs of one connected region.
The arrays of different nodes can therefore have different sizes.
Note that the indexs will only be calculated on the pixels with a value of 1, and that internally, this function will temporarily change those values to 2 (and return them back to 1 in the end).
@end deftypefun

@deftypefun {gal_data_t *} gal_binary_connected_adjacency_matrix (gal_data_t @code{*adjacency}, size_t @code{*numconnected})
@cindex Adjacency matrix
@cindex Matrix, adjacency
Find the number of connected labels and new labels based on an adjacency matrix, which must be a square binary array (type @code{GAL_TYPE_UINT8}).
The returned dataset is a list of new labels for each old label.
In other words, this function will find the objects that are connected (possibly through a third object) and, in the output array, the respective elements for all connected input labels are going to have the same value.
The total number of connected labels is put into the space that @code{numconnected} points to.

An adjacency matrix defines connection between two labels.
For example, let's assume we have 5 labels and we know that labels 1 and 5 are connected to label 3, but are not connected with each other.
Also, labels 2 and 4 are not touching any other label.
So in total we have 3 final labels: one combined object (merged from labels 1, 3, and 5) and the initial labels 2 and 4.
The input adjacency matrix would look like this (note the extra row and column for a label 0 which is ignored):

@example
            INPUT                             OUTPUT
            =====                             ======
          in_lab  1  2  3  4  5   |
                                  |       numconnected = 3
               0  0  0  0  0  0   |
in_lab 1 -->   0  0  0  1  0  0   |
in_lab 2 -->   0  0  0  0  0  0   |  Returned: new labels for the
in_lab 3 -->   0  1  0  0  0  1   |       5 initial objects
in_lab 4 -->   0  0  0  0  0  0   |   | 0 | 1 | 2 | 1 | 3 | 1 |
in_lab 5 -->   0  0  0  1  0  0   |
@end example

Although the adjacency matrix defined here is symmetric (so one side of the diagonal would be enough in principle), this function currently assumes that it is filled on both sides of the diagonal.
@end deftypefun

@deftypefun {gal_data_t *} gal_binary_connected_adjacency_list (gal_list_sizet_t @code{**listarr}, size_t @code{number}, size_t @code{minmapsize}, int @code{quietmmap}, size_t @code{*numconnected})
@cindex RAM
Find the number of connected labels and new labels based on an adjacency list.
The output of this function is identical to that of @code{gal_binary_connected_adjacency_matrix}.
But the major difference is that it uses a list of connected labels to each label instead of a square adjacency matrix.
This is done because when the number of labels becomes very large (for example on the scale of 100,000), the adjacency matrix can consume more than 10GB of RAM!

The input list has the following format: it is an array of pointers to @code{gal_list_sizet_t *} (or @code{gal_list_sizet_t **}).
The array has @code{number} elements and each @code{listarr[i]} is a linked list of @code{gal_list_sizet_t *}.
As a demonstration, the input of the same example in @code{gal_binary_connected_adjacency_matrix} would look like below and the output of this function will be identical to there.

@example
listarr[0] = NULL
listarr[1] = 3
listarr[2] = NULL
listarr[3] = 1 -> 5
listarr[4] = NULL
listarr[5] = 3
@end example

From this example, it is already clear that this method will consume far less memory.
But because it needs to parse lists (and cannot easily jump between array elements), it can be slower.
However, in scenarios where there are too many objects (such that the adjacency matrix would exceed the whole system's RAM and swap), this option is a good alternative and the drop in processing speed is worth it to get the job done.

Similar to @code{gal_binary_connected_adjacency_matrix}, this function will write the final number of connected labels in @code{numconnected}.
But since it takes no @code{gal_data_t *} argument (where it can inherit the @code{minmapsize} and @code{quietmmap} parameters), it also needs these as input.
For more on @code{minmapsize} and @code{quietmmap}, see @ref{Memory management}.
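
As a sketch, the program below builds the adjacency list of the example
above (labels 1 and 5 connected to label 3) and prints the new labels.
Only for easy printing, the returned dataset is explicitly converted to a
32-bit signed integer (the conventional type of labeled datasets in
Gnuastro; the conversion is a no-op if it already has that type):

@example
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <gnuastro/data.h>
#include <gnuastro/list.h>
#include <gnuastro/binary.h>

int
main (void)
@{
  int32_t *n;
  gal_data_t *newlabs;
  size_t i, numconnected;
  size_t number=6;                     /* 5 labels + label 0. */
  gal_list_sizet_t **listarr=calloc(number, sizeof *listarr);

  /* Labels 1 and 5 are connected to label 3 (note that
   * 'gal_list_sizet_add' prepends to each list). */
  gal_list_sizet_add(&listarr[1], 3);
  gal_list_sizet_add(&listarr[3], 5);
  gal_list_sizet_add(&listarr[3], 1);
  gal_list_sizet_add(&listarr[5], 3);

  /* Find the new labels. */
  newlabs=gal_binary_connected_adjacency_list(listarr, number,
                                              -1, 1,
                                              &numconnected);

  /* Print the new label of each old label. */
  newlabs=gal_data_copy_to_new_type_free(newlabs, GAL_TYPE_INT32);
  n=newlabs->array;
  printf("numconnected: %zu\n", numconnected);
  for(i=0;i<number;++i)
    printf("old %zu --> new %d\n", i, n[i]);

  /* Clean up. */
  for(i=0;i<number;++i) gal_list_sizet_free(listarr[i]);
  free(listarr);
  gal_data_free(newlabs);
  return EXIT_SUCCESS;
@}
@end example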
@end deftypefun

@deftypefun {gal_data_t *} gal_binary_holes_label (gal_data_t @code{*input}, int @code{connectivity}, size_t @code{*numholes})
Label all the holes in the foreground (non-zero elements in input) as
independent regions. Holes are background regions (zero-valued in input)
that are fully surrounded by the foreground, as defined by
@code{connectivity}. The returned dataset has a 32-bit signed integer type
with the size of the input. All holes in the input will have
labels/counters greater or equal to @code{1}. The rest of the background
regions will still have a value of @code{0} and the initial foreground
pixels will have a value of @code{-1}. The total number of holes will be
written where @code{numholes} points to.
@end deftypefun

@deftypefun void gal_binary_holes_fill (gal_data_t @code{*input}, int @code{connectivity}, size_t @code{maxsize})
Fill all the holes (0 valued pixels surrounded by 1 valued pixels) of the
binary @code{input} dataset. The connectivity of the holes can be set with
@code{connectivity}. Holes larger than @code{maxsize} are not filled. This
function currently only works on a 2D dataset.
@end deftypefun

@node Labeled datasets, Convolution functions, Binary datasets, Gnuastro library
@subsection Labeled datasets (@file{label.h})

A labeled dataset is one where each element/pixel has an integer label (or
counter). The label identifies the group/class that the element belongs
to. This form of labeling allows the higher-level study of all pixels
within a certain class.

For example, to detect objects/targets in an image/dataset, you can apply a
threshold to separate the noise from the signal (to detect diffuse signal,
a threshold is useless and more advanced methods are necessary, for example
@ref{NoiseChisel}). But the output of detection is a binary dataset (which
is just a very low-level labeling of @code{0} for noise and @code{1} for
signal).

The raw detection map is therefore hardly useful for any kind of analysis
on objects/targets in the image. One solution is to use a
connected-components algorithm (see @code{gal_binary_connected_components}
in @ref{Binary datasets}). It is a simple and useful way to separate/label
connected patches in the foreground. This higher-level (but still
elementary) labeling therefore allows you to count how many connected
patches of signal there are in the dataset and is a major improvement
compared to the raw detection.

However, when your objects/targets are touching, the simple connected
components algorithm is not enough and a still higher-level labeling
mechanism is necessary. This brings us to the necessity of the functions in
this part of Gnuastro's library. The main inputs to the functions in this
section are already labeled datasets (for example with the connected
components algorithm above).

Each of the labeled regions is independent of the others (the labels
specify different classes of targets). Therefore, especially in large
datasets, it is often useful to process each label on independent CPU
threads in parallel rather than in series. Hence, the functions of this
section actually use an array of pixel/element indexs (belonging to each
label/class) as the main identifier of a region. Using indexs will also
allow processing of overlapping labels (for example in deblending
problems). Just note that overlapping labels are not yet implemented, but
planned. You can use @code{gal_label_indexs} to generate lists of indexs
belonging to separate classes from the labeled input.

@deffn  Macro GAL_LABEL_INIT
@deffnx Macro GAL_LABEL_RIVER
@deffnx Macro GAL_LABEL_TMPCHECK
Special negative integer values used internally by some of the functions in
this section. Recall that meaningful labels are considered to be positive
integers (@mymath{\geq1}). Zero is conventionally kept for regions with no
labels, therefore negative integers can be used for any extra
classification in the labeled datasets.
@end deffn

@deftypefun {gal_data_t *} gal_label_indexs (gal_data_t @code{*labels}, size_t @code{numlabs}, size_t @code{minmapsize}, int @code{quietmmap})

Return an array of @code{gal_data_t} containers, each containing the pixel
indexs of the respective label (see @ref{Generic data
container}). @code{labels} contains the label of each element and has to
have a @code{GAL_TYPE_INT32} type (see @ref{Library data types}). Only
positive (greater than zero) values in @code{labels} will be used/indexed,
other elements will be ignored.

Meaningful labels start from @code{1} and not @code{0}, therefore the
output array of @code{gal_data_t} will contain @code{numlabs+1} elements.
The first (zero-th) element of the output (@code{indexs[0]} in the example
below) will be initialized to a dataset with zero elements. This will allow
easy (non-confusing) access to the indexs of each (meaningful) label.

@code{numlabs} is the number of labels in the dataset. If it is given a
value of zero, then the maximum value in the input (largest label) will be
found and used. Therefore if a non-zero value is given that is smaller
than the actual number of labels, this function will crash (it will write
into unallocated space). A non-zero @code{numlabs} is therefore only
useful in a highly optimized/checked environment.

For example, if the returned array is called @code{indexs}, then
@code{indexs[10].size} contains the number of elements that have a label of
@code{10} in @code{labels} and @code{indexs[10].array} is an array (after
casting to @code{size_t *}) containing the indexs of each one of those
elements/pixels.

By @emph{index} we mean the 1D position: the input number of dimensions is
irrelevant (any dimensionality is supported). In other words, each
element's index is the number of elements/pixels between it and the
dataset's first element/pixel. Therefore it is always greater or equal to
zero and stored in @code{size_t} type.
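
For example, the minimal fragment below (assuming @code{labels} is an
already-labeled @code{GAL_TYPE_INT32} dataset and @code{numlabs} is its
number of labels) prints the number of elements with each label:

@example
size_t lab;
gal_data_t *indexs=gal_label_indexs(labels, numlabs, -1, 1);

for(lab=1; lab<=numlabs; ++lab)
  printf("Label %zu has %zu element(s).\n", lab, indexs[lab].size);

/* Clean up (recall that `numlabs+1' datasets were allocated). */
gal_data_array_free(indexs, numlabs+1, 1);
@end example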
@end deftypefun

@deftypefun size_t gal_label_watershed (gal_data_t @code{*values}, gal_data_t @code{*indexs}, gal_data_t @code{*label}, size_t @code{*topinds}, int @code{min0_max1})
@cindex Watershed algorithm
@cindex Algorithm: watershed
Use the watershed algorithm@footnote{The watershed algorithm was initially
introduced by @url{https://doi.org/10.1109/34.87344, Vincent and
Soille}. It starts from the minima and puts the pixels in, one by one, to
grow them until they touch (creating a watershed). For more, also see the
Wikipedia article:
@url{https://en.wikipedia.org/wiki/Watershed_%28image_processing%29}.} to
``over-segment'' the pixels in the @code{indexs} dataset based on values in
the @code{values} dataset. Internally, each local extremum (maximum or
minimum, based on @code{min0_max1}) and its surrounding pixels will be
given a unique label. For demonstration, see Figures 8 and 9 of
@url{http://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. If
@code{topinds!=NULL}, it is assumed to point to an already allocated space
to write the index of each clump's local extremum; otherwise, it is ignored.

The @code{values} dataset must have a 32-bit floating point type
(@code{GAL_TYPE_FLOAT32}, see @ref{Library data types}) and will only be
read by this function. @code{indexs} must contain the indexs of the
elements/pixels that will be over-segmented by this function and have a
@code{GAL_TYPE_SIZE_T} type, see the description of
@code{gal_label_indexs}, above. The final labels will be written in the
respective positions of @code{labels}, which must have a
@code{GAL_TYPE_INT32} type and be the same size as @code{values}.

When @code{indexs} is already sorted, this function will ignore
@code{min0_max1}. To judge if the dataset is sorted or not (by the values
the indexs correspond to in @code{values}, not the actual indexs), this
function will look into the bits of @code{indexs->flag}, for the respective
bit flags, see @ref{Generic data container}. If @code{indexs} is not
already sorted, this function will sort it according to the values of the
respective pixel in @code{values}. The increasing/decreasing order will be
determined by @code{min0_max1}. Note that if this function is called on
multiple threads @emph{and} @code{values} points to a different array on
each thread, this function will not return a reasonable result. In this
case, please sort @code{indexs} prior to calling this function (see
@code{gal_qsort_index_multi_d} in @ref{Qsort functions}).

When @code{indexs} is decreasing (increasing), or @code{min0_max1} is
@code{1} (@code{0}), local minima (maxima) are considered rivers
(watersheds) and given a label of @code{GAL_LABEL_RIVER} (see above).

Note that rivers/watersheds will also be formed on the edges of the labeled
regions or when the labeled pixels touch a blank pixel. Therefore this
function will need to check for the presence of blank values. To be most
efficient, it is thus recommended to use @code{gal_blank_present} (with
@code{updateflag=1}) prior to calling this function (see @ref{Library blank
values}). Once the flag has been set, no other function (including this one)
that needs special behavior for blank pixels will have to parse the dataset
to see if it has blank values any more.

If you are sure your dataset doesn't have blank values (by the design of
your software), to avoid an extra parsing of the dataset and improve
performance, you can set the two bits manually (see the description of
@code{flags} in @ref{Generic data container}):
@example
input->flag |=  GAL_DATA_FLAG_BLANK_CH; /* Set bit to 1. */
input->flag &= ~GAL_DATA_FLAG_HASBLANK; /* Set bit to 0. */
@end example
@end deftypefun

@deftypefun void gal_label_clump_significance (gal_data_t @code{*values}, gal_data_t @code{*std}, gal_data_t @code{*label}, gal_data_t @code{*indexs}, struct gal_tile_two_layer_params @code{*tl}, size_t @code{numclumps}, size_t @code{minarea}, int @code{variance}, int @code{keepsmall}, gal_data_t @code{*sig}, gal_data_t @code{*sigind})
@cindex Clump
This function is usually called after @code{gal_label_watershed}, and is
used as a measure to identify which over-segmented ``clumps'' are real and
which are noise.

A measurement is done on each clump (using the @code{values} and @code{std}
datasets, see below). To help in multi-threaded environments, the operation
is only done on pixels which are indexed in @code{indexs}. It is expected
for @code{indexs} to be sorted by their values in @code{values}. If not
sorted, the measurement may not be reliable. If sorted in a decreasing
order, then clump building will start from their highest value and
vice-versa. See the description of @code{gal_label_watershed} for more on
@code{indexs}.

Each ``clump'' (identified by a positive integer) is assumed to be
surrounded by at least one river/watershed pixel (with a non-positive
label). This function will parse the pixels identified in @code{indexs} and
make a measurement on each clump and over all the river/watershed
pixels. The number of clumps (@code{numclumps}) must be given as an input
argument and any clump that is smaller than @code{minarea} is ignored
(because of scatter). If @code{variance} is non-zero, then the @code{std}
dataset is interpreted as variance, not standard deviation.

The @code{values} and @code{std} datasets must have a @code{float} (32-bit
floating point) type. Also, @code{label} and @code{indexs} must
respectively have @code{int32} and @code{size_t} types. @code{values} and
@code{label} must have the same size, but @code{std} can have three
possible sizes: 1) a single element (which will be used for the whole
dataset), 2) the same size as @code{values} (so a different error can be
assigned to every pixel), 3) a single value for each tile, based on the
@code{tl} tessellation (see @ref{Tile grid}). In the last case, a
tile/value will be associated to each clump based on its flux-weighted
(only positive values) center.

The main output is an internally allocated, 1-dimensional array with one
value per label. The array information (length, type, etc.) will be
written into the @code{sig} generic data container. @code{sig} itself must
therefore already be allocated before calling this function, but
@code{sig->array} must be @code{NULL} when it is called. After this
function, the details of the array (number of elements, type, size, etc.)
will be written into the various components of @code{sig}, see the
definition of @code{gal_data_t} in @ref{Generic data container}.

Optionally (when @code{sigind!=NULL}, similar to @code{sig}) the clump
labels of each measurement in @code{sig} will be written in
@code{sigind->array}. If @code{keepsmall} is zero, small clumps (where no
measurement is made) will not be included in the output table.

This function is initially intended for a multi-threaded environment.
In such cases, you will be writing arrays of clump measures from different regions in parallel into an array of @code{gal_data_t}s.
You can simply allocate (and initialize) such an array with the @code{gal_data_array_calloc} function in @ref{Arrays of datasets}.
For example if the @code{gal_data_t} array is called @code{array}, you can pass @code{&array[i]} as @code{sig}.

Along with some other functions in @code{label.h}, this function was initially written for @ref{Segment}.
The description of the parameter used to measure a clump's significance is fully given in @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.

@end deftypefun

@deftypefun void gal_label_grow_indexs (gal_data_t @code{*labels}, gal_data_t @code{*indexs}, int @code{withrivers}, int @code{connectivity})
Grow the (positive) labels of @code{labels} over the pixels in @code{indexs} (see description of @code{gal_label_indexs}).
The pixels (position in @code{indexs}, values in @code{labels}) that must be ``grown'' must have a value of @code{GAL_LABEL_INIT} in @code{labels} before calling this function.
For a demonstration see Columns 2 and 3 of Figure 10 in @url{http://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.

In many aspects, this function is very similar to over-segmentation (watershed algorithm, @code{gal_label_watershed}).
The big difference is that in over-segmentation, local maxima (that aren't touching any already-labeled pixel) get a separate label.
However, here the final number of labels will not change.
All pixels that aren't directly touching a labeled pixel just get pushed back to the start of the loop, and the loop iterates until its size doesn't change any more.
This is because in a generic scenario some of the indexed pixels might not be reachable through other indexed pixels.

The next major difference with over-segmentation is that when there is only one label in growth region(s), it is not mandatory for @code{indexs} to be sorted by values.
If there are multiple labeled regions in growth region(s), then values are important and you can use @code{qsort} with @code{gal_qsort_index_single_d} to sort the indexs by values in a separate array (see @ref{Qsort functions}).

This function looks for positive-valued neighbors of each pixel in @code{indexs} and will label a pixel if it touches one.
Therefore, it is very important that only pixels/labels that are intended for growth have positive values in @code{labels} before calling this function.
Any non-positive (zero or negative) value will be ignored as a label by this function.
Thus, it is recommended that while filling in the `indexs' array values, you initialize all the pixels that are in @code{indexs} with @code{GAL_LABEL_INIT}, and set non-labeled pixels that you don't want to grow to @code{0}.

This function will write into both the input datasets.
After this function, some of the non-positive @code{labels} pixels will have a new positive label and the number of useful elements in @code{indexs} will have decreased.
The index of those pixels that couldn't be labeled will remain inside @code{indexs}.
If @code{withrivers} is non-zero, then pixels that are immediately touching more than one positive value will be given a @code{GAL_LABEL_RIVER} label.

@cindex GNU C library
Note that the @code{indexs->array} is not re-allocated to its new size at the end@footnote{Note that according to the GNU C Library, even a @code{realloc} to a smaller size can also cause a re-write of the whole array, which is not a cheap operation.}.
But since @code{indexs->dsize[0]} and @code{indexs->size} have new values after this function is returned, the extra elements just won't be used until they are ultimately freed by @code{gal_data_free}.

Connectivity is a value between @code{1} (fewest number of neighbors) and the number of dimensions in the input (most number of neighbors).
For example in a 2D dataset, a connectivity of @code{1} and @code{2} corresponds to 4-connected and 8-connected neighbors.
@end deftypefun


@node Convolution functions, Interpolation, Labeled datasets, Gnuastro library
@subsection Convolution functions (@file{convolve.h})

Convolution is a very common operation during data analysis and is
thoroughly described as part of Gnuastro's @ref{Convolve} program which is
fully devoted to this job. Because of the complete introduction that was
presented there, we will directly skip to the currently available
convolution functions in Gnuastro's library.

As of this version, only spatial domain convolution is available in
Gnuastro's libraries. We haven't yet had the time to liberate the frequency
domain convolution and de-convolution functions that are available
in the Convolve program@footnote{Hence any help would be greatly
appreciated.}.

@deftypefun {gal_data_t *} gal_convolve_spatial (gal_data_t @code{*tiles}, gal_data_t @code{*kernel}, size_t @code{numthreads}, int @code{edgecorrection}, int @code{convoverch})
Convolve the given @code{tiles} dataset (possibly a list of tiles, see
@ref{List of gal_data_t} and @ref{Tessellation library}) with @code{kernel}
on @code{numthreads} threads. When @code{edgecorrection} is non-zero, it
will correct for the edge dimming effects as discussed in @ref{Edges in the
spatial domain}.

@code{tiles} can be a single/complete dataset, but in that case the speed
will be very slow. Therefore, for larger images, it is recommended to give
a list of tiles covering a dataset. To create a tessellation that fully
covers an input image, you may use @code{gal_tile_full}, or
@code{gal_tile_full_two_layers} to also define channels over your input
dataset. These functions are discussed in @ref{Tile grid}. You may then
pass the list of tiles to this function. This is the recommended way to
call this function because spatial domain convolution is slow and breaking
the job into many small tiles and working on them simultaneously on several
threads can greatly speed up the processing.

If the tiles are defined within a channel (a larger tile), by default
convolution will be done within the channel, so pixels on the edge of a
channel will not be affected by their neighbors that are in another
channel. See @ref{Tessellation} for the necessity of channels in
astronomical data analysis. This behavior may be disabled when
@code{convoverch} is non-zero. In this case, it will ignore channel borders
(if they exist) and mix all pixels that cover the kernel within the
dataset.
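
For example, the sketch below (with hypothetical file names, and
treating the whole image as a single dataset for simplicity, which will
be slow on large images) convolves an image with a kernel:

@example
size_t numthreads=gal_threads_number();
gal_data_t *image=gal_fits_img_read_to_type("image.fits", "1",
                                            GAL_TYPE_FLOAT32, -1, 1);
gal_data_t *kernel=gal_fits_img_read_to_type("kernel.fits", "1",
                                             GAL_TYPE_FLOAT32, -1, 1);
gal_data_t *out=gal_convolve_spatial(image, kernel, numthreads, 1, 1);
@end example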
@end deftypefun

@deftypefun void gal_convolve_spatial_correct_ch_edge (gal_data_t @code{*tiles}, gal_data_t @code{*kernel}, size_t @code{numthreads}, int @code{edgecorrection}, gal_data_t @code{*tocorrect})
Correct the edges of channels in an already convolved image when it was
initially convolved with @code{gal_convolve_spatial} and
@code{convoverch==0}. In that case, strong boundaries might exist on the
channel edges. So if you need to remove those boundaries at later
steps of your processing, you can call this function. It will only do
convolution on the tiles that are near the edge and were affected by the
channel borders. Other pixels in the image will not be touched. Hence, it
is much faster.
@end deftypefun

@node Interpolation, Git wrappers, Convolution functions, Gnuastro library
@subsection Interpolation (@file{interpolate.h})

@cindex Sky line
@cindex Interpolation
During data analysis, it happens that parts of the data cannot be given a value, but one is necessary for the higher-level analysis.
For example, a very bright star may saturate part of your image, and you need to fill in the saturated pixels with some value.
Another common use case is masked sky-lines in 1D spectra that similarly need to be assigned a value for higher-level analysis.
In other situations, you might want a value in an arbitrary point: between the elements/pixels where you have data.
The functions described in this section are for such operations.

@cindex GNU Scientific Library
The parametric interpolations discussed below are wrappers around the interpolation functions of the GNU Scientific Library (or GSL, see @ref{GNU Scientific Library}).
To identify the different GSL interpolation types, Gnuastro's @file{gnuastro/interpolate.h} header file contains macros that are discussed below.
The GSL wrappers provided here are not yet complete because we are too busy.
If you need them, please consider helping us in adding them to Gnuastro's library.
Your help would be very welcome and appreciated.

@deffn Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_RADIAL
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_MANHATTAN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_INVALID
The metric used to find distance for nearest neighbor interpolation.
A radial metric uses the simple Euclidean function to find the distance between two pixels.
A Manhattan metric is the sum of the (integer) distances along each dimension, like counting steps; it is also much faster to calculate than the radial metric because it doesn't need a square root calculation.
@end deffn

@deffn Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MIN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MAX
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MEDIAN
@deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_INVALID
@cindex Saturated stars
The various types of nearest-neighbor interpolation functions for @code{gal_interpolate_neighbors}.
The names are descriptive for the operation they do, so we won't go into much more detail here.
The median operator will be one of the most used, but operators like the maximum are good to fill the center of saturated stars.
@end deffn

@deftypefun {gal_data_t *} gal_interpolate_neighbors (gal_data_t @code{*input}, struct gal_tile_two_layer_params @code{*tl}, uint8_t @code{metric}, size_t @code{numneighbors}, size_t @code{numthreads}, int @code{onlyblank}, int @code{aslinkedlist}, int @code{function})

Interpolate the values in the input dataset using a statistic calculated from the distribution of their @code{numneighbors} closest neighbors.
The desired statistic is determined from the @code{function} argument, which takes any of the @code{GAL_INTERPOLATE_NEIGHBORS_FUNC_} macros (see above).
This function is non-parametric and thus agnostic to the input's number of dimensions or the shape of the distribution.

Distance can be defined on different metrics that are identified through @code{metric} (taking values determined by the @code{GAL_INTERPOLATE_NEIGHBORS_METRIC_} macros described above).
If @code{onlyblank} is non-zero, then only blank elements will be interpolated and pixels that already have a value will be left untouched.
This function is multi-threaded and will run on @code{numthreads} threads (see @code{gal_threads_number} in @ref{Multithreaded programming}).

@code{tl} is Gnuastro's tessellation structure used to define tiles over an image and is fully described in @ref{Tile grid}.
When @code{tl!=NULL}, then it is assumed that the @code{input->array} contains one value per tile and interpolation will respect certain tessellation properties, for example to not interpolate over channel borders.

If several datasets have the same set of blank values, you don't need to call this function multiple times.
When @code{aslinkedlist} is non-zero, then @code{input} will be seen as a @ref{List of gal_data_t}.
In this case, the same neighbors will be used for all the datasets in the list.
Of course, the values for each dataset will be different, so a different value will be written in each dataset, but the neighbor checking (the most CPU-intensive part) will only be done once.

This is a non-parametric and robust function for interpolation.
The interpolated values are also always within the range of the non-blank values and strong outliers do not get created.
However, this type of interpolation must be used with care when there are gradients.
This is because it is non-parametric and if there aren't enough neighbors, step-like features can be created.
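
For example, the fragment below (assuming @code{image} is an already-read dataset containing blank elements) would fill each blank element with the median of its 9 closest neighbors, using all available threads:

@example
gal_data_t *out;
out=gal_interpolate_neighbors(image, NULL,
                     GAL_INTERPOLATE_NEIGHBORS_METRIC_RADIAL,
                     9, gal_threads_number(), 1, 0,
                     GAL_INTERPOLATE_NEIGHBORS_FUNC_MEDIAN);
@end example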
@end deftypefun

@deffn Macro GAL_INTERPOLATE_1D_INVALID
This is just a placeholder to manage errors.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_LINEAR
[From GSL:] Linear interpolation. This interpolation method does not
require any additional memory.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_POLYNOMIAL
@cindex Polynomial interpolation
@cindex Interpolation: Polynomial
[From GSL:] Polynomial interpolation. This method should only be used for
interpolating small numbers of points because polynomial interpolation
introduces large oscillations, even for well-behaved datasets. The number
of terms in the interpolating polynomial is equal to the number of points.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_CSPLINE
@cindex Interpolation: Spline
@cindex Cubic spline interpolation
@cindex Spline (cubic) interpolation
[From GSL:] Cubic spline with natural boundary conditions. The resulting
curve is piecewise cubic on each interval, with matching first and second
derivatives at the supplied data-points.  The second derivative is chosen
to be zero at the first point and last point.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_CSPLINE_PERIODIC
[From GSL:] Cubic spline with periodic boundary conditions.  The resulting
curve is piecewise cubic on each interval, with matching first and second
derivatives at the supplied data-points.  The derivatives at the first and
last points are also matched.  Note that the last point in the data must
have the same y-value as the first point, otherwise the resulting periodic
interpolation will have a discontinuity at the boundary.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_AKIMA
@cindex Interpolation: Akima spline
@cindex Akima spline interpolation
@cindex Spline (Akima) interpolation
[From GSL:] Non-rounded Akima spline with natural boundary conditions. This
method uses the non-rounded corner algorithm of Wodicka.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_AKIMA_PERIODIC
[From GSL:] Non-rounded Akima spline with periodic boundary
conditions. This method uses the non-rounded corner algorithm of Wodicka.
@end deffn
@deffn Macro GAL_INTERPOLATE_1D_STEFFEN
@cindex Steffen interpolation
@cindex Interpolation: Steffen
@cindex Interpolation: monotonic
[From GSL:] Steffen’s
method@footnote{@url{http://adsabs.harvard.edu/abs/1990A%26A...239..443S}}
guarantees the monotonicity of the interpolating function between the given
data points. Therefore, minima and maxima can only occur exactly at the
data points, and there can never be spurious oscillations between data
points. The interpolated function is piecewise cubic in each interval. The
resulting curve and its first derivative are guaranteed to be continuous,
but the second derivative may be discontinuous.
@end deffn

@deftypefun {gsl_spline *} gal_interpolate_1d_make_gsl_spline (gal_data_t @code{*X}, gal_data_t @code{*Y}, int @code{type_1d})
@cindex GNU Scientific Library
Allocate and initialize a GNU Scientific Library (GSL) 1D @code{gsl_spline}
structure using the non-blank elements of @code{Y}. @code{type_1d}
identifies the interpolation scheme and must be one of the
@code{GAL_INTERPOLATE_1D_*} macros defined above.

If @code{X==NULL}, the X-axis is assumed to be integers starting from zero
(the index of each element in @code{Y}). Otherwise, the values in @code{X}
will be used to initialize the interpolation structure. Note that when
given, @code{X} must @emph{not} contain any blank elements and it must be
sorted (in increasing order).

Each interpolation scheme needs a minimum number of elements to
successfully operate. If the number of non-blank values in @code{Y} is less
than this number, this function will return a @code{NULL} pointer.

To be as generic and modular as possible, GSL's tools are low-level.
Therefore before doing the interpolation, many steps are necessary (like
preparing your dataset, then allocating and initializing
@code{gsl_spline}). The metadata available in Gnuastro's @ref{Generic data
container} make it easy to hide all those preparations within this
function.

Once @code{gsl_spline} has been initialized by this function, the
interpolation can be evaluated for any X value within the non-blank range
of the input using @code{gsl_spline_eval} or @code{gsl_spline_eval_e}.

For example in the small program below, we read the first two columns of
the table in @file{table.txt} and feed them to this function to later estimate
the values in the second column for three selected points. You can use
@ref{BuildProgram} to compile and run this function, see @ref{Library demo
programs} for more.

@cindex last-in-first-out
@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/table.h>
#include <gnuastro/interpolate.h>

int
main(void)
@{
  size_t i;
  gal_data_t *X, *Y;
  gsl_spline *spline;
  gsl_interp_accel *acc;
  gal_list_str_t *cols=NULL;

  /* Change the values based on your input table. */
  double points[]=@{1.8, 2.5, 10.3@};

  /* Read the first two columns from `table.txt'.
     IMPORTANT: the string list is last-in-first-out, so the output
     column order is the inverse of the input order. */
  gal_list_str_add(&cols, "1", 0);
  gal_list_str_add(&cols, "2", 0);
  Y=gal_table_read("table.txt", NULL, cols, GAL_TABLE_SEARCH_NAME,
                   0, -1, 1, NULL);
  X=Y->next;

  /* Allocate the GSL interpolation accelerator and make the
     `gsl_spline' structure. */
  acc=gsl_interp_accel_alloc();
  spline=gal_interpolate_1d_make_gsl_spline(X, Y,
                                 GAL_INTERPOLATE_1D_STEFFEN);

  /* Calculate the respective value for all the given points,
     if `spline' could be allocated. */
  if(spline)
    for(i=0; i<(sizeof points)/(sizeof *points); ++i)
      printf("%f: %f\n", points[i],
             gsl_spline_eval(spline, points[i], acc));

  /* Clean up and return. */
  gal_data_free(X);
  gal_data_free(Y);
  gsl_spline_free(spline);
  gsl_interp_accel_free(acc);
  gal_list_str_free(cols, 0);
  return EXIT_SUCCESS;
@}
@end example
@end deftypefun

@deftypefun void gal_interpolate_1d_blank (gal_data_t @code{*in}, int @code{type_1d})
Fill the blank elements of @code{in} using the rest of the elements and the
given interpolation. The interpolation scheme can be set through
@code{type_1d}, which accepts any of the @code{GAL_INTERPOLATE_1D_*} macros
above. The interpolation is internally done in 64-bit floating point type
(@code{double}). However the evaluated/interpolated values (originally
blank) will be written (in @code{in}) with its original numeric datatype,
using C's standard type conversion.

By definition, interpolation is only defined ``between'' valid
points. Therefore, if any number of elements on the start or end of the 1D
array are blank, those elements will not be interpolated and will remain
blank. To see if any blank (non-interpolated) elements remain, you can use
@code{gal_blank_present} on @code{in} after this function is finished.
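
For example, the fragment below (assuming @code{column} is a 1D dataset
with some blank elements) fills the blanks with a cubic spline, then
checks if any blank elements remain on the edges
(@code{gal_blank_present} is declared in @file{gnuastro/blank.h}):

@example
gal_interpolate_1d_blank(column, GAL_INTERPOLATE_1D_CSPLINE);
if( gal_blank_present(column, 1) )
  printf("Blank elements remain on the edges.\n");
@end example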
@end deftypefun


@node Git wrappers, Unit conversion library (@file{units.h}), Interpolation, Gnuastro library
@subsection Git wrappers (@file{git.h})

@cindex Git
@cindex libgit2
Git is one of the most common tools for version control and it can often be
useful during development, for example see @code{COMMIT} keyword in
@ref{Output FITS files}. At installation time, Gnuastro will also check for
the existence of libgit2, and store the value in the
@code{GAL_CONFIG_HAVE_LIBGIT2}, see @ref{Configuration information} and
@ref{Optional dependencies}. @file{gnuastro/git.h} includes
@file{gnuastro/config.h} internally, so you won't have to include both for
this macro.

@deftypefun {char *} gal_git_describe ( )
When libgit2 is present and the program is called within a directory that
is version controlled, this function will return a string containing the
commit description (similar to Gnuastro's unofficial version number, see
@ref{Version numbering}). If there are uncommitted changes in the running
directory, it will add a `@code{-dirty}' prefix to the description. When
there is no tagged point in the previous commit, this function will return
a uniquely abbreviated commit object as fallback. This function is used for
generating the value of the @code{COMMIT} keyword in @ref{Output FITS
files}. The output string is similar to the output of the following
command:

@example
$ git describe --dirty --always
@end example

Space for the output string is allocated within this function, so after
using the value you have to @code{free} the output string. If libgit2 is
not installed or the program calling this function is not within a version
controlled directory, then the output will be the @code{NULL} pointer.
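
For example, a program can report its build environment with a fragment
like the one below:

@example
char *commit=gal_git_describe();
printf("Git description: %s\n", commit ? commit : "(not available)");
if(commit) free(commit);
@end example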
@end deftypefun

@node Unit conversion library (@file{units.h}), Spectral lines library, Git wrappers, Gnuastro library
@subsection Unit conversion library (@file{units.h})

Datasets can contain values in various formats or units.
The functions in this section are defined to facilitate the easy conversion between them and are declared in @file{units.h}.
If there are certain conversions that are useful for your work, please get in touch.

@deftypefun int gal_units_extract_decimal (char @code{*convert}, const char @code{*delimiter}, double @code{*args}, size_t @code{n})
Parse the input @code{convert} string with a certain delimiter (for example @code{01:23:45}, where the delimiter is @code{":"}) as multiple numbers (for example 1,23,45) and write them as an array in the space that @code{args} is pointing to.
The expected number of values in the string is specified by the @code{n} argument (3 in the example above).

If the function succeeds, it will return 1, otherwise it will return 0 and the values may not be fully written into @code{args}.
If the number of values parsed in the string is different from @code{n}, this function will fail.
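
For example, the fragment below parses a sexagesimal string into its three components:

@example
double vals[3];
if( gal_units_extract_decimal("01:23:45", ":", vals, 3) )
  printf("%g, %g, %g\n", vals[0], vals[1], vals[2]);
else
  printf("Couldn't parse the string.\n");
@end example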
@end deftypefun

@deftypefun double gal_units_ra_to_degree (char @code{*convert})
@cindex Right Ascension
Convert the input Right Ascension (RA) string (in the format of hours, minutes and seconds either as @code{_h_m_s} or @code{_:_:_}) to degrees (a single floating point number).
@end deftypefun

@deftypefun double gal_units_dec_to_degree (char @code{*convert})
@cindex Declination
Convert the input Declination (Dec) string (in the format of degrees, arc-minutes and arc-seconds either as @code{_d_m_s} or @code{_:_:_}) to degrees (a single floating point number).
@end deftypefun

@deftypefun {char *} gal_units_degree_to_ra (double @code{decimal}, int @code{usecolon})
@cindex Right Ascension
Convert the input Right Ascension (RA) degree (a single floating point number) to old/standard notation (in the format of hours, minutes and seconds of @code{_h_m_s}).
If @code{usecolon!=0}, then the delimiters between the components will be colons: @code{_:_:_}.
@end deftypefun

@deftypefun {char *} gal_units_degree_to_dec (double @code{decimal}, int @code{usecolon})
@cindex Declination
Convert the input Declination (Dec) degree (a single floating point number) to old/standard notation (in the format of degrees, arc-minutes and arc-seconds of @code{_d_m_s}).
If @code{usecolon!=0}, then the delimiters between the components will be colons: @code{_:_:_}.
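
For example, the fragment below (with hypothetical coordinate strings) converts an RA and a Dec to degrees and back; the returned strings are assumed here to be newly allocated, so they are freed after use:

@example
char *rastr, *decstr;
double ra=gal_units_ra_to_degree("10h20m30s");
double dec=gal_units_dec_to_degree("-30d40m50s");
rastr=gal_units_degree_to_ra(ra, 1);
decstr=gal_units_degree_to_dec(dec, 1);
printf("%f, %f --> %s, %s\n", ra, dec, rastr, decstr);
free(rastr);
free(decstr);
@end example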
@end deftypefun

@node Spectral lines library, Cosmology library, Unit conversion library (@file{units.h}), Gnuastro library
@subsection Spectral lines library (@file{speclines.h})

Gnuastro's library has the following macros and functions for dealing with spectral lines.
All these functions are declared in @file{gnuastro/speclines.h}.

@cindex H-alpha
@cindex H-beta
@cindex H-gamma
@cindex H-delta
@cindex H-epsilon
@cindex OII doublet
@cindex SII doublet
@cindex Lyman-alpha
@cindex Lyman limit
@cindex NII doublet
@cindex Doublet: NII
@cindex Doublet: OII
@cindex Doublet: SII
@cindex OIII doublet
@cindex MgII doublet
@cindex CIII doublet
@cindex Balmer limit
@cindex Doublet: OIII
@cindex Doublet: MgII
@cindex Doublet: CIII
@deffn  Macro GAL_SPECLINES_INVALID
@deffnx Macro GAL_SPECLINES_SIIRED
@deffnx Macro GAL_SPECLINES_SII
@deffnx Macro GAL_SPECLINES_SIIBLUE
@deffnx Macro GAL_SPECLINES_NIIRED
@deffnx Macro GAL_SPECLINES_NII
@deffnx Macro GAL_SPECLINES_HALPHA
@deffnx Macro GAL_SPECLINES_NIIBLUE
@deffnx Macro GAL_SPECLINES_OIIIRED
@deffnx Macro GAL_SPECLINES_OIII
@deffnx Macro GAL_SPECLINES_OIIIBLUE
@deffnx Macro GAL_SPECLINES_HBETA
@deffnx Macro GAL_SPECLINES_HEIIRED
@deffnx Macro GAL_SPECLINES_HGAMMA
@deffnx Macro GAL_SPECLINES_HDELTA
@deffnx Macro GAL_SPECLINES_HEPSILON
@deffnx Macro GAL_SPECLINES_NEIII
@deffnx Macro GAL_SPECLINES_OIIRED
@deffnx Macro GAL_SPECLINES_OII
@deffnx Macro GAL_SPECLINES_OIIBLUE
@deffnx Macro GAL_SPECLINES_BLIMIT
@deffnx Macro GAL_SPECLINES_MGIIRED
@deffnx Macro GAL_SPECLINES_MGII
@deffnx Macro GAL_SPECLINES_MGIIBLUE
@deffnx Macro GAL_SPECLINES_CIIIRED
@deffnx Macro GAL_SPECLINES_CIII
@deffnx Macro GAL_SPECLINES_CIIIBLUE
@deffnx Macro GAL_SPECLINES_HEIIBLUE
@deffnx Macro GAL_SPECLINES_LYALPHA
@deffnx Macro GAL_SPECLINES_LYLIMIT
@deffnx Macro GAL_SPECLINES_INVALID_MAX
Internal values/identifiers for specific spectral lines as is clear from
their names.
Note the first and last ones: neither corresponds to an actual line, but they are useful when parsing the lines automatically, because their integer values are respectively just below and just above those of the first and last real line identifiers.

@code{GAL_SPECLINES_INVALID} has a value of zero, and allows you to have a fixed integer which never corresponds to a line.
@code{GAL_SPECLINES_INVALID_MAX} is the total number of pre-defined lines, plus one.
So you can parse all the known lines with a @code{for} loop like this:
@example
for(i=1;i<GAL_SPECLINES_INVALID_MAX;++i)
@end example
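
For example, using the functions documented below in this section, such a loop can print the name and wavelength of every pre-defined line:

@example
int i;
for(i=1; i<GAL_SPECLINES_INVALID_MAX; ++i)
  printf("%s: %g Angstroms\n", gal_speclines_line_name(i),
         gal_speclines_line_angstrom(i));
@end example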
@end deffn

@deffn  Macro GAL_SPECLINES_ANGSTROM_SIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_SII
@deffnx Macro GAL_SPECLINES_ANGSTROM_SIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_NIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_NII
@deffnx Macro GAL_SPECLINES_ANGSTROM_HALPHA
@deffnx Macro GAL_SPECLINES_ANGSTROM_NIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_OIIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_OIII
@deffnx Macro GAL_SPECLINES_ANGSTROM_OIIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_HBETA
@deffnx Macro GAL_SPECLINES_ANGSTROM_HEIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_HGAMMA
@deffnx Macro GAL_SPECLINES_ANGSTROM_HDELTA
@deffnx Macro GAL_SPECLINES_ANGSTROM_HEPSILON
@deffnx Macro GAL_SPECLINES_ANGSTROM_NEIII
@deffnx Macro GAL_SPECLINES_ANGSTROM_OIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_OII
@deffnx Macro GAL_SPECLINES_ANGSTROM_OIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_BLIMIT
@deffnx Macro GAL_SPECLINES_ANGSTROM_MGIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_MGII
@deffnx Macro GAL_SPECLINES_ANGSTROM_MGIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_CIIIRED
@deffnx Macro GAL_SPECLINES_ANGSTROM_CIII
@deffnx Macro GAL_SPECLINES_ANGSTROM_CIIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_HEIIBLUE
@deffnx Macro GAL_SPECLINES_ANGSTROM_LYALPHA
@deffnx Macro GAL_SPECLINES_ANGSTROM_LYLIMIT
Wavelength (in Angstroms) of the named lines.
@end deffn

@deffn  Macro GAL_SPECLINES_NAME_SIIRED
@deffnx Macro GAL_SPECLINES_NAME_SII
@deffnx Macro GAL_SPECLINES_NAME_SIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_NIIRED
@deffnx Macro GAL_SPECLINES_NAME_NII
@deffnx Macro GAL_SPECLINES_NAME_HALPHA
@deffnx Macro GAL_SPECLINES_NAME_NIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_OIIIRED
@deffnx Macro GAL_SPECLINES_NAME_OIII
@deffnx Macro GAL_SPECLINES_NAME_OIIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_HBETA
@deffnx Macro GAL_SPECLINES_NAME_HEIIRED
@deffnx Macro GAL_SPECLINES_NAME_HGAMMA
@deffnx Macro GAL_SPECLINES_NAME_HDELTA
@deffnx Macro GAL_SPECLINES_NAME_HEPSILON
@deffnx Macro GAL_SPECLINES_NAME_NEIII
@deffnx Macro GAL_SPECLINES_NAME_OIIRED
@deffnx Macro GAL_SPECLINES_NAME_OII
@deffnx Macro GAL_SPECLINES_NAME_OIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_BLIMIT
@deffnx Macro GAL_SPECLINES_NAME_MGIIRED
@deffnx Macro GAL_SPECLINES_NAME_MGII
@deffnx Macro GAL_SPECLINES_NAME_MGIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_CIIIRED
@deffnx Macro GAL_SPECLINES_NAME_CIII
@deffnx Macro GAL_SPECLINES_NAME_CIIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_HEIIBLUE
@deffnx Macro GAL_SPECLINES_NAME_LYALPHA
@deffnx Macro GAL_SPECLINES_NAME_LYLIMIT
Names (as literal strings without any space, all in lower-case) that can be
used to refer to the lines in your program and converted to and from line
identifiers using the functions below.
@end deffn

@deftypefun {char *} gal_speclines_line_name (int @code{linecode})
Return the literal string of the given spectral line identifier Macro (for
example @code{GAL_SPECLINES_HALPHA} or @code{GAL_SPECLINES_LYLIMIT}).
@end deftypefun

@deftypefun int gal_speclines_line_code (char @code{*name})
Return the spectral line identifier of the given standard name (for example
@code{GAL_SPECLINES_NAME_HALPHA} or @code{GAL_SPECLINES_NAME_LYLIMIT}).
@end deftypefun

@deftypefun double gal_speclines_line_angstrom (int @code{linecode})
Return the wavelength (in Angstroms) of the given line.
@end deftypefun

@deftypefun double gal_speclines_line_redshift (double @code{obsline}, double @code{restline})
@cindex Rest-frame
Return the redshift at which the observed wavelength (@code{obsline}) was
emitted, assuming its rest-frame wavelength is @code{restline}.
@end deftypefun

@deftypefun double gal_speclines_line_redshift_code (double @code{obsline}, int @code{linecode})
Return the redshift at which the observed wavelength (@code{obsline}) was
emitted, assuming it is the specific spectral line identified with
@code{linecode}.
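
For example, assuming an emission line observed at a (hypothetical)
wavelength of 19690 Angstroms is H-alpha, its redshift can be estimated
like this:

@example
double z=gal_speclines_line_redshift_code(19690.0,
                                          GAL_SPECLINES_HALPHA);
printf("Assuming H-alpha, the redshift is %g.\n", z);
@end example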
@end deftypefun





@node Cosmology library,  , Spectral lines library, Gnuastro library
@subsection Cosmology library (@file{cosmology.h})

This library does the main cosmological calculations that are commonly
necessary in extra-galactic astronomical studies. The main variable in this
context is the redshift (@mymath{z}). The cosmological input parameters in
the functions below are @code{H0}, @code{o_lambda_0}, @code{o_matter_0},
@code{o_radiation_0} which respectively represent the current (at redshift
0) expansion rate (Hubble constant in units of km/sec/Mpc), cosmological
constant (@mymath{\Lambda}), matter and radiation densities.

All these functions are declared in @file{gnuastro/cosmology.h}. For a more
extended introduction/discussion of the cosmological parameters, please see
@ref{CosmicCalculator}.

@deftypefun double gal_cosmology_age (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the age of the universe at redshift @code{z} in units of Giga
years.
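
For example, with hypothetical cosmological parameter values, the
fragment below calculates the age of the universe at redshift 2 (the
other functions in this section take the same parameters, so it is
easily adapted to them):

@example
double z=2.0, H0=70.0, olambda=0.7, omatter=0.3, oradiation=0.0;
double age=gal_cosmology_age(z, H0, olambda, omatter, oradiation);
printf("Age of the universe at z=%g: %g Gyr.\n", z, age);
@end example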
@end deftypefun

@deftypefun double gal_cosmology_proper_distance (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the proper distance to an object at redshift @code{z} in units of
Mega parsecs.
@end deftypefun

@deftypefun double gal_cosmology_comoving_volume (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the comoving volume over @mymath{4\pi} steradians out to @code{z}
in units of Mega parsecs cubed.
@end deftypefun

@deftypefun double gal_cosmology_critical_density (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Returns the critical density at redshift @code{z} in units of
@mymath{g/cm^3}.
@end deftypefun

@deftypefun double gal_cosmology_angular_distance (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the angular diameter distance to an object at redshift @code{z} in
units of Mega parsecs.
@end deftypefun

@deftypefun double gal_cosmology_luminosity_distance (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the luminosity distance to an object at redshift @code{z}
in units of Mega parsecs.
@end deftypefun

@deftypefun double gal_cosmology_distance_modulus (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the distance modulus at redshift @code{z} (with no units).
@end deftypefun

@deftypefun double gal_cosmology_to_absolute_mag (double @code{z}, double @code{H0}, double @code{o_lambda_0}, double @code{o_matter_0}, double @code{o_radiation_0})
Return the conversion from apparent to absolute magnitude for an object at
redshift @code{z}. This value has to be added to the apparent magnitude to
give the absolute magnitude of an object at redshift @code{z}.
@end deftypefun

@deftypefun double gal_cosmology_velocity_from_z (double @code{z})
Return the velocity (in km/s) corresponding to the given redshift (@code{z}).
@end deftypefun

@deftypefun double gal_cosmology_z_from_velocity (double @code{v})
Return the redshift corresponding to the given velocity (@code{v} in km/s).
@end deftypefun

@node Library demo programs,  , Gnuastro library, Library
@section Library demo programs

In this final section of @ref{Library}, we give some example Gnuastro
programs to demonstrate various features in the library. All these programs
have been tested and once Gnuastro is installed you can compile and run
them with Gnuastro's @ref{BuildProgram} program that will take care of
linking issues. If you don't have any FITS file to experiment on, you can
use those that are generated by Gnuastro after @command{make check} in the
@file{tests/} directory, see @ref{Quick start}.


@menu
* Library demo - reading a image::  Read a FITS image into memory.
* Library demo - inspecting neighbors::  Inspect the neighbors of a pixel.
* Library demo - multi-threaded operation::  Doing an operation on threads.
* Library demo - reading and writing table columns::  Simple Column I/O.
@end menu

@node Library demo - reading a image, Library demo - inspecting neighbors, Library demo programs, Library demo programs
@subsection Library demo - reading a FITS image

The following simple program demonstrates how to read a FITS image into
memory and use the @code{void *array} pointer of @ref{Generic data
container}. For easy linking/compilation of this program along with a first
run see @ref{BuildProgram}. Before running, also change the @code{filename}
and @code{hdu} variable values to specify an existing FITS file and/or
extension/HDU.

This is just intended to demonstrate how to use the @code{array} pointer of
@code{gal_data_t}. Hence it doesn't do important sanity checks; for example,
in real datasets you may also have blank pixels. In such cases, this
program will return a NaN value (see @ref{Blank pixels}). So for general
statistical information of a dataset, it is much better to use Gnuastro's
@ref{Statistics} program which can deal with blank pixels and many other
issues in a generic dataset.

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/fits.h> /* includes gnuastro's data.h and type.h */
#include <gnuastro/statistics.h>

int
main(void)
@{
  size_t i;
  float *farray;
  double sum=0.0;
  gal_data_t *image;
  char *filename="img.fits", *hdu="1";


  /* Read `img.fits' (HDU: 1) as a float32 array. */
  image=gal_fits_img_read_to_type(filename, hdu, GAL_TYPE_FLOAT32,
                                  -1, 1);


  /* Use the allocated space as a single precision floating
   * point array (recall that `image->array' has `void *'
   * type, so it is not directly usable). */
  farray=image->array;


  /* Calculate the sum of all the values. */
  for(i=0; i<image->size; ++i)
    sum += farray[i];


  /* Report the sum. */
  printf("Sum of values in %s (hdu %s) is: %f\n",
         filename, hdu, sum);


  /* Clean up and return. */
  gal_data_free(image);
  return EXIT_SUCCESS;
@}
@end example


@node Library demo - inspecting neighbors, Library demo - multi-threaded operation, Library demo - reading a image, Library demo programs
@subsection Library demo - inspecting neighbors

The following simple program shows how you can inspect the neighbors of a
pixel using the @code{GAL_DIMENSION_NEIGHBOR_OP} function-like macro that
was introduced in @ref{Dimensions}. For easy linking/compilation of this
program along with a first run see @ref{BuildProgram}. Before running, also
change the file name and HDU (first and second arguments to
@code{gal_fits_img_read_to_type}) to specify an existing FITS file and/or
extension/HDU.

@example
#include <stdio.h>
#include <stdlib.h>            /* For `EXIT_SUCCESS'. */
#include <gnuastro/fits.h>
#include <gnuastro/dimension.h>

int
main(void)
@{
  double sum;
  float *array;
  size_t i, num, *dinc;
  gal_data_t *input=gal_fits_img_read_to_type("input.fits", "1",
                                              GAL_TYPE_FLOAT32, -1, 1);

  /* To avoid the `void *' pointer and have `dinc'. */
  array=input->array;
  dinc=gal_dimension_increment(input->ndim, input->dsize);

  /* Go over all the pixels. */
  for(i=0;i<input->size;++i)
    @{
      num=0;
      sum=0.0f;
      GAL_DIMENSION_NEIGHBOR_OP( i, input->ndim, input->dsize,
                                 input->ndim, dinc,
                                 @{++num; sum+=array[nind];@} );
      printf("%zu: num: %zu, sum: %f\n", i, num, sum);
    @}

  /* Clean up and return. */
  free(dinc);
  gal_data_free(input);
  return EXIT_SUCCESS;
@}
@end example



@node Library demo - multi-threaded operation, Library demo - reading and writing table columns, Library demo - inspecting neighbors, Library demo programs
@subsection Library demo - multi-threaded operation

The following simple program shows how to use Gnuastro to simplify spinning off threads and distributing different jobs between the threads.
The relevant thread-related functions are defined in @ref{Gnuastro's thread related functions}.
For easy linking/compilation of this program, along with a first run, see Gnuastro's @ref{BuildProgram}.
Before running, also change the @code{filename} and @code{hdu} variable values to specify an existing FITS file and/or extension/HDU.

This is a very simple program to open a FITS image, distribute its pixels between different threads and print the value of each pixel and the thread it was assigned to.
The actual operation is very simple (and would not usually be done with threads in a real-life program).
It is intentionally chosen to put more focus on the important steps in spinning off threads and on how the worker function (which is called by each thread) can identify the job-IDs it should work on.

For example, instead of an array of pixels, you can define an array of tiles or any other context-specific structures as separate targets.
The important thing is that each action should have its own unique ID (counting from zero, as is done in an array in C).
You can then follow the process below and use each thread to work on all the targets that are assigned to it.
Recall that spinning off threads is itself an expensive process and we don't want to spin off one thread for each target (see the description of @code{gal_threads_dist_in_threads} in @ref{Gnuastro's thread related functions}).

There are many (more complicated, real-world) examples of using @code{gal_threads_spin_off} in Gnuastro's actual source code; you can see them by searching for the @code{gal_threads_spin_off} function from the top source directory (after unpacking the tarball), for example with this command:

@example
$ grep -r gal_threads_spin_off ./
@end example

@noindent
The code of this demonstration program is shown below.
This program was also built and run when you ran @code{make check} during the building of Gnuastro (@file{tests/lib/multithread.c}), so it is already tested for your system and you can safely use it as a guide.

@example
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/fits.h>
#include <gnuastro/threads.h>


/* This structure can keep all information you want to pass onto the
 * worker function on each thread. */
struct params
@{
  gal_data_t *image;            /* Dataset to print values of. */
@};




/* This is the main worker function which will be called by the
 * different threads. `gal_threads_params' is defined in
 * `gnuastro/threads.h' and contains the pointer to the parameter we
 * want. Note that the input argument and returned value of this
 * function always must have `void *' type. */
void *
worker_on_thread(void *in_prm)
@{
  /* Low-level definitions to be done first. */
  struct gal_threads_params *tprm=(struct gal_threads_params *)in_prm;
  struct params *p=(struct params *)tprm->params;


  /* Subsequent definitions. */
  float *array=p->image->array;
  size_t i, index, *dsize=p->image->dsize;


  /* Go over all the actions (pixels in this case) that were assigned
   * to this thread. */
  for(i=0; tprm->indexs[i] != GAL_BLANK_SIZE_T; ++i)
    @{
      /* For easy reading. */
      index = tprm->indexs[i];


      /* Print the information. */
      printf("(%zu, %zu) on thread %zu: %g\n", index%dsize[1]+1,
             index/dsize[1]+1, tprm->id, array[index]);
    @}


  /* Wait for all the other threads to finish, then return. */
  if(tprm->b) pthread_barrier_wait(tprm->b);
  return NULL;
@}




/* High-level function (called by the operating system). */
int
main(void)
@{
  struct params p;
  char *filename="input.fits", *hdu="1";
  size_t numthreads=gal_threads_number();

  /* We are using `-1' for `minmapsize' to ensure that the image is
     read into memory and `1' for `quietmmap' (which can also be
     zero), see the "Memory management" section in the book. */
  int quietmmap=1;
  size_t minmapsize=-1;


  /* Read the image into memory as a float32 data type. */
  p.image=gal_fits_img_read_to_type(filename, hdu, GAL_TYPE_FLOAT32,
                                    minmapsize, quietmmap);


  /* Print some basic information before the actual contents: */
  printf("Pixel values of %s (HDU: %s) on %zu threads.\n", filename,
         hdu, numthreads);
  printf("Used to check the compiled library's capability in opening "
         "a FITS file, and also spinning-off threads.\n");


  /* A small sanity check: this is only intended for 2D arrays (to
   * print the coordinates of each pixel). */
  if(p.image->ndim!=2)
    @{
      fprintf(stderr, "only 2D images are supported.\n");
      exit(EXIT_FAILURE);
    @}


  /* Spin-off the threads and do the processing on each thread. */
  gal_threads_spin_off(worker_on_thread, &p, p.image->size, numthreads,
                       minmapsize, quietmmap);


  /* Clean up and return. */
  gal_data_free(p.image);
  return EXIT_SUCCESS;
@}
@end example



@node Library demo - reading and writing table columns,  , Library demo - multi-threaded operation, Library demo programs
@subsection Library demo - reading and writing table columns

Tables are some of the most common inputs to, and outputs of, programs. This
section contains a small program for reading and writing tables using the
constructs described in @ref{Table input output}. For easy
linking/compilation of this program, along with a first run, see Gnuastro's
@ref{BuildProgram}. Before running, also set the following file and column
names in the first two lines of @code{main}. The input and output names may
be @file{.txt} or @file{.fits} tables; @code{gal_table_read} and
@code{gal_table_write} can handle both formats. For plain
text tables, see @ref{Gnuastro text table format}.

This example program reads three columns from a table. The first two
columns are selected by their name (@code{NAME1} and @code{NAME2}) and the
third is selected by its number: column 10 (counting from 1). Gnuastro's
column selection is discussed in @ref{Selecting table columns}. The first
and second columns can be any type, but this program will convert them to
@code{int32_t} and @code{float} for its internal usage
respectively. However, the third column must be double for this program. So
if it isn't, the program will abort with an error. Having the columns in
memory, it will print them out along with their sum (just a simple
application, you can do what ever you want at this stage). Reading the
table finishes here.

The rest of the program is a demonstration of writing a table. While
parsing the rows, this program will change the first column (to be
counters) and multiply the second by 10 (so the output will be
different). Then it will define the order of the output columns by setting
the @code{next} element (to create a @ref{List of gal_data_t}). Before
writing, this function will also set names for the columns (units and
comments can be defined in a similar manner). Writing the columns to a file
is then done through a simple call to @code{gal_table_write}.

The operations that are shown in this example program are not necessary all
the time. For example, in many cases, you know the numerical data type of
the column before writing your program (see @ref{Numeric data types}), so
type checking and copying to a specific type won't be necessary.

@example
#include <stdio.h>
#include <stdlib.h>

#include <gnuastro/table.h>

int
main(void)
@{
  /* File names and column names (which may also be numbers). */
  char *c1_name="NAME1", *c2_name="NAME2", *c3_name="10";
  char *inname="input.fits", *hdu="1", *outname="out.fits";

  /* Internal parameters. */
  float *array2;
  double *array3;
  int32_t *array1;
  size_t i, counter=0;
  gal_data_t *c1, *c2;
  gal_data_t *col, *columns;
  gal_list_str_t *column_ids=NULL;

  /* Define the columns to read. */
  gal_list_str_add(&column_ids, c1_name, 0);
  gal_list_str_add(&column_ids, c2_name, 0);
  gal_list_str_add(&column_ids, c3_name, 0);

  /* The columns were added in reverse, so correct it. */
  gal_list_str_reverse(&column_ids);

  /* Read the desired columns. */
  columns = gal_table_read(inname, hdu, column_ids,
                           GAL_TABLE_SEARCH_NAME, 1, -1, 1, NULL);

  /* Go over the columns; we'll assume that you don't know their
   * types a-priori, so we'll check them here. */
  counter=1;
  for(col=columns; col!=NULL; col=col->next)
    switch(counter++)
      @{
      case 1:              /* First column: we want it as int32_t. */
        c1=gal_data_copy_to_new_type(col, GAL_TYPE_INT32);
        array1 = c1->array;
        break;

      case 2:              /* Second column: we want it as float.  */
        c2=gal_data_copy_to_new_type(col, GAL_TYPE_FLOAT32);
        array2 = c2->array;
        break;

      case 3:              /* Third column: it MUST be double.     */
        if(col->type!=GAL_TYPE_FLOAT64)
          @{
            fprintf(stderr, "Column %s must be float64 type, it is "
                    "%s", c3_name, gal_type_name(col->type, 1));
            exit(EXIT_FAILURE);
          @}
        array3 = col->array;
        break;
      @}

  /* As an example application we'll just print them out. In the
   * meantime (just for a simple demonstration), change the first
   * array value to the counter and multiply the second by 10. */
  for(i=0;i<c1->size;++i)
    @{
      printf("%zu: %d + %f + %f = %f\n", i+1, array1[i], array2[i],
             array3[i], array1[i]+array2[i]+array3[i]);
      array1[i]  = i+1;
      array2[i] *= 10;
    @}

  /* Link the first two columns as a list. */
  c1->next = c2;
  c2->next = NULL;

  /* Set names for the columns and write them out. */
  c1->name = "COUNTER";
  c2->name = "VALUE";
  gal_table_write(c1, NULL, NULL, GAL_TABLE_FORMAT_BFITS, outname,
                  "MY-COLUMNS", 0);

  /* The names weren't allocated, so to avoid cleaning-up problems,
   * we'll set them to NULL. */
  c1->name = c2->name = NULL;

  /* Clean up and return.  */
  gal_data_free(c1);
  gal_data_free(c2);
  gal_list_data_free(columns);
  gal_list_str_free(column_ids, 0); /* strings weren't allocated. */
  return EXIT_SUCCESS;
@}
@end example








@node Developing, Gnuastro programs list, Library, Top
@chapter Developing

The basic idea of GNU Astronomy Utilities is for an interested astronomer
to be able to easily understand the code of any of the programs or
libraries, to be able to modify the code if they feel there is an
improvement, and finally, to be able to add new programs or libraries for
their own benefit, and for the larger community if they are willing to
share them. In short, we hope that at least from the software point of
view, the ``obscurantist faith in the expert's special skill and in his
personal knowledge and authority'' can be broken, see @ref{Science and its
tools}. With this aim in mind, Gnuastro was designed to have a very basic,
simple, and easy to understand architecture for any interested inquirer.

This chapter starts with very general design choices, in particular
@ref{Why C} and @ref{Program design philosophy}. It will then get a little
more technical about the Gnuastro code and file/directory structure in
@ref{Coding conventions} and @ref{Program source}. @ref{The TEMPLATE
program} discusses a minimal (and working) template to help in creating new
programs or easier learning of a program's internal structure. Some other
general issues about documentation, building and debugging are then
discussed. This chapter concludes with how you can learn about the
development and get involved in @ref{Gnuastro project webpage},
@ref{Developing mailing lists} and @ref{Contributing to Gnuastro}.


@menu
* Why C::                       Why Gnuastro is designed in C.
* Program design philosophy::   General ideas behind the package structure.
* Coding conventions::          Gnuastro coding conventions.
* Program source::              Conventions for the code.
* Documentation::               Documentation is an integral part of Gnuastro.
* Building and debugging::      Build and possibly debug during development.
* Test scripts::                Understanding the test scripts.
* Developer's checklist::       Checklist to finalize your changes.
* Gnuastro project webpage::    Central hub for Gnuastro activities.
* Developing mailing lists::    Stay up to date with Gnuastro's development.
* Contributing to Gnuastro::    Share your changes with all users.
@end menu




@node Why C, Program design philosophy, Developing, Developing
@section Why C programming language?
@cindex C programming language
@cindex C++ programming language
@cindex Python programming language
Currently the programming languages most commonly used in scientific
applications are C++@footnote{@url{https://isocpp.org/}},
Python@footnote{@url{https://www.python.org/}}, and
Julia@footnote{@url{https://julialang.org/}} (a newcomer that is swiftly
gaining ground). One of the main reasons behind this choice is their
high-level abstractions. However, GNU Astronomy Utilities is fully written
in the C programming
language@footnote{@url{https://en.wikipedia.org/wiki/C_(programming_language)}}.
The reasons can be summarized as simplicity, portability and
efficiency/speed. All three are very important in scientific software and
we will discuss them below.

@cindex ANSI C
@cindex ISO C90
@cindex Ritchie, Dennis
@cindex Kernighan, Brian
@cindex Stroustrup, Bjarne
Simplicity can best be demonstrated by comparing the main books of C++ and
C. ``The C programming language''@footnote{Brian Kernighan, Dennis
Ritchie. @emph{The C programming language}. Prentice Hall, Inc., Second
edition, 1988. It is also commonly known as K&R and is based on the ANSI C
and ISO C90 standards.}, written by the authors of C, is only 286 pages,
covers a very good fraction of the language, and has remained unchanged
since 1988. C is the main programming language of nearly all operating
systems and there is no plan of any significant update. On the other hand,
the most recent ``The C++ programming language''@footnote{Bjarne
Stroustrup. @emph{The C++ programming language}. Addison-Wesley
Professional; 4th edition, 2013.} book, also written by its author, has
1366 pages and its fourth edition came out in 2013! As discussed in
@ref{Science and its tools}, it is very important for other scientists to
be able to readily read the code of a program at will, with minimum
requirements.

@cindex Object oriented programming
In C++, inheriting objects in the object-oriented programming paradigm and
their internal functions make the code very easy to write for a programmer
who is deeply invested in those objects and understands all their relations
well. But it simultaneously makes reading the program extremely hard for a
first-time reader (a curious scientist who only wants to know how a small
step was done). Before understanding the methods, the scientist has to
invest a lot of time and energy in understanding those objects and their
relations. In C, on the other hand, everything is done with basic language
types, for example @code{int}s or @code{float}s, and pointers to them to
define arrays. So when an outside reader is only interested in one part of
the program, that part is all they have to understand.

Recently it is also becoming common to write scientific software in Python,
or a combination of it with C or C++. Python is a high-level scripting
language which doesn't need compilation. It is very useful when you want to
do something on the go and don't want to be halted by the troubles of
compiling, linking, memory checking, etc. When the datasets are small and
the job is temporary, this ability of Python is great and is highly
encouraged. A very good example is plotting, in which Python is undoubtedly
one of the best.

But as the datasets increase in size and the processing becomes more
complicated, the speed of Python scripts significantly decreases. So when
the program doesn't change too often and is widely used in a large
community, mostly on large datasets (like astronomical images), using
Python will waste a lot of valuable research-hours. It is possible to wrap
C or C++ functions with Python to fix the speed issue, but this creates
further complexity, because the interested scientist has to master two
programming languages and their connection (which is not trivial).

Like C++, Python is object-oriented, so as explained above, it needs a
high level of experience with that particular program to reasonably
understand its inner workings. To make things worse, since it is mainly for
on-the-go programming@footnote{Note that Python is good for fast
programming, not fast programs.}, it can undergo significant changes. One
recent example is how Python 2.x and Python 3.x are not compatible. Many
research teams that invested heavily in Python 2.x cannot benefit from
Python 3.x or future versions any more. Some converters are available, but
since they are automatic, lots of complications might arise in the
conversion@footnote{For example see @url{https://arxiv.org/abs/1712.00461,
Jenness (2017)}, which describes how LSST is managing the transition.}.

If a research project begins using Python 3.x today, there is no telling
how compatible their investments will be when Python 4.x or 5.x come
out. This stems from the core principles of Python, which are very useful
for the `on the go' usage described before, but not for future
usage. Reproducibility (the ability to run the code in the future) is a
core principle of any scientific result, and of the software that produced
it. Rebuilding all the dependencies of a software in an obsolete language
is not easy. Future-proof code (as long as current operating systems are
used) is written in C.

The portability of C is best demonstrated by the fact that both C++ and
Python are part of the C-family of programming languages which also include
Julia, Java, Perl, and many other languages. C libraries can be immediately
included in C++, and it is easy to write wrappers for them in all C-family
programming languages. This will allow other scientists to benefit from C
libraries using any C-family language that they prefer. As a result,
Gnuastro's library is already usable in C and C++, and wrappers will
be@footnote{@url{http://savannah.gnu.org/task/?13786}} added for
higher-level languages like Python, Julia and Java.

@cindex Low level programming
@cindex Programming, low level
The final reason is speed. This is another very important aspect of C
which is not independent of simplicity (the first reason discussed
above). The abstractions provided by the higher-level languages (which also
make learning them harder for a newcomer) come at the cost of speed. Since
C is a low-level language@footnote{Low-level languages are those that
directly operate the hardware, like assembly languages. So C is actually a
high-level language, but it can be considered one of the lowest-level
languages among all high-level languages.} (closer to the hardware), it is
much less complex for both the human reader @emph{and} the computer. The
benefits of simplicity for a human were discussed above. Simplicity for the
computer translates into more efficient (faster) programs. This creates a
much closer relation between the scientist/programmer (or their program)
and the actual data and processing. The GNU coding
standards@footnote{@url{http://www.gnu.org/prep/standards/}} also encourage
the use of C over all other languages when generality of usage and ``high
speed'' are desired.





@node Program design philosophy, Coding conventions, Why C, Developing
@section Program design philosophy

@cindex Gnulib
The core processing functions of each program (and all libraries) are
written mostly with the basic ISO C90 standard. We do make lots of use of
the GNU additions to the C language in the GNU C library@footnote{Gnuastro
uses many GNU additions to the C library. However, thanks to the GNU
Portability library (Gnulib) which is included in the Gnuastro tarball,
users of non-GNU/Linux operating systems can also benefit from all these
features when using Gnuastro.}, but these functions are mainly used in the
user interface functions (reading your inputs and preparing them prior to
or after the analysis). The actual algorithms, which most scientists would
be more interested in, are much closer to ISO C90. For this reason,
program source files that deal with user interface issues and those doing
the actual processing are clearly separated, see @ref{Program source}. If
anything particular to the GNU C library is used in the processing
functions, it is explained in the comments in between the code.

@cindex GNU Coreutils
All the Gnuastro programs provide very low-level and modular operations
(modeled on GNU Coreutils). Almost all the basic command-line programs like
@command{ls}, @command{cp} or @command{rm} on GNU/Linux operating systems
are part of GNU Coreutils. This enables you to use shell scripting
languages (for example GNU Bash) to operate on a large number of files or
do very complex things through the creative combinations of these tools
that the authors had never dreamed of. We have put a few simple examples in
@ref{Tutorials}.

@cindex @LaTeX{}
@cindex GNU Bash
@cindex Python Matplotlib
@cindex Matplotlib, Python
@cindex PGFplots in @TeX{} or @LaTeX{}
For example, all the analysis output can be saved as ASCII tables which can
be fed into your favorite plotting program to inspect visually. Python's
Matplotlib is very useful for fast plotting of the tables to immediately
check your results. If you want to include the plots in a document, you can
use the PGFplots package within @LaTeX{}; no attempt is made to include
such operations in Gnuastro. In short, Bash can act as a glue to connect
the inputs and outputs of all these various Gnuastro programs (and other
programs) in any fashion. Of course, Gnuastro's programs are just
front-ends to the main workhorse (@ref{Gnuastro library}), allowing a user
to create their own programs (for example with @ref{BuildProgram}). So once
the functions within programs become mature enough, they will be moved
into the libraries for even more general applications.

The advantage of this architecture is that the programs become small and
transparent: the starting and finishing point of every program is clearly
demarcated. On a modern computer (with fast file input-output), for nearly
all operations of a modest level of complexity, the read/write speed is
insignificant compared to the actual processing a program does. Therefore
the complexity which arises from sharing memory in a large application is
simply not worth the speed gain. Gnuastro's design is heavily influenced by
Eric Raymond's ``The Art of Unix Programming''@footnote{Eric
S. Raymond, 2004, @emph{The Art of Unix Programming}, Addison-Wesley
Professional Computing Series.}, which beautifully describes the design
philosophy and practice that led to the success of Unix-based operating
systems@footnote{KISS principle: Keep It Simple, Stupid!}.





@node Coding conventions, Program source, Program design philosophy, Developing
@section Coding conventions

@cindex GNU coding standards
@cindex Gnuastro coding convention
In Gnuastro, we try our best to follow the GNU coding standards. Added to
those, Gnuastro defines the following conventions. It is very important for
readability that the whole package follows the same convention.

@itemize

@item
The code must be easy to read by eye, so when the order of several lines
within a function does not matter (for example when defining variables at
the start of a function), you should put the lines in order of increasing
length and group the variables with similar types, such that this
half-pyramid of declarations becomes most visible. If the reader is
interested in a particular variable, a simple search will show it to
them. However, this visual aid greatly helps in general inspections of the
code and helps the reader get a grasp of the function's processing.
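
As a hypothetical sketch (the variable names are only for this
demonstration), a set of declarations following this convention might look
like this:

@example
size_t i;
double sum;
float *arr1, *arr2;
gal_data_t *input, *kernel;
@end example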

@item
A function that cannot be fully displayed (vertically) in your monitor is
probably too long and may be more useful if it is broken up into multiple
functions. 40 lines is usually a good reference. When the start and end of
a function are clearly visible in one glance, the function is much easier
to understand. This is most important for low-level functions (which
usually define a lot of variables). Low-level functions do most of the
processing; they will also be the most interesting part of a program for an
inquiring astronomer. This convention is less important for higher-level
functions that don't define too many variables and whose only purpose is to
run the lower-level functions in a specific order and with checks.

@cindex Optimization flag
@cindex GCC: GNU Compiler Collection
@cindex GNU Compiler Collection (GCC)
In general you can be very liberal in breaking up the functions into
smaller parts; the GNU Compiler Collection (GCC) will automatically compile
the functions as inline functions when the optimizations are turned on, so
you don't have to worry about decreasing the speed. By default Gnuastro
will compile with the @option{-O3} optimization flag.

@cindex Buffers (Emacs)
@cindex Emacs buffers
@item
All Gnuastro hand-written text files (C source code, Texinfo documentation
source, and version control commit messages) should not exceed @strong{75}
characters per line. Monitors today are certainly much wider, but with this
limit, reading the functions becomes much easier. Also for the developers,
it allows multiple files (or multiple views of one file) to be displayed
beside each other on wide monitors.

Emacs's buffers are excellent for this capability; setting a buffer width
of 80 with `@key{C-u 80 C-x 3}' will allow you to view and work on several
files or different parts of one file using the wide monitors common
today. Emacs buffers can also be used as a shell prompt and to compile the
program (with @key{M-x compile}), and 80 characters is the default width in
most terminal emulators. If you use Emacs, Gnuastro sets the 75 character
@command{fill-column} variable automatically for you, see the cartouche
below.

For long comments you can press @key{Alt-q} in Emacs to separate them into
multiple lines automatically. For long literal strings, you can use the
fact that in C, two string literals immediately after each other are
concatenated, for example @code{"The first part, " "and the second
part."}. Note the space character at the end of the first part. Since they
are now separated, you can easily break a long literal string into several
lines and adhere to the maximum 75 character line length policy.
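
For demonstration, a long string might be broken like the sketch below
(the compiler concatenates the two literals into a single string, so there
is no run-time cost):

@example
char *message = "The first half of a message that is too long "
                "to comfortably fit within one 75-character line.";
@end example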

@cindex Header file
@item
The headers required by each source file (ending with @file{.c}) should be
defined inside of it. All the headers a complete program needs should
@emph{not} be stacked in another header to include in all source files (for
example @file{main.h}). Although most `professional' programmers choose
this single header method, Gnuastro is primarily written for
professional/inquisitive astronomers (who are generally amateur
programmers). The list of header files included provides valuable general
information and helps the reader. @file{main.h} may only include the header
file(s) that define types that the main program structure needs, see
@file{main.h} in @ref{Program source}. Those particular header files that
are included in @file{main.h} can of course be ignored (not included) in
separate source files.

@item
The headers should be classified (by an empty line) into separate
groups:

@enumerate
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
@item
@code{#include <config.h>}: This must be the first code line (not commented
or blank) in each source file @emph{within Gnuastro}. It sets macros that
the GNU Portability Library (Gnulib) will use for a unified environment
(GNU C Library), even when the user is building on a system that doesn't
use the GNU C library.

@item
The C library header files, for example @file{stdio.h}, @file{stdlib.h}, or
@file{math.h}.
@item
Installed library header files, including Gnuastro's installed headers (for
example @file{cfitsio.h} or @file{gsl/gsl_rng.h}, or
@file{gnuastro/fits.h}).
@item
Gnuastro's internal headers (that are not installed), for example
@file{gnuastro-internal/options.h}.
@item
For programs, the @file{main.h} file (which is needed by the next group of
headers).
@item
That particular program's header files, for example @file{mkprof.h}, or
@file{noisechisel.h}.
@end enumerate

@noindent
Although the order of inclusion within each group does not matter, sort the
included headers in each group by length, as described above.
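
For demonstration, the top of a hypothetical program source file
(@file{myprog.h} is just an example name) might therefore look like this:

@example
/* First group: configuration header. */
#include <config.h>

/* C library headers. */
#include <stdio.h>
#include <stdlib.h>

/* Installed library headers. */
#include <gsl/gsl_rng.h>
#include <gnuastro/fits.h>

/* Gnuastro's internal headers. */
#include <gnuastro-internal/options.h>

/* This program's 'main.h', then its own headers. */
#include "main.h"

#include "myprog.h"
@end example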

@item
All function names, variables, etc. should be in lower case. Macros and
constant global @code{enum}s should be in upper case.

@item
For the naming of exported header files, functions, variables, macros, and
library functions, we adopt similar conventions to those used by the GNU
Scientific Library
(GSL)@footnote{@url{https://www.gnu.org/software/gsl/design/gsl-design.html#SEC15}}.
In particular, in order to avoid clashes with the names of functions and
variables coming from other libraries the name-space `@code{gal_}' is
prefixed to them. GAL stands for @emph{G}NU @emph{A}stronomy
@emph{L}ibrary.

@item
All installed header files should be in the @file{lib/gnuastro} directory
(under the top Gnuastro source directory). After installation, they will be
put in the @file{$prefix/include/gnuastro} directory (see @ref{Installation
directory} for @file{$prefix}). Therefore with this convention Gnuastro's
headers can be included in internal (to Gnuastro) and external (a library
user) source files with the same line
@example
#include <gnuastro/headername.h>
@end example
Note that the GSL convention for header file names is
@file{gsl_specialname.h}, so your include directive for a GSL header must
be something like @code{#include <gsl/gsl_specialname.h>}. Gnuastro doesn't
follow this GSL guideline because of the repeated @code{gsl} in the include
directive. It can be confusing and cause bugs for beginners. All Gnuastro
(and GSL) headers must be located within a unique directory and will not be
mixed with other headers. Therefore the `@file{gsl_}' prefix to the header
file names is redundant@footnote{For GSL, this prefix has an internal
technical application: GSL's architecture mixes installed and not-installed
headers in the same directory. This prefix is used to identify their
installation status. Therefore this filename prefix in GSL is a technical
internal issue (for developers, not users).}.

@item
@cindex GNU coding standards
All installed functions and variables should also include the base-name of
the file in which they are defined as a prefix, using underscores to
separate words@footnote{The convention of using underscores to separate
words is called ``snake case'' (or ``snake_case''). It is also recommended
by the GNU coding standards.}. The same applies to exported macros, but in
upper case.
For example in Gnuastro's top source directory, the prototype of function
@code{gal_box_border_from_center} is in @file{lib/gnuastro/box.h}, and the
macro @code{GAL_POLYGON_MAX_CORNERS} is defined in
@code{lib/gnuastro/polygon.h}.

This is necessary to give any user (who is not familiar with the library
structure) the ability to follow the code. This convention does make the
function names longer (a little harder to write), but the extra
documentation it provides plays an important role in Gnuastro and is worth
the cost.

@item
@cindex GNU Emacs
@cindex Trailing space
There should be no trailing white space in a line. To do this
automatically every time you save a file in Emacs, add the following
line to your @file{~/.emacs} file.
@example
(add-hook 'before-save-hook 'delete-trailing-whitespace)
@end example

@item
@cindex Tabs are evil
There should be no tabs in the indentation@footnote{If you use Emacs,
Gnuastro's @file{.dir-locals.el} file will automatically never use tabs for
indentation. To make this a default in all your Emacs sessions, you can add
the following line to your @file{~/.emacs} file: @command{(setq-default
indent-tabs-mode nil)}}.

@item
@cindex GNU Emacs
@cindex Function groups
@cindex Groups of similar functions
Individual, contextually similar, functions in a source file are separated
by 5 blank lines, so they can easily be seen to be related as a group when
parsing the source code by eye. In Emacs you can use @key{CTRL-u 5
CTRL-o}.

@item
One group of contextually similar functions in a source file is separated
from another with 20 blank lines. In Emacs you can use @key{CTRL-u 20
CTRL-o}. Each group of functions has a short descriptive title of the
functions in that group. This title is surrounded by asterisks (@key{*}) to
make it clearly distinguishable. Such contextual grouping and clear titles
are very important for easily understanding the code.

@item
Always read the comments before the block of code under them. Similarly,
try to add as many comments as you can regarding every block of
code. Effectively, we want someone to get a good feeling for the steps
without having to read the C code itself, only by reading the
comments. This follows similar principles as
@url{https://en.wikipedia.org/wiki/Literate_programming, Literate
programming}.

@end itemize

The last two conventions are not common and might benefit from a short
discussion here. For a professional developer with good experience in
advanced text editor operations, these two are redundant. However, recall
that Gnuastro aspires to be friendly to unfamiliar and inexperienced (in
programming) eyes. In other words, as discussed in @ref{Science and its
tools}, we want the code to appear welcoming to someone who is completely
new to coding (and text editors) and only has a scientific curiosity.

Newcomers to coding and development, who are curious enough to venture
into the code, will probably not be using (or have any knowledge of)
advanced text editors. They will see the raw code on the web page or in a
simple text editor (like Gedit) as plain text. Trying to learn and
understand a file with dense functions that are all spaced with only one or
two blank lines can be very daunting for a newcomer. But when they scroll
through the file and see clear titles and meaningful spaces for similar
functions, we are helping them find and focus on the part they are most
interested in sooner and more easily.

@cartouche
@cindex GNU Emacs
@noindent
@strong{GNU Emacs, the recommended text editor:} GNU Emacs is an
extensible and easily customizable text editor which many programmers
rely on for developing due to its countless features. Among them, it
allows specification of certain settings that are applied to a single
file or to all files in a directory and its sub-directories. In order
to harmonize code coming from different contributors, Gnuastro comes
with a @file{.dir-locals.el} file which automatically configures Emacs
to satisfy most of the coding conventions above when you are using it
within Gnuastro's directories. Thus, Emacs users can readily start
hacking into Gnuastro. If you are new to developing, we strongly
recommend this editor. Emacs was the first project released by GNU and
is still one of its flagship projects. Some resources can be found at:

@table @asis
@item Official manual
At @url{https://www.gnu.org/software/emacs/manual/emacs.html}. This is a
great and very complete manual which has been improved for over 30 years
and is the best starting point to learn Emacs. It just requires a little
patience and practice, but rest assured that you will be rewarded. If you
install Emacs, you also have access to this manual on the command-line with
the following command (see @ref{Info}).

@example
$ info emacs
@end example

@item A guided tour of Emacs
At @url{https://www.gnu.org/software/emacs/tour/}. A short visual tour
of Emacs, officially maintained by the Emacs developers.

@item Unofficial mini-manual
At @url{https://tuhdo.github.io/emacs-tutor.html}. A shorter manual
which contains nice animated images of using Emacs.

@end table
@end cartouche



@node Program source, Documentation, Coding conventions, Developing
@section Program source

@cindex Source file navigation
@cindex Navigating source files
@cindex Program structure convention
@cindex Convention for program source
@cindex Gnuastro program structure convention
Besides the fact that all the programs share some functions that were
explained in @ref{Library}, everything else about each program is
completely independent. Recall that Gnuastro is written for an active
astronomer/scientist (not a passive one who just uses the software). It
must thus be easily navigable. Hence there are fixed source files (that
contain fixed operations) that must be present in all programs; these are
discussed fully in @ref{Mandatory source code files}. To easily understand
the explanations in this section you can use @ref{The TEMPLATE program},
which contains the bare minimum code for one working program. This template
can also be used to easily add new utilities: just copy and paste the
directory and replace @code{TEMPLATE} with your program's name.



@menu
* Mandatory source code files::  Description of files common to all programs.
* The TEMPLATE program::        Template for easy creation of a new program.
@end menu

@node Mandatory source code files, The TEMPLATE program, Program source, Program source
@subsection Mandatory source code files

Some programs might need lots of source files and if there is no fixed
convention, navigating them can become very hard for a new inquirer into
the code. The following source files exist in every program's source
directory (which is located in @file{bin/progname}). For small programs,
these files are enough. Larger programs will need more files and developers
are encouraged to define any number of new files. It is just important that
the following list of files exist and do what is described here. When
creating other source files, please choose filenames that are a complete
single word: don't abbreviate (abbreviations are cryptic). For a minimal
program containing all these files, see @ref{The TEMPLATE program}.

@vtable @file

@item main.c
@cindex @code{main} function
Each executable has a @code{main} function, which is located in
@file{main.c}. Therefore this file is the starting point when reading any
program's source code. No actual processing functions should be defined in
this file; the function(s) in this file are only meant to connect the
highest-level steps of each program. Generally, @code{main} will first call
the top user interface function to read user input and make all the
preparations. Then it will pass control to the top processing function for
that program. The functions to do both these jobs must be defined in other
source files.
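
As a rough sketch (for a hypothetical program named @code{myprog}; the
exact contents differ slightly between programs), a @file{main.c} might
look like this:

@example
#include <config.h>

#include <stdlib.h>

#include "main.h"

#include "ui.h"              /* Needs main.h. */
#include "myprog.h"          /* Needs main.h. */

int
main(int argc, char *argv[])
@{
  struct myprogparams p=@{0@};

  /* Read the inputs, check them and do all preparations. */
  ui_read_check_inputs_setup(argc, argv, &p);

  /* Run the top processing function. */
  myprog(&p);

  /* Return successfully. */
  return EXIT_SUCCESS;
@}
@end example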

@item main.h
@cindex Top root structure
@cindex @code{prognameparams}
@cindex Root parameter structure
@cindex Main parameters C structure
All the major parameters which will be used in the program must be stored
in a structure which is defined in @file{main.h}. The name of this
structure is usually @code{prognameparams}, for example @code{cropparams}
or @code{noisechiselparams}. So @code{#include "main.h"} will be a staple
in all the source files of the program. A pointer to this structure is also
regularly the first (and only) argument of most of the program's functions,
which greatly helps readability.

Keeping all the major parameters of a program in this structure has the
major benefit that most functions will only need one argument: a pointer to
this structure. This will significantly facilitate the job of the
programmer, the inquirer and the computer. All the programs in Gnuastro are
designed to be low-level, small and independent parts, so this structure
should not get too large.

@cindex @code{p}
The main root structure of all programs contains at least one instance of
the @code{gal_options_common_params} structure. This structure will keep
the values of all common options in Gnuastro's programs (see @ref{Common
options}). This top root structure is conveniently called @code{p} (short
for parameters) by all the functions in the programs and the common options
parameters within it are called @code{cp}. With this convention any reader
can immediately understand where to look for the definition of a
parameter. For example you know that @code{p->cp.output} is in the common
parameters while @code{p->threshold} is in the program's parameters.
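
As a hypothetical sketch (the @code{inputname} and @code{threshold}
elements are only examples for this demonstration), such a root structure
in @file{main.h} might look like this:

@example
#include <gnuastro/data.h>
#include <gnuastro-internal/options.h>

struct myprogparams
@{
  /* From the command-line. */
  struct gal_options_common_params cp; /* Common parameters.   */
  char                     *inputname; /* Input file name.     */
  float                     threshold; /* Program's threshold. */

  /* Internal (set during runtime). */
  gal_data_t                   *input; /* Input dataset.       */
@};
@end example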

@cindex Structure de-reference operator
@cindex Operator, structure de-reference
With this basic root structure, source code of functions can potentially
become full of structure de-reference operators (@command{->}) which can
make the code very unreadable. In order to avoid this, whenever a
structure element is used more than a couple of times in a function, a
variable of the same type and with the same name (so it can be searched) as
the desired structure element should be defined with the value of the root
structure inside of it in definition time. Here is an example.

@example
char *hdu=p->cp.hdu;
float threshold=p->threshold;
@end example

@item args.h
@cindex GNU C library
@cindex Argp argument parser
The options particular to each program are defined in this file. Each
option is defined by a block of parameters in @code{program_options}. These
blocks are all you should modify in this file; leave the bottom group of
definitions untouched. They are fed directly into the GNU C library's Argp
facilities and it is recommended to have a look at Argp for a better
understanding of what is going on, although this is not required here.

Each element of the block defining an option is described under
@code{argp_option} in @code{bootstrapped/lib/argp.h} (from Gnuastro's top
source file). Note that the last few elements of this structure are
Gnuastro additions (not documented in the standard Argp manual). The values
to these last elements are defined in @code{lib/gnuastro/type.h} and
@code{lib/gnuastro-internal/options.h} (from Gnuastro's top source
directory).

@item ui.h
Besides declaring the exported functions of @code{ui.c}, this header also
keeps the ``key''s to every program-specific option. The first class of
keys is for the options that have a short-option version (a single letter,
see @ref{Options}). The character that is defined here is the option's
short option name. The list of available alphabet characters can be seen in
the comments. Recall that some common options also take some characters;
for those, see @file{lib/gnuastro-internal/options.h}.

The second group of keys is for options that don't have a short option
alternative. Only the first in this group needs an explicit value
(@code{1000}); the rest will be given a value by C's @code{enum}
definition, so the actual value is irrelevant and must never be used
directly, always use the name.
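
As a hypothetical sketch (the option names are only for this
demonstration), the keys in @file{ui.h} might be defined like this:

@example
enum option_keys_enum
@{
  /* With a short-option version. */
  UI_KEY_KERNEL       = 'k',
  UI_KEY_THRESHOLD    = 't',

  /* Only with a long version (the first takes a value above any
     possible character, the rest are set by the enum definition). */
  UI_KEY_FIRSTLONG    = 1000,
  UI_KEY_SECONDLONG,
@};
@end example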

@item ui.c
@cindex User interface functions
@cindex Functions for user interface
Everything related to reading the user's arguments and options, checking
the configuration files, and checking the consistency of the input
parameters before the actual processing should be done in this file. Since
most of these functions are alike between programs, with only the internal
checks and structure parameters differing, we recommend going through the
@code{ui.c} of @ref{The TEMPLATE program}, or of several other programs,
for a better understanding.

The most high-level function in @file{ui.c} is named
@code{ui_read_check_inputs_setup}. It accepts the raw command-line inputs
and a pointer to the root structure for that program (see the explanation
for @file{main.h}). This is the function that @code{main} calls. The basic
idea of the functions in this file is that the processing functions should
need a minimum number of such checks. With this convention an inquirer who
only wants to understand one part of the code (mostly the processing part,
not the user input details and sanity checks) can easily do so in the later
files. It also makes all the errors related to input appear before the
processing begins, which is more convenient for the user.

@item progname.c, progname.h
@cindex Top processing source file
The high-level processing functions in each program are in a file named
@file{progname.c}, for example @file{crop.c} or @file{noisechisel.c}. The
function within these files which @code{main} calls is also named after
the program, for example

@example
void
crop(struct cropparams *p)
@end example

@noindent
or

@example
void
noisechisel(struct noisechiselparams *p)
@end example

@noindent
In this manner, if an inquirer is interested in the processing steps, they
can immediately come and check this file for the first processing step
without having to go through @file{main.c} and @file{ui.c} first. In most
situations, any failure in any step of a program will result in an
informative error message and an immediate abort of the program, so there
is usually no need for return values. In more complicated situations where
a return value is necessary, @code{void} will be replaced with an
@code{int} in the examples above. This value must be directly returned by
@code{main}, so it has to be an @code{int}.

@item authors-cite.h
@cindex Citation information
This header file keeps the global variables containing the program's
authors and its BibTeX record for citation. They are used in the outputs of
the common options @option{--version} and @option{--cite}, see
@ref{Operating mode options}.

@end vtable

@node The TEMPLATE program,  , Mandatory source code files, Program source
@subsection The TEMPLATE program

The extra creativity offered by libraries comes at a cost: you have to
actually write your @code{main} function and get your hands dirty in
managing the user's inputs: are all the necessary parameters given a value?
Is the input in the correct format? Do the options and the inputs
correspond? And many other similar checks. So when an operation has
well-defined inputs and outputs and is commonly needed, it is much more
worthwhile to simply use all the great features that Gnuastro has already
defined for such operations.

To make it easier to learn/apply the internal program infrastructure
discussed in @ref{Mandatory source code files}, Gnuastro ships with a
template program in the @ref{Version controlled source}. This template
program is not distributed in the Gnuastro tarball, so that it doesn't
confuse people using the tarball. The @file{bin/TEMPLATE} directory in
Gnuastro's Git repository contains the bare-minimum files necessary to
define a new program, and all the basic/necessary files/functions are
pre-defined there.

Below you can see a list of initial steps to take for customizing this
template. We assume that after cloning Gnuastro's history, you have already
bootstrapped Gnuastro; if not, please see @ref{Bootstrapping}.

@enumerate
@item
Select a name for your new program (for example @file{myprog}).

@item
Copy the @file{TEMPLATE} directory to a directory with your program's name:
@example
$ cp -R bin/TEMPLATE bin/myprog
@end example

@item
As with all source files in Gnuastro, all the files in the template also
have a copyright notice at their top. Open all the files and correct these
notices: 1) the first line contains a single-line description of the
program; 2) in the second line only the name of your program needs to be
fixed; and 3) add your name and email as a ``Contributing author''. As your
program grows, you will need to add new files; don't forget to add this
notice in those new files too, just put your name and email under
``Original author'' and correct the copyright years.

@item
Open @file{configure.ac} in the top Gnuastro source. This file manages the
operations that are done when a user runs @file{./configure}. Going down
the file, you will notice repetitive parts for each program, with the
program names following an alphabetic ordering in each part. There is also
a commented line/patch for the @file{TEMPLATE} program in each part. You
can copy one line/patch (from the program above or below your desired name,
for example) and paste it in the proper place for your new program. Then
correct the names of the copied program to your new program's name. There
are multiple places where this has to be done, so be patient and go down to
the bottom of the file. Ultimately add @file{bin/myprog/Makefile} to
@code{AC_CONFIG_FILES}; only here the ordering depends on the length of the
name (it isn't alphabetical).

@item
Open @file{Makefile.am} in the top Gnuastro source. Similar to the previous
step, add your new program similar to all the other programs. Here there
are only two places: 1) at the top where we define the conditionals (three
lines per program), and 2) immediately under it as part of the value for
@code{SUBDIRS}.

@item
Open @file{doc/Makefile.am} and similar to @file{Makefile.am} (above), add
the proper entries for the man-page of your program to be created (here,
the variable that keeps all the man-pages to be created is
@code{dist_man_MANS}). Then scroll down and add a rule to build the
man-page similar to the other existing rules (in alphabetical order). Don't
forget to add a short one-line description here, it will be displayed on
top of the man-page.

@item
Change @code{TEMPLATE.c} and @code{TEMPLATE.h} to @code{myprog.c} and
@code{myprog.h} in the file names:

@example
$ cd bin/myprog
$ mv TEMPLATE.c myprog.c
$ mv TEMPLATE.h myprog.h
@end example

@item
@cindex GNU Grep
Correct all occurrences of @code{TEMPLATE} in the input files to
@code{myprog} (in short or long format). You can get a list of all
occurrences with the following command. If you use Emacs, it will be able to
parse the Grep output and open the proper file and line automatically. So
this step can be very easy.

@example
$ grep --color -nHi -e template *
@end example

@item
Run the following commands to re-build the configuration and build system,
and then to configure and build Gnuastro (which now includes your exciting
new program).
@example
$ autoreconf -f
$ ./configure
$ make
@end example

@item
You are done! You can now start customizing your new program to do your
special processing. When it's complete, don't forget to also add checks, so
it can be tested at least once on a user's system with @command{make
check}, see @ref{Test scripts}. Finally, if you would like to share it with
all Gnuastro users, inform us so we can merge it into Gnuastro's main
history.
@end enumerate







@node Documentation, Building and debugging, Program source, Developing
@section Documentation

Documentation (this book) is an integral part of Gnuastro (see @ref{Science
and its tools}).  Documentation is not considered a separate project and
must be written by its developers. Users can make edits/corrections, but
the initial writing must be by the developer. So, no change is considered
valid for implementation unless the respective parts of the book have also
been updated. The following procedure can be a good suggestion to take when
you have a new idea and are about to start implementing it.

The steps below are not a requirement; the important thing is that when
you send your work to be included in Gnuastro, the book and the code are
both fully up-to-date and compatible, with the purpose of the update very
clearly explained. You can follow any strategy you like; the following is
what we have found to be most useful so far.

@enumerate
@item
Edit the book and fully explain your desired change, such that your idea is
completely embedded in the general context of the book with no sense of
discontinuity for a first time reader. This will allow you to plan the idea
much more accurately and in the general context of Gnuastro (a particular
program or library). Later on, when you are coding, this general context
will significantly help you as a road-map.

A very important part of this process is the program/library introduction.
These first few paragraphs explain the purposes of the program or library
and are fundamental to Gnuastro. Before actually starting to code, explain
your idea's purpose thoroughly in the start of the respective/new section
you wish to work on. While actually writing its purpose for a new reader,
you will probably get some valuable and interesting ideas that you hadn't
thought of before. This has occurred several times during the creation of
Gnuastro.

If an introduction already exists, embed or blend your idea's purpose with
the existing introduction. We emphasize that doing this is equally useful
for you (as the programmer) as it is useful for the user (reader). Recall
that the purpose of a program is very important, see @ref{Program design
philosophy}.

As you have already noticed for every program/library, it is very important
that the basics of the science and technique be explained in separate
subsections prior to the `Invoking Programname' subsection. If you are
writing a new program or your addition to an existing program involves a
new concept, also include such subsections and explain the concepts so a
person completely unfamiliar with the concepts can get a general initial
understanding. You don't have to go deep into the details, just enough to
get an interested person (with absolutely no background) started with some
good pointers/links to where they can continue studying if they are more
interested. If you feel you can't do that, then you have probably not
understood the concept yourself. If you feel you don't have the time, then
think about yourself as the reader in one year: you will forget almost all
the details, so now that you have done all the theoretical preparations,
add a few more hours and document it. Therefore in one year, when you find
a bug or want to add a new feature, you don't have to prepare as much. Have
in mind that your only limitation in length is the fatigue of the reader
after reading a long text, nothing else. So as long as you keep it
relevant/interesting for the reader, there is no page number limit/cost.

It might also help if you start discussing the usage of your idea in the
`Invoking ProgramName' subsection (explaining the options and arguments you
have in mind) at this stage too. Actually starting to write it here will
really help you later when you are coding.

@item
After you have finished adding your initial intended plan to the book,
start coding your change or new program within the Gnuastro source
files. While you are coding, you will notice that some things should be
different from what you wrote in the book (your initial plan). So correct
them as you are actually coding, but don't worry too much about missing a
few things (see the next step).

@item
After your work has been fully implemented, read the section documentation
from the start and make sure that you didn't miss any change in the coding
and that the context is fairly continuous for a first-time reader (who
hasn't seen the book or known Gnuastro before your change).

@item
If the change is notable, also update the @file{NEWS} file.
@end enumerate









@node Building and debugging, Test scripts, Documentation, Developing
@section Building and debugging

@cindex GNU Libtool
@cindex GNU Autoconf
@cindex GNU Automake
@cindex GNU build system
To build the various programs and libraries in Gnuastro, the GNU build
system is used, which defines the steps in @ref{Quick start}. It consists
of GNU Autoconf, GNU Automake and GNU Libtool, which are collectively known
as GNU Autotools. They provide a very portable system to check the host's
environment and compile Gnuastro based on that. They also make installing
everything in the standard places very easy for the programmer. Most of the
lower-case files that you see in the top source directory of the tarball
are created by these three tools (see @ref{Version controlled source}). To
facilitate the building and testing of your work during development,
Gnuastro comes with two useful scripts:

@table @file
@cindex @file{developer-build}
@item developer-build
This is more fully described in @ref{Configure and build in RAM}. During
development, you will usually run this command only once (at the start of
your work).

@cindex @file{tests/during-dev.sh}
@item tests/during-dev.sh
This script is designed to be run each time you make a change and want to
test your work (with some possible input and output). The script itself is
heavily commented and thoroughly describes the best way to use it, so we
won't repeat it here.

As a short summary: you specify the build directory, an output directory
(for the built program to be run in, and also contains the inputs), the
program's short name and the arguments and options that it should be run
with. This script will then build Gnuastro, go to the output directory and
run the built executable from there. One option for the output directory
might be your desktop, so you can easily see the output files and delete
them when you are finished. The main purpose of these scripts is to keep
your source directory clean and facilitate your development.
@end table

@cindex Debugging
@cindex Optimization
By default all the programs are compiled with optimization flags for
increased speed. A side effect of optimization is that valuable debugging
information is lost. All the libraries are also linked as shared libraries
by default. Shared libraries further complicate the debugging process and
significantly slow down the compilation (the @command{make} command). So
during development it is recommended to configure Gnuastro as follows:

@example
$ ./configure --enable-debug
@end example

@noindent
In @file{developer-build} you can ask for this behavior through the
@option{--debug} option, see @ref{Separate build and source directories}.

In order to understand the building process, you can go through the
Autoconf, Automake and Libtool manuals; like all GNU manuals, they provide
both a great tutorial and technical documentation. The ``A small Hello
World'' section in Automake's manual (in chapter 2) can be a good starting
guide after you have read the separate introductions.





@node Test scripts, Developer's checklist, Building and debugging, Developing
@section Test scripts

@cindex Test scripts
@cindex Gnuastro test scripts
As explained in @ref{Tests}, for every program some simple tests are
written to check the various independent features of the program. All the
tests are placed in the @file{tests/} directory. The
@file{tests/prepconf.sh} script is the first `test' that will be run. It
will copy all the configuration files from the various directories to a
@file{tests/.gnuastro} directory (which it will make) so the various tests
can use these default values. This script will also make sure the programs
don't go searching for user or system-wide configuration files, to avoid
mixing values with a different Gnuastro version on the system.

For each program, the tests are placed inside directories with the
program name. Each test is written as a shell script. The last line of
this script is the test which runs the program with certain
parameters. The return value of this script determines the fate of the
test, see the ``Support for test suites'' chapter of the Automake
manual for a very nice and complete explanation. In every script, two
variables are defined at first: @code{prog} and @code{execname}. The
first specifies the program name and the second the location of the
executable.
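
For demonstration, a minimal sketch of such a test script for a
hypothetical program named @code{myprog} (the option and file names are
only examples) might look like this:

@example
# Preliminaries: the program's name and the executable's location.
prog=myprog
execname=../bin/$prog/ast$prog

# The actual test (its exit status determines the test's fate).
$execname input.fits --someoption=1.23 --output=$prog.fits
@end example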

@cindex Build tree
@cindex Source tree
@cindex @file{developer-build}
The most important thing to have in mind about all the test scripts is that
they are run from inside the @file{tests/} directory in the ``build
tree'', which can be different from the directory they are stored in (known
as the ``source tree'')@footnote{The @file{developer-build} script also
uses this feature to keep the source and build directories separate (see
@ref{Separate build and source directories}).}. This distinction is made by
GNU Autoconf and Automake (which configure, build and install Gnuastro) so
that you can install the program even if you don't have write access to the
directory keeping the source files. See the ``Parallel build trees (a.k.a
VPATH builds)'' section in the Automake manual for a nice explanation.

Because of this, any necessary inputs that are distributed in the
tarball@footnote{In many cases, the inputs of a test are outputs of
previous tests; this doesn't apply to this class of inputs, because all
outputs of previous tests are in the ``build tree''.}, for example the
catalogs necessary for checks in MakeProfiles and Crop, must be identified
with the @command{$topsrc} prefix instead of @command{../} (for the top
source directory that is unpacked). This @command{$topsrc} variable points
to the source tree where the script can find the source data (it is defined
in @file{tests/Makefile.am}). The executables and other test products were
built in the build tree (where they are being run), so they don't need to
be prefixed with that variable. This is also true for images or files that
were produced by other tests.


@node Developer's checklist, Gnuastro project webpage, Test scripts, Developing
@section Developer's checklist
This is a checklist of things to do after applying your changes/additions
in Gnuastro:

@enumerate

@item
If the change is non-trivial, write test(s) in the @file{tests/progname/}
directory to test the change(s)/addition(s) you have made. Then add their
file names to @file{tests/Makefile.am}.

@item
Run @command{$ make check} to make sure everything is working correctly.

@item
Make sure the documentation (this book) is completely up to date with your
changes, see @ref{Documentation}.

@item
Commit the change to your issue branch (see @ref{Production workflow} and
@ref{Forking tutorial}). Afterwards, run Autoreconf to generate the
appropriate version number:

@example
$ autoreconf -f
@end example

@cindex Making a distribution package
@item
Finally, to make sure everything will be built, installed and checked
correctly, run the following command (after re-configuring and
re-building). To greatly speed up the process, use multiple threads (8 in
the example below; change it appropriately).

@example
$ make distcheck -j8
@end example

@noindent
This command will create a distribution file (ending with @file{.tar.gz})
and try to compile it in the most general cases, then it will run the tests
on what it has built in its own mini-environment. If @command{$ make
distcheck} finishes successfully, then you can safely send your changes to
us to be implemented, or use them for your own purposes. See
@ref{Production workflow} and @ref{Forking tutorial}.

@end enumerate





@node Gnuastro project webpage, Developing mailing lists, Developer's checklist, Developing
@section Gnuastro project webpage

@cindex Bug
@cindex Issue
@cindex Tracker
@cindex GNU Savannah
@cindex Report a bug
@cindex Management hub
@cindex Feature request
@cindex Central management
@url{https://savannah.gnu.org/projects/gnuastro/, Gnuastro's central
management hub}@footnote{@url{https://savannah.gnu.org/projects/gnuastro/}}
is located on @url{https://savannah.gnu.org/, GNU
Savannah}@footnote{@url{https://savannah.gnu.org/}}. Savannah is the
central software development management system for many GNU
projects. Through this central hub, you can view the list of activities
that the developers are engaged in, their activity on the version
controlled source, and other things. Each defined activity in the
development cycle is known as an `issue' (or `item'). An issue can be a bug
(see @ref{Report a bug}), a suggested feature (see @ref{Suggest new
feature}), an enhancement, or generally any @emph{one} job that is to be
done. In Savannah, issues are classified into three categories, or
`trackers':

@table @asis

@cindex Mailing list: bug-gnuastro
@item Support
This tracker is a way that (possibly anonymous) users can get in touch
with the Gnuastro developers. It is a complement to the bug-gnuastro
mailing list (see @ref{Report a bug}). Anyone can post an issue to
this tracker. The developers will not submit an issue to this
list. They will only reassign the issues in this list to the other two
trackers if they are valid@footnote{Some of the issues registered here
might be due to a mistake on the user's side, not an actual bug in the
program.}. Ideally (when the developers have time to put on Gnuastro,
please don't forget that Gnuastro is a volunteer effort), there should
be no open items in this tracker.

@item Bugs
This tracker contains all the known bugs in Gnuastro (problems with
the existing tools).

@item Tasks
The items in this tracker contain the future plans (or new
features/capabilities) that are to be added to Gnuastro.

@end table

@noindent
All the trackers can be browsed by a (possibly anonymous) visitor, but to
edit and comment on the Bugs and Tasks trackers, you have to be
registered on Savannah. When posting an issue to a tracker, it is very
important to choose the `Category' and `Item Group' options accurately. The
first contains a list of all Gnuastro's programs along with `Installation',
`New program' and `Webpage'. The `Item Group' describes the nature of the
issue, for example if it is a `Crash' in the software (a bug), a problem
in the documentation (also a bug), a feature request, or an enhancement.

The set of horizontal links on the top of the page (starting with
`Main' and `Homepage' and finishing with `News') is the easiest way
to access these trackers (and other major aspects of the project) from
any part of the project web page. Hovering your mouse over them will
open a drop-down menu that will link you to the different things you
can do on each tracker (for example, `Submit new' or `Browse'). When
you browse each tracker, you can use the ``Display Criteria'' link
above the list to limit the displayed issues to what you are
interested in. The `Category' and `Item Group' (explained above) are a
good starting point.

@cindex Mailing list: gnuastro-devel
Any new issue that is submitted to any of the trackers, or any comment
that is posted for an issue, is directly forwarded to the gnuastro-devel
mailing list (@url{https://lists.gnu.org/mailman/listinfo/gnuastro-devel},
see @ref{Developing mailing lists} for more). This will allow anyone
interested to stay up to date on the overall development activity in
Gnuastro and will also provide an alternative (to Savannah) archive of the
development discussions. Therefore, it is not recommended to directly post
an email to this mailing list; do all the activities (for example adding
new issues, or commenting on existing ones) on Savannah.


@cartouche
@noindent
@strong{Do I need to be a member in Savannah to contribute to Gnuastro?}
No.

The full version controlled history of Gnuastro is available for anonymous
download or cloning. See @ref{Production workflow} for a description of
Gnuastro's Integration-Manager Workflow. In short, you can either send in
patches, or make your own fork. If you choose the latter, you can push your
changes to your own fork and inform us. We will then pull your changes and
merge them into the main project. Please see @ref{Forking tutorial} for a
tutorial.
@end cartouche


@node Developing mailing lists, Contributing to Gnuastro, Gnuastro project webpage, Developing
@section Developing mailing lists

To keep the developers and interested users up to date with the activity
and discussions within Gnuastro, there are two mailing lists which you can
subscribe to:

@table @asis

@item @command{gnuastro-devel@@gnu.org}
@itemx (at @url{https://lists.gnu.org/mailman/listinfo/gnuastro-devel})

@cindex Mailing list: gnuastro-devel
All the posts made in the support, bugs and tasks discussions of
@ref{Gnuastro project webpage} are also sent to this mailing address and
archived. By subscribing to this list you can stay up to date with the
discussions that are going on between the developers before, during and
(possibly) after working on an issue. All discussions are either in the
context of bugs or tasks which are done on Savannah and circulated to all
interested people through this mailing list. Therefore it is not
recommended to post anything directly to this mailing list. Any mail that
is sent from Savannah to this list has a link under the title ``Reply
to this item at:''. That link will take you directly to the issue
discussion page, where you can read the discussion history or join it.

While you are posting comments on the Savannah issues, be sure to update
the meta-data. For example, if the task/bug is not assigned to anyone and
you would like to take it, change the ``Assigned to'' box; or if you want
to report that it has been applied, change the status, and so on. All these
changes will also be clearly listed in the circulated email.

@item @command{gnuastro-commits@@gnu.org}
@itemx (at @url{https://lists.gnu.org/mailman/listinfo/gnuastro-commits})

@cindex Mailing list: gnuastro-commits
This mailing list circulates all the commits that are made in
Gnuastro's version controlled source, see @ref{Version controlled
source}. If you have any ideas or suggestions on the commits, please use
the bug and task trackers on Savannah to follow up the discussion; do not
post to this list. All the commits that are made for an already defined
issue or task will state the respective ID so you can find it easily.

@end table






@node Contributing to Gnuastro,  , Developing mailing lists, Developing
@section Contributing to Gnuastro

You have this great idea or have found a good fix to a problem which you
would like to implement in Gnuastro. You have also become familiar with the
general design of Gnuastro in the previous sections of this chapter (see
@ref{Developing}) and want to start working on and sharing your new
addition/change with the whole community as part of the official
release. This is great and your contribution is most welcome. This section
(along with @ref{Developer's checklist}) is written in the hope of
making it as easy as possible for you to share your great idea with the
community.

@cindex FSF
@cindex Free Software Foundation
In this section we discuss the final steps you have to take: legal and
technical. From the legal perspective, the copyright of any work you do on
Gnuastro has to be assigned to the Free Software Foundation (FSF) and the
GNU operating system, or you have to sign a disclaimer. We do this to
ensure that Gnuastro can remain free in the future, see @ref{Copyright
assignment}. From the technical point of view, in this section we also
discuss commit guidelines (@ref{Commit guidelines}) and the general version
control workflow of Gnuastro in @ref{Production workflow}, along with a
tutorial in @ref{Forking tutorial}.

Recall that before starting the work on your idea, be sure to check the
bugs and tasks trackers in @ref{Gnuastro project webpage} and announce your
work there so you don't end up spending time on something others have
already worked on, and also to attract similarly interested developers to
help you.

@menu
* Copyright assignment::        Copyright has to be assigned to the FSF.
* Commit guidelines::           Guidelines for commit messages.
* Production workflow::         Submitting your commits (work) for inclusion.
* Forking tutorial::            Tutorial on workflow steps with Git.
@end menu

@node Copyright assignment, Commit guidelines, Contributing to Gnuastro, Contributing to Gnuastro
@subsection Copyright assignment

Gnuastro's copyright is owned by the FSF. Professor Eben Moglen, of the
Columbia University Law School has given a nice summary of the reasons for
this at @url{https://www.gnu.org/licenses/why-assign}. Below we copy it
verbatim so this book is self-contained (in case you are offline or reading
in print).

@quotation
Under US copyright law, which is the law under which most free
software programs have historically been first published, there are
very substantial procedural advantages to registration of
copyright. And despite the broad right of distribution conveyed by the
GPL, enforcement of copyright is generally not possible for
distributors: only the copyright holder or someone having assignment
of the copyright can enforce the license. If there are multiple
authors of a copyrighted work, successful enforcement depends on
having the cooperation of all authors.

In order to make sure that all of our copyrights can meet the
record keeping and other requirements of registration, and in order to
be able to enforce the GPL most effectively, FSF requires that each
author of code incorporated in FSF projects provide a copyright
assignment, and, where appropriate, a disclaimer of any work-for-hire
ownership claims by the programmer's employer. That way we can be sure
that all the code in FSF projects is free code, whose freedom we can
most effectively protect, and therefore on which other developers can
completely rely.
@end quotation

Please get in touch with the Gnuastro maintainer (currently Mohammad
Akhlaghi, mohammad -at- akhlaghi -dot- org) to follow the procedures. It is
possible to do this for each change (good for a single contribution), and
also more generally for all the changes/additions you do in the future
within Gnuastro. Note that the assignment is per package: even if you have
already assigned the copyright of your work on another GNU package to the
FSF, it has to be done again for Gnuastro. The FSF has staff working on
these legal issues and the maintainer will get you in touch with them to do
the paperwork. The maintainer will just be informed in the end so your
contributions can be merged into the Gnuastro source code.

Gnuastro will gratefully acknowledge (see @ref{Acknowledgments}) all the
people who have assigned their copyright to the FSF and have thus helped to
guarantee the freedom and reliability of Gnuastro. The Free Software
Foundation will also acknowledge your copyright contributions in the Free
Software Supporter: @url{https://www.fsf.org/free-software-supporter} which
will circulate to a very large community (222,882 people in January 2021).
See the archives for some examples and subscribe to receive
interesting updates. The very active code contributors (or developers) will
also be recognized as project members on the Gnuastro project web page (see
@ref{Gnuastro project webpage}) and can be given a @code{gnu.org} email
address. So your very valuable contribution and copyright assignment will
not be forgotten and is highly appreciated by a very large community. If
you are reluctant to sign an assignment, a disclaimer is also acceptable.

@cartouche
@noindent
@strong{Do I need a disclaimer from my university or employer?} It depends
on the contract with your university or employer. From the FSF's
@file{/gd/gnuorg/conditions.text}: ``If you are employed to do programming,
or have made an agreement with your employer that says it owns programs you
write, we need a signed piece of paper from your employer disclaiming
rights to'' Gnuastro. The FSF's copyright clerk will kindly help you
decide, please consult the following email address: ``assign -at- gnu -dot-
org''.
@end cartouche


@node Commit guidelines, Production workflow, Copyright assignment, Contributing to Gnuastro
@subsection Commit guidelines

To be able to cleanly integrate your work with the other developers,
@strong{never commit on the @file{master} branch} (see @ref{Production
workflow} for a complete discussion and @ref{Forking tutorial} for a
cookbook example). In short, leave @file{master} only for changes you
fetch, or pull from the official repository (see
@ref{Synchronizing}).

In Gnuastro's commit messages, we strive to follow the standards
below. Note that in the early phases of Gnuastro's development we were
experimenting with them, so if you notice earlier commits that don't
satisfy some of the guidelines below, it is because they predate that
guideline.

@table @asis

@item Commit title
The commits have to start with one short descriptive title. The title is
separated from the body with one blank line. Run @command{git log} to see
some of the most recent commit messages as an example. In general, the
title should satisfy the following conditions:

@itemize
@item
It is best for the title to be short, about 60 (or even 50)
characters. Most emulated command-line terminals are about 80 characters
wide. However, we should also leave room for the commit hashes that are
printed by @command{git log --oneline}, and for branch names or the graph
structure of the @command{git log} output, which are also commonly used.

@item
The title should not finish with any full-stops or periods (`@key{.}').

@end itemize

@item Commit body
@cindex Mailing list: gnuastro-commits
The body of the commit message is separated from the title by one empty
line. Recall that anyone who has subscribed to the
@command{gnuastro-commits} mailing list will get the commit in their email
after it has been pushed to
@file{master}. People will also read them when they synchronize with the
main Gnuastro repository (see @ref{Synchronizing}). Finally, the commit
messages will later be used to update the @file{NEWS} file on each
release. Therefore the commit message body plays a very important role in
the development of Gnuastro, so please adhere to the following guidelines.

@itemize


@item
The body should be very descriptive. Start the commit message body by
explaining what changes your commit makes from a user's perspective (added,
changed, or removed options, or arguments to programs or libraries, or
modified algorithms, or new installation step, etc).

@item
@cindex Mailing list: gnuastro-commits
Try to explain the committed contents as best as you can. Recall that the
readers of your commit message do not necessarily have your current
background. After some time you will also forget the context, so this
request is not just for
others@footnote{@url{http://catb.org/esr/writings/unix-koans/prodigy.html}}. Therefore
be very descriptive and explain as much as possible: what the bug/task was,
justify the way you fixed it and discuss other possible solutions that you
might not have included. For the last item, it is best to discuss them
thoroughly as comments in the appropriate section of the code, but only
give a short summary in the commit message. Note that all added and removed
source code lines will also be circulated in the @command{gnuastro-commits}
mailing list.

@item
Like all of Gnuastro's other text files, the lines in the commit body should
not be longer than 75 characters, see @ref{Coding conventions}. This is to
ensure that on standard terminal emulators (with 80 character width), the
@command{git log} output can be cleanly displayed (note that the commit
message is indented in the output of @command{git log}). If you use Emacs,
Gnuastro's @file{.dir-locals.el} file will ensure that your commits satisfy
this condition (using @key{M-q}).

@item
@cindex Mailing list: gnuastro-commits
When the commit is related to a task or a bug, please include the
respective ID (in the format of @code{bug/task #ID}, note the space) in the
commit message (from @ref{Gnuastro project webpage}) for interested people
to be able to follow up the discussion that took place there. If the commit
fixes a bug or finishes a task, the recommended way is to add a line after
the body with `@code{This fixes bug #ID.}', or `@code{This finishes task
#ID.}'. Don't assume that the reader has internet access to check the bug's
full description when reading the commit message, so give a short
introduction too.
@end itemize

@end table
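To put the guidelines above together, here is a hypothetical commit
message (the program, option name and bug ID below are only illustrative
examples, they are not real):

@example
Crop: new --blankcheck option to detect blank pixels

Until now, Crop gave no warning when the requested region was fully
blank. With this commit, the (hypothetical) `--blankcheck' option has
been added for this job.

An alternative (checking within the cropping loop itself) was also
considered, but rejected for speed; see the comments above the
relevant function for a summary of that discussion.

This fixes bug #12345.
@end example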



@node Production workflow, Forking tutorial, Commit guidelines, Contributing to Gnuastro
@subsection Production workflow

Fortunately `Pro Git' has done a wonderful job in explaining the different
workflows in Chapter
5@footnote{@url{http://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows}}
and in particular the ``Integration-Manager Workflow'' explained there. The
implementation of this workflow is nicely explained in Section
5.2@footnote{@url{http://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project}}
under ``Forked-Public-Project''. We have also prepared a short tutorial in
@ref{Forking tutorial}. Anything on the @file{master} branch should always be
tested and ready to be built and used. As described in `Pro Git', there are
two methods for you to contribute to Gnuastro in the Integration-Manager
Workflow:

@enumerate

@item
You can send commit patches by email as fully explained in `Pro Git'. This
is good for your first few contributions. Just note that raw patches
(containing only the diff) do not have any meta-data (author name, date,
etc). Therefore they will not allow us to fully acknowledge your contributions
as an author in Gnuastro: in the @file{AUTHORS} file and at the start of
the PDF book. These author lists are created automatically from the version
controlled source.

To receive full acknowledgment when submitting a patch, it is thus advised
to use Git's @code{format-patch} tool (see the sketch after this list). See
Pro Git's
@url{https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#Public-Project-over-Email,
Public project over email} section for a nice explanation. If you would
like to get more heavily involved in Gnuastro's development, then you can
try the next solution.

@item
You can have your own forked copy of Gnuastro on any hosting site you like
(GitHub, GitLab, BitBucket, etc) and inform us when your changes are
ready so we can merge them into Gnuastro. This is more suited for people who
commonly contribute to the code (see @ref{Forking tutorial}).

@end enumerate
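As a minimal sketch of the first method (assuming your commits are on an
issue branch that was started from an up-to-date @file{master}), the
patches can be produced with:

@example
$ git format-patch master
@end example

@noindent
This writes one @file{000*.patch} file for each commit that is on your
branch but not on @file{master}. Those files (which preserve the author
name and date meta-data) can then be sent by email, for example with
@command{git send-email} if you have it set up.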

In both cases, your commits (with your name and information) will be
preserved and your contributions will thus be fully recorded in the history
of Gnuastro and in the @file{AUTHORS} file and this book (second page in
the PDF format) once they have been incorporated into the official
repository. Needless to say, in such cases be sure to follow the bug
or task trackers (or subscribe to the @command{gnuastro-devel} mailing
list) and contact us beforehand so you don't do something that someone
else is already working on. In that case, you can get in touch with them
and help the job go on faster, see @ref{Gnuastro project webpage}. This
workflow is currently mostly borrowed from the general recommendations of
Git@footnote{@url{https://github.com/git/git/blob/master/Documentation/SubmittingPatches}}
and GitHub. But since Gnuastro is currently under heavy development, these
might change and evolve to better suit our needs.





@node Forking tutorial,  , Production workflow, Contributing to Gnuastro
@subsection Forking tutorial

This is a tutorial on the second suggested method (commonly known as
forking) by which you can submit your modifications to Gnuastro (see
@ref{Production workflow}).

To start, please create an empty repository on your hosting service web page
(we recommend GitLab@footnote{See
@url{https://www.gnu.org/software/repo-criteria-evaluation.html} for an
evaluation of the major existing repositories. Gnuastro uses GNU Savannah
(which also has the highest ranking in the evaluation), but for starters,
GitLab may be easier.}). If this is your first hosted repository on the
web page, you also have to upload your public SSH key@footnote{For example
see this explanation provided by GitLab:
@url{http://docs.gitlab.com/ce/ssh/README.html}.} for the @command{git
push} command below to work. Here we'll assume you use the name
@file{janedoe} to refer to yourself everywhere and that you choose
@file{gnuastro-janedoe} as the name of your Gnuastro fork. Any online
hosting service will give you an address (similar to the
`@file{git@@gitlab.com:...}' below) of the empty repository you have created
using their web page; use that address in the third line below.

@example
$ git clone git://git.sv.gnu.org/gnuastro.git
$ cd gnuastro
$ git remote add janedoe git@@gitlab.com:janedoe/gnuastro-janedoe.git
$ git push janedoe master
@end example

The full Gnuastro history is now pushed onto your hosting service and the
@file{janedoe} remote is now also following your @file{master} branch. If
you run @command{git remote show REMOTENAME} for the @file{origin} and
@file{janedoe} remotes, you will see their difference: you can only pull
from the first, while you can both pull from and push to the second. This
nicely summarizes the main idea behind this workflow: you push to your
remote repository, we pull from it and merge it into @file{master}, then
you finalize it by pulling from the main repository.

To test (compile) your changes during your work, you will need to bootstrap
the version controlled source, see @ref{Bootstrapping} for a full
description. The cloning process above is only necessary for your
first-time setup; you don't need to repeat it. However, please repeat the
steps below for each independent issue you intend to work on.
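As a rough sketch (the exact steps depend on your system; see
@ref{Bootstrapping} for the details and options), with an internet
connection the bootstrapping can be as simple as running the bootstrap
script from the top Gnuastro source directory:

@example
$ ./bootstrap
@end example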

Let's assume you have found a bug in @file{lib/statistics.c}'s median
calculating function. Before actually doing anything, please announce it
(see @ref{Report a bug}) so everyone knows you are working on it (and to
find out whether others are already working on it). With the commands
below, you make a branch, switch to it, correct the bug, check that it is
indeed fixed, add it to the staging area, commit it to the new branch and
push it to your hosting service. But before all of them, make sure that you
are on the @file{master} branch and that your @file{master} branch is up to
date with the main Gnuastro repository (the first two commands below).

@example
$ git checkout master
$ git pull
$ git checkout -b bug-median-stats      # Choose a descriptive name
$ emacs lib/statistics.c
$                                       # do your checks here
$ git add lib/statistics.c
$ git commit
$ git push janedoe bug-median-stats
@end example

Your new branch is now on your hosted repository. Through the respective
tracker on Savannah (see @ref{Gnuastro project webpage}) you can then let
the other developers know that your @file{bug-median-stats} branch is
ready. They will pull your work, test it themselves and if it is ready to
be merged into the main Gnuastro history, they will merge it into the
@file{master} branch. After that is done, you can simply checkout your
local @file{master} branch and pull all the changes from the main
repository. After the pull you can run `@command{git log}' as shown below,
to see how @file{bug-median-stats} is merged with @file{master}. To finalize, you
can push all the changes to your hosted repository and delete the branch:

@example
$ git checkout master
$ git pull
$ git log --oneline --graph --decorate --all
$ git push janedoe master
$ git branch -d bug-median-stats                # delete local branch
$ git push janedoe --delete bug-median-stats    # delete remote branch
@end example

Just as a reminder, always keep your work on each issue in a separate local
and remote branch so work can progress on them independently. After you
make your announcement, other people might contribute to the branch before
merging it into @file{master}, so this is very important. As a final
reminder: before starting each issue branch from @file{master}, be sure to
run @command{git pull} in @file{master} as shown above. This will enable
you to start your branch (work) from the most recent commit and thus
simplify the final merging of your work.
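If @file{master} advances while you are still working on an issue branch,
one common way to bring your branch up to date (a generic Git sketch, not a
requirement of Gnuastro's workflow) is to merge the updated @file{master}
into your branch:

@example
$ git checkout master
$ git pull
$ git checkout bug-median-stats
$ git merge master
@end example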


























@node Gnuastro programs list, Other useful software, Developing, Top
@appendix Gnuastro programs list

GNU Astronomy Utilities @value{VERSION} contains the following
programs. They are sorted in alphabetical order and a short description is
provided for each program. The description starts with the executable name
in @file{thisfont} followed by a pointer to the respective section in
parentheses. Throughout this book, they are ordered based on their context;
please see the top-level contents for contextual ordering (based on what
they do).

@table @asis

@item Arithmetic
(@file{astarithmetic}, see @ref{Arithmetic}) For arithmetic operations on
a (theoretically unlimited) number of datasets (images). It has a
large and growing set of arithmetic, mathematical, and even statistical
operators (for example @command{+}, @command{-}, @command{*}, @command{/},
@command{sqrt}, @command{log}, @command{min}, @command{average},
@command{median}).

@item BuildProgram
(@file{astbuildprog}, see @ref{BuildProgram}) Compile, link and run
programs that depend on the Gnuastro library (see @ref{Gnuastro
library}). This program will automatically link with the libraries that
Gnuastro depends on, so there is no need to explicitly mention them every
time you are compiling a Gnuastro library dependent program.

@item ConvertType
(@file{astconvertt}, see @ref{ConvertType}) Convert astronomical data files
(FITS or IMH) to and from several other standard image and data formats,
for example TXT, JPEG, EPS or PDF.

@item Convolve
(@file{astconvolve}, see @ref{Convolve}) Convolve (blur or smooth) data
with a given kernel in the spatial or frequency domain on multiple
threads. Convolve can also do de-convolution to find the appropriate kernel
to PSF-match two images.

@item CosmicCalculator
(@file{astcosmiccal}, see @ref{CosmicCalculator}) Do cosmological
calculations, for example the luminosity distance, distance modulus,
comoving volume and many more.

@item Crop
(@file{astcrop}, see @ref{Crop}) Crop region(s) from an image and stitch
several images if necessary. Inputs can be in pixel coordinates or world
coordinates.

@item Fits
(@file{astfits}, see @ref{Fits}) View and manipulate FITS file extensions
and header keywords.

@item MakeCatalog
(@file{astmkcatalog}, see @ref{MakeCatalog}) Make a catalog of a labeled
image (output of NoiseChisel). The catalogs are highly customizable and
adding new calculations/columns is very straightforward.

@item MakeNoise
(@file{astmknoise}, see @ref{MakeNoise}) Make (add) noise to an image, with
a large set of random number generators and any seed.

@item MakeProfiles
(@file{astmkprof}, see @ref{MakeProfiles}) Make mock 2D profiles in an
image. The central regions of radial profiles are made with a configurable
2D Monte Carlo integration. It can also build the profiles on an
over-sampled image.

@item Match
(@file{astmatch}, see @ref{Match}) Given two input catalogs, find the rows
that match with each other within a given aperture (may be an ellipse).

@item NoiseChisel
(@file{astnoisechisel}, see @ref{NoiseChisel}) Detect signal in noise. It
uses a technique to detect very faint and diffuse, irregularly shaped
signal in noise (galaxies in the sky), using thresholds that are below the
Sky value, see @url{http://arxiv.org/abs/1505.01664, arXiv:1505.01664}.

@item Query
(@file{astquery}, see @ref{Query}) High-level interface to query
pre-defined remote, or external databases, and directly download the
required sub-tables on the command-line.

@item Segment
(@file{astsegment}, see @ref{Segment}) Segment detected regions based on
the structure of signal and the input dataset's noise properties.

@item Statistics
(@file{aststatistics}, see @ref{Statistics}) Statistical calculations on
the input dataset (column in a table, image or datacube).

@item Table
(@file{asttable}, see @ref{Table}) Convert FITS binary and ASCII tables into
other such tables, print them on the command-line, save them in a plain
text file, or get the FITS table information.

@item Warp
(@file{astwarp}, see @ref{Warp}) Warp an image to a new pixel grid. Any
projective transformation (homography) can be applied to the input images.

@end table

The programs listed above are designed to be highly modular and
generic. Hence, they are naturally suited to lower-level operations. In Gnuastro,
higher-level operations (combining multiple programs, or running a program
in a special way), are done with installed Bash scripts (all prefixed with
@code{astscript-}). They can be run just like a program and behave very
similarly (with minor differences, see @ref{Installed scripts}).

@table @code
@item astscript-sort-by-night
(See @ref{Sort FITS files by night}) Given a list of FITS files, and an HDU
and keyword name (for a date), this script groups the files taken in the
same night (possibly spanning two calendar days).
@end table








@node Other useful software, GNU Free Doc. License, Gnuastro programs list, Top
@appendix Other useful software

In this appendix the installation of programs and libraries that are
not direct Gnuastro dependencies is discussed. However, they can be
useful for working with Gnuastro.

@menu
* SAO ds9::                     Viewing FITS images.
* PGPLOT::                      Plotting directly in C.
@end menu

@node SAO ds9, PGPLOT, Other useful software, Other useful software
@section SAO ds9

@cindex SAO DS9
@cindex FITS image viewer
SAO ds9@footnote{@url{http://ds9.si.edu/}} is not a requirement of
Gnuastro; it is a FITS image viewer, and one of the best options for
checking your inputs and outputs. Like the other packages, it might already
be available in your distribution's repositories. Pre-compiled binaries are
also available in the download section of its web page. Once you download
one, you can unpack and install it (move it to a system recognized
directory) with the following commands (@code{x.x.x} is the version
number):

@example
$ tar xf ds9.linux64.x.x.x.tar.gz
$ sudo mv ds9 /usr/local/bin
@end example

Once you run it, there might be a complaint about the Xss library,
which you can install through your distribution's package management
system. You might also get an @command{XPA}-related error. In this case,
you have to add the following line to your @file{~/.bashrc} and
@file{~/.profile} files (you will have to log out and back in again for
the latter to take effect):

@example
export XPA_METHOD=local
@end example
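To have this definition take effect in your currently open terminal
(without logging out and back in), you can also load the modified file
manually:

@example
$ source ~/.bashrc
@end example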


@menu
* Viewing multiextension FITS images::  Configure SAO ds9 for multiextension images.
@end menu

@node Viewing multiextension FITS images,  , SAO ds9, SAO ds9
@subsection Viewing multiextension FITS images

@cindex Multiextension FITS
@cindex Opening multiextension FITS
The FITS definition allows for multiple extensions inside one FITS file;
each extension can contain a completely independent dataset. If
you just double click on a multi-extension FITS file or run @command{$ ds9
foo.fits}, SAO ds9 will only show you the first extension. If you have a
multi-extension file containing 2D images, one way to load and switch
between each 2D extension is to take the following steps in the SAO ds9
window: @clicksequence{``File''@click{}``Open Other''@click{}``Open Multi
Ext Cube''} and then choose the multi-extension FITS file in your
computer's file structure.

@cindex @option{-mecube} (ds9)
The method above is a little tedious to do every time you want to view a
multi-extension FITS file. A different series of steps is also necessary if
the extensions are 3D data cubes. Fortunately SAO ds9 also provides
command-line options that you can use to specify a particular behavior. One
of those options is @option{-mecube}, which opens a FITS image as a
multi-extension data cube (treating each 2D extension as a slice in a 3D
cube). This allows you to flip through the extensions easily while keeping
all the settings similar.

Try running @command{$ ds9 -mecube foo.fits} to see the effect (for example
on the output of @ref{NoiseChisel}). If the file has multiple extensions, a
small window will also be opened along with the main ds9 window. This small
window allows you to slide through the image extensions of
@file{foo.fits}. If @file{foo.fits} only consists of one extension, then
SAO ds9 will open as usual. Just to avoid confusion, note that SAO ds9 does
not follow the GNU style of separating long and short options as explained
in @ref{Arguments and options}. In the GNU style, this `long'
(multi-character) option should have been called like @option{--mecube},
but SAO ds9 follows its own conventions.

Recall that @option{-mecube} opens each 2D input extension as a slice in
3D. Therefore, when you want to inspect a multi-extension FITS file
containing a 3D dataset, the @option{-mecube} option is no longer useful
(it only opens the first slice of the 3D cube in each extension). In that
case, we have to use SAO ds9's @option{-multiframe} option to open each
extension as a separate frame. Since the input is a 3D dataset, we get the
same small window as in the 2D case above for scrolling through the 3D
slices. We then also have to ask ds9 to match the frames and lock the
slices, so that, for example, zooming in one will also zoom the others.
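For example, on the command-line, such a call might look like the following
(these are the same options that are used in the script further below):

@example
$ ds9 -multiframe foo.fits -match frame image \
      -lock slice image -lock frame image
@end example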

We can use a script to automate this process and make work much easier
(and save a lot of time) when opening any generic 2D or 3D dataset. After
taking the following steps, ds9 will open in the respective 2D or 3D mode
when you double-click a FITS file in your graphical user interface, and an
executable will also be available to open ds9 similarly on the
command-line. Note that the following solution assumes you already have
Gnuastro installed (and in particular the @ref{Fits} program).

Let's assume that you want to store this script in @file{BINDIR} (that is
in your @file{PATH} environment variable, see @ref{Installation
directory}). [Tip: a good place would be @file{~/.local/bin}, just don't
forget to make sure it is in your @file{PATH}]. Using your favorite text
editor, put the following script into a file called
@file{BINDIR/ds9-multi-ext}. You can change the size of the opened ds9
window by changing the @code{1800x3000} part of the script below.

@example
#! /bin/bash

# To allow generic usage, if no input file is given (the `if' below is
# true), then just open an empty ds9.
if [ "x$1" == "x" ]; then
    ds9
else
    # Make sure we are dealing with a FITS file. We are using shell
    # redirection here to make sure that nothing is printed in the
    # terminal (to standard output when we have a FITS file, or to
    # standard error when we don't). Since we've used redirection,
    # we'll also have to echo the return value of `astfits'.
    check=$(astfits "$1" -h0 > /dev/null 2>&1; echo $?)

    # If the file was a FITS file, then `check' will be 0.
    if [ "$check" == "0" ]; then

        # Read the number of dimensions in the first HDU.
        n0=$(astfits "$1" -h0 | awk '$1=="NAXIS"@{print $3@}')

        # If the first HDU had no data (zero dimensions), read the
        # number of dimensions from the next HDU.
        if [ "$n0" == "0" ]; then
            ndim=$(astfits "$1" -h1 | awk '$1=="NAXIS"@{print $3@}')
        else
            ndim=$n0
        fi

        # Open DS9 based on the number of dimensions.
        if [ "$ndim" = "2" ]; then
            # 2D multi-extension file: use the "Cube" window to
            # flip/slide through the extensions.
            ds9 -zscale -geometry 1800x3000 -mecube "$1"       \
                -zoom to fit -wcs degrees
        else
            # 3D multi-extension file: The "Cube" window will slide
            # between the slices of a single extension. To flip
            # through the extensions (not the slices), press the
            # "frame" button in the top row; the last four buttons
            # of the bottom row ("first", "previous", "next" and
            # "last") can then be used to switch through the
            # extensions (while keeping the same slice).
            ds9 -zscale -geometry 1800x3000 -wcs degrees       \
                -multiframe "$1" -match frame image            \
                -lock slice image -lock frame image -single    \
                -zoom to fit
        fi
    else
        if [ -f "$1" ]; then
            echo "'$1' isn't a FITS file."
        else
            echo "'$1' doesn't exist."
        fi
    fi
fi
@end example

As described above (also in the comments of the script), if you have opened
a multi-extension 2D dataset (image), the ``Cube'' window can be used to
slide/flip through each extension. But when the input is a 3D data cube,
the ``Cube'' window will slide/flip through the slices in each extension (a
separate 3D dataset). To flip through the extensions (while keeping the
slice fixed), click the ``frame'' button on the top row of buttons, then
use the last four buttons of the bottom row (``first'', ``previous'',
``next'' and ``last'') to change between the extensions.

To run this script, you have to set its executable flag with this
command:

@example
$ chmod +x BINDIR/ds9-multi-ext
@end example

If @file{BINDIR} is within your system's @file{PATH} environment variable
(see @ref{Installation directory}), you can now open ds9 conditionally
using the script above with this command:

@example
$ ds9-multi-ext foo.fits
@end example

@cindex GNOME 3
@cindex @file{.desktop}
For the graphical user interface, we'll assume you are using GNOME (the
most popular graphical user interface for GNU/Linux systems), version
3. For GNOME 2, see below. You can customize GNOME to open specific files
with @file{.desktop} files. For each user, they are stored in
@file{~/.local/share/applications/}. In case you don't have the directory,
make it yourself (with @command{mkdir}). Using your favorite text editor,
you can now create @file{~/.local/share/applications/saods9.desktop} with
the following contents. Just don't forget to correct @file{BINDIR}. If you
would also like to have ds9's logo/icon in GNOME, download it, uncomment
the @code{Icon} line, and write its address in the value.

@example
[Desktop Entry]
Type=Application
Version=1.0
Name=SAO DS9
Comment=View FITS images
Terminal=false
Categories=Graphics;RasterGraphics;2DGraphics;3DGraphics;
#Icon=/PATH/TO/DS9/ICON/ds9.png
Exec=BINDIR/ds9-multi-ext %f
@end example

The steps above will add SAO ds9 as one of your applications. To make it
default, take the following steps (just once is enough). Right click on a
FITS file and select @clicksequence{Open with other application@click{}View
all applications@click{}SAO ds9}.

@cindex GNOME 2
In case you are using GNOME 2, you can take the following steps: right
click on a FITS file and choose the @clicksequence{Properties@click{}Open
With@click{}Add} button. A list of applications will show up; ds9 might
already be present in the list, but don't choose it because it will run
with no options. Below the list is an option ``Use a custom
command''. Click on it, write @command{BINDIR/ds9-multi-ext} in the box
and click ``Add''. Then finally choose the command you just added as the
default and click the ``Close'' button.





@node PGPLOT,  , SAO ds9, Other useful software
@section PGPLOT

@cindex PGPLOT
@cindex C, plotting
@cindex Plotting directly in C
PGPLOT is a package for making plots in C. It is not directly needed
by Gnuastro, but can be used by WCSLIB, see @ref{WCSLIB}. As
explained in @ref{WCSLIB}, you can install WCSLIB without it too. It
is very old (the most recent version was released in early 2001!), but
remains one of the main packages for plotting directly in C. WCSLIB
uses this package to make plots if you ask it to. If you
are interested, you can also use it for your own purposes.

@cindex Python Matplotlib
@cindex Matplotlib, Python
@cindex PGFplots in @TeX{} or @LaTeX{}
If you want to do your plotting from within your C program, PGPLOT is
currently one of your best options. The recommended alternative to
this method is to write the raw data for the plots into text files and
input them into any of the various more modern and capable plotting
tools separately, for example the Matplotlib library in Python or
PGFplots in @LaTeX{}. This will also significantly help code
readability. Let's get back to PGPLOT for the sake of
WCSLIB. Installing it is a little tricky (mainly because it is so
old!).

You can download the most recent version from the FTP link on its
web page@footnote{@url{http://www.astro.caltech.edu/~tjp/pgplot/}}. You can
unpack it with the @command{tar xf} command. Let's assume the directory you
have unpacked it to is @file{PGPLOT}; most probably it is
@file{/home/username/Downloads/pgplot/}. Open the @file{drivers.list}
file:
@example
$ gedit drivers.list
@end example
@noindent
Remove the @code{!} at the start of the following lines and save the
file when you are done:
@example
PSDRIV 1 /PS
PSDRIV 2 /VPS
PSDRIV 3 /CPS
PSDRIV 4 /VCPS
XWDRIV 1 /XWINDOW
XWDRIV 2 /XSERVE
@end example
@noindent
Don't choose GIF or VGIF; there is a problem in their driver code.

Open the @file{PGPLOT/sys_linux/g77_gcc.conf} file:
@example
$ gedit PGPLOT/sys_linux/g77_gcc.conf
@end example
@noindent
Change the line saying @code{FCOMPL="g77"} to
@code{FCOMPL="gfortran"} and save it. This is a very important step
for compiling the code on GNU/Linux. You now
have to create a directory in @file{/usr/local}; don't forget to replace
@file{PGPLOT} with the address you unpacked it to:
@example
$ su
# mkdir /usr/local/pgplot
# cd /usr/local/pgplot
# cp PGPLOT/drivers.list ./
@end example
To make the Makefile, type the following command:
@example
# PGPLOT/makemake PGPLOT linux g77_gcc
@end example
@noindent
It should finish by saying: @command{Determining object file
dependencies}. You have done the hard part! The rest is easy: run
these three commands in order:
@example
# make
# make clean
# make cpg
@end example

Finally, you have to add the directory you just made to the
@command{LD_LIBRARY_PATH} environment variable and define the
@command{PGPLOT_DIR} environment variable. To do that, you have to
edit your @file{.bashrc} file:
@example
$ cd ~
$ gedit .bashrc
@end example
@noindent
Copy these lines into the text editor and save it:
@cindex @file{LD_LIBRARY_PATH}
@example
PGPLOT_DIR="/usr/local/pgplot/"; export PGPLOT_DIR
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/pgplot/
export LD_LIBRARY_PATH
@end example
@noindent
You need to log out and log back in again so these definitions take
effect. After you have logged back in, you want to see the result of all
this labor, right? Tim Pearson has done that for you: create a
temporary directory in your home directory and copy all the demonstration
files into it:
@example
$ cd ~
$ mkdir temp
$ cd temp
$ cp /usr/local/pgplot/pgdemo* ./
$ ls
@end example
You will see a lot of @file{pgdemoXX} files, where @code{XX} is a number.
To execute them, type the following command and drink your coffee while
looking at all the beautiful plots! You are now ready to create your
own.
@example
$ ./pgdemoXX
@end example
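If you would like to try PGPLOT in your own programs, the following minimal
sketch (saved as, say, @file{test-pgplot.c}) uses the C binding that was
built with @command{make cpg} above to draw a straight line in an X
window. The exact set of libraries in the compilation command afterwards
may differ on your system; adjust it as necessary.

@example
#include <cpgplot.h>

int
main(void)
@{
  float x[2]=@{0.0f, 1.0f@};
  float y[2]=@{0.0f, 1.0f@};

  /* Open the X window device (`cpgbeg' returns 1 on success). */
  if( cpgbeg(0, "/XWINDOW", 1, 1) != 1 ) return 1;

  /* Define the axes, label them and draw the line. */
  cpgenv(0.0f, 1.0f, 0.0f, 1.0f, 0, 1);
  cpglab("x", "y", "A minimal PGPLOT test");
  cpgline(2, x, y);

  /* Close all open devices and finish. */
  cpgend();
  return 0;
@}
@end example

@noindent
It might be compiled with a command like this:

@example
$ gcc test-pgplot.c -o test-pgplot -I/usr/local/pgplot   \
      -L/usr/local/pgplot -lcpgplot -lpgplot             \
      -lgfortran -lX11 -lm
@end example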




















@node GNU Free Doc. License, GNU General Public License, Other useful software, Top
@appendix GNU Free Doc. License

@cindex GNU Free Documentation License
@include fdl.texi



@node GNU General Public License, Index, GNU Free Doc. License, Top
@appendix GNU Gen. Pub. License v3

@cindex GPL
@cindex GNU General Public License (GPL)
@include gpl-3.0.texi






@c Print the index and finish:
@node Index,  , GNU General Public License, Top
@unnumbered Index: Macros, structures and functions
All Gnuastro library's exported macros start with @code{GAL_}, and its
exported structures and functions start with @code{gal_}. This abbreviation
stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary. The next element in
the name is the name of the header which declares or defines them, so to
use the @code{gal_array_fset_const} function, you have to @code{#include
<gnuastro/array.h>}. See @ref{Gnuastro library} for more. The
@code{pthread_barrier} constructs are our implementation and are only
available on systems that don't have them, see @ref{Implementation of
pthread_barrier}.
@printindex fn
@unnumbered Index
@printindex cp


@bye